Advanced search

Media (1)

Keyword: - Tags -/epub

Other articles (63)

  • Request to create a channel

    12 March 2010

    Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel: the first at the moment of registration, the second after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user fills in a series of form fields whose first purpose is to give the administrators information about (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    For a working installation, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

On other sites (14018)

  • Encoding/Decoding H264 using libav in C++ [closed]

    20 May, by gbock93

    I want to build an application to:

    • capture frames in YUYV 4:2:2 format
    • encode them to H264
    • send over network
    • decode the received data
    • display the video stream


    To do so I wrote 2 classes, H264Encoder and H264Decoder.

    


    I post only the .cpp contents; the .h are trivial:

    


    H264Encoder.cpp

    


    #include 

    #include <stdexcept>
    #include <iostream>

    H264Encoder::H264Encoder(unsigned int width_, unsigned int height_, unsigned int fps_):
        m_width(width_),
        m_height(height_),
        m_fps(fps_),
        m_frame_index(0),
        m_context(nullptr),
        m_frame(nullptr),
        m_packet(nullptr),
        m_sws_ctx(nullptr)
    {
        // Find the video codec
        AVCodec* codec;
        codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!codec)
            throw std::runtime_error("[Encoder]: Error: Codec not found");

        // Allocate codec
        m_context = avcodec_alloc_context3(codec);
        if (!m_context)
            throw std::runtime_error("[Encoder]: Error: Could not allocate codec context");

        // Configure codec
        av_opt_set(m_context->priv_data, "preset", "ultrafast", 0);
        av_opt_set(m_context->priv_data, "tune", "zerolatency", 0);
        av_opt_set(m_context->priv_data, "crf",           "35", 0); // Range: [0; 51], sane range: [18; 26], lower -> higher compression

        m_context->width        = (int)width_;
        m_context->height       = (int)height_;
        m_context->time_base    = {1, (int)fps_};
        m_context->framerate    = {(int)fps_, 1};
        m_context->codec_id     = AV_CODEC_ID_H264;
        m_context->pix_fmt      = AV_PIX_FMT_YUV420P; // H265|4 codec take as input only AV_PIX_FMT_YUV420P
        m_context->bit_rate     = 400000;
        m_context->gop_size     = 10;
        m_context->max_b_frames = 1;

        // Open codec
        if (avcodec_open2(m_context, codec, nullptr) < 0)
            throw std::runtime_error("[Encoder]: Error: Could not open codec");

        // Allocate frame and its buffer
        m_frame = av_frame_alloc();
        if (!m_frame)
            throw std::runtime_error("[Encoder]: Error: Could not allocate frame");

        m_frame->format = m_context->pix_fmt;
        m_frame->width  = m_context->width;
        m_frame->height = m_context->height;

        if (av_frame_get_buffer(m_frame, 0) < 0)
            throw std::runtime_error("[Encoder]: Error: Cannot allocate frame buffer");

        // Allocate packet
        m_packet = av_packet_alloc();
        if (!m_packet)
            throw std::runtime_error("[Encoder]: Error: Could not allocate packet");

        // Convert from YUYV422 to YUV420P
        m_sws_ctx = sws_getContext(
            width_, height_, AV_PIX_FMT_YUYV422,
            width_, height_, AV_PIX_FMT_YUV420P,
            SWS_BILINEAR, nullptr, nullptr, nullptr
        );
        if (!m_sws_ctx)
            throw std::runtime_error("[Encoder]: Error: Could not allocate sws context");

        //
        printf("[Encoder]: H264Encoder ready.\n");
    }

    H264Encoder::~H264Encoder()
    {
        sws_freeContext(m_sws_ctx);
        av_packet_free(&m_packet);
        av_frame_free(&m_frame);
        avcodec_free_context(&m_context);

        printf("[Encoder]: H264Encoder destroyed.\n");
    }

    std::vector H264Encoder::encode(const cv::Mat& img_)
    {
        /*
        - YUYV422 is a packed format. It has 3 components (av_pix_fmt_desc_get((AVPixelFormat)AV_PIX_FMT_YUYV422)->nb_components == 3) but
            data is stored in a single plane (av_pix_fmt_count_planes((AVPixelFormat)AV_PIX_FMT_YUYV422) == 1).
        - YUV420P is a planar format. It has 3 components (av_pix_fmt_desc_get((AVPixelFormat)AV_PIX_FMT_YUV420P)->nb_components == 3) and
            each component is stored in a separate plane (av_pix_fmt_count_planes((AVPixelFormat)AV_PIX_FMT_YUV420P) == 3) with its
            own stride.
        */
        std::cout << "[Encoder]" << std::endl;
        std::cout << "[Encoder]: Encoding img " << img_.cols << "x" << img_.rows << " | element size " << img_.elemSize() << std::endl;
        assert(img_.elemSize() == 2);

        uint8_t* input_data[1] = {(uint8_t*)img_.data};
        int input_linesize[1] = {2 * (int)m_width};

        if (av_frame_make_writable(m_frame) < 0)
            throw std::runtime_error("[Encoder]: Error: Cannot make frame data writable");

        // Convert from YUV422 image to YUV420 frame. Apply scaling if necessary
        sws_scale(
            m_sws_ctx,
            input_data, input_linesize, 0, m_height,
            m_frame->data, m_frame->linesize
        );
        m_frame->pts = m_frame_index;

        int n_planes = av_pix_fmt_count_planes((AVPixelFormat)m_frame->format);
        std::cout << "[Encoder]: Sending Frame " << m_frame_index << " with dimensions " << m_frame->width << "x" << m_frame->height << "x" << n_planes << std::endl;
        for (int i=0; i (...) framerate.num) + 1;
                break;
            case AVERROR(EAGAIN):
                throw std::runtime_error("[Encoder]: avcodec_send_frame: EAGAIN");
            case AVERROR_EOF:
                throw std::runtime_error("[Encoder]: avcodec_send_frame: EOF");
            case AVERROR(EINVAL):
                throw std::runtime_error("[Encoder]: avcodec_send_frame: EINVAL");
            case AVERROR(ENOMEM):
                throw std::runtime_error("[Encoder]: avcodec_send_frame: ENOMEM");
            default:
                throw std::runtime_error("[Encoder]: avcodec_send_frame: UNKNOWN");
        }

        // Receive packet from codec
        std::vector result;
        while(ret >= 0)
        {
            ret = avcodec_receive_packet(m_context, m_packet);

            switch (ret)
            {
            case 0:
                std::cout << "[Encoder]: Received packet from codec of size " << m_packet->size << " bytes " << std::endl;
                result.insert(result.end(), m_packet->data, m_packet->data + m_packet->size);
                av_packet_unref(m_packet);
                break;

            case AVERROR(EAGAIN):
                std::cout << "[Encoder]: avcodec_receive_packet: EAGAIN" << std::endl;
                break;
            case AVERROR_EOF:
                std::cout << "[Encoder]: avcodec_receive_packet: EOF" << std::endl;
                break;
            case AVERROR(EINVAL):
                throw std::runtime_error("[Encoder]: avcodec_receive_packet: EINVAL");
            default:
                throw std::runtime_error("[Encoder]: avcodec_receive_packet: UNKNOWN");
            }
        }

        std::cout << "[Encoder]: Encoding complete" << std::endl;
        return result;
    }
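
    The .h files are not shown in the post; inferred from the members used above, H264Encoder.h presumably looks something like this (a sketch with assumed names and types, not the author's actual header):

    #pragma once

    #include <cstdint>
    #include <vector>
    #include <opencv2/opencv.hpp>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>
    }

    // Hypothetical reconstruction of the encoder interface used by the .cpp above.
    class H264Encoder
    {
    public:
        H264Encoder(unsigned int width_, unsigned int height_, unsigned int fps_);
        ~H264Encoder();

        // Converts a YUYV 4:2:2 cv::Mat to YUV420P, encodes it and returns the
        // bytes of all packets produced for this frame.
        std::vector<uint8_t> encode(const cv::Mat& img_);

    private:
        unsigned int    m_width;
        unsigned int    m_height;
        unsigned int    m_fps;
        int64_t         m_frame_index;

        AVCodecContext* m_context;
        AVFrame*        m_frame;
        AVPacket*       m_packet;
        SwsContext*     m_sws_ctx;
    };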

    H264Decoder.cpp

    #include 

    #include <iostream>
    #include <stdexcept>

    H264Decoder::H264Decoder():
        m_context(nullptr),
        m_frame(nullptr),
        m_packet(nullptr)
    {
        // Find the video codec
        AVCodec* codec;
        codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        if (!codec)
            throw std::runtime_error("[Decoder]: Error: Codec not found");

        // Allocate codec
        m_context = avcodec_alloc_context3(codec);
        if (!m_context)
            throw std::runtime_error("[Decoder]: Error: Could not allocate codec context");

        // Open codec
        if (avcodec_open2(m_context, codec, nullptr) < 0)
            throw std::runtime_error("[Decoder]: Error: Could not open codec");

        // Allocate frame
        m_frame = av_frame_alloc();
        if (!m_frame)
            throw std::runtime_error("[Decoder]: Error: Could not allocate frame");

        // Allocate packet
        m_packet = av_packet_alloc();
        if (!m_packet)
            throw std::runtime_error("[Decoder]: Error: Could not allocate packet");

        //
        printf("[Decoder]: H264Decoder ready.\n");
    }

    H264Decoder::~H264Decoder()
    {
        av_packet_free(&m_packet);
        av_frame_free(&m_frame);
        avcodec_free_context(&m_context);

        printf("[Decoder]: H264Decoder destroyed.\n");
    }

    bool H264Decoder::decode(uint8_t* data_, size_t size_, cv::Mat& img_)
    {
        std::cout << "[Decoder]" << std::endl;
        std::cout << "[Decoder]: decoding " << size_ << " bytes of data" << std::endl;

        // Fill packet
        m_packet->data = data_;
        m_packet->size = size_;

        if (size_ == 0)
            return false;

        // Send packet to codec
        int send_result = avcodec_send_packet(m_context, m_packet);

        switch (send_result)
        {
            case 0:
                std::cout << "[Decoder]: Sent packet to codec" << std::endl;
                break;
            case AVERROR(EAGAIN):
                throw std::runtime_error("[Decoder]: avcodec_send_packet: EAGAIN");
            case AVERROR_EOF:
                throw std::runtime_error("[Decoder]: avcodec_send_packet: EOF");
            case AVERROR(EINVAL):
                throw std::runtime_error("[Decoder]: avcodec_send_packet: EINVAL");
            case AVERROR(ENOMEM):
                throw std::runtime_error("[Decoder]: avcodec_send_packet: ENOMEM");
            default:
                throw std::runtime_error("[Decoder]: avcodec_send_packet: UNKNOWN");
        }

        // Receive frame from codec
        int n_planes;
        uint8_t* output_data[1];
        int output_line_size[1];

        int receive_result = avcodec_receive_frame(m_context, m_frame);

        switch (receive_result)
        {
            case 0:
                n_planes = av_pix_fmt_count_planes((AVPixelFormat)m_frame->format);
                std::cout << "[Decoder]: Received Frame with dimensions " << m_frame->width << "x" << m_frame->height << "x" << n_planes << std::endl;
                for (int i=0; i (...)

        std::cout << "[Decoder]: Decoding complete" << std::endl;
        return true;
    }
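
    The tail of decode() did not survive the copy above. Judging from the output_data/output_line_size locals and the sws_scale() errors quoted below, it presumably converts the decoded YUV420P frame into a BGR cv::Mat, roughly along these lines (a sketch under that assumption, not the author's exact code):

    // Hypothetical reconstruction of the missing conversion in the "case 0:" branch.
    img_.create(m_frame->height, m_frame->width, CV_8UC3);

    uint8_t* output_data[1]      = { img_.data };
    int      output_line_size[1] = { static_cast<int>(img_.step[0]) };

    SwsContext* to_bgr = sws_getContext(
        m_frame->width, m_frame->height, (AVPixelFormat)m_frame->format,
        m_frame->width, m_frame->height, AV_PIX_FMT_BGR24,
        SWS_BILINEAR, nullptr, nullptr, nullptr);

    // Pack the three YUV420P planes into the single BGR plane of img_.
    sws_scale(to_bgr, m_frame->data, m_frame->linesize, 0, m_frame->height,
              output_data, output_line_size);
    sws_freeContext(to_bgr);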

    To test the two classes I put together a main.cpp to grab a frame, encode/decode it and display the decoded frame (no network transmission in place):

    main.cpp

    while(...)
    {
        // get frame from custom camera class. Format is YUYV 4:2:2
        camera.getFrame(camera_frame);
        // Construct a cv::Mat to represent the grabbed frame
        cv::Mat camera_frame_yuyv = cv::Mat(camera_frame.height, camera_frame.width, CV_8UC2, camera_frame.data.data());
        // Encode image
        std::vector encoded_data = encoder.encode(camera_frame_yuyv);
        if (!encoded_data.empty())
        {
            // Decode image
            cv::Mat decoded_frame;
            if (decoder.decode(encoded_data.data(), encoded_data.size(), decoded_frame))
            {
                // Display image
                cv::imshow("Camera", decoded_frame);
                cv::waitKey(1);
            }
        }
    }

    Compiling and executing the code, I get random results between subsequent executions:

    • Sometimes the whole loop runs without problems and I see the decoded image.
    • Sometimes the program crashes at the sws_scale(...) call in the decoder with "Assertion desc failed at src/libswscale/swscale_internal.h:757".
    • Sometimes the loop runs but I see a black image and the message Slice parameters 0, 720 are invalid is displayed when executing the sws_scale(...) call in the decoder.

    Why is the behaviour so random? What am I doing wrong with the libav API?
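
    For reference, the decode examples that ship with FFmpeg differ from the code above on two points, shown here as a hedged sketch rather than a confirmed fix: the frame is only touched when avcodec_receive_frame() returns 0 (EAGAIN just means "send more data"), and the raw byte stream is run through an AVCodecParserContext so the decoder always receives complete packets instead of an arbitrary concatenation of NAL units. The function below is illustrative; its name and parameters are not part of the original code:

    // parser would be created once, e.g. in the H264Decoder constructor:
    //     AVCodecParserContext* parser = av_parser_init(AV_CODEC_ID_H264);
    bool decode_buffer(AVCodecContext* ctx, AVCodecParserContext* parser,
                       AVPacket* pkt, AVFrame* frame, uint8_t* data, size_t size)
    {
        bool got_frame = false;
        while (size > 0)
        {
            // Split the byte stream into packets the decoder can accept.
            int consumed = av_parser_parse2(parser, ctx, &pkt->data, &pkt->size,
                                            data, (int)size,
                                            AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
            if (consumed < 0)
                throw std::runtime_error("av_parser_parse2 failed");
            data += consumed;
            size -= consumed;

            if (pkt->size == 0)
                continue; // parser needs more input before emitting a packet

            if (avcodec_send_packet(ctx, pkt) < 0)
                throw std::runtime_error("avcodec_send_packet failed");

            // Convert/display only when a frame was actually produced.
            while (avcodec_receive_frame(ctx, frame) == 0)
                got_frame = true; // the YUV420P -> BGR conversion would go here
        }
        return got_frame;
    }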

    Some resources I found useful:

  • Using FFmpeg encode and UDP with a Webcam?

    14 March, by Rendres

    I'm trying to get frames from a Webcam using OpenCV, encode them with FFmpeg and send them using UDP.

    I previously did a similar project that, instead of sending the packets over UDP, saved them to a video file.

    My code is:

    #include 
    #include 
    #include 
    #include 

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mathematics.h>
    #include <libswscale/swscale.h>
    #include <libswresample/swresample.h>
    }

    #include <opencv2/opencv.hpp>

    using namespace std;
    using namespace cv;

    #define WIDTH 640
    #define HEIGHT 480
    #define CODEC_ID AV_CODEC_ID_H264
    #define STREAM_PIX_FMT AV_PIX_FMT_YUV420P

    static AVFrame *frame, *pFrameBGR;

    int main(int argc, char **argv)
    {
    VideoCapture cap(0);
    const char *url = "udp://127.0.0.1:8080";

    AVFormatContext *formatContext;
    AVStream *stream;
    AVCodec *codec;
    AVCodecContext *c;
    AVDictionary *opts = NULL;

    int ret, got_packet;

    if (!cap.isOpened())
    {
        return -1;
    }

    av_log_set_level(AV_LOG_TRACE);

    av_register_all();
    avformat_network_init();

    avformat_alloc_output_context2(&formatContext, NULL, "h264", url);
    if (!formatContext)
    {
        av_log(NULL, AV_LOG_FATAL, "Could not allocate an output context for '%s'.\n", url);
    }

    codec = avcodec_find_encoder(CODEC_ID);
    if (!codec)
    {
        av_log(NULL, AV_LOG_ERROR, "Could not find encoder.\n");
    }

    stream = avformat_new_stream(formatContext, codec);

    c = avcodec_alloc_context3(codec);

    stream->id = formatContext->nb_streams - 1;
    stream->time_base = (AVRational){1, 25};

    c->codec_id = CODEC_ID;
    c->bit_rate = 400000;
    c->width = WIDTH;
    c->height = HEIGHT;
    c->time_base = stream->time_base;
    c->gop_size = 12;
    c->pix_fmt = STREAM_PIX_FMT;

    if (formatContext->flags & AVFMT_GLOBALHEADER)
        c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    av_dict_set(&opts, "preset", "fast", 0);

    av_dict_set(&opts, "tune", "zerolatency", 0);

    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0)
    {
        av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");
    }

    pFrameBGR = av_frame_alloc();
    if (!pFrameBGR)
    {
        av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");
    }

    frame = av_frame_alloc();
    if (!frame)
    {
        av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");
    }

    frame->format = c->pix_fmt;
    frame->width = c->width;
    frame->height = c->height;

    ret = avcodec_parameters_from_context(stream->codecpar, c);
    if (ret < 0)
    {
        av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");
    }

    av_dump_format(formatContext, 0, url, 1);

    ret = avformat_write_header(formatContext, NULL);
    if (ret != 0)
    {
        av_log(NULL, AV_LOG_ERROR, "Failed to connect to '%s'.\n", url);
    }

    Mat image(Size(HEIGHT, WIDTH), CV_8UC3);
    SwsContext *swsctx = sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_BGR24, WIDTH, HEIGHT, AV_PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
    int frame_pts = 0;

    while (1)
    {
        cap >> image;

        int numBytesYUV = av_image_get_buffer_size(STREAM_PIX_FMT, WIDTH, HEIGHT, 1);
        uint8_t *bufferYUV = (uint8_t *)av_malloc(numBytesYUV * sizeof(uint8_t));

        avpicture_fill((AVPicture *)pFrameBGR, image.data, AV_PIX_FMT_BGR24, WIDTH, HEIGHT);
        avpicture_fill((AVPicture *)frame, bufferYUV, STREAM_PIX_FMT, WIDTH, HEIGHT);

        sws_scale(swsctx, (uint8_t const *const *)pFrameBGR->data, pFrameBGR->linesize, 0, HEIGHT, frame->data, frame->linesize);

        AVPacket pkt = {0};
        av_init_packet(&pkt);

        frame->pts = frame_pts;

        ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
        if (ret < 0)
        {
            av_log(NULL, AV_LOG_ERROR, "Error encoding frame\n");
        }

        if (got_packet)
        {
            pkt.pts = av_rescale_q_rnd(pkt.pts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
            pkt.dts = av_rescale_q_rnd(pkt.dts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
            pkt.duration = av_rescale_q(pkt.duration, c->time_base, stream->time_base);
            pkt.stream_index = stream->index;

            return av_interleaved_write_frame(formatContext, &pkt);

            cout << "Seguro que si" << endl;
        }
        frame_pts++;
    }

    avcodec_free_context(&c);
    av_frame_free(&frame);
    avformat_free_context(formatContext);

    return 0;
    }

    The code compiles but it returns a segmentation fault in av_interleaved_write_frame(). I've tried several implementations and several codecs (in this case I'm using libopenh264, but mpeg2video gives the same segmentation fault). I also tried av_write_frame(), but it returns the same error.
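
    One detail that stands out in the listing above, offered as an observation rather than a verified fix: formatContext->pb is never opened, so the muxer has no AVIOContext to write to when avformat_write_header() and av_interleaved_write_frame() run (and the return in front of av_interleaved_write_frame() would end main() after the first packet in any case). The usual pattern before writing the header is:

    // Open the output resource unless the muxer does its own I/O.
    if (!(formatContext->oformat->flags & AVFMT_NOFILE))
    {
        ret = avio_open2(&formatContext->pb, url, AVIO_FLAG_WRITE, NULL, NULL);
        if (ret < 0)
        {
            av_log(NULL, AV_LOG_FATAL, "Could not open output '%s'.\n", url);
            return ret;
        }
    }

    ret = avformat_write_header(formatContext, NULL);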

    As I said before, I only want to grab frames from a webcam connected via USB, encode them to H264 and send the packets over UDP to another PC.

    My console log when I run the executable is:

    [100%] Built target display
    [OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::SetOption():ENCODER_OPTION_TRACE_CALLBACK callback = 0x7f0c302a87c0.
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::InitEncoder(), openh264 codec version = 5a5c4f1
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:iUsageType = 0,iPicWidth= 640;iPicHeight= 480;iTargetBitrate= 400000;iMaxBitrate= 400000;iRCMode= 0;iPaddingFlag= 0;iTemporalLayerNum= 1;iSpatialLayerNum= 1;fFrameRate= 25.000000f;uiIntraPeriod= 12;eSpsPpsIdStrategy = 0;bPrefixNalAddingCtrl = 0;bSimulcastAVC=0;bEnableDenoise= 0;bEnableBackgroundDetection= 1;bEnableSceneChangeDetect = 1;bEnableAdaptiveQuant= 1;bEnableFrameSkip= 0;bEnableLongTermReference= 0;iLtrMarkPeriod= 30, bIsLosslessLink=0;iComplexityMode = 0;iNumRefFrame = 1;iEntropyCodingModeFlag = 0;uiMaxNalSize = 0;iLTRRefNum = 0;iMultipleThreadIdc = 1;iLoopFilterDisableIdc = 0 (offset(alpha/beta): 0,0;iComplexityMode = 0,iMaxQp = 51;iMinQp = 0)
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:sSpatialLayers[0]: .iVideoWidth= 640; .iVideoHeight= 480; .fFrameRate= 25.000000f; .iSpatialBitrate= 400000; .iMaxSpatialBitrate= 400000; .sSliceArgument.uiSliceMode= 1; .sSliceArgument.iSliceNum= 0; .sSliceArgument.uiSliceSizeConstraint= 1500;uiProfileIdc = 66;uiLevelIdc = 41
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:SliceArgumentValidationFixedSliceMode(), unsupported setting with Resolution and uiSliceNum combination under RC on! So uiSliceNum is changed to 6!
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:Setting MaxSpatialBitrate (400000) the same at SpatialBitrate (400000) will make the    actual bit rate lower than SpatialBitrate
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:bEnableFrameSkip = 0,bitrate can't be controlled for RC_QUALITY_MODE,RC_BITRATE_MODE and RC_TIMESTAMP_MODE without enabling skip frame.
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:Change QP Range from(0,51) to (12,42)
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WELS CPU features/capacities (0x4007fe3f) detected:   HTT:      Y, MMX:      Y, MMXEX:    Y, SSE:      Y, SSE2:     Y, SSE3:     Y, SSSE3:    Y, SSE4.1:   Y, SSE4.2:   Y, AVX:      Y, FMA:      Y, X87-FPU:  Y, 3DNOW:    N, 3DNOWEX:  N, ALTIVEC:  N, CMOV:     Y, MOVBE:    Y, AES:      Y, NUMBER OF LOGIC PROCESSORS ON CHIP: 8, CPU CACHE LINE SIZE (BYTES):        64
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt() exit, overall memory usage: 4542878 bytes
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt(), pCtx= 0x0x245a400.
    Output #0, h264, to 'udp://192.168.100.39:8080':
    Stream #0:0, 0, 1/25: Video: h264 (libopenh264), 1 reference frame, yuv420p, 640x480 (0x0), 0/1, q=2-31, 400 kb/s, 25 tbn
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:RcUpdateIntraComplexity iFrameDqBits = 385808,iQStep= 2016,iIntraCmplx = 777788928
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:[Rc]Layer 0: Frame timestamp = 0, Frame type = 2, encoding_qp = 30, average qp = 30, max qp = 33, min qp = 27, index = 0, iTid = 0, used = 385808, bitsperframe = 16000, target = 64000, remainingbits = -257808, skipbuffersize = 200000
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerNum = 2,iFrameSize = 48252
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 0,iNalType = 0,iNalCount = 2, first Nal Length=18,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 1,iNalType = 1,iNalCount = 6, first Nal Length=6057,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
    [libopenh264 @ 0x244aa00] 6 slices
    ./scriptBuild.sh: line 20: 10625 Segmentation fault      (core dumped) ./display

    As you can see, FFmpeg uses libopenh264 and configures it correctly. However, no matter what I do, it always returns the same segmentation fault...

    I've used commands like this:

    ffmpeg -s 640x480 -f video4linux2 -i /dev/video0 -r 30 -vcodec libopenh264 -an -f h264 udp://127.0.0.1:8080

    And it works perfectly, but I need to process the frames before sending them. That's why I'm trying to use the libraries.

    My FFmpeg version is:

    ffmpeg version 3.3.6 Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
    configuration: --disable-yasm --enable-shared --enable-libopenh264 --cc='gcc -fPIC'
    libavutil      55. 58.100 / 55. 58.100
    libavcodec     57. 89.100 / 57. 89.100
    libavformat    57. 71.100 / 57. 71.100
    libavdevice    57.  6.100 / 57.  6.100
    libavfilter     6. 82.100 /  6. 82.100
    libswscale      4.  6.100 /  4.  6.100
    libswresample   2.  7.100 /  2.  7.100

    I tried to get more information about the error using gdb, but it didn't give me any debugging info.

    How can I solve this problem?

  • Piping yt_dlp to FFMPEG using python: ffmpeg failing to recognize video data from pipe

    19 July 2024, by ThePrince

    I am trying to pipe the output of yt_dlp into ffmpeg, which then writes the output file:

    def pipe_function(url):

    ydl_command = [
        'yt-dlp', '-f', 'bestvideo+bestaudio', '--quiet', '--no-warnings', '-o', '-', url
    ]

    ffmpeg_command = [
        'ffmpeg', '-v', 'debug', '-i', '-', '-c', 'copy', 'output.mp4'
    ]

    # Start yt-dlp process
    ydl_process = subprocess.Popen(
        ydl_command,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )

    # Start ffmpeg process and pipe output from yt-dlp to ffmpeg
    ffmpeg_process = subprocess.Popen(
        ffmpeg_command,
        stdin=ydl_process.stdout,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )

    ydl_stdout, ydl_stderr = ydl_process.communicate()

    try:
        print(ydl_stdout[:100].decode('utf-8'))
    except UnicodeDecodeError:
        print(ydl_stdout[:100].decode('latin-1', errors='ignore'))

    ffmpeg_stdout, ffmpeg_stderr = ffmpeg_process.communicate()

    if ydl_stderr:
        try:
            print("error in yt-dlp: ", ydl_stderr.decode('utf-8'))
        except UnicodeDecodeError:
            print("decode error in yt-dlp: ", ydl_stderr)

    if ffmpeg_stderr:
        try:
            print("error in ffmpeg: ", ffmpeg_stderr.decode('utf-8'))
        except UnicodeDecodeError:
            print("decode error in ffmpeg: ", ffmpeg_stderr)

    try:
        print(ffmpeg_stdout[:100].decode('utf-8'))
    except UnicodeDecodeError:
        print(ffmpeg_stdout[:100].decode('latin-1', errors='ignore'))

    return ffmpeg_stdout

    This outputs an audio file. If I look at the console I see some messages. I have removed the lengthier stuff and shown only what I find the most interesting:

    [mpegts @ 0000021b8576a380] probing stream 0 pp:1457
    [mpegts @ 0000021b8576a380] probed stream 0
    [mpegts @ 0000021b8576a380] parser not found for codec bin_data, packets or times may be invalid.
    For transform of length 120, inverse, mdct_float, flags: [aligned, out_of_place], found 6 matches:
        1: mdct_inv_float_avx2 - type: mdct_float, len: [16, ∞], factors[2]: [2, any], flags: [aligned, out_of_place, inv_only], prio: 544
        2: mdct_pfa_15xM_inv_float_c - type: mdct_float, len: [30, ∞], factors[2]: [15, any], flags: [unaligned, out_of_place, inv_only], prio: 304
        3: mdct_pfa_5xM_inv_float_c - type: mdct_float, len: [10, ∞], factors[2]: [5, any], flags: [unaligned, out_of_place, inv_only], prio: 144
        4: mdct_pfa_3xM_inv_float_c - type: mdct_float, len: [6, ∞], factors[2]: [3, any], flags: [unaligned, out_of_place, inv_only], prio: 112
        5: mdct_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: 96
        6: mdct_naive_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: -130976
    For transform of length 60, inverse, fft_float, flags: [aligned, inplace, preshuf, asm_call], found 1 matches:
        1: fft_pfa_15xM_asm_float_avx2 - type: fft_float, len: [60, ∞], factors[2]: [15, 2], flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 688
    For transform of length 4, inverse, fft_float, flags: [aligned, inplace, preshuf, asm_call], found 1 matches:
        1: fft4_fwd_asm_float_sse2 - type: fft_float, len: 4, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 352
    Transform tree:
        mdct_inv_float_avx2 - type: mdct_float, len: 120, factors[2]: [2, any], flags: [aligned, out_of_place, inv_only]
            fft_pfa_15xM_asm_float_avx2 - type: fft_float, len: 60, factors[2]: [15, 2], flags: [aligned, inplace, out_of_place, preshuf, asm_call]
                fft4_fwd_asm_float_sse2 - type: fft_float, len: 4, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call]
    For transform of length 128, inverse, mdct_float, flags: [aligned, out_of_place], found 3 matches:
        1: mdct_inv_float_avx2 - type: mdct_float, len: [16, ∞], factors[2]: [2, any], flags: [aligned, out_of_place, inv_only], prio: 544
        2: mdct_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: 96
        3: mdct_naive_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: -130976
    For transform of length 64, inverse, fft_float, flags: [aligned, inplace, preshuf, asm_call], found 3 matches:
        1: fft_sr_asm_float_avx2 - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 480
        2: fft_sr_asm_float_fma3 - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 448
        3: fft_sr_asm_float_avx - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 416
    Transform tree:
        mdct_inv_float_avx2 - type: mdct_float, len: 128, factors[2]: [2, any], flags: [aligned, out_of_place, inv_only]
            fft_sr_asm_float_avx2 - type: fft_float, len: 64, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call]
    For transform of length 480, inverse, mdct_float, flags: [aligned, out_of_place], found 6 matches:
        1: mdct_inv_float_avx2 - type: mdct_float, len: [16, ∞], factors[2]: [2, any], flags: [aligned, out_of_place, inv_only], prio: 544
        2: mdct_pfa_15xM_inv_float_c - type: mdct_float, len: [30, ∞], factors[2]: [15, any], flags: [unaligned, out_of_place, inv_only], prio: 304
        3: mdct_pfa_5xM_inv_float_c - type: mdct_float, len: [10, ∞], factors[2]: [5, any], flags: [unaligned, out_of_place, inv_only], prio: 144
        4: mdct_pfa_3xM_inv_float_c - type: mdct_float, len: [6, ∞], factors[2]: [3, any], flags: [unaligned, out_of_place, inv_only], prio: 112
        5: mdct_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: 96
        6: mdct_naive_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: -130976
    For transform of length 240, inverse, fft_float, flags: [aligned, inplace, preshuf, asm_call], found 1 matches:
        1: fft_pfa_15xM_asm_float_avx2 - type: fft_float, len: [60, ∞], factors[2]: [15, 2], flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 688
    For transform of length 16, inverse, fft_float, flags: [aligned, inplace, preshuf, asm_call], found 2 matches:
        1: fft16_asm_float_fma3 - type: fft_float, len: 16, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 512
        2: fft16_asm_float_avx - type: fft_float, len: 16, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 480
    Transform tree:
        mdct_inv_float_avx2 - type: mdct_float, len: 480, factors[2]: [2, any], flags: [aligned, out_of_place, inv_only]
            fft_pfa_15xM_asm_float_avx2 - type: fft_float, len: 240, factors[2]: [15, 2], flags: [aligned, inplace, out_of_place, preshuf, asm_call]
                fft16_asm_float_fma3 - type: fft_float, len: 16, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call]
    For transform of length 512, inverse, mdct_float, flags: [aligned, out_of_place], found 3 matches:
        1: mdct_inv_float_avx2 - type: mdct_float, len: [16, ∞], factors[2]: [2, any], flags: [aligned, out_of_place, inv_only], prio: 544
        2: mdct_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: 96
        3: mdct_naive_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: -130976
    For transform of length 256, inverse, fft_float, flags: [aligned, inplace, preshuf, asm_call], found 3 matches:
        1: fft_sr_asm_float_avx2 - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 480
        2: fft_sr_asm_float_fma3 - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 448
        3: fft_sr_asm_float_avx - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 416
    Transform tree:
        mdct_inv_float_avx2 - type: mdct_float, len: 512, factors[2]: [2, any], flags: [aligned, out_of_place, inv_only]
            fft_sr_asm_float_avx2 - type: fft_float, len: 256, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call]
    For transform of length 960, inverse, mdct_float, flags: [aligned, out_of_place], found 6 matches:
        1: mdct_inv_float_avx2 - type: mdct_float, len: [16, ∞], factors[2]: [2, any], flags: [aligned, out_of_place, inv_only], prio: 544
        2: mdct_pfa_15xM_inv_float_c - type: mdct_float, len: [30, ∞], factors[2]: [15, any], flags: [unaligned, out_of_place, inv_only], prio: 304
        3: mdct_pfa_5xM_inv_float_c - type: mdct_float, len: [10, ∞], factors[2]: [5, any], flags: [unaligned, out_of_place, inv_only], prio: 144
        4: mdct_pfa_3xM_inv_float_c - type: mdct_float, len: [6, ∞], factors[2]: [3, any], flags: [unaligned, out_of_place, inv_only], prio: 112
        5: mdct_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: 96
        6: mdct_naive_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: -130976
    For transform of length 480, inverse, fft_float, flags: [aligned, inplace, preshuf, asm_call], found 1 matches:
        1: fft_pfa_15xM_asm_float_avx2 - type: fft_float, len: [60, ∞], factors[2]: [15, 2], flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 688
    For transform of length 32, inverse, fft_float, flags: [aligned, inplace, preshuf, asm_call], found 2 matches:
        1: fft32_asm_float_fma3 - type: fft_float, len: 32, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 512
        2: fft32_asm_float_avx - type: fft_float, len: 32, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 480
    Transform tree:
        mdct_inv_float_avx2 - type: mdct_float, len: 960, factors[2]: [2, any], flags: [aligned, out_of_place, inv_only]
            fft_pfa_15xM_asm_float_avx2 - type: fft_float, len: 480, factors[2]: [15, 2], flags: [aligned, inplace, out_of_place, preshuf, asm_call]
                fft32_asm_float_fma3 - type: fft_float, len: 32, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call]
    For transform of length 1024, inverse, mdct_float, flags: [aligned, out_of_place], found 3 matches:
        1: mdct_inv_float_avx2 - type: mdct_float, len: [16, ∞], factors[2]: [2, any], flags: [aligned, out_of_place, inv_only], prio: 544
        2: mdct_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: 96
        3: mdct_naive_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: -130976
    For transform of length 512, inverse, fft_float, flags: [aligned, inplace, preshuf, asm_call], found 3 matches:
        1: fft_sr_asm_float_avx2 - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 480
        2: fft_sr_asm_float_fma3 - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 448
        3: fft_sr_asm_float_avx - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 416
    Transform tree:
        mdct_inv_float_avx2 - type: mdct_float, len: 1024, factors[2]: [2, any], flags: [aligned, out_of_place, inv_only]
            fft_sr_asm_float_avx2 - type: fft_float, len: 512, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call]
    For transform of length 1024, forward, mdct_float, flags: [aligned, out_of_place], found 2 matches:
        1: mdct_fwd_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, fwd_only], prio: 96
        2: mdct_naive_fwd_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, fwd_only], prio: -130976
    For transform of length 512, forward, fft_float, flags: [aligned, inplace, preshuf], found 5 matches:
        1: fft_sr_ns_float_avx2 - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf], prio: 480
        2: fft_sr_ns_float_fma3 - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf], prio: 448
        3: fft_sr_ns_float_avx - type: fft_float, len: [64, 131072], factor: 2, flags: [aligned, inplace, out_of_place, preshuf], prio: 416
        4: fft_pfa_ns_float_c - type: fft_float, len: [6, ∞], factors[2]: [7, 5, 3, 2, any], flags: [unaligned, inplace, out_of_place, preshuf], prio: 112
        5: fft512_ns_float_c - type: fft_float, len: 512, factor: 2, flags: [unaligned, inplace, out_of_place, preshuf], prio: 96
    Transform tree:
        mdct_fwd_float_c - type: mdct_float, len: 1024, factors[2]: [2, any], flags: [unaligned, out_of_place, fwd_only]
            fft_sr_ns_float_avx2 - type: fft_float, len: 512, factor: 2, flags: [aligned, inplace, out_of_place, preshuf]
    For transform of length 64, inverse, mdct_float, flags: [aligned, out_of_place], found 3 matches:
        1: mdct_inv_float_avx2 - type: mdct_float, len: [16, ∞], factors[2]: [2, any], flags: [aligned, out_of_place, inv_only], prio: 544
        2: mdct_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: 96
        3: mdct_naive_inv_float_c - type: mdct_float, len: [2, ∞], factors[2]: [2, any], flags: [unaligned, out_of_place, inv_only], prio: -130976
    For transform of length 32, inverse, fft_float, flags: [aligned, inplace, preshuf, asm_call], found 2 matches:
        1: fft32_asm_float_fma3 - type: fft_float, len: 32, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 512
        2: fft32_asm_float_avx - type: fft_float, len: 32, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call], prio: 480
    Transform tree:
        mdct_inv_float_avx2 - type: mdct_float, len: 64, factors[2]: [2, any], flags: [aligned, out_of_place, inv_only]
            fft32_asm_float_fma3 - type: fft_float, len: 32, factor: 2, flags: [aligned, inplace, out_of_place, preshuf, asm_call]
    [mpegts @ 0000021b8576a380] max_analyze_duration 5000000 reached at 5005000 microseconds st:0
    [mpegts @ 0000021b8576a380] After avformat_find_stream_info() pos: 5403308 bytes read:5404624 seeks:0 frames:365
    Input #0, mpegts, from 'fd:':
      Duration: N/A, start: 1.400000, bitrate: 130 kb/s
      Program 1
        Metadata:
          service_name    : Service01
          service_provider: FFmpeg
      Stream #0:0[0x100], 152, 1/90000: Data: bin_data ([6][0][0][0] / 0x0006), 0/1
      Stream #0:1[0x101](und), 213, 1/90000: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 130 kb/s
    Successfully opened the file.
    Parsing a group of options: output url output.mp4.
    Applying option c (codec name) with argument copy.
    Successfully parsed a group of options.
    Opening an output file: output.mp4.
    [out#0/mp4 @ 0000021b89b251c0] No explicit maps, mapping streams automatically...
    [aost#0:0/copy @ 0000021b89b8d800] Created audio stream from input stream 0:1
    [file @ 0000021b89586680] Setting default whitelist 'file,crypto,data'
    Successfully opened the file.
    Stream mapping:
      Stream #0:1 -> #0:0 (copy)
    Output #0, mp4, to 'output.mp4':
      Metadata:
        encoder         : Lavf60.20.100
      Stream #0:0(und), 0, 1/44100: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 130 kb/s
    [out#0/mp4 @ 0000021b89b251c0] Starting thread...
    [in#0/mpegts @ 0000021b85724080] Starting thread...
    Automatically inserted bitstream filter 'aac_adtstoasc'; args=''
    [in#0/mpegts @ 0000021b85724080] EOF while reading inputpeed=  40x
    [in#0/mpegts @ 0000021b85724080] Terminating thread with return code 0 (success)
    [out#0/mp4 @ 0000021b89b251c0] All streams finished
    [out#0/mp4 @ 0000021b89b251c0] Terminating thread with return code 0 (success)
    [AVIOContext @ 0000021b87146340] Statistics: 2723207 bytes written, 2 seeks, 14 writeouts
    [out#0/mp4 @ 0000021b89b251c0] Output file #0 (output.mp4):
    [out#0/mp4 @ 0000021b89b251c0]   Output stream #0:0 (audio): 7232 packets muxed (2737451 bytes);
    [out#0/mp4 @ 0000021b89b251c0]   Total: 7232 packets (2737451 bytes) muxed
    [out#0/mp4 @ 0000021b89b251c0] video:0kB audio:2673kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    size=    2659kB time=00:02:47.92 bitrate= 129.7kbits/s speed=41.3x
    [in#0/mpegts @ 0000021b85724080] Input file #0 (fd:):
    [in#0/mpegts @ 0000021b85724080]   Input stream #0:1 (audio): 7232 packets read (2737451 bytes);
    [in#0/mpegts @ 0000021b85724080]   Total: 7232 packets (2737451 bytes) demuxed
    [AVIOContext @ 0000021b85744880] Statistics: 27226912 bytes read, 0 seeks

    It seems to me that the video data is being interpreted as bin_data and no codec is found for it. What is going on here?

    Must YT_DLP data be converted to some other format before being read by FFMPEG?

    Thank you.
