
Other articles (31)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you modify certain user-related behaviour (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • Frequent problems

    10 March 2010

    PHP with safe_mode enabled
    One of the main sources of problems is the PHP configuration, in particular having safe_mode enabled.
    The solution would be either to disable safe_mode or to place the script in a directory that Apache can access for the site.
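
    For example, assuming you can edit the server's php.ini (safe_mode was deprecated in PHP 5.3 and removed in PHP 5.4, so this only applies to older installations), the change would be:

    ; php.ini (hypothetical excerpt): disable safe_mode for the whole server
    safe_mode = Off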

On other sites (6473)

  • Video creation with the most recent ffmpeg API (2017)

    19 October 2022, by ar2015

    I have started learning how to work with ffmpeg, but all the tutorials and available examples (such as this one) suffer from deprecation.

    



    I am looking for code that creates an output video.

    



    Unfortunately, most of the good examples focus on reading from a file rather than creating one.

    



    Here I found a deprecated example, and I spent a long time fixing its errors until it looked like this:

    



    #include <iostream>&#xA;#include &#xA;#include &#xA;#include <string>&#xA;&#xA;extern "C" {&#xA;        #include <libavcodec></libavcodec>avcodec.h>&#xA;        #include <libavformat></libavformat>avformat.h>&#xA;        #include <libswscale></libswscale>swscale.h>&#xA;        #include <libavformat></libavformat>avio.h>&#xA;        #include <libavutil></libavutil>opt.h>&#xA;}&#xA;&#xA;#define WIDTH 800&#xA;#define HEIGHT 480&#xA;#define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * FRAME_RATE))&#xA;#define FRAME_RATE 24&#xA;#define PIXEL_FORMAT AV_PIX_FMT_YUV420P&#xA;#define STREAM_DURATION 5.0 //seconds&#xA;#define BIT_RATE 400000&#xA;&#xA;#define AV_CODEC_FLAG_GLOBAL_HEADER (1 &lt;&lt; 22)&#xA;#define CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_GLOBAL_HEADER&#xA;#define AVFMT_RAWPICTURE 0x0020&#xA;&#xA;using namespace std;&#xA;&#xA;static int sws_flags = SWS_BICUBIC;&#xA;&#xA;AVFrame *picture, *tmp_picture;&#xA;uint8_t *video_outbuf;&#xA;int frame_count, video_outbuf_size;&#xA;&#xA;&#xA;/****** IF LINUX ******/&#xA;inline int sprintf_s(char* buffer, size_t sizeOfBuffer, const char* format, ...)&#xA;{&#xA;    va_list ap;&#xA;    va_start(ap, format);&#xA;    int result = vsnprintf(buffer, sizeOfBuffer, format, ap);&#xA;    va_end(ap);&#xA;    return result;&#xA;}&#xA;&#xA;/****** IF LINUX ******/&#xA;template&#xA;inline int sprintf_s(char (&amp;buffer)[sizeOfBuffer], const char* format, ...)&#xA;{&#xA;    va_list ap;&#xA;    va_start(ap, format);&#xA;    int result = vsnprintf(buffer, sizeOfBuffer, format, ap);&#xA;    va_end(ap);&#xA;    return result;&#xA;}&#xA;&#xA;&#xA;static void closeVideo(AVFormatContext *oc, AVStream *st)&#xA;{&#xA;    avcodec_close(st->codec);&#xA;    av_free(picture->data[0]);&#xA;    av_free(picture);&#xA;    if (tmp_picture)&#xA;    {&#xA;        av_free(tmp_picture->data[0]);&#xA;        av_free(tmp_picture);&#xA;    }&#xA;    av_free(video_outbuf);&#xA;}&#xA;&#xA;static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)&#xA;{&#xA;    AVFrame *picture;&#xA;    uint8_t *picture_buf;&#xA;    int size;&#xA;&#xA;    picture = av_frame_alloc();&#xA;    if(!picture)&#xA;        return NULL;&#xA;    size = avpicture_get_size(pix_fmt, width, height);&#xA;    picture_buf = (uint8_t*)(av_malloc(size));&#xA;    if (!picture_buf)&#xA;    {&#xA;        av_free(picture);&#xA;        return NULL;&#xA;    }&#xA;    avpicture_fill((AVPicture *) picture, picture_buf, pix_fmt, WIDTH, HEIGHT);&#xA;    return picture;&#xA;}&#xA;&#xA;static void openVideo(AVFormatContext *oc, AVStream *st)&#xA;{&#xA;    AVCodec *codec;&#xA;    AVCodecContext *c;&#xA;&#xA;    c = st->codec;&#xA;    if(c->idct_algo == AV_CODEC_ID_H264)&#xA;        av_opt_set(c->priv_data, "preset", "slow", 0);&#xA;&#xA;    codec = avcodec_find_encoder(c->codec_id);&#xA;    if(!codec)&#xA;    {&#xA;        std::cout &lt;&lt; "Codec not found." &lt;&lt; std::endl;&#xA;        std::cin.get();std::cin.get();exit(1);&#xA;    }&#xA;&#xA;    if(codec->id == AV_CODEC_ID_H264)&#xA;        av_opt_set(c->priv_data, "preset", "medium", 0);&#xA;&#xA;    if(avcodec_open2(c, codec, NULL) &lt; 0)&#xA;    {&#xA;        std::cout &lt;&lt; "Could not open codec." 
&lt;&lt; std::endl;&#xA;        std::cin.get();std::cin.get();exit(1);&#xA;    }&#xA;    video_outbuf = NULL;&#xA;    if(!(oc->oformat->flags &amp; AVFMT_RAWPICTURE))&#xA;    {&#xA;        video_outbuf_size = 200000;&#xA;        video_outbuf = (uint8_t*)(av_malloc(video_outbuf_size));&#xA;    }&#xA;    picture = alloc_picture(c->pix_fmt, c->width, c->height);&#xA;    if(!picture)&#xA;    {&#xA;        std::cout &lt;&lt; "Could not allocate picture" &lt;&lt; std::endl;&#xA;        std::cin.get();exit(1);&#xA;    }&#xA;    tmp_picture = NULL;&#xA;    if(c->pix_fmt != AV_PIX_FMT_YUV420P)&#xA;    {&#xA;        tmp_picture = alloc_picture(AV_PIX_FMT_YUV420P, WIDTH, HEIGHT);&#xA;        if(!tmp_picture)&#xA;        {&#xA;            std::cout &lt;&lt; " Could not allocate temporary picture" &lt;&lt; std::endl;&#xA;            std::cin.get();exit(1);&#xA;        }&#xA;    }&#xA;}&#xA;&#xA;&#xA;static AVStream* addVideoStream(AVFormatContext *context, enum AVCodecID codecID)&#xA;{&#xA;    AVCodecContext *codec;&#xA;    AVStream *stream;&#xA;    stream = avformat_new_stream(context, NULL);&#xA;    if(!stream)&#xA;    {&#xA;        std::cout &lt;&lt; "Could not alloc stream." &lt;&lt; std::endl;&#xA;        std::cin.get();exit(1);&#xA;    }&#xA;&#xA;    codec = stream->codec;&#xA;    codec->codec_id = codecID;&#xA;    codec->codec_type = AVMEDIA_TYPE_VIDEO;&#xA;&#xA;    // sample rate&#xA;    codec->bit_rate = BIT_RATE;&#xA;    // resolution must be a multiple of two&#xA;    codec->width = WIDTH;&#xA;    codec->height = HEIGHT;&#xA;    codec->time_base.den = FRAME_RATE; // stream fps&#xA;    codec->time_base.num = 1;&#xA;    codec->gop_size = 12; // intra frame every twelve frames at most&#xA;    codec->pix_fmt = PIXEL_FORMAT;&#xA;    if(codec->codec_id == AV_CODEC_ID_MPEG2VIDEO)&#xA;        codec->max_b_frames = 2; // for testing, B frames&#xA;&#xA;    if(codec->codec_id == AV_CODEC_ID_MPEG1VIDEO)&#xA;        codec->mb_decision = 2;&#xA;&#xA;    if(context->oformat->flags &amp; AVFMT_GLOBALHEADER)&#xA;        codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;&#xA;&#xA;    return stream;&#xA;}&#xA;&#xA;static void fill_yuv_image(AVFrame *pict, int frame_index, int width, int height)&#xA;{&#xA;    int x, y, i;&#xA;    i = frame_index;&#xA;&#xA;    /* Y */&#xA;    for(y=0;ydata[0][y * pict->linesize[0] &#x2B; x] = x &#x2B; y &#x2B; i * 3;&#xA;        }&#xA;    }&#xA;&#xA;    /* Cb and Cr */&#xA;    for(y=0;y<height></height>2;y&#x2B;&#x2B;) {&#xA;        for(x=0;x<width></width>2;x&#x2B;&#x2B;) {&#xA;            pict->data[1][y * pict->linesize[1] &#x2B; x] = 128 &#x2B; y &#x2B; i * 2;&#xA;            pict->data[2][y * pict->linesize[2] &#x2B; x] = 64 &#x2B; x &#x2B; i * 5;&#xA;        }&#xA;    }&#xA;}&#xA;&#xA;static void write_video_frame(AVFormatContext *oc, AVStream *st)&#xA;{&#xA;    int out_size, ret;&#xA;    AVCodecContext *c;&#xA;    static struct SwsContext *img_convert_ctx;&#xA;    c = st->codec;&#xA;&#xA;    if(frame_count >= STREAM_NB_FRAMES)&#xA;    {&#xA;&#xA;    }&#xA;    else&#xA;    {&#xA;        if(c->pix_fmt != AV_PIX_FMT_YUV420P)&#xA;        {&#xA;            if(img_convert_ctx = NULL)&#xA;            {&#xA;                img_convert_ctx = sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_YUV420P, WIDTH, HEIGHT,&#xA;                                                c->pix_fmt, sws_flags, NULL, NULL, NULL);&#xA;                if(img_convert_ctx == NULL)&#xA;                {&#xA;                    std::cout &lt;&lt; "Cannot initialize the conversion context" &lt;&lt; std::endl;&#xA;              
      std::cin.get();exit(1);&#xA;                }&#xA;            }&#xA;            fill_yuv_image(tmp_picture, frame_count, WIDTH, HEIGHT);&#xA;            sws_scale(img_convert_ctx, tmp_picture->data, tmp_picture->linesize, 0, HEIGHT,&#xA;                        picture->data, picture->linesize);&#xA;        }&#xA;        else&#xA;        {&#xA;            fill_yuv_image(picture, frame_count, WIDTH, HEIGHT);&#xA;        }&#xA;    }&#xA;&#xA;    if (oc->oformat->flags &amp; AVFMT_RAWPICTURE)&#xA;    {&#xA;        /* raw video case. The API will change slightly in the near&#xA;           futur for that */&#xA;        AVPacket pkt;&#xA;        av_init_packet(&amp;pkt);&#xA;&#xA;        pkt.flags |= AV_PKT_FLAG_KEY;&#xA;        pkt.stream_index= st->index;&#xA;        pkt.data= (uint8_t *)picture;&#xA;        pkt.size= sizeof(AVPicture);&#xA;&#xA;        ret = av_interleaved_write_frame(oc, &amp;pkt);&#xA;    }&#xA;    else&#xA;    {&#xA;        /* encode the image */&#xA;        out_size = avcodec_encode_video(c, video_outbuf, video_outbuf_size, picture);&#xA;        /* if zero size, it means the image was buffered */&#xA;        if (out_size > 0)&#xA;        {&#xA;            AVPacket pkt;&#xA;            av_init_packet(&amp;pkt);&#xA;&#xA;            if (c->coded_frame->pts != AV_NOPTS_VALUE)&#xA;                pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);&#xA;            if(c->coded_frame->key_frame)&#xA;                pkt.flags |= AV_PKT_FLAG_KEY;&#xA;            pkt.stream_index= st->index;&#xA;            pkt.data= video_outbuf;&#xA;            pkt.size= out_size;&#xA;            /* write the compressed frame in the media file */&#xA;            ret = av_interleaved_write_frame(oc, &amp;pkt);&#xA;        } else {&#xA;            ret = 0;&#xA;        }&#xA;    }&#xA;    if (ret != 0) {&#xA;        std::cout &lt;&lt; "Error while writing video frames" &lt;&lt; std::endl;&#xA;        std::cin.get();exit(1);&#xA;    }&#xA;    frame_count&#x2B;&#x2B;;&#xA;}&#xA;&#xA;int main ( int argc, char *argv[] )&#xA;{&#xA;    const char* filename = "test.h264";&#xA;    AVOutputFormat *outputFormat;&#xA;    AVFormatContext *context;&#xA;    AVCodecContext *codec;&#xA;    AVStream *videoStream;&#xA;    double videoPTS;&#xA;&#xA;    // init libavcodec, register all codecs and formats&#xA;    av_register_all(); &#xA;    // auto detect the output format from the name&#xA;    outputFormat = av_guess_format(NULL, filename, NULL);&#xA;    if(!outputFormat)&#xA;    {&#xA;        std::cout &lt;&lt; "Cannot guess output format! Using mpeg!" &lt;&lt; std::endl;&#xA;        std::cin.get();&#xA;        outputFormat = av_guess_format(NULL, "h263" , NULL);&#xA;    }&#xA;    if(!outputFormat)&#xA;    {&#xA;        std::cout &lt;&lt; "Could not find suitable output format." &lt;&lt; std::endl;&#xA;        std::cin.get();exit(1);&#xA;    }&#xA;&#xA;    context = avformat_alloc_context();&#xA;    if(!context)&#xA;    {&#xA;        std::cout &lt;&lt; "Cannot allocate avformat memory." 
&lt;&lt; std::endl;&#xA;        std::cin.get();exit(1);&#xA;    }&#xA;    context->oformat = outputFormat;&#xA;    sprintf_s(context->filename, sizeof(context->filename), "%s", filename);&#xA;    std::cout &lt;&lt; "Is &#x27;" &lt;&lt; context->filename &lt;&lt; "&#x27; = &#x27;" &lt;&lt; filename &lt;&lt; "&#x27;" &lt;&lt; std::endl;&#xA;&#xA;&#xA;    videoStream = NULL;&#xA;    outputFormat->audio_codec = AV_CODEC_ID_NONE;&#xA;    videoStream = addVideoStream(context, outputFormat->video_codec);&#xA;&#xA;    /* still needed?&#xA;    if(av_set_parameters(context, NULL) &lt; 0)&#xA;    {&#xA;        std::cout &lt;&lt; "Invalid output format parameters." &lt;&lt; std::endl;&#xA;        exit(0);&#xA;    }*/&#xA;&#xA;    av_dump_format(context, 0, filename, 1);&#xA;&#xA;    if(videoStream)&#xA;        openVideo(context, videoStream);&#xA;&#xA;    if(!outputFormat->flags &amp; AVFMT_NOFILE)&#xA;    {&#xA;        if(avio_open(&amp;context->pb, filename, AVIO_FLAG_READ_WRITE) &lt; 0)&#xA;        {&#xA;            std::cout &lt;&lt; "Could not open " &lt;&lt; filename &lt;&lt; std::endl;&#xA;            std::cin.get();exit(1);&#xA;        }&#xA;    }&#xA;&#xA;    avformat_write_header(context, 0);&#xA;&#xA;    while(true)&#xA;    {&#xA;        if(videoStream)&#xA;            videoPTS = (double) videoStream->pts.val * videoStream->time_base.num / videoStream->time_base.den;&#xA;        else&#xA;            videoPTS = 0.;&#xA;&#xA;        if((!videoStream || videoPTS >= STREAM_DURATION))&#xA;        {&#xA;            break;&#xA;        }&#xA;        write_video_frame(context, videoStream);&#xA;    }&#xA;    av_write_trailer(context);&#xA;    if(videoStream)&#xA;        closeVideo(context, videoStream);&#xA;    for(int i = 0; i &lt; context->nb_streams; i&#x2B;&#x2B;)&#xA;    {&#xA;        av_freep(&amp;context->streams[i]->codec);&#xA;        av_freep(&amp;context->streams[i]);&#xA;    }&#xA;&#xA;    if(!(outputFormat->flags &amp; AVFMT_NOFILE))&#xA;    {&#xA;        avio_close(context->pb);&#xA;    }&#xA;    av_free(context);&#xA;    std::cin.get();&#xA;    return 0;&#xA;}&#xA;</string></iostream>

    Compile:

    g++ -I ./FFmpeg/ video.cpp -L fflibs -lavcodec -lavformat
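
    Since the listing also calls into libswscale and libavutil, the link line above is probably incomplete; assuming the FFmpeg development packages are installed where pkg-config can find them (rather than built locally in ./FFmpeg), something like the following would pull in all four libraries:

    g++ video.cpp -o video $(pkg-config --cflags --libs libavformat libavcodec libswscale libavutil)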

    The code comes with two errors:

    video.cpp:249:84: error: ‘avcodec_encode_video’ was not declared in this scope
             out_size = avcodec_encode_video(c, video_outbuf, video_outbuf_size, picture);
                                                                                        ^

    video.cpp: In function ‘int main(int, char**)’:
    video.cpp:342:46: error: ‘AVStream {aka struct AVStream}’ has no member named ‘pts’
                 videoPTS = (double) videoStream->pts.val * videoStream->time_base.num / videoStream->time_base.den;
                                                  ^

    and a huge number of deprecation warnings.

    video.cpp: In function ‘void closeVideo(AVFormatContext*, AVStream*)’:
    video.cpp:60:23: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
         avcodec_close(st->codec);
                           ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
         AVCodecContext *codec;
                         ^
    video.cpp: In function ‘AVFrame* alloc_picture(AVPixelFormat, int, int)’:
    video.cpp:80:12: warning: ‘int avpicture_get_size(AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
         size = avpicture_get_size(pix_fmt, width, height);
                ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:5228:5: note: declared here
     int avpicture_get_size(enum AVPixelFormat pix_fmt, int width, int height);
         ^
    video.cpp:87:5: warning: ‘int avpicture_fill(AVPicture*, const uint8_t*, AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
         avpicture_fill((AVPicture *) picture, picture_buf, pix_fmt, WIDTH, HEIGHT);
         ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:5213:5: note: declared here
     int avpicture_fill(AVPicture *picture, const uint8_t *ptr,
         ^
    video.cpp: In function ‘void openVideo(AVFormatContext*, AVStream*)’:
    video.cpp:96:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
         c = st->codec;
                 ^
    video.cpp: In function ‘AVStream* addVideoStream(AVFormatContext*, AVCodecID)’:
    video.cpp:151:21: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
         codec = stream->codec;
                         ^
    video.cpp: In function ‘void write_video_frame(AVFormatContext*, AVStream*)’:
    video.cpp:202:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
         c = st->codec;
                 ^
    video.cpp:256:20: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                 if (c->coded_frame->pts != AV_NOPTS_VALUE)
                        ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
         attribute_deprecated AVFrame *coded_frame;
                                       ^
    video.cpp:257:42: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                     pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
                                              ^
    video.cpp:258:19: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                 if(c->coded_frame->key_frame)
                       ^
    video.cpp:357:40: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
             av_freep(&context->streams[i]->codec);
                                            ^
    video.cpp:337:38: warning: ignoring return value of ‘int avformat_write_header(AVFormatContext*, AVDictionary**)’, declared with attribute warn_unused_result [-Wunused-result]
         avformat_write_header(context, 0);
                                          ^

    I have also defined a few macros to redefine the ones that have been removed. In a modern ffmpeg API they should be replaced.

    Could someone please help me fix the errors and the deprecation warnings so that the code complies with the recent ffmpeg API?
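
    For orientation, here is a rough sketch of how the same program looks with the current API (FFmpeg 4.x and later): the encoder gets its own AVCodecContext instead of the deprecated AVStream::codec, encoding goes through avcodec_send_frame()/avcodec_receive_packet() instead of the removed avcodec_encode_video(), and the loop is driven by a frame counter rather than by AVStream::pts, which no longer exists. Error checks are left out, and the H.264 codec choice and the output name are assumptions rather than anything from the post:

    // sketch.cpp: write ~5 seconds of generated video to test.h264 (assumed name)
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    }

    int main() {
        const char *filename = "test.h264";
        AVFormatContext *oc = nullptr;
        avformat_alloc_output_context2(&oc, nullptr, nullptr, filename);

        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        AVStream *st = avformat_new_stream(oc, nullptr);
        AVCodecContext *enc = avcodec_alloc_context3(codec);   // own context, not st->codec
        enc->width     = 800;
        enc->height    = 480;
        enc->pix_fmt   = AV_PIX_FMT_YUV420P;
        enc->time_base = AVRational{1, 24};                    // 24 fps
        enc->bit_rate  = 400000;
        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        avcodec_open2(enc, codec, nullptr);
        avcodec_parameters_from_context(st->codecpar, enc);    // replaces st->codec
        st->time_base = enc->time_base;

        avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
        avformat_write_header(oc, nullptr);

        AVFrame *frame = av_frame_alloc();
        frame->format = enc->pix_fmt;
        frame->width  = enc->width;
        frame->height = enc->height;
        av_frame_get_buffer(frame, 0);

        AVPacket *pkt = av_packet_alloc();
        for (int i = 0; i < 24 * 5 + 1; i++) {
            AVFrame *in = frame;
            if (i < 24 * 5) {
                av_frame_make_writable(frame);
                /* fill frame->data[0..2] here (e.g. the fill_yuv_image() pattern) */
                frame->pts = i;                                // counted in enc->time_base units
            } else {
                in = nullptr;                                  // NULL frame flushes the encoder
            }
            avcodec_send_frame(enc, in);
            while (avcodec_receive_packet(enc, pkt) == 0) {
                av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
                pkt->stream_index = st->index;
                av_interleaved_write_frame(oc, pkt);
            }
        }

        av_write_trailer(oc);
        av_packet_free(&pkt);
        av_frame_free(&frame);
        avcodec_free_context(&enc);
        avio_closep(&oc->pb);
        avformat_free_context(oc);
        return 0;
    }

    The filling of the YUV planes is left as a stub; the fill_yuv_image() logic from the original listing can be dropped in there unchanged.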


  • ffmpeg error "Could not allocate picture : Invalid argument Found Video Stream Found Audio Stream"

    26 October 2020, by Dinkan

    I am trying to write a C program that streams audio and video over the network with RTP (rtp_mpegts), copying both codecs, the way this command does:

    ffmpeg -re -i Sample_AV_15min.ts -acodec copy -vcodec copy -f rtp_mpegts rtp://192.168.1.1:5004

    I am using muxing.c as an example of how to use the ffmpeg libraries; the ffmpeg command-line application itself works fine.

    Stream details:

    Input #0, mpegts, from 'Weather_Nation_10min.ts':
      Duration: 00:10:00.38, start: 41313.400811, bitrate: 2840 kb/s
      Program 1
        Stream #0:0[0x11]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 1440x1080 [SAR 4:3 DAR 16:9], 29.97 fps, 59.94 tbr, 90k tbn, 59.94 tbc
        Stream #0:1[0x14]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 448 kb/s
    Output #0, rtp_mpegts, to 'rtp://192.168.1.1:5004':
      Metadata:
        encoder         : Lavf54.63.104
        Stream #0:0: Video: h264 ([27][0][0][0] / 0x001B), yuv420p, 1440x1080 [SAR 4:3 DAR 16:9], q=2-31, 29.97 fps, 90k tbn, 29.97 tbc
        Stream #0:1: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, 448 kb/s
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #0:1 -> #0:1 (copy)

    However, my application fails with:

    ./my_test_app Sample_AV_15min.ts rtp://192.168.1.1:5004
    [h264 @ 0x800b30] non-existing PPS referenced
    [h264 @ 0x800b30] non-existing PPS 0 referenced
    [h264 @ 0x800b30] decode_slice_header error
    [h264 @ 0x800b30] no frame!

    [....snipped...]
    [h264 @ 0x800b30] non-existing PPS 0 referenced
    [h264 @ 0x800b30] non-existing PPS referenced
    [h264 @ 0x800b30] non-existing PPS 0 referenced
    [h264 @ 0x800b30] decode_slice_header error
    [h264 @ 0x800b30] no frame!
    [h264 @ 0x800b30] mmco: unref short failure
    [h264 @ 0x800b30] mmco: unref short failure

    [mpegts @ 0x800020] max_analyze_duration 5000000 reached at 5024000 microseconds
    [mpegts @ 0x800020] PES packet size mismatch
    could not find codec tag for codec id 17075200, default to 0.
    could not find codec tag for codec id 86019, default to 0.
    Could not allocate picture: Invalid argument
    Found Video Stream Found Audio Stream

    How do I fix this? My complete source code, based on muxing.c, is below (a stream-copy sketch follows the listing).


    /**&#xA; * @file&#xA; * libavformat API example.&#xA; *&#xA; * Output a media file in any supported libavformat format.&#xA; * The default codecs are used.&#xA; * @example doc/examples/muxing.c&#xA; */&#xA;&#xA;#include &#xA;#include &#xA;#include &#xA;#include &#xA;&#xA;#include <libavutil></libavutil>mathematics.h>&#xA;#include <libavformat></libavformat>avformat.h>&#xA;#include <libswscale></libswscale>swscale.h>&#xA;&#xA;/* 5 seconds stream duration */&#xA;#define STREAM_DURATION   200.0&#xA;#define STREAM_FRAME_RATE 25 /* 25 images/s */&#xA;#define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * STREAM_FRAME_RATE))&#xA;#define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */&#xA;&#xA;static int sws_flags = SWS_BICUBIC;&#xA;&#xA;/**************************************************************/&#xA;/* audio output */&#xA;&#xA;static float t, tincr, tincr2;&#xA;static int16_t *samples;&#xA;static int audio_input_frame_size;&#xA;#if 0&#xA;/* Add an output stream. */&#xA;static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,&#xA;                            enum AVCodecID codec_id)&#xA;{&#xA;    AVCodecContext *c;&#xA;    AVStream *st;&#xA;&#xA;    /* find the encoder */&#xA;    *codec = avcodec_find_encoder(codec_id);&#xA;    if (!(*codec)) {&#xA;        fprintf(stderr, "Could not find encoder for &#x27;%s&#x27;\n",&#xA;                avcodec_get_name(codec_id));&#xA;        exit(1);&#xA;    }&#xA;&#xA;    st = avformat_new_stream(oc, *codec);&#xA;    if (!st) {&#xA;        fprintf(stderr, "Could not allocate stream\n");&#xA;        exit(1);&#xA;    }&#xA;    st->id = oc->nb_streams-1;&#xA;    c = st->codec;&#xA;&#xA;    switch ((*codec)->type) {&#xA;    case AVMEDIA_TYPE_AUDIO:&#xA;        st->id = 1;&#xA;        c->sample_fmt  = AV_SAMPLE_FMT_S16;&#xA;        c->bit_rate    = 64000;&#xA;        c->sample_rate = 44100;&#xA;        c->channels    = 2;&#xA;        break;&#xA;&#xA;    case AVMEDIA_TYPE_VIDEO:&#xA;        c->codec_id = codec_id;&#xA;&#xA;        c->bit_rate = 400000;&#xA;        /* Resolution must be a multiple of two. */&#xA;        c->width    = 352;&#xA;        c->height   = 288;&#xA;        /* timebase: This is the fundamental unit of time (in seconds) in terms&#xA;         * of which frame timestamps are represented. For fixed-fps content,&#xA;         * timebase should be 1/framerate and timestamp increments should be&#xA;         * identical to 1. */&#xA;        c->time_base.den = STREAM_FRAME_RATE;&#xA;        c->time_base.num = 1;&#xA;        c->gop_size      = 12; /* emit one intra frame every twelve frames at most */&#xA;        c->pix_fmt       = STREAM_PIX_FMT;&#xA;        if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {&#xA;            /* just for testing, we also add B frames */&#xA;            c->max_b_frames = 2;&#xA;        }&#xA;        if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {&#xA;            /* Needed to avoid using macroblocks in which some coeffs overflow.&#xA;             * This does not happen with normal video, it just happens here as&#xA;             * the motion of the chroma plane does not match the luma plane. */&#xA;            c->mb_decision = 2;&#xA;        }&#xA;    break;&#xA;&#xA;    default:&#xA;        break;&#xA;    }&#xA;&#xA;    /* Some formats want stream headers to be separate. 
*/&#xA;    if (oc->oformat->flags &amp; AVFMT_GLOBALHEADER)&#xA;        c->flags |= CODEC_FLAG_GLOBAL_HEADER;&#xA;&#xA;    return st;&#xA;}&#xA;#endif &#xA;/**************************************************************/&#xA;/* audio output */&#xA;&#xA;static float t, tincr, tincr2;&#xA;static int16_t *samples;&#xA;static int audio_input_frame_size;&#xA;&#xA;static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)&#xA;{&#xA;    AVCodecContext *c;&#xA;    int ret;&#xA;&#xA;    c = st->codec;&#xA;&#xA;    /* open it */&#xA;    ret = avcodec_open2(c, codec, NULL);&#xA;    if (ret &lt; 0) {&#xA;        fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));&#xA;        exit(1);&#xA;    }&#xA;&#xA;    /* init signal generator */&#xA;    t     = 0;&#xA;    tincr = 2 * M_PI * 110.0 / c->sample_rate;&#xA;    /* increment frequency by 110 Hz per second */&#xA;    tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;&#xA;&#xA;    if (c->codec->capabilities &amp; CODEC_CAP_VARIABLE_FRAME_SIZE)&#xA;        audio_input_frame_size = 10000;&#xA;    else&#xA;        audio_input_frame_size = c->frame_size;&#xA;    samples = av_malloc(audio_input_frame_size *&#xA;                        av_get_bytes_per_sample(c->sample_fmt) *&#xA;                        c->channels);&#xA;    if (!samples) {&#xA;        fprintf(stderr, "Could not allocate audio samples buffer\n");&#xA;        exit(1);&#xA;    }&#xA;}&#xA;&#xA;/* Prepare a 16 bit dummy audio frame of &#x27;frame_size&#x27; samples and&#xA; * &#x27;nb_channels&#x27; channels. */&#xA;static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)&#xA;{&#xA;    int j, i, v;&#xA;    int16_t *q;&#xA;&#xA;    q = samples;&#xA;    for (j = 0; j &lt; frame_size; j&#x2B;&#x2B;) {&#xA;        v = (int)(sin(t) * 10000);&#xA;        for (i = 0; i &lt; nb_channels; i&#x2B;&#x2B;)&#xA;            *q&#x2B;&#x2B; = v;&#xA;        t     &#x2B;= tincr;&#xA;        tincr &#x2B;= tincr2;&#xA;    }&#xA;}&#xA;&#xA;static void write_audio_frame(AVFormatContext *oc, AVStream *st)&#xA;{&#xA;    AVCodecContext *c;&#xA;    AVPacket pkt = { 0 }; // data and size must be 0;&#xA;    AVFrame *frame = avcodec_alloc_frame();&#xA;    int got_packet, ret;&#xA;&#xA;    av_init_packet(&amp;pkt);&#xA;    c = st->codec;&#xA;&#xA;    get_audio_frame(samples, audio_input_frame_size, c->channels);&#xA;    frame->nb_samples = audio_input_frame_size;&#xA;    avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,&#xA;                             (uint8_t *)samples,&#xA;                             audio_input_frame_size *&#xA;                             av_get_bytes_per_sample(c->sample_fmt) *&#xA;                             c->channels, 1);&#xA;&#xA;    ret = avcodec_encode_audio2(c, &amp;pkt, frame, &amp;got_packet);&#xA;    if (ret &lt; 0) {&#xA;        fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));&#xA;        exit(1);&#xA;    }&#xA;&#xA;    if (!got_packet)&#xA;        return;&#xA;&#xA;    pkt.stream_index = st->index;&#xA;&#xA;    /* Write the compressed frame to the media file. 
*/&#xA;    ret = av_interleaved_write_frame(oc, &amp;pkt);&#xA;    if (ret != 0) {&#xA;        fprintf(stderr, "Error while writing audio frame: %s\n",&#xA;                av_err2str(ret));&#xA;        exit(1);&#xA;    }&#xA;    avcodec_free_frame(&amp;frame);&#xA;}&#xA;&#xA;static void close_audio(AVFormatContext *oc, AVStream *st)&#xA;{&#xA;    avcodec_close(st->codec);&#xA;&#xA;    av_free(samples);&#xA;}&#xA;&#xA;/**************************************************************/&#xA;/* video output */&#xA;&#xA;static AVFrame *frame;&#xA;static AVPicture src_picture, dst_picture;&#xA;static int frame_count;&#xA;&#xA;static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)&#xA;{&#xA;    int ret;&#xA;    AVCodecContext *c = st->codec;&#xA;&#xA;    /* open the codec */&#xA;    ret = avcodec_open2(c, codec, NULL);&#xA;    if (ret &lt; 0) {&#xA;        fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));&#xA;        exit(1);&#xA;    }&#xA;&#xA;    /* allocate and init a re-usable frame */&#xA;    frame = avcodec_alloc_frame();&#xA;    if (!frame) {&#xA;        fprintf(stderr, "Could not allocate video frame\n");&#xA;        exit(1);&#xA;    }&#xA;&#xA;    /* Allocate the encoded raw picture. */&#xA;    ret = avpicture_alloc(&amp;dst_picture, c->pix_fmt, c->width, c->height);&#xA;    if (ret &lt; 0) {&#xA;        fprintf(stderr, "Could not allocate picture: %s\n", av_err2str(ret));&#xA;        exit(1);&#xA;    }&#xA;&#xA;    /* If the output format is not YUV420P, then a temporary YUV420P&#xA;     * picture is needed too. It is then converted to the required&#xA;     * output format. */&#xA;    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {&#xA;        ret = avpicture_alloc(&amp;src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);&#xA;        if (ret &lt; 0) {&#xA;            fprintf(stderr, "Could not allocate temporary picture: %s\n",&#xA;                    av_err2str(ret));&#xA;            exit(1);&#xA;        }&#xA;    }&#xA;&#xA;    /* copy data and linesize picture pointers to frame */&#xA;    *((AVPicture *)frame) = dst_picture;&#xA;}&#xA;&#xA;/* Prepare a dummy image. */&#xA;static void fill_yuv_image(AVPicture *pict, int frame_index,&#xA;                           int width, int height)&#xA;{&#xA;    int x, y, i;&#xA;&#xA;    i = frame_index;&#xA;&#xA;    /* Y */&#xA;    for (y = 0; y &lt; height; y&#x2B;&#x2B;)&#xA;        for (x = 0; x &lt; width; x&#x2B;&#x2B;)&#xA;            pict->data[0][y * pict->linesize[0] &#x2B; x] = x &#x2B; y &#x2B; i * 3;&#xA;&#xA;    /* Cb and Cr */&#xA;    for (y = 0; y &lt; height / 2; y&#x2B;&#x2B;) {&#xA;        for (x = 0; x &lt; width / 2; x&#x2B;&#x2B;) {&#xA;            pict->data[1][y * pict->linesize[1] &#x2B; x] = 128 &#x2B; y &#x2B; i * 2;&#xA;            pict->data[2][y * pict->linesize[2] &#x2B; x] = 64 &#x2B; x &#x2B; i * 5;&#xA;        }&#xA;    }&#xA;}&#xA;&#xA;static void write_video_frame(AVFormatContext *oc, AVStream *st)&#xA;{&#xA;    int ret;&#xA;    static struct SwsContext *sws_ctx;&#xA;    AVCodecContext *c = st->codec;&#xA;&#xA;    if (frame_count >= STREAM_NB_FRAMES) {&#xA;        /* No more frames to compress. The codec has a latency of a few&#xA;         * frames if using B-frames, so we get the last frames by&#xA;         * passing the same picture again. 
*/&#xA;    } else {&#xA;        if (c->pix_fmt != AV_PIX_FMT_YUV420P) {&#xA;            /* as we only generate a YUV420P picture, we must convert it&#xA;             * to the codec pixel format if needed */&#xA;            if (!sws_ctx) {&#xA;                sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,&#xA;                                         c->width, c->height, c->pix_fmt,&#xA;                                         sws_flags, NULL, NULL, NULL);&#xA;                if (!sws_ctx) {&#xA;                    fprintf(stderr,&#xA;                            "Could not initialize the conversion context\n");&#xA;                    exit(1);&#xA;                }&#xA;            }&#xA;            fill_yuv_image(&amp;src_picture, frame_count, c->width, c->height);&#xA;            sws_scale(sws_ctx,&#xA;                      (const uint8_t * const *)src_picture.data, src_picture.linesize,&#xA;                      0, c->height, dst_picture.data, dst_picture.linesize);&#xA;        } else {&#xA;            fill_yuv_image(&amp;dst_picture, frame_count, c->width, c->height);&#xA;        }&#xA;    }&#xA;&#xA;    if (oc->oformat->flags &amp; AVFMT_RAWPICTURE) {&#xA;        /* Raw video case - directly store the picture in the packet */&#xA;        AVPacket pkt;&#xA;        av_init_packet(&amp;pkt);&#xA;&#xA;        pkt.flags        |= AV_PKT_FLAG_KEY;&#xA;        pkt.stream_index  = st->index;&#xA;        pkt.data          = dst_picture.data[0];&#xA;        pkt.size          = sizeof(AVPicture);&#xA;&#xA;        ret = av_interleaved_write_frame(oc, &amp;pkt);&#xA;    } else {&#xA;        /* encode the image */&#xA;        AVPacket pkt;&#xA;        int got_output;&#xA;&#xA;        av_init_packet(&amp;pkt);&#xA;        pkt.data = NULL;    // packet data will be allocated by the encoder&#xA;        pkt.size = 0;&#xA;&#xA;        ret = avcodec_encode_video2(c, &amp;pkt, frame, &amp;got_output);&#xA;        if (ret &lt; 0) {&#xA;            fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));&#xA;            exit(1);&#xA;        }&#xA;&#xA;        /* If size is zero, it means the image was buffered. */&#xA;        if (got_output) {&#xA;            if (c->coded_frame->key_frame)&#xA;                pkt.flags |= AV_PKT_FLAG_KEY;&#xA;&#xA;            pkt.stream_index = st->index;&#xA;&#xA;            /* Write the compressed frame to the media file. */&#xA;            ret = av_interleaved_write_frame(oc, &amp;pkt);&#xA;        } else {&#xA;            ret = 0;&#xA;        }&#xA;    }&#xA;    if (ret != 0) {&#xA;        fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));&#xA;        exit(1);&#xA;    }&#xA;    frame_count&#x2B;&#x2B;;&#xA;}&#xA;&#xA;static void close_video(AVFormatContext *oc, AVStream *st)&#xA;{&#xA;    avcodec_close(st->codec);&#xA;    av_free(src_picture.data[0]);&#xA;    av_free(dst_picture.data[0]);&#xA;    av_free(frame);&#xA;}&#xA;&#xA;/**************************************************************/&#xA;/* media file output */&#xA;&#xA;int main(int argc, char **argv)&#xA;{&#xA;    const char *filename;&#xA;    AVOutputFormat *fmt;&#xA;    AVFormatContext *oc;&#xA;    AVStream *audio_st, *video_st;&#xA;    AVCodec *audio_codec, *video_codec;&#xA;    double audio_pts, video_pts;&#xA;    int ret;&#xA;    char errbuf[50];&#xA;    int i = 0;&#xA;    /* Initialize libavcodec, and register all codecs and formats. 
*/&#xA;    av_register_all();&#xA;&#xA;    if (argc != 3) {&#xA;        printf("usage: %s input_file out_file|stream\n"&#xA;               "API example program to output a media file with libavformat.\n"&#xA;               "This program generates a synthetic audio and video stream, encodes and\n"&#xA;               "muxes them into a file named output_file.\n"&#xA;               "The output format is automatically guessed according to the file extension.\n"&#xA;               "Raw images can also be output by using &#x27;%%d&#x27; in the filename.\n"&#xA;               "\n", argv[0]);&#xA;        return 1;&#xA;    }&#xA;&#xA;    filename = argv[2];&#xA;&#xA;    /* allocate the output media context */&#xA;    avformat_alloc_output_context2(&amp;oc, NULL, "rtp_mpegts", filename);&#xA;    if (!oc) {&#xA;        printf("Could not deduce output format from file extension: using MPEG.\n");&#xA;        avformat_alloc_output_context2(&amp;oc, NULL, "mpeg", filename);&#xA;    }&#xA;    if (!oc) {&#xA;        return 1;&#xA;    }&#xA;    fmt = oc->oformat;&#xA;    //Find input stream info.&#xA;&#xA;   video_st = NULL;&#xA;   audio_st = NULL;&#xA;&#xA;   avformat_open_input( &amp;oc, argv[1], 0, 0);&#xA;&#xA;   if ((ret = avformat_find_stream_info(oc, 0))&lt; 0)&#xA;   {&#xA;       av_strerror(ret, errbuf,sizeof(errbuf));&#xA;       printf("Not Able to find stream info::%s ", errbuf);&#xA;       ret = -1;&#xA;       return ret;&#xA;   }&#xA;   for (i = 0; i &lt; oc->nb_streams; i&#x2B;&#x2B;)&#xA;   {&#xA;       if(oc->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)&#xA;       {&#xA;           AVCodecContext *codec_ctx;&#xA;           unsigned int tag = 0;&#xA;&#xA;           printf("Found Video Stream ");&#xA;           video_st = oc->streams[i];&#xA;           codec_ctx = video_st->codec;&#xA;           // m_num_frames = oc->streams[i]->nb_frames;&#xA;           video_codec = avcodec_find_decoder(codec_ctx->codec_id);&#xA;           ret = avcodec_open2(codec_ctx, video_codec, NULL);&#xA;            if (ret &lt; 0) &#xA;            {&#xA;                av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);&#xA;                return ret;&#xA;            }&#xA;            if (av_codec_get_tag2(oc->oformat->codec_tag, video_codec->id, &amp;tag) == 0) &#xA;            {&#xA;                av_log(NULL, AV_LOG_ERROR, "could not find codec tag for codec id %d, default to 0.\n", audio_codec->id);&#xA;            }&#xA;            video_st->codec = avcodec_alloc_context3(video_codec);&#xA;            video_st->codec->codec_tag = tag;&#xA;       }&#xA;&#xA;       if(oc->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)&#xA;       {&#xA;           AVCodecContext *codec_ctx;&#xA;           unsigned int tag = 0;&#xA;&#xA;           printf("Found Audio Stream ");&#xA;           audio_st = oc->streams[i];&#xA;          // aud_dts = audio_st->cur_dts;&#xA;          // aud_pts = audio_st->last_IP_pts;           &#xA;          codec_ctx = audio_st->codec;&#xA;          audio_codec = avcodec_find_decoder(codec_ctx->codec_id);&#xA;          ret = avcodec_open2(codec_ctx, audio_codec, NULL);&#xA;          if (ret &lt; 0) &#xA;          {&#xA;             av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);&#xA;             return ret;&#xA;          }&#xA;          if (av_codec_get_tag2(oc->oformat->codec_tag, audio_codec->id, &amp;tag) == 0) &#xA;          {&#xA;              av_log(NULL, AV_LOG_ERROR, "could not find codec tag for codec id %d, default to 0.\n", 
audio_codec->id);&#xA;          }&#xA;          audio_st->codec = avcodec_alloc_context3(audio_codec);&#xA;          audio_st->codec->codec_tag = tag;&#xA;       }&#xA;   }&#xA;    /* Add the audio and video streams using the default format codecs&#xA;     * and initialize the codecs. */&#xA;    /*&#xA;    if (fmt->video_codec != AV_CODEC_ID_NONE) {&#xA;        video_st = add_stream(oc, &amp;video_codec, fmt->video_codec);&#xA;    }&#xA;    if (fmt->audio_codec != AV_CODEC_ID_NONE) {&#xA;        audio_st = add_stream(oc, &amp;audio_codec, fmt->audio_codec);&#xA;    }&#xA;    */&#xA;&#xA;    /* Now that all the parameters are set, we can open the audio and&#xA;     * video codecs and allocate the necessary encode buffers. */&#xA;    if (video_st)&#xA;        open_video(oc, video_codec, video_st);&#xA;    if (audio_st)&#xA;        open_audio(oc, audio_codec, audio_st);&#xA;&#xA;    av_dump_format(oc, 0, filename, 1);&#xA;&#xA;    /* open the output file, if needed */&#xA;    if (!(fmt->flags &amp; AVFMT_NOFILE)) {&#xA;        ret = avio_open(&amp;oc->pb, filename, AVIO_FLAG_WRITE);&#xA;        if (ret &lt; 0) {&#xA;            fprintf(stderr, "Could not open &#x27;%s&#x27;: %s\n", filename,&#xA;                    av_err2str(ret));&#xA;            return 1;&#xA;        }&#xA;    }&#xA;&#xA;    /* Write the stream header, if any. */&#xA;    ret = avformat_write_header(oc, NULL);&#xA;    if (ret &lt; 0) {&#xA;        fprintf(stderr, "Error occurred when opening output file: %s\n",&#xA;                av_err2str(ret));&#xA;        return 1;&#xA;    }&#xA;&#xA;    if (frame)&#xA;        frame->pts = 0;&#xA;    for (;;) {&#xA;        /* Compute current audio and video time. */&#xA;        if (audio_st)&#xA;            audio_pts = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;&#xA;        else&#xA;            audio_pts = 0.0;&#xA;&#xA;        if (video_st)&#xA;            video_pts = (double)video_st->pts.val * video_st->time_base.num /&#xA;                        video_st->time_base.den;&#xA;        else&#xA;            video_pts = 0.0;&#xA;&#xA;        if ((!audio_st || audio_pts >= STREAM_DURATION) &amp;&amp;&#xA;            (!video_st || video_pts >= STREAM_DURATION))&#xA;            break;&#xA;&#xA;        /* write interleaved audio and video frames */&#xA;        if (!video_st || (video_st &amp;&amp; audio_st &amp;&amp; audio_pts &lt; video_pts)) {&#xA;            write_audio_frame(oc, audio_st);&#xA;        } else {&#xA;            write_video_frame(oc, video_st);&#xA;            frame->pts &#x2B;= av_rescale_q(1, video_st->codec->time_base, video_st->time_base);&#xA;        }&#xA;    }&#xA;&#xA;    /* Write the trailer, if any. The trailer must be written before you&#xA;     * close the CodecContexts open when you wrote the header; otherwise&#xA;     * av_write_trailer() may try to use memory that was freed on&#xA;     * av_codec_close(). */&#xA;    av_write_trailer(oc);&#xA;&#xA;    /* Close each codec. */&#xA;    if (video_st)&#xA;        close_video(oc, video_st);&#xA;    if (audio_st)&#xA;        close_audio(oc, audio_st);&#xA;&#xA;    if (!(fmt->flags &amp; AVFMT_NOFILE))&#xA;        /* Close the output file. */&#xA;        avio_close(oc->pb);&#xA;&#xA;    /* free the stream */&#xA;    avformat_free_context(oc);&#xA;&#xA;    return 0;&#xA;}&#xA;

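    The "Could not allocate picture: Invalid argument" line appears to come from the avpicture_alloc() call in open_video(): at that point video_st->codec has just been replaced by a fresh context from avcodec_alloc_context3(), whose width, height and pixel format are still unset, so the allocation is rejected. More fundamentally, for a plain codec copy (which is what the ffmpeg command above does) no decoder, encoder, dummy picture or synthetic audio is needed at all; reading packets and rewriting their timestamps is enough. Below is a minimal stream-copy sketch in the spirit of FFmpeg's doc/examples/remuxing.c, written against the current API; the log above shows Lavf54, where AVCodecParameters does not exist yet, so on that old build avcodec_copy_context() would play the same role. Error handling is omitted:

    // remux.cpp: copy every input stream to an rtp_mpegts output, no re-encoding
    extern "C" {
    #include <libavformat/avformat.h>
    }

    int main(int argc, char **argv) {
        if (argc != 3) return 1;                       // usage: remux <input.ts> <rtp://host:port>
        const char *in_name  = argv[1];
        const char *out_name = argv[2];

        AVFormatContext *ic = nullptr, *oc = nullptr;
        avformat_open_input(&ic, in_name, nullptr, nullptr);
        avformat_find_stream_info(ic, nullptr);
        avformat_alloc_output_context2(&oc, nullptr, "rtp_mpegts", out_name);

        // one output stream per input stream, parameters copied verbatim
        for (unsigned i = 0; i < ic->nb_streams; i++) {
            AVStream *out = avformat_new_stream(oc, nullptr);
            avcodec_parameters_copy(out->codecpar, ic->streams[i]->codecpar);
            out->codecpar->codec_tag = 0;              // let the muxer pick its own tag
        }

        if (!(oc->oformat->flags & AVFMT_NOFILE))
            avio_open(&oc->pb, out_name, AVIO_FLAG_WRITE);
        avformat_write_header(oc, nullptr);

        AVPacket *pkt = av_packet_alloc();
        while (av_read_frame(ic, pkt) >= 0) {
            AVStream *in_st  = ic->streams[pkt->stream_index];
            AVStream *out_st = oc->streams[pkt->stream_index];
            av_packet_rescale_ts(pkt, in_st->time_base, out_st->time_base);
            pkt->pos = -1;
            av_interleaved_write_frame(oc, pkt);       // takes ownership of pkt contents
        }
        av_packet_free(&pkt);

        av_write_trailer(oc);
        avformat_close_input(&ic);
        if (!(oc->oformat->flags & AVFMT_NOFILE))
            avio_closep(&oc->pb);
        avformat_free_context(oc);
        return 0;
    }

    Note that, unlike ffmpeg -re, this pushes packets as fast as they can be read; for a live receiver some pacing according to the packet timestamps would still be needed.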

  • c++ - using FFmpeg encode and UDP with a Webcam

    14 March, by Rendres

    I'm trying to get frames from a Webcam using OpenCV, encode them with FFmpeg and send them using UDP.

    I did a similar project before that, instead of sending the packets over UDP, saved them to a video file.

    My code is:


    #include &#xA;#include &#xA;#include &#xA;#include &#xA;&#xA;extern "C" {&#xA;#include <libavcodec></libavcodec>avcodec.h>&#xA;#include <libavformat></libavformat>avformat.h>&#xA;#include <libavutil></libavutil>opt.h>&#xA;#include <libavutil></libavutil>imgutils.h>&#xA;#include <libavutil></libavutil>mathematics.h>&#xA;#include <libswscale></libswscale>swscale.h>&#xA;#include <libswresample></libswresample>swresample.h>&#xA;}&#xA;&#xA;#include <opencv2></opencv2>opencv.hpp>&#xA;&#xA;using namespace std;&#xA;using namespace cv;&#xA;&#xA;#define WIDTH 640&#xA;#define HEIGHT 480&#xA;#define CODEC_ID AV_CODEC_ID_H264&#xA;#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P&#xA;&#xA;static AVFrame *frame, *pFrameBGR;&#xA;&#xA;int main(int argc, char **argv)&#xA;{&#xA;VideoCapture cap(0);&#xA;const char *url = "udp://127.0.0.1:8080";&#xA;&#xA;AVFormatContext *formatContext;&#xA;AVStream *stream;&#xA;AVCodec *codec;&#xA;AVCodecContext *c;&#xA;AVDictionary *opts = NULL;&#xA;&#xA;int ret, got_packet;&#xA;&#xA;if (!cap.isOpened())&#xA;{&#xA;    return -1;&#xA;}&#xA;&#xA;av_log_set_level(AV_LOG_TRACE);&#xA;&#xA;av_register_all();&#xA;avformat_network_init();&#xA;&#xA;avformat_alloc_output_context2(&amp;formatContext, NULL, "h264", url);&#xA;if (!formatContext)&#xA;{&#xA;    av_log(NULL, AV_LOG_FATAL, "Could not allocate an output context for &#x27;%s&#x27;.\n", url);&#xA;}&#xA;&#xA;codec = avcodec_find_encoder(CODEC_ID);&#xA;if (!codec)&#xA;{&#xA;    av_log(NULL, AV_LOG_ERROR, "Could not find encoder.\n");&#xA;}&#xA;&#xA;stream = avformat_new_stream(formatContext, codec);&#xA;&#xA;c = avcodec_alloc_context3(codec);&#xA;&#xA;stream->id = formatContext->nb_streams - 1;&#xA;stream->time_base = (AVRational){1, 25};&#xA;&#xA;c->codec_id = CODEC_ID;&#xA;c->bit_rate = 400000;&#xA;c->width = WIDTH;&#xA;c->height = HEIGHT;&#xA;c->time_base = stream->time_base;&#xA;c->gop_size = 12;&#xA;c->pix_fmt = STREAM_PIX_FMT;&#xA;&#xA;if (formatContext->flags &amp; AVFMT_GLOBALHEADER)&#xA;    c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;&#xA;&#xA;av_dict_set(&amp;opts, "preset", "fast", 0);&#xA;&#xA;av_dict_set(&amp;opts, "tune", "zerolatency", 0);&#xA;&#xA;ret = avcodec_open2(c, codec, NULL);&#xA;if (ret &lt; 0)&#xA;{&#xA;    av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");&#xA;}&#xA;&#xA;pFrameBGR = av_frame_alloc();&#xA;if (!pFrameBGR)&#xA;{&#xA;    av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");&#xA;}&#xA;&#xA;frame = av_frame_alloc();&#xA;if (!frame)&#xA;{&#xA;    av_log(NULL, AV_LOG_ERROR, "Could not allocate video frame.\n");&#xA;}&#xA;&#xA;frame->format = c->pix_fmt;&#xA;frame->width = c->width;&#xA;frame->height = c->height;&#xA;&#xA;ret = avcodec_parameters_from_context(stream->codecpar, c);&#xA;if (ret &lt; 0)&#xA;{&#xA;    av_log(NULL, AV_LOG_ERROR, "Could not open video codec.\n");&#xA;}&#xA;&#xA;av_dump_format(formatContext, 0, url, 1);&#xA;&#xA;ret = avformat_write_header(formatContext, NULL);&#xA;if (ret != 0)&#xA;{&#xA;    av_log(NULL, AV_LOG_ERROR, "Failed to connect to &#x27;%s&#x27;.\n", url);&#xA;}&#xA;&#xA;Mat image(Size(HEIGHT, WIDTH), CV_8UC3);&#xA;SwsContext *swsctx = sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_BGR24, WIDTH, HEIGHT, AV_PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);&#xA;int frame_pts = 0;&#xA;&#xA;while (1)&#xA;{&#xA;    cap >> image;&#xA;&#xA;    int numBytesYUV = av_image_get_buffer_size(STREAM_PIX_FMT, WIDTH, HEIGHT, 1);&#xA;    uint8_t *bufferYUV = (uint8_t *)av_malloc(numBytesYUV * sizeof(uint8_t));&#xA;&#xA;    avpicture_fill((AVPicture *)pFrameBGR, 
image.data, AV_PIX_FMT_BGR24, WIDTH, HEIGHT);&#xA;    avpicture_fill((AVPicture *)frame, bufferYUV, STREAM_PIX_FMT, WIDTH, HEIGHT);&#xA;&#xA;    sws_scale(swsctx, (uint8_t const *const *)pFrameBGR->data, pFrameBGR->linesize, 0, HEIGHT, frame->data, frame->linesize);&#xA;&#xA;    AVPacket pkt = {0};&#xA;    av_init_packet(&amp;pkt);&#xA;&#xA;    frame->pts = frame_pts;&#xA;&#xA;    ret = avcodec_encode_video2(c, &amp;pkt, frame, &amp;got_packet);&#xA;    if (ret &lt; 0)&#xA;    {&#xA;        av_log(NULL, AV_LOG_ERROR, "Error encoding frame\n");&#xA;    }&#xA;&#xA;    if (got_packet)&#xA;    {&#xA;        pkt.pts = av_rescale_q_rnd(pkt.pts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));&#xA;        pkt.dts = av_rescale_q_rnd(pkt.dts, c->time_base, stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));&#xA;        pkt.duration = av_rescale_q(pkt.duration, c->time_base, stream->time_base);&#xA;        pkt.stream_index = stream->index;&#xA;&#xA;        return av_interleaved_write_frame(formatContext, &amp;pkt);&#xA;&#xA;        cout &lt;&lt; "Seguro que si" &lt;&lt; endl;&#xA;    }&#xA;    frame_pts&#x2B;&#x2B;;&#xA;}&#xA;&#xA;avcodec_free_context(&amp;c);&#xA;av_frame_free(&amp;frame);&#xA;avformat_free_context(formatContext);&#xA;&#xA;return 0;&#xA;}&#xA;

    The code compiles, but it crashes with a segmentation fault in av_interleaved_write_frame(). I've tried several implementations and several codecs (in this case I'm using libopenh264, but mpeg2video gives the same segmentation fault). I also tried av_write_frame(), but it fails the same way.

    As I said before, I just want to grab frames from a webcam connected via USB, encode them to H.264 and send the packets over UDP to another PC.

    My console log when I run the executable is:

    [100%] Built target display
    [OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::SetOption():ENCODER_OPTION_TRACE_CALLBACK callback = 0x7f0c302a87c0.
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:CWelsH264SVCEncoder::InitEncoder(), openh264 codec version = 5a5c4f1
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:iUsageType = 0,iPicWidth= 640;iPicHeight= 480;iTargetBitrate= 400000;iMaxBitrate= 400000;iRCMode= 0;iPaddingFlag= 0;iTemporalLayerNum= 1;iSpatialLayerNum= 1;fFrameRate= 25.000000f;uiIntraPeriod= 12;eSpsPpsIdStrategy = 0;bPrefixNalAddingCtrl = 0;bSimulcastAVC=0;bEnableDenoise= 0;bEnableBackgroundDetection= 1;bEnableSceneChangeDetect = 1;bEnableAdaptiveQuant= 1;bEnableFrameSkip= 0;bEnableLongTermReference= 0;iLtrMarkPeriod= 30, bIsLosslessLink=0;iComplexityMode = 0;iNumRefFrame = 1;iEntropyCodingModeFlag = 0;uiMaxNalSize = 0;iLTRRefNum = 0;iMultipleThreadIdc = 1;iLoopFilterDisableIdc = 0 (offset(alpha/beta): 0,0;iComplexityMode = 0,iMaxQp = 51;iMinQp = 0)
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:sSpatialLayers[0]: .iVideoWidth= 640; .iVideoHeight= 480; .fFrameRate= 25.000000f; .iSpatialBitrate= 400000; .iMaxSpatialBitrate= 400000; .sSliceArgument.uiSliceMode= 1; .sSliceArgument.iSliceNum= 0; .sSliceArgument.uiSliceSizeConstraint= 1500;uiProfileIdc = 66;uiLevelIdc = 41
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:SliceArgumentValidationFixedSliceMode(), unsupported setting with Resolution and uiSliceNum combination under RC on! So uiSliceNum is changed to 6!
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:Setting MaxSpatialBitrate (400000) the same at SpatialBitrate (400000) will make the actual bit rate lower than SpatialBitrate
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:bEnableFrameSkip = 0,bitrate can't be controlled for RC_QUALITY_MODE,RC_BITRATE_MODE and RC_TIMESTAMP_MODE without enabling skip frame.
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Warning:Change QP Range from(0,51) to (12,42)
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WELS CPU features/capacities (0x4007fe3f) detected: HTT: Y, MMX: Y, MMXEX: Y, SSE: Y, SSE2: Y, SSE3: Y, SSSE3: Y, SSE4.1: Y, SSE4.2: Y, AVX: Y, FMA: Y, X87-FPU: Y, 3DNOW: N, 3DNOWEX: N, ALTIVEC: N, CMOV: Y, MOVBE: Y, AES: Y, NUMBER OF LOGIC PROCESSORS ON CHIP: 8, CPU CACHE LINE SIZE (BYTES): 64
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt() exit, overall memory usage: 4542878 bytes
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Info:WelsInitEncoderExt(), pCtx= 0x0x245a400.
    Output #0, h264, to 'udp://192.168.100.39:8080':
    Stream #0:0, 0, 1/25: Video: h264 (libopenh264), 1 reference frame, yuv420p, 640x480 (0x0), 0/1, q=2-31, 400 kb/s, 25 tbn
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:RcUpdateIntraComplexity iFrameDqBits = 385808,iQStep= 2016,iIntraCmplx = 777788928
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:[Rc]Layer 0: Frame timestamp = 0, Frame type = 2, encoding_qp = 30, average qp = 30, max qp = 33, min qp = 27, index = 0, iTid = 0, used = 385808, bitsperframe = 16000, target = 64000, remainingbits = -257808, skipbuffersize = 200000
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerNum = 2,iFrameSize = 48252
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 0,iNalType = 0,iNalCount = 2, first Nal Length=18,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
    [libopenh264 @ 0x244aa00] [OpenH264] this = 0x0x244b4f0, Debug:WelsEncoderEncodeExt() OutputInfo iLayerId = 1,iNalType = 1,iNalCount = 6, first Nal Length=6057,uiSpatialId = 0,uiTemporalId = 0,iSubSeqId = 0
    [libopenh264 @ 0x244aa00] 6 slices
    ./scriptBuild.sh: line 20: 10625 Segmentation fault      (core dumped) ./display

    As you can see, FFmpeg uses libopenh264 and configures it correctly. However, no matter what I try, it always ends with the same segmentation fault...

    I've used commands like this:

    ffmpeg -s 640x480 -f video4linux2 -i /dev/video0 -r 30 -vcodec libopenh264 -an -f h264 udp://127.0.0.1:8080

    And it works perfectly, but I need to process the frames before sending them. That's why I'm trying to use the libraries.

    My FFmpeg version is:

    ffmpeg version 3.3.6 Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
    configuration: --disable-yasm --enable-shared --enable-libopenh264 --cc='gcc -fPIC'
    libavutil      55. 58.100 / 55. 58.100
    libavcodec     57. 89.100 / 57. 89.100
    libavformat    57. 71.100 / 57. 71.100
    libavdevice    57.  6.100 / 57.  6.100
    libavfilter     6. 82.100 /  6. 82.100
    libswscale      4.  6.100 /  4.  6.100
    libswresample   2.  7.100 /  2.  7.100

    I tried to get more information about the error using gdb, but it didn't give me any debugging info.

    How can I solve this problem? I don't know what else I can try...

    Thank you!
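
    For what it's worth, two things in the listing stand out, although without running the code this is only a guess at the crash. The Mat is declared as Mat image(Size(HEIGHT, WIDTH), CV_8UC3), which swaps the arguments (cv::Size takes the width first); cap >> image reallocates it anyway, but if the camera does not actually deliver 640x480 frames, the avpicture_fill()/sws_scale() calls will be reading a different geometry than they are told. And return av_interleaved_write_frame(formatContext, &pkt); leaves main() as soon as the first packet is produced, so the trailer is never written and the line after it is unreachable. A small helper along these lines (the names are mine, not from the original code) keeps the write inside the loop and reports the error instead of returning from main():

    // write_packet.cpp: hypothetical helper, sketching how each encoded packet
    // could be timestamp-rescaled and handed to the muxer inside the capture loop
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    }

    static int write_packet(AVFormatContext *fmt_ctx, AVCodecContext *enc,
                            AVStream *st, AVPacket *pkt)
    {
        // convert timestamps from the encoder time base to the (possibly
        // rewritten) stream time base chosen by avformat_write_header()
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;

        // av_interleaved_write_frame() takes ownership of the packet contents;
        // hand the result back to the caller instead of returning from main()
        return av_interleaved_write_frame(fmt_ctx, pkt);
    }

    Inside the loop this would be called as ret = write_packet(formatContext, c, stream, &pkt); with a check on ret, and av_write_trailer(formatContext) would only run once the loop has finished.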
