Advanced search

Media (1)

Keyword: - Tags -/3GS

Other articles (12)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, editing and deleting notes. By default, only the site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Accepted formats

    28 January 2010, by

    The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    First of all, we (...)
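
    For illustration, a minimal C sketch (assuming FFmpeg 4.0 or newer, where av_codec_iterate() and av_muxer_iterate() are the iteration entry points) that lists the same information through the libavcodec/libavformat APIs instead of the command line:

     #include <stdio.h>
     #include <libavcodec/avcodec.h>
     #include <libavformat/avformat.h>

     int main(void)
     {
         /* list every codec known to this libavcodec build */
         const AVCodec *codec = NULL;
         void *iter = NULL;
         while ((codec = av_codec_iterate(&iter)))
             printf("codec : %s (%s)\n", codec->name,
                    codec->long_name ? codec->long_name : "");

         /* list every muxer (output format) known to this libavformat build */
         const AVOutputFormat *ofmt = NULL;
         void *oiter = NULL;
         while ((ofmt = av_muxer_iterate(&oiter)))
             printf("format: %s (%s)\n", ofmt->name,
                    ofmt->long_name ? ofmt->long_name : "");

         return 0;
     }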

  • Videos

    21 April 2011, by

    Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 <video> tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser natively supports only certain video formats.
    Its main advantage, on the other hand, is that it benefits from native video support in browsers and therefore makes it possible to do without Flash and (...)

On other sites (4083)

  • video generation using ffmpeg

    10 May 2012, by Jack

    I am trying to generate a video from a set of images using the ffmpeg library. Using the following code, I am able to generate the video on the simulator, but when I run it on a device it produces a strange green effect in the video. I followed the "Encoding images to video with ffmpeg" link. Can anybody help me out? A code snippet would be really appreciated.

    //here is the code..

     - (void)createVideoFromImages
    {
     NSString *fileName2 = [Utilities documentsPath:[NSString stringWithFormat:@"test.mov"]];
     NSLog(@"filename: %@",fileName2);

     //Register all formats and codecs

     AVCodec *codec;

     //avcodec_register_all();
     //avdevice_register_all();

     av_register_all();


     AVCodecContext *c= NULL;
     int i, out_size, size, outbuf_size;
     FILE *f;
     AVFrame *picture;
     uint8_t *outbuf;

     printf("Video encoding\n");

     /* find the mpeg video encoder */
     codec = avcodec_find_encoder(CODEC_ID_MPEG2VIDEO);
     if (!codec)
     {
       fprintf(stderr, "codec not found\n");
       exit(1);
     }

     c= avcodec_alloc_context();
     picture= avcodec_alloc_frame();

     /* put sample parameters */
     c->bit_rate = 400000;
     /* resolution must be a multiple of two */
     c->width = 256;
     c->height = 256;//258;

     /* frames per second */
     c->time_base= (AVRational){1,25};
     c->gop_size = 10; /* emit one intra frame every ten frames */
     c->max_b_frames=1;
     c->pix_fmt =  PIX_FMT_YUV420P;//PIX_FMT_YUV420P;

     /* open it */
     if (avcodec_open(c, codec) < 0) {
       fprintf(stderr, "could not open codec\n");
       exit(1);
     }

     const char* filename_cstr = [fileName2 cStringUsingEncoding:NSUTF8StringEncoding];
     f = fopen(filename_cstr, "wb");
     if (!f) {
       fprintf(stderr, "could not open %s\n", filename_cstr);
       exit(1);
     }

     /* alloc image and output buffer */
     outbuf_size = 100000;
     outbuf = malloc(outbuf_size);
     size = c->width * c->height;

     #pragma mark -
     AVFrame* outpic = avcodec_alloc_frame();
     int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

     NSLog(@"bytes: %d",nbytes);

     //create buffer for the output image
     uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

     for(i=100;i<104;i++)
     {
       fflush(stdout);

       int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
       NSLog(@"numBytes: %d",numBytes);
       uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

       UIImage *image;

       image = [UIImage imageWithContentsOfFile:[Utilities bundlePath:[NSString stringWithFormat:@"%d.png",i]]];


       /*
       if(i>=98)//for video images
       {
          NSLog(@"i: %d",i);
          image = [UIImage imageWithContentsOfFile:[Utilities documentsPath:[NSString stringWithFormat:@"image0098.png"]]]; ///@"image0098.png"]];
          //[NSString stringWithFormat:@"%d.png", i]];
       }
       else //for custom image
       {
          image = [UIImage imageWithContentsOfFile:[Utilities bundlePath:[NSString stringWithFormat:@"image%04d.png", i]]];
          //[UIImage imageNamed:[NSString stringWithFormat:@"%d.png", i]];//@"image%04d.png",i]];
       }*/

       CGImageRef newCgImage = [image CGImage];

       NSLog(@"No. of Bits per component: %d",CGImageGetBitsPerComponent([image CGImage]));
       NSLog(@"No. of Bits per pixel: %d",CGImageGetBitsPerPixel([image CGImage]));
       NSLog(@"No. of Bytes per row: %d",CGImageGetBytesPerRow([image CGImage]));


       CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
       CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
       buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);  

       struct SwsContext* fooContext;

       avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGBA, c->width, c->height);

       avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

       fooContext= sws_getContext(c->width, c->height,
          PIX_FMT_RGBA,
           c->width, c->height,
           PIX_FMT_YUV420P,
           SWS_FAST_BILINEAR , NULL, NULL, NULL);

    //}

     //perform the conversion

     NSLog(@"linesize: %d", picture->linesize[0]);

     sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);

     // Here is where I try to convert to YUV

     /* encode the image */
     out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
     printf("encoding frame %d (size=%d)\n", i, out_size);
     fwrite(outbuf, 1, out_size, f);

     NSLog(@"%d",sizeof(f));

     free(buffer);
     buffer = NULL;      

     }

     /* get the delayed frames */
     for( ; out_size; i++)
     {
       fflush(stdout);
       out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
       printf("write frame %3d (size=%5d)\n", i, out_size);
       fwrite(outbuf, 1, outbuf_size, f);      
     }

     /* add sequence end code to have a real mpeg file */
     outbuf[0] = 0x00;
     outbuf[1] = 0x00;
     outbuf[2] = 0x01;
     outbuf[3] = 0xb7;
     fwrite(outbuf, 1, 4, f);

     fclose(f);
     free(outbuf);

     avcodec_close(c);
     av_free(c);
     av_free(picture);
     printf("\n");

    }
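
    For comparison, a minimal sketch (assuming the CGImage data really is 32-bit RGBA; the helper name is illustrative) that feeds sws_scale the source pointer together with the stride reported by CGImageGetBytesPerRow(), which can be larger than width * 4:

     /* Sketch only: convert one packed RGBA buffer into an already-allocated
        YUV420P AVFrame, passing the real bytes-per-row as the source stride.
        'dst' is assumed to have valid data[]/linesize[] for YUV420P. */
     static void rgba_to_yuv420p(struct SwsContext *sws,
                                 const uint8_t *rgba, int bytes_per_row,
                                 AVFrame *dst, int height)
     {
         const uint8_t *src_data[4]   = { rgba, NULL, NULL, NULL };
         int            src_stride[4] = { bytes_per_row, 0, 0, 0 };

         sws_scale(sws, (const uint8_t * const *)src_data, src_stride,
                   0, height, dst->data, dst->linesize);
     }
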
  • Converting uint8_t data to AVFrame with FFmpeg

    30 October 2017, by J.Lefebvre

    I am currently working in C++ with the Autodesk 3DStudio Max 2014 SDK (toolset 100) and the Ffmpeg library in Visual Studio 2015, trying to convert a DIB (Device Independent Bitmap) to a uint8_t pointer array and then convert that data to an AVFrame.

    I don't get any errors, but my video is still black and has no metadata
    (no time display, etc).

    I did approximately the same thing with a Visual Studio console application to convert a jpeg image sequence from disk, and that works fine.
    (The only difference is that instead of converting jpeg to AVFrame with the Ffmpeg library, I try to convert raw data to an AVFrame.)

    So I think the problem is either in the conversion from the DIB to the uint8_t data or from the uint8_t data to the AVFrame.
    (The second is more plausible, because I used the SFML library to display a window with my rgb uint8_t* data for debugging and it works fine.)

    I first initialize the ffmpeg library:

    This function is called once at the beginning.

    int Converter::Initialize(AVCodecID codec_id, int width, int height, int fps, const char *filename)
    {
       avcodec_register_all();
       av_register_all();

       AVCodec *codec;
       inputFrame = NULL;
       codecContext = NULL;
       pkt = NULL;
       file = NULL;
       outputFilename = new char[strlen(filename) + 1]();
       *outputFilename = '\0';
       strcpy(outputFilename, filename);

       int ret;

       //Initializing AVCodecContext and getting PixelFormat supported by encoder
       codec = avcodec_find_encoder(codec_id);
       if (!codec)
           return 1;

       AVPixelFormat pixFormat = codec->pix_fmts[0];
       codecContext = avcodec_alloc_context3(codec);
       if (!codecContext)
           return 1;

       codecContext->bit_rate = 400000;
       codecContext->width = width;
       codecContext->height = height;
       codecContext->time_base.num = 1;
       codecContext->time_base.den = fps;
       codecContext->gop_size = 10;
       codecContext->max_b_frames = 1;
       codecContext->pix_fmt = pixFormat;

       if (codec_id == AV_CODEC_ID_H264)
           av_opt_set(codecContext->priv_data, "preset", "slow", 0);

       //Actually opening the encoder
       if (avcodec_open2(codecContext, codec, NULL) < 0)
           return 1;

       file = fopen(outputFilename, "wb");
       if (!file)
           return 1;

       inputFrame = av_frame_alloc();
       inputFrame->format = codecContext->pix_fmt;
       inputFrame->width = codecContext->width;
       inputFrame->height = codecContext->height;

       ret = av_image_alloc(inputFrame->data, inputFrame->linesize, codecContext->width, codecContext->height, codecContext->pix_fmt, 32);

       if (ret < 0)
           return 1;

       return 0;
    }

    Then, for each frame, I get the DIB and convert it to a uint8_t* with this function:

    uint8_t* Util::ToUint8_t(RGBQUAD *data, int width, int height)
    {
       uint8_t* buf = (uint8_t*)data;

       int imageSize = width * height;
       size_t rgbquad_size = sizeof(RGBQUAD);
       size_t total_bytes = imageSize * rgbquad_size;
       uint8_t * pCopyBuffer = new uint8_t[total_bytes];

       for (int x = 0; x < width; x++)
       {
           for (int y = 0; y < height; y++)
           {
               int index = (x + width * y) * rgbquad_size;
               int invertIndex = (x + width* (height - y - 1)) * rgbquad_size;

               //BGRA to RGBA
               pCopyBuffer[index] = buf[invertIndex + 2];
               pCopyBuffer[index + 1] = buf[invertIndex + 1];
               pCopyBuffer[index + 2] = buf[invertIndex];
               pCopyBuffer[index + 3] = 0xFF;
           }
       }

       return pCopyBuffer;
    }

    void GetDIBBuffer(Interface* ip, BITMAPINFO *bmi, uint8_t** outBuffer)
    {
       int size;

       ViewExp& view = ip->GetActiveViewExp();

       view.getGW()->getDIB(NULL, &size);

       bmi = (BITMAPINFO *)malloc(size);
       BITMAPINFOHEADER *bmih = (BITMAPINFOHEADER *)bmi;
       view.getGW()->getDIB(bmi, &size);

       uint8_t * pCopyBuffer = Util::ToUint8_t(bmi->bmiColors, bmih->biWidth, bmih->biHeight);

       *outBuffer = pCopyBuffer;
    }

    This function is used to get the DIB:

    void GetViewportDIB(Interface* ip, BITMAPINFO *bmi, BITMAPINFOHEADER *bmih, BitmapInfo biFile, Bitmap *map)
    {
       int size;

       if (!biFile.Name()[0])
           return;

       ViewExp& view = ip->GetActiveViewExp();

       view.getGW()->getDIB(NULL, &size);

       bmi = (BITMAPINFO *)malloc(size);
       bmih = (BITMAPINFOHEADER *)bmi;

       view.getGW()->getDIB(bmi, &size);

       biFile.SetWidth((WORD)bmih->biWidth);
       biFile.SetHeight((WORD)bmih->biHeight);
       biFile.SetType(BMM_TRUE_32);

       map = TheManager->Create(&biFile);
       map->OpenOutput(&biFile);
       map->FromDib(bmi);
       map->Write(&biFile);
       map->Close(&biFile);
    }

    And then comes the conversion to AVFrame and the video encoding:

    The EncodeFromMem function is called for each frame.

    int Converter::EncodeFromMem(const char *outputDir, int frameNumber, uint8_t* data)
    {
       int ret;

       inputFrame->pts = frameNumber;
       EncodeFrame(data, codecContext, inputFrame, &pkt, file);

       return 0;
    }

    static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
    {
       struct SwsContext *swsCtx = NULL;
       const int in_linesize[1] = { 3 * c->width };// RGB stride
       swsCtx = sws_getCachedContext(swsCtx, c->width, c->height, AV_PIX_FMT_RGB24, c->width, c->height, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
       sws_scale(swsCtx, (const uint8_t * const *)&rgb, in_linesize, 0, c->height, frame->data, frame->linesize);
    }

    static void EncodeFrame(uint8_t *rgb, AVCodecContext *c, AVFrame *frame, AVPacket **pkt, FILE *file)
    {
       int ret, got_output;

       RgbToYuv(rgb, c, frame);

       *pkt = av_packet_alloc();
       av_init_packet(*pkt);
       (*pkt)->data = NULL;
       (*pkt)->size = 0;

       ret = avcodec_encode_video2(c, *pkt, frame, &got_output);
       if (ret < 0)
       {
           fprintf(stderr, "Error encoding frame\n");
           exit(1);
       }
       if (got_output)
       {
           fwrite((*pkt)->data, 1, (*pkt)->size, file);
           av_packet_unref(*pkt);
       }
    }

    To finish, I have a function that writes the packets and frees the memory.
    This function is called once at the end of the time range.

    int Converter::Finalize()
    {
       int ret, got_output;
       uint8_t endcode[] = { 0, 0, 1, 0xb7 };

       /* get the delayed frames */
       do
       {
           fflush(stdout);
           ret = avcodec_encode_video2(codecContext, pkt, NULL, &got_output);
           if (ret < 0)
           {
               fprintf(stderr, "Error encoding frame\n");
               return 1;
           }
           if (got_output)
           {
               fwrite(pkt->data, 1, pkt->size, file);
               av_packet_unref(pkt);
           }
       } while (got_output);

       fwrite(endcode, 1, sizeof(endcode), file);
       fclose(file);

       avcodec_close(codecContext);
       av_free(codecContext);

       av_frame_unref(inputFrame);
       av_frame_free(&inputFrame);
       //av_freep(&inputFrame->data[0]); //Crash

       delete[] outputFilename;
       outputFilename = 0;

       return 0;
    }

    EDIT:

    I modified my RgbToYuv function and created another one to convert the yuv frame back to an rgb one.

    This does not really solve the problem, but it may narrow the problem down to the YuvToRgb conversion.

    This is the result of the conversion from YUV to RGB:

     [YuvToRgb result]: https://img42.com/kHqpt+

    static void YuvToRgb(AVCodecContext *c, AVFrame *frame)
    {
       struct SwsContext *img_convert_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P, c->width, c->height, AV_PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
       AVFrame * rgbPictInfo = av_frame_alloc();
       avpicture_fill((AVPicture*)rgbPictInfo, *(frame)->data, AV_PIX_FMT_RGB24, c->width, c->height);
       sws_scale(img_convert_ctx, frame->data, frame->linesize, 0, c->height, rgbPictInfo->data, rgbPictInfo->linesize);

       Util::DebugWindow(c->width, c->height, rgbPictInfo->data[0]);
    }
    static void RgbToYuv(uint8_t *rgb, AVCodecContext *c, AVFrame *frame)
    {
       AVFrame * rgbPictInfo = av_frame_alloc();
       avpicture_fill((AVPicture*)rgbPictInfo, rgb, AV_PIX_FMT_RGBA, c->width, c->height);

       struct SwsContext *swsCtx = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA, c->width, c->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
       avpicture_fill((AVPicture*)frame, rgb, AV_PIX_FMT_YUV420P, c->width, c->height);    
       sws_scale(swsCtx, rgbPictInfo->data, rgbPictInfo->linesize, 0, c->height, frame->data, frame->linesize);

       YuvToRgb(c, frame);
    }
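
    For reference, a minimal sketch (names are illustrative; it assumes the input buffer is packed RGBA of exactly width*height*4 bytes and that the destination frame already has its YUV420P planes allocated, e.g. via av_image_alloc) of going from a raw buffer to an AVFrame with av_image_fill_arrays and a single sws_scale call:

     #include <libavutil/imgutils.h>   /* av_image_fill_arrays */

     /* Sketch only: wrap a packed RGBA buffer (no copy) and convert it into an
        already-allocated YUV420P AVFrame. Returns 0 on success, <0 on error. */
     static int PackedRgbaToFrame(const uint8_t *rgba, AVCodecContext *c, AVFrame *frame)
     {
         uint8_t *src_data[4];
         int      src_linesize[4];

         /* point src_data/src_linesize at the RGBA buffer */
         int ret = av_image_fill_arrays(src_data, src_linesize, rgba,
                                        AV_PIX_FMT_RGBA, c->width, c->height, 1);
         if (ret < 0)
             return ret;

         struct SwsContext *sws = sws_getContext(c->width, c->height, AV_PIX_FMT_RGBA,
                                                 c->width, c->height, AV_PIX_FMT_YUV420P,
                                                 SWS_BICUBIC, NULL, NULL, NULL);
         if (!sws)
             return -1;

         sws_scale(sws, (const uint8_t * const *)src_data, src_linesize,
                   0, c->height, frame->data, frame->linesize);
         sws_freeContext(sws);
         return 0;
     }
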
  • Video creation with a recent ffmpeg API (2017)

    16 November 2017, by ar2015

    I have started learning how to work with ffmpeg, which suffers from the deprecation of all its tutorials and available examples, such as this one.

    I am looking for code that creates an output video.

    Unfortunately, most of the good examples focus on reading from a file rather than creating one.

    Here, I found a deprecated example and spent a long time fixing its errors until it became like this:

     #include <iostream>
     #include <cstdio>
     #include <cstdarg>
     #include <string>

     extern "C" {
            #include <libavcodec/avcodec.h>
            #include <libavformat/avformat.h>
            #include <libswscale/swscale.h>
            #include <libavformat/avio.h>
            #include <libavutil/opt.h>
     }

    #define WIDTH 800
    #define HEIGHT 480
    #define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * FRAME_RATE))
    #define FRAME_RATE 24
    #define PIXEL_FORMAT AV_PIX_FMT_YUV420P
    #define STREAM_DURATION 5.0 //seconds
    #define BIT_RATE 400000

     #define AV_CODEC_FLAG_GLOBAL_HEADER (1 << 22)
    #define CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_GLOBAL_HEADER
    #define AVFMT_RAWPICTURE 0x0020

    using namespace std;

    static int sws_flags = SWS_BICUBIC;

    AVFrame *picture, *tmp_picture;
    uint8_t *video_outbuf;
    int frame_count, video_outbuf_size;


    /****** IF LINUX ******/
    inline int sprintf_s(char* buffer, size_t sizeOfBuffer, const char* format, ...)
    {
       va_list ap;
       va_start(ap, format);
       int result = vsnprintf(buffer, sizeOfBuffer, format, ap);
       va_end(ap);
       return result;
    }

    /****** IF LINUX ******/
     template <size_t sizeOfBuffer>
     inline int sprintf_s(char (&buffer)[sizeOfBuffer], const char* format, ...)
    {
       va_list ap;
       va_start(ap, format);
       int result = vsnprintf(buffer, sizeOfBuffer, format, ap);
       va_end(ap);
       return result;
    }


    static void closeVideo(AVFormatContext *oc, AVStream *st)
    {
       avcodec_close(st->codec);
       av_free(picture->data[0]);
       av_free(picture);
       if (tmp_picture)
       {
           av_free(tmp_picture->data[0]);
           av_free(tmp_picture);
       }
       av_free(video_outbuf);
    }

    static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
    {
       AVFrame *picture;
       uint8_t *picture_buf;
       int size;

       picture = av_frame_alloc();
       if(!picture)
           return NULL;
       size = avpicture_get_size(pix_fmt, width, height);
       picture_buf = (uint8_t*)(av_malloc(size));
       if (!picture_buf)
       {
           av_free(picture);
           return NULL;
       }
       avpicture_fill((AVPicture *) picture, picture_buf, pix_fmt, WIDTH, HEIGHT);
       return picture;
    }

    static void openVideo(AVFormatContext *oc, AVStream *st)
    {
       AVCodec *codec;
       AVCodecContext *c;

       c = st->codec;
       if(c->idct_algo == AV_CODEC_ID_H264)
           av_opt_set(c->priv_data, "preset", "slow", 0);

       codec = avcodec_find_encoder(c->codec_id);
       if(!codec)
       {
           std::cout << "Codec not found." << std::endl;
           std::cin.get();std::cin.get();exit(1);
       }

       if(codec->id == AV_CODEC_ID_H264)
           av_opt_set(c->priv_data, "preset", "medium", 0);

       if(avcodec_open2(c, codec, NULL) < 0)
       {
           std::cout << "Could not open codec." << std::endl;
           std::cin.get();std::cin.get();exit(1);
       }
       video_outbuf = NULL;
       if(!(oc->oformat->flags & AVFMT_RAWPICTURE))
       {
           video_outbuf_size = 200000;
           video_outbuf = (uint8_t*)(av_malloc(video_outbuf_size));
       }
       picture = alloc_picture(c->pix_fmt, c->width, c->height);
       if(!picture)
       {
           std::cout << "Could not allocate picture" << std::endl;
           std::cin.get();exit(1);
       }
       tmp_picture = NULL;
       if(c->pix_fmt != AV_PIX_FMT_YUV420P)
       {
           tmp_picture = alloc_picture(AV_PIX_FMT_YUV420P, WIDTH, HEIGHT);
           if(!tmp_picture)
           {
               std::cout << " Could not allocate temporary picture" << std::endl;
               std::cin.get();exit(1);
           }
       }
    }


    static AVStream* addVideoStream(AVFormatContext *context, enum AVCodecID codecID)
    {
       AVCodecContext *codec;
       AVStream *stream;
       stream = avformat_new_stream(context, NULL);
       if(!stream)
       {
           std::cout << "Could not alloc stream." << std::endl;
           std::cin.get();exit(1);
       }

       codec = stream->codec;
       codec->codec_id = codecID;
       codec->codec_type = AVMEDIA_TYPE_VIDEO;

       // sample rate
       codec->bit_rate = BIT_RATE;
       // resolution must be a multiple of two
       codec->width = WIDTH;
       codec->height = HEIGHT;
       codec->time_base.den = FRAME_RATE; // stream fps
       codec->time_base.num = 1;
       codec->gop_size = 12; // intra frame every twelve frames at most
       codec->pix_fmt = PIXEL_FORMAT;
       if(codec->codec_id == AV_CODEC_ID_MPEG2VIDEO)
           codec->max_b_frames = 2; // for testing, B frames

       if(codec->codec_id == AV_CODEC_ID_MPEG1VIDEO)
           codec->mb_decision = 2;

       if(context->oformat->flags & AVFMT_GLOBALHEADER)
           codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

       return stream;
    }

    static void fill_yuv_image(AVFrame *pict, int frame_index, int width, int height)
    {
       int x, y, i;
       i = frame_index;

       /* Y */
       for(y=0;y<height;y++) {
           for(x=0;x<width;x++) {
               pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;
           }
       }

       /* Cb and Cr */
       for(y=0;y<height/2;y++) {
           for(x=0;x<width/2;x++) {
               pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
               pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
           }
       }
    }

    static void write_video_frame(AVFormatContext *oc, AVStream *st)
    {
       int out_size, ret;
       AVCodecContext *c;
       static struct SwsContext *img_convert_ctx;
       c = st->codec;

       if(frame_count >= STREAM_NB_FRAMES)
       {

       }
       else
       {
           if(c->pix_fmt != AV_PIX_FMT_YUV420P)
           {
               if(img_convert_ctx == NULL)
               {
                   img_convert_ctx = sws_getContext(WIDTH, HEIGHT, AV_PIX_FMT_YUV420P, WIDTH, HEIGHT,
                                                   c->pix_fmt, sws_flags, NULL, NULL, NULL);
                   if(img_convert_ctx == NULL)
                   {
                       std::cout << "Cannot initialize the conversion context" << std::endl;
                       std::cin.get();exit(1);
                   }
               }
               fill_yuv_image(tmp_picture, frame_count, WIDTH, HEIGHT);
               sws_scale(img_convert_ctx, tmp_picture->data, tmp_picture->linesize, 0, HEIGHT,
                           picture->data, picture->linesize);
           }
           else
           {
               fill_yuv_image(picture, frame_count, WIDTH, HEIGHT);
           }
       }

       if (oc->oformat->flags & AVFMT_RAWPICTURE)
       {
           /* raw video case. The API will change slightly in the near
              futur for that */
           AVPacket pkt;
           av_init_packet(&pkt);

           pkt.flags |= AV_PKT_FLAG_KEY;
           pkt.stream_index= st->index;
           pkt.data= (uint8_t *)picture;
           pkt.size= sizeof(AVPicture);

           ret = av_interleaved_write_frame(oc, &pkt);
       }
       else
       {
           /* encode the image */
           out_size = avcodec_encode_video(c, video_outbuf, video_outbuf_size, picture);
           /* if zero size, it means the image was buffered */
           if (out_size > 0)
           {
               AVPacket pkt;
               av_init_packet(&pkt);

               if (c->coded_frame->pts != AV_NOPTS_VALUE)
                   pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
               if(c->coded_frame->key_frame)
                   pkt.flags |= AV_PKT_FLAG_KEY;
               pkt.stream_index= st->index;
               pkt.data= video_outbuf;
               pkt.size= out_size;
               /* write the compressed frame in the media file */
               ret = av_interleaved_write_frame(oc, &pkt);
           } else {
               ret = 0;
           }
       }
       if (ret != 0) {
           std::cout << "Error while writing video frames" << std::endl;
           std::cin.get();exit(1);
       }
       frame_count++;
    }

    int main ( int argc, char *argv[] )
    {
       const char* filename = "test.h264";
       AVOutputFormat *outputFormat;
       AVFormatContext *context;
       AVCodecContext *codec;
       AVStream *videoStream;
       double videoPTS;

       // init libavcodec, register all codecs and formats
       av_register_all();
       // auto detect the output format from the name
       outputFormat = av_guess_format(NULL, filename, NULL);
       if(!outputFormat)
       {
           std::cout << "Cannot guess output format! Using mpeg!" << std::endl;
           std::cin.get();
           outputFormat = av_guess_format(NULL, "h263" , NULL);
       }
       if(!outputFormat)
       {
           std::cout << "Could not find suitable output format." << std::endl;
           std::cin.get();exit(1);
       }

       context = avformat_alloc_context();
       if(!context)
       {
           std::cout << "Cannot allocate avformat memory." << std::endl;
           std::cin.get();exit(1);
       }
       context->oformat = outputFormat;
       sprintf_s(context->filename, sizeof(context->filename), "%s", filename);
       std::cout << "Is '" << context->filename << "' = '" << filename << "'" << std::endl;


       videoStream = NULL;
       outputFormat->audio_codec = AV_CODEC_ID_NONE;
       videoStream = addVideoStream(context, outputFormat->video_codec);

       /* still needed?
       if(av_set_parameters(context, NULL) < 0)
       {
           std::cout << "Invalid output format parameters." << std::endl;
           exit(0);
       }*/

       av_dump_format(context, 0, filename, 1);

       if(videoStream)
           openVideo(context, videoStream);

       if(!(outputFormat->flags & AVFMT_NOFILE))
       {
           if(avio_open(&context->pb, filename, AVIO_FLAG_READ_WRITE) < 0)
           {
               std::cout << "Could not open " << filename << std::endl;
               std::cin.get();exit(1);
           }
       }

       avformat_write_header(context, 0);

       while(true)
       {
           if(videoStream)
               videoPTS = (double) videoStream->pts.val * videoStream->time_base.num / videoStream->time_base.den;
           else
               videoPTS = 0.;

           if((!videoStream || videoPTS >= STREAM_DURATION))
           {
               break;
           }
           write_video_frame(context, videoStream);
       }
       av_write_trailer(context);
       if(videoStream)
           closeVideo(context, videoStream);
       for(int i = 0; i < context->nb_streams; i++)
       {
           av_freep(&context->streams[i]->codec);
           av_freep(&context->streams[i]);
       }

       if(!(outputFormat->flags & AVFMT_NOFILE))
       {
           avio_close(context->pb);
       }
       av_free(context);
       std::cin.get();
       return 0;
     }

    Compile:

    g++ -I ./FFmpeg/ video.cpp -L fflibs -lavcodec -lavformat

    The code produces two errors:

    video.cpp:249:84: error: ‘avcodec_encode_video’ was not declared in this scope
            out_size = avcodec_encode_video(c, video_outbuf, video_outbuf_size, picture);
                                                                                       ^


    video.cpp: In function ‘int main(int, char**)’:
    video.cpp:342:46: error: ‘AVStream {aka struct AVStream}’ has no member named ‘pts’
                videoPTS = (double) videoStream->pts.val * videoStream->time_base.num / videoStream->time_base.den;
                                                 ^

    and a huge number of deprecation warnings.

    video.cpp: In function ‘void closeVideo(AVFormatContext*, AVStream*)’:
    video.cpp:60:23: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        avcodec_close(st->codec);
                          ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:60:23: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        avcodec_close(st->codec);
                          ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:60:23: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        avcodec_close(st->codec);
                          ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp: In function ‘AVFrame* alloc_picture(AVPixelFormat, int, int)’:
    video.cpp:80:12: warning: ‘int avpicture_get_size(AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
        size = avpicture_get_size(pix_fmt, width, height);
               ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:5228:5: note: declared here
    int avpicture_get_size(enum AVPixelFormat pix_fmt, int width, int height);
        ^
    video.cpp:80:12: warning: ‘int avpicture_get_size(AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
        size = avpicture_get_size(pix_fmt, width, height);
               ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:5228:5: note: declared here
    int avpicture_get_size(enum AVPixelFormat pix_fmt, int width, int height);
        ^
    video.cpp:80:53: warning: ‘int avpicture_get_size(AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
        size = avpicture_get_size(pix_fmt, width, height);
                                                        ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:5228:5: note: declared here
    int avpicture_get_size(enum AVPixelFormat pix_fmt, int width, int height);
        ^
    video.cpp:87:5: warning: ‘int avpicture_fill(AVPicture*, const uint8_t*, AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
        avpicture_fill((AVPicture *) picture, picture_buf, pix_fmt, WIDTH, HEIGHT);
        ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:5213:5: note: declared here
    int avpicture_fill(AVPicture *picture, const uint8_t *ptr,
        ^
    video.cpp:87:5: warning: ‘int avpicture_fill(AVPicture*, const uint8_t*, AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
        avpicture_fill((AVPicture *) picture, picture_buf, pix_fmt, WIDTH, HEIGHT);
        ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:5213:5: note: declared here
    int avpicture_fill(AVPicture *picture, const uint8_t *ptr,
        ^
    video.cpp:87:78: warning: ‘int avpicture_fill(AVPicture*, const uint8_t*, AVPixelFormat, int, int)’ is deprecated [-Wdeprecated-declarations]
        avpicture_fill((AVPicture *) picture, picture_buf, pix_fmt, WIDTH, HEIGHT);
                                                                                 ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:5213:5: note: declared here
    int avpicture_fill(AVPicture *picture, const uint8_t *ptr,
        ^
    video.cpp: In function ‘void openVideo(AVFormatContext*, AVStream*)’:
    video.cpp:96:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        c = st->codec;
                ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:96:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        c = st->codec;
                ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:96:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        c = st->codec;
                ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp: In function ‘AVStream* addVideoStream(AVFormatContext*, AVCodecID)’:
    video.cpp:151:21: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        codec = stream->codec;
                        ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:151:21: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        codec = stream->codec;
                        ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:151:21: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        codec = stream->codec;
                        ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp: In function ‘void write_video_frame(AVFormatContext*, AVStream*)’:
    video.cpp:202:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        c = st->codec;
                ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:202:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        c = st->codec;
                ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:202:13: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
        c = st->codec;
                ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:256:20: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                if (c->coded_frame->pts != AV_NOPTS_VALUE)
                       ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
        attribute_deprecated AVFrame *coded_frame;
                                      ^
    video.cpp:256:20: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                if (c->coded_frame->pts != AV_NOPTS_VALUE)
                       ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
        attribute_deprecated AVFrame *coded_frame;
                                      ^
    video.cpp:256:20: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                if (c->coded_frame->pts != AV_NOPTS_VALUE)
                       ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
        attribute_deprecated AVFrame *coded_frame;
                                      ^
    video.cpp:257:42: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                    pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
                                             ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
        attribute_deprecated AVFrame *coded_frame;
                                      ^
    video.cpp:257:42: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                    pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
                                             ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
        attribute_deprecated AVFrame *coded_frame;
                                      ^
    video.cpp:257:42: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                    pkt.pts= av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
                                             ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
        attribute_deprecated AVFrame *coded_frame;
                                      ^
    video.cpp:258:19: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                if(c->coded_frame->key_frame)
                      ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
        attribute_deprecated AVFrame *coded_frame;
                                      ^
    video.cpp:258:19: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                if(c->coded_frame->key_frame)
                      ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
        attribute_deprecated AVFrame *coded_frame;
                                      ^
    video.cpp:258:19: warning: ‘AVCodecContext::coded_frame’ is deprecated [-Wdeprecated-declarations]
                if(c->coded_frame->key_frame)
                      ^
    In file included from video.cpp:8:0:
    ./FFmpeg/libavcodec/avcodec.h:2723:35: note: declared here
        attribute_deprecated AVFrame *coded_frame;
                                      ^
    video.cpp:357:40: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
         av_freep(&context->streams[i]->codec);
                                           ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:357:40: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
         av_freep(&context->streams[i]->codec);
                                           ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:357:40: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
         av_freep(&context->streams[i]->codec);
                                           ^
    In file included from video.cpp:9:0:
    ./FFmpeg/libavformat/avformat.h:876:21: note: declared here
        AVCodecContext *codec;
                        ^
    video.cpp:337:38: warning: ignoring return value of ‘int avformat_write_header(AVFormatContext*, AVDictionary**)’, declared with attribute warn_unused_result [-Wunused-result]
        avformat_write_header(context, 0);
                                         ^

    I have also defined a few macros to redefine those that have been removed; with a modern ffmpeg API they must be replaced.

    Could someone please help me solve the errors and deprecation warnings so the code complies with the recent ffmpeg API?
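
    For orientation, a minimal sketch of the modern encoding loop that replaces avcodec_encode_video/avcodec_encode_video2 with the send/receive API available since FFmpeg 3.1 (names are illustrative; it assumes an opened AVCodecContext created with avcodec_alloc_context3, its parameters copied to the stream with avcodec_parameters_from_context, and a filled AVFrame; pass a NULL frame to flush the encoder at the end):

     /* Sketch only: push one frame into the encoder and write out every packet
        it produces. 'enc' is an opened AVCodecContext, 'st' the matching
        AVStream, 'oc' the output AVFormatContext. frame == NULL flushes. */
     static int encode_and_write(AVFormatContext *oc, AVCodecContext *enc,
                                 AVStream *st, AVFrame *frame)
     {
         int ret = avcodec_send_frame(enc, frame);
         if (ret < 0)
             return ret;

         AVPacket *pkt = av_packet_alloc();
         if (!pkt)
             return AVERROR(ENOMEM);

         while (ret >= 0) {
             ret = avcodec_receive_packet(enc, pkt);
             if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                 break;                      /* needs more input / fully flushed */
             if (ret < 0)
                 break;                      /* real encoding error */

             /* rescale packet timestamps from codec to stream time base */
             av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
             pkt->stream_index = st->index;

             ret = av_interleaved_write_frame(oc, pkt);  /* unrefs the packet */
         }

         av_packet_free(&pkt);
         return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
     }

    Since AVStream::pts is no longer part of the public API, the duration check in main() can simply count encoded frames against STREAM_DURATION * FRAME_RATE instead of reading the stream's pts.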