Advanced search

Media (2)

Keyword: - Tags - /documentation

Other articles (49)

  • Installation in farm mode

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge since SPIP’s usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the sources of MediaSPIP in the standalone version.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

On other sites (7880)

  • php ffmpeg thumbnail size won't work

    26 September 2017, by Rtra

    In my code I am trying to generate a fixed-size thumbnail from locally hosted videos using ffmpeg, but when I put my $default_size variable in the command it won’t generate thumbnails. I don’t know what I am doing wrong.

    Here is my PHP code:

    <?php
    $default_size = '320x240';
    $result_generator = shell_exec("ffmpeg -i $video -deinterlace -an -ss $half -t $half -r 1 -y -vcodec mjpeg -f mjpeg $default_size $thumbnail 2>&1");
    if (!$result_generator) {
        throw new Exception('Error creating video thumbnail');
    }
    print $result_generator . "\n\n";
    ?>
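
    A hedged guess at the failure: ffmpeg expects the frame size to be introduced by its -s output option, so a bare 320x240 argument is parsed as an extra output file name rather than a size. A minimal correction keeping the original variables ($video, $half and $thumbnail are assumed to be defined elsewhere):

    <?php
    $default_size = '320x240';
    // Prefix the size with -s; without the flag, "320x240" is treated as an output URL.
    $result_generator = shell_exec("ffmpeg -i $video -deinterlace -an -ss $half -t $half -r 1 -y -vcodec mjpeg -f mjpeg -s $default_size $thumbnail 2>&1");
    ?>
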
  • How to encode images to h264 stream using ffmpeg c api ? [on hold]

    8 September 2017, by Tarhan

    Edit:
    Is it possible to create a series of H264 packets (complete NALs with the 0x00 0x00 0x00 0x01 start code) with FFmpeg, entirely in memory, without using libx264 directly to encode the frames?
    If so, how do I set up FFmpeg’s format context so that I can use the AVPacket data without writing to a file?

    As described below (I’m sorry for the long explanation), when I use only a codec context and encode frames using avcodec_send_frame/avcodec_receive_packet, FFmpeg produces odd output: at least the first packet(s) contain a header that looks like an FFmpeg logo message (hex dump provided below).
    Is it possible to receive packets that contain only NALs? I think the FFmpeg binary produces the expected output when I encode into a file with the .h264 extension. So, bottom line, I need to set up the format context and codec context to reproduce the FFmpeg binary’s behaviour, but in memory only.
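
    A hedged reading of the hex dump further down: it is consistent with a normal Annex B stream rather than corruption. It starts with an SPS NAL (first byte 0x67, type 7), then a PPS NAL (0x68, type 8), then an SEI NAL (type 6) in which x264 embeds its version and options string; decoders skip that SEI, so the packets already are complete NALs. A small self-contained sketch (illustrative, not from the original post) that lists the NAL types in a buffer:

    #include <cstdint>
    #include <cstdio>

    // Print the type of every Annex B NAL unit in a buffer. A NAL starts with
    // a 00 00 00 01 or 00 00 01 start code; its type is the low 5 bits of the
    // byte that follows (7 = SPS, 8 = PPS, 6 = SEI, 5/1 = coded slices).
    static void dumpNalTypes(const uint8_t* data, size_t size)
    {
        for (size_t i = 0; i + 3 < size; ++i) {
            bool sc4 = data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 0 && data[i + 3] == 1;
            bool sc3 = data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1;
            if (sc4 || sc3) {
                size_t header = i + (sc4 ? 4 : 3);
                if (header < size)
                    std::printf("offset %zu: NAL type %u\n", header, data[header] & 0x1Fu);
                i = header; // resume scanning after this start code
            }
        }
    }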

    Original long explanation:
    I have several devices that I cannot modify and for which I do not have the complete source code.
    The devices receive a UDP stream in a custom format. Some UDP frames contain H264 video inside a kind of header wrapper.
    After I unwrap the packets I have a list of complete H264 NALs (all video payload packets start with 0x00 0x00 0x00 0x01).
    My test decoder, similar to the one in the devices, looks like this:
    Init

    H264Parser::H264Parser(IMatConsumer& receiver) :
       _receiver(receiver)
    {
       av_register_all();
       _codec = avcodec_find_decoder(AV_CODEC_ID_H264);
       _codecContext = avcodec_alloc_context3(_codec);
       _codecContext->refcounted_frames = 0;
       _codecContext->bit_rate = 0;
       _codecContext->flags |= CODEC_FLAG_INPUT_PRESERVED | CODEC_FLAG_LOW_DELAY | CODEC_FLAG_LOOP_FILTER;
       if (_codec->capabilities & CODEC_CAP_TRUNCATED)
           _codecContext->flags |= CODEC_FLAG_TRUNCATED;
       _codecContext->flags2 |= CODEC_FLAG2_CHUNKS | CODEC_FLAG2_NO_OUTPUT | CODEC_FLAG2_FAST;
       _codecContext->flags2 |= CODEC_FLAG2_DROP_FRAME_TIMECODE | CODEC_FLAG2_IGNORE_CROP | CODEC_FLAG2_SHOW_ALL;
       _codecContext->pix_fmt = AV_PIX_FMT_YUV420P;
       _codecContext->field_order = AV_FIELD_UNKNOWN;
       _codecContext->request_sample_fmt = AV_SAMPLE_FMT_NONE;
       _codecContext->workaround_bugs = FF_BUG_AUTODETECT;
       _codecContext->strict_std_compliance = FF_COMPLIANCE_NORMAL;
       _codecContext->error_concealment = FF_EC_DEBLOCK;
       _codecContext->idct_algo = FF_IDCT_AUTO;
       _codecContext->thread_count = 0;
       _codecContext->thread_type = FF_THREAD_FRAME;
       _codecContext->thread_safe_callbacks = 0;
       _codecContext->skip_loop_filter = AVDISCARD_DEFAULT;
       _codecContext->skip_idct = AVDISCARD_DEFAULT;
       _codecContext->skip_frame = AVDISCARD_DEFAULT;
       _codecContext->pkt_timebase.num = 1;
       _codecContext->pkt_timebase.den = -1;
       if (avcodec_open2(_codecContext, _codec, nullptr) != 0) {
           L_ERROR("Could not open codec");
       }
       L_INFO("H264 codec opened succesfully");
       _frame = av_frame_alloc();
       if (_frame == nullptr) {
           L_ERROR("Could not allocate single frame");
       }
       _rgbFrame = av_frame_alloc();
       int frameBytesCount = avpicture_get_size(AV_PIX_FMT_BGR24, INITIAL_PICTURE_WIDTH, INITIAL_PICTURE_HEIGHT);
       _buffer = (uint8_t*)av_malloc(frameBytesCount);
       avpicture_fill((AVPicture*)_rgbFrame, _buffer, AV_PIX_FMT_BGR24, INITIAL_PICTURE_WIDTH, INITIAL_PICTURE_HEIGHT);
       _packet.dts = AV_NOPTS_VALUE;
       _packet.stream_index = 0;
       _packet.flags = 0;
       _packet.side_data = nullptr;
       _packet.side_data_elems = 0;
       _packet.duration = 0;
       _packet.pos = -1;
       _packet.convergence_duration = AV_NOPTS_VALUE;
       if (avpicture_alloc(&_rgbPicture, AV_PIX_FMT_BGR24, INITIAL_PICTURE_WIDTH, INITIAL_PICTURE_HEIGHT) != 0) {
           L_ERROR("Could not allocate RGB picture");
       }
       _width = INITIAL_PICTURE_WIDTH;
       _height = INITIAL_PICTURE_HEIGHT;
       _convertContext = sws_getContext(INITIAL_PICTURE_WIDTH, INITIAL_PICTURE_HEIGHT, AV_PIX_FMT_YUV420P,
           INITIAL_PICTURE_WIDTH, INITIAL_PICTURE_HEIGHT, AV_PIX_FMT_BGR24,
           SWS_BILINEAR, nullptr, nullptr, nullptr);
       if (_convertContext == nullptr) {
           L_ERROR("Faild to initialize SWS convert context");
       }
       _skipBad = false;
       _initialized = true;
    }

    Decoding the NALs received from the unwrapper:

    void H264Parser::handle(const uint8_t * nalUnit, int size)
    {
       static int packetIndex = 0;
       bool result = false;
       if (!_initialized)
           return;
       _packet.buf = nullptr;
       _packet.pts = packetIndex;
       _packet.data = (uint8_t*)nalUnit;
       _packet.size = size;
       int frameFinished = 0;
       int length = avcodec_decode_video2(_codecContext, _frame, &frameFinished, &_packet);
       if (_skipBad) {
           L_ERROR("We should not skip bad frames");
       }
       int width = 0;
       int height = 0;
       if (((_frame->pict_type == AV_PICTURE_TYPE_I) ||
           (_frame->pict_type == AV_PICTURE_TYPE_P) ||
           (_frame->pict_type == AV_PICTURE_TYPE_B)) &&
           (length > 0) && (frameFinished > 0)) {
           L_DEBUG("Found picture type: %d", _frame->pict_type);
           if ((_codecContext->width != _width) && (_codecContext->height != _height)) {
               if (_convertContext != nullptr) {
                   sws_freeContext(_convertContext);
                   _convertContext = nullptr;
               }
               _convertContext = sws_getContext(_codecContext->width, _codecContext->height, AV_PIX_FMT_YUV420P,
                   _codecContext->width, _codecContext->height, AV_PIX_FMT_BGR24,
                   SWS_BILINEAR, nullptr, nullptr, nullptr);
               if (_convertContext == nullptr) {
                   L_ERROR("Could not create SWS convert context for new width and height");
                   return;
               }
               avpicture_free(&_rgbPicture);

               if (avpicture_alloc(&_rgbPicture, AV_PIX_FMT_BGR24, _codecContext->width, _codecContext->height) != 0) {
                   L_ERROR("Could not allocate picture for new width and height");
               }
               _width = _codecContext->width;
               _height = _codecContext->height;
           }

           if (sws_scale(_convertContext, _frame->data, _frame->linesize, 0, _codecContext->height, _rgbPicture.data, _rgbPicture.linesize) == _codecContext->height) {
               width = _codecContext->width;
               height = _codecContext->height;
               cv::Mat mat(height, width, CV_8UC3, _rgbPicture.data[0], _rgbPicture.linesize[0]);
               _receiver.onImage(mat);
           }
       }
    }

    It works and decodes images correctly from the existing encoding devices.
    P.S.: There is a small issue where FFmpeg prints the warning "[h264 @ 00000000024ad860] data partitioning is not implemented." to the console, but I suppose that is a problem with the encoding devices.

    Here is the question part.
    I need to create another encoding device whose settings are compatible with the decoding described above.
    In tutorials and other Stack Overflow questions, people mostly need to write the H264 stream to a file or directly to UDP without custom wrapping.
    I need to create the NAL packets in memory.

    Can someone provide correct code for initializing and encoding a series of images into a series of complete NAL packets?

    I’ve tried to create the encoder using the following code:
    Init

    H264Encoder::H264Encoder(int width, int height, int fpsRationalHigh, int fpsRationalLow) :
       _frameCounter(0),
       _output("video_encoded.h264", std::ios::binary)
    {
       av_register_all();
       avcodec_register_all();
       _codec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if (!_codec) {
           L_ERROR("Could not find H264 encoder");
           throw std::runtime_error("Could not find H264 encoder");
       }

       _codecContext = avcodec_alloc_context3(_codec);
       if (!_codecContext) {
           L_ERROR("Cound not open codec context for H264 encoder");
           throw std::runtime_error("Cound not open codec context for H264 encoder");
       }

       _codecContext->width = width;
       _codecContext->height = height;
       _codecContext->time_base = AVRational{ fpsRationalLow, fpsRationalHigh };
       _codecContext->framerate = AVRational{ fpsRationalHigh, fpsRationalLow };
       _codecContext->bit_rate = BIT_RATE;
       _codecContext->bit_rate_tolerance = 0;
       _codecContext->rc_max_rate = 0;
       _codecContext->gop_size = GOP_SIZE;
       _codecContext->flags |= CODEC_FLAG_LOOP_FILTER;
       // _codecContext->refcounted_frames = 0;
       av_opt_set(_codecContext->priv_data, "preset", "fast", 0);
       av_opt_set(_codecContext->priv_data, "tune", "zerolatency", 0);
       av_opt_set(_codecContext->priv_data, "vprofile", "baseline", 0);

       _codecContext->max_b_frames = 1;
       _codecContext->pix_fmt = AV_PIX_FMT_YUV420P;
       //_codecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

       if (avcodec_open2(_codecContext, _codec, nullptr) != 0) {
           L_ERROR("Could not open codec");
           throw std::runtime_error("Could not open codec");
       }
       L_INFO("H264 codec opened succesfully");
       _frame = av_frame_alloc();
       if (_frame == nullptr) {
           L_ERROR("Could not allocate single frame");
       }
       _frame->format = _codecContext->pix_fmt;
       _frame->width = width;
       _frame->height = height;

       av_frame_get_buffer(_frame, 1);

       _rgbFrame = av_frame_alloc();
       _rgbFrame->format = AV_PIX_FMT_BGR24;
       _rgbFrame->width = width;
       _rgbFrame->height = height;
       av_frame_get_buffer(_rgbFrame, 1);

       _width = width;
       _height = height;
       _convertContext = sws_getContext(width, height, AV_PIX_FMT_BGR24,
           width, height, AV_PIX_FMT_YUV420P,
           SWS_BILINEAR, nullptr, nullptr, nullptr);
       if (_convertContext == nullptr) {
           L_ERROR("Faild to initialize SWS convert context");
       }
       _skipBad = false;
       _initialized = true;
    }

    Encoding

    void H264Encoder::processImage(const cv::Mat & mat)
    {
       av_init_packet(&_packet);
       _packet.data = nullptr;
       _packet.size = 0;
       _packet.pts = _frameCounter;
       _rgbFrame->data[0] = (uint8_t*)mat.data;

       // av_image_fill_arrays(_rgbFrame->data, _rgbFrame->linesize, _buffer, (AVPixelFormat)_rgbFrame->format, _rgbFrame->width, _rgbFrame->height, 1);
       if (sws_scale(_convertContext, _rgbFrame->data, _rgbFrame->linesize, 0, _codecContext->height, _frame->data, _frame->linesize) == _codecContext->height) {
           L_DEBUG("BGR frame converted to YUV");
       }
       else {
           L_DEBUG("Could not convert BGR frame to YUV");
       }


       int retSendFrame = avcodec_send_frame(_codecContext, _frame);
       int retReceivePacket = avcodec_receive_packet(_codecContext, &_packet);
       if (retSendFrame == AVERROR(EAGAIN)) {
           L_DEBUG("Buffers are filled");
       }
       if (retReceivePacket == 0) {
           _packet.pts = _frameCounter;
           L_DEBUG("Got frame (Frame index: %4d)", _frameCounter);
           _output.write((char*)_packet.data, _packet.size);
           av_packet_unref(&_packet);
       }
       else {
           L_DEBUG("No frame at moment. (Frame index: %4d)", _frameCounter);
       }
       _frameCounter++;
    }

    But this code produces incorrect output. FFmpeg itself cannot read the resulting video_encoded.h264 file.
    It outputs errors like these:

    [h264 @ 00000000006da940] decode_slice_header error
    [h264 @ 00000000006da940] non-existing PPS 0 referenced
    [h264 @ 00000000006da940] decode_slice_header error
    [h264 @ 00000000006da940] non-existing PPS 0 referenced
    [h264 @ 00000000006da940] decode_slice_header error
    [h264 @ 00000000006da940] non-existing PPS 0 referenced
    [h264 @ 00000000006da940] decode_slice_header error
    [h264 @ 00000000006da940] non-existing PPS 0 referenced
    [h264 @ 00000000006da940] decode_slice_header error
    [h264 @ 00000000006da940] non-existing PPS 0 referenced
    [h264 @ 00000000006da940] decode_slice_header error
    [h264 @ 00000000006da940] non-existing PPS 0 referenced
    [h264 @ 00000000006da940] decode_slice_header error
    [h264 @ 00000000006da940] no frame!
    [h264 @ 00000000006da940] non-existing PPS 0 referenced
    [h264 @ 00000000026196a0] decoding for stream 0 failed
    [h264 @ 00000000026196a0] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
    Consider increasing the value for the 'analyzeduration' and 'probesize' options
    Input #0, h264, from 'video_encoded.h264':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264, none, 25 fps, 25 tbr, 1200k tbn, 50 tbc
    [mp4 @ 00000000026d00a0] dimensions not set
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
       Last message repeated 1 times

    When I opened the file in a hex editor I found the FFmpeg logo text (why??) at the beginning. It looks like this:

    Offset      0  1  2  3  4  5  6  7   8  9  A  B  C  D  E  F

    00000000   00 00 00 01 67 64 00 1F  AC EC 05 00 5B A1 00 00       gd  ¬ì  [¡  
    00000010   03 00 01 00 00 03 00 32  8F 18 31 38 00 00 00 01          2  18    
    00000020   68 EA EC B2 2C 00 00 01  06 05 FF FF BE DC 45 E9   hêì²,     ÿÿ¾ÜEé
    00000030   BD E6 D9 48 B7 96 2C D8  20 D9 23 EE EF 78 32 36   ½æÙH·–,Ø Ù#îïx26
    00000040   34 20 2D 20 63 6F 72 65  20 31 35 32 20 72 32 38   4 - core 152 r28
    00000050   35 31 20 62 61 32 34 38  39 39 20 2D 20 48 2E 32   51 ba24899 - H.2
    00000060   36 34 2F 4D 50 45 47 2D  34 20 41 56 43 20 63 6F   64/MPEG-4 AVC co
    00000070   64 65 63 20 2D 20 43 6F  70 79 6C 65 66 74 20 32   dec - Copyleft 2
    00000080   30 30 33 2D 32 30 31 37  20 2D 20 68 74 74 70 3A   003-2017 - http:
    00000090   2F 2F 77 77 77 2E 76 69  64 65 6F 6C 61 6E 2E 6F   //www.videolan.o
    000000A0   72 67 2F 78 32 36 34 2E  68 74 6D 6C 20 2D 20 6F   rg/x264.html - o
    000000B0   70 74 69 6F 6E 73 3A 20  63 61 62 61 63 3D 31 20   ptions: cabac=1
    000000C0   72 65 66 3D 32 20 64 65  62 6C 6F 63 6B 3D 31 3A   ref=2 deblock=1:
    000000D0   30 3A 30 20 61 6E 61 6C  79 73 65 3D 30 78 33 3A   0:0 analyse=0x3:
    000000E0   30 78 31 31 33 20 6D 65  3D 68 65 78 20 73 75 62   0x113 me=hex sub
    000000F0   6D 65 3D 36 20 70 73 79  3D 31 20 70 73 79 5F 72   me=6 psy=1 psy_r
    00000100   64 3D 31 2E 30 30 3A 30  2E 30 30 20 6D 69 78 65   d=1.00:0.00 mixe
    00000110   64 5F 72 65 66 3D 31 20  6D 65 5F 72 61 6E 67 65   d_ref=1 me_range
    00000120   3D 31 36 20 63 68 72 6F  6D 61 5F 6D 65 3D 31 20   =16 chroma_me=1
    00000130   74 72 65 6C 6C 69 73 3D  31 20 38 78 38 64 63 74   trellis=1 8x8dct
    00000140   3D 31 20 63 71 6D 3D 30  20 64 65 61 64 7A 6F 6E   =1 cqm=0 deadzon
    00000150   65 3D 32 31 2C 31 31 20  66 61 73 74 5F 70 73 6B   e=21,11 fast_psk
    00000160   69 70 3D 31 20 63 68 72  6F 6D 61 5F 71 70 5F 6F   ip=1 chroma_qp_o
    00000170   66 66 73 65 74 3D 2D 32  20 74 68 72 65 61 64 73   ffset=-2 threads
    00000180   3D 38 20 6C 6F 6F 6B 61  68 65 61 64 5F 74 68 72   =8 lookahead_thr
    00000190   65 61 64 73 3D 38 20 73  6C 69 63 65 64 5F 74 68   eads=8 sliced_th
    000001A0   72 65 61 64 73 3D 31 20  73 6C 69 63 65 73 3D 38   reads=1 slices=8
    000001B0   20 6E 72 3D 30 20 64 65  63 69 6D 61 74 65 3D 31    nr=0 decimate=1
    000001C0   20 69 6E 74 65 72 6C 61  63 65 64 3D 30 20 62 6C    interlaced=0 bl
    000001D0   75 72 61 79 5F 63 6F 6D  70 61 74 3D 30 20 63 6F   uray_compat=0 co
    000001E0   6E 73 74 72 61 69 6E 65  64 5F 69 6E 74 72 61 3D   nstrained_intra=
    000001F0   30 20 62 66 72 61 6D 65  73 3D 31 20 62 5F 70 79   0 bframes=1 b_py
    00000200   72 61 6D 69 64 3D 30 20  62 5F 61 64 61 70 74 3D   ramid=0 b_adapt=
    00000210   31 20 62 5F 62 69 61 73  3D 30 20 64 69 72 65 63   1 b_bias=0 direc
    00000220   74 3D 31 20 77 65 69 67  68 74 62 3D 31 20 6F 70   t=1 weightb=1 op
    00000230   65 6E 5F 67 6F 70 3D 30  20 77 65 69 67 68 74 70   en_gop=0 weightp
    00000240   3D 31 20 6B 65 79 69 6E  74 3D 35 20 6B 65 79 69   =1 keyint=5 keyi
    00000250   6E 74 5F 6D 69 6E 3D 31  20 73 63 65 6E 65 63 75   nt_min=1 scenecu
    00000260   74 3D 34 30 20 69 6E 74  72 61 5F 72 65 66 72 65   t=40 intra_refre
    00000270   73 68 3D 30 20 72 63 3D  61 62 72 20 6D 62 74 72   sh=0 rc=abr mbtr
    00000280   65 65 3D 30 20 62 69 74  72 61 74 65 3D 31 32 30   ee=0 bitrate=120
    00000290   30 20 72 61 74 65 74 6F  6C 3D 31 2E 30 20 71 63   0 ratetol=1.0 qc
    000002A0   6F 6D 70 3D 30 2E 36 30  20 71 70 6D 69 6E 3D 30   omp=0.60 qpmin=0
    000002B0   20 71 70 6D 61 78 3D 36  39 20 71 70 73 74 65 70    qpmax=69 qpstep
    000002C0   3D 34 20 69 70 5F 72 61  74 69 6F 3D 31 2E 34 30   =4 ip_ratio=1.40
    000002D0   20 70 62 5F 72 61 74 69  6F 3D 31 2E 33 30 20 61    pb_ratio=1.30 a
    000002E0   71 3D 31 3A 31 2E 30 30  00 80 00                  q=1:1.00 €

    I suppose I additionally need to create an AVFormatContext and a stream. But I don’t know how to create one for raw H264 and, most importantly, how to write the output to a memory buffer instead of a file.

    Can someone help me ?
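
    A hedged sketch of what the question asks for, based on FFmpeg 3.x’s send/receive API (function and buffer names are illustrative, not from the original post). Two details matter: with AV_CODEC_FLAG_GLOBAL_HEADER left unset, the x264 wrapper emits SPS/PPS in-band and its packets already carry Annex B start codes; and avcodec_receive_packet must be called in a loop until it returns AVERROR(EAGAIN), otherwise delayed packets are silently lost. No AVFormatContext is required, because for a raw .h264 stream the encoder’s packets are the stream:

    #include <vector>
    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Encode one YUV420P frame and append every resulting Annex B packet
    // (start codes + NALs) to `out`. Pass frame == nullptr once at the end
    // of the stream to flush the encoder's delayed packets.
    static bool encodeToMemory(AVCodecContext* ctx, AVFrame* frame,
                               std::vector<uint8_t>& out)
    {
        if (avcodec_send_frame(ctx, frame) < 0)
            return false;

        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = nullptr;
        pkt.size = 0;

        for (;;) {
            int ret = avcodec_receive_packet(ctx, &pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return true;               // needs more input, or fully drained
            if (ret < 0)
                return false;              // real encoding error
            out.insert(out.end(), pkt.data, pkt.data + pkt.size);
            av_packet_unref(&pkt);
        }
    }

    On this reading, the "logo" in the hex dump is expected output: x264 stores its version and options string in an SEI NAL right after the in-band SPS/PPS, and decoders simply skip it.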

  • FFmpeg mux video use libavformat avcodec but output couldn't be played

    10 August 2017, by tqn

    I’m trying to write an app that takes an input video and crops it to a square video, ignoring the audio stream. Because of the poor performance of invoking the ffmpeg command, I’m trying to use libavcodec and libavformat to do it directly. But the output isn’t playable by any video player and its duration is 0, although I wrote all the frames. Here is my code.

    void convert_video(char* input) {
       AVFormatContext *pFormatCtx = NULL;
       int             i, videoStreamIndex;
       AVCodecContext  *pCodecCtx = NULL;
       AVCodec         *pCodec = NULL;
       AVFrame         *pFrame = NULL;
       AVFrame         *pFrameSquare = NULL;
       AVPacket        packet, outPacket;
       int             frameFinished;
       int             numBytes;
       uint8_t         *buffer = NULL;
       AVCodec         *pEncodec = NULL;
       AVFormatContext *poFormatCxt = NULL;
       MuxOutputStream    videoStream = {0}, audioStream = {0};
       int tar_w, tar_h;

       const enum AVPixelFormat pic_format = AV_PIX_FMT_YUV420P;
       const enum AVCodecID codec_id = AV_CODEC_ID_H264;
       AVDictionary    *optionsDict = NULL;
       char output[50];
       sprintf(output, "%soutput.mp4", ANDROID_SDCARD);

       // Register all formats and codecs
       av_register_all();

       // Open video file
       if(avformat_open_input(&pFormatCtx, input, NULL, NULL)!=0)
           return; // Couldn't open file
       avformat_alloc_output_context2(&poFormatCxt, NULL, NULL, output);

       // Retrieve stream information
       if(avformat_find_stream_info(pFormatCtx, NULL)<0)
           return; // Couldn't find stream information

       // Find the first video stream
       videoStreamIndex=-1;
       for(i=0; i < pFormatCtx->nb_streams; i++)
           if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
               videoStreamIndex=i;
               break;
           }
       if(videoStreamIndex==-1)
           return; // Didn't find a video stream

       // Get a pointer to the codec context for the video stream
       pCodecCtx = pFormatCtx->streams[videoStreamIndex]->codec;
       tar_w = pCodecCtx->width > pCodecCtx->height ? pCodecCtx->height : pCodecCtx->width;
       tar_h = tar_w;

       // Find the decoder for the video stream
       pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
       pEncodec = avcodec_find_encoder(codec_id);

       add_stream_mux(&videoStream, poFormatCxt, &pEncodec, codec_id, tar_w, tar_h);
       videoStream.st[0].time_base = pFormatCtx->streams[videoStreamIndex]->time_base;
       videoStream.st[0].codec->time_base = videoStream.st[0].time_base;
       videoStream.st[0].codec->time_base.den *= videoStream.st[0].codec->ticks_per_frame;
    //    add_stream(&audioStream, poFormatCxt, &)
       open_video(poFormatCxt, pEncodec, &videoStream, optionsDict);
       int ret = avio_open(&poFormatCxt->pb, output, AVIO_FLAG_WRITE);

       // Open codec
       if(avcodec_open2(pCodecCtx, pCodec, &optionsDict) < 0)
           return; // Could not open codec

       ret = avformat_write_header(poFormatCxt, &optionsDict);
       if (ret != 0) {
           ANDROID_LOG("Died");
       }

       // Allocate video frame
       pFrame=av_frame_alloc();
       pFrame->format = videoStream.st->codec->pix_fmt;
       pFrame->width = pCodecCtx->width;
       pFrame->height = pCodecCtx->height;
       av_frame_get_buffer(pFrame, 32);

       // Allocate an AVFrame structure
       pFrameSquare=av_frame_alloc();
       if(pFrameSquare==NULL)
           return;

       // Determine required buffer size and allocate buffer
       numBytes=avpicture_get_size(pic_format, tar_w,
                                   tar_h);
       buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

       // Assign appropriate parts of buffer to image planes in pFrameSquare
       // Note that pFrameSquare is an AVFrame, but AVFrame is a superset
       // of AVPicture
       ret = avpicture_fill((AVPicture *)pFrameSquare, buffer, pic_format,
                      tar_w, tar_h);
       if (ret < 0) {
           ANDROID_LOG("Can't fill picture");
           return;
       }

       // Read frames and save first five frames to disk
       i=0;
       ret = av_read_frame(pFormatCtx, &packet);
       while(ret >= 0) {
           // Is this a packet from the video stream?
           if(packet.stream_index == videoStreamIndex) {
               // Decode video frame
    //            av_packet_rescale_ts(&packet, videoStream.st->time_base, videoStream.st->codec->time_base);
               avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished,
                                     &packet);
    //            while (!frameFinished) {
    //                avcodec_decode_video2(videoStream.st->codec, pFrame, &frameFinished, NULL);
    //            }
               ANDROID_LOG("Trying to decode frame %d with result %d", i, frameFinished);
               ret = av_picture_crop((AVPicture*) pFrameSquare, (AVPicture*) pFrame, pic_format, 0, 0);
               if (ret < 0) {
                   ANDROID_LOG("Can't crop image");
               }
    //            av_frame_get_best_effort_timestamp(pFrame);
    //            av_rescale_q()

               if(frameFinished) {

                   // Save the frame to disk
                   av_init_packet(&outPacket);
    //                av_packet_rescale_ts(&outPacket, videoStream.st->codec->time_base, videoStream.st->time_base);
                   pFrameSquare->width = tar_w;
                   pFrameSquare->height = tar_h;
                   pFrameSquare->format = pic_format;
                   pFrameSquare->pts = ++videoStream.next_pts;
                   ret = avcodec_encode_video2(videoStream.st->codec, &outPacket, pFrameSquare, &frameFinished);

    //                int count = 0;
    //                while (!frameFinished && count++ < 6) {
    //                    ret = avcodec_encode_video2(videoStream.st->codec, &outPacket, NULL, &frameFinished);
    //                }
                   if (frameFinished) {
                       ANDROID_LOG("Writing frame %d", i);
                       outPacket.stream_index = videoStreamIndex;
                       av_interleaved_write_frame(poFormatCxt, &outPacket);
                   }
                   av_free_packet(&outPacket);
               }
           }

           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
           ret = av_read_frame(pFormatCtx, &packet);
       }

       ret = av_write_trailer(poFormatCxt);
       if (ret < 0) {
           ANDROID_LOG("Couldn't write trailer");
       } else {
           ANDROID_LOG("Video convert finished");
       }

       // Free the RGB image
       av_free(buffer);
       av_free(pFrameSquare);

       // Free the YUV frame
       av_free(pFrame);

       // Close the codec
       avcodec_close(pCodecCtx);
    //    avcodec_close(pEncodecCtx);

       // Close the video file
       avformat_close_input(&pFormatCtx);

       return;
    }
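
    Two hedged observations on the write loop above (not from the original post): the packet returned by avcodec_encode_video2 still carries timestamps in the encoder’s time base, and outPacket.stream_index is set to the input stream’s index rather than the output stream’s. The usual pattern from FFmpeg’s muxing example, applied just before av_interleaved_write_frame, would be:

    // Rescale packet timestamps from the encoder time base to the output
    // stream time base, and address the packet to the output stream.
    av_packet_rescale_ts(&outPacket, videoStream.st->codec->time_base,
                         videoStream.st->time_base);
    outPacket.stream_index = videoStream.st->index;

    Without the rescale, the muxer stores pts values that never line up with the stream’s time base, which is consistent with players reporting a duration of 0.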

    Helper

    #define STREAM_DURATION   10.0
    #define STREAM_FRAME_RATE 25 /* 25 images/s */
    #define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */

    /* Add an output stream. */
    void add_stream_mux(MuxOutputStream *ost, AVFormatContext *oc,
                          AVCodec **codec,
                      enum AVCodecID codec_id, int width, int height)
    {
       AVCodecContext *codecCtx;
       int i;
       /* find the encoder */
       *codec = avcodec_find_encoder(codec_id);
       if (!(*codec)) {
           fprintf(stderr, "Could not find encoder for '%s'\n",
                   avcodec_get_name(codec_id));
           exit(1);
       }
       ost->st = avformat_new_stream(oc, *codec);
       if (!ost->st) {
           fprintf(stderr, "Could not allocate stream\n");
           exit(1);
       }
       ost->st->id = oc->nb_streams-1;
       codecCtx = ost->st->codec;
       switch ((*codec)->type) {
           case AVMEDIA_TYPE_AUDIO:
               codecCtx->sample_fmt  = (*codec)->sample_fmts ?
                                (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
               codecCtx->bit_rate    = 64000;
               codecCtx->sample_rate = 44100;
               if ((*codec)->supported_samplerates) {
                   codecCtx->sample_rate = (*codec)->supported_samplerates[0];
                   for (i = 0; (*codec)->supported_samplerates[i]; i++) {
                       if ((*codec)->supported_samplerates[i] == 44100)
                           codecCtx->sample_rate = 44100;
                   }
               }
               codecCtx->channels        = av_get_channel_layout_nb_channels(codecCtx->channel_layout);
               codecCtx->channel_layout = AV_CH_LAYOUT_STEREO;
               if ((*codec)->channel_layouts) {
                   codecCtx->channel_layout = (*codec)->channel_layouts[0];
                   for (i = 0; (*codec)->channel_layouts[i]; i++) {
                       if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
                           codecCtx->channel_layout = AV_CH_LAYOUT_STEREO;
                   }
               }
               codecCtx->channels        = av_get_channel_layout_nb_channels(codecCtx->channel_layout);
               ost->st->time_base = (AVRational){ 1, codecCtx->sample_rate };
               break;
           case AVMEDIA_TYPE_VIDEO:
               codecCtx->codec_id = codec_id;
               codecCtx->bit_rate = 400000;
               /* Resolution must be a multiple of two. */
               codecCtx->width    = width;
               codecCtx->height   = height;
               /* timebase: This is the fundamental unit of time (in seconds) in terms
                * of which frame timestamps are represented. For fixed-fps content,
                * timebase should be 1/framerate and timestamp increments should be
                * identical to 1. */
               ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
               codecCtx->time_base       = ost->st->time_base;
               codecCtx->gop_size      = 12; /* emit one intra frame every twelve frames at most */
               codecCtx->pix_fmt       = STREAM_PIX_FMT;
               if (codecCtx->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
                   /* just for testing, we also add B frames */
                   codecCtx->max_b_frames = 2;
               }
               if (codecCtx->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
                   /* Needed to avoid using macroblocks in which some coeffs overflow.
                    * This does not happen with normal video, it just happens here as
                    * the motion of the chroma plane does not match the luma plane. */
                   codecCtx->mb_decision = 2;
               }
               break;
           default:
               break;
       }
       /* Some formats want stream headers to be separate. */
       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           codecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }
    static void open_video(AVFormatContext *oc, AVCodec *codec, MuxOutputStream *ost, AVDictionary *opt_arg)
    {
       int ret;
       AVCodecContext *c = ost->st->codec;
       AVDictionary *opt = NULL;
       av_dict_copy(&opt, opt_arg, 0);
       /* open the codec */
       ret = avcodec_open2(c, codec, &opt);
       av_dict_free(&opt);
       if (ret < 0) {
           fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
           exit(1);
       }
       /* allocate and init a re-usable frame */
       ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
       if (!ost->frame) {
           fprintf(stderr, "Could not allocate video frame\n");
           exit(1);
       }
       /* If the output format is not YUV420P, then a temporary YUV420P
        * picture is needed too. It is then converted to the required
        * output format. */
       ost->tmp_frame = NULL;
       if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
           ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
           if (!ost->tmp_frame) {
               fprintf(stderr, "Could not allocate temporary picture\n");
               exit(1);
           }
       }
    }

    I’m afraid that I set the wrong pts or time_base on the frames. Also, when decoding or encoding, I see that some of the first frames are lost: frameFinished is 0. I saw a post saying that I have to flush the decoder with avcodec_decode_video2(videoStream.st->codec, pFrame, &frameFinished, NULL), but after trying a few times frameFinished is still 0, and avcodec_encode_video2(videoStream.st->codec, &outPacket, NULL, &frameFinished) throws an error on the next encoded frame. So how can I get all the frames that were lost? I’m using FFmpeg version 3.0.1.
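
    A hedged sketch of the drain pattern for this API generation (FFmpeg 3.0), reusing the post’s own types and names: the decoder is flushed by feeding it an empty packet, and the encoder by passing a NULL frame, each until its got-output flag stays 0. This runs once, after the av_read_frame loop ends, which also explains the error described above: once flushed, an encoder cannot accept further frames.

    // Flush both codecs after the demuxing loop, then finalize the file.
    static void flush_and_finish(AVFormatContext *poFormatCxt, AVCodecContext *pCodecCtx,
                                 AVFrame *pFrame, MuxOutputStream *videoStream)
    {
        AVPacket flushPkt, outPacket;
        int frameFinished;

        // Drain the decoder: an empty packet signals end of stream.
        av_init_packet(&flushPkt);
        flushPkt.data = NULL;
        flushPkt.size = 0;
        do {
            if (avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &flushPkt) < 0)
                break;
            if (frameFinished) {
                /* crop, encode and write the frame exactly as in the main loop */
            }
        } while (frameFinished);

        // Drain the encoder: a NULL frame flushes the delayed pictures.
        do {
            av_init_packet(&outPacket);
            outPacket.data = NULL;
            outPacket.size = 0;
            if (avcodec_encode_video2(videoStream->st->codec, &outPacket, NULL, &frameFinished) < 0)
                break;
            if (frameFinished) {
                av_packet_rescale_ts(&outPacket, videoStream->st->codec->time_base,
                                     videoStream->st->time_base);
                outPacket.stream_index = videoStream->st->index;
                av_interleaved_write_frame(poFormatCxt, &outPacket);
                av_free_packet(&outPacket);
            }
        } while (frameFinished);

        av_write_trailer(poFormatCxt);
    }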