Advanced search

Media (2)


Other articles (15)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Other interesting software

    13 April 2011, by

    We don’t claim to be the only ones doing what we do, and certainly don’t claim to be the best at it; we simply try to do it well and to keep improving.
    The following list includes software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to emulate.
    We don’t know these projects and haven’t tried them, but you can take a peek.
    Videopress
    Website : http://videopress.com/
    License : GNU/GPL v2
    Source code : (...)

On other sites (3637)

  • FFmpeg "movflags" > "faststart" causes invalid MP4 file to be written

    22 August 2016, by williamtroup

    I’m setting up the format layout for the video as follows:

    AVOutputFormat* outputFormat = ffmpeg.av_guess_format(null, "output.mp4", null);

    AVCodec* videoCodec = ffmpeg.avcodec_find_encoder(outputFormat->video_codec);

    AVFormatContext* formatContext = ffmpeg.avformat_alloc_context();
    formatContext->oformat = outputFormat;
    formatContext->video_codec_id = videoCodec->id;

    ffmpeg.avformat_new_stream(formatContext, videoCodec);

    This is how I set up the codec context:

    AVCodecContext* codecContext = ffmpeg.avcodec_alloc_context3(videoCodec);
    codecContext->bit_rate = 400000;
    codecContext->width = 1280;
    codecContext->height = 720;
    codecContext->gop_size = 12;
    codecContext->max_b_frames = 1;
    codecContext->pix_fmt = videoCodec->pix_fmts[0];
    codecContext->codec_id = videoCodec->id;
    codecContext->codec_type = videoCodec->type;
    codecContext->time_base = new AVRational
    {
       num = 1,
       den = 30
    };
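
    A time_base of 1/30 means each PTS tick lasts 1/30 of a second, so a frame's presentation time in seconds is pts * num / den. A quick sanity check of that relation in plain Python (illustrative only, not FFmpeg API):

```python
from fractions import Fraction

TIME_BASE = Fraction(1, 30)  # num=1, den=30, matching the codec context above

def pts_to_seconds(pts: int) -> Fraction:
    # presentation time = pts * time_base
    return pts * TIME_BASE

assert pts_to_seconds(0) == 0
assert pts_to_seconds(30) == 1            # frame index 30 -> 1 second
assert pts_to_seconds(45) == Fraction(3, 2)
```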

    I’m using the following code to set up the "movflags" > "faststart" option for the header of the video:

    AVDictionary* options = null;

    int result = ffmpeg.av_dict_set(&options, "movflags", "faststart", 0);

    int writeHeaderResult = ffmpeg.avformat_write_header(formatContext, &options);

    The file is opened and the header is written as follows:

    if ((formatContext->oformat->flags & ffmpeg.AVFMT_NOFILE) == 0)
    {
       int ioOptionResult = ffmpeg.avio_open(&formatContext->pb, "output.mp4", ffmpeg.AVIO_FLAG_WRITE);
    }

    int writeHeaderResult = ffmpeg.avformat_write_header(formatContext, &options);

    After this, I write each video frame as follows:

    outputFrame->pts = frameIndex;

    packet.flags |= ffmpeg.AV_PKT_FLAG_KEY;
    packet.pts = frameIndex;
    packet.dts = frameIndex;

    int encodedFrame = 0;
    int encodeVideoResult = ffmpeg.avcodec_encode_video2(codecContext, &packet, outputFrame, &encodedFrame);

    if (encodedFrame != 0)
    {
       packet.pts = ffmpeg.av_rescale_q(packet.pts, codecContext->time_base, m_videoStream->time_base);
       packet.dts = ffmpeg.av_rescale_q(packet.dts, codecContext->time_base, m_videoStream->time_base);
       packet.stream_index = m_videoStream->index;

       if (codecContext->coded_frame->key_frame > 0)
       {
           packet.flags |= ffmpeg.AV_PKT_FLAG_KEY;
       }

       int writeFrameResult = ffmpeg.av_interleaved_write_frame(formatContext, &packet);
    }
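
    The av_rescale_q calls above convert timestamps between the codec time base and the stream time base. A small sketch of the underlying rational arithmetic in plain Python (illustrative only; rescale_ts is a hypothetical stand-in, and the real av_rescale_q also offers configurable rounding modes):

```python
from fractions import Fraction

def rescale_ts(ts: int, tb_src: Fraction, tb_dst: Fraction) -> int:
    """Rescale a timestamp from tb_src units to tb_dst units,
    mirroring the arithmetic behind av_rescale_q (rounded to nearest)."""
    return round(ts * tb_src / tb_dst)

# A codec time base of 1/30 (30 fps) against a typical MP4
# stream time base of 1/15360:
codec_tb  = Fraction(1, 30)
stream_tb = Fraction(1, 15360)

assert rescale_ts(1, codec_tb, stream_tb) == 512      # one frame = 512 ticks
assert rescale_ts(30, codec_tb, stream_tb) == 15360   # one second of frames
```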

    After that, I write the trailer:

    int writeTrailerResult = ffmpeg.av_write_trailer(formatContext);

    The file finishes writing and everything closes and frees up correctly. However, the MP4 file is unplayable (even VLC can’t play it), and AtomicParsley.exe won’t show any information about the file either.
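
    For background on what faststart changes: an MP4 file is a sequence of top-level boxes (each a 4-byte big-endian size plus a 4-byte type), and "movflags" = "faststart" relocates the moov index box ahead of the mdat media data when the trailer is written. A minimal sketch in plain Python (synthetic bytes and hypothetical helper names, not a real muxer) that walks the top-level boxes; the same walk is a quick way to see whether a broken file is missing its moov entirely:

```python
import struct

def top_level_boxes(data: bytes):
    """Yield (type, size) for each top-level MP4 box."""
    off = 0
    while off + 8 <= len(data):
        size, btype = struct.unpack_from(">I4s", data, off)
        if size < 8:
            break  # malformed box; stop walking
        yield btype.decode("ascii"), size
        off += size

def box(btype: bytes, payload: bytes = b"") -> bytes:
    """Build a synthetic box: big-endian size, then type, then payload."""
    return struct.pack(">I", 8 + len(payload)) + btype + payload

# A "faststart" layout: ftyp, moov, mdat
fast = box(b"ftyp") + box(b"moov", b"\x00" * 16) + box(b"mdat", b"\x00" * 32)
types = [t for t, _ in top_level_boxes(fast)]
assert types.index("moov") < types.index("mdat")

# A file written without a finished trailer may simply lack moov,
# which is unplayable regardless of box order.
broken = box(b"ftyp") + box(b"mdat", b"\x00" * 32)
assert "moov" not in [t for t, _ in top_level_boxes(broken)]
```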

    The DLLs used for the AutoGen library are:

    avcodec-56.dll
    avdevice-56.dll
    avfilter-5.dll
    avformat-56.dll
    avutil-54.dll
    postproc-53.dll
    swresample-1.dll
    swscale-3.dll
  • ffmpeg error "Could not allocate picture : Invalid argument Found Video Stream Found Audio Stream"

    26 October 2020, by Dinkan

    I am trying to write a C program that streams audio and video over the network via RTP, copying both AV codecs into an rtp_mpegts mux, equivalent to this command line:

    ffmpeg -re -i Sample_AV_15min.ts -acodec copy -vcodec copy -f rtp_mpegts rtp://192.168.1.1:5004

    I am using muxing.c (which uses the FFmpeg libraries) as my example; the ffmpeg command-line application itself works fine.

    Stream details:

    Input #0, mpegts, from 'Weather_Nation_10min.ts':
  Duration: 00:10:00.38, start: 41313.400811, bitrate: 2840 kb/s
  Program 1
    Stream #0:0[0x11]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 1440x1080 [SAR 4:3 DAR 16:9], 29.97 fps, 59.94 tbr, 90k tbn, 59.94 tbc
    Stream #0:1[0x14]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 448 kb/s
Output #0, rtp_mpegts, to 'rtp://192.168.1.1:5004':
  Metadata:
    encoder         : Lavf54.63.104
    Stream #0:0: Video: h264 ([27][0][0][0] / 0x001B), yuv420p, 1440x1080 [SAR 4:3 DAR 16:9], q=2-31, 29.97 fps, 90k tbn, 29.97 tbc
    Stream #0:1: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, 448 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)

    However, my application fails with:

    ./my_test_app Sample_AV_15min.ts rtp://192.168.1.1:5004  
[h264 @ 0x800b30] non-existing PPS referenced                                  
[h264 @ 0x800b30] non-existing PPS 0 referenced                        
[h264 @ 0x800b30] decode_slice_header error                            
[h264 @ 0x800b30] no frame! 

[....snipped...]
[h264 @ 0x800b30] non-existing PPS 0 referenced        
[h264 @ 0x800b30] non-existing PPS referenced  
[h264 @ 0x800b30] non-existing PPS 0 referenced  
[h264 @ 0x800b30] decode_slice_header error  
[h264 @ 0x800b30] no frame!  
[h264 @ 0x800b30] mmco: unref short failure  
[h264 @ 0x800b30] mmco: unref short failure

[mpegts @ 0x800020] max_analyze_duration 5000000 reached at 5024000 microseconds  
[mpegts @ 0x800020] PES packet size mismatch could not find codec tag for codec id 
17075200, default to 0.  could not find codec tag for codec id 86019, default to 0.  
Could not allocate picture: Invalid argument  
Found Video Stream Found Audio Stream

    How do I fix this? My complete source code, based on muxing.c:

/**
 * @file
 * libavformat API example.
 *
 * Output a media file in any supported libavformat format.
 * The default codecs are used.
 * @example doc/examples/muxing.c
 */

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>

#include <libavutil/mathematics.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>

/* 5 seconds stream duration */
#define STREAM_DURATION   200.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
#define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */

static int sws_flags = SWS_BICUBIC;

/**************************************************************/
/* audio output */

static float t, tincr, tincr2;
static int16_t *samples;
static int audio_input_frame_size;
#if 0
/* Add an output stream. */
static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
                            enum AVCodecID codec_id)
{
    AVCodecContext *c;
    AVStream *st;

    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id);
    if (!(*codec)) {
        fprintf(stderr, "Could not find encoder for '%s'\n",
                avcodec_get_name(codec_id));
        exit(1);
    }

    st = avformat_new_stream(oc, *codec);
    if (!st) {
        fprintf(stderr, "Could not allocate stream\n");
        exit(1);
    }
    st->id = oc->nb_streams-1;
    c = st->codec;

    switch ((*codec)->type) {
    case AVMEDIA_TYPE_AUDIO:
        st->id = 1;
        c->sample_fmt  = AV_SAMPLE_FMT_S16;
        c->bit_rate    = 64000;
        c->sample_rate = 44100;
        c->channels    = 2;
        break;

    case AVMEDIA_TYPE_VIDEO:
        c->codec_id = codec_id;

        c->bit_rate = 400000;
        /* Resolution must be a multiple of two. */
        c->width    = 352;
        c->height   = 288;
        /* timebase: This is the fundamental unit of time (in seconds) in terms
         * of which frame timestamps are represented. For fixed-fps content,
         * timebase should be 1/framerate and timestamp increments should be
         * identical to 1. */
        c->time_base.den = STREAM_FRAME_RATE;
        c->time_base.num = 1;
        c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
        c->pix_fmt       = STREAM_PIX_FMT;
        if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
            /* just for testing, we also add B frames */
            c->max_b_frames = 2;
        }
        if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
            /* Needed to avoid using macroblocks in which some coeffs overflow.
             * This does not happen with normal video, it just happens here as
             * the motion of the chroma plane does not match the luma plane. */
            c->mb_decision = 2;
        }
        break;

    default:
        break;
    }

    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    return st;
}
#endif
/**************************************************************/
/* audio output */

static float t, tincr, tincr2;
static int16_t *samples;
static int audio_input_frame_size;

static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    AVCodecContext *c;
    int ret;

    c = st->codec;

    /* open it */
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
        exit(1);
    }

    /* init signal generator */
    t     = 0;
    tincr = 2 * M_PI * 110.0 / c->sample_rate;
    /* increment frequency by 110 Hz per second */
    tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

    if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
        audio_input_frame_size = 10000;
    else
        audio_input_frame_size = c->frame_size;
    samples = av_malloc(audio_input_frame_size *
                        av_get_bytes_per_sample(c->sample_fmt) *
                        c->channels);
    if (!samples) {
        fprintf(stderr, "Could not allocate audio samples buffer\n");
        exit(1);
    }
}

/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
 * 'nb_channels' channels. */
static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
{
    int j, i, v;
    int16_t *q;

    q = samples;
    for (j = 0; j < frame_size; j++) {
        v = (int)(sin(t) * 10000);
        for (i = 0; i < nb_channels; i++)
            *q++ = v;
        t     += tincr;
        tincr += tincr2;
    }
}

static void write_audio_frame(AVFormatContext *oc, AVStream *st)
{
    AVCodecContext *c;
    AVPacket pkt = { 0 }; // data and size must be 0;
    AVFrame *frame = avcodec_alloc_frame();
    int got_packet, ret;

    av_init_packet(&pkt);
    c = st->codec;

    get_audio_frame(samples, audio_input_frame_size, c->channels);
    frame->nb_samples = audio_input_frame_size;
    avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                             (uint8_t *)samples,
                             audio_input_frame_size *
                             av_get_bytes_per_sample(c->sample_fmt) *
                             c->channels, 1);

    ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
    if (ret < 0) {
        fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
        exit(1);
    }

    if (!got_packet)
        return;

    pkt.stream_index = st->index;

    /* Write the compressed frame to the media file. */
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret != 0) {
        fprintf(stderr, "Error while writing audio frame: %s\n",
                av_err2str(ret));
        exit(1);
    }
    avcodec_free_frame(&frame);
}

static void close_audio(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(st->codec);

    av_free(samples);
}

/**************************************************************/
/* video output */

static AVFrame *frame;
static AVPicture src_picture, dst_picture;
static int frame_count;

static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    int ret;
    AVCodecContext *c = st->codec;

    /* open the codec */
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
        exit(1);
    }

    /* allocate and init a re-usable frame */
    frame = avcodec_alloc_frame();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }

    /* Allocate the encoded raw picture. */
    ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate picture: %s\n", av_err2str(ret));
        exit(1);
    }

    /* If the output format is not YUV420P, then a temporary YUV420P
     * picture is needed too. It is then converted to the required
     * output format. */
    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
        ret = avpicture_alloc(&src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);
        if (ret < 0) {
            fprintf(stderr, "Could not allocate temporary picture: %s\n",
                    av_err2str(ret));
            exit(1);
        }
    }

    /* copy data and linesize picture pointers to frame */
    *((AVPicture *)frame) = dst_picture;
}

/* Prepare a dummy image. */
static void fill_yuv_image(AVPicture *pict, int frame_index,
                           int width, int height)
{
    int x, y, i;

    i = frame_index;

    /* Y */
    for (y = 0; y < height; y++)
        for (x = 0; x < width; x++)
            pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;

    /* Cb and Cr */
    for (y = 0; y < height / 2; y++) {
        for (x = 0; x < width / 2; x++) {
            pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
            pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
        }
    }
}

static void write_video_frame(AVFormatContext *oc, AVStream *st)
{
    int ret;
    static struct SwsContext *sws_ctx;
    AVCodecContext *c = st->codec;

    if (frame_count >= STREAM_NB_FRAMES) {
        /* No more frames to compress. The codec has a latency of a few
         * frames if using B-frames, so we get the last frames by
         * passing the same picture again. */
    } else {
        if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
            /* as we only generate a YUV420P picture, we must convert it
             * to the codec pixel format if needed */
            if (!sws_ctx) {
                sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
                                         c->width, c->height, c->pix_fmt,
                                         sws_flags, NULL, NULL, NULL);
                if (!sws_ctx) {
                    fprintf(stderr,
                            "Could not initialize the conversion context\n");
                    exit(1);
                }
            }
            fill_yuv_image(&src_picture, frame_count, c->width, c->height);
            sws_scale(sws_ctx,
                      (const uint8_t * const *)src_picture.data, src_picture.linesize,
                      0, c->height, dst_picture.data, dst_picture.linesize);
        } else {
            fill_yuv_image(&dst_picture, frame_count, c->width, c->height);
        }
    }

    if (oc->oformat->flags & AVFMT_RAWPICTURE) {
        /* Raw video case - directly store the picture in the packet */
        AVPacket pkt;
        av_init_packet(&pkt);

        pkt.flags        |= AV_PKT_FLAG_KEY;
        pkt.stream_index  = st->index;
        pkt.data          = dst_picture.data[0];
        pkt.size          = sizeof(AVPicture);

        ret = av_interleaved_write_frame(oc, &pkt);
    } else {
        /* encode the image */
        AVPacket pkt;
        int got_output;

        av_init_packet(&pkt);
        pkt.data = NULL;    // packet data will be allocated by the encoder
        pkt.size = 0;

        ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
            exit(1);
        }

        /* If size is zero, it means the image was buffered. */
        if (got_output) {
            if (c->coded_frame->key_frame)
                pkt.flags |= AV_PKT_FLAG_KEY;

            pkt.stream_index = st->index;

            /* Write the compressed frame to the media file. */
            ret = av_interleaved_write_frame(oc, &pkt);
        } else {
            ret = 0;
        }
    }
    if (ret != 0) {
        fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
        exit(1);
    }
    frame_count++;
}

static void close_video(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(st->codec);
    av_free(src_picture.data[0]);
    av_free(dst_picture.data[0]);
    av_free(frame);
}

/**************************************************************/
/* media file output */

int main(int argc, char **argv)
{
    const char *filename;
    AVOutputFormat *fmt;
    AVFormatContext *oc;
    AVStream *audio_st, *video_st;
    AVCodec *audio_codec, *video_codec;
    double audio_pts, video_pts;
    int ret;
    char errbuf[50];
    int i = 0;

    /* Initialize libavcodec, and register all codecs and formats. */
    av_register_all();

    if (argc != 3) {
        printf("usage: %s input_file out_file|stream\n"
               "API example program to output a media file with libavformat.\n"
               "This program generates a synthetic audio and video stream, encodes and\n"
               "muxes them into a file named output_file.\n"
               "The output format is automatically guessed according to the file extension.\n"
               "Raw images can also be output by using '%%d' in the filename.\n"
               "\n", argv[0]);
        return 1;
    }

    filename = argv[2];

    /* allocate the output media context */
    avformat_alloc_output_context2(&oc, NULL, "rtp_mpegts", filename);
    if (!oc) {
        printf("Could not deduce output format from file extension: using MPEG.\n");
        avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
    }
    if (!oc) {
        return 1;
    }
    fmt = oc->oformat;

    /* Find input stream info. */
    video_st = NULL;
    audio_st = NULL;

    avformat_open_input(&oc, argv[1], 0, 0);

    if ((ret = avformat_find_stream_info(oc, 0)) < 0)
    {
        av_strerror(ret, errbuf, sizeof(errbuf));
        printf("Not Able to find stream info::%s ", errbuf);
        ret = -1;
        return ret;
    }
    for (i = 0; i < oc->nb_streams; i++)
    {
        if (oc->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            AVCodecContext *codec_ctx;
            unsigned int tag = 0;

            printf("Found Video Stream ");
            video_st = oc->streams[i];
            codec_ctx = video_st->codec;
            // m_num_frames = oc->streams[i]->nb_frames;
            video_codec = avcodec_find_decoder(codec_ctx->codec_id);
            ret = avcodec_open2(codec_ctx, video_codec, NULL);
            if (ret < 0)
            {
                av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
                return ret;
            }
            if (av_codec_get_tag2(oc->oformat->codec_tag, video_codec->id, &tag) == 0)
            {
                av_log(NULL, AV_LOG_ERROR, "could not find codec tag for codec id %d, default to 0.\n", audio_codec->id);
            }
            video_st->codec = avcodec_alloc_context3(video_codec);
            video_st->codec->codec_tag = tag;
        }

        if (oc->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
        {
            AVCodecContext *codec_ctx;
            unsigned int tag = 0;

            printf("Found Audio Stream ");
            audio_st = oc->streams[i];
            // aud_dts = audio_st->cur_dts;
            // aud_pts = audio_st->last_IP_pts;
            codec_ctx = audio_st->codec;
            audio_codec = avcodec_find_decoder(codec_ctx->codec_id);
            ret = avcodec_open2(codec_ctx, audio_codec, NULL);
            if (ret < 0)
            {
                av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
                return ret;
            }
            if (av_codec_get_tag2(oc->oformat->codec_tag, audio_codec->id, &tag) == 0)
            {
                av_log(NULL, AV_LOG_ERROR, "could not find codec tag for codec id %d, default to 0.\n", audio_codec->id);
            }
            audio_st->codec = avcodec_alloc_context3(audio_codec);
            audio_st->codec->codec_tag = tag;
        }
    }
    /* Add the audio and video streams using the default format codecs
     * and initialize the codecs. */
    /*
    if (fmt->video_codec != AV_CODEC_ID_NONE) {
        video_st = add_stream(oc, &video_codec, fmt->video_codec);
    }
    if (fmt->audio_codec != AV_CODEC_ID_NONE) {
        audio_st = add_stream(oc, &audio_codec, fmt->audio_codec);
    }
    */

    /* Now that all the parameters are set, we can open the audio and
     * video codecs and allocate the necessary encode buffers. */
    if (video_st)
        open_video(oc, video_codec, video_st);
    if (audio_st)
        open_audio(oc, audio_codec, audio_st);

    av_dump_format(oc, 0, filename, 1);

    /* open the output file, if needed */
    if (!(fmt->flags & AVFMT_NOFILE)) {
        ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
        if (ret < 0) {
            fprintf(stderr, "Could not open '%s': %s\n", filename,
                    av_err2str(ret));
            return 1;
        }
    }

    /* Write the stream header, if any. */
    ret = avformat_write_header(oc, NULL);
    if (ret < 0) {
        fprintf(stderr, "Error occurred when opening output file: %s\n",
                av_err2str(ret));
        return 1;
    }

    if (frame)
        frame->pts = 0;
    for (;;) {
        /* Compute current audio and video time. */
        if (audio_st)
            audio_pts = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
        else
            audio_pts = 0.0;

        if (video_st)
            video_pts = (double)video_st->pts.val * video_st->time_base.num /
                        video_st->time_base.den;
        else
            video_pts = 0.0;

        if ((!audio_st || audio_pts >= STREAM_DURATION) &&
            (!video_st || video_pts >= STREAM_DURATION))
            break;

        /* write interleaved audio and video frames */
        if (!video_st || (video_st && audio_st && audio_pts < video_pts)) {
            write_audio_frame(oc, audio_st);
        } else {
            write_video_frame(oc, video_st);
            frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
        }
    }

    /* Write the trailer, if any. The trailer must be written before you
     * close the CodecContexts open when you wrote the header; otherwise
     * av_write_trailer() may try to use memory that was freed on
     * av_codec_close(). */
    av_write_trailer(oc);

    /* Close each codec. */
    if (video_st)
        close_video(oc, video_st);
    if (audio_st)
        close_audio(oc, audio_st);

    if (!(fmt->flags & AVFMT_NOFILE))
        /* Close the output file. */
        avio_close(oc->pb);

    /* free the stream */
    avformat_free_context(oc);

    return 0;
}

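As an aside on the codec-tag warnings in the log above: a container codec tag is just a four-character code packed little-endian, which is what FFmpeg's MKTAG macro computes. A plain-Python sketch of that arithmetic, matching the `(AC-3 / 0x332D4341)` shown in the stream dump:

```python
def mktag(a: str, b: str, c: str, d: str) -> int:
    """Pack a four-character code little-endian, like FFmpeg's MKTAG."""
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)

assert hex(mktag('A', 'C', '-', '3')) == '0x332d4341'  # the AC-3 tag above
```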
  • How to reparse video with a stable "overall bit rate"? (FFmpeg)

    20 February 2018, by user3360601

    I have the following code:

    #include
    #include
    #include

    extern "C"
    {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>
    #include <libavutil/opt.h>
    #include <libavutil/pixdesc.h>
    }

    static AVFormatContext *ifmt_ctx;
    static AVFormatContext *ofmt_ctx;

    typedef struct StreamContext {
       AVCodecContext *dec_ctx;
       AVCodecContext *enc_ctx;
    } StreamContext;
    static StreamContext *stream_ctx;

    static int open_input_file(const char *filename)
    {
       int ret;
       unsigned int i;

       ifmt_ctx = NULL;
       if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
           return ret;
       }

       if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
           return ret;
       }

       stream_ctx = (StreamContext *) av_mallocz_array(ifmt_ctx->nb_streams, sizeof(*stream_ctx));
       if (!stream_ctx)
           return AVERROR(ENOMEM);

       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           AVStream *stream = ifmt_ctx->streams[i];
           AVCodec *dec = avcodec_find_decoder(stream->codecpar->codec_id);
           AVCodecContext *codec_ctx;
           if (!dec) {
               av_log(NULL, AV_LOG_ERROR, "Failed to find decoder for stream #%u\n", i);
               return AVERROR_DECODER_NOT_FOUND;
           }
           codec_ctx = avcodec_alloc_context3(dec);
           if (!codec_ctx) {
               av_log(NULL, AV_LOG_ERROR, "Failed to allocate the decoder context for stream #%u\n", i);
               return AVERROR(ENOMEM);
           }
           ret = avcodec_parameters_to_context(codec_ctx, stream->codecpar);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Failed to copy decoder parameters to input decoder context "
                   "for stream #%u\n", i);
               return ret;
           }
           /* Reencode video & audio and remux subtitles etc. */
           if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
               || codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
               if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO)
                   codec_ctx->framerate = av_guess_frame_rate(ifmt_ctx, stream, NULL);
               /* Open decoder */
               ret = avcodec_open2(codec_ctx, dec, NULL);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
                   return ret;
               }
           }
           stream_ctx[i].dec_ctx = codec_ctx;
       }

       av_dump_format(ifmt_ctx, 0, filename, 0);
       return 0;
    }

    static int open_output_file(const char *filename)
    {
       AVStream *out_stream;
       AVStream *in_stream;
       AVCodecContext *dec_ctx, *enc_ctx;
       AVCodec *encoder;
       int ret;
       unsigned int i;

       ofmt_ctx = NULL;
       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
       if (!ofmt_ctx) {
           av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
           return AVERROR_UNKNOWN;
       }


       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           out_stream = avformat_new_stream(ofmt_ctx, NULL);
           if (!out_stream) {
               av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
               return AVERROR_UNKNOWN;
           }

           in_stream = ifmt_ctx->streams[i];
           dec_ctx = stream_ctx[i].dec_ctx;

           //ofmt_ctx->bit_rate = ifmt_ctx->bit_rate;
           ofmt_ctx->duration = ifmt_ctx->duration;

           if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
               || dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
               /* in this example, we choose transcoding to same codec */
               encoder = avcodec_find_encoder(dec_ctx->codec_id);
               if (!encoder) {
                   av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
                   return AVERROR_INVALIDDATA;
               }
               enc_ctx = avcodec_alloc_context3(encoder);
               if (!enc_ctx) {
                   av_log(NULL, AV_LOG_FATAL, "Failed to allocate the encoder context\n");
                   return AVERROR(ENOMEM);
               }

               /* In this example, we transcode to same properties (picture size,
               * sample rate etc.). These properties can be changed for output
               * streams easily using filters */
               if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
                   enc_ctx->gop_size = dec_ctx->gop_size;
                   enc_ctx->bit_rate = dec_ctx->bit_rate;
                   enc_ctx->height = dec_ctx->height;
                   enc_ctx->width = dec_ctx->width;
                   enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
                   /* take first format from list of supported formats */
                   if (encoder->pix_fmts)
                       enc_ctx->pix_fmt = encoder->pix_fmts[0];
                   else
                       enc_ctx->pix_fmt = dec_ctx->pix_fmt;
                   /* video time_base can be set to whatever is handy and supported by encoder */
                   enc_ctx->time_base = av_inv_q(dec_ctx->framerate);
                   enc_ctx->framerate = av_guess_frame_rate(ifmt_ctx, in_stream, NULL);
               }
               else {
                   enc_ctx->gop_size = dec_ctx->gop_size;
                   enc_ctx->bit_rate = dec_ctx->bit_rate;
                   enc_ctx->sample_rate = dec_ctx->sample_rate;
                   enc_ctx->channel_layout = dec_ctx->channel_layout;
                   enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
                   /* take first format from list of supported formats */
                   enc_ctx->sample_fmt = encoder->sample_fmts[0];
                   //enc_ctx->time_base = (AVRational){ 1, enc_ctx->sample_rate };
                   enc_ctx->time_base.num = 1;
                   enc_ctx->time_base.den = enc_ctx->sample_rate;

                   enc_ctx->framerate = av_guess_frame_rate(ifmt_ctx, in_stream, NULL);
               }

                /* AVFMT_GLOBALHEADER must be set before the encoder is opened,
                 * otherwise the flag has no effect */
                if (ofmt_ctx->oformat->flags &amp; AVFMT_GLOBALHEADER)
                    enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

                /* Third parameter can be used to pass settings to encoder */
                ret = avcodec_open2(enc_ctx, encoder, NULL);
                if (ret &lt; 0) {
                    av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
                    return ret;
                }
                ret = avcodec_parameters_from_context(out_stream->codecpar, enc_ctx);
                if (ret &lt; 0) {
                    av_log(NULL, AV_LOG_ERROR, "Failed to copy encoder parameters to output stream #%u\n", i);
                    return ret;
                }

               out_stream->time_base = enc_ctx->time_base;
               stream_ctx[i].enc_ctx = enc_ctx;
           }
           else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
               av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
               return AVERROR_INVALIDDATA;
           }
           else {
               /* if this stream must be remuxed */
               ret = avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
               if (ret &lt; 0) {
                   av_log(NULL, AV_LOG_ERROR, "Copying parameters for stream #%u failed\n", i);
                   return ret;
               }
               out_stream->time_base = in_stream->time_base;
           }

       }
       av_dump_format(ofmt_ctx, 0, filename, 1);

       if (!(ofmt_ctx->oformat->flags &amp; AVFMT_NOFILE)) {
           ret = avio_open(&amp;ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
           if (ret &lt; 0) {
               av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
               return ret;
           }
       }

       /* init muxer, write output file header */
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret &lt; 0) {
           av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
           return ret;
       }

       return 0;
    }

    int main(int argc, char **argv)
    {
       int ret;
       AVPacket packet = {0};
       packet.data = NULL;
       packet.size = 0;
       AVFrame *frame = NULL;
       enum AVMediaType type;
       unsigned int stream_index;
       unsigned int i;
       int got_frame;
       int(*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);

       if (argc != 3) {
           av_log(NULL, AV_LOG_ERROR, "Usage: %s &lt;input file> &lt;output file>\n", argv[0]);
           return 1;
       }

       av_register_all();
       avfilter_register_all();

       if ((ret = open_input_file(argv[1])) &lt; 0)
           goto end;
       if ((ret = open_output_file(argv[2])) &lt; 0)
           goto end;

       /* read all packets */
       while (1) {
           if ((ret = av_read_frame(ifmt_ctx, &amp;packet)) &lt; 0)
               break;
           stream_index = packet.stream_index;
           type = ifmt_ctx->streams[packet.stream_index]->codecpar->codec_type;
           av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n", stream_index);

           /* remux this frame without reencoding */
           av_packet_rescale_ts(&amp;packet,
               ifmt_ctx->streams[stream_index]->time_base,
               ofmt_ctx->streams[stream_index]->time_base);

           ret = av_interleaved_write_frame(ofmt_ctx, &amp;packet);
           if (ret &lt; 0)
               goto end;

           av_packet_unref(&amp;packet);
       }

       av_write_trailer(ofmt_ctx);
    end:
       av_packet_unref(&amp;packet);
       av_frame_free(&amp;frame);
       for (i = 0; i &lt; ifmt_ctx->nb_streams; i++) {
           avcodec_free_context(&amp;stream_ctx[i].dec_ctx);
           if (ofmt_ctx &amp;&amp; ofmt_ctx->nb_streams > i &amp;&amp; ofmt_ctx->streams[i] &amp;&amp; stream_ctx[i].enc_ctx)
               avcodec_free_context(&amp;stream_ctx[i].enc_ctx);
       }
       av_free(stream_ctx);
       avformat_close_input(&amp;ifmt_ctx);
       if (ofmt_ctx &amp;&amp; !(ofmt_ctx->oformat->flags &amp; AVFMT_NOFILE))
           avio_closep(&amp;ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);

       return ret ? 1 : 0;
    }

    This is slightly modified code from the official FFmpeg example called transcoding.c.

    I only read packets from one stream and write them to another stream, and it works fine (the attached screenshot is the proof).

    Then I added a condition to main: if a packet contains a video frame, I decode it, then encode it and write it to the other stream. No other actions are performed on the frame.

    My added code is below:

    static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
       int ret;
       int got_frame_local;
       AVPacket enc_pkt;
       int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) = avcodec_encode_video2;

       if (!got_frame)
           got_frame = &amp;got_frame_local;

       av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
       /* encode filtered frame */
       enc_pkt.data = NULL;
       enc_pkt.size = 0;
       av_init_packet(&amp;enc_pkt);
       ret = enc_func(stream_ctx[stream_index].enc_ctx, &amp;enc_pkt,
           filt_frame, got_frame);
       if (ret &lt; 0)
           return ret;
       if (!(*got_frame))
           return 0;

       /* prepare packet for muxing */
       enc_pkt.stream_index = stream_index;
       av_packet_rescale_ts(&amp;enc_pkt,
           stream_ctx[stream_index].enc_ctx->time_base,
           ofmt_ctx->streams[stream_index]->time_base);

       av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
       /* mux encoded frame */
       ret = av_interleaved_write_frame(ofmt_ctx, &amp;enc_pkt);
       return ret;
    }

    static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)
    {
       /* the filter graph was removed, so just forward the frame to the encoder */
       av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");

       return encode_write_frame(frame, stream_index, NULL);
    }


    int main(int argc, char **argv)
    {
       int ret;
       AVPacket packet = {0};
       packet.data = NULL;
       packet.size = 0;
       AVFrame *frame = NULL;
       enum AVMediaType type;
       unsigned int stream_index;
       unsigned int i;
       int got_frame;
       int(*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);

       if (argc != 3) {
           av_log(NULL, AV_LOG_ERROR, "Usage: %s &lt;input file> &lt;output file>\n", argv[0]);
           return 1;
       }

       av_register_all();
       avfilter_register_all();

       if ((ret = open_input_file(argv[1])) &lt; 0)
           goto end;
       if ((ret = open_output_file(argv[2])) &lt; 0)
           goto end;

       /* read all packets */
       while (1) {
           if ((ret = av_read_frame(ifmt_ctx, &amp;packet)) &lt; 0)
               break;
           stream_index = packet.stream_index;
           type = ifmt_ctx->streams[packet.stream_index]->codecpar->codec_type;
           av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
               stream_index);

            if (type == AVMEDIA_TYPE_VIDEO)
            {
                av_log(NULL, AV_LOG_DEBUG, "Going to reencode&amp;filter the frame\n");
                frame = av_frame_alloc();
                if (!frame) {
                    ret = AVERROR(ENOMEM);
                    break;
                }
                av_packet_rescale_ts(&amp;packet,
                    ifmt_ctx->streams[stream_index]->time_base,
                    stream_ctx[stream_index].dec_ctx->time_base);
                dec_func = avcodec_decode_video2;
                ret = dec_func(stream_ctx[stream_index].dec_ctx, frame,
                    &amp;got_frame, &amp;packet);
                if (ret &lt; 0) {
                    av_frame_free(&amp;frame);
                    av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
                    break;
                }

                if (got_frame) {
                    frame->pts = frame->best_effort_timestamp;
                    ret = filter_encode_write_frame(frame, stream_index);
                    av_frame_free(&amp;frame);
                    if (ret &lt; 0)
                        goto end;
                }
                else {
                    av_frame_free(&amp;frame);
                }
            }
            else
           {
               /* remux this frame without reencoding */
               av_packet_rescale_ts(&amp;packet,
                   ifmt_ctx->streams[stream_index]->time_base,
                   ofmt_ctx->streams[stream_index]->time_base);

               ret = av_interleaved_write_frame(ofmt_ctx, &amp;packet);
               if (ret &lt; 0)
                   goto end;
           }
           av_packet_unref(&amp;packet);
       }

       av_write_trailer(ofmt_ctx);
    end:
       av_packet_unref(&amp;packet);
       av_frame_free(&amp;frame);
       for (i = 0; i &lt; ifmt_ctx->nb_streams; i++) {
           avcodec_free_context(&amp;stream_ctx[i].dec_ctx);
           if (ofmt_ctx &amp;&amp; ofmt_ctx->nb_streams > i &amp;&amp; ofmt_ctx->streams[i] &amp;&amp; stream_ctx[i].enc_ctx)
               avcodec_free_context(&amp;stream_ctx[i].enc_ctx);
       }
       av_free(stream_ctx);
       avformat_close_input(&amp;ifmt_ctx);
       if (ofmt_ctx &amp;&amp; !(ofmt_ctx->oformat->flags &amp; AVFMT_NOFILE))
           avio_closep(&amp;ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);

       return ret ? 1 : 0;
    }

    And the result is different.

    For a test I took SampleVideo_1280x720_1mb.flv. It has:

    File size : 1.00 MiB
    Overall bit rate : 1 630 kb/s

    After my decode/encode actions the result became:

    File size : 1.23 MiB
    Overall bit rate : 2 005 kb/s

    Other parameters (video bit rate, audio bit rate, etc.) are the same, as the attached screenshot shows.

    What am I doing wrong? How can I control the overall bit rate? I suppose something is wrong with the encoder/decoder setup, but what?
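For what it's worth, the knobs libavcodec exposes for steering the output rate are the rate-control fields of AVCodecContext, which have to be set before avcodec_open2(). A minimal sketch with hypothetical values (the field names are real libavcodec API; the numbers are made up, not derived from the sample file):

```c
/* hypothetical values; set between avcodec_alloc_context3() and avcodec_open2() */
enc_ctx->bit_rate       = 1500 * 1000;     /* target average bit rate, bits/s */
enc_ctx->rc_max_rate    = 1800 * 1000;     /* rate-control ceiling (VBV) */
enc_ctx->rc_min_rate    = 1800 * 1000;     /* set equal to max for near-CBR */
enc_ctx->rc_buffer_size = 2 * 1800 * 1000; /* VBV buffer size, in bits */
```

Left unset, the encoder falls back on its own defaults, which is one reason a plain decode/encode round trip rarely reproduces the source's overall bit rate.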

    UPD: I found that when, in the function open_input_file, I write

    if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO)
    {
        codec_ctx->framerate = av_guess_frame_rate(ifmt_ctx, stream, NULL);
    }

    I get the result described above (bigger file size and bit rate).

    And when in this function I instead write

    if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO)
    {
       codec_ctx = ifmt_ctx->streams[i]->codec;
    }

    I get a smaller size and bit rate:

    File size : 900 KiB
    Overall bit rate : 1 429 kb/s

    But how do I get exactly the same size and frame rate as in the original file?