
Media (91)

Other articles (68)

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP via the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news creation form.
    News creation form: for a document of type "news item", the default fields are: publication date (customize the publication date) (...)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    Users can reach the profile editor from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

On other sites (6648)

  • How to encode resampled PCM-audio to AAC using ffmpeg-API when input pcm samples count not equal 1024

    26 September 2016, by Aleksei2414904

    I am currently working on capturing audio and streaming it to an RTMP server. I work on macOS (in Xcode), so I use the AVFoundation framework to capture audio sample buffers. For encoding and streaming, however, I need the ffmpeg API and the libfaac encoder, so the output format must be AAC (to support stream playback on iOS devices).

    I ran into the following problem: the audio capture device (in my case a Logitech camera) delivers sample buffers of 512 LPCM samples, and I can select an input sample rate of 16000, 24000, 36000 or 48000 Hz. When I feed these 512 samples to the AAC encoder (configured for the matching sample rate), I hear slow, jerky audio (it sounds like a slice of silence after each frame).

    I figured out (though I may be wrong) that the libfaac encoder only accepts audio frames of 1024 samples. When I set the input sample rate to 24000 and resample the input buffer to 48000 before encoding, I obtain 1024 resampled samples, and after encoding those 1024 samples to AAC I hear correct output. But my webcam produces 512 samples per buffer for any input sample rate, while the output sample rate must be 48000 Hz. So I need to resample in every case, and I will not get exactly 1024 samples per buffer after resampling.

    Is there a way to solve this problem within the ffmpeg API?

    I would be grateful for any help.

    PS:
    I suppose I could accumulate resampled buffers until the sample count reaches 1024 and then encode them, but this is a live stream, so there would be trouble with the resulting timestamps and with other input devices; such a solution does not seem suitable.

    The current issue grew out of the problem described in [question]: How to fill audio AVFrame (ffmpeg) with the data obtained from CMSampleBufferRef (AVFoundation)?

    Here is the code with the audio codec configuration (there was also a video stream, but video works fine):

       /*global variables*/
       static AVFrame *aframe;
       static AVFrame *frame;
       AVOutputFormat *fmt;
       AVFormatContext *oc;
       AVStream *audio_st, *video_st;
     void Init()
    {
       AVCodec *audio_codec, *video_codec;
       int ret;

       avcodec_register_all();  
       av_register_all();
       avformat_network_init();
       avformat_alloc_output_context2(&oc, NULL, "flv", filename);
       fmt = oc->oformat;
       oc->oformat->video_codec = AV_CODEC_ID_H264;
       oc->oformat->audio_codec = AV_CODEC_ID_AAC;
       video_st = NULL;
       audio_st = NULL;
       if (fmt->video_codec != AV_CODEC_ID_NONE) {
           // … /* init video codec */
       }
       if (fmt->audio_codec != AV_CODEC_ID_NONE) {
       audio_codec= avcodec_find_encoder(fmt->audio_codec);

       if (!(audio_codec)) {
           fprintf(stderr, "Could not find encoder for '%s'\n",
                   avcodec_get_name(fmt->audio_codec));
           exit(1);
       }
       audio_st= avformat_new_stream(oc, audio_codec);
       if (!audio_st) {
           fprintf(stderr, "Could not allocate stream\n");
           exit(1);
       }
       audio_st->id = oc->nb_streams-1;

       //AAC:
       audio_st->codec->sample_fmt  = AV_SAMPLE_FMT_S16;
       audio_st->codec->bit_rate    = 32000;
       audio_st->codec->sample_rate = 48000;
       audio_st->codec->profile=FF_PROFILE_AAC_LOW;
       audio_st->time_base = (AVRational){1, audio_st->codec->sample_rate };
       audio_st->codec->channels    = 1;
       audio_st->codec->channel_layout = AV_CH_LAYOUT_MONO;      


       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           audio_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       if (video_st)
       {
       //   …
       /*prepare video*/
       }
       if (audio_st)
       {
       aframe = avcodec_alloc_frame();
       if (!aframe) {
           fprintf(stderr, "Could not allocate audio frame\n");
           exit(1);
       }
       AVCodecContext *c;
       int ret;

       c = audio_st->codec;


       ret = avcodec_open2(c, audio_codec, 0);
       if (ret < 0) {
           fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
           exit(1);
       }

       //…
    }

    And the resampling and encoding of the audio:

    if (mType == kCMMediaType_Audio)
    {
       CMSampleTimingInfo timing_info;
       CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing_info);
       double  pts=0;
       double  dts=0;
       AVCodecContext *c;
       AVPacket pkt = { 0 }; // data and size must be 0
       int got_packet, ret;
       av_init_packet(&pkt);
       c = audio_st->codec;
       CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);

       NSUInteger channelIndex = 0;

       CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
       size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
       size_t lengthAtOffset = 0;
       size_t totalLength = 0;
       SInt16 *samples = NULL;
       CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));

       const AudioStreamBasicDescription *audioDescription = CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer));

       SwrContext *swr = swr_alloc();

       int in_smprt = (int)audioDescription->mSampleRate;
       av_opt_set_int(swr, "in_channel_layout",  AV_CH_LAYOUT_MONO, 0);

       av_opt_set_int(swr, "out_channel_layout", audio_st->codec->channel_layout,  0);

       av_opt_set_int(swr, "in_channel_count", audioDescription->mChannelsPerFrame,  0);
       av_opt_set_int(swr, "out_channel_count", audio_st->codec->channels,  0);

       av_opt_set_int(swr, "in_sample_rate",     audioDescription->mSampleRate,0);

       av_opt_set_int(swr, "out_sample_rate",    audio_st->codec->sample_rate,0);

       av_opt_set_sample_fmt(swr, "in_sample_fmt",  AV_SAMPLE_FMT_S16, 0);

       av_opt_set_sample_fmt(swr, "out_sample_fmt", audio_st->codec->sample_fmt,  0);

       swr_init(swr);
       uint8_t **input = NULL;
       int src_linesize;
       int in_samples = (int)numSamples;
       ret = av_samples_alloc_array_and_samples(&input, &src_linesize, audioDescription->mChannelsPerFrame,
                                                in_samples, AV_SAMPLE_FMT_S16P, 0);


       *input=(uint8_t*)samples;
       uint8_t *output=NULL;


       int out_samples = av_rescale_rnd(swr_get_delay(swr, in_smprt) +in_samples, (int)audio_st->codec->sample_rate, in_smprt, AV_ROUND_UP);

       av_samples_alloc(&output, NULL, audio_st->codec->channels, out_samples, audio_st->codec->sample_fmt, 0);
       in_samples = (int)numSamples;
       out_samples = swr_convert(swr, &output, out_samples, (const uint8_t **)input, in_samples);


       aframe->nb_samples =(int) out_samples;


       ret = avcodec_fill_audio_frame(aframe, audio_st->codec->channels, audio_st->codec->sample_fmt,
                                (uint8_t *)output,
                                (int) out_samples *
                                av_get_bytes_per_sample(audio_st->codec->sample_fmt) *
                                audio_st->codec->channels, 1);

       aframe->channel_layout = audio_st->codec->channel_layout;
       aframe->channels=audio_st->codec->channels;
       aframe->sample_rate= audio_st->codec->sample_rate;

       if (timing_info.presentationTimeStamp.timescale!=0)
           pts=(double) timing_info.presentationTimeStamp.value/timing_info.presentationTimeStamp.timescale;

       aframe->pts=pts*audio_st->time_base.den;
       aframe->pts = av_rescale_q(aframe->pts, audio_st->time_base, audio_st->codec->time_base);

       ret = avcodec_encode_audio2(c, &pkt, aframe, &got_packet);

       if (ret < 0) {
           fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
           exit(1);
       }
       swr_free(&swr);
       if (got_packet)
       {
           pkt.stream_index = audio_st->index;

           pkt.pts = av_rescale_q(pkt.pts, audio_st->codec->time_base, audio_st->time_base);
           pkt.dts = av_rescale_q(pkt.dts, audio_st->codec->time_base, audio_st->time_base);

           // Write the compressed frame to the media file.
          ret = av_interleaved_write_frame(oc, &pkt);
          if (ret != 0) {
               fprintf(stderr, "Error while writing audio frame: %s\n",
                       av_err2str(ret));
               exit(1);
           }

    }
  • Streaming video from an image using FFMPEG on Windows

    26 May 2013, by Daniel Zohar

    I wrote a program that simulates a camera and converts its output into a video stream. The program is required to run on Windows.
    There are two components in the system:

    1. Camera simulator. A C++ program that simulates the camera. Every 0.1 seconds it copies a pre-generated frame (i.e. a PNG file), using the Windows copy command, to the destination path ./target/target_image.png
    2. Video stream. Using FFmpeg, it creates a video stream out of the copied images. FFmpeg is run with the following command:
      ffmpeg -loop 1 -i ./target/target_image.png -r 10 -vcodec mpeg4 -f mpegts udp://127.0.0.1:1234

    When running the whole thing together, it works fine for a few seconds until ffmpeg halts. Here is a log from a run in debug mode:

    ffmpeg version N-52458-gaa96439 Copyright (c) 2000-2013 the FFmpeg developers
     built on Apr 24 2013 22:19:32 with gcc 4.8.0 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
     libavutil      52. 27.101 / 52. 27.101
     libavcodec     55.  6.100 / 55.  6.100
     libavformat    55.  3.100 / 55.  3.100
     libavdevice    55.  0.100 / 55.  0.100
     libavfilter     3. 60.101 /  3. 60.101
     libswscale      2.  2.100 /  2.  2.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  3.100 / 52.  3.100
    Splitting the commandline.
    Reading option '-loop' ... matched as AVOption 'loop' with argument '1'.
    Reading option '-i' ... matched as input file with argument './target/target_image.png'.
    Reading option '-r' ... matched as option 'r' (set frame rate (Hz value, fraction or abbreviation)) with argument '10'.
    Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'mpeg4'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'mpegts'.
    Reading option 'udp://127.0.0.1:1234' ... matched as output file.
    Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option loglevel (set logging level) with argument debug.
    Successfully parsed a group of options.
    Parsing a group of options: input file ./target/target_image.png.
    Successfully parsed a group of options.
    Opening an input file: ./target/target_image.png.
    [AVIOContext @ 02678840] Statistics: 234307 bytes read, 0 seeks
    [AVIOContext @ 02678840] Statistics: 221345 bytes read, 0 seeks
       Last message repeated 1 times
    [AVIOContext @ 02678840] Statistics: 226329 bytes read, 0 seeks
       Last message repeated 2 times
    [AVIOContext @ 02678840] Statistics: 228676 bytes read, 0 seeks
       Last message repeated 2 times
    [AVIOContext @ 02678840] Statistics: 230685 bytes read, 0 seeks
       Last message repeated 2 times
    [AVIOContext @ 02678840] Statistics: 232697 bytes read, 0 seeks
       Last message repeated 5 times
    [AVIOContext @ 02678840] Statistics: 234900 bytes read, 0 seeks
       Last message repeated 2 times
    [AVIOContext @ 02678840] Statistics: 236847 bytes read, 0 seeks
    [image2 @ 02677ac0] Probe buffer size limit of 5000000 bytes reached
    Input #0, image2, from './target/target_image.png':
     Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
       Stream #0:0, 22, 1/25: Video: png, rgb24, 1274x772 [SAR 1:1 DAR 637:386], 1/25, 25 fps, 25 tbr, 25 tbn, 25 tbc
    Successfully opened the file.
    Parsing a group of options: output file udp://127.0.0.1:1234.
    Applying option r (set frame rate (Hz value, fraction or abbreviation)) with argument 10.
    Applying option vcodec (force video codec ('copy' to copy stream)) with argument mpeg4.
    Applying option f (force format) with argument mpegts.
    Successfully parsed a group of options.
    Opening an output file: udp://127.0.0.1:1234.
    Successfully opened the file.
    [graph 0 input from stream 0:0 @ 02769280] Setting 'video_size' to value '1274x772'
    [graph 0 input from stream 0:0 @ 02769280] Setting 'pix_fmt' to value '2'
    [graph 0 input from stream 0:0 @ 02769280] Setting 'time_base' to value '1/25'
    [graph 0 input from stream 0:0 @ 02769280] Setting 'pixel_aspect' to value '1/1'
    [graph 0 input from stream 0:0 @ 02769280] Setting 'sws_param' to value 'flags=2'
    [graph 0 input from stream 0:0 @ 02769280] Setting 'frame_rate' to value '25/1'
    [graph 0 input from stream 0:0 @ 02769280] w:1274 h:772 pixfmt:rgb24 tb:1/25 fr:25/1 sar:1/1 sws_param:flags=2
    [format @ 02768ba0] compat: called with args=[yuv420p]
    [format @ 02768ba0] Setting 'pix_fmts' to value 'yuv420p'
    [auto-inserted scaler 0 @ 02768740] Setting 'w' to value '0'
    [auto-inserted scaler 0 @ 02768740] Setting 'h' to value '0'
    [auto-inserted scaler 0 @ 02768740] Setting 'flags' to value '0x4'
    [auto-inserted scaler 0 @ 02768740] w:0 h:0 flags:'0x4' interl:0
    [format @ 02768ba0] auto-inserting filter 'auto-inserted scaler 0' between the filter 'Parsed_null_0' and the filter 'format'
    [AVFilterGraph @ 026772c0] query_formats: 4 queried, 3 merged, 1 already done, 0 delayed
    [auto-inserted scaler 0 @ 02768740] w:1274 h:772 fmt:rgb24 sar:1/1 -> w:1274 h:772 fmt:yuv420p sar:1/1 flags:0x4
    [mpeg4 @ 02785020] detected 4 logical cores
    [mpeg4 @ 02785020] intra_quant_bias = 0 inter_quant_bias = -64
    [mpegts @ 0277da40] muxrate VBR, pcr every 1 pkts, sdt every 200, pat/pmt every 40 pkts
    Output #0, mpegts, to 'udp://127.0.0.1:1234':
     Metadata:
       encoder         : Lavf55.3.100
       Stream #0:0, 0, 1/90000: Video: mpeg4, yuv420p, 1274x772 [SAR 1:1 DAR 637:386], 1/10, q=2-31, 200 kb/s, 90k tbn, 10 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (png -> mpeg4)
    Press [q] to stop, [?] for help
    *** drop!
       Last message repeated 10 times
    frame=   11 fps=0.0 q=4.0 size=     118kB time=00:00:01.10 bitrate= 875.1kbits/s dup=0 drop=11    
    Statistics: 242771 bytes read, 0 seeks
    [AVIOContext @ 02674a60] Statistics: 246525 bytes read, 0 seeks
    *** drop!
    [AVIOContext @ 02674a60] Statistics: 230678 bytes read, 0 seeks
    [AVIOContext @ 02674a60] Statistics: 244023 bytes read, 0 seeks
    *** drop!
    [AVIOContext @ 02674a60] Statistics: 246389 bytes read, 0 seeks

    *** drop!
    [AVIOContext @ 02674a60] Statistics: 224478 bytes read, 0 seeks
    [AVIOContext @ 02674a60] Statistics: 228013 bytes read, 0 seeks
    *** drop!
    [image2 @ 02677ac0] Could not open file : ./target/target_image.png
    ./target/target_image.png: Input/output error
    [output stream 0:0 @ 02768c20] EOF on sink link output stream 0:0:default.
    No more output streams to write to, finishing.
    frame=  164 fps= 17 q=31.0 Lsize=     959kB time=00:00:16.40 bitrate= 478.9kbits/s dup=0 drop=240    

    video:869kB audio:0kB subtitle:0 global headers:0kB muxing overhead 10.285235%
    404 frames successfully decoded, 0 decoding errors
    [AVIOContext @ 026779c0] Statistics: 0 seeks, 746 writeouts

    It seems to me there's some kind of collision between the reading and writing to/from the same file. What's also interesting is that on Linux (while replacing the copy with cp) the program works just fine.

    Can someone suggest a way to solve this issue? Alternative solutions are also acceptable as long as the logical workflow remains the same.

  • C - Transcoding to UDP using FFmpeg?

    30 April 2013, by golmschenk

    I'm trying to use the FFmpeg libraries to take an existing video file and stream it over a UDP connection. Specifically, I've been looking at the muxing.c and demuxing.c examples in FFmpeg's doc/examples source directory. The demuxing example shows how to split an input video into its video and audio streams. The muxing example generates fake data and can already output to a UDP connection, as I would like. I've begun combining the two; below is my code, which is basically a copy of the muxing example with some parts replaced or appended with parts of the demuxing example. Unfortunately I'm running into plenty of complications with this approach. Is there an existing source code example that does the transcoding I'm looking for, or at least a tutorial on how one might create it? If not, a few pointers might help direct my work in combining the two files. Specifically, I'm getting the error:

    [NULL @ 0x23b4040] Unable to find a suitable output format for 'udp://localhost:7777'
    Could not deduce output format from file extension: using MPEG.
    Output #0, mpeg, to 'udp://localhost:7777':

    Even though the muxing example can already output to UDP. Any suggestions? Thank you!

     #include <stdlib.h>
     #include <stdio.h>
     #include <string.h>
     #include <math.h>

     #include <libavutil/mathematics.h>
     #include <libavformat/avformat.h>
     #include <libswscale/swscale.h>

     /* stream duration in seconds */
    #define STREAM_DURATION   200.0
    #define STREAM_FRAME_RATE 25 /* 25 images/s */
    #define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
    #define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */

    //FROM DE
    static AVFormatContext *fmt_ctx = NULL;
    static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
    static AVStream *video_stream = NULL, *audio_stream = NULL;
    static const char *src_filename = NULL;
    static const char *video_dst_filename = NULL;
    static const char *audio_dst_filename = NULL;
    static FILE *video_dst_file = NULL;
    static FILE *audio_dst_file = NULL;

    static uint8_t *video_dst_data[4] = {NULL};
    static int      video_dst_linesize[4];
    static int video_dst_bufsize;

    static uint8_t **audio_dst_data = NULL;
    static int       audio_dst_linesize;
    static int audio_dst_bufsize;

    static int video_stream_idx = -1, audio_stream_idx = -1;
    static AVFrame *frame = NULL;
    static AVPacket pkt;
    static int video_frame_count = 0;
    static int audio_frame_count = 0;
    //END DE

    static int sws_flags = SWS_BICUBIC;


    /* Add an output stream. */
    static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
                               enum AVCodecID codec_id)
    {
       AVCodecContext *c;
       AVStream *st;

       /* find the encoder */
       *codec = avcodec_find_encoder(codec_id);
       if (!(*codec)) {
           fprintf(stderr, "Could not find encoder for '%s'\n",
                   avcodec_get_name(codec_id));
           exit(1);
       }

       st = avformat_new_stream(oc, *codec);
       if (!st) {
           fprintf(stderr, "Could not allocate stream\n");
           exit(1);
       }
       st->id = oc->nb_streams-1;
       c = st->codec;

       switch ((*codec)->type) {
       case AVMEDIA_TYPE_AUDIO:
           st->id = 1;
           c->sample_fmt  = AV_SAMPLE_FMT_S16;
           c->bit_rate    = 64000;
           c->sample_rate = 44100;
           c->channels    = 2;
           break;

       case AVMEDIA_TYPE_VIDEO:
           c->codec_id = codec_id;

           c->bit_rate = 400000;
           /* Resolution must be a multiple of two. */
           c->width    = 352;
           c->height   = 288;
           /* timebase: This is the fundamental unit of time (in seconds) in terms
            * of which frame timestamps are represented. For fixed-fps content,
            * timebase should be 1/framerate and timestamp increments should be
            * identical to 1. */
           c->time_base.den = STREAM_FRAME_RATE;
           c->time_base.num = 1;
           c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
           c->pix_fmt       = STREAM_PIX_FMT;
           if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
               /* just for testing, we also add B frames */
               c->max_b_frames = 2;
           }
           if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
               /* Needed to avoid using macroblocks in which some coeffs overflow.
                * This does not happen with normal video, it just happens here as
                * the motion of the chroma plane does not match the luma plane. */
               c->mb_decision = 2;
           }
       break;

       default:
           break;
       }

       /* Some formats want stream headers to be separate. */
       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           c->flags |= CODEC_FLAG_GLOBAL_HEADER;

       return st;
    }

    /**************************************************************/
    /* audio output */

    static float t, tincr, tincr2;
    static int16_t *samples;
    static int audio_input_frame_size;

    static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
    {
       AVCodecContext *c;
       int ret;

       c = st->codec;

       /* open it */
       ret = avcodec_open2(c, codec, NULL);
       if (ret < 0) {
           fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
           exit(1);
       }

       /* init signal generator */
       t     = 0;
       tincr = 2 * M_PI * 110.0 / c->sample_rate;
       /* increment frequency by 110 Hz per second */
       tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

       if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
           audio_input_frame_size = 10000;
       else
           audio_input_frame_size = c->frame_size;
       samples = av_malloc(audio_input_frame_size *
                           av_get_bytes_per_sample(c->sample_fmt) *
                           c->channels);
       if (!samples) {
           fprintf(stderr, "Could not allocate audio samples buffer\n");
           exit(1);
       }
    }

     /* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
     * 'nb_channels' channels. */
    static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
    {
       int j, i, v;
       int16_t *q;

       q = samples;
       for (j = 0; j < frame_size; j++) {
           v = (int)(sin(t) * 10000);
           for (i = 0; i < nb_channels; i++)
               *q++ = v;
           t     += tincr;
           tincr += tincr2;
       }
    }

    static void write_audio_frame(AVFormatContext *oc, AVStream *st)
    {
       AVCodecContext *c;
       AVPacket pkt = { 0 }; // data and size must be 0;
       AVFrame *frame = avcodec_alloc_frame();
       int got_packet, ret;

       av_init_packet(&pkt);
       c = st->codec;

       get_audio_frame(samples, audio_input_frame_size, c->channels);
       frame->nb_samples = audio_input_frame_size;
       avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                                (uint8_t *)samples,
                                audio_input_frame_size *
                                av_get_bytes_per_sample(c->sample_fmt) *
                                c->channels, 1);

       ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
       if (ret < 0) {
           fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
           exit(1);
       }

       if (!got_packet)
           return;

       pkt.stream_index = st->index;

       /* Write the compressed frame to the media file. */
       ret = av_interleaved_write_frame(oc, &pkt);
       if (ret != 0) {
           fprintf(stderr, "Error while writing audio frame: %s\n",
                   av_err2str(ret));
           exit(1);
       }
       avcodec_free_frame(&frame);
    }

    static void close_audio(AVFormatContext *oc, AVStream *st)
    {
       avcodec_close(st->codec);

       av_free(samples);
    }

    /**************************************************************/
    /* video output */

    static AVFrame *frame;
    static AVPicture src_picture, dst_picture;
    static int frame_count;

    static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
    {
       int ret;
       AVCodecContext *c = st->codec;

       /* open the codec */
       ret = avcodec_open2(c, codec, NULL);
       if (ret < 0) {
           fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
           exit(1);
       }

       /* allocate and init a re-usable frame */
       frame = avcodec_alloc_frame();
       if (!frame) {
           fprintf(stderr, "Could not allocate video frame\n");
           exit(1);
       }

       /* Allocate the encoded raw picture. */
       ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
       if (ret < 0) {
           fprintf(stderr, "Could not allocate picture: %s\n", av_err2str(ret));
           exit(1);
       }

       /* If the output format is not YUV420P, then a temporary YUV420P
        * picture is needed too. It is then converted to the required
        * output format. */
       if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
           ret = avpicture_alloc(&src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);
           if (ret < 0) {
               fprintf(stderr, "Could not allocate temporary picture: %s\n",
                       av_err2str(ret));
               exit(1);
           }
       }

       /* copy data and linesize picture pointers to frame */
       *((AVPicture *)frame) = dst_picture;
    }

    /* Prepare a dummy image. */
    static void fill_yuv_image(AVPicture *pict, int frame_index,
                              int width, int height)
    {
       int x, y, i;

       i = frame_index;

       /* Y */
        for (y = 0; y < height; y++)
            for (x = 0; x < width; x++)
               pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;

       /* Cb and Cr */
        for (y = 0; y < height / 2; y++) {
            for (x = 0; x < width / 2; x++) {
               pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
               pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
           }
       }
    }

    static void write_video_frame(AVFormatContext *oc, AVStream *st)
    {
       int ret;
       static struct SwsContext *sws_ctx;
       AVCodecContext *c = st->codec;

       if (frame_count >= STREAM_NB_FRAMES) {
           /* No more frames to compress. The codec has a latency of a few
            * frames if using B-frames, so we get the last frames by
            * passing the same picture again. */
       } else {
           if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
               /* as we only generate a YUV420P picture, we must convert it
                * to the codec pixel format if needed */
               if (!sws_ctx) {
                   sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
                                            c->width, c->height, c->pix_fmt,
                                            sws_flags, NULL, NULL, NULL);
                   if (!sws_ctx) {
                       fprintf(stderr,
                               "Could not initialize the conversion context\n");
                       exit(1);
                   }
               }
                fill_yuv_image(&src_picture, frame_count, c->width, c->height);
               sws_scale(sws_ctx,
                         (const uint8_t * const *)src_picture.data, src_picture.linesize,
                         0, c->height, dst_picture.data, dst_picture.linesize);
           } else {
                fill_yuv_image(&dst_picture, frame_count, c->width, c->height);
           }
       }

       if (oc->oformat->flags & AVFMT_RAWPICTURE) {
           /* Raw video case - directly store the picture in the packet */
           AVPacket pkt;
           av_init_packet(&pkt);

           pkt.flags        |= AV_PKT_FLAG_KEY;
           pkt.stream_index  = st->index;
           pkt.data          = dst_picture.data[0];
           pkt.size          = sizeof(AVPicture);

           ret = av_interleaved_write_frame(oc, &pkt);
       } else {
           AVPacket pkt = { 0 };
           int got_packet;
           av_init_packet(&pkt);

           /* encode the image */
           ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
           if (ret < 0) {
               fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
               exit(1);
           }
           /* If size is zero, it means the image was buffered. */

           if (!ret && got_packet && pkt.size) {
               pkt.stream_index = st->index;

               /* Write the compressed frame to the media file. */
               ret = av_interleaved_write_frame(oc, &amp;pkt);
           } else {
               ret = 0;
           }
       }
       if (ret != 0) {
           fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
           exit(1);
       }
       frame_count++;
    }

    static void close_video(AVFormatContext *oc, AVStream *st)
    {
       avcodec_close(st->codec);
       av_free(src_picture.data[0]);
       av_free(dst_picture.data[0]);
       av_free(frame);
    }

    static int open_codec_context(int *stream_idx,
                                 AVFormatContext *fmt_ctx, enum AVMediaType type)
    {
       int ret;
       AVStream *st;
       AVCodecContext *dec_ctx = NULL;
       AVCodec *dec = NULL;

       ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
       if (ret &lt; 0) {
           fprintf(stderr, "Could not find %s stream in input file &#39;%s&#39;\n",
                   av_get_media_type_string(type), src_filename);
           return ret;
       } else {
           *stream_idx = ret;
           st = fmt_ctx->streams[*stream_idx];

           /* find decoder for the stream */
           dec_ctx = st->codec;
           dec = avcodec_find_decoder(dec_ctx->codec_id);
           if (!dec) {
               fprintf(stderr, "Failed to find %s codec\n",
                       av_get_media_type_string(type));
            return AVERROR(EINVAL);
           }

           if ((ret = avcodec_open2(dec_ctx, dec, NULL)) &lt; 0) {
               fprintf(stderr, "Failed to open %s codec\n",
                       av_get_media_type_string(type));
               return ret;
           }
       }

       return 0;
    }

    /**************************************************************/
    /* media file output */

    int main(int argc, char **argv)
    {
       const char *filename;
       AVOutputFormat *fmt;
       AVFormatContext *oc;
       AVStream *audio_st, *video_st;
       AVCodec *audio_codec, *video_codec;
       double audio_pts, video_pts;
       int ret = 0, got_frame;

       /* Initialize libavcodec, and register all codecs and formats. */
       av_register_all();

       if (argc != 3) {
           printf("usage: %s input_file output_file\n"
                  "\n", argv[0]);
           return 1;
       }

       src_filename = argv[1];
       filename = argv[2];

       /* allocate the output media context */
       avformat_alloc_output_context2(&amp;oc, NULL, NULL, filename);
       if (!oc) {
           printf("Could not deduce output format from file extension: using MPEG.\n");
           avformat_alloc_output_context2(&amp;oc, NULL, "mpeg", filename);
       }
       if (!oc) {
           return 1;
       }
       fmt = oc->oformat;

        /* The audio and video streams are taken from the input file,
         * opened below with open_codec_context(). */
        video_stream = NULL;
        audio_stream = NULL;

       //FROM DE
       /* open input file, and allocate format context */
       if (avformat_open_input(&amp;fmt_ctx, src_filename, NULL, NULL) &lt; 0) {
           fprintf(stderr, "Could not open source file %s\n", src_filename);
           exit(1);
       }

       /* retrieve stream information */
       if (avformat_find_stream_info(fmt_ctx, NULL) &lt; 0) {
           fprintf(stderr, "Could not find stream information\n");
           exit(1);
       }
       if (open_codec_context(&amp;video_stream_idx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
           video_stream = fmt_ctx->streams[video_stream_idx];
           video_dec_ctx = video_stream->codec;

           /* allocate image where the decoded image will be put */
           ret = av_image_alloc(video_dst_data, video_dst_linesize,
                                video_dec_ctx->width, video_dec_ctx->height,
                                video_dec_ctx->pix_fmt, 1);
           if (ret &lt; 0) {
               fprintf(stderr, "Could not allocate raw video buffer\n");
               goto end;
           }
           video_dst_bufsize = ret;
       }

       if (open_codec_context(&amp;audio_stream_idx, fmt_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
           int nb_planes;

           audio_stream = fmt_ctx->streams[audio_stream_idx];
           audio_dec_ctx = audio_stream->codec;

           nb_planes = av_sample_fmt_is_planar(audio_dec_ctx->sample_fmt) ?
               audio_dec_ctx->channels : 1;
           audio_dst_data = av_mallocz(sizeof(uint8_t *) * nb_planes);
           if (!audio_dst_data) {
               fprintf(stderr, "Could not allocate audio data buffers\n");
               ret = AVERROR(ENOMEM);
               goto end;
           }
       }
       //END DE

       /* Now that all the parameters are set, we can open the audio and
        * video codecs and allocate the necessary encode buffers. */
       if (video_stream)
           open_video(oc, video_codec, video_stream);
       if (audio_stream)
           open_audio(oc, audio_codec, audio_stream);

       av_dump_format(oc, 0, filename, 1);

       /* open the output file, if needed */
       if (!(fmt->flags &amp; AVFMT_NOFILE)) {
           ret = avio_open(&amp;oc->pb, filename, AVIO_FLAG_WRITE);
           if (ret &lt; 0) {
               fprintf(stderr, "Could not open &#39;%s&#39;: %s\n", filename,
                       av_err2str(ret));
               return 1;
           }
       }

       /* Write the stream header, if any. */
       ret = avformat_write_header(oc, NULL);
       if (ret &lt; 0) {
           fprintf(stderr, "Error occurred when opening output file: %s\n",
                   av_err2str(ret));
           return 1;
       }

       if (frame)
           frame->pts = 0;
       for (;;) {
           /* Compute current audio and video time. */
           if (audio_stream)
               audio_pts = (double)audio_stream->pts.val * audio_stream->time_base.num / audio_stream->time_base.den;
           else
               audio_pts = 0.0;

           if (video_stream)
               video_pts = (double)video_stream->pts.val * video_stream->time_base.num /
                           video_stream->time_base.den;
           else
               video_pts = 0.0;

           if ((!audio_stream || audio_pts >= STREAM_DURATION) &amp;&amp;
               (!video_stream || video_pts >= STREAM_DURATION))
               break;

           /* write interleaved audio and video frames */
            if (!video_stream || (audio_stream && audio_pts < video_pts)) {
               write_audio_frame(oc, audio_stream);
           } else {
               write_video_frame(oc, video_stream);
               frame->pts += av_rescale_q(1, video_stream->codec->time_base, video_stream->time_base);
           }
       }

        /* Write the trailer, if any. The trailer must be written before you
         * close the CodecContexts open when you wrote the header; otherwise
         * av_write_trailer() may try to use memory that was freed on
         * avcodec_close(). */
       av_write_trailer(oc);

       /* Close each codec. */
        if (video_stream)
            close_video(oc, video_stream);
        if (audio_stream)
            close_audio(oc, audio_stream);

       if (!(fmt->flags &amp; AVFMT_NOFILE))
           /* Close the output file. */
           avio_close(oc->pb);

       /* free the stream */
       avformat_free_context(oc);

    end:
       if (video_dec_ctx)
           avcodec_close(video_dec_ctx);
       if (audio_dec_ctx)
           avcodec_close(audio_dec_ctx);
       avformat_close_input(&amp;fmt_ctx);
       if (video_dst_file)
           fclose(video_dst_file);
       if (audio_dst_file)
           fclose(audio_dst_file);
       av_free(frame);
       av_free(video_dst_data[0]);
       av_free(audio_dst_data);

       return 0;
    }