Other articles (37)

  • Automatic backup of SPIP channels

    1 April 2010

    As part of setting up an open platform, it is important for hosting providers to have reasonably regular backups available in order to cope with any problem that might arise.
    To carry out this task, two SPIP plugins are used: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)
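
    Outside SPIP, the same two steps can be approximated with a small shell script run from cron; the database name, credentials and paths below are hypothetical placeholders, not values used by the plugins:

    #!/bin/bash
    # Rough sketch of a nightly backup (hypothetical names and paths).
    DATE=$(date +%F)
    # 1. Regular database backup as a MySQL dump, re-importable through phpMyAdmin.
    mysqldump -u spip_user -p'secret' spip_db > /var/backups/spip_db_"$DATE".sql
    # 2. Zip archive of the site's important data (documents, configuration, ...).
    zip -r /var/backups/spip_files_"$DATE".zip /var/www/spip/IMG /var/www/spip/config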

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in bash was created to make this step easier on a server running a compatible Linux distribution.
    You need SSH access to your server and a "root" account in order to use it, which is what allows the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation for using the installation script (...)

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
    You must have SSH access to your server and a root account in order to use it; root is needed to install the dependencies. Contact your provider if you do not have this.
    The documentation on how to use this installation script is available here.
    The code of this (...)
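
    As a purely illustrative sketch (the script's actual name and download location are not given here, so both are hypothetical), running such an installer typically amounts to:

    # connect to the server as root over SSH, then launch the all-in-one script
    ssh root@your-server.example.org
    bash mediaspip_install.sh    # installs the server-side dependencies, then MediaSPIP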

On other sites (6586)

  • ffmpeg : images to 29.97fps mpeg2, audio not sync [migrated]

    21 November 2011, by Andy Le

    I have spent a lot of time on this issue. Hope someone can help.

    I want to convert 3147 images plus an AC-3 audio file into an MPEG-2 video at 29.97 fps (about 1m45s). My command:

    ~/ffmpeg/ffmpeg/ffmpeg -loop_input -t 105 -i v%4d.tga -i final.ac3 -vcodec mpeg2video -qscale 1 -s 400x400 -r 30000/1001 -acodec copy -y out.mpeg 2> out.txt

    However, the audio ends before the frame sequence does, which means the video runs slower than the audio.

    I checked the output file with mediainfo and it shows:

    General
    Complete name                    : out.mpeg
    Format                           : MPEG-PS
    File size                        : 7.18 MiB
    Duration                         : 1mn 44s
    Overall bit rate                 : 574 Kbps

    Video
    ID                               : 224 (0xE0)
    Format                           : MPEG Video
    Format version                   : Version 2
    Format profile                   : Main@Main
    Format settings, BVOP            : No
    Format settings, Matrix          : Default
    Format_Settings_GOP              : M=1, N=12
    Duration                         : 1mn 44s
    Bit rate mode                    : Variable
    Bit rate                         : 103 Kbps
    Width                            : 400 pixels
    Height                           : 400 pixels
    Display aspect ratio             : 1.000
    Frame rate                       : 29.970 fps
    Resolution                       : 8 bits
    Colorimetry                      : 4:2:0
    Scan type                        : Progressive
    Bits/(Pixel*Frame)               : 0.021
    Stream size                      : 1.29 MiB (18%)

    Audio
    ID                               : 128 (0x80)
    Format                           : AC-3
    Format/Info                      : Audio Coding 3
    Duration                         : 1mn 44s
    Bit rate mode                    : Constant
    Bit rate                         : 448 Kbps
    Channel(s)                       : 6 channels
    Channel positions                : Front: L C R, Side: L R, LFE
    Sampling rate                    : 44.1 KHz
    Stream size                      : 5.61 MiB (78%)

    The log from ffmpeg shows many duplicated frames, but I don't know how to get rid of them (see the sketch after the log below).

    -loop_input is deprecated, use -loop 1
    [image2 @ 0x9c17a80] max_analyze_duration 5000000 reached at 5000000
    Input #0, image2, from 'v%4d.tga':
     Duration: 00:02:05.88, start: 0.000000, bitrate: N/A
       Stream #0:0: Video: targa, bgr24, 400x400, 25 fps, 25 tbr, 25 tbn, 25 tbc
    -loop_input is deprecated, use -loop 1
    [ac3 @ 0x9ca5420] max_analyze_duration 5000000 reached at 5014400
    [ac3 @ 0x9ca5420] Estimating duration from bitrate, this may be inaccurate
    Input #1, ac3, from 'Final.ac3':
     Duration: 00:20:10.68, start: 0.000000, bitrate: 447 kb/s
       Stream #1:0: Audio: ac3, 44100 Hz, 5.1(side), s16, 448 kb/s
    Incompatible pixel format 'bgr24' for codec 'mpeg2video', auto-selecting format 'yuv420p'
    [buffer @ 0x9c1e060] w:400 h:400 pixfmt:bgr24 tb:1/1000000 sar:0/1 sws_param:
    [buffersink @ 0x9dd56c0] auto-inserting filter 'auto-inserted scale 0' between the filter 'src' and the filter 'out'
    [scale @ 0x9c178e0] w:400 h:400 fmt:bgr24 -> w:400 h:400 fmt:yuv420p flags:0x4
    [mpeg @ 0x9d58060] VBV buffer size not set, muxing may fail
    Output #0, mpeg, to 'out.mpeg':
     Metadata:
       encoder         : Lavf53.21.0
       Stream #0:0: Video: mpeg2video, yuv420p, 400x400, q=2-31, 200 kb/s, 90k tbn, 29.97 tbc
       Stream #0:1: Audio: ac3, 44100 Hz, 5.1(side), 448 kb/s
    Stream mapping:
     Stream #0:0 -> #0:0 (targa -> mpeg2video)
     Stream #1:0 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    frame=  267 fps=  0 q=1.0 size=     564kB time=00:00:08.87 bitrate= 520.6kbits/s dup=43 drop=0    
    frame=  544 fps=542 q=1.0 size=    1186kB time=00:00:18.11 bitrate= 536.2kbits/s dup=89 drop=0    
    frame=  821 fps=546 q=1.0 size=    1818kB time=00:00:27.36 bitrate= 544.3kbits/s dup=135 drop=0    
    frame= 1098 fps=548 q=1.0 size=    2444kB time=00:00:36.60 bitrate= 547.0kbits/s dup=181 drop=0    
    frame= 1376 fps=549 q=1.0 size=    3072kB time=00:00:45.87 bitrate= 548.5kbits/s dup=227 drop=0    
    frame= 1653 fps=550 q=1.0 size=    3700kB time=00:00:55.12 bitrate= 549.9kbits/s dup=273 drop=0    
    frame= 1930 fps=550 q=1.0 size=    4326kB time=00:01:04.36 bitrate= 550.6kbits/s dup=319 drop=0    
    frame= 2208 fps=551 q=1.0 size=    4960kB time=00:01:13.64 bitrate= 551.8kbits/s dup=365 drop=0    
    frame= 2462 fps=546 q=1.0 size=    5746kB time=00:01:22.11 bitrate= 573.2kbits/s dup=407 drop=0    
    frame= 2728 fps=544 q=1.0 size=    6354kB time=00:01:30.99 bitrate= 572.1kbits/s dup=451 drop=0    
    frame= 3007 fps=545 q=1.0 size=    6980kB time=00:01:40.28 bitrate= 570.2kbits/s dup=498 drop=0    
    frame= 3146 fps=546 q=1.0 Lsize=    7352kB time=00:01:44.93 bitrate= 573.9kbits/s dup=521 drop=0    

    video:1518kB audio:5745kB global headers:0kB muxing overhead 1.230493%
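
    The duplicated frames (dup=521) come from the rate mismatch visible in the log: the image sequence is read at the image2 default of 25 fps while the output is forced to 29.97 fps, so each picture stays on screen longer than intended and only part of the sequence fits into the 105 s window. A variant that declares the input rate instead, so that the 3147 frames span roughly 105 s on their own (a sketch only, not tested against this exact material):

    ~/ffmpeg/ffmpeg/ffmpeg -r 30000/1001 -i v%4d.tga -i final.ac3 -t 105 -vcodec mpeg2video -qscale 1 -s 400x400 -acodec copy -y out.mpeg

    With the images read at 30000/1001 fps there is nothing to duplicate (3147 / 29.97 ≈ 105 s), and -t 105 now only trims the much longer AC-3 track.
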
  • convert images to video with differing time ranges

    11 June 2013, by FaultyJuggler

    I have images coming in at random times, each labeled with the epoch time at which it was taken. I want to create a video that replays their creation in real time and in order. As far as I can tell, FFMPEG only lets you set a fixed framerate.

    For now I'm looking at creating one video file per image, lasting as long as the gap between that image's timestamp and the next one's, and then concatenating all the videos together afterwards (see the sketch below).

    Is there a better way to do this?
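
    One possible alternative (a sketch only, assuming an ffmpeg build that includes the concat demuxer; the file names and durations are made up) is to list the images with per-entry duration directives computed from the gaps between epoch timestamps, and encode them in a single variable-frame-rate pass:

    # timestamps.ffconcat (hypothetical; one duration per gap between timestamps)
    ffconcat version 1.0
    file 'img_1370000000.jpg'
    duration 3.2
    file 'img_1370000003.jpg'
    duration 1.7
    file 'img_1370000005.jpg'
    duration 1.0

    ffmpeg -f concat -i timestamps.ffconcat -vsync vfr out.mp4

    This avoids generating one intermediate clip per image and concatenating them afterwards.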

  • Conversion from mp3 to aac/mp4 container (FFmpeg/c++)

    1 July 2013, by taansari

    I have written a small application to extract audio from an mp4 file, or simply to convert an existing audio file to AAC/mp4 format (either raw AAC or inside an mp4 container). When I run it on existing mp4 files as input, it extracts the audio correctly and encodes it to mp4 (audio only: AAC), or even directly to AAC (i.e. test.aac also works). But when I tried running it on mp3 files, the output clip played back faster than it should (a 1:12 clip finished playing in only 1:05).

    Edit: I have made improvements to the code. It no longer plays back faster, but the conversion still stops at 1:05 and the rest of the clip is missing (about 89% is converted, the remaining 11% is lost).

    Here is the code I have written to achieve this:

    ////////////////////////////////////////////////
       #include "stdafx.h"

    #include <iostream>
    #include <fstream>

    #include <string>
    #include <vector>
    #include <map>

    #include <deque>
    #include <queue>

    #include <stdio.h>    // fprintf, printf
    #include <stdlib.h>   // exit

    extern "C"
    {
    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
    #include "libavdevice/avdevice.h"
    #include "libswscale/swscale.h"
    #include "libavutil/dict.h"
    #include "libavutil/error.h"
    #include "libavutil/opt.h"
    #include <libavutil/fifo.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/samplefmt.h>
    #include <libswresample/swresample.h>
    }

    AVFormatContext*    fmt_ctx= NULL;
    int                    audio_stream_index = -1;
    AVCodecContext *    codec_ctx_audio = NULL;
    AVCodec*            codec_audio = NULL;
    AVFrame*            decoded_frame = NULL;
    uint8_t**            audio_dst_data = NULL;
    int                    got_frame = 0;
    int                    audiobufsize = 0;
    AVPacket            input_packet;
    int                    audio_dst_linesize = 0;
    int                    audio_dst_bufsize = 0;
    SwrContext *        swr = NULL;

    AVOutputFormat *    output_format = NULL ;
    AVFormatContext *    output_fmt_ctx= NULL;
    AVStream *            audio_st = NULL;
    AVCodec *            audio_codec = NULL;
    double                audio_pts = 0.0;
    AVFrame *            out_frame = avcodec_alloc_frame();

    int                    audio_input_frame_size = 0;

    uint8_t *            audio_data_buf = NULL;
    uint8_t *            audio_out = NULL;
    int                    audio_bit_rate;
    int                    audio_sample_rate;
    int                    audio_channels;

    int decode_packet();
    int open_audio_input(char* src_filename);
    int decode_frame();

    int open_encoder(char* output_filename);
    AVStream *add_audio_stream(AVFormatContext *oc, AVCodec **codec,
       enum AVCodecID codec_id);
    int open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st);
    void close_audio(AVFormatContext *oc, AVStream *st);
    void write_audio_frame(uint8_t ** audio_src_data, int audio_src_bufsize);

    int open_audio_input(char* src_filename)
    {
       int i =0;
       /* open input file, and allocate format context */
       if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0)
       {
           fprintf(stderr, "Could not open source file %s\n", src_filename);
           exit(1);
       }

       // Retrieve stream information
       if(avformat_find_stream_info(fmt_ctx, NULL)<0)
           return -1; // Couldn't find stream information

       // Dump information about file onto standard error
       av_dump_format(fmt_ctx, 0, src_filename, 0);

       // Find the first audio stream
       for(i=0; i<fmt_ctx->nb_streams; i++)
       {
           if(fmt_ctx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO)
           {
               audio_stream_index=i;
               break;
           }
       }
       if ( audio_stream_index != -1 )
       {
           // Get a pointer to the codec context for the audio stream
           codec_ctx_audio=fmt_ctx->streams[audio_stream_index]->codec;

           // Find the decoder for the audio stream
           codec_audio=avcodec_find_decoder(codec_ctx_audio->codec_id);
           if(codec_audio==NULL) {
               fprintf(stderr, "Unsupported audio codec!\n");
               return -1; // Codec not found
           }

           // Open codec
           AVDictionary *codecDictOptions = NULL;
           if(avcodec_open2(codec_ctx_audio, codec_audio, &codecDictOptions)<0)
               return -1; // Could not open codec

           // Set up SWR context once you've got codec information
           swr = swr_alloc();
           av_opt_set_int(swr, "in_channel_layout",  codec_ctx_audio->channel_layout, 0);
           av_opt_set_int(swr, "out_channel_layout", codec_ctx_audio->channel_layout,  0);
           av_opt_set_int(swr, "in_sample_rate",     codec_ctx_audio->sample_rate, 0);
           av_opt_set_int(swr, "out_sample_rate",    codec_ctx_audio->sample_rate, 0);
           av_opt_set_sample_fmt(swr, "in_sample_fmt",  codec_ctx_audio->sample_fmt, 0);
           av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_S16,  0);
           swr_init(swr);

           // Allocate audio frame
           if ( decoded_frame == NULL ) decoded_frame = avcodec_alloc_frame();
           int nb_planes = 0;
           AVStream* audio_stream = fmt_ctx->streams[audio_stream_index];
           nb_planes = av_sample_fmt_is_planar(codec_ctx_audio->sample_fmt) ? codec_ctx_audio->channels : 1;
           int tempSize =  sizeof(uint8_t *) * nb_planes;
           audio_dst_data = (uint8_t**)av_mallocz(tempSize);
           if (!audio_dst_data)
           {
               fprintf(stderr, "Could not allocate audio data buffers\n");
           }
           else
           {
               for ( int i = 0 ; i < nb_planes ; i ++ )
               {
                   audio_dst_data[i] = NULL;
               }
           }
       }
       return 0;
    }


    int decode_frame()
    {
       int rv = 0;
       got_frame = 0;
       if ( fmt_ctx == NULL  )
       {
           return rv;
       }
       int ret = 0;
       audiobufsize = 0;
       rv = av_read_frame(fmt_ctx, &input_packet);
       if ( rv < 0 )
       {
           return rv;
       }
       rv = decode_packet();
       // Free the input_packet that was allocated by av_read_frame
       //av_free_packet(&input_packet);
       return rv;
    }

    int decode_packet()
    {
       int rv = 0;
       int ret = 0;

       //audio stream?
       if(input_packet.stream_index == audio_stream_index)
       {
           /* decode audio frame */
           rv = avcodec_decode_audio4(codec_ctx_audio, decoded_frame, &got_frame, &input_packet);
           if (rv < 0)
           {
               fprintf(stderr, "Error decoding audio frame\n");
               //return ret;
           }
           else
           {
               if (got_frame)
               {
                   if ( audio_dst_data[0] == NULL )
                   {
                        ret = av_samples_alloc(audio_dst_data, &audio_dst_linesize, decoded_frame->channels,
                           decoded_frame->nb_samples, (AVSampleFormat)decoded_frame->format, 1);
                       if (ret < 0)
                       {
                           fprintf(stderr, "Could not allocate audio buffer\n");
                           return AVERROR(ENOMEM);
                       }
                       /* TODO: extend return code of the av_samples_* functions so that this call is not needed */
                       audio_dst_bufsize = av_samples_get_buffer_size(NULL, audio_st->codec->channels,
                           decoded_frame->nb_samples, (AVSampleFormat)decoded_frame->format, 1);

                       //int16_t* outputBuffer = ...;
                       swr_convert( swr, audio_dst_data, out_frame->nb_samples, (const uint8_t**) decoded_frame->extended_data, decoded_frame->nb_samples );
                   }
                   /* copy audio data to destination buffer:
                   * this is required since rawaudio expects non aligned data */
                   //av_samples_copy(audio_dst_data, decoded_frame->data, 0, 0,
                   //    decoded_frame->nb_samples, decoded_frame->channels, (AVSampleFormat)decoded_frame->format);
               }
           }
       }
       return rv;
    }


    int open_encoder(char* output_filename )
    {
       int rv = 0;

       /* allocate the output media context */
       AVOutputFormat *opfmt = NULL;

       avformat_alloc_output_context2(&output_fmt_ctx, opfmt, NULL, output_filename);
       if (!output_fmt_ctx) {
           printf("Could not deduce output format from file extension: using MPEG.\n");
           avformat_alloc_output_context2(&output_fmt_ctx, NULL, "mpeg", output_filename);
       }
       if (!output_fmt_ctx) {
           rv = -1;
       }
       else
       {
           output_format = output_fmt_ctx->oformat;
       }

       /* Add the audio stream using the default format codecs
       * and initialize the codecs. */
       audio_st = NULL;

       if ( output_fmt_ctx )
       {
           if (output_format->audio_codec != AV_CODEC_ID_NONE)
           {
               audio_st = add_audio_stream(output_fmt_ctx, &audio_codec, output_format->audio_codec);
           }

           /* Now that all the parameters are set, we can open the audio and
           * video codecs and allocate the necessary encode buffers. */
           if (audio_st)
           {
               rv = open_audio(output_fmt_ctx, audio_codec, audio_st);
               if ( rv < 0 ) return rv;
           }

           av_dump_format(output_fmt_ctx, 0, output_filename, 1);
           /* open the output file, if needed */
           if (!(output_format->flags & AVFMT_NOFILE))
           {
               if (avio_open(&output_fmt_ctx->pb, output_filename, AVIO_FLAG_WRITE) < 0) {
                   fprintf(stderr, "Could not open '%s'\n", output_filename);
                   rv = -1;
               }
               else
               {
                   /* Write the stream header, if any. */
                   if (avformat_write_header(output_fmt_ctx, NULL) < 0)
                   {
                       fprintf(stderr, "Error occurred when opening output file\n");
                       rv = -1;
                   }
               }
           }
       }

       return rv;
    }

    AVStream *add_audio_stream(AVFormatContext *oc, AVCodec **codec,
       enum AVCodecID codec_id)
    {
       AVCodecContext *c;
       AVStream *st;

       /* find the audio encoder */
       *codec = avcodec_find_encoder(codec_id);
       if (!(*codec)) {
           fprintf(stderr, "Could not find codec\n");
           exit(1);
       }

       st = avformat_new_stream(oc, *codec);
       if (!st) {
           fprintf(stderr, "Could not allocate stream\n");
           exit(1);
       }
       st->id = 1;

       c = st->codec;

       /* put sample parameters */
       c->sample_fmt  = AV_SAMPLE_FMT_S16;
       c->bit_rate    = audio_bit_rate;
       c->sample_rate = audio_sample_rate;
       c->channels    = audio_channels;

       // some formats want stream headers to be separate
       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           c->flags |= CODEC_FLAG_GLOBAL_HEADER;

       return st;
    }

    int open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
    {
       int ret=0;
       AVCodecContext *c;

       st->duration = fmt_ctx->duration;
       c = st->codec;

       /* open it */
       ret = avcodec_open2(c, codec, NULL) ;
       if ( ret < 0)
       {
           fprintf(stderr, "could not open codec\n");
           return -1;
           //exit(1);
       }

       if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
           audio_input_frame_size = 10000;
       else
           audio_input_frame_size = c->frame_size;
       int tempSize = audio_input_frame_size *
           av_get_bytes_per_sample(c->sample_fmt) *
           c->channels;
       return ret;
    }

    void close_audio(AVFormatContext *oc, AVStream *st)
    {
       avcodec_close(st->codec);
    }

    void write_audio_frame(uint8_t ** audio_src_data, int audio_src_bufsize)
    {
       AVFormatContext *oc = output_fmt_ctx;
       AVStream *st = audio_st;
       if ( oc == NULL || st == NULL ) return;
       AVCodecContext *c;
       AVPacket pkt = { 0 }; // data and size must be 0;
       int got_packet;

       av_init_packet(&pkt);
       c = st->codec;

       out_frame->nb_samples = audio_input_frame_size;
       int buf_size =         audio_src_bufsize *
           av_get_bytes_per_sample(c->sample_fmt) *
           c->channels;
       avcodec_fill_audio_frame(out_frame, c->channels, c->sample_fmt,
           (uint8_t *) *audio_src_data,
           buf_size, 1);
       avcodec_encode_audio2(c, &pkt, out_frame, &got_packet);
       if (!got_packet)
       {
       }
       else
       {
           if (pkt.pts != AV_NOPTS_VALUE)
               pkt.pts =  av_rescale_q(pkt.pts, st->codec->time_base, st->time_base);
           if (pkt.dts != AV_NOPTS_VALUE)
               pkt.dts = av_rescale_q(pkt.dts, st->codec->time_base, st->time_base);
           if ( c && c->coded_frame && c->coded_frame->key_frame)
               pkt.flags |= AV_PKT_FLAG_KEY;

            pkt.stream_index = st->index;
           pkt.flags |= AV_PKT_FLAG_KEY;
           /* Write the compressed frame to the media file. */
           if (av_interleaved_write_frame(oc, &pkt) != 0)
           {
               fprintf(stderr, "Error while writing audio frame\n");
               exit(1);
           }
       }
       av_free_packet(&pkt);
    }


    void write_delayed_frames(AVFormatContext *oc, AVStream *st)
    {
       AVCodecContext *c = st->codec;
       int got_output = 0;
       int ret = 0;
       AVPacket pkt;
       pkt.data = NULL;
       pkt.size = 0;
       av_init_packet(&pkt);
       int i = 0;
       for (got_output = 1; got_output; i++)
       {
           ret = avcodec_encode_audio2(c, &pkt, NULL, &got_output);
           if (ret < 0)
           {
               fprintf(stderr, "error encoding frame\n");
               exit(1);
           }
           static int64_t tempPts = 0;
           static int64_t tempDts = 0;
           /* If size is zero, it means the image was buffered. */
           if (got_output)
           {
               if (pkt.pts != AV_NOPTS_VALUE)
                   pkt.pts =  av_rescale_q(pkt.pts, st->codec->time_base, st->time_base);
               if (pkt.dts != AV_NOPTS_VALUE)
                   pkt.dts = av_rescale_q(pkt.dts, st->codec->time_base, st->time_base);
               if ( c && c->coded_frame && c->coded_frame->key_frame)
                   pkt.flags |= AV_PKT_FLAG_KEY;

               pkt.stream_index = st->index;
               /* Write the compressed frame to the media file. */
               ret = av_interleaved_write_frame(oc, &pkt);
           }
           else
           {
               ret = 0;
           }
           av_free_packet(&pkt);
       }
    }

    int main(int argc, char **argv)
    {
       /* register all formats and codecs */
       av_register_all();
       avcodec_register_all();
       avformat_network_init();
       avdevice_register_all();
       int i =0;
       char src_filename[90] = "mp3.mp3";
       char dst_filename[90] = "test.mp4";
       open_audio_input(src_filename);
       audio_bit_rate        = codec_ctx_audio->bit_rate;
       audio_sample_rate    = codec_ctx_audio->sample_rate;
       audio_channels        = codec_ctx_audio->channels;
       open_encoder( dst_filename );
       while(1)
       {
           int rv = decode_frame();
           if ( rv < 0 )
           {
               break;
           }

           if (audio_st)
           {
               audio_pts = (double)audio_st->pts.val * audio_st->time_base.num /
                   audio_st->time_base.den;
           }
           else
           {
               audio_pts = 0.0;
           }
           if ( codec_ctx_audio )
           {
               if ( got_frame)
               {
                   write_audio_frame( audio_dst_data, audio_dst_bufsize );
               }
           }
           if ( audio_dst_data[0] )
           {
               av_freep(&audio_dst_data[0]);
               audio_dst_data[0] = NULL;
           }
           av_free_packet(&input_packet);
           printf("\naudio_pts: %.3f", audio_pts);
       }
       write_delayed_frames( output_fmt_ctx, audio_st );
       av_write_trailer(output_fmt_ctx);
       close_audio( output_fmt_ctx, audio_st);
       swr_free(&swr);
       avcodec_free_frame(&out_frame);
       return 0;
    }
    ///////////////////////////////////////////////

    I have been looking at this problem from many angles for about two days now, but I can't seem to figure out what I'm doing wrong.

    Note also: the printf() statement I've inserted shows audio_pts going up to 64.551 (that's about 1:05, which also shows the encoder is not reaching the full duration of the input file: 1:12).

    Can anyone please point out what I may be doing wrong?

    Thanks in advance for any guidance!

    P.S. When run through the command line, e.g. ffmpeg -i test.mp3 test.mp4, it converts the file just fine.