
Media (91)

Other articles (71)

  • Customising your site by adding a logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also needed (...)

  • Writing a news item

    21 June 2013, by

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In spipeo, MediaSPIP's default theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News-item creation form: for a document of type news item, the default fields are: Publication date (customise the publication date) (...)

On other sites (11087)

  • What to do when last_pts > current_pts using ffmpeg libs (C++)

    11 March 2015, by Rafael Lucio

    I'm having a hard time figuring out where to read up on this.

    I'm building a simple recorder to learn about the video compression universe, and I'm running into some weird behavior.

    First, I need to explain the scenario.

    It's very simple: every time I call av_read_frame( input_context, input_packet ) I save the pts into the last_pts variable.

    So...

    What's bothering me is that in about 10% of my calls to av_read_frame I get
    input_packet.pts < last_pts

    This results in an error message from the encoder when I try to encode the frame.
    With that in mind, I decided to just drop those frames when it happens.

    I don't think it's a good idea to just drop frames, though: if I receive them, they are presumably needed somehow.

    So... what should I do when last_pts > current_pts?

    Here is the test code I'm using, capturing video from the webcam and saving it to an MP4 file with the libx264 encoder:

    #include <QCoreApplication>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libavutil/avutil.h>
    #include <libavdevice/avdevice.h>
    }

    #include <QTime>
    #include <QThread>
    #include <QDebug>

    #define SM_DEBUG

    static const double max_fps = 30;
    static const double min_loop_duration = 1000 / max_fps;
    static const double max_duration = 5; // in seconds

    static void sleep_if_needed(const int &elapsed) {
       int sleep_duration = min_loop_duration - elapsed;

       if (sleep_duration > 0)  {
           QThread::msleep(sleep_duration);
       }
    }

    #ifdef SM_DEBUG
    static void log_packet(const AVPacket *pkt,
                          const AVRational &time_base,
                          int is_input=0)
    {

       qDebug() << ((is_input) ? QString(">>") : QString("<<")) << "Size:" << QString::number(pkt->size) <<
           "pts:" << QString::number(pkt->pts) <<
           "pts_time:" << QString::number(av_q2d(time_base) * pkt->pts) <<
           "dts:" << QString::number(pkt->dts) <<
           "dts_time:" << QString::number(av_q2d(time_base) * pkt->dts);
    }
    #endif

    int main()
    {
       int input_w, input_h, output_w = 640, output_h = 480;

       av_register_all();
       avdevice_register_all();
       avcodec_register_all();
    #ifdef SM_DEBUG
       av_log_set_level(AV_LOG_DEBUG);
    #else
       av_log_set_level(AV_LOG_ERROR);
    #endif

       AVFormatContext *ic;
       AVFormatContext *oc;

       AVInputFormat *ifmt;

       AVDictionary *opts = 0;

       AVCodecContext* dec_ctx;
       AVCodecContext* enc_ctx;
       AVCodec *dec;
       AVCodec *enc;

       AVStream* ist;
       AVStream* ost;

       ifmt = av_find_input_format("v4l2");

       av_dict_set(&opts, "tune", "zerolatency", AV_DICT_APPEND);
       ic = avformat_alloc_context();

       ic->flags |= AVFMT_FLAG_NONBLOCK;

       avformat_open_input(&ic, "/dev/video0", ifmt, &opts);

       avformat_find_stream_info(ic, NULL);

       av_dump_format(ic, 0, ic->filename, 0);

       AVFrame *frame;
       AVFrame *tmp_frame;

       ist = ic->streams[0];

       dec_ctx =  ist->codec;

       input_w = dec_ctx->width;
       input_h = dec_ctx->height;

       dec_ctx->flags |= CODEC_FLAG_LOW_DELAY;
       dec = avcodec_find_decoder(dec_ctx->codec_id);

       av_format_set_video_codec(ic, dec);
       avcodec_open2(dec_ctx, dec, NULL);

       // output

       avformat_alloc_output_context2(&amp;oc, NULL, "MP4", "/home/poste9/grava.mp4");

       enc = avcodec_find_encoder(AV_CODEC_ID_H264);
       ost = avformat_new_stream(oc, enc);
       enc_ctx = ost->codec;

       enc_ctx->codec_id = AV_CODEC_ID_H264;
       enc_ctx->width = output_w;
       enc_ctx->height = output_h;

       ost->time_base.num = ist->time_base.num;
       ost->time_base.den = ist->time_base.den;

       enc_ctx->time_base = ost->time_base;

       enc_ctx->gop_size = 250;
       enc_ctx->keyint_min = 25;
       enc_ctx->qmax = 51;
       enc_ctx->qmin = 30;
       enc_ctx->pix_fmt = AV_PIX_FMT_YUV422P;
       enc_ctx->max_b_frames = 6;
       enc_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
       enc_ctx->flags |= CODEC_FLAG_LOW_DELAY;

       avcodec_open2(enc_ctx, enc, NULL);

       avio_open2(&oc->pb, oc->filename, AVIO_FLAG_WRITE,
                  &oc->interrupt_callback, NULL);

       av_dump_format(oc, 0, oc->filename, 1);

       avformat_write_header(oc, NULL);

       struct SwsContext *sws_ctx;

       sws_ctx = sws_getContext(input_w, input_h,
                                dec_ctx->pix_fmt,
                                output_w, output_h, enc_ctx->pix_fmt,
                                SWS_BICUBIC, NULL, NULL, NULL);

       frame = av_frame_alloc();
       tmp_frame = av_frame_alloc();

       frame->format = enc_ctx->pix_fmt;
       frame->width = output_w;
       frame->height = output_h;
       frame->pts = AV_NOPTS_VALUE;

       av_frame_get_buffer(frame, 32);
       av_frame_make_writable(frame);

       int got_picture=0;
       int got_packet=0;

       double recording_duration = 0;

       QTime timer;

       AVPacket pkt_out;

       av_init_packet(&pkt_out);

       timer.start();

       bool started_recording = false;

       int64_t start_time = 0;

       int64_t last_pts = INT64_MIN;

       while(1) {
           timer.restart();
           AVPacket pkt_in;

           av_read_frame(ic, &pkt_in);

           if (pkt_in.size == 0) {
               sleep_if_needed(timer.elapsed());
               continue;
           }

           avcodec_decode_video2(dec_ctx, tmp_frame, &got_picture, &pkt_in);

    #ifdef SM_DEBUG
           log_packet(&pkt_in, ist->time_base, 1);
    #endif

           if (!started_recording) {

               start_time = pkt_in.dts;
               started_recording = true;
           }

           if (pkt_in.pts < last_pts) {

               sleep_if_needed(timer.elapsed());

               continue;
           }

           last_pts = pkt_in.pts;

           frame->pts = (pkt_in.dts - start_time);

           if (!got_picture) {

               av_free_packet(&pkt_in);

               sleep_if_needed(timer.elapsed());

               continue;
           } else {
               sws_scale(sws_ctx, tmp_frame->data, tmp_frame->linesize,
                 0, input_h, frame->data, frame->linesize);

               av_free_packet(&pkt_in);
           }

           av_init_packet(&pkt_out);

           avcodec_encode_video2(enc_ctx, &pkt_out, frame, &got_packet);

           if (got_packet) {

               if (pkt_out.pts < pkt_out.dts) {
                   pkt_out.dts = pkt_out.pts;
               }

               pkt_out.stream_index = 0;

               recording_duration = pkt_out.pts * av_q2d(ost->time_base);
    #ifdef SM_DEBUG
               log_packet(&pkt_out, ost->time_base, 0);
    #endif

               av_interleaved_write_frame(oc, &pkt_out);

               av_free_packet(&pkt_out);
           }

           if (recording_duration >= max_duration) {

               break;

           } else {

               sleep_if_needed(timer.elapsed());
           }
       }

       av_write_trailer(oc);

       av_dict_free(&opts);

       av_frame_free(&frame);
       av_frame_free(&tmp_frame);

       sws_freeContext(sws_ctx);

       avcodec_close(dec_ctx);
       avcodec_close(enc_ctx);

       avio_close(oc->pb);
       avformat_free_context(oc);

       avformat_close_input(&ic);

       return 0;
    }
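One alternative the post considers but does not implement is rebasing backwards timestamps instead of dropping the frame. A minimal sketch of that idea (my own illustration; `monotonic_pts` is a hypothetical helper, not part of FFmpeg or the poster's code):

```cpp
#include <cstdint>

// Sketch: enforce strictly increasing pts by clamping a backwards jump
// to last_pts + 1 rather than dropping the frame. The variable names
// mirror the question; this is an illustration, not a fix from the post.
int64_t monotonic_pts(int64_t pts, int64_t &last_pts) {
    if (pts <= last_pts) {
        pts = last_pts + 1;   // rebase instead of dropping
    }
    last_pts = pts;
    return pts;
}
```

Rebasing keeps every captured frame at the cost of slightly wrong timing for the out-of-order ones, which is usually less visible than a dropped frame.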
  • PHP FFmpeg video aspect ratio problem [SOLVED]

    29 August 2011, by Herr Kaleun

    I compiled the new version of FFmpeg, and the padding commands have been deprecated.
    As I try to get familiar with the new -vf pad= commands, I want to ask: how can I
    convert a video without changing its aspect ratio?

    I've checked numerous solutions from Stack Overflow; nothing seemed to work.
    Could someone please post a working PHP example or command line? I would be very happy.

    Please note that the videos in question could be 4:3 and also 16:9.

    Let's say I convert a 16:9 video to the 640x480 format. It will need some bars at
    the top and at the bottom. That is what I want to do.

    Thanks

    EDIT :
    I solved the problem on my own. The FFmpeg documentation is a little bit weird, so
    you have to experiment a little bit yourself.
    The padding formula is like:

       $pad_horizontal = $target_width     + $pad_left + $pad_right;
       $pad_vertical   = $target_height;
       // blah
       $command .= " -vf pad=$pad_horizontal:$pad_vertical:". $pad_left .":". $pad_top  .":black";

    Pay special attention to the $pad_vertical part: it is better not to add the paddings
    there, so that ffmpeg's padding calculation is not broken.
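The arithmetic behind this letterboxing can be sketched independently of the PHP below. A small illustration (names and structure are mine, not from the post) of fitting a wide source into a 4:3 target by scaling to the target width and splitting the leftover height into bars:

```cpp
#include <cmath>

// Sketch of the letterbox math: scale the source to the target width,
// then split the remaining vertical space into top/bottom bars.
// Illustration only; this is not the poster's code.
struct Pad { int width, height, top, bottom; };

Pad letterbox(int src_w, int src_h, int dst_w, int dst_h) {
    // Height of the scaled picture at the target width
    int scaled_h = (int)std::round((double)src_h * dst_w / src_w);
    int bar = (dst_h - scaled_h) / 2;
    // Bottom bar absorbs any odd pixel so the total equals dst_h
    return { dst_w, scaled_h, bar, dst_h - scaled_h - bar };
}
```

For a 1920x1080 source and a 640x480 target this gives a 640x360 picture with 60-pixel bars top and bottom, matching the 16:9-into-4:3 case described above.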

    Here is the full source code to the demo

    <?php

       /***********************************************************************************
       get_dimensions()

       Takes in a set of video dimensions - original and target - and returns the optimal conversion
       dimensions.  It will always return the smaller of the original or target dimensions.
       For example: original dimensions of 320x240 and target dimensions of 640x480.
       The result will be 320x240 because converting to 640x480 would be a waste of disk
       space, processing, and bandwidth (assuming these videos are to be downloaded).

       @param $original_width:     The actual width of the original video file which is to be converted.
       @param $original_height:    The actual height of the original video file which is to be converted.
       @param $target_width:       The width of the video file which we will be converting to.
       @param $target_height:      The height of the video file which we will be converting to.
       @param $force_aspect:       Boolean value of whether or not to force conversion to the target's
                             aspect ratio using padding (so the video isn't stretched).  If false, the
                             conversion dimensions will retain the aspect ratio of the original.
                             Optional parameter.  Defaults to true.
       @return: An array containing the size and padding information to be used for conversion.
                   Format:
                   Array
                   (
                       [width] => int
                       [height] => int
                       [padtop] => int // top padding (if applicable)
                       [padbottom] => int // bottom padding (if applicable)
                       [padleft] => int // left padding (if applicable)
                       [padright] => int // right padding (if applicable)
                   )
       ***********************************************************************************/
       function get_dimensions($original_width,$original_height,$target_width,$target_height,$force_aspect)
       {
           if(!isset($force_aspect))
           {
               $force_aspect = true;
           }
           // Array to be returned by this function
           $target = array();
           $target['padleft'] = 0;
           $target['padright'] = 0;
           $target['padbottom'] = 0;
           $target['padtop'] = 0;



           // Target aspect ratio (width / height)
           $aspect = $target_width / $target_height;
           // Target reciprocal aspect ratio (height / width)
           $raspect = $target_height / $target_width;

           if($original_width/$original_height !== $aspect)
           {
               // Aspect ratio is different
               if($original_width/$original_height > $aspect)
               {
                   // Width is the greater of the two dimensions relative to the target dimensions
               if($original_width < $target_width)
                   {
                       // Original video is smaller.  Scale down dimensions for conversion
                       $target_width = $original_width;
                       $target_height = round($raspect * $target_width);
                   }
                   // Calculate height from width
                   $original_height = round($original_height / $original_width * $target_width);
                   $original_width = $target_width;
                   if($force_aspect)
                   {
                       // Pad top and bottom
                       $dif = round(($target_height - $original_height) / 2);
                   $target['padtop'] = $dif;
                   $target['padbottom'] = $dif;
                   }
               }
               else
               {
                   // Height is the greater of the two dimensions relative to the target dimensions
               if($original_height < $target_height)
                   {
                       // Original video is smaller.  Scale down dimensions for conversion
                       $target_height = $original_height;
                       $target_width = round($aspect * $target_height);
                   }
                   //Calculate width from height
                   $original_width = round($original_width / $original_height * $target_height);
                   $original_height = $target_height;
                   if($force_aspect)
                   {
                       // Pad left and right
                       $dif = round(($target_width - $original_width) / 2);
                   $target['padleft'] = $dif;
                   $target['padright'] = $dif;
                   }
               }
           }
           else
           {
               // The aspect ratio is the same
               if($original_width !== $target_width)
               {
               if($original_width < $target_width)
                   {
                       // The original video is smaller.  Use its resolution for conversion
                       $target_width = $original_width;
                       $target_height = $original_height;
                   }
                   else
                   {
                       // The original video is larger,  Use the target dimensions for conversion
                       $original_width = $target_width;
                       $original_height = $target_height;
                   }
               }
           }
           if($force_aspect)
           {
               // Use the target_ vars because they contain dimensions relative to the target aspect ratio
           $target['width'] = $target_width;
           $target['height'] = $target_height;
           }
           else
           {
           // Use the original_ vars because they contain dimensions relative to the original's aspect ratio
           $target['width'] = $original_width;
           $target['height'] = $original_height;
           }
           return $target;
       }

       function get_vid_dim($file)
       {
           $command = '/usr/bin/ffmpeg -i ' . escapeshellarg($file) . ' 2>&1';
           $dimensions = array();
           exec($command,$output,$status);
           if (!preg_match('/Stream #(?:[0-9\.]+)(?:.*)\: Video: (?P<videocodec>.*) (?P<width>[0-9]*)x(?P<height>[0-9]*)/',implode("\n",$output),$matches))
           {
               preg_match('/Could not find codec parameters \(Video: (?P<videocodec>.*) (?P<width>[0-9]*)x(?P<height>[0-9]*)\)/',implode("\n",$output),$matches);
           }
           if(!empty($matches['width']) && !empty($matches['height']))
           {
               $dimensions['width'] = $matches['width'];
               $dimensions['height'] = $matches['height'];
           }
           }
           return $dimensions;
       }


       $command    = '/usr/bin/ffmpeg -i ' . $src . ' -ab 96k -b 700k -ar 44100 -f flv -s ' . '640x480 -acodec mp3 '. $video_output_dir . $video_filename . ' 2>&1';


       define( 'VIDEO_WIDTH',      '640' );
       define( 'VIDEO_HEIGHT',     '480' );

       $src_1              = getcwd() .'/'. 'test_video1.mpeg';
       $video_filename1    = 'video1.flv';

       $src_2              = getcwd() .'/'. 'test_video2.mp4';
       $video_filename2    = 'video2.flv';

       $src_3              = getcwd() .'/'. 'test_video3.mp4';
       $video_filename3    = 'video3.flv';

       convert_video( $src_1, $video_filename1 );
       convert_video( $src_2, $video_filename2 );
       convert_video( $src_3, $video_filename3 );

       function convert_video( $src = '', $video_filename = '' )
       {

           $video_output_dir   = getcwd() .'/';

           @unlink ( $video_output_dir . $video_filename );

           $original   = get_vid_dim($src);
           $target     = get_dimensions( $original['width'], $original['height'], VIDEO_WIDTH, VIDEO_HEIGHT, TRUE );

           echo '<pre>';
           print_r( $original );
           echo '</pre>';
           echo '<pre>';
           print_r( $target );
           echo '</pre>';



           $target_width   = $target['width'];
           $target_height  = $target['height'];

           $pad_left       = $target['padleft'];
           $pad_right      = $target['padright'];
           $pad_bottom     = $target['padbottom'];
           $pad_top        = $target['padtop'];

           $pad_horizontal = $target_width     + $pad_left + $pad_right;
           $pad_vertical   = $target_height; //    + $pad_top + $pad_bottom;


           $command = '/usr/bin/ffmpeg -i ' . $src;

           // $command .= " -s {$target_width}x{$target_height} ";

           $command .= " -vf pad=$pad_horizontal:$pad_vertical:". $pad_left .":". $pad_top  .":black";

           $command .= ' -ab 96k -b 700k -ar 44100';
           $command .= ' -f flv ';
           $command .= ' -qscale 4';

           $command .= ' -ss 30';
           $command .= ' -t 5';

           $command .= ' -ac 2 -ab 128k -qscale 5 ';
           $command .= ' ' . $video_output_dir . $video_filename;


           exec( $command, $output, $status );

           echo &#39;<pre>&#39;;
           print_r( $command );
           echo &#39;</pre>&#39;;

           if ( $status == 0 )
           {
               echo &#39;<br />Convert OK. <br />&#39;;
           }
           else
           {
               echo &#39;<pre>&#39;;
               print_r( $output );
               echo &#39;</pre>&#39;;
           }

           echo &#39;<br />&#39;;
           echo &#39;<br />&#39;;

       }





    ?>

    Thank you and have fun :)

  • RTMP streaming using FFMPEG and HLS conversion in NGINX

    1 May 2019, by Jonathan Piat

    I have some FFmpeg code in C++ that generates an RTMP stream from H264 NALUs and audio samples encoded in AAC. I'm using NGINX to take the RTMP stream and forward it to clients, and that works fine. My issue is that when I use NGINX to convert the RTMP stream to HLS, no HLS chunks or playlist are generated. If I use ffmpeg to copy the RTMP stream and publish a new stream to NGINX, the HLS conversion works.

    Here is what I get when I do the stream copy using FFmpeg:

    Input #0, flv, from 'rtmp://127.0.0.1/live/beam_0'
    Metadata:
    Server          : NGINX RTMP (github.com/sergey-dryabzhinsky/nginx-rtmp-module)
    displayWidth    : 1920
    displayHeight   : 1080
    fps             : 30
    profile         :
    level           :
    Duration: 00:00:00.00, start: 5.019000, bitrate: N/A
    Stream #0:0: Audio: aac, 44100 Hz, mono, fltp, 128 kb/s
    Stream #0:1: Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 1920x1080 (1920x1088), 8000 kb/s, 30 fps, 30.30 tbr, 1k tbn, 60 tbc

    Output #0, flv, to 'rtmp://localhost/live/copy_stream':
    Metadata:
    Server          : NGINX RTMP (github.com/sergey-dryabzhinsky/nginx-rtmp-module)
    displayWidth    : 1920
    displayHeight   : 1080
    fps             : 30
    profile         :
    level           :
    encoder         : Lavf57.83.100
    Stream #0:0: Video: h264 (High), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p(progressive, left), 1920x1080 (0x0), q=2-31, 8000 kb/s, 30 fps, 30.30 tbr, 1k tbn, 1k tbc
    Stream #0:1: Audio: aac ([10][0][0][0] / 0x000A), 44100 Hz, mono, fltp, 128 kb/s

    There are not many differences between the two streams, so I don't really get what is going wrong or what I should change in my C++ code. One very weird issue I see is that my audio stream is 48 kHz when I publish it, but here it is recognized as 44100 Hz:

    Output #0, flv, to 'rtmp://127.0.0.1/live/beam_0':
    Stream #0:0, 0, 1/1000: Video: h264 (libx264), 1 reference frame, yuv420p, 1920x1080, 0/1, q=-1--1, 8000 kb/s, 30 fps, 1k tbn, 1k tbc
    Stream #0:1, 0, 1/1000: Audio: aac, 48000 Hz, 1 channels, fltp, 128 kb/s

    UPDATE 1 :

    The output context is created using the following code :

    pOutputFormatContext->oformat = av_guess_format("flv", url.toStdString().c_str(), nullptr);
    memcpy(pOutputFormatContext->filename, url.toStdString().c_str(), url.length());
    avio_open(&pOutputFormatContext->pb,  url.toStdString().c_str(), AVIO_FLAG_WRITE);
    pOutputFormatContext->oformat->video_codec = AV_CODEC_ID_H264 ;
    pOutputFormatContext->oformat->audio_codec = AV_CODEC_ID_AAC ;

    The audio stream is created with :

    AVDictionary *opts = nullptr;
    //pAudioCodec = avcodec_find_encoder(AV_CODEC_ID_VORBIS);
    pAudioCodec = avcodec_find_encoder(AV_CODEC_ID_AAC);

    pAudioCodecContext = avcodec_alloc_context3(pAudioCodec);

    pAudioCodecContext->thread_count = 1;
    pAudioFrame = av_frame_alloc();

    av_dict_set(&opts, "strict", "experimental", 0);
    pAudioCodecContext->bit_rate = 128000;
    pAudioCodecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
    pAudioCodecContext->sample_rate = static_cast<int>(sample_rate) ;
    pAudioCodecContext->channels = nb_channels ;
    pAudioCodecContext->time_base.num = 1;
    pAudioCodecContext->time_base.den = 1000 ;

    //pAudioCodecContext->time_base.den = static_cast<int>(sample_rate) ;

    pAudioCodecContext->codec_type = AVMEDIA_TYPE_AUDIO;
    avcodec_open2(pAudioCodecContext, pAudioCodec, &opts);


    pAudioFrame->nb_samples     = pAudioCodecContext->frame_size;
    pAudioFrame->format         = pAudioCodecContext->sample_fmt;
    pAudioFrame->channel_layout = pAudioCodecContext->channel_layout;
    mAudioSamplesBufferSize = av_samples_get_buffer_size(nullptr, pAudioCodecContext->channels, pAudioCodecContext->frame_size, pAudioCodecContext->sample_fmt, 0);

    avcodec_fill_audio_frame(pAudioFrame, pAudioCodecContext->channels, pAudioCodecContext->sample_fmt, (const uint8_t*) pAudioSamples, mAudioSamplesBufferSize, 0);

    if( pOutputFormatContext->oformat->flags & AVFMT_GLOBALHEADER )
       pAudioCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    pAudioStream = avformat_new_stream(pOutputFormatContext, 0);

    pAudioStream->codec = pAudioCodecContext ;
    pAudioStream->id =  pOutputFormatContext->nb_streams-1;
    pAudioStream->time_base.den = pAudioStream->pts.den =  pAudioCodecContext->time_base.den;
    pAudioStream->time_base.num = pAudioStream->pts.num =  pAudioCodecContext->time_base.num;

    mAudioPacketTs = 0 ;

    The video stream is created with :

    pVideoCodec = avcodec_find_encoder(AV_CODEC_ID_H264);

    pVideoCodecContext = avcodec_alloc_context3(pVideoCodec);

    pVideoCodecContext->codec_type = AVMEDIA_TYPE_VIDEO ;
    pVideoCodecContext->thread_count = 1 ;
    pVideoCodecContext->width = width;
    pVideoCodecContext->height = height;
    pVideoCodecContext->bit_rate = 8000000 ;
    pVideoCodecContext->time_base.den = 1000 ;
    pVideoCodecContext->time_base.num = 1 ;
    pVideoCodecContext->gop_size = 10;
    pVideoCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
    pVideoCodecContext->flags = 0x0007 ;

    pVideoCodecContext->extradata_size = sizeof(extra_data_buffer);
    pVideoCodecContext->extradata = (uint8_t *) av_malloc ( sizeof(extra_data_buffer) );
    memcpy ( pVideoCodecContext->extradata, extra_data_buffer, sizeof(extra_data_buffer));

    avcodec_open2(pVideoCodecContext,pVideoCodec,0);

    pVideoFrame = av_frame_alloc();

    AVDictionary *opts = nullptr;
    av_dict_set(&opts, "strict", "experimental", 0);
    memcpy(pOutputFormatContext->filename, this->mStreamUrl.toStdString().c_str(), this->mStreamUrl.length());
    pOutputFormatContext->oformat->video_codec = AV_CODEC_ID_H264 ;

    if( pOutputFormatContext->oformat->flags & AVFMT_GLOBALHEADER )
       pVideoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    pVideoStream = avformat_new_stream(pOutputFormatContext, pVideoCodec);


    //The following section is needed because AVFormat complains about parameters being passed through the context and not CodecPar
    pVideoStream->codec = pVideoCodecContext ;
    pVideoStream->id = pOutputFormatContext->nb_streams-1;
    pVideoStream->time_base.den = pVideoStream->pts.den =  pVideoCodecContext->time_base.den;
    pVideoStream->time_base.num = pVideoStream->pts.num =  pVideoCodecContext->time_base.num;
    pVideoStream->avg_frame_rate.num = fps ;
    pVideoStream->avg_frame_rate.den = 1 ;
    pVideoStream->codec->gop_size = 10 ;

    mVideoPacketTs = 0 ;

    Then each video packet and audio packet is pushed with correctly scaled pts/dts. I have fixed the 48 kHz issue: it happened because I was configuring the stream through the codec context rather than through the codec parameters (because of a warning at runtime).
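The timestamp scaling mentioned here boils down to converting counts between time bases, which FFmpeg does with av_rescale_q. A simplified sketch of that conversion (my own illustration, with no overflow protection, unlike the real function):

```cpp
#include <cstdint>

// Sketch of av_rescale_q-style time-base conversion: a timestamp
// counted in units of src_num/src_den seconds is re-expressed in units
// of dst_num/dst_den seconds. The streams above use a 1/1000 time base,
// so audio pts counted in samples at 48 kHz must be rescaled this way
// before muxing. Illustration only; use av_rescale_q in real code.
int64_t rescale_ts(int64_t ts, int64_t src_num, int64_t src_den,
                   int64_t dst_num, int64_t dst_den) {
    return ts * src_num * dst_den / (src_den * dst_num);
}
```

For example, 48000 samples at a 1/48000 time base rescale to 1000 in a 1/1000 time base, i.e. one second of audio.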

    This RTMP stream still does not work for HLS conversion by NGINX, but if I just use FFmpeg to take the RTMP stream from NGINX and re-publish it with the copy codec, it works.