Advanced search

Media (39)

Keyword: - Tags -/audio

Other articles (53)

  • General document management

    13 May 2011, by

    MediaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
    The tables below explain what MediaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (5703)

  • FFMPEG: cannot play MPEG4 video encoded from images. Duration and bitrate undefined

    17 June 2013, by KaiK

    I've been trying to put an H264 video stream created from images into an MPEG4 container. I've been able to get the video stream from the images successfully, but when muxing it into the container I must be doing something wrong, because no player is able to play the result, except ffplay, which plays the video until the end and then leaves the last image frozen forever.

    ffplay cannot identify the duration or the bitrate, so I suppose it might be an issue related to dts and pts, but I've searched for how to solve it with no success.

    Here's the ffplay output:

    ~$ ffplay testContainer.mp4
    ffplay version git-2012-01-31-c673671 Copyright (c) 2003-2012 the FFmpeg developers
     built on Feb  7 2012 20:32:12 with gcc 4.4.3
 configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfaac --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-x11grab --enable-libvpx --enable-libmp3lame --enable-debug=3
     libavutil      51. 36.100 / 51. 36.100
     libavcodec     54.  0.102 / 54.  0.102
     libavformat    54.  0.100 / 54.  0.100
     libavdevice    53.  4.100 / 53.  4.100
     libavfilter     2. 60.100 /  2. 60.100
     libswscale      2.  1.100 /  2.  1.100
     libswresample   0.  6.100 /  0.  6.100
     libpostproc    52.  0.100 / 52.  0.100
    [h264 @ 0xa4849c0] max_analyze_duration 5000000 reached at 5000000
    [h264 @ 0xa4849c0] Estimating duration from bitrate, this may be inaccurate
    Input #0, h264, from 'testContainer.mp4':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (High), yuv420p, 512x512, 25 fps, 25 tbr, 1200k tbn, 50 tbc
          2.74 A-V:  0.000 fd=   0 aq=    0KB vq=  160KB sq=    0B f=0/0   0/0

    Structure

    My code is C++ styled, so I have a class that handles all the encoding, and then a main that initializes it, passes it some images in a loop, and finally signals the end of the process, as follows:

    int main (int argc, const char * argv[])
    {

    MyVideoEncoder* videoEncoder = new MyVideoEncoder(512, 512, 512, 512, "output/testContainer.mp4", 25, 20);
    if(!videoEncoder->initWithCodec(MyVideoEncoder::H264))
    {
       std::cout << "something really bad happened. Exit!!" << std::endl;
       exit(-1);
    }

    /* encode 1 second of video */
    for(int i=0;i<228;i++) {

       std::stringstream filepath;
       filepath << "input2/image" << i << ".jpg";

       videoEncoder->encodeFrameFromJPG(const_cast<char*>(filepath.str().c_str()));

    }

    videoEncoder->endEncoding();

    }

    Hints

    I've seen a lot of examples of decoding one video and encoding it into another, but no working example of muxing a video from scratch, so I'm not sure how to handle the packet pts and dts values. That's why I suspect the issue must be in the following method:

    bool MyVideoEncoder::encodeImageAsFrame(){
       bool res = false;


       pTempFrame->pts = frameCount * frameRate * 90; //90Hz by the standard for PTS-values
       frameCount++;

       /* encode the image */
       out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, pTempFrame);


       if (out_size > 0) {
           AVPacket pkt;
           av_init_packet(&pkt);
           pkt.pts = pkt.dts = 0;

           if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE) {
               pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
                       pVideoStream->codec->time_base, pVideoStream->time_base);
               pkt.dts = pTempFrame->pts;

           }
           if (pVideoStream->codec->coded_frame->key_frame) {
               pkt.flags |= AV_PKT_FLAG_KEY;
           }
           pkt.stream_index = pVideoStream->index;
           pkt.data = outbuf;
           pkt.size = out_size;

           res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
       }


       return res;
    }
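    A side observation on the method above: pkt.pts is rescaled from the codec time_base into the stream time_base, but pkt.dts is taken directly from pTempFrame->pts, so the two end up expressed in different units, which muxers typically mis-time or reject. The conversion av_rescale_q performs is just a change of time units; the sketch below re-implements that arithmetic in plain C++ to make it visible. The time_base values used (1/25 for the codec, 1/90000 for the stream) are assumptions for illustration, not values taken from the question.

    ```cpp
    #include <cstdint>
    #include <iostream>

    // Minimal stand-in for AVRational, to illustrate how a frame index
    // expressed in codec time_base ticks maps to stream time_base ticks.
    struct Rational { int num; int den; };

    // a * (bq / cq), the same arithmetic av_rescale_q performs (without
    // its overflow protection), computed in 64-bit.
    int64_t rescale_q(int64_t a, Rational bq, Rational cq) {
        return a * bq.num * cq.den / (static_cast<int64_t>(bq.den) * cq.num);
    }

    int main() {
        Rational codec_tb  = {1, 25};     // assumed encoder time_base (25 fps)
        Rational stream_tb = {1, 90000};  // assumed MP4 stream time_base

        // Frame n encoded with pts = n in codec ticks lands at n * 3600
        // stream ticks, since 90000 / 25 = 3600 ticks per frame.
        for (int64_t frame = 0; frame < 5; ++frame) {
            std::cout << "frame " << frame << " -> pts "
                      << rescale_q(frame, codec_tb, stream_tb) << "\n";
        }
        return 0;
    }
    ```

    The point of the sketch is that both pkt.pts and pkt.dts must go through the same conversion; if one is rescaled and the other is not, the muxer sees inconsistent timestamps.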

    Any help or insight would be appreciated. Thanks in advance!

    P.S. The rest of the code, where the configuration is done, is the following:

    // MyVideoEncoder.cpp

    #include "MyVideoEncoder.h"
    #include "Image.hpp"
    #include <cstring>
    #include <sstream>
    #include

    #define MAX_AUDIO_PACKET_SIZE (128 * 1024)



    MyVideoEncoder::MyVideoEncoder(int inwidth, int inheight,
           int outwidth, int outheight, char* fileOutput, int framerate,
           int compFactor) {
       inWidth = inwidth;
       inHeight = inheight;
       outWidth = outwidth;
       outHeight = outheight;
       pathToMovie = fileOutput;
       frameRate = framerate;
       compressionFactor = compFactor;
       frameCount = 0;

    }

    MyVideoEncoder::~MyVideoEncoder() {

    }

    bool MyVideoEncoder::initWithCodec(
           MyVideoEncoder::encoderType type) {
       if (!initializeEncoder(type))
           return false;

       if (!configureFrames())
           return false;

       return true;

    }

    bool MyVideoEncoder::encodeFrameFromJPG(char* filepath) {

       setJPEGImage(filepath);
       return encodeImageAsFrame();
    }



    bool MyVideoEncoder::encodeDelayedFrames(){
       bool res = false;

       while(out_size > 0)
       {
           pTempFrame->pts = frameCount * frameRate * 90; //90Hz by the standard for PTS-values
           frameCount++;

           out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, NULL);

           if (out_size > 0)
           {
               AVPacket pkt;
            av_init_packet(&pkt);
               pkt.pts = pkt.dts = 0;

               if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE) {
                   pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
                           pVideoStream->codec->time_base, pVideoStream->time_base);
                   pkt.dts = pTempFrame->pts;
               }
               if (pVideoStream->codec->coded_frame->key_frame) {
                   pkt.flags |= AV_PKT_FLAG_KEY;
               }
               pkt.stream_index = pVideoStream->index;
               pkt.data = outbuf;
               pkt.size = out_size;


               res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
           }

       }

       return res;
    }






    void MyVideoEncoder::endEncoding() {
       encodeDelayedFrames();
       closeEncoder();
    }

    bool MyVideoEncoder::setJPEGImage(char* imgFilename) {
       Image* rgbImage = new Image();
       rgbImage->read_jpeg_image(imgFilename);

       bool ret = setImageFromRGBArray(rgbImage->get_data());

       delete rgbImage;

       return ret;
    }

    bool MyVideoEncoder::setImageFromRGBArray(unsigned char* data) {

       memcpy(pFrameRGB->data[0], data, 3 * inWidth * inHeight);

       int ret = sws_scale(img_convert_ctx, pFrameRGB->data, pFrameRGB->linesize,
               0, inHeight, pTempFrame->data, pTempFrame->linesize);

       pFrameRGB->pts++;
       if (ret)
           return true;
       else
           return false;
    }

    bool MyVideoEncoder::initializeEncoder(encoderType type) {

       av_register_all();

       pTempFrame = avcodec_alloc_frame();
       pTempFrame->pts = 0;
       pOutFormat = NULL;
       pFormatContext = NULL;
       pVideoStream = NULL;
       pAudioStream = NULL;
       bool res = false;

       // Create format
       switch (type) {
           case MyVideoEncoder::H264:
               pOutFormat = av_guess_format("h264", NULL, NULL);
               break;
           case MyVideoEncoder::MPEG1:
               pOutFormat = av_guess_format("mpeg", NULL, NULL);
               break;
           default:
               pOutFormat = av_guess_format(NULL, pathToMovie.c_str(), NULL);
               break;
       }

       if (!pOutFormat) {
           pOutFormat = av_guess_format(NULL, pathToMovie.c_str(), NULL);
           if (!pOutFormat) {
               std::cout &lt;&lt; "output format not found" &lt;&lt; std::endl;
               return false;
           }
       }


       // allocate context
       pFormatContext = avformat_alloc_context();
       if(!pFormatContext)
       {
           std::cout &lt;&lt; "cannot alloc format context" &lt;&lt; std::endl;
           return false;
       }

       pFormatContext->oformat = pOutFormat;

       memcpy(pFormatContext->filename, pathToMovie.c_str(), min( (const int) pathToMovie.length(), (const int)sizeof(pFormatContext->filename)));


       //Add video and audio streams
       pVideoStream = AddVideoStream(pFormatContext,
               pOutFormat->video_codec);

       // Set the output parameters
       av_dump_format(pFormatContext, 0, pathToMovie.c_str(), 1);

       // Open Video stream
       if (pVideoStream) {
           res = openVideo(pFormatContext, pVideoStream);
       }


       if (res && !(pOutFormat->flags & AVFMT_NOFILE)) {
           if (avio_open(&pFormatContext->pb, pathToMovie.c_str(), AVIO_FLAG_WRITE) < 0) {
               res = false;
               std::cout << "Cannot open output file" << std::endl;
           }
       }

       if (res) {
           avformat_write_header(pFormatContext,NULL);
       }
       else{
           freeMemory();
           std::cout &lt;&lt; "Cannot init encoder" &lt;&lt; std::endl;
       }


       return res;

    }



    AVStream *MyVideoEncoder::AddVideoStream(AVFormatContext *pContext, CodecID codec_id)
    {
     AVCodecContext *pCodecCxt = NULL;
     AVStream *st    = NULL;

     st = avformat_new_stream(pContext, NULL);
     if (!st)
     {
         std::cout &lt;&lt; "Cannot add new video stream" &lt;&lt; std::endl;
         return NULL;
     }
     st->id = 0;

     pCodecCxt = st->codec;
     pCodecCxt->codec_id = (CodecID)codec_id;
     pCodecCxt->codec_type = AVMEDIA_TYPE_VIDEO;
     pCodecCxt->frame_number = 0;


     // Put sample parameters.
     pCodecCxt->bit_rate = outWidth * outHeight * 3 * frameRate/ compressionFactor;

     pCodecCxt->width  = outWidth;
     pCodecCxt->height = outHeight;

     /* frames per second */
     pCodecCxt->time_base= (AVRational){1,frameRate};

     /* pixel format must be YUV */
     pCodecCxt->pix_fmt = PIX_FMT_YUV420P;


     if (pCodecCxt->codec_id == CODEC_ID_H264)
     {
         av_opt_set(pCodecCxt->priv_data, "preset", "slow", 0);
         av_opt_set(pCodecCxt->priv_data, "vprofile", "baseline", 0);
         pCodecCxt->max_b_frames = 16;
     }
     if (pCodecCxt->codec_id == CODEC_ID_MPEG1VIDEO)
     {
         pCodecCxt->mb_decision = 1;
     }

     if(pContext->oformat->flags & AVFMT_GLOBALHEADER)
     {
         pCodecCxt->flags |= CODEC_FLAG_GLOBAL_HEADER;
     }

     pCodecCxt->coder_type = 1;  // coder = 1
     pCodecCxt->flags|=CODEC_FLAG_LOOP_FILTER;   // flags=+loop
     pCodecCxt->me_range = 16;   // me_range=16
     pCodecCxt->gop_size = 50;  // g=250
     pCodecCxt->keyint_min = 25; // keyint_min=25


     return st;
    }


    bool MyVideoEncoder::openVideo(AVFormatContext *oc, AVStream *pStream)
    {
       AVCodec *pCodec;
       AVCodecContext *pContext;

       pContext = pStream->codec;

       // Find the video encoder.
       pCodec = avcodec_find_encoder(pContext->codec_id);
       if (!pCodec)
       {
           std::cout &lt;&lt; "Cannot found video codec" &lt;&lt; std::endl;
           return false;
       }

       // Open the codec.
       if (avcodec_open2(pContext, pCodec, NULL) < 0)
       {
           std::cout << "Cannot open video codec" << std::endl;
           return false;
       }


       return true;
    }



    bool MyVideoEncoder::configureFrames() {

       /* alloc image and output buffer */
       outbuf_size = outWidth*outHeight*3;
       outbuf = (uint8_t*) malloc(outbuf_size);

       av_image_alloc(pTempFrame->data, pTempFrame->linesize, pVideoStream->codec->width,
               pVideoStream->codec->height, pVideoStream->codec->pix_fmt, 1);

       //Alloc RGB temp frame
       pFrameRGB = avcodec_alloc_frame();
       if (pFrameRGB == NULL)
           return false;
       avpicture_alloc((AVPicture *) pFrameRGB, PIX_FMT_RGB24, inWidth, inHeight);

       pFrameRGB->pts = 0;

       //Set SWS context to convert from RGB images to YUV images
       if (img_convert_ctx == NULL) {
           img_convert_ctx = sws_getContext(inWidth, inHeight, PIX_FMT_RGB24,
                   outWidth, outHeight, pVideoStream->codec->pix_fmt, /*SWS_BICUBIC*/
                   SWS_FAST_BILINEAR, NULL, NULL, NULL);
           if (img_convert_ctx == NULL) {
               fprintf(stderr, "Cannot initialize the conversion context!\n");
               return false;
           }
       }

       return true;

    }

    void MyVideoEncoder::closeEncoder() {
       av_write_frame(pFormatContext, NULL);
       av_write_trailer(pFormatContext);
       freeMemory();
    }


    void MyVideoEncoder::freeMemory()
    {
     bool res = true;

     if (pFormatContext)
     {
       // close video stream
       if (pVideoStream)
       {
         closeVideo(pFormatContext, pVideoStream);
       }

       // Free the streams.
       for(size_t i = 0; i < pFormatContext->nb_streams; i++)
       {
         av_freep(&pFormatContext->streams[i]->codec);
         av_freep(&pFormatContext->streams[i]);
       }

       if (!(pFormatContext->flags & AVFMT_NOFILE) && pFormatContext->pb)
       {
         avio_close(pFormatContext->pb);
       }

       // Free the stream.
       av_free(pFormatContext);
       pFormatContext = NULL;
     }
    }

    void MyVideoEncoder::closeVideo(AVFormatContext *pContext, AVStream *pStream)
    {
     avcodec_close(pStream->codec);
     if (pTempFrame)
     {
       if (pTempFrame->data)
       {
         av_free(pTempFrame->data[0]);
         pTempFrame->data[0] = NULL;
       }
       av_free(pTempFrame);
       pTempFrame = NULL;
     }

     if (pFrameRGB)
     {
       if (pFrameRGB->data)
       {
         av_free(pFrameRGB->data[0]);
         pFrameRGB->data[0] = NULL;
       }
       av_free(pFrameRGB);
       pFrameRGB = NULL;
     }

    }
  • FFmpeg Slideshow issues

    19 December 2013, by loveforfire33

    I'm trying to get my head around ffmpeg to create a slideshow where each image is displayed for 5 seconds with some audio. I've created a bat file to run the following so far:

    ffmpeg -f image2 -i image-%%03d.jpg -i music.mp3 output.mpg

    It gets the images and displays them all very quickly in the first second of the video, then plays out the rest of the audio while showing the last image.

    I want to make the images stay up longer (about 5 seconds), and stop the video after the last frame (not playing the rest of the song). Are either of these things possible? I guess I could hack the frame rate by having hundreds of copies of the same image to keep each one up longer, but this is far from ideal!
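    One way both goals are commonly approached is to set the input frame rate of the image2 demuxer and cut the output at the shorter stream. The command below is a sketch assuming the same file names as above; the exact option spelling (`-framerate` as an input option, versus an input-side `-r` on older builds) should be checked against the installed ffmpeg version:

    ```shell
    # Read one image every 5 seconds (1/5 fps input), re-encode at a normal
    # 25 fps output rate, and stop the output when the shorter stream (the
    # image sequence) ends, dropping the remaining audio.
    ffmpeg -framerate 1/5 -i image-%03d.jpg -i music.mp3 \
           -r 25 -pix_fmt yuv420p -shortest output.mpg
    ```

    Note that inside a bat file the pattern needs the doubled percent sign, image-%%03d.jpg, as in the original command.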

    Thanks