Advanced search

Media (0)

Keyword: - Tags -/performance

No media matching your criteria is available on this site.

Other articles (62)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip to find out.

  • Supported formats

    28 January 2010

    The following commands give information about the formats and codecs handled by the local ffmpeg installation (a short usage example follows below):
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, we (...)
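
    As a quick illustration (assuming a POSIX shell with grep available), the output of the first command can be filtered to check whether the local build supports a particular codec, for example Theora:

        ffmpeg -codecs | grep -i theora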

On other sites (7124)

  • FFMPEG: cannot play MPEG4 video encoded from images. Duration and bitrate undefined

    17 June 2013, by KaiK

    I've been trying to put an H.264 video stream created from images into an MPEG-4 container. I can get the video stream from the images successfully, but when muxing it into the container I must be doing something wrong, because no player can play the result except ffplay, which plays the video to the end and then leaves the last image frozen indefinitely.

    ffplay cannot identify the duration or the bitrate, so I suppose it might be an issue related to dts and pts, but I've searched for how to solve it without success.

    Here's the ffplay output:

    ~$ ffplay testContainer.mp4
    ffplay version git-2012-01-31-c673671 Copyright (c) 2003-2012 the FFmpeg developers
     built on Feb  7 2012 20:32:12 with gcc 4.4.3
     configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-        libfaac --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-x11grab --enable-libvpx --enable-libmp3lame --enable-debug=3
     libavutil      51. 36.100 / 51. 36.100
     libavcodec     54.  0.102 / 54.  0.102
     libavformat    54.  0.100 / 54.  0.100
     libavdevice    53.  4.100 / 53.  4.100
     libavfilter     2. 60.100 /  2. 60.100
     libswscale      2.  1.100 /  2.  1.100
     libswresample   0.  6.100 /  0.  6.100
     libpostproc    52.  0.100 / 52.  0.100
    [h264 @ 0xa4849c0] max_analyze_duration 5000000 reached at 5000000
    [h264 @ 0xa4849c0] Estimating duration from bitrate, this may be inaccurate
    Input #0, h264, from 'testContainer.mp4':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (High), yuv420p, 512x512, 25 fps, 25 tbr, 1200k tbn, 50 tbc
          2.74 A-V:  0.000 fd=   0 aq=    0KB vq=  160KB sq=    0B f=0/0   0/0

    Structure

    My code is C++, so I have a class that handles all the encoding, and a main that initializes it, passes some images in a loop, and finally signals the end of the process, as follows:

    int main (int argc, const char * argv[])
    {

    MyVideoEncoder* videoEncoder = new MyVideoEncoder(512, 512, 512, 512, "output/testContainer.mp4", 25, 20);
    if(!videoEncoder->initWithCodec(MyVideoEncoder::H264))
    {
       std::cout << "something really bad happened. Exit!!" << std::endl;
       exit(-1);
    }

    /* encode 1 second of video */
    for(int i=0;i<228;i++) {

       std::stringstream filepath;
       filepath << "input2/image" << i << ".jpg";

       videoEncoder->encodeFrameFromJPG(const_cast<char*>(filepath.str().c_str()));

    }

    videoEncoder->endEncoding();

    }

    Hints

    I've seen a lot of examples of decoding one video and encoding it into another, but no working example of muxing a video from scratch, so I'm not sure how to handle the pts and dts packet values. That's why I suspect the issue is in the following method (a minimal timestamp sketch is included after the full code at the end of this post):

    bool MyVideoEncoder::encodeImageAsFrame(){
       bool res = false;


       pTempFrame->pts = frameCount * frameRate * 90; //90Hz by the standard for PTS-values
       frameCount++;

       /* encode the image */
       out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, pTempFrame);


       if (out_size > 0) {
           AVPacket pkt;
           av_init_packet(&pkt);
           pkt.pts = pkt.dts = 0;

           if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE) {
               pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
                       pVideoStream->codec->time_base, pVideoStream->time_base);
               pkt.dts = pTempFrame->pts;

           }
           if (pVideoStream->codec->coded_frame->key_frame) {
               pkt.flags |= AV_PKT_FLAG_KEY;
           }
           pkt.stream_index = pVideoStream->index;
           pkt.data = outbuf;
           pkt.size = out_size;

           res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
       }


       return res;
    }

    Any help or insight would be appreciated. Thanks in advance!

    P.S. The rest of the code, where the configuration is done, is the following:

    // MyVideoEncoder.cpp

    #include "MyVideoEncoder.h"
    #include "Image.hpp"
    #include <cstring>
    #include <sstream>
    #include

    #define MAX_AUDIO_PACKET_SIZE (128 * 1024)



    MyVideoEncoder::MyVideoEncoder(int inwidth, int inheight,
           int outwidth, int outheight, char* fileOutput, int framerate,
           int compFactor) {
       inWidth = inwidth;
       inHeight = inheight;
       outWidth = outwidth;
       outHeight = outheight;
       pathToMovie = fileOutput;
       frameRate = framerate;
       compressionFactor = compFactor;
       frameCount = 0;

    }

    MyVideoEncoder::~MyVideoEncoder() {

    }

    bool MyVideoEncoder::initWithCodec(
           MyVideoEncoder::encoderType type) {
       if (!initializeEncoder(type))
           return false;

       if (!configureFrames())
           return false;

       return true;

    }

    bool MyVideoEncoder::encodeFrameFromJPG(char* filepath) {

       setJPEGImage(filepath);
       return encodeImageAsFrame();
    }



    bool MyVideoEncoder::encodeDelayedFrames(){
       bool res = false;

       while(out_size > 0)
       {
           pTempFrame->pts = frameCount * frameRate * 90; //90Hz by the standard for PTS-values
           frameCount++;

           out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, NULL);

           if (out_size > 0)
           {
               AVPacket pkt;
           av_init_packet(&pkt);
               pkt.pts = pkt.dts = 0;

               if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE) {
                   pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
                           pVideoStream->codec->time_base, pVideoStream->time_base);
                   pkt.dts = pTempFrame->pts;
               }
               if (pVideoStream->codec->coded_frame->key_frame) {
                   pkt.flags |= AV_PKT_FLAG_KEY;
               }
               pkt.stream_index = pVideoStream->index;
               pkt.data = outbuf;
               pkt.size = out_size;


               res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
           }

       }

       return res;
    }






    void MyVideoEncoder::endEncoding() {
       encodeDelayedFrames();
       closeEncoder();
    }

    bool MyVideoEncoder::setJPEGImage(char* imgFilename) {
       Image* rgbImage = new Image();
       rgbImage->read_jpeg_image(imgFilename);

       bool ret = setImageFromRGBArray(rgbImage->get_data());

       delete rgbImage;

       return ret;
    }

    bool MyVideoEncoder::setImageFromRGBArray(unsigned char* data) {

       memcpy(pFrameRGB->data[0], data, 3 * inWidth * inHeight);

       int ret = sws_scale(img_convert_ctx, pFrameRGB->data, pFrameRGB->linesize,
               0, inHeight, pTempFrame->data, pTempFrame->linesize);

       pFrameRGB->pts++;
       if (ret)
           return true;
       else
           return false;
    }

    bool MyVideoEncoder::initializeEncoder(encoderType type) {

       av_register_all();

       pTempFrame = avcodec_alloc_frame();
       pTempFrame->pts = 0;
       pOutFormat = NULL;
       pFormatContext = NULL;
       pVideoStream = NULL;
       pAudioStream = NULL;
       bool res = false;

       // Create format
       switch (type) {
           case MyVideoEncoder::H264:
               pOutFormat = av_guess_format("h264", NULL, NULL);
               break;
           case MyVideoEncoder::MPEG1:
               pOutFormat = av_guess_format("mpeg", NULL, NULL);
               break;
           default:
               pOutFormat = av_guess_format(NULL, pathToMovie.c_str(), NULL);
               break;
       }

       if (!pOutFormat) {
           pOutFormat = av_guess_format(NULL, pathToMovie.c_str(), NULL);
           if (!pOutFormat) {
            std::cout << "output format not found" << std::endl;
               return false;
           }
       }


       // allocate context
       pFormatContext = avformat_alloc_context();
       if(!pFormatContext)
       {
           std::cout << "cannot alloc format context" << std::endl;
           return false;
       }

       pFormatContext->oformat = pOutFormat;

       memcpy(pFormatContext->filename, pathToMovie.c_str(), min( (const int) pathToMovie.length(), (const int)sizeof(pFormatContext->filename)));


       //Add video and audio streams
       pVideoStream = AddVideoStream(pFormatContext,
               pOutFormat->video_codec);

       // Set the output parameters
       av_dump_format(pFormatContext, 0, pathToMovie.c_str(), 1);

       // Open Video stream
       if (pVideoStream) {
           res = openVideo(pFormatContext, pVideoStream);
       }


       if (res && !(pOutFormat->flags & AVFMT_NOFILE)) {
           if (avio_open(&pFormatContext->pb, pathToMovie.c_str(), AVIO_FLAG_WRITE) < 0) {
               res = false;
               std::cout << "Cannot open output file" << std::endl;
           }
       }

       if (res) {
           avformat_write_header(pFormatContext,NULL);
       }
       else{
           freeMemory();
           std::cout << "Cannot init encoder" << std::endl;
       }


       return res;

    }



    AVStream *MyVideoEncoder::AddVideoStream(AVFormatContext *pContext, CodecID codec_id)
    {
     AVCodecContext *pCodecCxt = NULL;
     AVStream *st    = NULL;

     st = avformat_new_stream(pContext, NULL);
     if (!st)
     {
         std::cout << "Cannot add new video stream" << std::endl;
         return NULL;
     }
     st->id = 0;

     pCodecCxt = st->codec;
     pCodecCxt->codec_id = (CodecID)codec_id;
     pCodecCxt->codec_type = AVMEDIA_TYPE_VIDEO;
     pCodecCxt->frame_number = 0;


     // Put sample parameters.
     pCodecCxt->bit_rate = outWidth * outHeight * 3 * frameRate/ compressionFactor;

     pCodecCxt->width  = outWidth;
     pCodecCxt->height = outHeight;

     /* frames per second */
     pCodecCxt->time_base= (AVRational){1,frameRate};

     /* pixel format must be YUV */
     pCodecCxt->pix_fmt = PIX_FMT_YUV420P;


     if (pCodecCxt->codec_id == CODEC_ID_H264)
     {
         av_opt_set(pCodecCxt->priv_data, "preset", "slow", 0);
         av_opt_set(pCodecCxt->priv_data, "vprofile", "baseline", 0);
         pCodecCxt->max_b_frames = 16;
     }
     if (pCodecCxt->codec_id == CODEC_ID_MPEG1VIDEO)
     {
         pCodecCxt->mb_decision = 1;
     }

     if(pContext->oformat->flags & AVFMT_GLOBALHEADER)
     {
         pCodecCxt->flags |= CODEC_FLAG_GLOBAL_HEADER;
     }

     pCodecCxt->coder_type = 1;  // coder = 1
     pCodecCxt->flags|=CODEC_FLAG_LOOP_FILTER;   // flags=+loop
     pCodecCxt->me_range = 16;   // me_range=16
     pCodecCxt->gop_size = 50;  // g=250
     pCodecCxt->keyint_min = 25; // keyint_min=25


     return st;
    }


    bool MyVideoEncoder::openVideo(AVFormatContext *oc, AVStream *pStream)
    {
       AVCodec *pCodec;
       AVCodecContext *pContext;

       pContext = pStream->codec;

       // Find the video encoder.
       pCodec = avcodec_find_encoder(pContext->codec_id);
       if (!pCodec)
       {
           std::cout << "Cannot found video codec" << std::endl;
           return false;
       }

       // Open the codec.
       if (avcodec_open2(pContext, pCodec, NULL) < 0)
       {
           std::cout << "Cannot open video codec" << std::endl;
           return false;
       }


       return true;
    }



    bool MyVideoEncoder::configureFrames() {

       /* alloc image and output buffer */
       outbuf_size = outWidth*outHeight*3;
       outbuf = (uint8_t*) malloc(outbuf_size);

       av_image_alloc(pTempFrame->data, pTempFrame->linesize, pVideoStream->codec->width,
               pVideoStream->codec->height, pVideoStream->codec->pix_fmt, 1);

       //Alloc RGB temp frame
       pFrameRGB = avcodec_alloc_frame();
       if (pFrameRGB == NULL)
           return false;
       avpicture_alloc((AVPicture *) pFrameRGB, PIX_FMT_RGB24, inWidth, inHeight);

       pFrameRGB->pts = 0;

       //Set SWS context to convert from RGB images to YUV images
       if (img_convert_ctx == NULL) {
           img_convert_ctx = sws_getContext(inWidth, inHeight, PIX_FMT_RGB24,
                   outWidth, outHeight, pVideoStream->codec->pix_fmt, /*SWS_BICUBIC*/
                   SWS_FAST_BILINEAR, NULL, NULL, NULL);
           if (img_convert_ctx == NULL) {
               fprintf(stderr, "Cannot initialize the conversion context!\n");
               return false;
           }
       }

       return true;

    }

    void MyVideoEncoder::closeEncoder() {
       av_write_frame(pFormatContext, NULL);
       av_write_trailer(pFormatContext);
       freeMemory();
    }


    void MyVideoEncoder::freeMemory()
    {
     bool res = true;

     if (pFormatContext)
     {
       // close video stream
       if (pVideoStream)
       {
         closeVideo(pFormatContext, pVideoStream);
       }

       // Free the streams.
       for(size_t i = 0; i < pFormatContext->nb_streams; i++)
       {
         av_freep(&pFormatContext->streams[i]->codec);
         av_freep(&pFormatContext->streams[i]);
       }

       if (!(pFormatContext->flags & AVFMT_NOFILE) && pFormatContext->pb)
       {
         avio_close(pFormatContext->pb);
       }

       // Free the stream.
       av_free(pFormatContext);
       pFormatContext = NULL;
     }
    }

    void MyVideoEncoder::closeVideo(AVFormatContext *pContext, AVStream *pStream)
    {
     avcodec_close(pStream->codec);
     if (pTempFrame)
     {
       if (pTempFrame->data)
       {
         av_free(pTempFrame->data[0]);
         pTempFrame->data[0] = NULL;
       }
       av_free(pTempFrame);
       pTempFrame = NULL;
     }

     if (pFrameRGB)
     {
       if (pFrameRGB->data)
       {
         av_free(pFrameRGB->data[0]);
         pFrameRGB->data[0] = NULL;
       }
       av_free(pFrameRGB);
       pFrameRGB = NULL;
     }

    }
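
    For reference, here is a minimal sketch (an untested illustration, not the code above; encodeImageAsFrameSketch is a hypothetical member of the same class, reusing its fields) of how the timestamps are commonly derived with this generation of the API, assuming the codec time_base of 1/frameRate set in AddVideoStream() and no B-frames (max_b_frames = 0):

    bool MyVideoEncoder::encodeImageAsFrameSketch() {
       bool res = false;

       /* Count frame pts in codec time_base ticks: with time_base = 1/frameRate,
          the Nth frame simply gets pts = N (not frameCount * frameRate * 90). */
       pTempFrame->pts = frameCount++;

       out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, pTempFrame);

       if (out_size > 0) {
           AVPacket pkt;
           av_init_packet(&pkt);

           /* Rescale the encoder's timestamp from the codec time_base
              to the stream time_base chosen by the muxer. */
           if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE)
               pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
                       pVideoStream->codec->time_base, pVideoStream->time_base);

           /* With no B-frames, dts can simply follow pts; writing a 90 kHz
              value into dts while pts is in the stream time_base mixes two
              different time bases and is a plausible cause of the N/A duration. */
           pkt.dts = pkt.pts;

           if (pVideoStream->codec->coded_frame->key_frame)
               pkt.flags |= AV_PKT_FLAG_KEY;
           pkt.stream_index = pVideoStream->index;
           pkt.data = outbuf;
           pkt.size = out_size;

           res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
       }
       return res;
    }

    The key difference from encodeImageAsFrame() above is that the frame pts is counted in codec time_base units and the packet pts/dts stay in the stream time_base, instead of mixing those with 90 kHz values.
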
  • FFMPEG: I need audio channels 7 & 8 to be the main audio track for a video

    10 April 2013, by lo_fye

    I have a video with 8 channels of audio.

    I need tracks 7 (Left Stereo) and 8 (Right Stereo) to be the audio for the video (which I'm converting to flv).

    I've tried playing with -filter_complex and the join, amix, and amerge filters, as well as the -map parameter, but I can't seem to find the right combination of values :-/ (one possible combination is sketched after the output log below).

    Output:

    /usr/local/bin/ffmpeg-1.0/bin/ffmpeg -i '/folder/video_name.mov' -f 'flv' \
    -s '320x240' -b '250k' -aspect '4:3' -ac 1 -ab '64k' -ar '22050' -y \
    /folder/video_name.flv

    ffmpeg version N-46241-g09ea482 Copyright (c) 2000-2012 the FFmpeg developers
     built on Nov  5 2012 07:33:09 with gcc 4.1.2 (GCC) 20080704 (Red Hat 4.1.2-46)
     configuration: --prefix=/usr/local/bin/ffmpeg-1.0
     libavutil      52.  1.100 / 52.  1.100
     libavcodec     54. 70.100 / 54. 70.100
     libavformat    54. 35.100 / 54. 35.100
     libavdevice    54.  3.100 / 54.  3.100
     libavfilter     3. 21.105 /  3. 21.105
     libswscale      2.  1.101 /  2.  1.101
     libswresample   0. 16.100 /  0. 16.100
    Guessed Channel Layout for  Input Stream #0.1 : mono
    Guessed Channel Layout for  Input Stream #0.2 : mono
    Guessed Channel Layout for  Input Stream #0.3 : mono
    Guessed Channel Layout for  Input Stream #0.4 : mono
    Guessed Channel Layout for  Input Stream #0.5 : mono
    Guessed Channel Layout for  Input Stream #0.6 : mono
    Guessed Channel Layout for  Input Stream #0.7 : mono
    Guessed Channel Layout for  Input Stream #0.8 : mono
    Guessed Channel Layout for  Input Stream #0.9 : mono
    Guessed Channel Layout for  Input Stream #0.10 : mono
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/folder/video_name.mov':
     Metadata:
       major_brand     : qt
       minor_version   : 537199360
       compatible_brands: qt
       creation_time   : 2013-04-03 19:45:26
     Duration: 00:00:39.03, start: 0.000000, bitrate: 122149 kb/s
       Stream #0:0(eng): Video: prores (apch / 0x68637061), yuv422p10le, 1920x1080, 110585 kb/s, SAR 1:1 DAR 16:9, 23.98 fps, 23.98 tbr, 23976 tbn, 23976 tbc
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
         timecode        : 00:59:53:00
       Stream #0:1(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32, 1152 kb/s
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
       Stream #0:2(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32, 1152 kb/s
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
       Stream #0:3(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32, 1152 kb/s
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
       Stream #0:4(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32, 1152 kb/s
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
       Stream #0:5(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32, 1152 kb/s
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
       Stream #0:6(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32, 1152 kb/s
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
       Stream #0:7(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32, 1152 kb/s
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
       Stream #0:8(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32, 1152 kb/s
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
       Stream #0:9(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32, 1152 kb/s
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
       Stream #0:10(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, mono, s32, 1152 kb/s
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
       Stream #0:11(eng): Data: none (tmcd / 0x64636D74)
       Metadata:
         creation_time   : 2013-04-03 19:45:30
         handler_name    : Apple Alias Data Handler
         timecode        : 00:59:53:00
    Please use -b:a or -b:v, -b is ambiguous
    Output #0, flv, to '/folder/video_name.flv':
     Metadata:
       major_brand     : qt
       minor_version   : 537199360
       compatible_brands: qt
       encoder         : Lavf54.35.100
       Stream #0:0(eng): Video: flv1 ([2][0][0][0] / 0x0002), yuv420p, 320x240 [SAR 1:1 DAR 4:3], q=2-31, 250 kb/s, 1k tbn, 23.98 tbc
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
         timecode        : 00:59:53:00
       Stream #0:1(eng): Audio: adpcm_swf ([1][0][0][0] / 0x0001), 22050 Hz, mono, s16, 88 kb/s
       Metadata:
         creation_time   : 2013-04-03 19:45:26
         handler_name    : Apple Alias Data Handler
    Stream mapping:
     Stream #0:0 -> #0:0 (prores -> flv)
     Stream #0:1 -> #0:1 (pcm_s24le -> adpcm_swf)
    Press [q] to stop, [?] for help
    frame=   33 fps=0.0 q=2.0 size=     108kB time=00:00:01.99 bitrate= 442.4kbits/s    
    frame=   66 fps= 65 q=2.0 size=     225kB time=00:00:02.97 bitrate= 619.0kbits/s    
    frame=   99 fps= 65 q=2.0 size=     341kB time=00:00:04.96 bitrate= 561.8kbits/s    
    frame=  136 fps= 67 q=2.0 size=     400kB time=00:00:05.99 bitrate= 547.5kbits/s    
    frame=  177 fps= 70 q=3.0 size=     482kB time=00:00:07.98 bitrate= 494.3kbits/s    
    frame=  210 fps= 69 q=3.7 size=     590kB time=00:00:08.96 bitrate= 539.7kbits/s    
    frame=  240 fps= 68 q=6.3 size=     660kB time=00:00:10.01 bitrate= 539.7kbits/s    
    frame=  264 fps= 65 q=6.7 size=     719kB time=00:00:11.01 bitrate= 535.2kbits/s    
    frame=  288 fps= 63 q=8.4 size=     772kB time=00:00:12.02 bitrate= 526.1kbits/s    
    frame=  312 fps= 62 q=15.4 size=     829kB time=00:00:13.65 bitrate= 497.4kbits/s    
    frame=  336 fps= 60 q=10.4 size=     875kB time=00:00:14.02 bitrate= 511.1kbits/s    
    frame=  360 fps= 59 q=10.6 size=     916kB time=00:00:15.01 bitrate= 499.9kbits/s    
    frame=  383 fps= 58 q=17.8 size=     957kB time=00:00:15.97 bitrate= 490.6kbits/s    
    frame=  411 fps= 58 q=6.5 size=    1008kB time=00:00:17.97 bitrate= 459.3kbits/s    
    frame=  437 fps= 57 q=9.7 size=    1046kB time=00:00:18.99 bitrate= 451.3kbits/s    
    frame=  460 fps= 57 q=7.7 size=    1086kB time=00:00:20.01 bitrate= 444.6kbits/s    
    frame=  489 fps= 57 q=11.3 size=    1144kB time=00:00:20.99 bitrate= 446.3kbits/s    
    frame=  512 fps= 56 q=10.3 size=    1182kB time=00:00:22.01 bitrate= 439.8kbits/s    
    frame=  535 fps= 55 q=21.5 size=    1225kB time=00:00:22.98 bitrate= 436.7kbits/s    
    frame=  564 fps= 55 q=18.3 size=    1280kB time=00:00:24.00 bitrate= 436.8kbits/s    
    frame=  587 fps= 55 q=8.5 size=    1311kB time=00:00:24.98 bitrate= 429.7kbits/s    
    frame=  610 fps= 54 q=11.9 size=    1349kB time=00:00:26.00 bitrate= 424.9kbits/s    
    frame=  636 fps= 54 q=7.5 size=    1383kB time=00:00:26.98 bitrate= 419.8kbits/s    
    frame=  659 fps= 54 q=9.6 size=    1421kB time=00:00:28.00 bitrate= 415.6kbits/s    
    frame=  683 fps= 54 q=20.0 size=    1471kB time=00:00:29.02 bitrate= 415.1kbits/s    
    frame=  711 fps= 54 q=6.4 size=    1518kB time=00:00:30.00 bitrate= 414.5kbits/s    
    frame=  742 fps= 54 q=6.2 size=    1558kB time=00:00:31.02 bitrate= 411.5kbits/s    
    frame=  774 fps= 54 q=2.5 size=    1601kB time=00:00:33.01 bitrate= 397.1kbits/s    
    frame=  816 fps= 55 q=2.0 size=    1632kB time=00:00:34.50 bitrate= 387.6kbits/s    
    frame=  861 fps= 56 q=2.0 size=    1670kB time=00:00:35.99 bitrate= 380.1kbits/s    
    frame=  905 fps= 57 q=2.0 size=    1706kB time=00:00:38.03 bitrate= 367.4kbits/s    
    frame=  936 fps= 58 q=2.0 Lsize=    1730kB time=00:00:39.05 bitrate= 362.8kbits/s
    video:1278kB audio:423kB subtitle:0 global headers:0kB muxing overhead 1.654557%
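
    For reference, a sketch of one possible command, assuming audio tracks 7 and 8 correspond to streams #0:7 and #0:8 in the listing above and that this build's amerge filter will merge the two mono streams into a single stereo track (paths, sizes and rates reuse the values from the command above):

    /usr/local/bin/ffmpeg-1.0/bin/ffmpeg -i '/folder/video_name.mov' \
     -filter_complex '[0:7][0:8]amerge[aout]' \
     -map 0:0 -map '[aout]' \
     -f flv -s 320x240 -b:v 250k -aspect 4:3 \
     -ac 2 -b:a 64k -ar 22050 -y /folder/video_name.flv

    Here -map 0:0 keeps the video stream, -map '[aout]' selects the merged audio, and -b:v/-b:a replace the ambiguous -b flagged in the log; -ac 2 keeps the merged pair as stereo, whereas the original -ac 1 would fold it to mono. The join or pan filters are possible alternatives if amerge's channel ordering is not the desired one.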