

Keyword: - Tags -/école

Other articles (52)

  • Writing a news item

    21 June 2013, by

    Present the changes in your MédiaSPIP, or news about your projects, through the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or later. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (8701)

  • Trying to cancel execution and delete file using ffmpeg C API

    6 March 2020, by Vuwox

    The code below is a class that handles the conversion of multiple images, added through the add_frame() method, into a GIF produced by encode(). It also uses a filter to generate and apply the palette. Usage looks like this:

    Code call example

    std::unique_ptr<px::GIF::FFMPEG> gif_obj = nullptr;
    try
    {
       gif_obj = std::make_unique<px::GIF::FFMPEG>(px::Point2D<int>{1000, 1000}, 12, "C:/out.gif",
                 "format=pix_fmts=rgb24,split [a][b];[a]palettegen[p];[b][p]paletteuse");

       // Example: a simple vector of images (usually processed internally).
       for (auto img : image_vector)
            gif_obj->add_frame(img);

       // Once all frames are added, encode the final GIF with the filter applied.
       gif_obj->encode();
    }
    catch (const std::exception& e)
    {
       // An error occurred! Close FFmpeg properly and delete the created file.
       // Guard against the constructor itself having thrown, in which case
       // gif_obj is still null.
       if (gif_obj)
           gif_obj->cancel();
    }

    I have the following issue: if the code throws for any reason, I call gif_obj->cancel(), which is supposed to delete the GIF file on disk. But that never works; I assume there is a lock on the file or something like that. So here is my question:

    What is the proper way to close/free the FFmpeg objects so that the file can be removed afterward?


    Full class code below

    Header

    // C++ Standard includes    
    #include <memory>
    #include <string>
    #include <vector>


    // 3rd party includes
    #ifdef __cplusplus
    extern "C" {
    #include "libavformat/avformat.h"
    #include "libavfilter/avfilter.h"
    #include "libavutil/opt.h"
    #include "libavfilter/buffersrc.h"
    #include "libavfilter/buffersink.h"
    #include "libswscale/swscale.h"
    #include "libavutil/imgutils.h"
    }
    #endif

    #define FFMPEG_MSG_LEN 2000

    namespace px
    {
       namespace GIF
       {
           class FFMPEG
           {
           public:
               FFMPEG(const px::Point2D<int>& dim,
                      const int framerate,
                      const std::string& filename,
                      const std::string& filter_cmd);

               ~FFMPEG();

               void add_frame(pxImage * const img);
               void encode();
               void cancel();

           private:

               void init_filters();            // Init everything needed to filter the input frames.
               void init_muxer();              // The muxer that creates the output file.
               void muxing_one_frame(AVFrame* frame);
               void release();

               int _ret = 0;                   // status code from FFMPEG.
               char _err_msg[FFMPEG_MSG_LEN];  // Error message buffer.


               int m_width = 0;                // The width that all future images must have to be accepted.
               int m_height = 0;               // The height that all future images must have to be accepted.

               int m_framerate = 0;            // GIF Framerate.
               std::string m_filename = "";    // The GIF filename (on cache?)
               std::string m_filter_desc = ""; // The FFMPEG filter to apply over the frames.

               bool as_frame = false;          // Set once at least one frame has been added.

               AVFrame* picture_rgb24 = nullptr;           // Temporary frame that holds the pxImage in RGB24 format (NOTE: top-left origin).

               AVFormatContext* ofmt_ctx = nullptr;        // Output format context associated with the output file.
               AVCodecContext* o_codec_ctx = nullptr;      // Output codec for the GIF.

               AVFilterGraph* filter_graph = nullptr;      // Filter graph associated with the filter string we want to execute.
               AVFilterContext* buffersrc_ctx = nullptr;   // The buffer that stores all the frames in one place for the palette generation.
               AVFilterContext* buffersink_ctx = nullptr;  // The buffer that stores the result afterward (once the palette is applied).

               int64_t m_pts_increment = 0;
           };
       };
    };

    ctor

    px::GIF::FFMPEG::FFMPEG(const px::Point2D<int>& dim,
                            const int framerate,
                            const std::string& filename,
                            const std::string& filter_cmd) :
       m_width(dim.x()),
       m_height(dim.y()),
       m_framerate(framerate),
       m_filename(filename),
       m_filter_desc(filter_cmd)
    {
    #if !_DEBUG
       av_log_set_level(AV_LOG_QUIET); // Set the FFMPEG log to quiet to avoid too much logs.
    #endif

       // Allocate the temporary buffer that hold the ffmpeg image (pxImage to AVFrame conversion).
       picture_rgb24 = av_frame_alloc();
       picture_rgb24->pts = 0;
       picture_rgb24->data[0] = NULL;
       picture_rgb24->linesize[0] = -1;
       picture_rgb24->format = AV_PIX_FMT_RGB24;
       picture_rgb24->height = m_height;
       picture_rgb24->width = m_width;

       if ((_ret = av_image_alloc(picture_rgb24->data, picture_rgb24->linesize, m_width, m_height, (AVPixelFormat)picture_rgb24->format, 24)) < 0)
           throw px::GIF::Error("Failed to allocate the AVFrame for pxImage conversion with error: " +
                                std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)),
                                "GIF::FFMPEG CTOR");  

       //printf("allocated picture of size %d, linesize %d %d %d %d\n", _ret, picture_rgb24->linesize[0], picture_rgb24->linesize[1], picture_rgb24->linesize[2], picture_rgb24->linesize[3]);

       init_muxer();   // Prepare the GIF encoder (open it on disk).
       init_filters(); // Prepare the filter that will be applied over the frame.

       // Instead of hardcoding {1, 100}, which is the GIF tbn, we collect it from the stream.
       // This avoids future problems if the codec changes in ffmpeg.
       if (ofmt_ctx && ofmt_ctx->nb_streams > 0)
           m_pts_increment = av_rescale_q(1, { 1, m_framerate }, ofmt_ctx->streams[0]->time_base);
       else
           m_pts_increment = av_rescale_q(1, { 1, m_framerate }, { 1, 100 });
    }
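
    For illustration (numbers assumed here, not taken from the post): with framerate = 12 and the GIF time base {1, 100}, av_rescale_q(1, {1, 12}, {1, 100}) evaluates 1 × (1/12) ÷ (1/100) = 100/12 ≈ 8.33, which rounds to 8, so each call to add_frame() advances picture_rgb24->pts by 8 ticks.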

    FFMPEG Initialization (Filter and muxer)

    void px::GIF::FFMPEG::init_filters()
    {
       const AVFilter* buffersrc = avfilter_get_by_name("buffer");
       const AVFilter* buffersink = avfilter_get_by_name("buffersink");

       AVRational time_base = { 1, m_framerate };
       AVRational aspect_pixel = { 1, 1 };

       AVFilterInOut* inputs = avfilter_inout_alloc();
       AVFilterInOut* outputs = avfilter_inout_alloc();

       filter_graph = avfilter_graph_alloc();

       try
       {
           if (!outputs || !inputs || !filter_graph)
               throw px::GIF::Error("Failed in 'init_filters': could not allocate the graph/filters.", "GIF::FFMPEG init_filters");

           char args[512];
           snprintf(args, sizeof(args),
                    "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                    m_width, m_height,
                    picture_rgb24->format,
                    time_base.num, time_base.den,
                    aspect_pixel.num, aspect_pixel.den);

           if (avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, nullptr, filter_graph) < 0)
               throw px::GIF::Error("Failed to create the 'source buffer' in the init_filters method.", "GIF::FFMPEG init_filters");


           if (avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", nullptr, nullptr, filter_graph) < 0)
               throw px::GIF::Error("Failed to create the 'sink buffer' in the init_filters method.", "GIF::FFMPEG init_filters");

           // GIF has a possible output of PAL8.
           enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_PAL8, AV_PIX_FMT_NONE };

           if (av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN) < 0)
               throw px::GIF::Error("Failed to set the output pixel format.", "GIF::FFMPEG init_filters");

           outputs->name = av_strdup("in");
           outputs->filter_ctx = buffersrc_ctx;
           outputs->pad_idx = 0;
           outputs->next = nullptr;

           inputs->name = av_strdup("out");
           inputs->filter_ctx = buffersink_ctx;
           inputs->pad_idx = 0;
           inputs->next = nullptr;

           // Parse the user-supplied filter description and wire it between the source and sink.
           if (avfilter_graph_parse_ptr(filter_graph, m_filter_desc.c_str(), &inputs, &outputs, nullptr) < 0)
               throw px::GIF::Error("Failed to parse the filter graph (bad string!).", "GIF::FFMPEG init_filters");

           if (avfilter_graph_config(filter_graph, nullptr) < 0)
               throw px::GIF::Error("Failed to configure the filter graph (bad string!).", "GIF::FFMPEG init_filters");

           avfilter_inout_free(&inputs);
           avfilter_inout_free(&outputs);
       }
       catch (const std::exception& e)
       {
           // On exception, free the in/out descriptors before leaving.
           avfilter_inout_free(&inputs);
           avfilter_inout_free(&outputs);
           throw; // Re-throw; a plain 'throw' preserves the original exception type.
       }
    }
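
    A note on the seemingly inverted wiring above (editorial, but this matches the documented avfilter_graph_parse_ptr() contract): outputs describes the open output pad of an already-created filter, here the buffer source, which is connected to the parsed graph's "in" label, while inputs describes the open input pad of the buffer sink, which receives whatever the parsed graph labels "out".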


    void px::GIF::FFMPEG::init_muxer()
    {
       AVOutputFormat* o_fmt = av_guess_format("gif", m_filename.c_str(), "video/gif");

       if ((_ret = avformat_alloc_output_context2(&ofmt_ctx, o_fmt, "gif", m_filename.c_str())) < 0)
           throw px::GIF::Error(std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)) + " allocate output format.", "GIF::FFMPEG init_muxer");

       AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_GIF);
       if (!codec) throw px::GIF::Error("Could not find the 'GIF' codec.", "GIF::FFMPEG init_muxer");

    #if 0
       const AVPixelFormat* p = codec->pix_fmts;
       while (p != NULL && *p != AV_PIX_FMT_NONE) {
           printf("supported pix fmt: %s\n", av_get_pix_fmt_name(*p));
           ++p;
       }
    #endif

       AVStream* stream = avformat_new_stream(ofmt_ctx, codec);

       AVCodecParameters* codec_parameters = stream->codecpar;
       codec_parameters->codec_tag = 0;
       codec_parameters->codec_id = codec->id;
       codec_parameters->codec_type = AVMEDIA_TYPE_VIDEO;
       codec_parameters->width = m_width;
       codec_parameters->height = m_height;
       codec_parameters->format = AV_PIX_FMT_PAL8;

       o_codec_ctx = avcodec_alloc_context3(codec);
       avcodec_parameters_to_context(o_codec_ctx, codec_parameters);

       o_codec_ctx->time_base = { 1, m_framerate };

       if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
           o_codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

       if ((_ret = avcodec_open2(o_codec_ctx, codec, NULL)) < 0)
           throw px::GIF::Error(std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)) + " open output codec.", "GIF::FFMPEG init_muxer");

       if ((_ret = avio_open(&ofmt_ctx->pb, m_filename.c_str(), AVIO_FLAG_WRITE)) < 0)
           throw px::GIF::Error(std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)) + " avio open error.", "GIF::FFMPEG init_muxer");

       if ((_ret = avformat_write_header(ofmt_ctx, NULL)) < 0)
           throw px::GIF::Error(std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)) + " write GIF header", "GIF::FFMPEG init_muxer");

    #if _DEBUG
       // This prints the stream/output format.
       av_dump_format(ofmt_ctx, -1, m_filename.c_str(), 1);
    #endif
    }

    Add frame (usually in a loop)

    void px::GIF::FFMPEG::add_frame(pxImage * const img)
    {
       if (img->getImageType() != PXT_BYTE || img->getNChannels() != 4)
           throw px::GIF::Error("Failed in 'add_frame': the image is not PXT_BYTE with 4 channels.", "GIF::FFMPEG add_frame");

       if (img->getWidth() != m_width || img->getHeight() != m_height)
           throw px::GIF::Error("Failed in 'add_frame': the image size does not match the other inputs.", "GIF::FFMPEG add_frame");

       const int pitch = picture_rgb24->linesize[0];
       auto px_ptr = getImageAccessor(img);

       for (int y = 0; y < m_height; y++)
       {
           const int px_row = img->getOrigin() == ORIGIN_BOT_LEFT ? m_height - y - 1 : y;
           for (int x = 0; x < m_width; x++)
           {
               const int idx = y * pitch + 3 * x;
               picture_rgb24->data[0][idx] = px_ptr[px_row][x].ch[PX_RE];
               picture_rgb24->data[0][idx + 1] = px_ptr[px_row][x].ch[PX_GR];
               picture_rgb24->data[0][idx + 2] = px_ptr[px_row][x].ch[PX_BL];
           }
       }

       // palettegen needs the whole stream, so just add the frame to the buffer.
       if ((_ret = av_buffersrc_add_frame_flags(buffersrc_ctx, picture_rgb24, AV_BUFFERSRC_FLAG_KEEP_REF)) < 0)
           throw px::GIF::Error("Failed to 'add_frame' to the global buffer with error: " +
                                std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)),
                                "GIF::FFMPEG add_frame");

       // Advance the pts of the picture for the next frame added to the buffer.
       picture_rgb24->pts += m_pts_increment;

       as_frame = true;
    }    

    Encoder (final step)

    void px::GIF::FFMPEG::encode()
    {
       if (!as_frame)
           throw px::GIF::Error("Call 'add_frame' before running encode().", "GIF::FFMPEG encode");

       // Signal end of stream to the buffer source.
       if ((_ret = av_buffersrc_add_frame_flags(buffersrc_ctx, nullptr, AV_BUFFERSRC_FLAG_KEEP_REF)) < 0)
           throw px::GIF::Error("Error adding the end-of-stream frame to the buffer source: " + std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)), "GIF::FFMPEG encode");

       do {
           AVFrame* filter_frame = av_frame_alloc();
           _ret = av_buffersink_get_frame(buffersink_ctx, filter_frame);
           if (_ret == AVERROR(EAGAIN) || _ret == AVERROR_EOF) {
               av_frame_free(&filter_frame); // Free the frame itself; unref alone would leak it.
               break;
           }

           // Write the filtered frame to the output file.
           muxing_one_frame(filter_frame);

           av_frame_free(&filter_frame);
       } while (_ret >= 0);

       av_write_trailer(ofmt_ctx);
    }

    void px::GIF::FFMPEG::muxing_one_frame(AVFrame* frame)
    {
       int ret = avcodec_send_frame(o_codec_ctx, frame);
       AVPacket* pkt = av_packet_alloc(); // Already initialized; av_init_packet() is unnecessary here.

       while (ret >= 0) {
           ret = avcodec_receive_packet(o_codec_ctx, pkt);
           if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
               break;
           }

           av_write_frame(ofmt_ctx, pkt);
           av_packet_unref(pkt);
       }
       av_packet_free(&pkt); // Frees the packet and any remaining payload.
    }
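
    One more detail worth flagging (an editorial note, not something from the original post): encode() calls av_write_trailer() without ever putting the encoder itself into draining mode, so a packet the GIF encoder may still be buffering could be lost. A minimal flush sketch under that assumption, using the documented send/receive contract:

    // Hedged sketch: drain any packets the encoder is still holding before
    // av_write_trailer(). Sending nullptr puts the encoder in draining mode;
    // avcodec_receive_packet() then returns the delayed packets and finally
    // AVERROR_EOF.
    static void flush_encoder(AVCodecContext* codec_ctx, AVFormatContext* fmt_ctx)
    {
        avcodec_send_frame(codec_ctx, nullptr); // Signal end of stream.

        AVPacket* pkt = av_packet_alloc();
        while (avcodec_receive_packet(codec_ctx, pkt) >= 0)
        {
            av_write_frame(fmt_ctx, pkt);
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);
    }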

    DTOR, Release and Cancel

    px::GIF::FFMPEG::~FFMPEG()
    {
       release();
    }


    void px::GIF::FFMPEG::release()
    {
       // Muxer stuffs
       if (ofmt_ctx != nullptr) avformat_free_context(ofmt_ctx);
       if (o_codec_ctx != nullptr) avcodec_close(o_codec_ctx);
       if (o_codec_ctx != nullptr) avcodec_free_context(&o_codec_ctx);

       ofmt_ctx = nullptr;
       o_codec_ctx = nullptr;

       // Filter stuffs
       if (buffersrc_ctx != nullptr) avfilter_free(buffersrc_ctx);
       if (buffersink_ctx != nullptr) avfilter_free(buffersink_ctx);
       if (filter_graph != nullptr) avfilter_graph_free(&filter_graph);

       buffersrc_ctx = nullptr;
       buffersink_ctx = nullptr;
       filter_graph = nullptr;

       // Conversion image.
       if (picture_rgb24 != nullptr) av_frame_free(&picture_rgb24);
       picture_rgb24 = nullptr;
    }

    void px::GIF::FFMPEG::cancel()
    {
       // In case of failure we must close FFmpeg and exit.
       av_write_trailer(ofmt_ctx);

       // Release and close all elements.
       release();

       // Delete the file on disk.
       if (remove(m_filename.c_str()) != 0)
           PX_LOG0(PX_LOGLEVEL_ERROR, "GIF::FFMPEG - On 'cancel' failed to remove the file.");
    }
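
    An editorial aside (a suggestion, not part of the original post): the most likely reason remove() fails is that release() never closes the AVIOContext that avio_open() created in init_muxer(), so the process still holds an open handle on the output file, and on Windows remove() cannot delete a file that is still open. There is also a latent double free: avfilter_graph_free() already frees every filter in the graph, so the explicit avfilter_free() calls on the source and sink contexts free them twice. A minimal sketch of a teardown that addresses both, pairing avio_open() with avio_closep():

    void px::GIF::FFMPEG::release()
    {
       // Frees the codec context and everything associated with it.
       if (o_codec_ctx != nullptr) avcodec_free_context(&o_codec_ctx);

       if (ofmt_ctx != nullptr)
       {
           // Close the AVIOContext opened by avio_open(); without this, the
           // OS-level handle on the file stays open and remove() fails.
           if (ofmt_ctx->pb != nullptr) avio_closep(&ofmt_ctx->pb);
           avformat_free_context(ofmt_ctx);
           ofmt_ctx = nullptr;
       }

       // Freeing the graph frees its filter contexts as well, so no
       // individual avfilter_free() calls are needed (or safe) here.
       if (filter_graph != nullptr) avfilter_graph_free(&filter_graph);
       buffersrc_ctx = nullptr;
       buffersink_ctx = nullptr;

       if (picture_rgb24 != nullptr)
       {
           av_freep(&picture_rgb24->data[0]); // Buffer came from av_image_alloc().
           av_frame_free(&picture_rgb24);
       }
    }
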
  • ffmpeg determine correct codec to use

    8 April 2020, by nha

    I am trying to use .y4m files that I feed into Chrome for playing. This is the only format that I can use.


    I have a set of .y4m files that work for my use case, taken from here. For example, let's say I use the file mobile_qcif_7.5fps.y4m.


    I also have a .mp4 file (specifically the sample small.mp4 file from techslides). I know how to convert small.mp4 to small.y4m with the following command:


    ffmpeg -y -i small.mp4 small.y4m


    I would like the resulting file to be accepted by Chrome, just like the other file. What options should I use (and how do I find them)?


    Additional information


    ffprobe gives:


    $ ffprobe small.y4m
    avprobe version 0.8.13-6:0.8.13-0ubuntu0.13.10.1, Copyright (c) 2007-2014 the Libav developers
      built on Jul 15 2014 13:48:35 with gcc 4.8.1
    [yuv4mpegpipe @ 0x23160c0] Estimating duration from bitrate, this may be inaccurate
    Input #0, yuv4mpegpipe, from 'small.y4m':
      Duration: N/A, bitrate: N/A
        Stream #0.0: Video: rawvideo, yuv420p, 560x320, 30 fps, 30 tbr, 30 tbn, 30 tbc

    $ ffprobe mobile_qcif_7.5fps.y4m
    avprobe version 0.8.13-6:0.8.13-0ubuntu0.13.10.1, Copyright (c) 2007-2014 the Libav developers
      built on Jul 15 2014 13:48:35 with gcc 4.8.1
    [yuv4mpegpipe @ 0x1e080c0] max_analyze_duration reached
    [yuv4mpegpipe @ 0x1e080c0] Estimating duration from bitrate, this may be inaccurate
    Input #0, yuv4mpegpipe, from 'mobile_qcif_7.5fps.y4m':
      Duration: N/A, bitrate: N/A
        Stream #0.0: Video: rawvideo, yuv420p, 176x144, PAR 128:117 DAR 1408:1053, 7.50 fps, 7.50 tbr, 7.50 tbn, 7.50 tbc


    OS: I am using ffmpeg on Ubuntu 13.10 from the default repository: I ran sudo apt-get install ffmpeg libavcodec-extra-53


    Complete console output


    $ ffmpeg -y -i small.mp4 small.y4m
    ffmpeg version 0.8.13-6:0.8.13-0ubuntu0.13.10.1, Copyright (c) 2000-2014 the Libav developers
      built on Jul 15 2014 13:48:35 with gcc 4.8.1
    *** THIS PROGRAM IS DEPRECATED ***
    This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'small.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42isomavc1
        creation_time   : 2010-03-20 21:29:11
        encoder         : HandBrake 0.9.4 2009112300
      Duration: 00:00:05.56, start: 0.000000, bitrate: 551 kb/s
        Stream #0.0(und): Video: h264 (Constrained Baseline), yuv420p, 560x320, 465 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc
        Metadata:
          creation_time   : 2010-03-20 21:29:11
        Stream #0.1(eng): Audio: aac, 48000 Hz, mono, s16, 83 kb/s
        Metadata:
          creation_time   : 2010-03-20 21:29:11
    [buffer @ 0x7274e0] w:560 h:320 pixfmt:yuv420p
    Output #0, yuv4mpegpipe, to 'small.y4m':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42isomavc1
        creation_time   : 2010-03-20 21:29:11
        encoder         : Lavf53.21.1
        Stream #0.0(und): Video: rawvideo, yuv420p, 560x320, q=2-31, 200 kb/s, 90k tbn, 30 tbc
        Metadata:
          creation_time   : 2010-03-20 21:29:11
    Stream mapping:
      Stream #0.0 -> #0.0
    Press ctrl-c to stop encoding
    frame=  166 fps=  0 q=0.0 Lsize=   43576kB time=5.53 bitrate=64513.5kbits/s
    video:0kB audio:0kB global headers:0kB muxing overhead inf%


    Head of the files


    Not sure if this is relevant, but the text content of both files begins like this:


    mobile_qcif_7.5fps.y4m
    YUV4MPEG2 W176 H144 F15:2 Ip A128:117
    FRAME

    small.y4m (invalid)
    YUV4MPEG2 W560 H320 F15:2 Ip A0:0 C420mpeg2 XYSCSS=420MPEG2
    FRAME
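
    (Editorial note, not from the original post: the two headers differ exactly in the colourspace tag. The working file carries no C tag, so the decoder assumes the default 4:2:0 layout, whereas ffmpeg writes C420mpeg2 for yuv420p to record MPEG-2 chroma siting; the trailing number in tags like C420p12 is the bit depth per sample.)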


    Additional experiment


    Taking the sample mobile_qcif_7.5fps.y4m, turning it into a .mp4 file and then back into a .y4m produces an invalid file (at least for Chrome).


    Reply to the comments on @SirDarius's answer


    Following the suggestion of @SirDarius, I tried the following commands. They failed to produce a valid file.


    # [COMMAND]   # [RESULT] [HEADER]
    ffmpeg -i small.mp4 -vf fps=7.5 -an -c:v rawvideo -f yuv4mpegpipe -pix_fmt yuv444p -y small_yuv444p.y4m   # KO YUV4MPEG2 W560 H320 F15:2 Ip A0:0 C444 XYSCSS=444
    ffmpeg -i small.mp4 -vf fps=7.5 -an -c:v rawvideo -f yuv4mpegpipe -pix_fmt yuv422p -y small_yuv422p.y4m   # KO YUV4MPEG2 W560 H320 F15:2 Ip A0:0 C422 XYSCSS=422
    ffmpeg -i small.mp4 -vf fps=7.5 -an -c:v rawvideo -f yuv4mpegpipe -pix_fmt yuv420p -y small_yuv420p.y4m   # KO YUV4MPEG2 W560 H320 F15:2 Ip A0:0 C420mpeg2 XYSCSS=420MPEG2
    ffmpeg -i small.mp4 -vf fps=7.5 -an -c:v rawvideo -f yuv4mpegpipe -pix_fmt yuv411p -y small_yuv411p.y4m   # KO YUV4MPEG2 W560 H320 F15:2 Ip A0:0 C411 XYSCSS=411

    ffmpeg -i small.mp4 -vf fps=7.5 -an -c:v rawvideo -f yuv4mpegpipe -strict -1 -pix_fmt yuv420p9 -y small_yuv420p9.y4m   # KO YUV4MPEG2 W560 H320 F15:2 Ip A0:0 C420p9 XYSCSS=420P9
    ffmpeg -i small.mp4 -vf fps=7.5 -an -c:v rawvideo -f yuv4mpegpipe -strict -1 -pix_fmt yuv420p12 -y small_yuv420p12.y4m # KO YUV4MPEG2 W560 H320 F15:2 Ip A0:0 C420p12 XYSCSS=420P12
    ffmpeg -i small.mp4 -vf fps=7.5 -an -c:v rawvideo -f yuv4mpegpipe -strict -1 -pix_fmt yuv420p14 -y small_yuv420p14.y4m # KO YUV4MPEG2 W560 H320 F15:2 Ip A0:0 C420p14 XYSCSS=420P14
    ffmpeg -i small.mp4 -vf fps=7.5 -an -c:v rawvideo -f yuv4mpegpipe -strict -1 -pix_fmt yuv420p16 -y small_yuv420p16.y4m # KO YUV4MPEG2 W560 H320 F15:2 Ip A0:0 C420p16 XYSCSS=420P16


    When trying other formats (like yuv420p12be, which was in the output of -pix_fmts), I got the following output (hence my previous trials):


    $ ffmpeg -i small.mp4 -vf fps=7.5 -an -c:v rawvideo -f yuv4mpegpipe -pix_fmt yuv420p12be -y small_yuv420p12be.y4m
    ffmpeg version N-68884-g3ebd76a Copyright (c) 2000-2015 the FFmpeg developers
      built on Jan  6 2015 11:03:21 with gcc 4.8 (Ubuntu/Linaro 4.8.1-10ubuntu9)
      configuration:
      libavutil      54. 16.100 / 54. 16.100
      libavcodec     56. 19.100 / 56. 19.100
      libavformat    56. 18.100 / 56. 18.100
      libavdevice    56.  3.100 / 56.  3.100
      libavfilter     5.  6.100 /  5.  6.100
      libswscale      3.  1.101 /  3.  1.101
      libswresample   1.  1.100 /  1.  1.100
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x21d0220] overread end of atom 'colr' by 1 bytes
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'small.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42isomavc1
        creation_time   : 2010-03-20 21:29:11
        encoder         : HandBrake 0.9.4 2009112300
      Duration: 00:00:05.57, start: 0.000000, bitrate: 551 kb/s
        Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(tv, bt709), 560x320, 465 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc (default)
        Metadata:
          creation_time   : 2010-03-20 21:29:11
          encoder         : JVT/AVC Coding
        Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 83 kb/s (default)
        Metadata:
          creation_time   : 2010-03-20 21:29:11
    [yuv4mpegpipe @ 0x21d8180] ERROR: yuv4mpeg can only handle yuv444p, yuv422p, yuv420p, yuv411p and gray8 pixel formats. And using 'strict -1' also yuv444p9, yuv422p9, yuv420p9, yuv444p10, yuv422p10, yuv420p10, yuv444p12, yuv422p12, yuv420p12, yuv444p14, yuv422p14, yuv420p14, yuv444p16, yuv422p16, yuv420p16 and gray16 pixel formats. Use -pix_fmt to select one.
    Output #0, yuv4mpegpipe, to 'small_yuv420p12be.y4m':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42isomavc1
        encoder         : Lavf56.18.100
        Stream #0:0(und): Video: rawvideo ([12][11]3Y / 0x59330B0C), yuv420p12be, 560x320, q=2-31, 200 kb/s, 7.50 fps, 7.50 tbn, 7.50 tbc (default)
        Metadata:
          creation_time   : 2010-03-20 21:29:11
          encoder         : Lavc56.19.100 rawvideo
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
    Could not write header for output file #0 (incorrect codec parameters ?): Input/output error


    After that, I also tried editing the small_yuv420p.y4m file with a hexadecimal editor (bless). I replaced C420mpeg2 with C420, and the file was accepted by Chrome. What does the number in 420p[number] correspond to? It is not explained there. Is there any way to make ffmpeg output this kind of header?
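
    For what it's worth, the hex-editor fix can be scripted. A minimal sketch (editorial addition, not from the original post; fix_y4m_header is a hypothetical helper name) that rewrites the C420mpeg2 tag in the first header line to plain C420, which is what the experiment above showed Chrome accepts:

    #include <fstream>
    #include <iterator>
    #include <string>

    // Rewrite the colourspace tag "C420mpeg2" to "C420" in the Y4M header,
    // mimicking the manual hex edit. Sketch only: reads the whole file into
    // memory, which is fine for short clips like small.y4m.
    void fix_y4m_header(const std::string& path)
    {
        std::ifstream in(path, std::ios::binary);
        std::string data((std::istreambuf_iterator<char>(in)),
                          std::istreambuf_iterator<char>());
        in.close();

        const std::string from = "C420mpeg2";
        const auto header_end = data.find('\n'); // The Y4M header is the first line.
        const auto pos = data.find(from);
        if (pos != std::string::npos && header_end != std::string::npos && pos < header_end)
            data.replace(pos, from.size(), "C420");

        std::ofstream out(path, std::ios::binary | std::ios::trunc);
        out.write(data.data(), static_cast<std::streamsize>(data.size()));
    }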


    @SirDarius, thank you again for your help.


    The question


    (in case you forgot after this text blob)


    How can I convert the small.mp4 into a small.y4m file that has the same properties as the mobile_qcif_7.5fps.y4m file ?


  • Python script creates too short video using ffmpeg

    25 May 2014, by Majzlik

    I use a Python script to create multiple pictures and then call ffmpeg to create a video. But there is a problem: ffmpeg uses just a few of the pictures (about 7-10 out of 160), yet throws no error. I have tried the same command from the command line and the video was correct. I am calling ffmpeg this way:

    ffmpeg_call = ["ffmpeg", "-r", str(FPS), "-b", "16777216", "-y", "-i", "./sample_%05d.png", FILEOUTNAME + ".mp4"]
    subprocess.call(ffmpeg_call)

    and this was the command on the command line:

    ffmpeg -r 25 -b 16777216 -y -i ./sample_%05d.png animation.mp4

    I printed both commands to compare them and they were the same, so the problem must be in the ffmpeg + Python cooperation. Do you know how to fix it?

    UPDATE:

    this is the log from ffmpeg:

    ffmpeg version 0.8.10-4:0.8.10-0ubuntu0.12.04.1, Copyright (c) 2000-2013 the Libav developers
     built on Feb  6 2014 20:56:59 with gcc 4.6.3
    *** THIS PROGRAM IS DEPRECATED ***
    This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
    Input #0, image2, from '/tmp/tmpRKxT6s/ampgraph/tmp/sample_%05d.png':
    Duration: 00:00:00.44, start: 0.000000, bitrate: N/A
    Stream #0.0: Video: png, pal8, 640x480, 25 fps, 25 tbr, 25 tbn, 25 tbc
    Incompatible pixel format 'pal8' for codec 'mpeg4', auto-selecting format 'yuv420p'
    [buffer @ 0x19e18a0] w:640 h:480 pixfmt:pal8
    [avsink @ 0x19ee1c0] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
    [scale @ 0x19e2fc0] w:640 h:480 fmt:pal8 -> w:640 h:480 fmt:yuv420p flags:0x4
    Output #0, mp4, to './ampgraph/animation.mp4':
     Metadata:
       encoder         : Lavf53.21.1
       Stream #0.0: Video: mpeg4, yuv420p, 640x480, q=2-31, 200 kb/s, 25 tbn, 25 tbc
    Stream mapping:
     Stream #0.0 -> #0.0
    Press ctrl-c to stop encoding
    frame=   11 fps=  0 q=2.5 Lsize=      46kB time=0.44 bitrate= 859.1kbits/s    
    video:45kB audio:0kB global headers:0kB muxing overhead 1.906569%
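
    (Editorial observation, not part of the original post: the log reports Duration: 00:00:00.44 at 25 fps, i.e. 0.44 × 25 = 11 input frames, and the encoder indeed wrote frame= 11. So ffmpeg only found 11 of the 160 sample_%05d.png files when it started, which points either at the pictures not all being written and flushed before subprocess.call ran, or at a working-directory mismatch, since the log reads from /tmp/tmpRKxT6s/ampgraph/tmp/ rather than ./.)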