Advanced search

Media (1)

Keyword: - Tags - / 3GS

Other articles (111)

  • Changing your graphic theme

    22 February 2011, by

    The graphic theme does not affect the actual layout of the elements on the page; it only changes how the elements look.
    The placement may indeed appear to change, but that change is purely visual and does not alter the semantic representation of the page.
    Changing the graphic theme in use
    To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
    You then simply go to the configuration area of the (...)

  • Supported formats

    28 January 2010, by

    The following commands give information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Supported input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)

  • Initialising MediaSPIP (preconfiguration)

    20 February 2010, by

    When MediaSPIP is installed, it is preconfigured for the most common uses.
    This preconfiguration is performed by a plugin called MediaSPIP Init, which is enabled by default and cannot be disabled.
    This plugin correctly preconfigures every MediaSPIP instance. It must therefore be placed in the plugins-dist/ folder of the site or farm so that it is installed by default before the site can be used.
    First, it enables or disables the SPIP options that do not (...)

On other sites (10463)

  • ffmpeg/libx264 C API: frames dropped from end of short MP4

    19 July 2017, by Blake McConnell

    In my C++ application, I am taking a series of JPEG images, manipulating their data using FreeImage, and then encoding the bitmaps as H264 using the ffmpeg/libx264 C API. The output is an MP4 which shows the series of 22 images at 12fps. My code is adapted from the "muxing" example that comes with ffmpeg C source code.

    My problem: no matter how I tune the codec parameters, a certain number of frames passed to the encoder at the end of the sequence do not appear in the final output. I've set the AVCodecContext parameters like this:

    //set context params
    ctx->codec_id = AV_CODEC_ID_H264;
    ctx->bit_rate = 4000 * 1000;
    ctx->width = _width;
    ctx->height = _height;
    ost->st->time_base = AVRational{ 1, 12 };
    ctx->time_base = ost->st->time_base;
    ctx->gop_size = 1;
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;

    I have found that the higher the gop_size, the more frames are dropped from the end of the video. I can also see from the output that, with this gop size (where I'm essentially directing that all output frames be I-frames), only 9 frames are written.

    I'm not sure why this is occurring. I experimented with encoding duplicate frames to make a much longer video, and that resulted in no frames being dropped. I know the ffmpeg command-line tool has a concatenation command that accomplishes what I am trying to do, but I'm not sure how to achieve the same thing with the C API.
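
    For reference, with the avcodec_encode_video2() API an encoder such as libx264 may keep frames buffered internally (the NOTE in AddFrame below hints at this), and those delayed frames only come out if the encoder is flushed by passing a NULL frame until it stops returning packets, before av_write_trailer() is called. A minimal sketch of such a flush loop, assuming the same member names and the WriteFrame() helper from the code below (an illustration of the pattern, not a drop-in patch):

    //drain any frames still buffered inside the encoder before writing the trailer
    AVPacket pkt = { 0 };
    av_init_packet( &pkt );
    int gotPacket = 0;
    do {
        //a NULL frame asks the encoder to emit whatever it is still holding back
        if ( avcodec_encode_video2( _ost.enc, &pkt, NULL, &gotPacket ) < 0 ) {
            break;
        }
        if ( gotPacket ) {
            //reuse the same rescale-and-write helper used for normal frames
            WriteFrame( _formatCtx, &_ost.enc->time_base, _ost.st, &pkt );
            av_packet_unref( &pkt );
        }
    } while ( gotPacket );
    av_write_trailer( _formatCtx );

    Even with gop_size = 1 and bframes = 0, x264's frame threading and lookahead (threads=12, lookahead_threads=2 in the log below) can hold back a dozen or so frames, which would be consistent with 22 frames submitted but only 9 written.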

    Here's the output I'm getting from the console:

    [libx264 @ 026d81c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 026d81c0] profile High, level 3.1
    [libx264 @ 026d81c0] 264 - core 152 r2851 ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=1 keyint_min=1 scenecut=40 intra_refresh=0 rc=abr mbtree=0 bitrate=4000 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to '....\images\c411a991-46f6-400c-8bb0-77af3738559a.mp4':
        Stream #0:0: Video: h264, yuv420p, 700x700, q=2-31, 4000 kb/s, 12 tbn

    [libx264 @ 026d81c0] frame I:9    Avg QP:17.83  size:111058
    [libx264 @ 026d81c0] mb I  I16..4:  1.9% 47.7% 50.5%
    [libx264 @ 026d81c0] final ratefactor: 19.14
    [libx264 @ 026d81c0] 8x8 transform intra:47.7%
    [libx264 @ 026d81c0] coded y,uvDC,uvAC intra: 98.4% 96.9% 89.5%
    [libx264 @ 026d81c0] i16 v,h,dc,p: 64%  6%  2% 28%
    [libx264 @ 026d81c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 15%  9%  5%  5%  6%  8% 10% 10%
    [libx264 @ 026d81c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 18%  7%  6%  8%  8%  8%  9%  8%
    [libx264 @ 026d81c0] i8c dc,h,v,p: 43% 22% 25% 10%
    [libx264 @ 026d81c0] kb/s:10661.53

    Code included below:

    MP4Writer.h

    #ifndef MPEG_WRITER
    #define MPEG_WRITER

    #include <iostream>
    #include <string>
    #include <vector>
    #include <cstdint>   //fixed-width integer types (int64_t, int16_t, uint8_t)
    extern "C" {
       #include <libavformat/avformat.h>
       #include <libswscale/swscale.h>
       #include <libswresample/swresample.h>
    }

    struct OutputStream
    {
       AVStream *st;
       AVCodecContext *enc;

       //pts of the next frame that will be generated
       int64_t next_pts;
       int samples_count;

       AVFrame *frame;
       AVFrame *tmp_frame;

       float t, tincr, tincr2;

       struct SwsContext *sws_ctx;
       struct SwrContext *swr_ctx;
    };

    class MP4Writer {
       public:
           MP4Writer();
           void Init();
           int16_t SetOutput( const std::string & path );
           int16_t AddFrame( uint8_t * imgData );
           int16_t Write( std::vector<ImgData> & imgData );
           int16_t Finalize();
           void SetHeight( const int height ) { _height = _width = height; } //assuming 1:1 aspect ratio

       private:
           int16_t AddStream( OutputStream * ost, AVFormatContext * formatCtx, AVCodec ** codec, enum AVCodecID codecId );
           int16_t OpenVideo( AVFormatContext * formatCtx, AVCodec *codec, OutputStream * ost, AVDictionary * optArg );
           static AVFrame * AllocPicture( enum AVPixelFormat pixFmt, int width, int height );
           static AVFrame * GetVideoFrame( uint8_t * imgData, OutputStream * ost, const int width, const int height );
           static int WriteFrame( AVFormatContext * formatCtx, const AVRational * timeBase, AVStream * stream, AVPacket * packet );

           int _width;
           int _height;
           OutputStream _ost;
           AVFormatContext * _formatCtx;
           AVDictionary * _dict;
    };

    #endif //MPEG_WRITER

    MP4Writer.cpp

    #include "MP4Writer.h"
    #include <algorithm>

    MP4Writer::MP4Writer()
    {
       _width = 0;
       _height = 0;
    }

    void MP4Writer::Init()
    {
       av_register_all();
    }

    /**
    sets up output stream for the specified path.
    note that the output format is deduced automatically from the file extension passed
    @param path: output file path
    @returns: -1 = output could not be deduced, -2 = invalid codec, -3 = error opening output file,
              -4 = error writing header
    */
    int16_t MP4Writer::SetOutput( const std::string & path )
    {
       int error;
       AVCodec * codec;
       AVOutputFormat * format;

       _ost = OutputStream{}; //TODO reset state in a more focused way?

       //allocate output media context
       avformat_alloc_output_context2( &_formatCtx, NULL, NULL, path.c_str() );
       if ( !_formatCtx ) {
           std::cout << "could not deduce output format from file extension.  aborting" << std::endl;
           return -1;
       }
       //set format
       format = _formatCtx->oformat;
       if ( format->video_codec != AV_CODEC_ID_NONE ) {
           AddStream( &_ost, _formatCtx, &codec, format->video_codec );
       }
       else {
           std::cout << "there is no video codec set.  aborting" << std::endl;
           return -2;
       }

       OpenVideo( _formatCtx, codec, &_ost, _dict );

       av_dump_format( _formatCtx, 0, path.c_str(), 1 );

       //open output file
       if ( !( format->flags & AVFMT_NOFILE )) {
           error = avio_open( &_formatCtx->pb, path.c_str(), AVIO_FLAG_WRITE );
           if ( error < 0 ) {
               std::cout << "there was an error opening output file " << path << ".  aborting" << std::endl;
               return -3;
           }
       }

       //write header
       error = avformat_write_header( _formatCtx, &_dict );
       if ( error < 0 ) {
           std::cout << "an error occurred writing header. aborting" << std::endl;
           return -4;
       }

       return 0;
    }

    /**
    initialize the output stream
    @param ost: the output stream
    @param formatCtx: the format context
    @param codec: the output codec
    @param codecId: the ffmpeg enumerated id of the codec
    @returns: -1 = encoder not found, -2 = stream could not be allocated, -3 = encoding context could not be allocated
    */
    int16_t MP4Writer::AddStream( OutputStream * ost, AVFormatContext * formatCtx, AVCodec ** codec, enum AVCodecID codecId )
    {
       AVCodecContext * ctx; //TODO not sure why this is here, could just set ost->enc directly
       int i;

       //detect the encoder
       *codec = avcodec_find_encoder( codecId );
       if ( (*codec) == NULL ) {
           std::cout << "could not find encoder.  aborting" << std::endl;
           return -1;
       }

       //allocate stream
       ost->st = avformat_new_stream( formatCtx, NULL );
       if ( ost->st == NULL ) {
           std::cout << "could not allocate stream.  aborting" << std::endl;
           return -2;
       }

       //allocate encoding context
       ost->st->id = formatCtx->nb_streams - 1;
       ctx = avcodec_alloc_context3( *codec );
       if ( ctx == NULL ) {
           std::cout << "could not allocate encoding context.  aborting" << std::endl;
           return -3;
       }

       ost->enc = ctx;

       //set context params
       ctx->codec_id = AV_CODEC_ID_H264;
       ctx->bit_rate = 4000 * 1000;
       ctx->width = _width;
       ctx->height = _height;
       ost->st->time_base = AVRational{ 1, 12 };
       ctx->time_base = ost->st->time_base;
       ctx->gop_size = 1;
       ctx->pix_fmt = AV_PIX_FMT_YUV420P;

       //if necessary, set stream headers and formats separately
       if ( formatCtx->oformat->flags & AVFMT_GLOBALHEADER ) {
           std::cout << "setting stream and headers to be separate" << std::endl;
           ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }

       return 0;
    }

    /**
    open the video for writing
    @param formatCtx: the format context
    @param codec: output codec
    @param ost: output stream
    @param optArg: dictionary
    @return: -1 = error opening codec, -2 = allocate new frame, -3 = copy stream params
    */
    int16_t MP4Writer::OpenVideo( AVFormatContext * formatCtx, AVCodec *codec, OutputStream * ost, AVDictionary * optArg )
    {
       int error;
       AVCodecContext * ctx = ost->enc;
       AVDictionary * dict = NULL;
       av_dict_copy( &dict, optArg, 0 );

       //open codec
       error = avcodec_open2( ctx, codec, &dict );
       av_dict_free( &dict );
       if ( error < 0 ) {
           std::cout << "there was an error opening the codec.  aborting" << std::endl;
           return -1;
       }

       //allocate new frame
       ost->frame = AllocPicture( ctx->pix_fmt, ctx->width, ctx->height );
       if ( ost->frame == NULL ) {
           std::cout << "there was an error allocating a new frame.  aborting" << std::endl;
           return -2;
       }

       //copy stream params
       error = avcodec_parameters_from_context( ost->st->codecpar, ctx );
       if ( error < 0 ) {
           std::cout << "could not copy stream parameters.  aborting" << std::endl;
           return -3;
       }

       return 0;
    }

    /**
    allocate a new frame
    @param pixFmt: ffmpeg enumerated pixel format
    @param width: output width
    @param height: output height
    @returns: an initialized frame
    */
    AVFrame * MP4Writer::AllocPicture( enum AVPixelFormat pixFmt, int width, int height )
    {
       AVFrame * picture;
       int error;

       //allocate the frame
       picture = av_frame_alloc();
       if ( picture == NULL ) {
           std::cout << "there was an error allocating the picture" << std::endl;
           return NULL;
       }

       picture->format = pixFmt;
       picture->width = width;
       picture->height = height;

       //allocate the frame's data buffer
       error = av_frame_get_buffer( picture, 32 );
       if ( error < 0 ) {
           std::cout << "could not allocate frame data" << std::endl;
           return NULL;
       }
       picture->pts = 0;
       return picture;
    }

    /**
    convert raw BGR buffer to YUV frame
    @return: frame that contains image data
    */
    AVFrame * MP4Writer::GetVideoFrame( uint8_t * imgData, OutputStream * ost, const int width, const int height )
    {
       int error;
       AVCodecContext * ctx = ost->enc;

       //prepare the frame
       error = av_frame_make_writable( ost->frame );
       if ( error < 0 ) {
           std::cout << "could not make frame writeable" << std::endl;
           return NULL;
       }

       //TODO set this context one time per run, or even better, one time at init
       //convert RGB to YUV
       struct SwsContext* fooContext = sws_getContext( width, height, AV_PIX_FMT_BGR24,
           width, height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL );
       int inLinesize[1] = { 3 * width }; // stride of the packed 24-bit BGR input
       uint8_t * inData[1] = { imgData };
       int sliceHeight = sws_scale( fooContext, inData, inLinesize, 0, height, ost->frame->data, ost->frame->linesize );
       sws_freeContext( fooContext );

       ost->frame->pts = ost->next_pts++;
       //TODO does the frame need to be returned here as it is available at the class level?
       return ost->frame;
    }

    /**
    write frame to file
    @param formatCtx: the output format context
    @param timeBase: the encoder time base used to rescale packet timestamps
    @param stream: output stream
    @param packet: data packet
    @returns: see return values for av_interleaved_write_frame
    */
    int MP4Writer::WriteFrame( AVFormatContext * formatCtx, const AVRational * timeBase, AVStream * stream, AVPacket * packet )
    {
       av_packet_rescale_ts( packet, *timeBase, stream->time_base );
       packet->stream_index = stream->index;

       //write compressed frame to the media file
       return av_interleaved_write_frame( formatCtx, packet );
    }

    int16_t MP4Writer::Write( std::vector<ImgData> & imgData )
    {
       int16_t errorCount = 0;
       int16_t retVal = 0;
       bool countingUp = true;
       size_t i = 0;
       while ( true ) {
           //don't show first frame again when counting back down
           if ( !countingUp && i == 0 ) {
               break;
           }
           uint8_t * pixels = imgData[i].GetBits( imgData[i].mp4Input );
           AddFrame( pixels );

           //handle inc/dec without repeating last frame
           if ( countingUp ) {
               if ( i == imgData.size() -1 ) {
                   countingUp = false;
                   i--;
               }
               else {
                   i++;
               }
           }
           else {
               i--;
           }
       }
       Finalize();
       return 0; //TODO return error code
    }

    /**
    add another frame to output video
    @param imgData: the raw image data
    @returns -1 = error encoding video frame, -2 = error writing frame
    */
    int16_t MP4Writer::AddFrame( uint8_t * imgData )
    {
       int error;
       AVCodecContext * ctx;
       AVFrame * frame;
       int gotPacket = 0;
       AVPacket pkt = { 0 };

       ctx = _ost.enc;
       av_init_packet( &pkt );

       frame = GetVideoFrame( imgData, &_ost, _width, _height );

       //encode the image
       error = avcodec_encode_video2( ctx, &pkt, frame, &gotPacket );
       if ( error < 0 ) {
           std::cout << "there was an error encoding the video frame" << std::endl;
           return -1;
       }

       //write the frame.  NOTE: this doesn't kick in until the encoder has received a certain number of frames
       if ( gotPacket ) {
           error = WriteFrame( _formatCtx, &ctx->time_base, _ost.st, &pkt );
           if ( error < 0 ) {
               std::cout << "the video frame could not be written" << std::endl;
               return -2;
           }
       }
       return 0;
    }

    /**
    finalize output video and cleanup
    */
    int16_t MP4Writer::Finalize()
    {
       av_write_trailer( _formatCtx );
       avcodec_free_context( &_ost.enc );
       av_frame_free( &_ost.frame );
       av_frame_free( &_ost.tmp_frame );
       avio_closep( &_formatCtx->pb );
       avformat_free_context( _formatCtx );
       sws_freeContext( _ost.sws_ctx );
       swr_free( &_ost.swr_ctx );
       return 0;
    }
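
    As a side note on the TODO in GetVideoFrame: the BGR-to-YUV SwsContext does not change between frames, so it could be created once (for example in OpenVideo) and stored in the sws_ctx member that OutputStream already has and that Finalize already frees. A rough sketch of that idea, using the same names as the code above (illustration only, not a tested patch):

    //in OpenVideo, after the codec has been opened: create the converter once
    ost->sws_ctx = sws_getContext( ctx->width, ctx->height, AV_PIX_FMT_BGR24,
        ctx->width, ctx->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL );

    //in GetVideoFrame, only the per-frame conversion then remains
    int inLinesize[1] = { 3 * width };   //stride of the packed 24-bit BGR input
    uint8_t * inData[1] = { imgData };
    sws_scale( ost->sws_ctx, inData, inLinesize, 0, height,
        ost->frame->data, ost->frame->linesize );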

    usage

    #include <FreeImage.h>   //for FIBITMAP and FreeImage_GetBits
    #include "MP4Writer.h"
    #include <vector>

    struct ImgData
    {
       unsigned int width;
       unsigned int height;
       std::string path;
       FIBITMAP * mp4Input;

       uint8_t * GetBits( FIBITMAP * bmp ) { return FreeImage_GetBits( bmp ); }
    };

    int main()
    {
        std::vector<ImgData> imgDataVec;
        //load images and push to imgDataVec
        MP4Writer mp4Writer;
        mp4Writer.SetHeight( 1200 ); //assumes 1:1 aspect ratio
        mp4Writer.Init();
        mp4Writer.SetOutput( "test.mp4" );
        mp4Writer.Write( imgDataVec );
    }
  • Introducing Updates to the Funnels Feature

    29 May 2024, by Erin

    We’ve made improvements to the Funnels feature to be more user-friendly and offer you greater flexibility. 

    Here's what's changing:

    Setting up and managing funnels is now easier than ever 

    Previously, creating funnels was tedious and required going through the Goals feature. But we’ve changed that with the introduction of a separate page to configure funnels. 

    Dedicated Manage Funnels page in Matomo

    Create funnels with greater flexibility—no longer tied to goals 

    Funnels is now a standalone feature, providing you with more flexibility. Before, you could only create a funnel if it was tied to a goal; in other words, the final step in the funnel had to be a goal. What's more, you also couldn't use goals as steps in the funnel.

    Previous configuration requirements of Funnels in Matomo

    Now, funnels are independent of goals, and goals can serve as steps within the funnel. This means you have the freedom to configure any combination of steps in a funnel:

    • All steps can be goals 
    • No steps need to be goals 
    • Or a mix: some steps can be goals and others can be events
    Goals no longer required in Matomo Funnels

    No matter what your customer journey looks like, funnels now offer the versatility to meet your business’s specific needs. 

    Find friction points faster with intuitive visuals 

    One of the most significant improvements is the visual upgrade of the Funnels feature. The new Funnels graph is now visually in line with industry standards and more intuitive to read.

    New Funnel Analytics chart in Matomo

    The new visual provides a clearer view of your drop-off and conversion rates so you can instantly find points of friction in your funnel to improve the user experience and overall conversion rate.  

    This visualisation also provides a detailed overview of the number of visitors who enter, exit, skip, or proceed at each step of your funnel by using different coloured bars for visual clarity on each step’s performance. 

    With this update, we’ve also replaced ‘backfilled visits’ with ‘skipped steps’ to avoid misinterpretation of the data. 

    New data table for more granular insights 

    Accompanying this visual improvement is a new data table, allowing for more granular insights, segment comparison, and easy data export.

    We’ve also increased Funnel analysis limits. You can now compare funnel data for 2 date periods and 6 segments (up to 12 compared datasets in total). 

    Sign up for our newsletter to receive all the latest Matomo updates.