Other articles (102)

  • Automated installation script of MediaSPIP

    25 April 2011, by

    To overcome the difficulties mainly caused by installing server-side software dependencies, an "all-in-one" installation script written in bash was created to ease this step on a server running a compatible Linux distribution.
    To use it, you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your provider if you do not have these.
    The documentation for using this installation script is available here.
    The code of this (...)

  • Possibility of deployment as a farm

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share the setup costs between several projects/individuals; to quickly deploy a multitude of unique sites; and to avoid having to put all creations into a digital catch-all, as is the case with the large general-public platforms scattered across the (...)

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".

On other sites (6740)

  • ffmpeg/libx264 C API: frames dropped from end of short MP4

    19 July 2017, by Blake McConnell

    In my C++ application, I am taking a series of JPEG images, manipulating their data using FreeImage, and then encoding the bitmaps as H264 using the ffmpeg/libx264 C API. The output is an MP4 which shows the series of 22 images at 12fps. My code is adapted from the "muxing" example that comes with the ffmpeg C source code.

    My problem: no matter how I tune the codec parameters, a certain number of the frames passed to the encoder at the end of the sequence do not appear in the final output. I've set the AVCodecContext parameters like this:

    //set context params
    ctx->codec_id = AV_CODEC_ID_H264;
    ctx->bit_rate = 4000 * 1000;
    ctx->width = _width;
    ctx->height = _height;
    ost->st->time_base = AVRational{ 1, 12 };
    ctx->time_base = ost->st->time_base;
    ctx->gop_size = 1;
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;

    I have found that the higher the gop_size, the more frames are dropped from the end of the video. I can also see from the output that, with this gop size (where I'm essentially directing that all output frames be I-frames), only 9 frames are written.

    I'm not sure why this is occurring. I experimented with encoding duplicate frames to make a much longer video, and this resulted in no frames being dropped. I know the ffmpeg command line tool has a concatenation command that accomplishes what I am trying to do, but I'm not sure how to achieve the same result using the C API.
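
    A likely explanation (my assumption, not something stated in the thread) is that libx264 buffers frames internally (lookahead/threading), so the packets for the last input frames only come out once the encoder is drained; if encoding simply stops after the last frame, those packets never reach the muxer. A minimal drain sketch under that assumption, reusing the _ost, _formatCtx and WriteFrame names from the code below and the same avcodec_encode_video2 API, would run just before av_write_trailer:

    //hypothetical flush step for Finalize(): keep passing NULL frames until the
    //encoder reports that no more packets are pending
    int gotPacket = 0;
    do {
        AVPacket pkt = { 0 };
        av_init_packet( &pkt );
        if ( avcodec_encode_video2( _ost.enc, &pkt, NULL, &gotPacket ) < 0 ) {
            break; //error while draining
        }
        if ( gotPacket ) {
            //same rescale/interleave path as AddFrame() uses
            WriteFrame( _formatCtx, &_ost.enc->time_base, _ost.st, &pkt );
            av_packet_unref( &pkt );
        }
    } while ( gotPacket );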

    Here's the output I'm getting from the console:

    [libx264 @ 026d81c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 026d81c0] profile High, level 3.1
    [libx264 @ 026d81c0] 264 - core 152 r2851 ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=1 keyint_min=1 scenecut=40 intra_refresh=0 rc=abr mbtree=0 bitrate=4000 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to '....\images\c411a991-46f6-400c-8bb0-77af3738559a.mp4':
        Stream #0:0: Video: h264, yuv420p, 700x700, q=2-31, 4000 kb/s, 12 tbn

    [libx264 @ 026d81c0] frame I:9     Avg QP:17.83  size:111058
    [libx264 @ 026d81c0] mb I  I16..4:  1.9% 47.7% 50.5%
    [libx264 @ 026d81c0] final ratefactor: 19.14
    [libx264 @ 026d81c0] 8x8 transform intra:47.7%
    [libx264 @ 026d81c0] coded y,uvDC,uvAC intra: 98.4% 96.9% 89.5%
    [libx264 @ 026d81c0] i16 v,h,dc,p: 64%  6%  2% 28%
    [libx264 @ 026d81c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 15%  9%  5%  5%  6%  8% 10% 10%
    [libx264 @ 026d81c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 18%  7%  6%  8%  8%  8%  9%  8%
    [libx264 @ 026d81c0] i8c dc,h,v,p: 43% 22% 25% 10%
    [libx264 @ 026d81c0] kb/s:10661.53

    Code included below:

    MP4Writer.h

    #ifndef MPEG_WRITER
    #define MPEG_WRITER

    #include <iostream>
    #include <string>
    #include <vector>
    #include <cstdint>
    extern "C" {
       #include <libavformat/avformat.h>
       #include <libswscale/swscale.h>
       #include <libswresample/swresample.h>
    }

    typedef struct OutputStream
    {
       AVStream *st;
       AVCodecContext *enc;

       //pts of the next frame that will be generated
       int64_t next_pts;
       int samples_count;

       AVFrame *frame;
       AVFrame *tmp_frame;

       float t, tincr, tincr2;

       struct SwsContext *sws_ctx;
       struct SwrContext *swr_ctx;
    };

    class MP4Writer {
       public:
           MP4Writer();
           void Init();
           int16_t SetOutput( const std::string & path );
           int16_t AddFrame( uint8_t * imgData );
           int16_t Write( std::vector<ImgData> & imgData );
           int16_t Finalize();
           void SetHeight( const int height ) { _height = _width = height; } //assuming 1:1 aspect ratio

       private:
           int16_t AddStream( OutputStream * ost, AVFormatContext * formatCtx, AVCodec ** codec, enum AVCodecID codecId );
           int16_t OpenVideo( AVFormatContext * formatCtx, AVCodec *codec, OutputStream * ost, AVDictionary * optArg );
           static AVFrame * AllocPicture( enum AVPixelFormat pixFmt, int width, int height );
           static AVFrame * GetVideoFrame( uint8_t * imgData, OutputStream * ost, const int width, const int height );
           static int WriteFrame( AVFormatContext * formatCtx, const AVRational * timeBase, AVStream * stream, AVPacket * packet );

           int _width;
           int _height;
           OutputStream _ost;
           AVFormatContext * _formatCtx;
           AVDictionary * _dict;
    };

    #endif //MPEG_WRITER

    MP4Writer.cpp

    #include "MP4Writer.h"
    #include <algorithm>

    MP4Writer::MP4Writer()
    {
       _width = 0;
       _height = 0;
    }

    void MP4Writer::Init()
    {
       av_register_all();
    }

    /**
    sets up output stream for the specified path.
    note that the output format is deduced automatically from the file extension passed
    @param path: output file path
    @returns: -1 = output could not be deduced, -2 = invalid codec, -3 = error opening output file,
              -4 = error writing header
    */
    int16_t MP4Writer::SetOutput( const std::string & path )
    {
       int error;
       AVCodec * codec;
       AVOutputFormat * format;

       _ost = OutputStream{}; //TODO reset state in a more focused way?

       //allocate output media context
       avformat_alloc_output_context2( &_formatCtx, NULL, NULL, path.c_str() );
       if ( !_formatCtx ) {
           std::cout << "could not deduce output format from file extension.  aborting" << std::endl;
           return -1;
       }
       //set format
       format = _formatCtx->oformat;
       if ( format->video_codec != AV_CODEC_ID_NONE ) {
           AddStream( &_ost, _formatCtx, &codec, format->video_codec );
       }
       else {
           std::cout << "there is no video codec set.  aborting" << std::endl;
           return -2;
       }

       OpenVideo( _formatCtx, codec, &_ost, _dict );

       av_dump_format( _formatCtx, 0, path.c_str(), 1 );

       //open output file
       if ( !( format->flags & AVFMT_NOFILE )) {
           error = avio_open( &_formatCtx->pb, path.c_str(), AVIO_FLAG_WRITE );
           if ( error < 0 ) {
               std::cout << "there was an error opening output file " << path << ".  aborting" << std::endl;
               return -3;
           }
       }

       //write header
       error = avformat_write_header( _formatCtx, &_dict );
       if ( error < 0 ) {
           std::cout << "an error occurred writing header. aborting" << std::endl;
           return -4;
       }

       return 0;
    }

    /**
    initialize the output stream
    @param ost: the output stream
    @param formatCtx: the format context
    @param codec: the output codec
    @param codecId: the ffmpeg enumerated id of the codec
    @returns: -1 = encoder not found, -2 = stream could not be allocated, -3 = encoding context could not be allocated
    */
    int16_t MP4Writer::AddStream( OutputStream * ost, AVFormatContext * formatCtx, AVCodec ** codec, enum AVCodecID codecId )
    {
       AVCodecContext * ctx; //TODO not sure why this is here, could just set ost->enc directly
       int i;

       //detect the encoder
       *codec = avcodec_find_encoder( codecId );
       if ( (*codec) == NULL ) {
           std::cout << "could not find encoder.  aborting" << std::endl;
           return -1;
       }

       //allocate stream
       ost->st = avformat_new_stream( formatCtx, NULL );
       if ( ost->st == NULL ) {
           std::cout << "could not allocate stream.  aborting" << std::endl;
           return -2;
       }

       //allocate encoding context
       ost->st->id = formatCtx->nb_streams - 1;
       ctx = avcodec_alloc_context3( *codec );
       if ( ctx == NULL ) {
           std::cout << "could not allocate encoding context.  aborting" << std::endl;
           return -3;
       }

       ost->enc = ctx;

       //set context params
       ctx->codec_id = AV_CODEC_ID_H264;
       ctx->bit_rate = 4000 * 1000;
       ctx->width = _width;
       ctx->height = _height;
       ost->st->time_base = AVRational{ 1, 12 };
       ctx->time_base = ost->st->time_base;
       ctx->gop_size = 1;
       ctx->pix_fmt = AV_PIX_FMT_YUV420P;

       //if neccesary, set stream headers and formats separately
       if ( formatCtx->oformat->flags & AVFMT_GLOBALHEADER ) {
           std::cout << "setting stream and headers to be separate" << std::endl;
           ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }

       return 0;
    }

    /**
    open the video for writing
    @param formatCtx: the format context
    @param codec: output codec
    @param ost: output stream
    @param optArg: dictionary
    @return: -1 = error opening codec, -2 = allocate new frame, -3 = copy stream params
    */
    int16_t MP4Writer::OpenVideo( AVFormatContext * formatCtx, AVCodec *codec, OutputStream * ost, AVDictionary * optArg )
    {
       int error;
       AVCodecContext * ctx = ost->enc;
       AVDictionary * dict = NULL;
       av_dict_copy( &dict, optArg, 0 );

       //open codec
       error = avcodec_open2( ctx, codec, &dict );
       av_dict_free( &dict );
       if ( error < 0 ) {
           std::cout << "there was an error opening the codec.  aborting" << std::endl;
           return -1;
       }

       //allocate new frame
       ost->frame = AllocPicture( ctx->pix_fmt, ctx->width, ctx->height );
       if ( ost->frame == NULL ) {
           std::cout << "there was an error allocating a new frame.  aborting" << std::endl;
           return -2;
       }

       //copy stream params
       error = avcodec_parameters_from_context( ost->st->codecpar, ctx );
       if ( error < 0 ) {
           std::cout << "could not copy stream parameters.  aborting" << std::endl;
           return -3;
       }

       return 0;
    }

    /**
    allocate a new frame
    @param pixFmt: ffmpeg enumerated pixel format
    @param width: output width
    @param height: output height
    @returns: an initialized frame
    */
    AVFrame * MP4Writer::AllocPicture( enum AVPixelFormat pixFmt, int width, int height )
    {
       AVFrame * picture;
       int error;

       //allocate the frame
       picture = av_frame_alloc();
       if ( picture == NULL ) {
           std::cout << "there was an error allocating the picture" << std::endl;
           return NULL;
       }

       picture->format = pixFmt;
       picture->width = width;
       picture->height = height;

       //allocate the frame's data buffer
       error = av_frame_get_buffer( picture, 32 );
       if ( error < 0 ) {
           std::cout << "could not allocate frame data" << std::endl;
           return NULL;
       }
       picture->pts = 0;
       return picture;
    }

    /**
    convert raw RGB buffer to YUV frame
    @return: frame that contains image data
    */
    AVFrame * MP4Writer::GetVideoFrame( uint8_t * imgData, OutputStream * ost, const int width, const int height )
    {
       int error;
       AVCodecContext * ctx = ost->enc;

       //prepare the frame
       error = av_frame_make_writable( ost->frame );
       if ( error < 0 ) {
           std::cout << "could not make frame writeable" << std::endl;
           return NULL;
       }

       //TODO set this context one time per run, or even better, one time at init
       //convert RGB to YUV
       struct SwsContext* fooContext = sws_getContext( width, height, AV_PIX_FMT_BGR24,
           width, height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL );
       int inLinesize[1] = { 3 * width }; // RGB stride
       uint8_t * inData[1] = { imgData };
       int sliceHeight = sws_scale( fooContext, inData, inLinesize, 0, height, ost->frame->data, ost->frame->linesize );
       sws_freeContext( fooContext );

       ost->frame->pts = ost->next_pts++;
       //TODO does the frame need to be returned here as it is available at the class level?
       return ost->frame;
    }

    /**
    write frame to file
    @param formatCtx: the output format context
    @param timeBase: the framerate
    @param stream: output stream
    @param packet: data packet
    @returns: see return values for av_interleaved_write_frame
    */
    int MP4Writer::WriteFrame( AVFormatContext * formatCtx, const AVRational * timeBase, AVStream * stream, AVPacket * packet )
    {
       av_packet_rescale_ts( packet, *timeBase, stream->time_base );
       packet->stream_index = stream->index;

       //write compressed file to media file
       return av_interleaved_write_frame( formatCtx, packet );
    }

    int16_t MP4Writer::Write( std::vector<ImgData> & imgData )
    {
       int16_t errorCount = 0;
       int16_t retVal = 0;
       bool countingUp = true;
       size_t i = 0;
       while ( true ) {
           //don't show first frame again when counting back down
           if ( !countingUp && i == 0 ) {
               break;
           }
           uint8_t * pixels = imgData[i].GetBits( imgData[i].mp4Input );
           AddFrame( pixels );

           //handle inc/dec without repeating last frame
           if ( countingUp ) {
               if ( i == imgData.size() -1 ) {
                   countingUp = false;
                   i--;
               }
               else {
                   i++;
               }
           }
           else {
               i--;
           }
       }
       Finalize();
       return 0; //TODO return error code
    }

    /**
    add another frame to output video
    @param imgData: the raw image data
    @returns -1 = error encoding video frame, -2 = error writing frame
    */
    int16_t MP4Writer::AddFrame( uint8_t * imgData )
    {
       int error;
       AVCodecContext * ctx;
       AVFrame * frame;
       int gotPacket = 0;
       AVPacket pkt = { 0 };

       ctx = _ost.enc;
       av_init_packet( &pkt );

       frame = GetVideoFrame( imgData, &_ost, _width, _height );

       //encode the image
       error = avcodec_encode_video2( ctx, &pkt, frame, &gotPacket );
       if ( error < 0 ) {
           std::cout << "there was an error encoding the video frame" << std::endl;
           return -1;
       }

       //write the frame.  NOTE: this doesn't kick in until the encoder has received a certain number of frames
       if ( gotPacket ) {
           error = WriteFrame( _formatCtx, &ctx->time_base, _ost.st, &pkt );
           if ( error < 0 ) {
               std::cout << "the video frame could not be written" << std::endl;
               return -2;
           }
       }
       return 0;
    }

    /**
    finalize output video and cleanup
    */
    int16_t MP4Writer::Finalize()
    {
       av_write_trailer( _formatCtx );
       avcodec_free_context( &_ost.enc );
       av_frame_free( &_ost.frame );
       av_frame_free( &_ost.tmp_frame );
       avio_closep( &_formatCtx->pb );
       avformat_free_context( _formatCtx );
       sws_freeContext( _ost.sws_ctx );
       swr_free( &_ost.swr_ctx );
       return 0;
    }

    usage

    #include "MP4Writer.h"
    #include <FreeImage.h>
    #include <vector>

    struct ImgData
    {
       unsigned int width;
       unsigned int height;
       std::string path;
       FIBITMAP * mp4Input;

       uint8_t * GetBits( FIBITMAP * bmp ) { return FreeImage_GetBits( bmp ); }
    };

    int main()
    {
        std::vector<ImgData> imgDataVec;
        //load images and push to imgDataVec
        MP4Writer mp4Writer;
        mp4Writer.SetHeight( 1200 ); //assumes 1:1 aspect ratio
        mp4Writer.Init();
        mp4Writer.SetOutput( "test.mp4" );
        mp4Writer.Write( imgDataVec );
    }
  • Create thumbnail from a video directory using ffmpeg

    21 July 2017, by Shimon Wiener

    I have a project that requires me to produce a thumbnail from each of 2000 videos. I researched tools on the net and found that ffmpeg can do it; however, I could not find a sample that demonstrates how to work on a directory of videos and create a thumbnail for every one of them. Can anyone point me to a good sample?
    Thanks, Shimon
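
    Not part of the original thread, but a rough sketch of one way to approach it: iterate the directory with C++17 std::filesystem and shell out to the ffmpeg CLI once per file. The "videos" folder name, the .mp4 filter and the -ss 5 / -vframes 1 choices are assumptions to adjust.

    #include <cstdlib>
    #include <filesystem>
    #include <string>

    int main()
    {
        namespace fs = std::filesystem;
        for ( const auto &entry : fs::directory_iterator( "videos" ) ) { //hypothetical input folder
            if ( entry.path().extension() != ".mp4" ) continue;          //adjust for other containers
            //grab one frame 5 seconds in and write it to the current working directory as <name>.jpg
            std::string cmd = "ffmpeg -y -ss 5 -i \"" + entry.path().string() +
                              "\" -vframes 1 \"" + entry.path().stem().string() + ".jpg\"";
            std::system( cmd.c_str() );
        }
    }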

  • Analytics for ePortfolios, Mahara hui conference

    20 March 2014, by Matthieu Aubry — Community, Meta

    I was privileged to present at the Mahara Hui conference in Wellington, New Zealand.

    Here are the slides of my presentation “Analytics for ePortfolios”:

    Summary: by using an analytics tool that integrates well with Mahara, such as Piwik, Mahara users can benefit from a multitude of insightful analytics reports.

    Learn more

    Mahara is a web application to build your electronic portfolio. You can create journals, upload files, embed social media resources from the web and collaborate with other users in groups. Mahara is a popular open source project built by a passionate community, and used in universities, schools and companies all over the world.

    Mahara Hui is the first kiwi conference on Mahara, the open source ePortfolio system, in New Zealand. This 2-day conference was held at Te Papa in Wellington from 19 to 20 March 2014 (schedule).

    Next steps

    I’m excited to join the Mahara team at the Mahara Hui Hackfest organised today at Catalyst IT offices. We will brainstorm how to integrate Piwik beautifully within Mahara, and how to ultimately provide students and employees useful analytics on all the content they create!