
Other articles (106)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

On other sites (10714)

  • Query on ffmpeg's concat demuxer parameters to end segments at outpoint timestamps and retain most properties the same as the input (fps, time_base, etc.)

    17 October 2024, by code

    To concatenate video files with matching fps, codec, resolution and time_base (in fact, some are segments of the same video), the ffmpeg concat demuxer approach has been used, so as to avoid re-encoding. But ffmpeg did not stop concatenating segments at the outpoint time specified in the demuxer's input text file!

    (Note: ffmpeg has been invoked programmatically from Python via a subprocess.)

    


    The input.txt file for the concat demuxer contained:

    


    ffconcat version 1.0
file 'numbered.mp4'
inpoint  2.083333
outpoint 2.166667



    


    (The timestamps provided above run from the 50th frame (pts 49 in the video, a keyframe) to the 52nd frame (pts 51, a non-key frame), i.e. 3 consecutive frames; the inpoints given in this query fall on keyframes.)

    


    (The video file referred to, i.e. 'numbered.mp4', displays its frame number on each frame; it is H.264 High profile at 480p, 10 seconds long with 240 frames at 24 fps, <1 MB in size, shared at: https://filetransfer.io/data-package/F5caZ0xT#link)

    The ffmpeg command invoked programmatically, with parameters:

    ffmpeg -f concat -safe 0 -fflags +genpts -i input.txt -c copy -video_track_timescale 24

    The output snippet contained:

    frame= 5 fps=0.0 q=-1.0 Lsize= 28KiB time=00:00:00.12 bitrate=1857.8kbits/s speed= 56x

    Problem 1: It shows there are 5 frames! ffmpeg has concatenated 2 more frames beyond the outpoint timestamp!

    Query 1): What are the right parameter values to give the ffmpeg concat demuxer (or to put in the concat demuxer input file) so that it concatenates segments accurately up to the frame matching the outpoint timestamp, without overshooting or pulling in frames beyond it?
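
    For context, keyframe timestamps like the inpoints above can be listed with ffprobe; a command along these lines reports only the keyframes (a sketch: -skip_frame nokey makes the decoder skip non-key frames, and frame entry names can vary between ffprobe versions):

    ffprobe -select_streams v:0 -skip_frame nokey -show_entries frame=pts,pts_time -of csv numbered.mp4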

    Problem 2: When another segment from the same input file is referenced in the concat demuxer input file, frame pts and timestamps were messed up in the resulting output!

    The concat demuxer input file content (updated):

    ffconcat version 1.0
    file 'numbered.mp4'
    inpoint  0
    outpoint 0.125000
    file 'numbered.mp4'
    inpoint  2.083333
    outpoint 2.166667

    The command invoked was:

    ffmpeg -f concat -safe 0 -fflags +genpts -i input.txt -c copy -video_track_timescale 24

    ffprobe output (on the above output file, edited to reduce size):

    key_frame=1, pts=0
        pts_time=0.000000
    key_frame=0, pts=1
        pts_time=0.041667
    key_frame=0, pts=2
        pts_time=0.083333
    key_frame=0, pts=3
        pts_time=0.125000
    key_frame=0, pts=4
        pts_time=0.166667
    key_frame=1, pts=3
        pts_time=0.125000
    key_frame=0, pts=6
        pts_time=0.250000
    key_frame=0, pts=5
        pts_time=0.208333
    key_frame=0, pts=7
        pts_time=0.291667
    key_frame=0, pts=7
        pts_time=0.291667

    This confirms that both pts and pts_time values are messed up (even though the segments referred to in the demuxer input file were several frames apart, with no overlap).

    Query 2): What are the accurate parameters to concatenate the segments represented by the demuxer input file without causing these pts or pts_time issues?

    (In this test, all segments referred to by the demuxer have the same parameters and are different segments of the same video file, so a mismatch in codec parameters cannot be the cause.)

    Problem 3: While the input video had a bitrate of 412654 (412.654 kbps), the concat demuxer produced an output file with a bitrate of 1318777 (1.318 Mbps), over 3x the input bitrate.

    Query 3): What are the accurate parameters to retain (almost) all codec parameters from the input video and only perform the concatenation, without modifying the time_base or frame rate?

    Note: when the -video_track_timescale 24 parameter is not provided to the concat demuxer, the time_base in the resulting output was a different value (1/1000) instead of the input files' time_base of 1/24!

    (When the pts times are messed up, "Non-monotonic DTS" errors were observed in the output: [vost#0:0/copy @ 000002c1b9b41140] Non-monotonic DTS; previous: 2, current: 1; changing to 3. This may result in incorrect timestamps in the output file.)

    To clarify, the reason for using the concat demuxer is to avoid re-encoding the video; the final usage will be to concatenate some segments of the input video file with a few more video files, all having the same codec parameters (fps, resolution, time_base, etc.).
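
    As a sketch of that final usage (the extra clip names below are hypothetical), the demuxer input file would mix trimmed segments of the input video with additional whole files:

    ffconcat version 1.0
    file 'numbered.mp4'
    inpoint  0
    outpoint 0.125000
    # hypothetical extra clips with matching codec parameters
    file 'clip_a.mp4'
    file 'clip_b.mp4'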

    Query 4): Is it frame-accurate to take pts_time values from ffprobe output and use them as inpoint/outpoint values in the ffmpeg concat demuxer input file?

    (As ffprobe pts_time values might be aligned with ffmpeg's expectations, the idea was to take pts_time values from ffprobe output rather than venturing into computing frame start times by hand.)
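
    For illustration, per-frame values of the kind quoted above can be pulled with an ffprobe invocation along these lines (a sketch; the entry names key_frame/pts/pts_time match the ffprobe output shown earlier, though they can vary between ffprobe versions):

    ffprobe -select_streams v:0 -show_entries frame=key_frame,pts,pts_time -of csv numbered.mp4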

    The small (<1 MB) input video file used in this test has been shared at: https://filetransfer.io/data-package/F5caZ0xT#link

    The input video file's ffprobe output is pasted below (trimmed to save space):

    "codec_name": "h264",
    "codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
    "profile": "High",
    "codec_type": "video",
    "codec_tag_string": "avc1",
    "codec_tag": "0x31637661",
    "width": 640,
    "height": 480,
    "coded_width": 640,
    "coded_height": 480,
    "closed_captions": 0,
    "film_grain": 0,
    "has_b_frames": 2,
    "sample_aspect_ratio": "1:1",
    "display_aspect_ratio": "4:3",
    "pix_fmt": "yuv420p",
    "level": 41,
    "color_range": "tv",
    "color_space": "smpte170m",
    "chroma_location": "left",
    "field_order": "progressive",
    "refs": 1,
    "is_avc": "true",
    "nal_length_size": "4",
    "id": "0x1",
    "r_frame_rate": "24/1",
    "avg_frame_rate": "24/1",
    "time_base": "1/24",
    "start_pts": 0,
    "start_time": "0.000000",
    "duration_ts": 240,
    "duration": "10.000000",
    "bit_rate": "409628",
    "bits_per_raw_sample": "8",
    "nb_frames": "240",
    "extradata_size": 49,

    I searched quite a lot online for a solution, but the searches did not turn up a fix for this concat demuxer situation. I seek helpful answers to the queries presented above. Thanks, all.

    (While some workarounds, such as converting each segment to raw h264 and then applying the timescale again, are suggested in other discussions, in the current scenario there is only one input video file, so it appears that accurate parameters for the ffmpeg concat demuxer method are what is needed, and they would help others facing a similar issue.)


  • Decoding VP8 On A Sega Dreamcast

    20 February 2011, by Multimedia Mike — Sega Dreamcast, VP8

    I got Google’s libvpx VP8 codec library to compile and run on the Sega Dreamcast with its Hitachi/Renesas SH-4 200 MHz CPU. So give Google/On2 their due credit for writing portable software. I’m not sure how best to illustrate this so please accept this still photo depicting my testbench Dreamcast console driving video to my monitor:



    Why? Because I wanted to try my hand at porting some existing software to this console and because I tend to be most comfortable working with assorted multimedia software components. This seemed like it would be a good exercise.

    You may have observed that the video is blue. Shortest, simplest answer: Pure laziness. Short, technical answer: Path of least resistance for getting through this exercise. Longer answer follows.

    Update: I did eventually realize that the Dreamcast can work with YUV textures. Read more in my followup post.

    Process and Pitfalls
    libvpx comes with a number of little utilities including decode_to_md5.c. The first order of business was porting over enough source files to make the VP8 decoder compile along with the MD5 testbench utility.

    Again, I used the KallistiOS (KOS) console RTOS (aside: I’m still working to get modern Linux kernels compiled for the Dreamcast). I started by configuring and compiling libvpx on a regular desktop Linux system. From there, I was able to modify a number of configuration options to make the build more amenable to the embedded RTOS.

    I had to create a few shim header files that mapped various functions related to threading and synchronization to their KOS equivalents. For example, KOS has a threading library cleverly named kthreads, which is mostly compatible with the more common pthread library functions. KOS apparently also predates stdint.h, so I had to contrive a file with those basic types.
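
    For illustration, the contrived stdint-style header for the 32-bit SH-4 might look something like this (a sketch under the usual ILP32 assumptions, not the exact file I wrote):

        /* Sketch of a stdint.h shim for KOS on SH-4 (assumes ILP32 sizes). */
        typedef signed char        int8_t;
        typedef unsigned char      uint8_t;
        typedef signed short       int16_t;
        typedef unsigned short     uint16_t;
        typedef signed int         int32_t;
        typedef unsigned int       uint32_t;
        typedef signed long long   int64_t;
        typedef unsigned long long uint64_t;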

    So I got everything compiled and then uploaded the binary along with a small VP8 IVF test vector. Imagine my surprise when an MD5 sum came out of the serial console. Further, visualize my utter speechlessness when I noticed that the MD5 sum matched what my desktop platform produced. It worked!

    Almost. When I tried to decode all frames in a test vector, the program would invariably crash. The problem was that the file that manages motion compensation (reconinter.c) needs to define MUST_BE_ALIGNED, which compiles byte-wise block copy functions. This is necessary for CPUs like the SH-4 which can’t load unaligned data. Apparently, even ARM CPUs these days can handle unaligned memory accesses, which is why this isn’t a configure-time option.
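
    To illustrate the distinction (illustrative code, not libvpx’s actual implementation): a word-at-a-time copy dereferences potentially unaligned 32-bit pointers, which the SH-4 traps on, while the byte-wise fallback that MUST_BE_ALIGNED selects is safe at any address:

        #include <stdint.h>
        #include <stddef.h>

        /* Word-at-a-time copy: fast, but dereferencing an unaligned
           uint32_t pointer faults on CPUs like the SH-4. */
        static void copy_words(uint8_t *dst, const uint8_t *src, size_t n)
        {
            while (n >= 4) {
                *(uint32_t *)dst = *(const uint32_t *)src; /* unaligned trap risk */
                dst += 4; src += 4; n -= 4;
            }
            while (n--) *dst++ = *src++;
        }

        /* Byte-wise copy: works at any alignment; this is the kind of
           fallback that MUST_BE_ALIGNED compiles in. */
        static void copy_bytes(uint8_t *dst, const uint8_t *src, size_t n)
        {
            while (n--) *dst++ = *src++;
        }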

    Showing The Work
    I completed the first testbench application which ran the MD5 test on all 17 official IVF test vectors. The SH-4/Dreamcast version aces the whole suite.

    However, this is a video game console, so I had better be able to show the decoded video. The Dreamcast is strictly RGB— forget about displaying YUV data directly. I could take the performance hit to convert YUV -> RGB. Or, I could just display the intensity information (Y plane) rendered on a random color scale (I chose blue) on an RGB565 texture (the DC’s graphics hardware can also do paletted textures but those need to be rearranged/twiddled/swizzled).
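
    Here is a sketch of that blue-scale shortcut (illustrative, not my actual code), assuming a packed RGB565 texture where blue occupies the low 5 bits:

        #include <stdint.h>

        /* Render the decoded Y (intensity) plane onto a blue-only RGB565
           texture. RGB565 layout: RRRRRGGG GGGBBBBB, so blue is bits 0-4.
           Each 8-bit luma sample is reduced to 5 bits and written as blue. */
        static void y_plane_to_blue_rgb565(uint16_t *texture, int tex_stride,
                                           const uint8_t *y_plane, int y_stride,
                                           int width, int height)
        {
            for (int row = 0; row < height; row++) {
                for (int col = 0; col < width; col++) {
                    uint8_t y = y_plane[row * y_stride + col];
                    texture[row * tex_stride + col] = y >> 3; /* 5-bit blue */
                }
            }
        }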

    Results
    So, can the Dreamcast decode VP8 video in realtime? Sure! Well, I really need to qualify that. In the test depicted in the picture, it seems to be realtime (though I wasn’t enforcing proper frame timings, just decoding and displaying as quickly as possible). Obviously, I wasn’t bothering to properly convert YUV -> RGB. Plus, that Big Buck Bunny test vector clip is only 176x144. Obviously, no audio decoding either.

    So, realtime playback, with a little fine print.

    On the plus side, it’s trivial to get the Dreamcast video hardware to upscale that little blue image to fullscreen.

    I was able to tally the total milliseconds’ worth of wall clock time required to decode the 17 VP8 test vectors. As you can probably work out from this list, when I try to play a 320x240 video, things start to break down.

    1. Processed 29 176x144 frames in 987 milliseconds.
    2. Processed 49 176x144 frames in 1809 milliseconds.
    3. Processed 49 176x144 frames in 704 milliseconds.
    4. Processed 29 176x144 frames in 255 milliseconds.
    5. Processed 49 176x144 frames in 339 milliseconds.
    6. Processed 48 175x143 frames in 2446 milliseconds.
    7. Processed 29 176x144 frames in 432 milliseconds.
    8. Processed 2 1432x888 frames in 2060 milliseconds.
    9. Processed 49 176x144 frames in 1884 milliseconds.
    10. Processed 57 320x240 frames in 5792 milliseconds.
    11. Processed 29 176x144 frames in 989 milliseconds.
    12. Processed 29 176x144 frames in 740 milliseconds.
    13. Processed 29 176x144 frames in 839 milliseconds.
    14. Processed 49 175x143 frames in 2849 milliseconds.
    15. Processed 260 320x240 frames in 29719 milliseconds.
    16. Processed 29 176x144 frames in 962 milliseconds.
    17. Processed 29 176x144 frames in 933 milliseconds.
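
    Working those numbers out: the 176x144 vectors decode comfortably faster than realtime (e.g., #5 comes to 49 frames in 339 milliseconds, roughly 145 frames/second), while the 320x240 vectors fall well short: #10 works out to about 9.8 frames/second and #15 to about 8.7 frames/second, well below typical playback rates.
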
  • ffmpeg avcodec_encode_video2 hangs when using Quick Sync h264_qsv encoder

    11 January 2017, by Mike Simpson

    When I use the mpeg4 or h264 encoders, I am able to successfully encode images to make a valid AVI file using the API for ffmpeg 3.1.0. However, when I use the Quick Sync encoder (h264_qsv), avcodec_encode_video2 will hang some of the time. I found that when using images that are 1920x1080, it was rare that avcodec_encode_video2 would hang. When using 256x256 images, it was very likely that the function would hang.

    I have created the test code below that demonstrates the hang of avcodec_encode_video2. The code will create a 1000 frame, 256x256 AVI with a bit rate of 400000. The frames are simply allocated, so the output video should just be green frames.

    The problem was observed using Windows 7 and Windows 10, using the 32-bit or 64-bit test application.

    If anyone has any idea on how I can avoid the avcodec_encode_video2 hang I would be very grateful ! Thanks in advance for any assistance.

    extern "C"
    {
    #ifndef __STDC_CONSTANT_MACROS
    #define __STDC_CONSTANT_MACROS
    #endif
    #include "avcodec.h"
    #include "avformat.h"
    #include "swscale.h"
    #include "avutil.h"
    #include "imgutils.h"
    #include "opt.h"
    #include
    }

    #include <iostream>


    // Globals
    AVCodec* m_pCodec = NULL;
    AVStream *m_pStream = NULL;
    AVOutputFormat* m_pFormat = NULL;
    AVFormatContext* m_pFormatContext = NULL;
    AVCodecContext* m_pCodecContext = NULL;
    AVFrame* m_pFrame = NULL;
    int m_frameIndex;

    // Output format
    AVPixelFormat m_pixType = AV_PIX_FMT_NV12;
    // Use for mpeg4
    //AVPixelFormat m_pixType = AV_PIX_FMT_YUV420P;

    // Output frame rate
    int m_frameRate = 30;
    // Output image dimensions
    int m_imageWidth = 256;
    int m_imageHeight = 256;
    // Number of frames to export
    int m_frameCount = 1000;
    // Output file name
    const char* m_fileName = "c:/test/test.avi";
    // Output file type
    const char* m_fileType = "AVI";
    // Codec name used to encode
    const char* m_encoderName = "h264_qsv";
    // use for mpeg4
    //const char* m_encoderName = "mpeg4";
    // Target bit rate
    int m_targetBitRate = 400000;

    void addVideoStream()
    {
       m_pStream = avformat_new_stream( m_pFormatContext, m_pCodec );
       m_pStream->id = m_pFormatContext->nb_streams - 1;
       m_pStream->time_base = m_pCodecContext->time_base;
       m_pStream->codec->pix_fmt = m_pixType;
       m_pStream->codec->flags = m_pCodecContext->flags;
       m_pStream->codec->width = m_pCodecContext->width;
       m_pStream->codec->height = m_pCodecContext->height;
       m_pStream->codec->time_base = m_pCodecContext->time_base;
       m_pStream->codec->bit_rate = m_pCodecContext->bit_rate;
    }

    AVFrame* allocatePicture( enum AVPixelFormat pix_fmt, int width, int height )
    {
       AVFrame *frame;

       frame = av_frame_alloc();

       if ( !frame )
       {
           return NULL;
       }

       frame->format = pix_fmt;
       frame->width  = width;
       frame->height = height;

       int checkImage = av_image_alloc( frame->data, frame->linesize, width, height, pix_fmt, 32 );

       if ( checkImage < 0 )
       {
           return NULL;
       }

       return frame;
    }

    bool initialize()
    {
       AVRational frameRate;
       frameRate.den = m_frameRate;
       frameRate.num = 1;

       av_register_all();

       m_pCodec = avcodec_find_encoder_by_name(m_encoderName);

       if( !m_pCodec )
       {
           return false;
       }

       m_pCodecContext = avcodec_alloc_context3( m_pCodec );
       m_pCodecContext->width = m_imageWidth;
       m_pCodecContext->height = m_imageHeight;
       m_pCodecContext->time_base = frameRate;
       m_pCodecContext->gop_size = 0;
       m_pCodecContext->pix_fmt = m_pixType;
       m_pCodecContext->codec_id = m_pCodec->id;
       m_pCodecContext->bit_rate = m_targetBitRate;

       av_opt_set( m_pCodecContext->priv_data, "+CBR", "", 0 );

       return true;
    }

    bool startExport()
    {
       m_frameIndex = 0;
       char fakeFileName[512];
       int checkAllocContext = avformat_alloc_output_context2( &m_pFormatContext, NULL, m_fileType, fakeFileName );

       if ( checkAllocContext < 0 )
       {
           return false;
       }

       if ( !m_pFormatContext )
       {
           return false;
       }

       m_pFormat = m_pFormatContext->oformat;

       if ( m_pFormat->video_codec != AV_CODEC_ID_NONE )
       {
           addVideoStream();

           int checkOpen = avcodec_open2( m_pCodecContext, m_pCodec, NULL );

           if ( checkOpen < 0 )
           {
               return false;
           }

           m_pFrame = allocatePicture( m_pCodecContext->pix_fmt, m_pCodecContext->width, m_pCodecContext->height );                
           if( !m_pFrame )
           {
               return false;
           }
           m_pFrame->pts = 0;
       }

       int checkOpen = avio_open( &m_pFormatContext->pb, m_fileName, AVIO_FLAG_WRITE );
       if ( checkOpen < 0 )
       {
           return false;
       }

       av_dict_set( &(m_pFormatContext->metadata), "title", "QS Test", 0 );

       int checkHeader = avformat_write_header( m_pFormatContext, NULL );
       if ( checkHeader < 0 )
       {
           return false;
       }

       return true;
    }

    int processFrame( AVPacket&amp; avPacket )
    {
       avPacket.stream_index = 0;
       avPacket.pts = av_rescale_q( m_pFrame->pts, m_pStream->codec->time_base, m_pStream->time_base );
       avPacket.dts = av_rescale_q( m_pFrame->pts, m_pStream->codec->time_base, m_pStream->time_base );
       m_pFrame->pts++;

       int retVal = av_interleaved_write_frame( m_pFormatContext, &avPacket );
       return retVal;
    }

    bool exportFrame()
    {
       int success = 1;
       int result = 0;

       AVPacket avPacket;

       av_init_packet( &avPacket );
       avPacket.data = NULL;
       avPacket.size = 0;

       fflush(stdout);

       std::cout << "Before avcodec_encode_video2 for frame: " << m_frameIndex << std::endl;
       success = avcodec_encode_video2( m_pCodecContext, &avPacket, m_pFrame, &result );
       std::cout << "After avcodec_encode_video2 for frame: " << m_frameIndex << std::endl;

       if( result )
       {
           success = processFrame( avPacket );
       }

       av_packet_unref( &avPacket );

       m_frameIndex++;
       return ( success == 0 );
    }

    void endExport()
    {
       int result = 0;
       int success = 0;

       if (m_pFrame)
       {
           while ( success == 0 )
           {
               AVPacket avPacket;
               av_init_packet( &avPacket );
               avPacket.data = NULL;
               avPacket.size = 0;

               fflush(stdout);
               success = avcodec_encode_video2( m_pCodecContext, &avPacket, NULL, &result );

               if( result )
               {
                   success = processFrame( avPacket );
               }
               av_packet_unref( &avPacket );

               if (!result)
               {
                   break;
               }
           }
       }

       if (m_pFormatContext)
       {
           av_write_trailer( m_pFormatContext );

           if( m_pFrame )
           {
               av_frame_free( &m_pFrame );
           }

           avio_closep( &amp;m_pFormatContext->pb );
           avformat_free_context( m_pFormatContext );
           m_pFormatContext = NULL;
       }
    }

    void cleanup()
    {
       if( m_pFrame || m_pCodecContext )
       {
           if( m_pFrame )
           {
               av_frame_free( &m_pFrame );
           }

           if( m_pCodecContext )
           {
               avcodec_close( m_pCodecContext );
               av_free( m_pCodecContext );
           }
       }
    }

    int main()
    {
       bool success = true;
       if (initialize())
       {
           if (startExport())
           {
                for (int loop = 0; loop < m_frameCount; loop++)
               {
                   if (!exportFrame())
                   {
                        std::cout << "Failed to export frame\n";
                       success = false;
                       break;
                   }
               }
               endExport();
           }
           else
           {
                std::cout << "Failed to start export\n";
               success = false;
           }

           cleanup();
       }
       else
       {
            std::cout << "Failed to initialize export\n";
           success = false;
       }

       if (success)
       {
            std::cout << "Successfully exported file\n";
       }
       return 1;
    }