Other articles (61)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Images

    15 May 2013
  • Definable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. All of these sizes, which can vary from one theme to another, can be defined directly in the theme, sparing the user from having to configure them manually after changing the appearance of their site.
    These image sizes are also available in the MediaSPIP Core-specific configuration. The maximum size of the site logo in pixels, one can (...)

On other sites (11671)

  • FFMPEG/DASH-LL creates audio and video chunks at different rates; player is confused (404 errors)

    26 May 2021, by Danny

    I'm trying to create an MPEG-DASH "live" stream from a static file to test various low-latency modes. The DASH muxer in FFmpeg creates two AdaptationSets, one for video chunks and one for audio chunks.

    However, the audio and video chunk files are not created at the same rate (should they be?). That is, stream0 below holds the video chunks and stream1 the audio chunks. After a few seconds of running, the webroot directory contains:

    chunk-stream0-00001.m4s  chunk-stream1-00001.m4s  
chunk-stream0-00002.m4s  chunk-stream1-00002.m4s  
chunk-stream0-00003.m4s  chunk-stream1-00003.m4s  
chunk-stream0-00004.m4s  chunk-stream1-00004.m4s  
                         chunk-stream1-00005.m4s  
                         chunk-stream1-00006.m4s  
                         chunk-stream1-00007.m4s  
                         chunk-stream1-00008.m4s  
                         chunk-stream1-00009.m4s  
master.mpd  
init-stream0.m4s  
init-stream1.m4s  

    The stream doesn't load (or play) in either dash.js or shaka-player, and there are lots of 404 (Not Found) errors for the video chunks. The player requests chunks from both stream0 and stream1 in sequence, i.e., stream0-001 + stream1-001, then stream0-002 + stream1-002, and so on.

    But since stream0 only goes from 001 to 004, there are lots of 404 errors as it tries to load stream0-005 through 009.

    The gap gets wider after letting FFmpeg run for a while. E.g., stream0 spans 62 to 75 while stream1 spans 174 to 187. Reloading the player page at this point fails with "dash.all.debug.js:15615 [2055][FragmentController] No video bytes to push or stream is inactive." and shows 404 errors for stream0 chunk 188 (which doesn't exist yet!).

    The FFmpeg command was adapted from DASH streaming from the top-down:

    ffmpeg -re -i /mnt/swdevel/TestStreams/H264/ThreeHourMovie.mp4 \
-c:v libx264 -x264-params keyint=120:scenecut=0 -b:v 1M -c:a copy \
-f dash -dash_segment_type mp4 \
 -seg_duration 2 \
 -target_latency 3 \
 -frag_type duration \
 -frag_duration 0.2 \
 -window_size 10 \
 -extra_window_size 3 \
 -streaming 1 \
 -ldash 1 \
 -use_template 1 \
 -use_timeline 0 \
 -write_prft 1 \
 -fflags +nobuffer+flush_packets \
 -format_options "movflags=+cmaf" \
 -utc_timing_url "/pelican/testPlayers/time.php" \
 master.mpd

    And the dash.js player code is very simple:

    const srcUrl = "../ottWebRoot/playerTest/master.mpd"; 

var player = dashjs.MediaPlayer().create();

let autoPlay = false;
player.initialize(document.querySelector("#videoTagId"), srcUrl, autoPlay);

player.updateSettings(
{
    streaming:
    {
        lowLatencyEnabled: true,
        liveDelay: 2,
        jumpGaps: true,
        jumpLargeGaps: true,
        smallGapLimit: 1.5,
    }
});

    To provide the UTCTiming element in the manifest, the small time.php script returns UTC time from the web server:

    <?php
    print gmdate("Y-m-d\TH:i:s\Z");
?>
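
    For tighter low-latency synchronization, the timing endpoint can also return fractional seconds, which the xs:dateTime format used by the urn:mpeg:dash:utc:http-xsdate:2014 scheme permits. Below is a hypothetical millisecond-resolution stand-in for time.php, sketched in Python rather than PHP (not part of the original setup):

import http.server
from datetime import datetime, timezone

class TimeHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # UTC with milliseconds, e.g. 2021-05-24T14:50:00.263Z
        now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
        body = now.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Port is illustrative; the real setup serves through Apache.
    http.server.HTTPServer(("", 8080), TimeHandler).serve_forever()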

    (It also shows 404 errors for the latest stream1/audio chunk; that's likely a different problem.)

    I'm not sure what to try next. Any and all suggestions greatly appreciated.

    EDIT I

    The suggestion by @Anonymous Coward to change the key interval improved things a lot. The chunks for stream0 and stream1 are in lock-step and have identical sequence numbers.
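
    That squares with the arithmetic: the DASH muxer can only cut video segments on keyframes, so with keyint=120 at 24 fps (the frameRate reported in the generated MPD below) video could only be split every 5 seconds, while audio was cut every 2 seconds as requested. A quick sanity check of those numbers, as a sketch:

# Segment cadence implied by the original encoder settings.
fps = 24            # source frame rate, from frameRate="24/1" in the generated MPD
keyint = 120        # from -x264-params keyint=120:scenecut=0
seg_duration = 2.0  # from -seg_duration 2

gop_seconds = keyint / fps                        # 5.0 s between keyframes
print("video can only be segmented every", gop_seconds, "s")
print("audio is segmented every", seg_duration, "s")
print("keyint for lock-step segments:", int(fps * seg_duration))  # 48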

    However, there are still many 404 errors, both on initial page load (without pressing play) and during playback.

    I ran watch -n 1 ls -lt and compared it side-by-side with the errors in the browser console. It's hard to compare, but it looks like the browser is trying to fetch files "on the play edge" which haven't yet been created by FFmpeg. See the pic below.

    How do I instruct the browser to wait just a bit more before fetching the edge chunks?

    (Screenshot: directory listing side-by-side with the browser console 404 errors.)

    EDIT II

    Using shaka-player instead of dash.js plays properly without 404 errors. Configured as:

player.configure(
{
    streaming:
    {
        lowLatencyMode: true,
        inaccurateManifestTolerance: 0,
        rebufferingGoal: 0.1,
    }
});

    Client

    • macOS 10.12
    • dash.js latest 3.2.2
    • Chrome 79, Safari 12, Firefox v?

    Server

    • Apache 2.4.37
    • PHP 7.2.4 (for the time function only)
    • CentOS 8

    (For reference, here is the MPD file generated by FFmpeg.)

<?xml version="1.0" encoding="utf-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" type="dynamic" minimumUpdatePeriod="PT500S" availabilityStartTime="2021-05-24T14:50:00.263Z" publishTime="2021-05-24T15:22:45.335Z" timeShiftBufferDepth="PT50.0S" maxSegmentDuration="PT2.0S" minBufferTime="PT5.0S">
    <ProgramInformation>
    </ProgramInformation>
    <ServiceDescription>
        <Latency target="3000" referenceId="0"/>
    </ServiceDescription>
    <Period start="PT0.0S">
        <AdaptationSet contentType="video" startWithSAP="1" segmentAlignment="true" bitstreamSwitching="true" frameRate="24/1" maxWidth="1280" maxHeight="682" par="15:8" lang="und">
            <Resync dT="200000" type="0"/>
            <Representation mimeType="video/mp4" codecs="avc1.64081f" bandwidth="1000000" width="1280" height="682" sar="1023:1024">
                <ProducerReferenceTime inband="true" type="captured" wallClockTime="2021-05-24T14:50:00.263Z" presentationTime="0">
                    <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-xsdate:2014" value="/pelican/testPlayers/time.php"/>
                </ProducerReferenceTime>
                <Resync dT="5000000" type="1"/>
                <SegmentTemplate timescale="1000000" duration="2000000" availabilityTimeOffset="1.800" availabilityTimeComplete="false" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="1">
                </SegmentTemplate>
            </Representation>
        </AdaptationSet>
        <AdaptationSet contentType="audio" startWithSAP="1" segmentAlignment="true" bitstreamSwitching="true" lang="und">
            <Resync dT="200000" type="0"/>
            <Representation mimeType="audio/mp4" codecs="mp4a.40.2" bandwidth="116317" audioSamplingRate="48000">
                <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
                <ProducerReferenceTime inband="true" type="captured" wallClockTime="2021-05-24T14:50:00.306Z" presentationTime="0">
                    <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-xsdate:2014" value="/pelican/testPlayers/time.php"/>
                </ProducerReferenceTime>
                <Resync dT="21333" type="1"/>
                <SegmentTemplate timescale="1000000" duration="2000000" availabilityTimeOffset="1.800" availabilityTimeComplete="false" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="1">
                </SegmentTemplate>
            </Representation>
        </AdaptationSet>
    </Period>
    <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-xsdate:2014" value="/pelican/testPlayers/time.php"/>
</MPD>

  • ffmpeg-python drawtext after concatenating mp4 videos

    2 March 2020, by user2355051

    I realize a similar question has been asked here, so before pointing to that question and answer, please understand that my question is different. I’m hoping that someone can point me in the right direction.

    In short, I have 5 video segments that I'm concatenating and reordering based on user input. Is there a way I can drawtext dynamically over the input without having to process the video a second time?

    This is the code I have, which works; but as you can see, I need to re-open the concatenated file and apply the text to that version, and the result is then saved as a duplicate file.

    I’m looking for a more elegant way to accomplish this. Any suggestions would be appreciated.

video1 = ffmpeg.input('./assets/v_1.mp4')
video2 = ffmpeg.input('./assets/v_2.mp4')
video3 = ffmpeg.input('./assets/v_3.mp4')
video4 = ffmpeg.input('./assets/v_4.mp4')
video5 = ffmpeg.input('./assets/v_5.mp4')

print(row)

## If row 1 and 2 have values, they get all five.
if row[1] == '1' and row[2] == '1':
    print("Matches here")
    outfile = row[0] + '.mp4'
    ## Do stuff
    joined = ffmpeg.concat(video1.video, video1.audio, video2.video, video2.audio, video3.video, video3.audio, video4.video, video4.audio, video5.video, video5.audio, v=1, a=1, unsafe=1).node
    vj = joined[0]
    va = joined[1].filter('volume', 1)

    out = ffmpeg.output(vj, va, outfile)
    out.run()
    ## Once the concatenated video is finished, draw text over it.
    input2 = ffmpeg.input(row[0] + '.mp4').drawtext(fontfile='/Users/jserra/Library/Fonts/Cocogoose-Condensed-Regular-trial.ttf', fontsize='60', timecode='00:00:00:00', r=60, text=row[0], fontcolor='black', escape_text=True)
    ffmpeg.output(input2, row[0] + '_1.mp4').run()

    I've tried this and receive the following error:

    video1 = ffmpeg.input('./assets/StMarys_1.mp4').drawtext(fontfile='/Users/jserra/Library/Fonts/Cocogoose-Condensed-Regular-trial.ttf',fontsize='60',timecode='00:00:00:00',r=60,text=row[0],fontcolor='black',escape_text=True)

    Error:

       .virtualenvs/cvtesting/lib/python3.6/site-packages/ffmpeg/_run.py", line 93, in _allocate_filter_stream_names
       upstream_node, upstream_label
    ValueError: Encountered drawtext(fontcolor='black', fontfile='/Users/jserra/Library/Fonts/Cocogoose-Condensed-Regular-trial.ttf', fontsize='60', r=60, text='jack', timecode='00:00:00:00') <1d2ff6bbf3f0> with multiple outgoing edges with same upstream label None; a `split` filter is probably required

    I’ve also tried chaining it after the videos are concatenated with joined. I still receive errors.

    joined = ffmpeg.concat(video1.video,video1.audio,video2.video,video2.audio,video3.video,video3.audio,video4.video,video4.audio,video5.video,video5.audio, v=1,a=1,unsafe=1).drawtext(fontfile='/Users/jserra/Library/Fonts/Cocogoose-Condensed-Regular-trial.ttf',fontsize='60',timecode='00:00:00:00',r=60,text=row[0],fontcolor='black',escape_text=True).node

    Will I need to process these videos twice? If there are any optimizations I can make, please let me know. Also, any pointers about displaying the drawn text for a certain period of time would help: the documentation seems somewhat spotty as it relates to controlling the duration, and I'm not sure what the values mean or how they affect each other.
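
    For reference, here is a single-pass sketch (not from the original post). It assumes ffmpeg-python's drawtext operator can be chained directly onto the concatenated video stream, and uses drawtext's standard enable timeline option to limit how long the text shows; the row value and the 0-5 s window are placeholders:

import ffmpeg

row = ['StMarys', '1', '1']  # placeholder for the user-input row from the question

# Build the ten interleaved video/audio streams for a single concat pass.
videos = [ffmpeg.input('./assets/v_%d.mp4' % i) for i in range(1, 6)]
streams = []
for v in videos:
    streams.extend([v.video, v.audio])

joined = ffmpeg.concat(*streams, v=1, a=1, unsafe=1).node
# Draw the text on the joined video stream before the one and only output,
# so the file is never re-opened and re-encoded a second time.
vj = joined[0].drawtext(
    fontfile='/Users/jserra/Library/Fonts/Cocogoose-Condensed-Regular-trial.ttf',
    fontsize='60',
    text=row[0],
    fontcolor='black',
    escape_text=True,
    enable='between(t,0,5)',  # hypothetical: show the text only for the first 5 s
)
va = joined[1].filter('volume', 1)
ffmpeg.output(vj, va, row[0] + '.mp4').run()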

    Thanks

  • Reduce latency in ffmpeg snapshot

    3 December 2015, by Acorian0

    I have a latency problem with FFmpeg.

    I have a streaming server running and I occasionally need to take multiple snapshots over a period of time (in the example below, 5s), and then examine the snapshots taken. Time is critical.

    With this command I take 25 frames per second over 5 seconds, meaning I will have 125 snapshots in my folder.

    ffmpeg.exe -ss 0.05 -re -i udp://239.255.0.xx:xxxx -ss 0 -vf fps=25 -to 5 -y \Test\%5d.jpg

    The -vf fps filter forces 25 frames per second even if the server can't provide them.

    The problem is that -ss 0.05 is not doing its job. It is supposed to delay the initial snapshot by 50 ms, but FFmpeg takes around 200 ms to "start" on its own :/. This means the first snapshot is taken around 200 ms after I invoke the command. (200 ms is way too much latency for my purpose... I can survive with 100 ms.)

    How can I force FFmpeg to start taking snapshots earlier? Or is there a way to get snapshots, let's say, from the buffer (e.g. start taking snapshots 150 ms in the past)?

    PS: Changing -ss from 0.05 to 0.00 does nothing; I only see -ss having an effect when the value is bigger than 0.2/0.25. Another thing: using a newer version of FFmpeg is for now impossible.
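
    Most of a ~200 ms startup delay on a live source typically comes from input probing rather than from -ss. Below is a sketch that trims the probing, wrapped in Python here for consistency; the three input options also existed in the 2.4.x era, though the values are illustrative and untested on this stream:

import subprocess

# Cut FFmpeg's up-front input analysis, which usually dominates startup
# latency on a live UDP source; then snapshot as in the original command.
cmd = [
    "ffmpeg.exe",
    "-fflags", "nobuffer",    # don't buffer input packets before processing
    "-probesize", "32768",    # probe far less data than the ~5 MB default
    "-analyzeduration", "0",  # skip the long stream analysis
    "-i", "udp://239.255.0.xx:xxxx",
    "-vf", "fps=25",
    "-to", "5",
    "-y", r"\Test\%5d.jpg",
]
subprocess.run(cmd, check=True)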

    Sample output:

    ffmpeg version 2.4.5 Copyright (c) 2000-2014 the FFmpeg developers
     built on Dec 30 2014 14:53:50 with gcc 4.9.2 (GCC)
     configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --enable-decklink --enable-zlib
     libavutil      54.  7.100 / 54.  7.100
     libavcodec     56.  1.100 / 56.  1.100
     libavformat    56.  4.101 / 56.  4.101
     libavdevice    56.  0.100 / 56.  0.100
     libavfilter     5.  1.100 /  5.  1.100
     libswscale      3.  0.100 /  3.  0.100
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  0.100 / 53.  0.100
    [mpeg2video @ 01d77120] Invalid frame dimensions 0x0.
    Input #0, mpegts, from 'udp://239.255.0.14:5014':
     Duration: N/A, start: 159.051978, bitrate: 105241 kb/s
     Program 1
       Metadata:
         service_name    : Service01
         service_provider: FFmpeg
       Stream #0:0[0x100]: Video: mpeg1video ([2][0][0][0] / 0x0002), yuv420p(tv), 1920x1080 [SAR 1:1 DAR 16:9], 104857 kb/s, 50 tbr, 90k tbn, 50 tbc
       Stream #0:1[0x101]: Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 384 kb/s
    [swscaler @ 01d70060] deprecated pixel format used, make sure you did set range correctly
    Output #0, image2, to 'C:\Test\%5d.jpg':
     Metadata:
       encoder         : Lavf56.4.101
       Stream #0:0: Video: mjpeg, yuvj420p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc
       Metadata:
         encoder         : Lavc56.1.100 mjpeg
    Stream mapping:
     Stream #0:0 -> #0:0 (mpeg1video (native) -> mjpeg (native))
    Press [q] to stop, [?] for help
    frame=  125 fps= 25 q=24.8 Lsize=N/A time=00:00:05.00 bitrate=N/A dup=1 drop=0
    video:9710kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown