
Media (1)
-
The conservation of net art in museums: the strategies at work ("La conservation du net art au musée. Les stratégies à l’œuvre")
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (78)
-
Participate in its translation
10 April 2011 — You can help us improve the wording used in the software, or translate it into any new language, allowing it to spread to new linguistic communities.
To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up to the translators' mailing list to ask for more information.
At present, MediaSPIP is only available in French and (...)
-
Websites made with MediaSPIP
2 May 2011 — This page lists some websites based on MediaSPIP.
-
Farm deployment possible
12 April 2011 — MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example: to share setup costs between several projects or individuals; to deploy a multitude of unique sites quickly; and to avoid having to dump every creation into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)
On other sites (11544)
-
Using FFMPEG filters directly in Web Audio API AudioWorklet
13 July 2021, by Cynic_Android — I am trying to leverage the vast number of audio filters that FFmpeg has and see if I can use them directly in a custom AudioWorklet, so I don't have to reinvent the wheel for each and every filter. One option I came across is to convert the AVFilter library to WASM and write a wrapper class to call the library functions:
https://dev.to/alfg/ffmpeg-webassembly-2cbl


But I am looking for a solution where the data can be piped to the filter and the output passed on immediately to the other AudioWorklet nodes, so the effect can be heard without delay.
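For concreteness, here is a rough sketch of the shape that could take. The get_input_ptr, get_output_ptr and filter_run exports are hypothetical wrappers that would have to be written around AVFilter in C before compiling to WASM; only the AudioWorklet plumbing itself is standard browser API.

class FFmpegFilterProcessor extends AudioWorkletProcessor {
    constructor(options) {
        super();
        // A compiled WebAssembly.Module survives structured cloning, so it
        // can be handed over via processorOptions and instantiated
        // synchronously on the worklet thread.
        const instance = new WebAssembly.Instance(options.processorOptions.module);
        this.wasm = instance.exports;
        this.heap = new Float32Array(this.wasm.memory.buffer);
    }

    process(inputs, outputs) {
        const input = inputs[0];
        const output = outputs[0];
        for (let ch = 0; ch < input.length; ch++) {
            const n = input[ch].length; // one render quantum, 128 samples
            const inIdx = this.wasm.get_input_ptr(ch) / 4;  // byte offset -> float index
            const outIdx = this.wasm.get_output_ptr(ch) / 4;
            // Pipe the block into WASM memory, run the filter, and copy the
            // result back, all synchronously within the same render quantum,
            // so the effect is audible with no added delay.
            this.heap.set(input[ch], inIdx);
            this.wasm.filter_run(ch, n);
            output[ch].set(this.heap.subarray(outIdx, outIdx + n));
        }
        return true; // keep the processor alive
    }
}

registerProcessor('ffmpeg-filter', FFmpegFilterProcessor);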


Any kind of help would be highly appreciated.


-
FFMPEG/DASH-LL creates audio and video chunks at different rates; player is confused (404 errors)
26 May 2021, by Danny — I'm trying to create an MPEG-DASH "live" stream from a static file to test various low-latency modes. The DASH muxer in FFmpeg creates two AdaptationSets, one for video chunks and one for audio chunks.


However, the audio and video chunk files are not created at the same rate (should they be?). Here stream0 holds the video chunks and stream1 the audio chunks. After a few seconds of running, the webroot directory contains:

chunk-stream0-00001.m4s chunk-stream1-00001.m4s 
chunk-stream0-00002.m4s chunk-stream1-00002.m4s 
chunk-stream0-00003.m4s chunk-stream1-00003.m4s 
chunk-stream0-00004.m4s chunk-stream1-00004.m4s 
 chunk-stream1-00005.m4s 
 chunk-stream1-00006.m4s 
 chunk-stream1-00007.m4s 
 chunk-stream1-00008.m4s 
 chunk-stream1-00009.m4s 
master.mpd 
init-stream0.m4s 
init-stream1.m4s 



The stream doesn't load (or play) in either dash.js or shaka-player, and there are lots of 404 (Not Found) errors for the video chunks. The player requests chunks from both stream0 and stream1 in sequence, ie, stream0-001 + stream1-001, then stream0-002 + stream1-002, and so on.

But since stream0 only goes from 001 to 004, there are lots of 404 errors as it tries to load stream0-005 through 009.


The gap gets wider after letting FFmpeg run for a while, eg, stream0 spans 62 to 75 while stream1 spans 174 to 187. Reloading the player page at this point fails with

dash.all.debug.js:15615 [2055][FragmentController] No video bytes to push or stream is inactive.

and shows 404 errors for stream0 chunk 188 (which doesn't exist yet!).



The FFmpeg command was adapted from "DASH streaming from the top-down":


ffmpeg -re -i /mnt/swdevel/TestStreams/H264/ThreeHourMovie.mp4 \
    -c:v libx264 -x264-params keyint=120:scenecut=0 -b:v 1M -c:a copy \
    -f dash -dash_segment_type mp4 \
    -seg_duration 2 \
    -target_latency 3 \
    -frag_type duration \
    -frag_duration 0.2 \
    -window_size 10 \
    -extra_window_size 3 \
    -streaming 1 \
    -ldash 1 \
    -use_template 1 \
    -use_timeline 0 \
    -write_prft 1 \
    -fflags +nobuffer+flush_packets \
    -format_options "movflags=+cmaf" \
    -utc_timing_url "/pelican/testPlayers/time.php" \
    master.mpd
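A note for anyone reproducing this: as I understand it, with -use_template 1 and -use_timeline 0 the manifest describes segments purely by $Number$ and a fixed duration, so players compute which segment numbers should already exist from the wall clock alone. That is why the -utc_timing_url clock matters here, and why a mismatch between the advertised and actual segment rate shows up as 404s rather than as a stall.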



And the dash.js player code is very simple:

const srcUrl = "../ottWebRoot/playerTest/master.mpd";

var player = dashjs.MediaPlayer().create();

let autoPlay = false;
player.initialize(document.querySelector("#videoTagId"), srcUrl, autoPlay);

player.updateSettings(
{
    streaming:
    {
        lowLatencyEnabled: true,
        liveDelay: 2,
        jumpGaps: true,
        jumpLargeGaps: true,
        smallGapLimit: 1.5,
    }
});



To provide the UTCTiming element in the manifest, the small time.php URL returns a UTC time from the web server:

<?php
 print gmdate("Y-m-d\TH:i:s\Z");
?>
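For reference, that prints a bare timestamp such as 2021-05-24T15:22:45Z, the xs:dateTime form expected by the urn:mpeg:dash:utc:http-xsdate:2014 scheme referenced in the manifest.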



(It also shows 404 errors for the latest stream1/audio chunk; that's likely a different problem.)


I'm not sure what to try next. Any and all suggestions greatly appreciated.


EDIT I


The suggestion by @Anonymous Coward to change the key interval improved things a lot. The chunks for stream0 and stream1 are now in lock-step and have identical sequence numbers.
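(For the record, my own arithmetic on why this works, assuming the suggestion meant matching the GOP to the segment length: at 24 fps with -seg_duration 2, a segment holds 24 × 2 = 48 frames, so keyint=48 lands a keyframe exactly on every segment boundary, whereas the original keyint=120 forced 5-second GOPs that cannot be cut into 2-second video segments.)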


However, there are still many 404 errors, both on initial page load (without pressing play) and during playback.


I ran watch -n 1 ls -lt and compared side by side with the errors in the browser console. It's hard to compare, but it looks like the browser is trying to fetch files "on the play edge" which haven't yet been created by FFmpeg. See the pic below.


How do I instruct the browser to wait just a bit more before fetching the edge chunks ?
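One possibility (untested here, and assuming dash.js 3.x settings): streaming.liveDelay controls how far behind the live edge dash.js tries to play, so raising it above the 2 seconds used earlier should keep its requests inside the window FFmpeg has already written. A sketch, not a confirmed fix:

player.updateSettings(
{
    streaming:
    {
        lowLatencyEnabled: true,
        // A larger cushion behind the live edge; trades a little latency
        // for fewer requests of chunks that don't exist yet.
        liveDelay: 4,
    }
});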




EDIT II


Using shaka-player instead of dash.js plays properly, without 404 errors. It was configured as:

player.configure(
{
    streaming:
    {
        lowLatencyMode: true,
        inaccurateManifestTolerance: 0,
        rebufferingGoal: 0.1,
    }
});
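(These three keys are documented shaka-player settings; if I read them right, inaccurateManifestTolerance: 0 tells shaka to trust the manifest timing exactly rather than probing around it, and rebufferingGoal: 0.1 lets playback resume once only 0.1 s is buffered.)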



Client

- macOS 10.12
- dash.js latest 3.2.2
- Chrome 79, Safari 12, Firefox v?

Server

- Apache 2.4.37
- PHP 7.2.4 (for time function only)
- CentOS 8


(For reference, here is the MPD file generated by FFmpeg.)


<?xml version="1.0" encoding="utf-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" type="dynamic" minimumUpdatePeriod="PT500S" availabilityStartTime="2021-05-24T14:50:00.263Z" publishTime="2021-05-24T15:22:45.335Z" timeShiftBufferDepth="PT50.0S" maxSegmentDuration="PT2.0S" minBufferTime="PT5.0S">
    <ProgramInformation>
    </ProgramInformation>
    <ServiceDescription>
        <Latency target="3000" referenceId="0"/>
    </ServiceDescription>
    <Period start="PT0.0S">
        <AdaptationSet contentType="video" startWithSAP="1" segmentAlignment="true" bitstreamSwitching="true" frameRate="24/1" maxWidth="1280" maxHeight="682" par="15:8" lang="und">
            <Resync dT="200000" type="0"/>
            <Representation mimeType="video/mp4" codecs="avc1.64081f" bandwidth="1000000" width="1280" height="682" sar="1023:1024">
                <ProducerReferenceTime inband="true" type="captured" wallClockTime="2021-05-24T14:50:00.263Z" presentationTime="0">
                    <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-xsdate:2014" value="/pelican/testPlayers/time.php"/>
                </ProducerReferenceTime>
                <Resync dT="5000000" type="1"/>
                <SegmentTemplate timescale="1000000" duration="2000000" availabilityTimeOffset="1.800" availabilityTimeComplete="false" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="1">
                </SegmentTemplate>
            </Representation>
        </AdaptationSet>
        <AdaptationSet contentType="audio" startWithSAP="1" segmentAlignment="true" bitstreamSwitching="true" lang="und">
            <Resync dT="200000" type="0"/>
            <Representation mimeType="audio/mp4" codecs="mp4a.40.2" bandwidth="116317" audioSamplingRate="48000">
                <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
                <ProducerReferenceTime inband="true" type="captured" wallClockTime="2021-05-24T14:50:00.306Z" presentationTime="0">
                    <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-xsdate:2014" value="/pelican/testPlayers/time.php"/>
                </ProducerReferenceTime>
                <Resync dT="21333" type="1"/>
                <SegmentTemplate timescale="1000000" duration="2000000" availabilityTimeOffset="1.800" availabilityTimeComplete="false" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="1">
                </SegmentTemplate>
            </Representation>
        </AdaptationSet>
    </Period>
    <UTCTiming schemeIdUri="urn:mpeg:dash:utc:http-xsdate:2014" value="/pelican/testPlayers/time.php"/>
</MPD>



-
lavf/srtdec: rewrite parsing logic
22 December 2015, by Clément Bœsch

lavf/srtdec: rewrite parsing logic

Fixes Ticket #5032

The samples in Ticket #5032 use \r\r\n as line breaks. Since we already handle \r, \n, or \r\n as line breaks, \r\r\n is treated as a double line break. This is an issue because ff_subtitles_read_text_chunk() will, as a result, stop extracting a chunk after just one line.

So instead of parsing the SRT by "chunks" (which means splitting on every double line break), this new parser detects timing lines and splits the events on that basis. While this sounds safe and simple, it needs to take into account the event number preceding the timing line while handling situations such as:

- an event number starting at 0, or actually any number instead of 1
- event numbers not being ordered at all
- an event number followed by text garbage (this really happened, see Ticket #4898)
- an event payload containing one or multiple numbers (a protagonist saying a countdown, a date, or whatever) which could be confused with a chapter number
- an event number being empty (see Ticket #2167)
- all kinds of weird line breaks appearing randomly, like wild pokémons
- untrustable line breaks (Ticket #5032)

The sample madness.srt tries to sum up most of this in one sample, ticket5032-rrn.srt is the file containing \r\r\n line breaks, and empty-events-2167.srt contains empty events.
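To make the strategy concrete, here is a loose JavaScript sketch of the idea; the real implementation is C inside libavformat and handles more corner cases (such as garbage trailing the counter) than this does:

// Split events on timing lines ("00:00:01,000 --> 00:00:02,000") rather
// than on blank-line separation, so unusual line breaks and unreliable
// event counters cannot cut an event short.
const TIMING = /^\s*\d+:\d{2}:\d{2}[,.]\d{1,3}\s*-->\s*\d+:\d{2}:\d{2}[,.]\d{1,3}/;

function splitSrtEvents(text) {
    // Treat \r\r\n (Ticket #5032), \r\n, \r and \n all as single line breaks.
    const lines = text.split(/\r\r\n|\r\n|[\r\n]/);
    const events = [];
    let current = null;
    for (const line of lines) {
        if (TIMING.test(line)) {
            if (current) {
                // The line right before a timing line is the event counter;
                // drop it only when it looks numeric (or empty), so countdown
                // numbers inside a payload are usually kept.
                const last = current.payload[current.payload.length - 1];
                if (last !== undefined && /^\s*\d*\s*$/.test(last)) {
                    current.payload.pop();
                }
                events.push(current);
            }
            current = { timing: line.trim(), payload: [] };
        } else if (current && line.trim() !== '') {
            current.payload.push(line);
        }
    }
    if (current) events.push(current);
    return events;
}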