
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (50)
-
Updating from version 0.1 to 0.2
24 June 2013. An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...) -
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
On other sites (7275)
-
avcodec/jpeg2000dwt: merge rescaling with interleave in 9/7 int IDWT
2 June 2013, by Michael Niedermayer
The FATE tests change because the edge mirroring was wrong before this commit.
Reviewed-by: Nicolas BERTRAND <nicoinattendu@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at> -
How to reduce the latency of CMAF?
13 June 2023, by dannyomni. I implemented CMAF on a self-built nginx server with FFmpeg, but I've hit a technical bottleneck: the latency stays at 3 seconds and cannot be reduced further. Additionally, I'm unable to get chunked transfer working.


Briefly, my environment: I use OBS to push the live stream to the server, transcode it there, and deliver the content to users through a CDN.


Here is some of my code


ffmpeg:


sudo ffmpeg -i rtmp://127.0.0.1:1935/live/stream -loglevel 40 -c copy -sc_threshold 0 -g 60 -bf 0 -map 0 -f dash -strict experimental -use_timeline 1 -use_template 1 -seg_duration 1 -window_size 5 -adaptation_sets "id=0,streams=v id=1,streams=a" -streaming 1 -dash_segment_type mp4 -utc_timing_url "http://time.akamai.com/?iso" -movflags frag_keyframe+empty_moov+default_base_moof -ldash 1 -hls_playlist 1 -master_m3u8_publish_rate 1 -remove_at_exit 1 /var/www/html/live/manifest.mpd
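The 3-second floor usually comes from whole segments being the smallest unit a player can fetch. Assuming a reasonably recent FFmpeg build, the dash muxer can cut each 1-second segment into smaller CMAF chunks so low-latency players can start fetching a segment before it is finished. A hedged sketch, reusing the input and output from the command above (the -frag_duration, -frag_type, -write_prft and -target_latency options should be verified against `ffmpeg -h muxer=dash` on your build):

```shell
# Sketch: split each 1 s segment into ~0.2 s CMAF chunks (LL-DASH).
# -write_prft and -target_latency let players compute and chase a latency target.
ffmpeg -i rtmp://127.0.0.1:1935/live/stream -c copy -map 0 \
  -f dash -seg_duration 1 -frag_duration 0.2 -frag_type duration \
  -use_timeline 1 -use_template 1 -streaming 1 -ldash 1 \
  -write_prft 1 -target_latency 1 \
  -utc_timing_url "http://time.akamai.com/?iso" \
  -window_size 5 -hls_playlist 1 -remove_at_exit 1 \
  /var/www/html/live/manifest.mpd
```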



nginx config:


server_name myserver.com;
 add_header Access-Control-Allow-Origin *;
 add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
 add_header Access-Control-Allow-Headers 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
 add_header Access-Control-Expose-Headers 'Content-Length,Content-Range';
 root /var/www/html;
 index index.html index.nginx-debian.html;
 location / {
 chunked_transfer_encoding on;
 }
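A hedged caveat on the nginx side: chunked_transfer_encoding applies to responses whose length is not known up front, but for plain static files nginx sends the bytes that exist when the request is served; it will not hold the connection open while ffmpeg appends to an in-progress .m4s. A sketch of the static-serving part, assuming the files live under /var/www/html/live (the Cache-Control header and MIME types are additions worth verifying for your setup):

```nginx
location /live/ {
    # Manifests change every segment; don't let intermediaries cache them.
    add_header Cache-Control "no-cache";
    chunked_transfer_encoding on;
    types {
        application/dash+xml mpd;
        application/vnd.apple.mpegurl m3u8;
        video/mp4 mp4 m4s;
    }
}
```

True chunked delivery of growing segments generally needs an origin that understands them (for example, proxying to a low-latency packager) rather than plain static files.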



HTML player:

<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
<script src="https://cdn.dashjs.org/latest/dash.all.min.js"></script>

<video id="video" controls muted></video>

<script>
const video = document.getElementById('video');
const hlsSrc = '/live/master.m3u8'; // Replace with your HLS stream URL
const dashSrc = '/live/stream.mpd'; // Replace with your DASH stream URL

function isHlsSupported() {
  return Hls.isSupported() || video.canPlayType('application/vnd.apple.mpegurl');
}

function isDashSupported() {
  return !!window.MediaSource && !!MediaSource.isTypeSupported('video/mp4; codecs="avc1.4d401e,mp4a.40.2"');
}

if (isHlsSupported()) {
  // Use HLS for playback
  const hls = new Hls({
    lowLatencyMode: true, // Enable low-latency mode
    liveSyncDurationCount: 1, // Number of segments used to sync the live stream
    liveMaxLatencyDurationCount: 2, // Number of segments used to calculate the latency
    maxBufferLength: 2, // Max buffer length in seconds
    maxBufferSize: 1000 * 1000 * 100, // Max buffer size in bytes
    liveBackBufferLength: 0 // Max back buffer length in seconds (0 disables the back buffer)
  });
  hls.loadSource(hlsSrc);
  hls.attachMedia(video);
  hls.on(Hls.Events.MANIFEST_PARSED, () => {
    video.play();
  });
} else if (isDashSupported()) {
  // Use DASH for playback
  const player = dashjs.MediaPlayer().create();
  player.initialize(video, dashSrc, true);
  player.updateSettings({
    streaming: {
      lowLatencyEnabled: true, // Enable low-latency mode
      liveDelay: 1, // Live delay in seconds
      liveCatchUpPlaybackRate: 1.2, // Playback rate when catching up to the live edge
      liveCatchUpMinDrift: 0.5, // Minimum drift from the live edge before catching up (seconds)
      bufferTimeAtTopQuality: 3, // Maximum buffer length in seconds
      bufferToKeep: 0 // Duration of the back buffer in seconds (disables the back buffer)
    }
  });
} else {
  console.error('Neither HLS nor DASH playback is supported in this browser.');
}
</script>





I hope to reduce the latency to 1 second.


-
Ghost image issues with ffmpeg -filter_complex displace
5 July 2022, by raul.vila. I've (almost) been able to apply a displacement based on two animated Gaussian noise videos, but I'm having issues with a ghost image. A picture is worth a thousand words.


Here is a script to replicate the issue:


ffmpeg -y -t 2 -f lavfi -i color=c=blue:s=160x120 -c:v libx264 -tune stillimage -pix_fmt rgb24 00_empty.mp4
ffmpeg -y -i 00_empty.mp4 -vf "drawtext=text=string1:y=h/2:x=w-t*w/2:fontcolor=white:fontsize=60" 01_text.mp4
ffmpeg -y -t 2 -f lavfi -i color=c=gray:s=160x120 -c:v libx264 -tune stillimage -pix_fmt rgb24 02_gray.mp4
ffmpeg -y -i 01_text.mp4 -i 02_gray.mp4 -i 02_gray.mp4 -filter_complex "[0][1][2]displace=edge=mirror" 03_displaced_text.mp4



It creates a test video with scrolling text and a gray dummy video, then applies a displacement based on the gray video. If I understand correctly, because the gray video is 100% gray, it should leave the video unchanged (or perhaps displace everything by a fixed amount of pixels), but instead it creates a "shadow". I tried three different pixel formats (yuv420p, yuv444p, rgb24) because I found this question on Stack Overflow discussing that:


- Why are Cb and Cr planes displaced differently from luma by the displace complex filter in ffmpeg?




ffmpeg version 5.0.1-full_build-www.gyan.dev
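A possible explanation, offered as an assumption rather than a confirmed diagnosis: displace treats sample value 128 as "no displacement", but c=gray (0x808080) converted from RGB to limited-range YUV lands near 126, and lossy x264 encoding can shift it further, so every pixel gets displaced by a small constant offset, which looks like a ghost. A quick check is to build a map that is exactly 128 on every plane and encoded losslessly:

```shell
# Hypothetical neutral map: geq forces every plane to 128, and
# -qp 0 keeps libx264 lossless so the values survive encoding.
ffmpeg -y -t 2 -f lavfi -i color=s=160x120 \
  -vf "geq=lum=128:cb=128:cr=128" \
  -c:v libx264 -qp 0 -pix_fmt yuv444p 02_neutral.mp4
```

If substituting 02_neutral.mp4 for 02_gray.mp4 in the displace command removes the shadow, the off-by-a-couple map values were the cause.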


Any ideas are welcome.

Thanks!