
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (45)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04

If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the fixes needed to add (...) -
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is version 0.2 or later. If in doubt, contact your MediaSPIP administrator to find out. -
Channel creation request
12 March 2010
Depending on how the platform is configured, the user may have two different ways to request the creation of a channel. The first is at registration time; the second, after registration, by filling in a request form.
Both methods ask for the same information and work in much the same way: the prospective user fills in a series of form fields that first give the administrators some information about (...)
On other sites (6489)
-
avcodec/jpeg2000dwt: merge rescaling with interleave in 9/7 int IDWT
2 June 2013, by Michael Niedermayer
The FATE tests change because the edge mirroring was wrong before this commit.
Reviewed-by: Nicolas BERTRAND <nicoinattendu@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at> -
How to reduce the latency of CMAF?
13 June 2023, by dannyomni
I implemented CMAF on a self-built nginx server with ffmpeg, but I have hit some technical bottlenecks: the latency stays at 3 seconds and cannot be reduced further, and I have not been able to get chunked transfer working.


Briefly, my environment: I use OBS to push the live stream to the server, transcode it on the server, and then deliver the content to users through a CDN.


Here is some of my code


ffmpeg:


sudo ffmpeg -i rtmp://127.0.0.1:1935/live/stream -loglevel 40 -c copy -sc_threshold 0 -g 60 -bf 0 -map 0 -f dash -strict experimental -use_timeline 1 -use_template 1 -seg_duration 1 -window_size 5 -adaptation_sets "id=0,streams=v id=1,streams=a" -streaming 1 -dash_segment_type mp4 -utc_timing_url "http://time.akamai.com/?iso" -movflags frag_keyframe+empty_moov+default_base_moof -ldash 1 -hls_playlist 1 -master_m3u8_publish_rate 1 -remove_at_exit 1 /var/www/html/live/manifest.mpd
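On the chunked-transfer point, ffmpeg's dash muxer can cut each segment into smaller CMAF fragments so a fragment can be served while its segment is still being written. A sketch of the extra flags (flag names from the dash muxer; the 0.2 s fragment length and 1 s target latency are guesses to tune, untested here):

```shell
# Additions to the command above (hypothetical values):
#   -frag_type duration   cut fragments by duration instead of only at keyframes
#   -frag_duration 0.2    target CMAF chunk length in seconds
#   -target_latency 1     advertised LL-DASH target latency (used with -ldash 1)
sudo ffmpeg -i rtmp://127.0.0.1:1935/live/stream -c copy -g 60 -bf 0 -map 0 \
  -f dash -streaming 1 -ldash 1 -use_timeline 1 -use_template 1 \
  -seg_duration 1 -frag_type duration -frag_duration 0.2 -target_latency 1 \
  -window_size 5 -adaptation_sets "id=0,streams=v id=1,streams=a" \
  -hls_playlist 1 -remove_at_exit 1 /var/www/html/live/manifest.mpd
```

With sub-second fragments on the origin, nginx's chunked_transfer_encoding has partial segments to actually deliver, which is what the players' low-latency modes expect.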



nginx config:


server_name myserver.com;
 add_header Access-Control-Allow-Origin *;
 add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
 add_header Access-Control-Allow-Headers 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
 add_header Access-Control-Expose-Headers 'Content-Length,Content-Range';
 root /var/www/html;
 index index.html index.nginx-debian.html;
 location / {
 chunked_transfer_encoding on;
 }



HTML player:





 
 
 
<!-- video element assumed; the script below looks it up by id -->
<video id="video" controls></video>

<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
<script src="https://cdn.dashjs.org/latest/dash.all.min.js"></script>

<script>
  const video = document.getElementById('video');
  const hlsSrc = '/live/master.m3u8';  // Replace with your HLS stream URL
  const dashSrc = '/live/stream.mpd';  // Replace with your DASH stream URL

  function isHlsSupported() {
    return Hls.isSupported() || video.canPlayType('application/vnd.apple.mpegurl');
  }

  function isDashSupported() {
    return !!window.MediaSource && !!MediaSource.isTypeSupported('video/mp4; codecs="avc1.4d401e,mp4a.40.2"');
  }

  if (isHlsSupported()) {
    // Use HLS for playback
    const hls = new Hls({
      lowLatencyMode: true,             // Enable low-latency mode
      liveSyncDurationCount: 1,         // Number of segments used to sync the live stream
      liveMaxLatencyDurationCount: 2,   // Number of segments used to calculate the latency
      maxBufferLength: 2,               // Max buffer length in seconds
      maxBufferSize: 1000 * 1000 * 100, // Max buffer size in bytes
      liveBackBufferLength: 0           // Back buffer length in seconds (0 disables it)
    });
    hls.loadSource(hlsSrc);
    hls.attachMedia(video);
    hls.on(Hls.Events.MANIFEST_PARSED, () => {
      video.play();
    });
  } else if (isDashSupported()) {
    // Use DASH for playback
    const player = dashjs.MediaPlayer().create();
    player.initialize(video, dashSrc, true);
    player.updateSettings({
      streaming: {
        lowLatencyEnabled: true,      // Enable low-latency mode
        liveDelay: 1,                 // Live delay in seconds
        liveCatchUpPlaybackRate: 1.2, // Playback rate when catching up to the live edge
        liveCatchUpMinDrift: 0.5,     // Minimum drift from live edge before catch-up (seconds)
        bufferTimeAtTopQuality: 3,    // Maximum buffer length in seconds
        bufferToKeep: 0               // Back buffer duration in seconds (0 disables it)
      }
    });
  } else {
    console.error('Neither HLS nor DASH playback is supported in this browser.');
  }
</script>





I hope to reduce the latency to 1 second.


-
Ghost image issues with ffmpeg -filter_complex displace
5 July 2022, by raul.vila
I've (almost) been able to apply a displacement based on two animated gaussian-noise videos, but I'm having trouble with a ghost image. A picture is worth a thousand words.


Here is a script to reproduce the issue:


ffmpeg -y -t 2 -f lavfi -i color=c=blue:s=160x120 -c:v libx264 -tune stillimage -pix_fmt rgb24 00_empty.mp4
ffmpeg -y -i 00_empty.mp4 -vf "drawtext=text=string1:y=h/2:x=w-t*w/2:fontcolor=white:fontsize=60" 01_text.mp4
ffmpeg -y -t 2 -f lavfi -i color=c=gray:s=160x120 -c:v libx264 -tune stillimage -pix_fmt rgb24 02_gray.mp4
ffmpeg -y -i 01_text.mp4 -i 02_gray.mp4 -i 02_gray.mp4 -filter_complex "[0][1][2]displace=edge=mirror" 03_displaced_text.mp4



It creates a test video with scrolling text and a dummy gray video, then applies a displacement based on the gray video. If I understand correctly, because the gray video is 100% gray, it should leave the video unchanged (or perhaps displace everything by a fixed amount of pixels), yet it creates a "shadow". I tried three different pixel formats (yuv420p, yuv444p, rgb24) because I found this question on Stack Overflow discussing that:


- Why are Cb and Cr planes displaced differently from lum by the displace complex filter in ffmpeg ?




ffmpeg version 5.0.1-full_build-www.gyan.dev
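For what it's worth, one way to take the encoder out of the picture entirely (a hypothetical variant, untested): build the maps inside the filter graph and force rgb24 throughout, so every map pixel is exactly 128, the value the displace filter treats as zero displacement, and the maps never round-trip through libx264:

```shell
# Hypothetical check: generate both displacement maps from a lavfi color
# source directly in the graph (never encoded), and run the displace step
# in rgb24 so mid-gray is exactly 128 on every plane.
ffmpeg -y -i 01_text.mp4 -filter_complex \
  "[0]format=rgb24[main];color=c=0x808080:s=160x120:d=2,format=rgb24,split[mx][my];[main][mx][my]displace=edge=mirror,format=yuv420p" \
  04_displaced_lavfi.mp4
```

If the shadow disappears here, the ghost in the original script would likely come from the limited-range yuv encode of 02_gray.mp4, where "gray" lands on a luma value slightly off 128 while the chroma planes stay at 128, so the planes get displaced differently.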


Any ideas are welcome.

Thanks!