
Other articles (59)
-
Requesting the creation of a channel
12 March 2010: Depending on the platform's configuration, the user may have two different ways of requesting the creation of a channel. The first is at the time of registration; the second, after registration, by filling in a request form.
Both methods work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...)
MediaSPIP v0.2
21 June 2013: MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013 and it is announced here.
The zip file provided here contains only the sources of MediaSPIP in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
MediaSPIP version 0.1 Beta
16 April 2011: MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in standalone form.
For a working installation, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
Sur d’autres sites (13626)
-
hls.js starts at the beginning on Android mobile (Chrome, WebView too) instead of at the live point, but works fine on desktop and iOS — hls.js 1.0.0, 2021-04-01
27 April 2021, by Jintor: I'm streaming a .m3u8 with the latest hls.js 1.0.0 (not the RC), the 2021-04-01 build...


Example: the stream began at 5 pm, and it is now 5:15 pm...

In almost all browsers the stream starts at the live point.

The pattern I see here: ALL browsers on Android (tested on Android 10) won't start at the live point, only at 0...

I ran all the tests:


• Safari desktop => stream live at 5:15
• Safari mobile => stream live at 5:15
• WebView (Android) => ISSUE: the player starts the stream at 0 (5 pm)
• WKWebView (Apple iOS, iPhone/iPad) => stream live at 5:15
• Chrome desktop (Mac/Win) => stream live at 5:15
• Chrome mobile (Android) => ISSUE: the player starts the stream at 0 (5 pm)
• Chrome mobile (iPhone) => stream live at 5:15
• Microsoft Edge desktop => stream live at 5:15
• Microsoft Edge mobile (Android) => ISSUE: the player starts the stream at 0 (5 pm)
• Firefox desktop (Mac/Win) => stream live at 5:15
• Opera desktop (Mac/Win) => stream live at 5:15
• Opera Mini (iPhone) => stream live at 5:15
• Opera Mini (Android) => ISSUE: the player starts the stream at 0 (5 pm)
• Brave desktop (Mac/Win) => stream live at 5:15
• Brave mobile (iPhone) => stream live at 5:15
• Brave mobile (Android) => ISSUE: the player starts the stream at 0 (5 pm)


This is the code:


<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>

<script>
  var video = document.getElementById("video");
  var videoSrc = "https://www.example1.com/streaming/index.m3u8";
  if (video.canPlayType("application/vnd.apple.mpegurl")) {
    video.src = videoSrc;
  } else if (Hls.isSupported()) {
    var config = {
      autoStartLoad: true,
      startPosition: -1,
      debug: false,
      capLevelOnFPSDrop: false,
      capLevelToPlayerSize: false,
      defaultAudioCodec: undefined,
      initialLiveManifestSize: 1,
      maxBufferLength: 30,
      maxMaxBufferLength: 500,
      backBufferLength: Infinity,
      maxBufferSize: 60 * 1000 * 1000,
      maxBufferHole: 0.5,
      highBufferWatchdogPeriod: 2,
      nudgeOffset: 0.1,
      nudgeMaxRetry: 3,
      maxFragLookUpTolerance: 0.25,
      liveSyncDurationCount: 3,
      liveMaxLatencyDurationCount: Infinity,
      liveDurationInfinity: false,
      enableWorker: true,
      enableSoftwareAES: true,
      manifestLoadingTimeOut: 10000,
      manifestLoadingMaxRetry: 1,
      manifestLoadingRetryDelay: 1000,
      manifestLoadingMaxRetryTimeout: 64000,
      startLevel: undefined,
      levelLoadingTimeOut: 10000,
      levelLoadingMaxRetry: 4,
      levelLoadingRetryDelay: 1000,
      levelLoadingMaxRetryTimeout: 64000,
      fragLoadingTimeOut: 20000,
      fragLoadingMaxRetry: 6,
      fragLoadingRetryDelay: 1000,
      fragLoadingMaxRetryTimeout: 64000,
      startFragPrefetch: false,
      testBandwidth: true,
      progressive: false,
      lowLatencyMode: true,
      fpsDroppedMonitoringPeriod: 5000,
      fpsDroppedMonitoringThreshold: 0.2,
      appendErrorMaxRetry: 3,
      enableWebVTT: true,
      enableIMSC1: true,
      enableCEA708Captions: true,
      stretchShortVideoTrack: false,
      maxAudioFramesDrift: 1,
      forceKeyFrameOnDiscontinuity: true,
      abrEwmaFastLive: 3.0,
      abrEwmaSlowLive: 9.0,
      abrEwmaFastVoD: 3.0,
      abrEwmaSlowVoD: 9.0,
      abrEwmaDefaultEstimate: 500000,
      abrBandWidthFactor: 0.95,
      abrBandWidthUpFactor: 0.7,
      abrMaxWithRealBitrate: false,
      maxStarvationDelay: 4,
      maxLoadingDelay: 4,
      minAutoBitrate: 0,
      emeEnabled: false
    };
    var hls = new Hls(config);
    hls.loadSource(videoSrc);
    hls.attachMedia(video);
  }
  video.addEventListener("loadedmetadata", function () { video.muted = true; video.play(); }, false);
</script>



// here I added video.muted = true; video.play(); to auto-start: if I try to autoplay unmuted, many browsers refuse it...

// playsinline="true" is NEEDED for Safari


THE FFMPEG COMMAND (working: it gives me a 3 to 4 second delay):


ffmpeg -re -i input.x -c:a aac -c:v libx264 \
  -movflags +dash -preset ultrafast \
  -crf 28 -refs 4 -qmin 4 -pix_fmt yuv420p \
  -tune zerolatency -c:a aac -ac 2 -profile:v main \
  -flags -global_header -bufsize 969k \
  -hls_time 1 -hls_list_size 0 -g 30 \
  -start_number 0 -streaming 1 -hls_playlist 1 \
  -lhls 1 -hls_playlist_type event -f hls path_to_index.m3u8
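
A note on the packaging side, as an assumption rather than a confirmed diagnosis: -hls_playlist_type event together with -hls_list_size 0 produces an EVENT playlist that retains every segment since 5 pm, and some native Android players may treat such a playlist as starting from position 0. A hedged alternative is to publish a sliding live window instead; input.x and path_to_index.m3u8 are the same placeholders as in the command above.

# Hedged sketch: sliding live window instead of an EVENT playlist.
# input.x and path_to_index.m3u8 are the question's placeholders, not real paths.
ffmpeg -re -i input.x -c:v libx264 -preset ultrafast -tune zerolatency \
  -crf 28 -pix_fmt yuv420p -profile:v main -g 30 \
  -c:a aac -ac 2 \
  -f hls -hls_time 1 -hls_list_size 6 -hls_flags delete_segments \
  path_to_index.m3u8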





How can this be fixed?

How can I make it play at the live point on load on Android mobile?
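
One possible direction, assuming (not confirmed by the post) that the Android browsers either take the native canPlayType branch, since Chrome and WebView on Android can report partial native HLS support, or simply leave currentTime parked at 0: prefer hls.js whenever it is supported and explicitly seek to the live sync point once the first level has loaded. This is a minimal sketch using the same placeholder URL as the question, not a verified fix.

var video = document.getElementById("video");
var videoSrc = "https://www.example1.com/streaming/index.m3u8"; // placeholder from the question

if (Hls.isSupported()) {
  // Prefer MSE playback via hls.js even if the browser claims native HLS support.
  var hls = new Hls({ startPosition: -1, liveSyncDurationCount: 3, lowLatencyMode: true });
  hls.loadSource(videoSrc);
  hls.attachMedia(video);
  hls.once(Hls.Events.LEVEL_LOADED, function (event, data) {
    // If playback is still at 0 on a live playlist, jump to the live sync point computed by hls.js.
    if (data.details.live && hls.liveSyncPosition !== null && video.currentTime < 1) {
      video.currentTime = hls.liveSyncPosition;
    }
  });
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  // Safari and other native HLS players handle the live edge themselves.
  video.src = videoSrc;
}

video.addEventListener("loadedmetadata", function () {
  video.muted = true; // muted autoplay, as in the original code
  video.play();
}, false);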


-
Continuous remultiplexing of two MPEG-TS multicasts into a single MPEG-TS multicast in the face of upstream errors
29 March 2021, by Anonymous Coward: I remultiplex PIDs from two incoming multicasts into a single TS using something like the following:


ffmpeg -i udp://239.1.1.1:5000 -i udp://239.1.1.2:5000 -map 0:v -map 0:a -map 1:a -codec copy -f mpegts udp://239.1.1.3:5000



This works well when the multicasts are stable, but occasionally one of the multicasts will disappear for a short period of time. In this case, the output stops and the remaining input's buffer starts to fill because it is not being drained, sometimes overrunning the circular buffer. When the failed input returns, the backed-up buffer is not drained, so there is an ongoing offset between the inputs.


Is it possible to configure ffmpeg in such a way that the output continues, just without the missing PID? E.g. so that the video and first audio continue, but the second audio is absent for the period its input is missing.


Thanks in advance!
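
Not a full answer, but one thing that may help with the overruns: ffmpeg's UDP protocol accepts fifo_size and overrun_nonfatal options on the input URL, so an overflowing FIFO on the surviving input does not kill the process. By itself this does not keep the output going without the missing PID, nor does it remove the backed-up offset once the failed input returns; a supervisor that restarts ffmpeg when an input stalls is the other common workaround. A hedged variant of the command above:

ffmpeg \
  -i "udp://239.1.1.1:5000?fifo_size=1000000&overrun_nonfatal=1" \
  -i "udp://239.1.1.2:5000?fifo_size=1000000&overrun_nonfatal=1" \
  -map 0:v -map 0:a -map 1:a -codec copy -f mpegts udp://239.1.1.3:5000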


-
Using an actual audio recording to filter out noise from a video
9 March 2021, by user2751530: I use my laptop (an Ubuntu 18.04 LTS derivative on a Dell XPS 13) for recording videos (these are just narrated presentations) using OBS. After a presentation is done (.flv format), I process it with ffmpeg, using filters that try to reduce background noise, reduce the size of the video, change the encoding to .mp4, insert a watermark, etc. Over several months, this system has worked well.


However, my laptop is now beginning to show its age (it is 4 years old). That means the fan becomes loud: loud enough to notice in a recording, not loud enough to notice while you are working. So, even after filtering for low frequencies in ffmpeg, there are clicking and other types of sounds left in the video. I am a scientist, though not an audio/video expert. So I was thinking: is it possible for me to simply record the noise coming out of my machine when I am not presenting, and then use that recording to filter out the noise the machine makes during the presentation?


Blanket approaches like filtering out certain ranges of the audio spectrum are unlikely to work, as the power spectrum of the noise likely has many peaks, and these are likely to extend into the human voice range as well (I can hear them). Further, this is a moving target: the laptop is aging and, in any case, the amount and type of noise it makes depends on the load and how long it has been on. Algorithm:


1. Record actual computer noise (with the added bonus of background noise) while I am not recording, ideally just before starting to record the presentation. This could take the form of a 1-2 minute audio sample.
2. Record the presentation on OBS.
3. Use 1 as a filter to get rid of the noise in 2. I imagine it would involve doing a Fourier analysis of 1, and then removing those peaks from the spectrum of 2 at each time epoch.

I have looked into sox, which is what people somewhat flippantly point you to without giving any details. I do not know how to separate the audio out of a video and then interleave it back in (not an expert on the software here). Other than RTFM, is there any helpful advice anyone could offer? I have searched but have not been able to find a HOWTO. I expect that is probably the fault of my search, since I refuse to believe this is a new idea; it is a standard method used in many fields to get rid of noise, including astronomy.
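
For the mechanical part of the question (splitting the audio out, denoising it against a recorded noise profile, and putting it back), here is a hedged sketch using sox's noiseprof/noisered together with ffmpeg for the demux/remux steps. The file names are hypothetical and the 0.21 reduction amount is only a common starting point to tune by ear:

# 1. Build a noise profile from a short recording of the idle machine (noise.wav is a hypothetical name).
sox noise.wav -n noiseprof fan.prof

# 2. Extract the narration track from the OBS recording as uncompressed audio.
ffmpeg -i presentation.flv -vn -acodec pcm_s16le narration.wav

# 3. Subtract the noise profile; 0.21 is a typical starting amount, adjust by ear.
sox narration.wav cleaned.wav noisered fan.prof 0.21

# 4. Remux the cleaned audio with the untouched video stream.
ffmpeg -i presentation.flv -i cleaned.wav -map 0:v -map 1:a -c:v copy -c:a aac presentation_clean.mp4

ffmpeg also ships an FFT-based denoise filter (afftdn) that could be explored as a single-tool alternative to the sox round trip.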