
Media (91)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013, by
Updated: October 2013
Language: French
Type: Video
-
with chosen
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
without chosen
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
chosen config
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
SPIP - plugins - embed code - Example
2 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
Other articles (112)
-
Automatic installation script for MediaSPIP
25 April 2011, by
To work around installation difficulties caused mainly by server-side software dependencies, an "all in one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
To use it you need SSH access to your server and a "root" account, which will allow the dependencies to be installed. Contact your hosting provider if you do not have these.
The documentation for using the installation script (...) -
Request for the creation of a channel
12 March 2010, by
Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at the time of registration; the second, after registration, by filling in a request form.
Both methods ask for the same things and work in much the same way: the future user must fill in a series of form fields that first of all give the administrators information about (...) -
Automatic backup of SPIP channels
1 April 2010, by
As part of setting up an open platform, it is important for hosts to have fairly regular backups available in order to cope with any potential problem.
This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a mysql dump (usable in phpmyadmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)
On other sites (11872)
-
h264 via WebRTC latency issue
18 September 2024, by Lucas
I am trying to send a video stream encoded with h264 (hardware accelerated with the NVIDIA encoder) via WebRTC, for low-latency display in a browser.


More precisely, I have a thread that encodes an OpenGL framebuffer at a fixed frame rate; the resulting AVPacket's data (I encode using ffmpeg's C API) is then forwarded via WebRTC to the client (using aiortc).
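For context, a minimal sketch of what such an encode-and-forward step could look like with libavcodec's send/receive API; this is an assumption about the setup rather than the asker's actual code, and forward_to_webrtc is a placeholder for the hand-off to aiortc:

#include <libavcodec/avcodec.h>

/* Hypothetical per-frame loop for the encoding thread: push one frame into
 * the encoder, drain any finished packets, and hand the encoded bytes to the
 * WebRTC layer. forward_to_webrtc is a placeholder for the aiortc hand-off. */
static int encode_and_forward(AVCodecContext *enc, AVFrame *frame,
                              void (*forward_to_webrtc)(const uint8_t *, int, int64_t))
{
    int ret = avcodec_send_frame(enc, frame);   /* frame == NULL flushes the encoder */
    if (ret < 0)
        return ret;

    AVPacket *pkt = av_packet_alloc();
    if (!pkt)
        return AVERROR(ENOMEM);

    while ((ret = avcodec_receive_packet(enc, pkt)) >= 0) {
        forward_to_webrtc(pkt->data, pkt->size, pkt->pts);  /* hand off to aiortc */
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);

    /* EAGAIN / EOF just mean "no more packets right now"; anything else is an error. */
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}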


The problem is that I observe significant delays that seem to depend on the frame rate I use.
For example, running it locally, I get around 160 ms of delay when running at 30 fps, and around 30 ms when encoding at 90 fps.


The delay here is the measured time to encode + transmit + decode, and I have the strong impression that the issue happens when presenting the video frame, as if the browser were not immediately presenting the frame... (encoding is fast, I would expect the transmission to also be rather fast on a local setup, and decoding seems to be fine as well, as reported by the RTP stats in the browser).


I tried to play with RTP timestamps, but that did not change anything; the only variable that seems to impact the latency is the encoding thread's 'frequency'.


Any idea on what could be creating this latency? Am I missing a parameter?


Also, here are the codec options I use (from what I have experimented, they do not influence the latency that much):


profile = high
preset = llhq # low latency, high quality
tune = zerolatency
zerolatency = 1
g = 2 * FRAME_PER_SECOND # key frame every 2s
strict-gop = 1
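
As a point of reference, here is a hedged sketch of how these options might be applied when opening the encoder through the C API, assuming h264_nvenc is the encoder in use. The option names simply mirror the list above and their exact spelling/acceptance can vary between ffmpeg and driver versions (for instance, nvenc spells the GOP option strict_gop, and "tune = zerolatency" is x264 wording while nvenc has a separate zerolatency flag):

#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

/* Sketch: open an h264_nvenc encoder with the low-latency options listed above.
 * The fps argument stands in for the caller's FRAME_PER_SECOND constant. */
static AVCodecContext *open_low_latency_encoder(int width, int height, int fps)
{
    const AVCodec *codec = avcodec_find_encoder_by_name("h264_nvenc");
    if (!codec)
        return NULL;

    AVCodecContext *enc = avcodec_alloc_context3(codec);
    if (!enc)
        return NULL;

    enc->width     = width;
    enc->height    = height;
    enc->time_base = (AVRational){1, fps};
    enc->pix_fmt   = AV_PIX_FMT_YUV420P;
    enc->gop_size  = 2 * fps;                                 /* g = 2 * FRAME_PER_SECOND */

    /* Encoder-private options, names as in the list above. */
    av_opt_set(enc->priv_data, "profile", "high", 0);
    av_opt_set(enc->priv_data, "preset",  "llhq", 0);         /* low latency, high quality */
    av_opt_set(enc->priv_data, "tune",    "zerolatency", 0);  /* x264-style value; nvenc may reject it */
    av_opt_set_int(enc->priv_data, "zerolatency", 1, 0);
    av_opt_set_int(enc->priv_data, "strict_gop",  1, 0);      /* "strict-gop = 1" above */

    if (avcodec_open2(enc, codec, NULL) < 0) {
        avcodec_free_context(&enc);
        return NULL;
    }
    return enc;
}

The encoding thread would then drive this context with avcodec_send_frame()/avcodec_receive_packet(), as sketched earlier.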



UPDATE


I have the impression that the jitter buffer on Chrome's side is somehow preventing the RTP packets from being decoded immediately; is that possible?


UPDATE 2


- Using the RTP playout-delay header extension slightly reduced the latency.
- Setting playoutDelayHint in the browser also seemed to help a bit.






UPDATE 3


After further investigation, I came to the conclusion that it was not possible to get a lower latency by going through standard WebRTC for video streams, as there is little to no control over the video buffering, which I believe to be responsible for the observed latency.


On a side note, I tried to check how Google Stadia is doing it, as they seem to use WebRTC as well, but they use some in-house frameworks... (plus Chrome is the only supported browser).


-
Transcoding fMP4 to HLS while writing on iOS using FFmpeg
29 April 2017, by bclymer
TL;DR
I want to convert fMP4 fragments to TS segments (for HLS) as the fragments are being written using FFmpeg on an iOS device.
Why?
I’m trying to achieve live uploading on iOS while maintaining a seamless, HD copy locally.
What I’ve tried
-
Rolling AVAssetWriters where each writes for 8 seconds, then concatenating the MP4s together via FFmpeg.
What went wrong - There are blips in the audio and video at times. I've identified 3 reasons for this.
1) Priming frames for audio written by the AAC encoder create gaps.
2) Since video frames are 33.33 ms long and audio frames about 0.022 s (22 ms) long, it's possible for them to not line up at the end of a file.
3) The lack of frame-accurate encoding, which is present on macOS but not available on iOS (details here).
-
FFmpeg muxing a large video-only MP4 file with raw audio into TS segments. The work was based on the Kickflip SDK.
What went wrong - Every once in a while an audio-only file would get uploaded, with no video whatsoever. We were never able to reproduce it in-house, but it was pretty upsetting to our users when they didn't record what they thought they did. There were also issues with accurate seeking on the final segments, almost as if the TS segments were incorrectly timestamped.
What I’m thinking now
Apple was pushing fMP4 at WWDC this year (2016), and I hadn't looked into it much at all before that. Since an fMP4 file can be read and played while it's being written, I thought it would be possible for FFmpeg to transcode the file as it's being written as well, as long as we hold off sending the bytes to FFmpeg until each fragment within the file is finished.
However, I'm not familiar enough with the FFmpeg C API; I only used it briefly in attempt #2.
What I need from you
- Is this a feasible solution? Is anybody familiar enough with fMP4 to know if I can actually accomplish this?
- How will I know that AVFoundation has finished writing a fragment within the file so that I can pipe it into FFmpeg?
- How can I take data from a file on disk, a chunk at a time, pass it into FFmpeg, and have it spit out TS segments? (see the sketch after this list)
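
For that last point, here is a rough, hedged sketch of a libavformat stream-copy from an fMP4 input to an MPEG-TS output (placeholder paths, minimal error handling). Note that it reads a regular file rather than chunked bytes; feeding data a chunk at a time would additionally need a custom read callback set up through avio_alloc_context():

#include <libavformat/avformat.h>

/* Sketch: copy every stream of an fMP4 file into an MPEG-TS file without
 * re-encoding. in_path / out_path (e.g. "in.mp4", "out.ts") are placeholders. */
static int remux_fmp4_to_ts(const char *in_path, const char *out_path)
{
    AVFormatContext *in = NULL, *out = NULL;
    AVPacket *pkt = NULL;
    int ret;

    if ((ret = avformat_open_input(&in, in_path, NULL, NULL)) < 0)
        goto end;
    if ((ret = avformat_find_stream_info(in, NULL)) < 0)
        goto end;
    if ((ret = avformat_alloc_output_context2(&out, NULL, "mpegts", out_path)) < 0)
        goto end;

    /* Mirror each input stream and copy its codec parameters. */
    for (unsigned i = 0; i < in->nb_streams; i++) {
        AVStream *os = avformat_new_stream(out, NULL);
        if (!os) { ret = AVERROR(ENOMEM); goto end; }
        if ((ret = avcodec_parameters_copy(os->codecpar, in->streams[i]->codecpar)) < 0)
            goto end;
        os->codecpar->codec_tag = 0;
    }

    if (!(out->oformat->flags & AVFMT_NOFILE) &&
        (ret = avio_open(&out->pb, out_path, AVIO_FLAG_WRITE)) < 0)
        goto end;
    if ((ret = avformat_write_header(out, NULL)) < 0)
        goto end;

    pkt = av_packet_alloc();
    while (av_read_frame(in, pkt) >= 0) {
        AVStream *is = in->streams[pkt->stream_index];
        AVStream *os = out->streams[pkt->stream_index];
        av_packet_rescale_ts(pkt, is->time_base, os->time_base);  /* match output time base */
        pkt->pos = -1;
        if ((ret = av_interleaved_write_frame(out, pkt)) < 0)     /* takes ownership of pkt's data */
            break;
    }
    av_write_trailer(out);

end:
    av_packet_free(&pkt);
    avformat_close_input(&in);
    if (out && !(out->oformat->flags & AVFMT_NOFILE))
        avio_closep(&out->pb);
    avformat_free_context(out);
    return ret;
}

Whether the mp4 demuxer will happily keep reading a file that AVFoundation is still appending fragments to is exactly the open question in the post; the sketch only shows the copy/remux side.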