
Media (1)
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
Other articles (52)
-
Encoding and transformation into formats readable on the Internet
10 April 2011
MediaSPIP transforms and re-encodes uploaded documents in order to make them readable on the Internet and automatically usable without intervention from the content creator.
Videos are automatically encoded in the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player required by older browsers.
Audio documents are likewise re-encoded in the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...) -
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page -
Adding notes and captions to images
7 February 2011, by
To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area in order to change the rights to create, modify and delete notes. By default, only site administrators can add notes to images.
Modification when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (11170)
-
FFMPEG: Recurring onMetaData for RTMP? [on hold]
30 November 2017, by stevendesu
For whatever reason this was put on hold as "too broad", although I felt I was quite specific. So I'll try rephrasing here:
My former understanding:
The RTMP protocol involves sending several parallel streams of data as a series of packets, with an ID correlating to which stream they are a part of. For instance:

[VIDEO] <data>
[AUDIO] <data>
[VIDEO] <data>
[VIDEO] <data>
[SERVER] <metadata about bandwidth>
[VIDEO] <data>
[AUDIO] <data>
...

Then on the player side these packets are split up into separate buffers based on type (all video data is concatenated, all audio data is concatenated, etc.).
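That per-type splitting can be sketched with a toy demuxer, assuming packets arrive as (type, payload) pairs; the names here are illustrative, not part of the RTMP wire format:

```python
from collections import defaultdict

def demux(packets):
    """Split an interleaved packet stream into per-type buffers,
    concatenating payloads in arrival order (as an RTMP player does)."""
    buffers = defaultdict(bytearray)
    for packet_type, payload in packets:
        buffers[packet_type] += payload
    return buffers

stream = [
    ("VIDEO", b"v1"), ("AUDIO", b"a1"),
    ("VIDEO", b"v2"), ("VIDEO", b"v3"),
    ("AUDIO", b"a2"),
]
buffers = demux(stream)
print(bytes(buffers["VIDEO"]))  # b'v1v2v3'
print(bytes(buffers["AUDIO"]))  # b'a1a2'
```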
One of the packet types is called onMetaData (ID: 0x12). An onMetaData packet includes a timestamp for when to trigger the metadata (this way it can be synchronized with the video) as well as the contents of the metadata (a text string).
My setup:
I'm using Red5Pro as my ingest server to take in an RTMP stream and then watch this stream via WebRTC. When an onMetaData packet is received by Red5, it sends out a JSON object to all subscribers of the stream over WebSockets with the contents of the stream.
What I want:
I want to take advantage of this onMetaData channel to embed the server's system clock into a stream. This way anyone viewing the stream can determine when (according to the server) a stream was encoded and, if they synchronize their clock with the server, they can then compute the end-to-end latency of the stream. Due to Red5's use of WebSockets to send metadata this isn't a perfect solution (you may receive the metadata before or after you actually receive the video information), however I have some plans to work around this.
In other words, I want my stream to look like this:
[VIDEO] <data>
[AUDIO] <data>
[ONMETADATA] time: 2:05:77.382
[VIDEO] <data>
[VIDEO] <data>
[SERVER] <metadata about bandwidth>
[VIDEO] <data>
[ONMETADATA] time: 2:05:77.423
[AUDIO] <data>
...

What I would like is to generate this stream (with the server's current time periodically embedded into the onMetaData channel) using FFMPEG.
Simpler problem:
FFMPEG offers a -metadata command-line parameter. In my experiments, using this parameter caused a single onMetaData event to be fired, including things like "title", "author", etc. I could not inject additional onMetaData packets periodically as the stream progressed.
Even if the metadata packets do not contain the system clock, if I could send any metadata packets periodically using FFMPEG then I could include something static like "the server's clock at the time the broadcast started". I can then compare this to the current timestamp of the video and calculate the latency.
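The latency arithmetic behind that workaround could look like this (a sketch; the function and parameter names are made up for illustration, and `broadcast_start_clock` stands for the hypothetical static value sent once via -metadata):

```python
def end_to_end_latency(broadcast_start_clock, video_timestamp, viewer_clock):
    """Estimate stream latency from a one-time metadata value.

    broadcast_start_clock: server wall clock (seconds) when the broadcast began
    video_timestamp: current playback position (seconds into the stream)
    viewer_clock: viewer's wall clock (seconds), assumed synced with the server
    """
    # The frame now on screen was encoded at roughly start + timestamp;
    # latency is how far behind that moment the viewer's clock is.
    encoded_at = broadcast_start_clock + video_timestamp
    return viewer_clock - encoded_at

# Broadcast started at t=1000s, playback is 120s into the stream,
# and the (synchronized) viewer clock reads 1123s: ~3s of latency.
print(end_to_end_latency(1000.0, 120.0, 1123.0))  # 3.0
```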
My confusion:
Continuing to look into this after creating my post, there are a couple of things that I don't fully understand or which don't quite make sense to me. For one, if FFMPEG is only injecting a single onMetaData packet into the stream, then I would expect anyone joining the stream late to miss it. However, when I join the stream 8 hours later I see Red5 send me the metadata packet complete with title, author, etc. So it's almost like the metadata packet doesn't have a timestamp associated with it but instead is just generic metadata about the video.
Furthermore, there's something called "AMF" which I'm not familiar with, but it may be important?
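For context on that last point: AMF (Action Message Format) is the binary serialization RTMP uses for command and data messages, and an onMetaData payload is the AMF0 string "onMetaData" followed by an ECMA array of properties. A minimal AMF0 encoder sketch, covering only the number, string, and ECMA-array types (the "serverClock" property name is made up for this example):

```python
import struct

def amf0_number(value):
    # Type marker 0x00, then an IEEE-754 double, big-endian
    return b"\x00" + struct.pack(">d", float(value))

def amf0_string(text):
    # Type marker 0x02, then a 16-bit length and the UTF-8 bytes
    data = text.encode("utf-8")
    return b"\x02" + struct.pack(">H", len(data)) + data

def amf0_ecma_array(props):
    # Type marker 0x08, a 32-bit entry count, then key/value pairs
    # (keys are length-prefixed but carry no type marker), ending
    # with the 0x00 0x00 0x09 object-end sentinel.
    out = b"\x08" + struct.pack(">I", len(props))
    for key, encoded_value in props.items():
        kdata = key.encode("utf-8")
        out += struct.pack(">H", len(kdata)) + kdata + encoded_value
    return out + b"\x00\x00\x09"

# An onMetaData payload carrying a server clock (seconds since epoch)
payload = amf0_string("onMetaData") + amf0_ecma_array(
    {"serverClock": amf0_number(1512086400.0)}
)
print(payload[:1])  # b'\x02' -- the leading string type marker
```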
Original Post
I spent today playing around with methods to embed the system clock at time of encode into a stream, so that I could compare this value to the same system clock at time of decode to get a rough estimate of RTMP latency. Unfortunately the majority of techniques I used ended up failing.
One thing I wanted to try next was taking advantage of RTMP's onMetaData to send the current system clock periodically (maybe every 5 seconds) as part of the stream for any clients to listen for. Unfortunately, FFMPEG's -metadata option seems to only be for one-time metadata when the stream first loads. I can't figure out how to add continuous (and generated) values to a stream.
Is there a way to do this?
-
ffmpeg x11grab inputting improperly
20 December 2017, by Not a superuser
I have the stock ffmpeg install from the xbps repository. I've used it before, but am on a new system.
Running "ffmpeg -f x11grab -video_size 1280x800 -framerate 60 -i $DISPLAY output.mkv" yields no errors, but when I watch the video it is a mess, switching between workspaces and showing only partly rendered programs.
Taking other inputs such as the webcam works fine, and different encoding methods don't change anything (though WebM flat out doesn't work, but that's not a problem for me).
I've tried what's here: https://wiki.archlinux.org/index.php/FFmpeg
Only other thing to note is that I use i3 as a dm, which shouldn't be a problem, but figured I'd state it just in case.
EDIT:
I was using compton for compositing, and that was where my issue lay...
https://github.com/chjj/compton/issues/381
Thanks!
-
Is H.264 used with CRF 0 really strictly lossless?
23 December 2017, by Mephisto
I am surprised by how small files are when encoded in ffmpeg with the libx264 codec in Constant Rate Factor mode equal to zero (-crf 0), which, according to the documentation, is "lossless".
I would like to make sure what the word "lossless" means here. I would like to know whether it follows my personal definition of lossless video: after encoding a video, you can confidently bet your mother's life that, once you play it back, the numerical values in the pixels of the restored video will be identical (within maybe a factor of 0.00001 due to floating-point arithmetic) to the original.
Does the H.264 lossless encoding follow my definition, or do they call it "lossless" because it is visually very close, very beautiful, whatever...?
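One way to answer this empirically is to hash every decoded frame of the original and of the -crf 0 re-encode and compare; ffmpeg's framemd5 muxer produces exactly such per-frame hashes. A sketch (the file paths are placeholders, and this shells out to an ffmpeg binary assumed to be on PATH; note that a pixel-format conversion before encoding, e.g. RGB source to yuv420p, can itself lose information even when the codec stage is lossless):

```python
import subprocess

def framemd5(path):
    """Return one MD5 line per decoded frame, as produced by
    ffmpeg's framemd5 muxer (comment lines stripped)."""
    out = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path, "-f", "framemd5", "-"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if not line.startswith("#")]

def bit_identical(hashes_a, hashes_b):
    """True when both inputs decoded to exactly the same frames."""
    return hashes_a == hashes_b

# Example usage (placeholder filenames):
#   bit_identical(framemd5("original.mkv"), framemd5("crf0.mkv"))
```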