
Media (91)
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
-
Les Miserables
4 June 2012
Updated: February 2013
Language: English
Type: Text
-
Hiding certain information: the home page
23 November 2011
Updated: November 2011
Language: French
Type: Image
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
-
Richard Stallman and the Free Software Revolution - An Authorized Biography (epub version)
28 October 2011
Updated: October 2011
Language: English
Type: Text
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (88)
-
Publishing on MédiaSpip
13 June 2013 - Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact your MédiaSpip administrator to find out.
-
From upload to final video [standalone version]
31 January 2010 - The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behavior: retrieval of the technical information about the file's audio and video streams, and generation of a thumbnail: extraction of a (...)
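As a rough sketch of those two extra actions, using ffmpeg/ffprobe directly rather than SPIPMotion's actual code (the file names are placeholders):

from subprocess import call, check_output

# Action 1: retrieve the technical information about the audio and video streams.
info = check_output(['ffprobe', '-v', 'quiet', '-print_format', 'json', '-show_streams', 'source.mp4'])
print(info.decode())

# Action 2: generate a thumbnail by extracting a single frame.
call(['ffmpeg', '-y', '-i', 'source.mp4', '-ss', '5', '-vframes', '1', 'thumbnail.png'])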
-
Libraries and binaries specific to video and audio processing
31 January 2010 - The following software and libraries are used by SPIPmotion in one way or another.
Required binaries:
- FFMpeg: the main encoder; transcodes almost all types of video and audio files into formats readable on the Internet. See this tutorial for its installation;
- Oggz-tools: tools for inspecting ogg files;
- Mediainfo: retrieves information from most video and audio formats;
Complementary and optional binaries:
- flvtool2: (...)
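To illustrate how these binaries are typically invoked (generic command lines, not SPIPmotion's actual calls; file names are placeholders):

from subprocess import call

# FFMpeg: transcode a source file into a web-readable format.
call(['ffmpeg', '-i', 'source.avi', 'output.ogv'])

# Oggz-tools: inspect the structure of an ogg file.
call(['oggz-info', 'output.ogv'])

# Mediainfo: dump technical information about a media file.
call(['mediainfo', 'source.avi'])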
On other sites (11320)
-
Evolution #3896: A personalized welcome when a plugin is installed
6 February 2017, by jluc - The proposal is not about opening a new tab the way Firefox add-ons do. I shouldn't have written "mini home page" when I meant "welcome message".
The proposal is to give, right after a plugin's installation, immediate, clear and visible access to the important information about that plugin and/or to a plugin-specific welcome message.
What led me to make this proposal is the menu_alpha plugin. Here are all the steps, and the pieces of prior knowledge, that it took me to reach its configuration page:
- Knowing that it has to be configured before it does anything (otherwise you see nothing, there is no interface)
- Scrolling down to find the plugin in the list (because I know that the "tools" button on the right usually gives access to the configuration)
- Clicking the plugin's name (because I know it shows additional information, including the documentation)
- Clicking the link to the documentation, which happened to be there (because I knew that well-written documentation would give me the information I was looking for)
- I could then read the documentation, go back to the site and look for the configuration page in question, under the personal preferences (I hesitated a bit because I never go there, but I remembered how to reach them).
A simple "configure" link, as marcimat suggests, would be enough here. But beware: this is not the same information as what makes the "tools" button appear to the right of the plugin's row in the list. menu_alpha has no "tools" button, yet it does have an "exotic" configuration page (located elsewhere). One could then imagine other ways of making this "tools" button appear (not only the classic configuration page), and of making it appear for menu_alpha's exotic configuration page too.
That would be a good improvement. One can also assume that, depending on the plugin, there is other information worth highlighting at installation time... including a simple, friendly (and non-intrusive) welcome message.
Installing several plugins at once puts constraints on the visual structure of this message, but it seems to me that this does not change much about the envisaged solutions. We can obviously just add the "Configure" link at the end of the technical messages, or insert additional "li" lines in the middle of the technical installation messages; we could also create several small boxes following the technical messages box, one for each personalized message when there is one.
-
FFMPEG: Recurring onMetaData for RTMP? [on hold]
30 November 2017, by stevendesu - For whatever reason this was put on hold as "too broad", although I felt I was quite specific. So I'll try rephrasing here:
My former understanding:
The RTMP protocol involves sending several parallel streams of data as a series of packets, with an ID correlating to which stream they are a part of. For instance:

[VIDEO] <data>
[AUDIO] <data>
[VIDEO] <data>
[VIDEO] <data>
[SERVER] <metadata about bandwidth>
[VIDEO] <data>
[AUDIO] <data>
...

Then on the player side these packets are split up into separate buffers based on type (all video data is concatenated, all audio data is concatenated, etc.).
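As a rough illustration of that player-side demultiplexing (the packet tuples are made up for the example; 0x08, 0x09 and 0x12 are the RTMP message type IDs for audio, video and AMF data messages):

from collections import defaultdict

# A toy interleaved stream: (message_type_id, payload) pairs.
packets = [
    (0x09, b'video-1'),
    (0x08, b'audio-1'),
    (0x12, b'onMetaData'),
    (0x09, b'video-2'),
    (0x08, b'audio-2'),
]

# Split into one buffer per message type, concatenating payloads in order.
buffers = defaultdict(bytearray)
for type_id, payload in packets:
    buffers[type_id].extend(payload)

print(bytes(buffers[0x09]))  # b'video-1video-2'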
One of the packet types is called onMetaData (ID: 0x12). An onMetaData packet includes a timestamp for when to trigger the metadata (this way it can be synchronized with the video) as well as the contents of the metadata (a text string).
My setup:
I'm using Red5Pro as my ingest server to take in an RTMP stream and then watch this stream via WebRTC. When an onMetaData packet is received by Red5, it sends out a JSON object with the contents of the metadata to all subscribers of the stream over WebSockets.
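A minimal sketch of a subscriber on that WebSocket channel; the endpoint URL and the JSON shape are assumptions for the example, not Red5Pro's documented API:

import asyncio
import json
import time

import websockets  # pip install websockets

async def watch_metadata():
    # Hypothetical endpoint; the real Red5Pro WebSocket URL will differ.
    async with websockets.connect('ws://example.com/metadata/mystream') as ws:
        async for message in ws:
            data = json.loads(message)
            print('metadata received at', time.time(), ':', data)

asyncio.run(watch_metadata())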
What I want:
I want to take advantage of this onMetaData channel to embed the server's system clock into a stream. This way anyone viewing the stream can determine when (according to the server) a stream was encoded and, if they synchronize their clock with the server, they can then compute the end-to-end latency of the stream. Due to Red5's use of WebSockets to send metadata this isn't a perfect solution (you may receive the metadata before or after you actually receive the video information), however I have some plans to work around this.
In other words, I want my stream to look like this:
[VIDEO] <data>
[AUDIO] <data>
[ONMETADATA] time: 2:05:77.382
[VIDEO] <data>
[VIDEO] <data>
[SERVER] <metadata about bandwidth>
[VIDEO] <data>
[ONMETADATA] time: 2:05:77.423
[AUDIO] <data>
...

What I would like is to generate this stream (with the server's current time periodically embedded into the onMetaData channel) using FFMPEG.
Simpler problem:
FFMPEG offers a -metadata command-line parameter. In my experiments, using this parameter caused a single onMetaData event to be fired, including things like "title", "author", etc. I could not inject additional onMetaData packets periodically as the stream progressed.
Even if the metadata packets do not contain the system clock, if I could send any metadata packets periodically using FFMPEG then I could include something static like "the server's clock at the time the broadcast started". I can then compare this to the current timestamp of the video and calculate the latency.
My confusion:
Continuing to look into this after creating my post, there are a couple of things that I don't fully understand or which don't quite make sense to me. For one, if FFMPEG is only injecting a single onMetaData packet into the stream, then I would expect anyone joining the stream late to miss it. However, when I join the stream 8 hours later I see Red5 send me the metadata packet complete with title, author, etc. So it's almost like the metadata packet doesn't have a timestamp associated with it but instead is just generic metadata about the video.
Furthermore, there's something called "AMF" which I'm not familiar with, but it may be important?
Original Post
I spent today playing around with methods to embed the system clock at time of encode into a stream, so that I could compare this value to the same system clock at time of decode to get a rough estimate of RTMP latency. Unfortunately the majority of techniques I used ended up failing.
One thing I wanted to try next was taking advantage of RTMP's onMetaData to send the current system clock periodically (maybe every 5 seconds) as part of the stream for any clients to listen for.
Unfortunately, FFMPEG's -metadata option seems to be only for one-time metadata when the stream first loads. I can't figure out how to add continuous (and generated) values to a stream.
Is there a way to do this?
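For reference, a sketch of that one-time -metadata usage via Python's subprocess; the input file and RTMP URL are placeholders:

from subprocess import Popen

# -metadata sets static key/value pairs; they go out once in a single
# onMetaData event when the stream starts, not periodically.
p = Popen(['ffmpeg', '-re', '-i', 'input.mp4', '-c', 'copy', '-metadata', 'title=test broadcast', '-f', 'flv', 'rtmp://example.com/live/stream'])
p.wait()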
-
Workflow for creating animated hand-drawn videos - encoding difficulties
8 December 2017, by Mircode - I want to create YouTube videos, kind of in the style of a whiteboard animation.
TL;DR question: How can I encode into a lossless RGB video format with ffmpeg, including an alpha channel?
More detailed:
My current workflow looks like this:
I draw the slides in Inkscape, I group all paths that are to be drawn in one go (one scene, so to say) and store the slide as svg. Then I run a custom python script over that. It animates the slides as described at https://codepen.io/MyXoToD/post/howto-self-drawing-svg-animation. Each frame is exported as svg, converted to png and fed to ffmpeg to make a video from it.
For every scene (a couple of paths being drawn; there are several scenes per slide) I create a separate video file, and I also store a png file containing the last frame of that video.
I then use kdenlive to join it all together : A video containing the drawing of the first scene, then a png which holds the last image of the video while I talk about the drawing, then the next animated drawing, then the next still image where I continue talking and so on. I use these intermediate images because freezing the last frame is tedious in kdenlive and I have around 600 scenes. Here I do the editing, adjust the duration of the still images and render the final video.
The background of the video is a photo of a blackboard which never changes; the strokes are paths with a filter that makes them look like chalk.
So far so good, everything almost works.
My problem is: whenever there is a transition between an animation and a still image, it is visible in the final result. I have tried several approaches to make this work, but nothing is without flaws.
My first approach was to encode the animations as mp4 like this:
from subprocess import Popen, PIPE
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-crf', '21', '-bf', '2', '-flags', '+cgop', '-pix_fmt', 'yuv420p', '-movflags', 'faststart', '-r', str(fps), videofile], stdin=PIPE)
which is recommended for YouTube. But then there is a slight brightness difference between the video and the still image.
Then I tried mov with the png codec:
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'png', '-r', str(fps), videofile], stdin=PIPE)
I think this encodes every frame as png in the video. It creates way bigger files, since every frame is encoded separately. But that's ok, since I can use transparency for the background and just store the chalk strokes. However, sometimes I want to swipe parts of the chalk on a slide away, which I do by drawing background over it. This would work if the overlaid, animated background chunks stored in the video looked exactly like the underlying png background, but they don't: they're slightly more blurry and I believe the color changes a tiny bit as well. I don't understand this, since I thought the video just stores a sequence of pngs... Is there some quality setting that I'm missing here?
Then I read about ProRes4444 and tried that:
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-c:v', 'prores_ks', '-pix_fmt', 'yuva444p10le', '-alpha_bits', '8', '-profile:v', '4444', '-r', str(fps), videofile], stdin=PIPE)
and this actually seems to work. However, the animation files become larger than the bunch of png files they contain, probably because this format stores 12 bits per channel. This is not that horrible, since only the intermediate videos grow big; the final result is still ok.
But ideally there would be a lossless codec which stores in rgb colorspace with 8 bits per channel, 8 bits for alpha, and considers only the difference to the previous frame (because all that changes from frame to frame is a tiny bit of chalk drawing). Is there such a thing? Alternatively, I'd also be ok without transparency, but then I have to store the background in every scene. But if only the changes from frame to frame within one scene are stored, that should be manageable.
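One codec that roughly matches this description, offered here as a hedged suggestion rather than a definitive answer, is QuickTime Animation (qtrle): it is lossless, stores RGB with an 8-bit alpha channel (argb), and stores unchanged rows as skips relative to the previous frame. A minimal sketch in the same Popen style, reusing fps and videofile from the snippets above (the output container should be .mov):

from subprocess import Popen, PIPE

# qtrle: lossless RGBA, with run-length coding and inter-frame row skips.
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'qtrle', '-pix_fmt', 'argb', '-r', str(fps), videofile], stdin=PIPE)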
Or should I make some fundamental changes in my workflow altogether ?
Sorry that this is rather lengthy, I appreciate any help.
Cheers !