
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (21)
-
Customising categories
21 June 2013, by
Category creation form
For those who know SPIP well, a category can be thought of as a section (rubrique).
For a document of type category, the fields offered by default are: Texte
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not displayed by default are: Descriptif rapide
Furthermore, it is in this configuration area that you can specify the (...)
-
Farm management
2 March 2010, by
The farm as a whole is managed by "super admins".
Certain settings can be made in order to regulate the needs of the different channels.
Initially, it uses the "Gestion de mutualisation" plugin
-
Libraries and binaries specific to video and audio processing
31 January 2010, by
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries:
FFMpeg: the main encoder; transcodes almost all types of video and audio files into formats playable on the Internet. See this tutorial for its installation;
Oggz-tools: tools for inspecting ogg files;
Mediainfo: retrieves information from most video and audio formats;
Optional, complementary binaries:
flvtool2: (...)
On other sites (4917)
-
FFmpeg 5 C API codec end of stream situation
11 March 2023, by Guanyuming He
I'm new to FFmpeg API programming (I'm using version 5.1) and am learning from the documentation and official examples.


In the documentation page about the send/receive encoding and decoding API overview, the end-of-stream situation is discussed briefly:




End of stream situations. These require "flushing" (aka draining) the codec, as the codec might buffer multiple frames or packets internally for performance or out of necessity (consider B-frames). This is handled as follows:






- Instead of valid input, send NULL to the avcodec_send_packet() (decoding) or avcodec_send_frame() (encoding) functions. This will enter draining mode.
- Call avcodec_receive_frame() (decoding) or avcodec_receive_packet() (encoding) in a loop until AVERROR_EOF is returned. The functions will not return AVERROR(EAGAIN), unless you forgot to enter draining mode.
- Before decoding can be resumed again, the codec has to be reset with avcodec_flush_buffers().



As I understand it, when I get AVERROR_EOF, I have reached a special point where I need to drain buffered data from the codec and finally reset the codec with avcodec_flush_buffers(). Without doing it, I cannot continue decoding/encoding.
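For concreteness, a minimal sketch of that drain step on the decoding side might look like this in C (dec_ctx and frame are hypothetical names for an already opened AVCodecContext and an allocated AVFrame; error handling is trimmed):

#include <libavcodec/avcodec.h>

/* Sketch only: dec_ctx is an opened decoder context, frame an allocated AVFrame. */
static void drain_decoder(AVCodecContext *dec_ctx, AVFrame *frame)
{
    int ret;

    /* A NULL packet puts the decoder into draining mode. */
    avcodec_send_packet(dec_ctx, NULL);

    for (;;) {
        ret = avcodec_receive_frame(dec_ctx, frame);
        if (ret == AVERROR_EOF)
            break;             /* really nothing left in the decoder */
        if (ret < 0)
            break;             /* genuine error; report it in real code */

        /* Frames received here are ordinary decoded frames that were still
         * buffered inside the codec (e.g. because of B-frame reordering). */
        /* ... use frame ... */
        av_frame_unref(frame);
    }

    /* Only needed if this codec context will be reused for more decoding. */
    avcodec_flush_buffers(dec_ctx);
}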

Then I have some questions :


- If I received EOF when I have already finished sending data (e.g. after EOF is returned by av_read_frame()), how should I tell if it's really finished?
- The data returned from the receive_... functions during draining, should I take it as valid?






I might have found answers to those in the official examples, but I'm not sure if the answer is universally true. I noticed that in some official examples, like transcode_aac.c, draining is only done the first time EOF is reached, and after the second EOF is received it is assumed that there is really nothing left. Any data received during draining is also written to the final output.
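As a hedged illustration of that pattern (not taken from transcode_aac.c itself; enc_ctx and ofmt_ctx are placeholder names for an opened encoder context and output format context, and timestamp rescaling is omitted), the matching encoder flush could look roughly like this:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Sketch only: flush the encoder at end of stream and mux what comes out. */
static void flush_encoder(AVCodecContext *enc_ctx, AVFormatContext *ofmt_ctx)
{
    AVPacket *pkt = av_packet_alloc();
    int ret;

    /* A NULL frame puts the encoder into draining mode. */
    avcodec_send_frame(enc_ctx, NULL);

    while ((ret = avcodec_receive_packet(enc_ctx, pkt)) >= 0) {
        /* Packets produced while draining are normal output and are muxed
         * like any other packet (stream_index / pts rescaling omitted here). */
        av_interleaved_write_frame(ofmt_ctx, pkt);
        av_packet_unref(pkt);
    }
    /* ret is AVERROR_EOF once the encoder really has nothing left. */

    av_packet_free(&pkt);
}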


I just wonder: is this true for all multimedia files in FFmpeg?


I appreciate your response and time in advance. :)


-
How should one start with ffmpeg's API?
20 May 2018, by JoeDough
I'd like to make a real-time streaming program that takes input from a webcam. ffmpeg looks like a good library for encoding a stream of images, but there is no documentation or community tutorials (there is just a doxygen API reference). Where should I start if there's no official documentation?
-
How to generate the first few HLS playlist files for a live broadcasting event?
1 March 2014, by user3367166
Knowing that HLS is based on a playlist of N segment files, how do you properly generate the first (N-1) playlist files for a live broadcasting event (sliding-window method)?
Should we wait for the first N segments to be recorded and send a complete first playlist file? This of course works, but it means there is a delay of N x segment_duration before streaming starts.
Or is there an official way of sending "partial" playlist files so that streaming starts after 1 x segment_duration?
I have experimented with partial playlist files: ffmpeg 2.1.3 appends new segments into the playlist as they are recorded (see the example playlists below, and the sliding-window sketch that follows them). I find this a reasonable approach, but my ipod5 usually stops playback after the first playlist, or sometimes manages to play back the first few playlists but with much choppiness, so I believe it is not a supported way. Is there any official solution?
-
1st playlist file:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:8
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:8,
segment0.ts
-
2nd playlist file:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:8
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:8,
segment0.ts
#EXTINF:1,
segment1.ts
-
3rd playlist file:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:8
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:8,
segment0.ts
#EXTINF:1,
segment1.ts
#EXTINF:8,
segment2.ts
-
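For comparison, here is a sketch (not from the original question; segment names and durations are illustrative) of what a later playlist could look like once the sliding window is actually applied with a window of three segments: the oldest segment is dropped and EXT-X-MEDIA-SEQUENCE is incremented accordingly.
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:8
#EXT-X-MEDIA-SEQUENCE:1
#EXTINF:1,
segment1.ts
#EXTINF:8,
segment2.ts
#EXTINF:8,
segment3.ts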