
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
-
Map of Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (79)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in standalone form.
For a working installation, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (9066)
-
Decode h264 video bytes into JPEG frames in memory with ffmpeg
5 February 2024, by John Karkas
I'm using Python and ffmpeg (4.4.2) to generate an H.264 video stream from images produced continuously by a process. I am aiming to send this stream over a websocket connection, decode it into individual image frames at the receiving end, and emulate a stream by continuously pushing frames to an
<img style='max-width: 300px; max-height: 300px' />
tag in my HTML.

However, I cannot read images at the receiving end, after trying combinations of rawvideo input format, image2pipe format, re-encoding the incoming stream with mjpeg and png, etc. So I would be happy to know what the standard way of doing something like this would be.

At the source, I'm piping frames from a while loop into ffmpeg to assemble an H.264-encoded video. My command is:


command = [
    'ffmpeg',
    '-f', 'rawvideo',                  # raw frames arrive on stdin
    '-pix_fmt', 'rgb24',
    '-s', f'{shape[1]}x{shape[0]}',    # frame size as WIDTHxHEIGHT
    '-re',                             # read input at native frame rate
    '-i', 'pipe:',
    '-vcodec', 'h264',
    '-f', 'rawvideo',                  # dump the raw encoded bitstream
    # '-vsync', 'vfr',
    '-hide_banner',
    '-loglevel', 'error',
    'pipe:'                            # encoded stream out on stdout
]
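A small Python sketch of this sender side, kept as a command-building helper so it can be inspected without launching ffmpeg. The `-f h264`, `-preset` and `-tune` flags are my assumptions for labelling the output as a raw H.264 elementary stream and reducing encoder latency; they are not part of the original command:

```python
def build_encoder_command(width, height, fps=30):
    """Hypothetical helper: raw rgb24 frames in on stdin, an H.264
    elementary stream out on stdout (assumed flags, see above)."""
    return [
        'ffmpeg',
        '-f', 'rawvideo',
        '-pix_fmt', 'rgb24',
        '-s', f'{width}x{height}',
        '-r', str(fps),
        '-i', 'pipe:',
        '-c:v', 'libx264',
        '-preset', 'ultrafast',   # trade compression for encoding speed
        '-tune', 'zerolatency',   # stop the encoder buffering frames
        '-f', 'h264',             # raw Annex B bitstream, no container
        '-hide_banner', '-loglevel', 'error',
        'pipe:',
    ]

# Each raw rgb24 frame occupies exactly width * height * 3 bytes on the pipe.
FRAME_BYTES = 2560 * 1440 * 3
```

The resulting list would be handed to subprocess.Popen with stdin and stdout as pipes, writing FRAME_BYTES bytes per frame to stdin.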



At the receiving end of the websocket connection, I can save the images to disk by including:


command = [
    'ffmpeg',
    '-i', '-',                         # read the H.264 stream from stdin
    '-c:v', 'mjpeg',
    '-f', 'image2',                    # write each frame as a separate image
    '-hide_banner',
    '-loglevel', 'error',
    'encoded/img_%d_encoded.jpg'
]



in my ffmpeg command.


But I want to instead extract each individual frame coming in over the pipe and load it in my application, without saving it to storage. So basically, I want whatever happens at the 'encoded/img_%d_encoded.jpg' line in ffmpeg, but with access to each frame from the stdout pipe of an ffmpeg subprocess at the receiving end, running in its own thread.

- What would be the most appropriate ffmpeg command to fulfil a use case like the above? And how could it be tuned to be faster or give higher quality?
- Would I be able to read from the stdout buffer with process.stdout.read(2560*1440*3) for each frame?





If you feel strongly about referring me to a more up-to-date version of ffmpeg, please do so.


PS: It is understandable this may not be the optimal way to create a stream. Nevertheless, I don't think there should be much complexity in this, and the latency should be low. I could instead communicate JPEG images via the websocket and view them in my
<img style='max-width: 300px; max-height: 300px' />
tag, but I want to save on bandwidth and shift some computational effort to the receiving end.

-
ffmpeg-next: how can I enable multithreading on a decoder?
14 December 2022, by Brandon Piña
I'm using the Rust crate ffmpeg-next to decode some video into individual frames for use in another library. The problem is that when I run my test it only seems to use a single core. I've tried modifying the threading configuration for my decoder, as you can see below, but it doesn't seem to do anything.

let context_decoder =
    ffmpeg_next::codec::context::Context::from_parameters(input_stream.parameters())?;
let mut decoder = context_decoder.decoder().video()?;
let mut threading_config = decoder.threading();
threading_config.count = num_cpus::get();
threading_config.kind = ThreadingType::Frame;

decoder.set_threading(threading_config);



-
Revision 085f76e535: Add experimental VBR adaptation method. Add code to monitor over and under spend
15 April 2014, by Paul Wilkins
Changed Paths:
Modify /vp9/encoder/vp9_firstpass.c
Modify /vp9/encoder/vp9_onyx_if.c
Modify /vp9/encoder/vp9_ratectrl.c
Modify /vp9/encoder/vp9_ratectrl.h
Add experimental VBR adaptation method.
Add code to monitor over- and under-spend and apply a limited correction to the data rate of subsequent frames. To prevent the problem of starvation or overspend on individual frames (especially near the end of a clip), the maximum adjustment on a single frame is limited to a percentage of its unmodified allocation.
Change-Id: I6e1ca035ab8afb0c98eac4392115d0752d9cbd7f