
Other articles (64)
- Encoding and conversion into formats readable on the Internet
10 April 2011. MediaSPIP converts and re-encodes uploaded documents to make them readable on the Internet and automatically usable without any intervention from the content creator.
Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player needed by older browsers.
Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)
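The excerpt above only describes the result, not the pipeline. As a rough illustration of the idea, a minimal re-encoding step written with the ffmpeg-python bindings used in the examples further down this page might look like the following; the codec choices and file names are assumptions, not MediaSPIP's actual configuration.

import ffmpeg

# Hypothetical uploaded file; a tool like MediaSPIP would pick this up automatically.
src = ffmpeg.input('upload.mov')

# One output per HTML5 video format described above.
ffmpeg.output(src, 'upload.mp4', vcodec='libx264', acodec='aac').run()
ffmpeg.output(src, 'upload.webm', vcodec='libvpx', acodec='libvorbis').run()
ffmpeg.output(src, 'upload.ogv', vcodec='libtheora', acodec='libvorbis').run()

# Audio-only documents would similarly be re-encoded to MP3 and Ogg Vorbis.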
- Submit bugs and patches
13 April 2011. Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; and a link to the site / page in question.
If you think you have solved the bug, open a ticket and attach a corrective patch to it.
You may also (...)
- Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their own information on the authors page
On other sites (10571)
- Process Multiple Streams using ffmpeg-python
27 November 2019, by C Dorman. I have an MPEG-2 TS with a video stream and a stream of KLV metadata. I can use ffmpeg-python to process each of these streams independently. For example, I can get the video stream and process each frame using numpy, as shown in the docs. I can also get the data stream and show it in the following way (using the klvdata library):
import ffmpeg   # ffmpeg-python bindings
import klvdata

# in_filename is the path to the MPEG-2 TS file (defined elsewhere).
# Copy the KLV data stream to stdout and parse it with klvdata.
process = (
    ffmpeg
    .input(in_filename)
    .output('pipe:', format='data', codec='copy', map='data-re')
    .run_async(pipe_stdout=True, pipe_stderr=True)
)
for packet in klvdata.StreamParser(process.stdout.read()):
    packet.structure()
process.wait()

How do I do both at the same time? I need to split the TS data into its streams and process them both, keeping them in sync. ffmpeg by itself can demultiplex the streams into separate files, but how do I handle the streams in Python? The KLV metadata contains information that I want to overlay on top of the video stream (recognition boxes).
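One common way to attack this (a sketch, not a verified solution) is to run two ffmpeg processes on the same TS file, one decoding the video to raw frames on a pipe and one copying the KLV data stream exactly as in the snippet above, then consuming both from Python. The input path and frame size below are placeholders, and real synchronisation would still require matching the timestamps of frames and KLV packets, which this sketch does not do.

import ffmpeg
import klvdata
import numpy as np

in_filename = 'input.ts'   # hypothetical path to the MPEG-2 TS file
width, height = 640, 480   # assumed video resolution

# Process 1: decode the video stream to raw RGB frames on stdout.
video_proc = (
    ffmpeg
    .input(in_filename)
    .output('pipe:', format='rawvideo', pix_fmt='rgb24')
    .run_async(pipe_stdout=True)
)

# Process 2: copy the KLV data stream to stdout, as in the question.
data_proc = (
    ffmpeg
    .input(in_filename)
    .output('pipe:', format='data', codec='copy', map='data-re')
    .run_async(pipe_stdout=True)
)

# Parse all KLV packets up front (they are small compared to the video).
klv_packets = list(klvdata.StreamParser(data_proc.stdout.read()))

frame_size = width * height * 3
while True:
    raw = video_proc.stdout.read(frame_size)
    if len(raw) < frame_size:
        break
    frame = np.frombuffer(raw, np.uint8).reshape(height, width, 3)
    # ... match klv_packets to this frame and draw recognition boxes ...

video_proc.wait()
data_proc.wait()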
- How to record a 5 second video on Raspberry Pi 3 with USB webcam?
16 March 2023, by Feengineer. Hi, I tried to record a video with a USB webcam on my Raspberry Pi.
When I type
ffplay /dev/video0
I do see a live video, but when I try to record it with ffmpeg, the resulting file plays back as a black screen (just the traffic-cone icon) in my VLC media player. I tried saving the video as a .mkv and as a .mp4 file. Does anyone know how to fix this?

The commands I used to record the video are:
ffmpeg -f v4l2 -t 5 -framerate 25 -video_size 640x480 -i /dev/video0 output.mkv
and
ffmpeg -f v4l2 -t 5 -framerate 25 -video_size 640x480 -i /dev/video0 output.mp4

The file does show up in the folder, but no video is shown when it is opened.

Edit: I have fixed it now by changing the file type to .avi, but if anyone can still explain how to get .mkv or .mp4 working, let me know :)
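A hedged guess at the missing piece, not a confirmed fix: one common cause of black playback is that the default encoder or pixel format chosen for .mp4/.mkv output is not one VLC handles, so explicitly forcing H.264 with yuv420p often makes those containers play. The sketch below uses the ffmpeg-python bindings shown elsewhere on this page; the equivalent CLI change is adding "-c:v libx264 -pix_fmt yuv420p" before the output file name. The webcam's input_format is an assumption (check it with "v4l2-ctl --list-formats-ext").

import ffmpeg

# Capture 5 seconds from the webcam and encode to H.264 in an MP4 container.
# input_format='yuyv422' is an assumption about what this webcam delivers.
(
    ffmpeg
    .input('/dev/video0', f='v4l2', input_format='yuyv422',
           framerate=25, video_size='640x480')
    .output('output.mp4', t=5, vcodec='libx264', pix_fmt='yuv420p')
    .run()
)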


- Feeding a series of images to ffmpeg as each image is created [closed]
5 February 2013, by Mark Schneider. I'm trying to use ffmpeg to build a 1280x720 slide-show from a sequence of pictures and videos, but I have concerns about a potential disk I/O bottleneck.
I expect a typical slide-show to have about 50 pictures and 2-3 videos (10-15 seconds each at 30 fps). I would like to show each picture for 3-4 seconds (possibly with a Ken Burns effect) with a smooth 2-second crossfade between each pair of pictures (or, for pictures adjacent to videos, between the picture and the first/last frame of the video).
Given about 50 pictures, the crossfades alone would amount to about 3,000 images (50 transitions x 2 secs/transition x 30 fps). And I suppose that if I implement a Ken Burns effect during each picture's 3-4 second showing, I would have to provide ffmpeg with an individual image for each of those frames. (I'm writing a script in Ruby that will pull a list of images from a database and in turn call ImageMagick to create the individual images for each frame. As I understand it, the RMagick library interfaces with ImageMagick such that the output images come back as in-memory objects without needing to be written to disk. FWIW, I'm developing on Windows 8 and will deploy to Heroku.)
All of the slide-show examples I've found online feed ffmpeg a set of images that have already been created. However, in an effort to avoid waiting on considerable disk I/O, I'd like to feed each image to ffmpeg as it is created rather than create them all in advance.
Is there a way to send each image to ffmpeg on the fly, as it is created in memory?
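A common pattern for this (sketched below, not a turnkey answer) is to start ffmpeg with the image2pipe demuxer and write each rendered frame to its stdin, so no intermediate files are ever written to disk. The poster is working in Ruby with RMagick, but the pipe mechanism is the same; the sketch uses the ffmpeg-python bindings seen in the other examples on this page, and render_frames() is a hypothetical generator standing in for the RMagick rendering step.

import ffmpeg

# Start ffmpeg reading a stream of PNG images from stdin at 30 fps.
encoder = (
    ffmpeg
    .input('pipe:', format='image2pipe', vcodec='png', framerate=30)
    .output('slideshow.mp4', vcodec='libx264', pix_fmt='yuv420p')
    .run_async(pipe_stdin=True)
)

# render_frames() is hypothetical: it would yield each 1280x720 frame
# (still pictures, Ken Burns steps and crossfade frames) as PNG bytes.
for frame_png in render_frames():
    encoder.stdin.write(frame_png)

encoder.stdin.close()
encoder.wait()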