
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (38)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, provided your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP site to find out.
-
Adding notes and captions to images
7 February 2011
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change who may create, modify and delete notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (6433)
-
ffmpeg images-to-video script anyone?
3 February 2014, by danja
I want to take a bunch of images and make a video slideshow out of them. There'll be an app for that, right? Yup, quite a few, it seems. The problem is I want the slides synced to a piece of music, and all the apps I've seen only allow you to show each slide for a multiple of a whole second. I want them to show for multiples of 1.714285714 seconds to fit with 140 bpm.
The tools I've seen generally seem to have ffmpeg under the hood, so presumably this kind of thing could be done with a script. But ffmpeg has sooo many options... I'm hoping someone will have something close.
I'll have up to about 100 slides; the ones that have to show for 3.428571428 secs or whatever I guess I can simply show twice.
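A minimal sketch of what such a script might look like, assuming numbered PNG slides and an MP3 track (the file names and slide pattern are placeholders, not from the question): ffmpeg's image2 demuxer accepts a fractional input frame rate, so one slide every 12/7 seconds (four beats at 140 bpm) can be written as -framerate 7/12, with the output resampled to a normal playback rate.
# hypothetical file names; one input frame every 12/7 s, duplicated up to 25 fps for playback
ffmpeg -framerate 7/12 -i slide%03d.png -i track.mp3 \
       -c:v libx264 -pix_fmt yuv420p -r 25 \
       -c:a aac -shortest slideshow.mp4
A slide that should stay up for 3.428571428 s would then simply appear twice in the numbered sequence, as suggested above.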
-
split video (avi/h264) on keyframe
30 November 2012, by m.sr
Hello.
I have a big video file. ffmpeg, tcprobe and other tools say it is an h264 stream in an AVI container. Now I'd like to cut out small chunks from the video.
-
Problem: the index of the video seems corrupted/destroyed. I kind of fixed this via
mplayer -forceidx -saveidx <indexfile> <bigvideofile>
The problem here is that I'm now stuck with mplayer/mencoder, which can use this index file via -loadidx <indexfile>. I have tried correcting the index as described in man aviindex:
mplayer -frames 0 -saveidx mpidx broken.avi ; aviindex -i mpidx -o tcindex ; avimerge -x tcindex -i broken.avi -o fixed.avi
but this didn't fix my video, meaning that most tools I've tested still couldn't seek in the video file.
-
Problem: I cut out parts of the video via the following command:
mencoder -loadidx in.idx -ss 8578 -endpos 20 -oac faac -ovc x264 -sws 9 -lavfopts format=mp4 -x264encopts <lotsofopts> -of lavf -vf scale=800:-10,harddup in.avi -o out.mp4
Now the problem here is that some videos are corrupted at the beginning. I think this is due to the fact that I do not necessarily cut at a keyframe.
Questions:
-
What is the best way to fix the index of an AVI "inline" so that every tool can again work with it as expected?
-
How can I split at the keyframes? Is there an mencoder option for this?
-
Do keyframes come at a regular interval? How can I find out this interval? (With a bit of math it should then be possible to calculate the next keyframe and cut there; see the sketch after this list.)
-
Is there perhaps some completely different way to split this movie? Doing it by hand is not an option; I have to cut out 1000+ chunks...
Thanks a lot!
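Regarding the keyframe questions, a sketch of how this is often handled with ffmpeg rather than mencoder (fixed.avi is the file name from above; exact behaviour varies with the ffmpeg version): ffprobe can list the keyframe timestamps, and when cutting with stream copy the -ss seek snaps to the preceding keyframe, so no re-encoding is needed and the chunk starts cleanly.
# list the timestamps of keyframe packets (flagged K) in the first video stream
ffprobe -select_streams v:0 -show_entries packet=pts_time,flags -of csv=p=0 fixed.avi | grep K
# cut roughly 20 s starting near 8578 s without re-encoding; with -c copy the cut lands on a keyframe
ffmpeg -ss 8578 -i fixed.avi -t 20 -c copy out_chunk.avi
With the keyframe list in hand, the 1000+ cut points could be generated by a small shell loop instead of by hand.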
-
Stream video from ffmpeg and capture with OpenCV
10 December 2014, by chembrad
I have a video stream coming in over RTP to ffmpeg and I want to pipe this to my OpenCV tools for live stream processing. The RTP linkage is working, because I am able to send the incoming data to a file and play it (or play it via ffplay). My OpenCV implementation is functional as well, because I am able to capture video from a file and also from a webcam.
The problem is the streaming to OpenCV. I have heard that this may be done using a named pipe. First I could stream the ffmpeg output to the pipe and then have OpenCV open this pipe and begin processing.
What I've tried:
I make a named pipe in my Cygwin bash with:
$ mkfifo stream_pipe
Next I use my ffmpeg command to pull the stream from RTP and send it to the pipe:
$ ffmpeg -f avi -i rtp://xxx.xxx.xxx.xxx:1234 -f avi -y out.avi > stream_pipe
I am not sure if this is the right way to go about sending the stream to the named pipe, but it seems to accept the command and work, because the output from ffmpeg gives me bitrates, fps, and such.
Next I use the named pipe in my OpenCV capture function:
$ ./cvcap.exe stream_pipe
where the code for cvcap.cpp boils down to this:
cv::VideoCapture *pIns = new cv::VideoCapture(argv[1]);
The program seems to hang when reaching this one line, so I am wondering if this is the right way of going about it. I have never used named pipes before and I am not sure if this is the correct usage. In addition, I don't know if I need to handle the named pipe differently in OpenCV, i.e. change the code around to accept this kind of input. Like I said, my code already accepts files and camera inputs; I am just hung up on a stream coming in. I have only heard that named pipes can be used with OpenCV; I haven't seen any actual code or commands!
Any help or insights are appreciated!
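For what it's worth, a minimal sketch of how the intended fifo pipeline might look (the RTP address is the placeholder from above, and -f mpegts plus -c copy are assumptions: the fifo cannot be seeked, so a container that needs no seeking is required, and copying assumes the incoming codec can be carried in MPEG-TS; whether any of this holds up under Cygwin's fifo implementation is another matter, as noted in the update below). ffmpeg writes to the fifo as if it were a file, and OpenCV, built with ffmpeg support, opens the fifo path with cv::VideoCapture exactly as in the cvcap.exe call above.
$ mkfifo stream_pipe
# run the writer in the background; opening the fifo blocks until a reader attaches
$ ffmpeg -i rtp://xxx.xxx.xxx.xxx:1234 -f mpegts -c copy -y stream_pipe &
# the reader opens the fifo path like an ordinary file
$ ./cvcap.exe stream_pipe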
UPDATE:
I believe named pipes may not be working in the way I intended. As seen in this Cygwin forum post:
The problem is that Cygwin’s implementation of fifos is very buggy. I wouldn’t recommend using fifos for anything but the simplest of applications.
I may need to find another way to do this. I have tried to pipe the ffmpeg output into a normal file and then have OpenCV read it at the same time. This works to some extent, but I imagine it can be dangerous to read from and write to a file concurrently; who knows what would happen!