
Other articles (108)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all of the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in the standalone version.
To get a working installation, you must manually install all of the software dependencies on the server.
If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...) -
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (15780)
-
lavu/common : make FF_CEIL_RSHIFT faster when shift is constant.
12 May 2013, by Clément Bœsch
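As an aside, the idea in that commit title is compact enough to sketch: a ceiling right-shift computes ceil(a / 2^b), and the add-then-shift form is cheaper for the compiler to fold when the shift amount is a compile-time constant. A minimal C illustration of the two forms (my sketch of the concept, not the actual FFmpeg patch):
#include <stdio.h>

/* Illustrative only: two ways to compute ceil(a / 2^b). The negate-shift-negate
 * form works for any run-time shift count (relying on arithmetic right shift);
 * the add-then-shift form is the one that optimises well when b is constant. */
#define CEIL_RSHIFT_GENERIC(a, b) (-((-(a)) >> (b)))
#define CEIL_RSHIFT_CONST(a, b)   (((a) + (1 << (b)) - 1) >> (b))

int main(void)
{
    /* ceil(1000 / 16) == 63 with either form. */
    printf("%d %d\n", CEIL_RSHIFT_GENERIC(1000, 4), CEIL_RSHIFT_CONST(1000, 4));
    return 0;
}
-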
FFMPEG mosaic/side-by-side-compositing from simultaneous DirectShow input devices
9 June 2013, by timlukins. This is what I'm trying to do:
ffmpeg.exe -y \
-f dshow -i video="Microsoft LifeCam Cinema" \
-f dshow -i video="Microsoft LifeCam VX-2000" \
-filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" \
-map "[fileout]" -vcodec libx264 -f flv out.flvBasically, I have 2 webcams and I would like to combine them into a single video file in which the frames are 2x1 in size with the frame from one camera in the left and the other on the right.
In other words, what might be termed "mosaic-ing" or "side-by-side compositing". This is not concatenation - i.e. one file after the other (so not using the concat filter).
I've gleaned that this use of -filter_complex to pad and then position the frames appears to be the prescribed way. Indeed, when I test this with files like so:
ffmpeg.exe -y -i test1.flv -i test2.flv -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" -map "[fileout]" -vcodec libx264 -f flv testout.flv
It works fine!
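For what it's worth, the pad step just doubles the canvas width while keeping input 0 at x=0, and the overlay then places input 1 at x = W/2, i.e. the right-hand half. On FFmpeg builds newer than the one used here, the same side-by-side layout can also be written with the hstack filter; a sketch of that alternative (assuming both inputs share the same height):
ffmpeg.exe -y -i test1.flv -i test2.flv -filter_complex "[0:v][1:v]hstack=inputs=2[fileout]" -map "[fileout]" -vcodec libx264 -f flv testout.flv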
With the "live" version however, both cameras seem to start (their lights come on) but the capture stalls.
(Suspiciously like there is some DirectShow deadlock on the separate input device threads...)
And so, I wonder: is there some way to overcome this and force the two input streams' data to merge?
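One thing that might be worth trying (my suggestion, not something from the original post) is enlarging the dshow real-time buffer on each input with the -rtbufsize input option, since the debug log further down ends with a "real-time buffer 101% full" warning; the 100M figure below is arbitrary:
ffmpeg.exe -y -f dshow -rtbufsize 100M -i video="Microsoft LifeCam Cinema" -f dshow -rtbufsize 100M -i video="Microsoft LifeCam VX-2000" -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" -map "[fileout]" -vcodec libx264 -f flv out.flv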
I have also tried the extended format of the dshow input option, like so:
-f dshow -i video="Microsoft LifeCam Cinema":video="Microsoft LifeCam VX-2000"
But only one camera is then selected (I suspect this option is really only to enable separate video and audio streams to be combined).
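For reference, that colon-joined form is meant to pair one video device with one audio device in a single dshow input, along these lines (the microphone name is just a placeholder, not a device from this setup):
-f dshow -i video="Microsoft LifeCam Cinema":audio="Some Microphone Device"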
I've also tried explicitly setting each input device to the exact same frame size and rate with -f dshow -video_size 640x480 -framerate 30. No joy though. It still stalls once the camera is listed.
Here is the tail end of the output (with -v debug on):
Finished splitting the commandline.
Parsing a group of options: global .
Applying option y (overwrite output files) with argument 1.
Applying option v (set libav* logging level) with argument debug.
Applying option filter_complex (create a complex filtergraph) with argument [0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout].
Successfully parsed a group of options.
Parsing a group of options: input file video=Microsoft LifeCam Cinema.
Applying option f (force format) with argument dshow.
Successfully parsed a group of options.
Opening an input file: video=Microsoft LifeCam Cinema.
[dshow @ 00000000016e79a0] All info found
[dshow @ 00000000016e79a0] Estimating duration from bitrate, this may be inaccurate
Input #0, dshow, from 'video=Microsoft LifeCam Cinema':
Duration: N/A, start: 1130406.072000, bitrate: N/A
Stream #0:0, 1, 1/10000000: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 333333/10000000, 30 tbr, 10000k tbn, 30 tbc
Successfully opened the file.
Parsing a group of options: input file video=Microsoft LifeCam VX-2000.
Applying option f (force format) with argument dshow.
Successfully parsed a group of options.
Opening an input file: video=Microsoft LifeCam VX-2000.
[dshow @ 00000000016e79a0] real-time buffer 101% full! frame dropped!
EDIT: Further details from trying to fix this within the code...
I've always understood from past Windows DirectShow work that multiple calls to CoInitialize() on the same thread are bad. See here. Perhaps I've misunderstood how FFMPEG is multi-threaded (i.e. whether each input device is on its own thread), but I thought I'd try regulating the call with a guard variable (a static int com_init = 0; - this should probably be mutex-ed...), e.g. in the libavdevice/dshow.c function dshow_read_header:
if (com_init == 0)
    CoInitialize(0);
com_init++;
And similarly for dshow_read_close:
com_init--;
if (com_init == 0)
    CoUninitialize();
Sadly, this doesn't work. The first camera starts but the second doesn't, and the error is:
[dshow @ 0000000000301760] Could not set video options
video=Microsoft LifeCam VX-2000: Input/output error
(Worth a shot. Looks like each input device is indeed on the same thread...)
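If the guard counter itself needed to be safe against the two inputs being opened from different threads, a sketch along the following lines might do it (my illustration, not FFmpeg's actual code; note too that COM initialisation is per-thread, so a process-wide counter only makes sense if, as concluded above, all dshow inputs are opened on the same thread):
#include <windows.h>
#include <objbase.h>

/* Illustrative sketch only: a thread-safe variant of the com_init guard above,
 * using Win32 interlocked primitives instead of a bare int. A caller racing
 * ahead of the first CoInitialize() is still not handled here. */
static volatile LONG com_init_count = 0;

static void com_init_guarded(void)
{
    /* First caller initialises COM; later callers just bump the count. */
    if (InterlockedIncrement(&com_init_count) == 1)
        CoInitialize(NULL);
}

static void com_uninit_guarded(void)
{
    /* Last caller tears COM down again. */
    if (InterlockedDecrement(&com_init_count) == 0)
        CoUninitialize();
}
Whether that would change the behaviour seen here is doubtful, though, given the conclusion that everything runs on one thread anyway.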
-
Revision 01e41a531b: Remove vp9_recon_intra_mbuv. Use common vp9_recon_sbuv instead. Change-Id: I146
20 April 2013, by John Koleszar. Changed Paths: Modify /vp9/common/vp9_reconintra.c, Modify /vp9/common/vp9_reconintra.h, Modify /vp9/encoder/vp9_encodeintra.c. Remove vp9_recon_intra_mbuv; use common vp9_recon_sbuv instead. Change-Id: I146f79adfdfda2b52257a52fa783727f12afa246