
Other articles (108)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (15609)
-
FFmpeg: 2 input cameras in a single output
1 June 2020, by Expressingx
I have an app that streams from a camera to a file, with a preview, using libav. Now there is a requirement to stream from 2 cameras simultaneously and write to a single output file. The preview will be like a CCTV camera view, written to a single output. Is this possible with libav?
Before doing anything I tried ffmpeg.exe directly and found this:


ffmpeg -f dshow -i video="Camera1" -i video="Camera2" -filter_complex "nullsrc=size=640x480 [base];[0:v] setpts=PTS-STARTPTS, scale=640x480 [upperleft];[1:v] setpts=PTS-STARTPTS, scale=640x480 [upperright];[base][upperleft] overlay=shortest=1 [tmp1];[tmp1][upperright] overlay=shortest=1:x=640:y=480 [tmp2]" -c:v libx264 output.mp4




But every time it throws the error 'No such file or directory' for the second camera, even though I've verified the camera works when I use it as a single input. Am I missing something?



Overall, is it possible to achieve this?
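For reference, a hedged sketch of what likely fixes the 'No such file or directory' error (not a confirmed answer): with dshow, the input format must be repeated before every `-i`, and the final filtergraph label needs an explicit `-map`. The camera names and the side-by-side layout below are only illustrative.

```python
# Sketch only: each dshow input declares its own -f dshow, and the last
# filtergraph label is mapped explicitly. Camera names are placeholders.
filtergraph = (
    "nullsrc=size=1280x480 [base];"
    "[0:v] setpts=PTS-STARTPTS, scale=640x480 [left];"
    "[1:v] setpts=PTS-STARTPTS, scale=640x480 [right];"
    "[base][left] overlay=shortest=1 [tmp1];"
    "[tmp1][right] overlay=shortest=1:x=640 [out]"
)
cmd = [
    "ffmpeg",
    "-f", "dshow", "-i", "video=Camera1",  # first input: its own -f dshow
    "-f", "dshow", "-i", "video=Camera2",  # second input: -f dshow again
    "-filter_complex", filtergraph,
    "-map", "[out]",                       # select the composed stream
    "-c:v", "libx264", "output.mp4",
]
print(" ".join(cmd))
```

On a machine with the actual cameras, the list could be handed to `subprocess.run(cmd)` instead of printed.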


-
FFmpeg: convert video to images with complex logic
18 July 2020, by Udi
I'm trying to use FFmpeg to implement some complex logic on my videos.


The business logic is the following:
I receive videos in the formats AVI, MP4, and MOV.


I don't know what the content of the video is. It can be anywhere from 1 MB to 5 GB.


I want to output a list of images from this video at the highest quality I can get, capturing only frames that differ significantly from the previous frame (a new person, a new angle, a big movement, etc.).


In addition, I want to limit the number of frames per second, so that even if the video is dramatically fast and changes all the time, it will not produce more frames per second than this parameter allows.


I'm currently using the following command:


./ffmpeg -i "/tmp/input/fast_movies/3_1.mp4" -vf fps=3,mpdecimate=hi=14720:lo=7040:frac=0.5 -vsync 0 -s hd720 "/tmp/output/fast_movies/(#%04d).png"



According to my understanding it does the following:
fps=3 - first reduces the video to 3 frames per second (so that is the limit I mentioned)


mpdecimate - drops frames that do not differ from the previous frame by more than the thresholds I set.


-vsync 0 - passes timestamps through - I'm not sure why, but without it hundreds of duplicate frames are produced, ignoring the fps and mpdecimate filters. Can someone explain?


-s hd720 - sets the video size to hd720 (1280x720)


It works pretty well, but I'm not so happy with the quality. Do you think I'm missing something? Is there any FFmpeg parameter I would be better off using instead of these?
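As a tentative variant rather than a definitive answer: PNG output is lossless, so most of the quality loss here probably happens in the `-s hd720` rescale. One thing worth trying is replacing `-s` with a `scale` filter using a sharper algorithm such as lanczos, keeping the rest of the chain unchanged. The paths are the ones from the question.

```python
# Sketch under the assumption that the softness comes from the default scaler:
# scale=1280:720:flags=lanczos replaces -s hd720 at the end of the filter chain.
src = "/tmp/input/fast_movies/3_1.mp4"
dst = "/tmp/output/fast_movies/(#%04d).png"
vf = "fps=3,mpdecimate=hi=14720:lo=7040:frac=0.5,scale=1280:720:flags=lanczos"
cmd = ["ffmpeg", "-i", src, "-vf", vf, "-vsync", "0", dst]
print(" ".join(cmd))
```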


-
Any other better ways to extract frames from a large video according to given timecodes?
6 April 2020, by Yong En
Given a video from YouTube of at least 600 MB. The video is annotated with labels that occur at multiple timecodes. The timecodes are in milliseconds (SSSS.ss). I am trying to get the frames that fall within a time period (between 2 timecodes). There are TWO approaches I have used, with different tools: one using OpenCV in Python, and one using FFmpeg in a bash script:



I will stick with a few variables here:



- fps = 25
- timecode (after converting into seconds) = 333.44 to 334.00; note that I am dealing with time periods that might be less than a second.







OpenCV



- Using OpenCV in Python, I read the video as frames into a numpy array. Using the fps from the video, 25, I can estimate which frames fall within this time period by dividing the total video duration in seconds by the length of the numpy array.
- The problem here is that I will miss some frames, because the video fps is not really 25 as given by the video meta info; it could be 24.xx. Any solutions?
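One hedged way around the 24.xx problem is to stop trusting the metadata fps and derive the effective rate from the decoded frame count and the container duration. The function below is an illustrative sketch; the frame count and duration in the example are made up.

```python
def frames_in_window(n_frames, duration_s, start_s, end_s):
    """Map a [start_s, end_s] time window to frame indices, using the
    effective fps (decoded frame count / duration) rather than metadata."""
    fps = n_frames / duration_s        # e.g. 24.95 instead of the advertised 25
    first = int(start_s * fps)         # first frame at or after start_s
    last = int(end_s * fps)            # last frame at or before end_s
    return list(range(first, min(last, n_frames - 1) + 1))

# Illustrative: a 600 s clip that decoded to 14970 frames (24.95 fps, not 25)
idx = frames_in_window(14970, 600.0, 333.44, 334.00)
```

In practice, `n_frames` would be the length of the decoded numpy array and `duration_s` the container duration (e.g. from ffprobe).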







FFmpeg



- What I did: every time I want to get the frames, I run the script.
- The problem here is that I need to read the video 100 times if I have 100 time periods. Any way to overcome this?
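A hedged sketch of one possible way to avoid the 100 passes: build a single `select` expression that OR-combines `between(t,start,end)` terms for all periods, so FFmpeg reads the video once. The period values and output pattern below are illustrative.

```python
def select_expr(periods):
    """Build an ffmpeg select filter matching any of the (start, end) periods."""
    terms = ["between(t,{:.2f},{:.2f})".format(a, b) for a, b in periods]
    return "select='{}'".format("+".join(terms))   # '+' acts as logical OR

vf = select_expr([(333.44, 334.00), (401.20, 402.00)])
cmd = ["ffmpeg", "-i", "input.mp4", "-vf", vf, "-vsync", "0", "frame_%05d.png"]
print(" ".join(cmd))
```

The inner single quotes are FFmpeg filter-argument quoting, which keeps the commas inside `between(...)` from being read as option separators.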







Thanks for reading it.