
Other articles (52)
-
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player used was created specifically for MediaSPIP: it is fully customisable graphically to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
Libraries and binaries specific to video and audio processing
31 January 2010. The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFmpeg: the main encoder, able to transcode almost every type of video and audio file into formats that can be read on the Internet (see this tutorial for its installation); oggz-tools: tools for inspecting Ogg files; MediaInfo: retrieves information from most video and audio formats;
Additional and optional binaries: flvtool2: (...)
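As a rough illustration of the kind of transcoding FFmpeg handles in this role (a hypothetical command, not taken from the article; the file names are made up):
ffmpeg -i source.avi -c:v libx264 -c:a aac -movflags faststart output.mp4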
On other sites (6804)
-
Create a time-lapse video from another video
27 January 2017, by Orlando. Using avconv (or even ffmpeg, so I can use it as a reference), how can I create a time-lapse video by taking only anchor/reference frames from another video? Most information I find is about creating a time-lapse video by combining images, and I'd like to do it by extracting frames from a video. Say, if a video is 30 seconds long at 30 FPS, I'd like to take 60 of those 900 frames (900/60 = every 15th frame) to produce a 2-second video.
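One possible sketch with ffmpeg (avconv syntax is very similar), assuming input.mp4 is the 30-second, 30 FPS source: the select expression keeps every 15th frame, setpts re-times the kept frames to 30 FPS, and -an drops the audio.
ffmpeg -i input.mp4 -vf "select='not(mod(n\,15))',setpts=N/(30*TB)" -an timelapse.mp4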
-
Sync Video Multitrack Recording (with video.js and FFmpeg?) [closed]
28 April 2020, by finnk. I am writing a web application that gives the ability to record multiple videos and merge them into a single split-screen video. The videos have to be synchronous with each other, so if the user has already recorded a video, it will play back during the next recording. So the captured video(s) play while recording.
Right now, I have implemented it using video.js and videojs-record.
I merge the videos server-side using FFmpeg (client-side would be much better, but I haven't figured out how to achieve this).
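For the server-side merge itself, a minimal sketch of a two-video split screen with FFmpeg's hstack filter (left.webm and right.webm are hypothetical recordings of equal height; the audio is taken from the first input):
ffmpeg -i left.webm -i right.webm -filter_complex "[0:v][1:v]hstack=inputs=2[v]" -map "[v]" -map 0:a -c:v libx264 -c:a aac out.mp4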
When I start recording, the playback of each previously recorded video begins as well. This approach, of course, produces a small latency between the videos.

Any suggestions on how to sync the recordings?

Do you know of any better libraries?

Thank you & Best regards,
-
Merging 3 separate commands into one that re-encodes a video, extracts a thumbnail, deletes the original and renames the new video in subdirectories
16 January 2017, by Ali Samii. I am trying to execute a find bash command to process hundreds of video files that are all named video-original.mp4 but are in subdirectories of a parent directory. Here's an example of the directory structure:
videos
├── 01a
│ └── video-original.mp4
├── 01b
│ └── video-original.mp4
├── 02a
│ └── video-original.mp4
├── 02b
│ └── video-original.mp4
├── 03a
│ └── video-original.mp4
└── 03b
└── video-original.mp4
I am using the following command:
find ./ -name 'video-original.mp4' -exec bash -c 'ffmpeg -i "$0" -f mp4 -vcodec libx264 -preset veryslow -profile:v high -acodec aac -movflags faststart video.mp4 -hide_banner' {} \;
The problem I am having is that it is saving the file video.mp4 in the parent videos directory, instead of in the subdirectory next to the original video-original.mp4.
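One way to keep the output next to its source, as a sketch built on the command above ("${0%/*}" expands to the directory part of the matched path; everything else is unchanged from the question's command):
find ./ -name 'video-original.mp4' -exec bash -c 'ffmpeg -hide_banner -i "$0" -vcodec libx264 -preset veryslow -profile:v high -acodec aac -movflags faststart "${0%/*}/video.mp4"' {} \;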
Afterwards, I want to delete the file video-original.mp4. Currently, my process entails waiting for all the videos to be re-encoded and then, once that is complete, issuing a separate command to delete the file video-original.mp4:
find ./ -name 'video-original.mp4' -exec bash -c 'rm -rf "$0"' {} \;
And my final step would be to extract a screenshot of the new video.mp4 at 10 seconds and save it as thumbnail.jpg. Again, I am currently doing that as a separate step that I execute after the previous two steps are completed:
find ./ -name 'video.mp4' -exec bash -c 'ffmpeg -i "$0" -ss 00:00:10 -vframes 1 thumbnail.jpg' {} \;
What I would like to do is combine these three steps into a single command so the end result will be:
videos
├── 01a
│ ├── thumbnail.jpg
│ └── video.mp4
├── 01b
│ ├── thumbnail.jpg
│ └── video.mp4
├── 02a
│ ├── thumbnail.jpg
│ └── video.mp4
├── 02b
│ ├── thumbnail.jpg
│ └── video.mp4
├── 03a
│ ├── thumbnail.jpg
│ └── video.mp4
└── 03b
├── thumbnail.jpg
└── video.mp4
Finally, it would be great to save that as a bash script and include it in my path in /usr/local/bin or ~/bin as an executable, so I could just issue the command reencode and it would run. It would be even better if the input could be any video file, for example random_name.mp4 or random_name.mov or random_name.webm, basically any video file (but skipping video.mp4 at the encoding step).
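A minimal sketch of such a reencode script, under the assumption that the source file is always named video-original.mp4 (the ffmpeg flags and file names come from the question; the directory handling and chaining are illustrative, not a tested solution):

#!/usr/bin/env bash
# Re-encode every video-original.mp4 found below the current directory,
# write video.mp4 and thumbnail.jpg next to it, then remove the original.
find . -name 'video-original.mp4' -exec bash -c '
  dir="${0%/*}"    # subdirectory containing the source file
  ffmpeg -hide_banner -i "$0" -vcodec libx264 -preset veryslow -profile:v high \
         -acodec aac -movflags faststart "$dir/video.mp4" &&
  ffmpeg -hide_banner -i "$dir/video.mp4" -ss 00:00:10 -vframes 1 "$dir/thumbnail.jpg" &&
  rm -f "$0"
' {} \;

Saved as reencode in /usr/local/bin or ~/bin and marked executable, it can be run from the parent videos directory; the original is only deleted if both ffmpeg steps succeed.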