
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (55)
-
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised. After it is activated, a preconfiguration is automatically put in place by MediaSPIP init so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.
-
Adding user-specific information and other changes to author-related behaviour
12 April 2011, by
The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers. For older browsers, the Flash player Flowplayer is used.
The HTML5 player was created specifically for MediaSPIP: it is fully customisable graphically to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
On other sites (10517)
-
Directly send the video to the client for download while ffmpeg is cutting it
7 March 2023, by web disiner
I am building a web application using Node.js with the Express framework. It simply cuts a video and sends it to the client's browser as a response for download. To cut the video I use the FFmpeg tool.
Everything works fine, but the video cut by FFmpeg is saved to the working directory.


Below is my code:


const child_process = require("child_process");

const cutting = async () => {
  // Spawn ffmpeg to cut the section between 00:02:20 and 00:02:50
  // without re-encoding (stream copy), writing the result to output3.mp4.
  const ffmpeg = child_process.spawn("ffmpeg", [
    "-i", "video.mp4",
    "-ss", "00:02:20",
    "-to", "00:02:50",
    "-c:v", "copy",
    "-c:a", "copy",
    "output3.mp4"
  ]);

  // ffmpeg writes its progress and log output to stderr.
  ffmpeg.stderr.on("data", function (data) {
    console.log("ffmpeg stderr data = " + data);
  });
};



(Image of the working directory.)


Now I want ffmpeg, while cutting the video, to send the chunks of the cut video directly to the client for download, without saving it in the working directory or on the server where my code is hosted.

How can I do this? Which approach would be useful here? Any help is much appreciated. Thanks.
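One common way to do this (not from the original post; the route path, port and extra flags below are assumptions) is to have ffmpeg write to its standard output and pipe that stream straight into the Express response. A regular MP4 needs a seekable output, so fragmented-MP4 flags are used when writing to a pipe. A minimal sketch:

const express = require("express");
const { spawn } = require("child_process");

const app = express();

app.get("/cut", (req, res) => {
  // Hint to the browser that the streamed body should be downloaded.
  res.setHeader("Content-Type", "video/mp4");
  res.setHeader("Content-Disposition", 'attachment; filename="cut.mp4"');

  // Write to stdout ("pipe:1") instead of a file; the fragmented-MP4 flags
  // let ffmpeg produce MP4 on a non-seekable output.
  const ffmpeg = spawn("ffmpeg", [
    "-i", "video.mp4",
    "-ss", "00:02:20",
    "-to", "00:02:50",
    "-c:v", "copy",
    "-c:a", "copy",
    "-movflags", "frag_keyframe+empty_moov",
    "-f", "mp4",
    "pipe:1"
  ]);

  // Stream ffmpeg's stdout chunks directly to the HTTP response.
  ffmpeg.stdout.pipe(res);

  ffmpeg.stderr.on("data", (data) => {
    console.log("ffmpeg stderr: " + data);
  });

  // Stop ffmpeg if the client disconnects before the cut finishes.
  req.on("close", () => ffmpeg.kill("SIGKILL"));
});

app.listen(3000);

With this approach nothing is written to the working directory; the download starts as soon as ffmpeg emits the first fragments.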


-
ffmpeg merge silent video with another video+audio
4 March 2016, by Mark Dagger
I want to create, in a single command, a video from 3 sources:
- a silent background video;
- a smaller video to be overlaid (same length as 1), KEEPING its audio;
- a PNG logo to be overlaid.

I can create the video but cannot get the audio track. I don't understand whether -vf is supposed to work in this case. This is what I've tried to do:

ffmpeg.exe -y -i MASTER_SILENT_VIDEO.mp4 -vf "movie=SMALLER_VIDEO_WITH_AUDIO.flv, scale=320:-1[inner];movie=MY_LOGO.png[inner2]; [in][inner] overlay=800:480,amerge [step1]; [step1][inner2] overlay=30:30 [out]" completed.mp4

The "amerge" filter should do the audio merging job, but of course it doesn't work. I've found similar questions involving -map or a filtergraph, but they refer to mixing a video source and an audio source; I tried several filtergraph examples without success. Any idea?
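A sketch of one likely fix (not part of the original post, reusing the same file names): give each source as its own -i input, build the overlays with -filter_complex, and map the second input's audio explicitly instead of merging it inside the video filtergraph:

ffmpeg -y -i MASTER_SILENT_VIDEO.mp4 -i SMALLER_VIDEO_WITH_AUDIO.flv -i MY_LOGO.png -filter_complex "[1:v]scale=320:-1[inner];[0:v][inner]overlay=800:480[step1];[step1][2:v]overlay=30:30[out]" -map "[out]" -map 1:a -c:a aac completed.mp4

Here -map "[out]" selects the composited video and -map 1:a keeps the audio of the smaller video, so no amerge is needed since only one source actually carries audio.
-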
FFMPEG video editing application. Need time and date stamp burned into video
11 May 2022, by Jacob
I am developing an application for video editing. The main component of this application is to produce a single video file from several video files captured with a camcorder, with the time and date stamp displayed on the final rendered video, much like the final product from a security camera. Using FFmpeg, I have figured out how to burn the date and time into the video with an .SRT file as well as with drawtext, like the following:


ffmpeg -y -i video.mp4 -vf "drawtext=fontfile=roboto.ttf:fontsize=12:fontcolor=yellow:text='%{pts\:localtime\:1575526882\:%A, %d, %B %Y %I\\\:%M\\\:%S %p}'" -preset ultrafast -f mp4 output_new.mp4



I would rather use the drawtext method so the user does not have to wait while the .SRT files are created. I am new to FFmpeg and I find its documentation very confusing. I am hoping there is someone out there who has experience with it.


Everything seems to work when I pass in the date-created metadata from the video file, and drawtext just does its thing. The problem is that my application allows editing of the video. I do this, for lack of a better solution, by letting the user select from the UI the beginning and ending frames they do not want; the code then simply deletes those frames from the directory where they were split and saved. I then use FFmpeg to iterate through the directory and combine the remaining frames into a video file.


This approach starts the time and date from the date-created metadata; however, cutting frames out of the video makes the date/time stamp inaccurate because of the missing frames.


Is there any way to tell FFmpeg to burn in the date and time retrieved from each individual frame? I appreciate any advice you may have.
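For reference, the number after localtime in the drawtext expansion above is the Unix-epoch start time that the running clock is based on. One hedged way to keep the stamp accurate after frames have been removed is to stamp each remaining segment separately, passing that segment's own start time (computed from its metadata) as the epoch; the file names and the epoch value below are placeholders, not taken from the question:

ffmpeg -y -i segment_001.mp4 -vf "drawtext=fontfile=roboto.ttf:fontsize=12:fontcolor=yellow:text='%{pts\:localtime\:1651680000\:%A, %d, %B %Y %I\\\:%M\\\:%S %p}'" -preset ultrafast segment_001_stamped.mp4

Because each segment starts its localtime clock at its own epoch, the burned-in time stays aligned with the original footage even though frames between segments were deleted; the stamped segments can then be concatenated.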