
Media (1)
-
GetID3 - file information block
9 April 2013
Updated: May 2013
Language: French
Type: Image
Other articles (35)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Support for all media types
10 April 2011. Unlike many programs and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others); audio (MP3, Ogg, Wav and others); video (Avi, MP4, Ogv, mpg, mov, wmv and others); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (3995)
-
DrawText and crossfade effect - ffmpeg
22 July 2018, by Geek

I have a video with a crossfade effect and I want to add text to this video (drawtext).
However, when I add the text filter, it removes the crossfade effect from the final video.

Command to create the video with the crossfade effect:
ffmpeg -i first.mp4 -i second.mp4 -filter_complex
"[0:v]trim=start=0:end=2,setpts=PTS-STARTPTS[0_clip_1];
[0:v]trim=start=2:end=3,setpts=PTS-STARTPTS[fadeoutsrc_0];
[fadeoutsrc_0]format=pix_fmts=yuva420p,fade=t=out:st=0:d=1:alpha=1[fadeout_0];
[fadeout_0]fifo[fadeoutfifo_0];[1:v]trim=start=1,setpts=PTS-STARTPTS[1_clip_2];
[1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc_1];
[fadeinsrc_1]format=pix_fmts=yuva420p,fade=t=in:st=0:d=1:alpha=1[fadein_1];
[fadein_1]fifo[fadeinfifo_1];
[fadeoutfifo_0][fadeinfifo_1]overlay[crossfade_1];
[0_clip_1][crossfade_1][1_clip_2]concat=n=3[output];[0:a][1:a] acrossfade=d=1 [audio]"
-map "[output]" -map "[audio]" videoWithCrossfade.mp4

Command to add the text filter:
ffmpeg -i videoWithCrossfade.mp4 -filter_complex
"drawtext=fontfile='/Windows/fonts/arial.ttf':text='hello world!':fontcolor=white:fontsize=40:box=1:boxcolor=red@0.5:boxborderw=10:x=500:y=500"
output.mp4

Links to the videos:
http://www.mediafire.com/file/kw3lvdb2rp1bs6u/videoWithCrossfade.mp4/file
http://www.mediafire.com/file/iycdzozsqzosq87/output.mp4/file

Thanks for your help!
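(For reference, one possible way to keep the crossfade and burn in the text in a single pass might be to append drawtext to the concat output inside the original filter graph. This is only a sketch built from the commands above; the [merged] label, the trailing drawtext step and the output file name are new and illustrative, everything else is taken from the first command:)

ffmpeg -i first.mp4 -i second.mp4 -filter_complex \
"[0:v]trim=start=0:end=2,setpts=PTS-STARTPTS[0_clip_1];
[0:v]trim=start=2:end=3,setpts=PTS-STARTPTS[fadeoutsrc_0];
[fadeoutsrc_0]format=pix_fmts=yuva420p,fade=t=out:st=0:d=1:alpha=1[fadeout_0];
[fadeout_0]fifo[fadeoutfifo_0];[1:v]trim=start=1,setpts=PTS-STARTPTS[1_clip_2];
[1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc_1];
[fadeinsrc_1]format=pix_fmts=yuva420p,fade=t=in:st=0:d=1:alpha=1[fadein_1];
[fadein_1]fifo[fadeinfifo_1];
[fadeoutfifo_0][fadeinfifo_1]overlay[crossfade_1];
[0_clip_1][crossfade_1][1_clip_2]concat=n=3[merged];
[merged]drawtext=fontfile='/Windows/fonts/arial.ttf':text='hello world!':fontcolor=white:fontsize=40:box=1:boxcolor=red@0.5:boxborderw=10:x=500:y=500[output];
[0:a][1:a]acrossfade=d=1[audio]" \
-map "[output]" -map "[audio]" videoWithCrossfadeAndText.mp4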
-
ffmpeg -ss then apply filter then concat producing timestamp errors
6 August 2020, by Bob Ramsey

Using ffmpeg, I have split a file into multiple parts using -ss. Then I apply a filter to some of the files and concat the files back together. When I do that, I get: Non-monotonous DTS in output stream 0:0; previous: 341334, current: 340526; changing to 341335. This may result in incorrect timestamps in the output file. The output file plays, but there are noticeable skips where the files are joined.


Here's how I am splitting the file:


ffmpeg -i full_source.mp4 -ss 0 -to 14.264250 -c copy 01-plain.mp4
ffmpeg -i full_source.mp4 -ss 14.264250 -to 18.435083 -c copy 01-filtered.mp4

ffmpeg -i full_source.mp4 -ss 18.435083 -to 29.988292 -c copy 02-plain.mp4
ffmpeg -i full_source.mp4 -ss 29.988292 -to 31.865167 -c copy 02-filtered.mp4
...
ffmpeg -i full_source.mp4 -ss 0 -to 14.264250 -c copy 10-plain.mp4
ffmpeg -i full_source.mp4 -ss 234.484203 -to 300.000 -c copy 10-filtered.mp4
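
(A hedged aside: with -c copy, ffmpeg can only start a segment on a keyframe, so stream-copied cuts are not frame-accurate and the requested timestamps can shift slightly. One way to get exact cut points is to re-encode the segments instead of stream-copying; the encoder settings below are illustrative, not the original ones:)

ffmpeg -i full_source.mp4 -ss 0 -to 14.264250 -c:v libx264 -crf 15 -c:a aac 01-plain.mp4
ffmpeg -i full_source.mp4 -ss 14.264250 -to 18.435083 -c:v libx264 -crf 15 -c:a aac 01-filtered.mp4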



Then I apply a different drawtext filter on each of the 10 filtered files and save them with a new name, like:


ffmpeg -hide_banner -loglevel warning -y -i 01-filtered.mp4 -filter_complex "drawtext=fontfile=calibri.ttf:fontsize=24:fontcolor=white:x=300:y=500:text='hello world'" -crf 15 01-filtered-complete.mp4



Finally, I join all of the plain and complete files back together like this:


ffmpeg -f concat -safe 0 -i mylist.txt -c:a copy -c:v copy outfile.mp4
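
(mylist.txt is assumed to follow the usual concat demuxer format, alternating the plain and filtered-complete segments; this is only a sketch of what it might contain, written here as a shell heredoc:)

cat > mylist.txt << 'EOF'
file '01-plain.mp4'
file '01-filtered-complete.mp4'
file '02-plain.mp4'
file '02-filtered-complete.mp4'
# ... and so on through the 10-* segments
EOF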



And that's where the timing error comes in. I've tried adding -vsync drop to the concat command, but that didn't really work either. The same build of ffmpeg does the split, the filter, and the concat; I've tried different versions, everything from 20170519 to one from May 2020, with the same result, always making sure that all three steps are done by the same version of ffmpeg.
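
(Another hedged option: the concat filter, unlike the concat demuxer, works on decoded frames and produces fresh, contiguous timestamps, at the cost of a full re-encode. A sketch with only the first three segments listed; the remaining inputs would follow the same pattern and the encoder settings are illustrative:)

ffmpeg -i 01-plain.mp4 -i 01-filtered-complete.mp4 -i 02-plain.mp4 \
  -filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" -crf 15 -c:a aac outfile.mp4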


The only thing I can see is that ffprobe shows a duration of 14.27 for 01-plain.mp4 when it should be 14.264250. All of the other files show a similar rounding difference. The files are 23.98 fps. If I do all of my filters in one really long command without splitting the file, I can use the more precise numbers with no problem; it just takes 10 times as long. This is all scripted, it happens a couple of hundred times a day and time is money, so I can't take 10 times as long to do each file.
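
(For reference, the single-command approach mentioned above might look roughly like the sketch below, with each drawtext limited to its own time window via the enable option; the two windows reuse the split points from earlier and everything else is illustrative:)

ffmpeg -i full_source.mp4 \
  -vf "drawtext=fontfile=calibri.ttf:fontsize=24:fontcolor=white:x=300:y=500:text='hello world':enable='between(t,14.264250,18.435083)',drawtext=fontfile=calibri.ttf:fontsize=24:fontcolor=white:x=300:y=500:text='second text':enable='between(t,29.988292,31.865167)'" \
  -crf 15 -c:a copy outfile_onepass.mp4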


Any ideas? Thanks in advance!


-
Is ffmpeg able to read ArrayBuffer input from stream
7 July 2017, by jAndy

I want to accomplish the following tasks:

- Record Video+Audio from any HTML5 (MediaStream) capable browser
- Send that data via WebSocket as Blob/ArrayBuffer chunks to a server
- Broadcast that input stream-data to multiple clients

As it turns out, this brought me into a world of pain. The first task is fairly simple using the HTML5 MediaStream objects alongside WebSockets:

// ... for simplicity...
navigator.mediaDevices.getUserMedia({ audio: true, video: true }).then(stream => {
    let mediaRecorder = new MediaRecorder( stream );
    // ...
    mediaRecorder.ondataavailable = e => {
        webSocket.send( 'newVideoData', e.data ); // configured for binary data
    };
});

Now I want to receive those data fragments and stream them via the nginx vod module, because I guess I want the output stream in HLS or DASH. I could write a little nodejs script as backend which just receives the binary chunks and writes them to a file or stream, and just reference it so the nginx vod module could possibly read it and create the m3u8 manifest on the fly?

I am wondering now:

- if ffmpeg is able to read that binary data directly (it should be webm format), without a man-in-the-middle script, "somehow"?
- If not, do I have to write the data down into a file and pass that as input to ffmpeg, or can I (should I) pipe the data to a self-spawned ffmpeg instance? (If so, how? A rough sketch follows at the end of this post.)
- Do I actually need the nginx server (probably alongside the rtmp module) to deliver the output stream as HLS, or could I just use ffmpeg to also create a dynamic manifest?
- Is the nginx vod module capable of creating a dynamic HLS/DASH manifest, or must the input data be complete beforehand?
- Ultimately, am I on the totally wrong track here? :P

Actually I just want to create a little video-live-chat demo, without any plugins or 3rd-party encoding software, pure browser.
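
(Not an authoritative answer, just a sketch of the "pipe it to ffmpeg" idea from the second bullet: ffmpeg can read from standard input via pipe:0, so a backend process that receives the WebSocket chunks could write them straight into a spawned ffmpeg that segments to HLS. The encoder and HLS options below are illustrative and unverified for this exact setup:)

# the backend writes the received WebM chunks, in order, to ffmpeg's stdin
ffmpeg -i pipe:0 \
  -c:v libx264 -preset veryfast -c:a aac \
  -f hls -hls_time 2 -hls_list_size 6 -hls_flags delete_segments \
  /var/www/live/stream.m3u8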