
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
Other articles (91)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, standalone version.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...) -
Making files available
14 April 2011
By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible and easy to give visitors access to these documents, in various forms.
All of this happens in the skeleton configuration page. You need to go to the channel's administration area and choose in the navigation (...) -
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, standalone version.
To get a working installation, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
On other sites (9437)
-
FFMPEG: Specifying Output Stream Type When Combining Multiple Filters
7 May 2021, by Leonard Bedner
I currently have 3 separate ffmpeg commands that do the following:

1) Overlay a watermark on a video:
ffmpeg -i samplegreen.webm -i foregrounds/myimage.png -r 30 -filter_complex "overlay=(W-w)/2:H-h" -af "adelay=700" output.mp4

2) Overlay the results of 1) onto a beach video:
ffmpeg -i backgrounds/beachsunsetmp4.mp4 -i output.mp4 -filter_complex "[1:v]chromakey=0x005d0b:0.1485:0.03[ckout];[0:v][ckout]overlay[o]" -map [o] -map 1:a -shortest somefolder/sample_video.mp4

3) Merge the audio of the results of 2) with another audio file:
ffmpeg -i somefolder/sample_video.mp4 -i backgrounds/beachsunsetmp4.mp3 -filter_complex '[0:a][1:a]amerge=inputs=2[a]' -map 0:v -map '[a]' -c:v copy -ac 2 -shortest anotherfolder/sample_video.mp4

Now, this all works as intended; however, I was looking into combining them all into a single command, with all of the filters, like so:


ffmpeg -i samplegreen.webm -i foregrounds/myimage.png -r 30 -i backgrounds/beachsunsetmp4.mp4 -i backgrounds/beachsunsetmp4.mp3 -filter_complex \
 "[0]overlay=(W-w)/2:H-h[output_1]; \
 [output_1]chromakey=0x005d0b:0.1485:0.03[ckout]; \
 [2:v][ckout]overlay[output_2]; \
 [output_2][3:a] amerge=inputs=2 [output_3]" \
 -af "adelay=700" -map [output_3] shortest final.mp4



It fails with the following error (Media type mismatch between the 'Parsed_overlay_2' filter output pad 0 (video) and the 'Parsed_amerge_3' filter input pad 0 (audio)):

ffmpeg version 4.3.2 Copyright (c) 2000-2021 the FFmpeg developers
 built with Apple clang version 11.0.0 (clang-1100.0.33.17)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.3.2_1 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox
 libavutil 56. 51.100 / 56. 51.100
 libavcodec 58. 91.100 / 58. 91.100
 libavformat 58. 45.100 / 58. 45.100
 libavdevice 58. 10.100 / 58. 10.100
 libavfilter 7. 85.100 / 7. 85.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 7.100 / 5. 7.100
 libswresample 3. 7.100 / 3. 7.100
 libpostproc 55. 7.100 / 55. 7.100
Input #0, matroska,webm, from 'samplegreen.webm':
 Metadata:
 encoder : Chrome
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0(eng): Video: vp8, yuv420p(progressive), 1280x720, SAR 1:1 DAR 16:9, 1k tbr, 1k tbn, 1k tbc (default)
 Metadata:
 alpha_mode : 1
 Stream #0:1(eng): Audio: opus, 48000 Hz, mono, fltp (default)
Input #1, png_pipe, from 'foregrounds/myimage.png':
 Duration: N/A, bitrate: N/A
 Stream #1:0: Video: png, rgba(pc), 350x86, 25 tbr, 25 tbn, 25 tbc
Input #2, mov,mp4,m4a,3gp,3g2,mj2, from 'backgrounds/beachsunsetmp4.mp4':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp42mp41
 creation_time : 2021-02-16T18:24:40.000000Z
 Duration: 00:00:32.53, start: 0.000000, bitrate: 3032 kb/s
 Stream #2:0(eng): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1280x720, 3027 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
 Metadata:
 creation_time : 2021-02-16T18:24:40.000000Z
 handler_name : ?Mainconcept Video Media Handler
 encoder : AVC Coding
[mp3 @ 0x7f86cf809000] Estimating duration from bitrate, this may be inaccurate
Input #3, mp3, from 'backgrounds/beachsunsetmp4.mp3':
 Metadata:
 date : 2021-02-18 06:49
 id3v2_priv.XMP : <?xpacket begin="\xef\xbb\xbf" id="W5M0MpCehiHzreSzNTczkc9d"?>\x0a\x0a \x0a s
 Stream #3:0: Audio: mp3, 48000 Hz, stereo, fltp, 128 kb/s
[Parsed_overlay_2 @ 0x7f86cd4039c0] Media type mismatch between the 'Parsed_overlay_2' filter output pad 0 (video) and the 'Parsed_amerge_3' filter input pad 0 (audio)
[AVFilterGraph @ 0x7f86cd402a40] Cannot create the link overlay:0 -> amerge:0
Error initializing complex filters.
Invalid argument



As far as I can tell, the issue is that the amerge filter wants 2 audio streams. Normally, I could take the input stream argument (which is a video) and make it use the audio by doing something like [0:a][1:a]amerge=inputs=2[results]. However, since my input stream is the output of a preceding filter, that doesn't seem to work (i.e. [output_2:a]). It bombs out with:

[matroska,webm @ 0x7fecca000000] Invalid stream specifier: output_2:a.
 Last message repeated 1 times
Stream specifier 'output_2:a' in filtergraph description [0]overlay=(W-w)/2:H-h[output_1]; [output_1]chromakey=0x005d0b:0.1485:0.03[ckout]; [2:v][ckout]overlay[output_2]; [output_2:a][3:a] amerge=inputs=2 [output_3] matches no streams.
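
Since overlay only produces a video stream, it looks like the audio has to be taken directly from the inputs and merged in its own chain, rather than referenced through a filter label. A rough, unverified sketch of what that could look like (keeping the 700 ms delay from step 1 and the -ac 2 downmix from step 3):

ffmpeg -i samplegreen.webm -i foregrounds/myimage.png -i backgrounds/beachsunsetmp4.mp4 -i backgrounds/beachsunsetmp4.mp3 -filter_complex \
 "[0:v][1:v]overlay=(W-w)/2:H-h[wm]; \
 [wm]chromakey=0x005d0b:0.1485:0.03[ckout]; \
 [2:v][ckout]overlay[outv]; \
 [0:a]adelay=700[adel]; \
 [adel][3:a]amerge=inputs=2[outa]" \
 -map "[outv]" -map "[outa]" -r 30 -ac 2 -shortest final.mp4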



So all of that said... Is there a way to specify that I'd like to use the audio stream from the output of a preceding filter? Or is there any other way to combine all of these filters into a single command?


Thanks.


Any help would be greatly appreciated!


-
C# / FFMPEG - Is this the best way to programmatically combine multiple video files in different formats and encodings into one?
28 April 2021, by buggybud
I've been trying to concatenate multiple videos into one. These videos may have different file types and extensions. As it stands, I am only working with MP4 files that seem to have different resolutions, framerates, you name it.


After first following the Stack Overflow answers that talked about using an intermediate file (convert all the files into one format, then concatenate that), I also came across a solution that uses what I think is called a 'concat video filter.'


This would allow me to ignore the intermediate steps and just combine all the files by specifying their individual settings within one single FFMPEG command.


-i 1.mp4 -i 2.mp4 -i 3.mp4 -filter_complex \
 "[0:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1,fps=30,format=yuv420p[v0];
 [1:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1,fps=30,format=yuv420p[v1];
 [2:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1,fps=30,format=yuv420p[v2];
 [v0][0:a][v1][1:a][v2][2:a]concat=n=3:v=1:a=1[v][a]" \
 -map "[v]" -map "[a]" -c:v libx264 -c:a aac -movflags +faststart output.mp4



Provided above is the snippet that shows how to combine three videos into one. I've used this snippet within my code, but since I don't want to specify the input files manually and couldn't find a way to use a list file, I ended up with the very hacky solution of programmatically creating the above command parameters.


var command = "";

// One -i "path" argument per input video
for (var i = 0; i < video_paths.Count; i++)
{
    command += $"-i \"{video_paths[i]}\" ";
}

command += "-filter_complex ";
command += "\"";

// Normalize every input: fit into 1280x720, pad, square pixels, 30 fps, yuv420p
for (var i = 0; i < video_paths.Count; i++)
{
    command += $"[{i}:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1,fps=30,format=yuv420p[v{i}];";
}

// Interleave the normalized video and original audio labels for concat
for (var i = 0; i < video_paths.Count; i++)
{
    command += $"[v{i}][{i}:a]";
}

command += $"concat=n={video_paths.Count}:v=1:a=1[v][a]\" ";
command += $"-map \"[v]\" -map \"[a]\" -c:v libx264 -c:a aac -movflags +faststart \"{path}\"";

ffmpeg(command);



The above code is the solution to my problem. It works.


The reason I made this Stack Overflow question is the following: Is this the best way to programmatically build the arguments? What is the maximum string length for these arguments, given that my video paths are all absolute paths? How can I make this look nicer and less chaotic in code?
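
For the argument-length concern, one possible direction (assuming the ffmpeg build in use supports it) is the -filter_complex_script option, which reads the filtergraph from a text file, so only the -i arguments and output options remain on the command line. A rough sketch:

ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -filter_complex_script graph.txt \
 -map "[v]" -map "[a]" -c:v libx264 -c:a aac -movflags +faststart output.mp4

Here graph.txt would contain the same scale/pad/concat chain that the loops above generate.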


Bud


-
Combine multiple images into a video using ffmpeg [closed]
13 April 2021, by Mayank Thapliyal
I want to combine multiple images into a video. I will use ffmpeg with Python, as a Python script will help me select images accordingly. And I have to fulfill multiple conditions:

- Frame rate will be 0.2 (5 seconds for each image).
- Screen ratio should be 18:9, but all the images have different dimensions. I want neither to crop nor to stretch them, so I want to pad them with a black background.
- Ability to add background music.

But the issue is that I am not able to adjust the images: either they get stretched to full screen or they get cropped. So currently I first use the PIL library to adjust the images and then combine them with ffmpeg. But it takes a good amount of time and I don't like this approach.

Please help me solve my issue by using only ffmpeg (I guess that it will be fast enough).
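
A minimal sketch of the kind of single ffmpeg command that might cover these conditions (the img%03d.jpg naming, music.mp3, and the 1440x720 target, which is 18:9, are assumptions to adapt):

ffmpeg -framerate 1/5 -i img%03d.jpg -i music.mp3 \
 -vf "scale=1440:720:force_original_aspect_ratio=decrease,pad=1440:720:(ow-iw)/2:(oh-ih)/2:color=black,setsar=1,format=yuv420p" \
 -r 30 -c:v libx264 -c:a aac -shortest slideshow.mp4

The input -framerate 1/5 shows each image for 5 seconds, the scale/pad chain letterboxes every image into the 18:9 frame without cropping or stretching, and -shortest ends the output with the shorter of the two streams.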

-