
Other articles (24)
-
MediaSPIP v0.2
21 June 2013 — MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013 and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As for the previous version, all of the software dependencies must be installed manually on the server.
If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Making the files available
14 April 2011 — By default, when it is initialised, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible, and easy, to give visitors access to these documents in various forms.
All of this is done in the skeleton configuration page. You need to go to the channel's administration area and choose in the navigation (...) -
MediaSPIP version 0.1 Beta
16 April 2011 — MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in standalone form.
To get a working installation, all of the software dependencies must be installed manually on the server.
If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (7061)
-
How to compress video with uniform blur being the only side effect of compression?
5 January 2020, by Kukuster — The point is to compress a video file, significantly reducing its bitrate, but somehow without changing its apparent quality, and because it would fit the design, I thought I would blur the video. In raster image processing terms it would be close to a mean blur, a bit like a "behind frosted glass" effect.
Ideally, I would like to be able to trade off between stronger compression with stronger blur and lighter compression with lighter blur, with the result looking good in either case.
I tried some ffmpeg post-processing filters, including fspp, spp and uspp (which takes a long time to render). I almost reached the goal using fspp with parameters 5:60:15, but the blur was either very strong on certain objects or left other undesired artifacts. uspp does a good job and looks nice, but it leaves about half of each frame unblurred, while I'm looking for a more uniform blur across the frame. Because it can take a lot of time, I didn't try all of uspp's options.
Here is what ffmpeg reports about the input streams:
  Duration: 00:01:03.02, start: 0.000000, bitrate: 4010 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 3870 kb/s, 23.98 fps, 23.98 tbr, 24k tbn, 47.95 tbc (default)
      Metadata:
        handler_name : VideoHandler
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 133 kb/s (default)
      Metadata:
        handler_name : SoundHandler
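
A uniform full-frame blur of this kind can be approximated with ffmpeg's gblur (or boxblur) filter followed by re-encoding at a lower bitrate. A minimal sketch, where the file names, the sigma value and the target bitrate are placeholders to tune:

    # blur the whole frame, then re-encode at a reduced bitrate; audio is copied unchanged
    ffmpeg -i input.mp4 -vf "gblur=sigma=3" -c:v libx264 -b:v 1000k -c:a copy output.mp4

Raising sigma while lowering -b:v gives the "more blur, more compression" trade-off described above, and unlike the spp/fspp post-processing filters the blur is applied evenly across the whole frame.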
-
Merging multiple videos in a template/layout with Python FFMPEG?
14 January 2021, by J. M. Arnold — I'm currently trying to edit videos with the Python library for FFMPEG. I'm working with multiple file formats, namely .mp4, .png and text inputs (.txt). The goal is to embed the different video files within a "layout" - for demonstration purposes I tried to design an example picture:

[example layout image]

The output is supposed to be a 1920x1080 .mp4 file with the following elements:

- Element 3 is the video itself (due to it being a mobile phone screen recording, it's about the size displayed there)
- Elements 1 and 2 are the "borders", i.e. static pictures (?)
- Element 4 represents a regularly changing text, input through the Python script (probably read from a .txt file)
- Element 5 portrays a .png, .svg or alike; in general a "picture" in the broad sense.

What I'm trying to achieve is to create a sort of template file in which I "just" need to input the different .mp4 and .png files, as well as the text, and in the end I'll receive an .mp4 file, whereas my Python script functions as the navigator sending the data packages to FFMPEG to process the video itself.

I dug into the FFMPEG library as well as the python-specific repository and wasn't able to find such an option. There were lots of articles explaining the usage of "channel layouts" (though these don't seem to fit my need).
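
For what it's worth, a layout like the one sketched above maps fairly directly onto an ffmpeg filter graph: scale a border/background image to 1920x1080, overlay the screen recording and the logo, and render the changing text with drawtext. A rough sketch of the command a Python script would need to assemble (all file names, coordinates and sizes are placeholders, and drawtext assumes an ffmpeg build with font support):

    ffmpeg -loop 1 -i borders.png -i recording.mp4 -i logo.png -filter_complex \
      "[0:v]scale=1920:1080[bg]; \
       [bg][1:v]overlay=(W-w)/2:(H-h)/2[tmp1]; \
       [tmp1][2:v]overlay=W-w-60:60[tmp2]; \
       [tmp2]drawtext=textfile=caption.txt:x=80:y=H-120:fontsize=48:fontcolor=white[out]" \
      -map "[out]" -map 1:a -shortest -c:v libx264 -pix_fmt yuv420p output.mp4

Python bindings such as ffmpeg-python can build the same graph programmatically (input/overlay/drawtext/output), but the layout work itself happens in the filter graph rather than in any ready-made template option.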


In case anyone wants to try on the same versions:

- python --version: Python 3.7.3
- pip show ffmpeg: Version: 1.4 (it's the most recent; on an off-topic note: it's not obligatory to use FFMPEG, I'd prefer using this library, though if it doesn't offer the functionality I'm looking for, I'd highly appreciate it if someone suggested something else)


-
ffmpeg: how to apply animation to multiple images
2 March 2023, by Pavan Ghanate — I am trying to merge a number of images selected from the gallery into a video template in order to make a video status or short video in an Android app. I am able to merge the selected images into the video using the command below; now I want to add animation.


ArrayList<String> cmd2 = new ArrayList<>();
cmd2.add("-y");
cmd2.add("-i");
// first input: the video template
if (video_temp_path != null) {
    cmd2.add(video_temp_path);
} else {
    cmd2.add(Environment.getExternalStorageDirectory().getPath()
            + "/Download/happy.mp4");
}

// one extra input per selected image
for (int no = 0; no < paths.length; no++) {
    cmd2.add("-i");
    cmd2.add(paths[no]);
}

// overlay each image on the template during its own time window
cmd2.add("-filter_complex");
cmd2.add("[0][1]overlay=x=100:y=200:enable='between(t,3,8)'[v1];" +
        "[v1][2]overlay=x=100:y=200:enable='between(t,10,15)'[v2];" +
        "[v2][3]overlay=x=100:y=200:enable='gt(t,17)'[v3]");
cmd2.add("-map");
cmd2.add("[v3]");
cmd2.add("-map");
cmd2.add("0:a");
cmd2.add(Environment.getExternalStorageDirectory().getPath()
        + "/Download/output.mp4");
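
For three selected images, the list above corresponds to an ffmpeg invocation along these lines (paths shortened to placeholders):

    ffmpeg -y -i happy.mp4 -i img1.jpg -i img2.jpg -i img3.jpg \
      -filter_complex "[0][1]overlay=x=100:y=200:enable='between(t,3,8)'[v1]; \
                       [v1][2]overlay=x=100:y=200:enable='between(t,10,15)'[v2]; \
                       [v2][3]overlay=x=100:y=200:enable='gt(t,17)'[v3]" \
      -map "[v3]" -map 0:a output.mp4

Note that the filter graph hard-codes exactly three image inputs ([1], [2], [3]), so it only matches a paths array of length three.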


But now I want to add a fade in/out animation to the images, so I am using this command generated by ChatGPT, but it's giving me an error:


ArrayList<String> cmd2 = new ArrayList<>();
cmd2.add("-y");
cmd2.add("-i");

if (video_temp_path != null) {
    cmd2.add(video_temp_path);
} else {
    cmd2.add(Environment.getExternalStorageDirectory().getPath() +
            "/Download/happy.mp4");
}

for (int no = 0; no < paths.length; no++) {
    cmd2.add("-loop");
    cmd2.add("1"); // loop the image
    cmd2.add("-t");
    cmd2.add("5"); // duration of the image
    cmd2.add("-i");
    cmd2.add(paths[no]);

    // note: the filter graph and the -map output options are added
    // once per image, inside the input loop
    cmd2.add("-filter_complex");
    cmd2.add("[1:v]fade=in:st=0:d=1[tin];" +
            "[1:v]fade=out:st=4:d=1[tout];" +
            "[0:v][tin]overlay=x=100:y=200[v1];" +
            "[v1][tout]overlay=x=100:y=200:enable='between(t,10,15)'[v2];" +
            "[v2][2:v]overlay=x=100:y=200:enable='gt(t,17)'[v3]");

    cmd2.add("-map");
    cmd2.add("[v3]");
    cmd2.add("-map");
    cmd2.add("0:a");
}

cmd2.add(Environment.getExternalStorageDirectory().getPath() +
        "/Download/output.mp4");


The error is:


Option map (set input stream mapping) cannot be applied to input url /storage/emulated/0/Pictures/temp/1677570327312.jpg -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to.



2023-03-01 12:50:50.707 5950-6326/com.android.mergevideo E/mobile-ffmpeg : Error parsing options for input file /storage/emulated/0/Pictures/temp/1677570327312.jpg.
2023-03-01 12:50:50.707 5950-6326/com.android.mergevideo E/mobile-ffmpeg : Error opening input files :
2023-03-01 12:50:50.707 5950-6326/com.android.mergevideo E/mobile-ffmpeg : Invalid argument
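
The error comes from the loop above: on every iteration after the first, the -filter_complex and -map output options land in front of the next -i, so ffmpeg tries to apply them to an input file, exactly as the message says. One possible fix, sketched here as the command the Java code would need to assemble (file names are placeholders, and the durations and enable windows are kept from the question, so they will likely need tuning): add all -loop/-t/-i inputs first, then build a single -filter_complex that fades each image and overlays it, and put the -map options after it.

    ffmpeg -y -i happy.mp4 \
      -loop 1 -t 5 -i img1.jpg \
      -loop 1 -t 5 -i img2.jpg \
      -loop 1 -t 5 -i img3.jpg \
      -filter_complex \
      "[1:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[i1]; \
       [2:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[i2]; \
       [3:v]fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[i3]; \
       [0:v][i1]overlay=x=100:y=200:enable='between(t,3,8)'[v1]; \
       [v1][i2]overlay=x=100:y=200:enable='between(t,10,15)'[v2]; \
       [v2][i3]overlay=x=100:y=200:enable='gt(t,17)'[v3]" \
      -map "[v3]" -map 0:a output.mp4

In the Java code this means moving everything from -filter_complex onward out of the for loop, generating the fade/overlay chains from paths.length instead of hard-coding three inputs, and probably shifting each image's timestamps (for example with setpts) so its fade lines up with its overlay window.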