
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (75)
-
Configurable image and logo sizes
9 February 2011 — In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes may differ from one theme to another, they can be defined directly in the theme, sparing the user from having to configure them manually after changing the appearance of the site.
These image sizes are also available in the MediaSPIP Core specific configuration. The maximum size of the site logo, in pixels, allows (...)
-
Supporting all media types
13 April 2011 — Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
No talk of markets, cloud, etc.
10 April 2011 — The vocabulary used on this site tries to avoid any reference to the fads that flourish freely
on web 2.0 and in the businesses that live off it.
You are therefore invited to avoid using the terms "Brand", "Cloud", "Market" etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages
the sharing of creations on the Internet and lets authors keep as much autonomy as possible.
No "Gold or Premium contract" is therefore planned, no (...)
On other sites (10310)
-
How to set a background behind subtitles in ffmpeg?
10 April 2017, by supermario — It is described here how to burn an SRT file into a video.
However, I want to put a semi-transparent background behind the subtitles so that the text can be read more easily. How can I do that?
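One common approach (a sketch, not taken from the original post): burn the subtitles with the subtitles filter and override the libass style, where BorderStyle=3 draws a box behind each line and the alpha byte of BackColour makes that box semi-transparent. The file names below are placeholders.
# &H80000000 is black at roughly 50% alpha in ASS colour notation (AABBGGRR)
ffmpeg -i input.mp4 -vf "subtitles=subs.srt:force_style='BorderStyle=3,BackColour=&H80000000,Outline=0'" -c:a copy output.mp4
-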
timelapse images into a movie, 500 at a time
2 March 2017, by molly78 — I am trying to make a script to turn a bunch of timelapse images into a movie, using ffmpeg.
The latest problem is how to loop through the images in, say, batches of 500.
There could be 100 images from the day, or there could be 5000.
The reason for breaking this apart is to avoid running out of memory.
Afterwards I would need to cat the parts using MP4Box to join them all together...
I am entirely new to bash, but not entirely new to programming.
What I think needs to happen is this (a rough sketch of the batching appears after this question):
1) read in the folder's contents, as the images may not be consecutively named
2) send ffmpeg a list of 500 at a time to process (https://trac.ffmpeg.org/wiki/Concatenate)
2b) while looping through this, keep a counter of how many batches have been done
3) use the number of batches to build the MP4Box cat command line that joins them all at the end.
The basic script, which works if there are only, say, 500 images, is:
#!/bin/bash
dy=$(date '+%Y-%m-%d')
ffmpeg -framerate 24 -s hd1080 -pattern_type glob -i "/mnt/cams/Camera1/$dy/*.jpg" -vcodec libx264 -pix_fmt yuv420p Cam1-"$dy".mp4
MP4Box's cat command looks like:
MP4Box -cat Cam1-$dy7.mp4 -cat Cam1-$dy6.mp4 -cat Cam1-$dy5.mp4 -cat Cam1-$dy4.mp4 -cat Cam1-$dy3.mp4 -cat Cam1-$dy2.mp4 -cat Cam1-$dy1.mp4 "Cam1 - $dy1 to $dy7.mp4"
Needless to say, help with my project is immensely appreciated.
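A minimal, untested sketch of the batching approach described above: it splits the day's file list into chunks of 500 and pipes each chunk into ffmpeg via image2pipe (rather than the concat demuxer), then builds the MP4Box -cat command from the batch count. It assumes JPEG inputs whose file names contain no spaces.
#!/bin/bash
dy=$(date '+%Y-%m-%d')
dir="/mnt/cams/Camera1/$dy"
# 1) collect the day's images in name order (they may not be consecutively numbered)
rm -f /tmp/chunk_*
ls "$dir"/*.jpg | sort > /tmp/all_frames.txt
# 2) split the list into chunks of 500 file names
split -l 500 /tmp/all_frames.txt /tmp/chunk_
# 3) encode each chunk into its own part, counting batches as we go
n=0
parts=()
for chunk in /tmp/chunk_*; do
    n=$((n + 1))
    part="Cam1-$dy-part$n.mp4"
    # pipe just this batch of JPEGs into ffmpeg, keeping memory use bounded
    cat $(cat "$chunk") | ffmpeg -y -f image2pipe -vcodec mjpeg -framerate 24 -i - \
        -s hd1080 -vcodec libx264 -pix_fmt yuv420p "$part"
    parts+=(-cat "$part")
done
# 4) join all the parts with MP4Box, using the batch count collected above
MP4Box "${parts[@]}" "Cam1-$dy.mp4"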
-
ffmpeg and GNU Parallel
16 August 2013, by souvik — My work requires me to encode a few thousand movies in a few days. Each movie needs to be encoded in 3 different formats. I use ffmpeg to output these formats in parallel with a single read of the input source, as detailed here: http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs
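For reference, the single-read, multiple-output pattern from that wiki page looks roughly like the sketch below; the output names, codecs and bitrates are illustrative, not the poster's actual settings.
# one decode of sample.mpg feeding three encodes in a single run
ffmpeg -i sample.mpg \
  -c:v libx264 -b:v 500k -c:a aac out_500k.mp4 \
  -c:v libx264 -b:v 1000k -c:a aac out_1000k.mp4 \
  -c:v libx264 -s 640x360 -b:v 250k -c:a aac out_360p.mp4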
In addition, I am using GNU Parallel to encode multiple video files in parallel. We have four blade servers of different configurations (48, 32, 16 and 16 cores) encoding videos in parallel. Ideally, we should be able to encode 112 videos in parallel.
However, it seems that encoding completes faster on machines with fewer cores. I get 16 completed encodes on the 16-core servers in around 4 hours, while it takes close to 10 hours for 48 encodes to complete on the 48-core system. What could be the bottleneck? A typical encode command is as follows:
ffmpeg -i sample.mpg -y -vcodec libx264 -vprofile baseline -level 30 -acodec libfdk_aac -ab 128k -ac 2 -b:v 500K -threads 1 encoded/sample_enc.mp4
Any pointers highly appreciated. Thanks!
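For context, a per-host GNU Parallel invocation along these lines would drive the encode command above. This is a sketch under assumptions not stated in the post (inputs are *.mpg files in the current directory, one job per core); {/.} is parallel's basename-without-extension replacement string.
# run at most one single-threaded ffmpeg encode per core on this host
ls *.mpg | parallel -j "$(nproc)" \
  'ffmpeg -i {} -y -vcodec libx264 -vprofile baseline -level 30 -acodec libfdk_aac -ab 128k -ac 2 -b:v 500K -threads 1 encoded/{/.}_enc.mp4'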