
Media (91)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011, by
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (56)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip install is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
-
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (8980)
-
swresample/resample: do not increase phase_count on exact_rational
17 June 2016, by Muhammad Faiz
A high phase_count is only useful when dst_incr_mod is non-zero; in other words, it is only useful for soft compensation.
On init, the filter is built with a low phase_count, but when soft compensation is enabled, the filter is rebuilt with a high phase_count. This approach saves a lot of memory.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Muhammad Faiz <mfcc64@gmail.com>
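The idea is a lazy rebuild: keep the cheap, low-phase-count filter bank by default and only build the dense one once soft compensation is actually enabled. Below is a minimal Python sketch of that pattern; the real change lives in libswresample's C code, and the class name, method names and phase-count values here are illustrative assumptions, not FFmpeg's actual API.

# Illustrative sketch only: hypothetical names, not libswresample's real structures.
class ExactRationalResampler:
    LOW_PHASE_COUNT = 128      # placeholder values, not FFmpeg's constants
    HIGH_PHASE_COUNT = 1024

    def __init__(self, taps_per_phase=16):
        self.taps_per_phase = taps_per_phase
        # On init, build the polyphase filter bank with a low phase count.
        self.phase_count = self.LOW_PHASE_COUNT
        self.filter_bank = self._build_filter_bank(self.phase_count)

    def _build_filter_bank(self, phase_count):
        # Memory use grows with phase_count: one row of taps per phase.
        return [[0.0] * self.taps_per_phase for _ in range(phase_count)]

    def enable_soft_compensation(self):
        # A high phase count only helps when soft compensation is active
        # (dst_incr_mod non-zero), so rebuild the bank lazily at that point.
        if self.phase_count != self.HIGH_PHASE_COUNT:
            self.phase_count = self.HIGH_PHASE_COUNT
            self.filter_bank = self._build_filter_bank(self.phase_count)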
-
Is there an efficient way to use ffmpeg to create a huge quantity of small video files, cut from a larger one?
9 March 2024, by Giuliano Oliveri
I'm trying to cut video files into smaller chunks (each one being one word said in the video, so they're not all of equal size).


I've tried a lot of different approaches to be as efficient as possible, but I can't get the runtime under two thirds of the original video length. That's an issue because I'm trying to process 400+ hours of video.


Is there a more efficient way to do this? Or am I doomed to run this for weeks?


Here is the command for my best attempt so far:


ffmpeg -hwaccel cuda -hwaccel_output_format cuda -ss start_timestamp -t to_timestamp -i file_name -vf "fps=30,scale_cuda=1280:720" -c:v h264_nvenc -y output_file



Note that the machine running the code has a 4090.
This command is then executed via Python, which gives it the right timestamps and file paths for each smaller clip in a for loop (sketched just below).
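For reference, a minimal sketch of that driver loop, assuming a subprocess-based call and a hypothetical clips list of (start_timestamp, to_timestamp, output_file) tuples prepared elsewhere; the question does not show the actual Python code.

import subprocess

# Hypothetical driver loop mirroring the command above: one ffmpeg process per clip.
# `file_name` and `clips` are placeholders standing in for the question's real data.
file_name = "input.mp4"
clips = [("00:00:01.000", "00:00:00.400", "word_0001.mp4")]  # example values only

for start_timestamp, to_timestamp, output_file in clips:
    subprocess.run(
        [
            "ffmpeg",
            "-hwaccel", "cuda", "-hwaccel_output_format", "cuda",
            "-ss", start_timestamp, "-t", to_timestamp,
            "-i", file_name,
            "-vf", "fps=30,scale_cuda=1280:720",
            "-c:v", "h264_nvenc",
            "-y", output_file,
        ],
        check=True,  # each clip spawns a fresh ffmpeg process, which is where the overhead adds up
    )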


I think it's wasting a lot of time spawning a new process each time; however, I haven't been able to get better results with a split filter. Here's the ffmpeg-python code for that attempt:


Creation of the stream:


import ffmpeg  # ffmpeg-python

# Decode on the GPU, normalise the frame rate and resolution, then split the
# stream so that several trims can be taken from a single decode.
inp = (
    ffmpeg
    .input(file_name, hwaccel="cuda", hwaccel_output_format="cuda")
    .filter("fps", fps=30)
    .filter("scale_cuda", "1280", "720")
    .filter_multi_output("split")
)



Which then gets called in a for loop:


(
    ffmpeg
    # `row` and `output_file` come from the enclosing for loop (not shown in
    # the question), e.g. iterating rows of a table of start/end timestamps.
    .filter(inp, "trim", start=row[1]["start"], end=row[1]["end"])
    .filter("setpts", "PTS-STARTPTS")
    .output(output_file, vcodec="h264_nvenc")
    .run()
)



-
Is there an efficient way to use ffmpeg to perform a large quantity of cuts from a single file?
16 March 2024, by Giuliano Oliveri
I'm trying to cut video files into smaller chunks (each one being one word said in the video, so they're not all of equal size).


I've tried a lot of different approaches to be as efficient as possible, but I can't get the runtime under two thirds of the original video length. That's an issue because I'm trying to process 400+ hours of video.


Is there a more efficient way to do this? Or am I doomed to run this for weeks?


Here is the command for my best attempt so far:


ffmpeg -hwaccel cuda -hwaccel_output_format cuda -ss start_timestamp -t to_timestamp -i file_name -vf "fps=30,scale_cuda=1280:720" -c:v h264_nvenc -y output_file



Note that the machine running the code has a 4090.
This command is then executed via Python, which gives it the right timestamps and file paths for each smaller clip in a for loop.


I think it's wasting a lot of time spawning a new process each time; however, I haven't been able to get better results with a split filter. Here's the ffmpeg-python code for that attempt:


Creation of the stream:


import ffmpeg  # ffmpeg-python

# Decode on the GPU, normalise the frame rate and resolution, then split the
# stream so that several trims can be taken from a single decode.
inp = (
    ffmpeg
    .input(file_name, hwaccel="cuda", hwaccel_output_format="cuda")
    .filter("fps", fps=30)
    .filter("scale_cuda", "1280", "720")
    .filter_multi_output("split")
)



Which then gets called in a for loop:


(
    ffmpeg
    # `row` and `output_file` come from the enclosing for loop (not shown in
    # the question), e.g. iterating rows of a table of start/end timestamps.
    .filter(inp, "trim", start=row[1]["start"], end=row[1]["end"])
    .filter("setpts", "PTS-STARTPTS")
    .output(output_file, vcodec="h264_nvenc")
    .run()
)