
Other articles (111)
-
MediaSPIP automatic installation script
25 April 2011. To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
To use it, you need SSH access to your server and a "root" account, which makes it possible to install the dependencies. Contact your hosting provider if you do not have these.
The documentation on how to use the installation script (...) -
Adding user-specific information and other changes to author-related behaviour
12 April 2011. The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more information).
It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras. -
What exactly does this script do?
18 January 2011. This script is written in bash, so it can easily be used on any server.
It is only compatible with a specific list of distributions (see Liste des distributions compatibles).
Installing MediaSPIP dependencies
Its main role is to install all the software dependencies required on the server side, namely:
the basic tools needed to install the rest of the dependencies; development tools: build-essential (via APT from the official repositories); (...)
On other sites (11091)
-
vulkan: add support for encode and decode queues and refactor queue code
7 November 2021, by Lynne. vulkan: add support for encode and decode queues and refactor queue code
This simplifies queue family picking and makes it more robust.
The requirements on the device context are relaxed. They made no sense
in the first place. The video encode/decode extension is still in beta, at least on paper,
but I really doubt they'd change needing a separate queue family. -
Is there an efficient way to use ffmpeg to perform a large quantity of cuts from a single file?
16 March 2024, by Giuliano Oliveri. I'm trying to cut video files into smaller chunks (each one being one word said in the video, so they're not all of equal size).


I've tried a lot of different approaches to be as efficient as possible, but I can't get the runtime under two thirds of the original video length. That's an issue because I'm trying to process 400+ hours of video.


Is there a more efficient way to do this? Or am I doomed to run this for weeks?


Here is the command for my best attempt so far:


ffmpeg -hwaccel cuda -hwaccel_output_format cuda -ss start_timestamp -t to_timestamp -i file_name -vf "fps=30,scale_cuda=1280:720" -c:v h264_nvenc -y output_file



Note that the machine running the code has a 4090.
This command is then executed via Python, which supplies the right timestamps and file paths for each smaller clip in a for loop.
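
For illustration, here is a minimal sketch (not part of the original post) of how that per-clip loop might look when driven from Python with subprocess; the clips list, its field names, and the output file names are assumptions:

import subprocess

file_name = "input.mp4"  # placeholder for the source video
# Hypothetical cut list: (start, duration, output path). Note that -t expects a duration.
clips = [
    ("00:00:01.0", "0.4", "word_000.mp4"),
    ("00:00:01.4", "0.7", "word_001.mp4"),
]

for start, duration, output_file in clips:
    # One ffmpeg process per clip, mirroring the command above.
    subprocess.run([
        "ffmpeg", "-hwaccel", "cuda", "-hwaccel_output_format", "cuda",
        "-ss", start, "-t", duration,
        "-i", file_name,
        "-vf", "fps=30,scale_cuda=1280:720",
        "-c:v", "h264_nvenc", "-y", output_file,
    ], check=True)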


I think it's wasting a lot of time spawning a new process each time; however, I haven't been able to get better results with a split filter. Here's the ffmpeg-python code for that attempt:


Creation of the stream:


inp = (
 ffmpeg
 .input(file_name, hwaccel="cuda", hwaccel_output_format="cuda")  # decode on the GPU
 .filter("fps", fps=30)
 .filter('scale_cuda', '1280', '720')  # GPU scaling to 1280x720
 .filter_multi_output('split')  # split node exposing multiple output streams
)



This is then used in a for loop:


(
 ffmpeg
 .filter(inp, 'trim', start=row[1]['start'], end=row[1]['end'])  # cut one word out of the stream
 .filter('setpts', 'PTS-STARTPTS')  # rebase timestamps so each clip starts at zero
 .output(output_file, vcodec='h264_nvenc')
 .run()  # still spawns a separate ffmpeg process for every clip
)
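
For comparison, a minimal sketch of how the split node's outputs are typically consumed in ffmpeg-python, selecting each output by index and combining everything into one process with ffmpeg.merge_outputs; the clips list and output names here are assumptions:

import ffmpeg

file_name = "input.mp4"  # placeholder for the source video
clips = [(0.0, 1.2, "word_000.mp4"), (1.2, 2.0, "word_001.mp4")]  # assumed (start, end, path)

source = (
    ffmpeg
    .input(file_name, hwaccel="cuda", hwaccel_output_format="cuda")
    .filter("fps", fps=30)
    .filter("scale_cuda", "1280", "720")
    .filter_multi_output("split", len(clips))  # ask split for one output stream per clip
)

outputs = []
for i, (start, end, path) in enumerate(clips):
    outputs.append(
        source[i]  # pick the i-th output of the split node
        .filter("trim", start=start, end=end)
        .filter("setpts", "PTS-STARTPTS")  # rebase timestamps so each clip starts at zero
        .output(path, vcodec="h264_nvenc")
    )

ffmpeg.merge_outputs(*outputs).run()  # a single ffmpeg invocation for all clips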


