
Other articles (93)
-
Specific configuration for PHP5
4 February 2011, by
PHP5 is required; you can install it by following this specific tutorial.
It is recommended to disable safe_mode at first; however, if it is correctly configured and the necessary binaries are accessible, MediaSPIP should work correctly with safe_mode enabled.
Specific modules
Some specific PHP modules need to be installed, either through your distribution's package manager or manually: php5-mysql for connectivity with the (...)
-
The accepted formats
28 January 2010, by
The following commands give information about the formats and codecs handled by the local ffmpeg installation (a small scripted version of this check is sketched after this article list):
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
As a first step, we (...)
-
Permissions overridden by plugins
27 April 2010, by
Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
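
As an aside, the check described in the formats excerpt above can be scripted. A minimal sketch, assuming ffmpeg is on the PATH and using arbitrary codec/format names as probes:

import subprocess

# List the codecs and formats supported by the local ffmpeg build, as the
# article excerpt suggests, then look for entries of interest.
codecs = subprocess.run(["ffmpeg", "-codecs"], capture_output=True, text=True).stdout
formats = subprocess.run(["ffmpeg", "-formats"], capture_output=True, text=True).stdout
print("h264 codec listed:", "h264" in codecs)
print("flv format listed:", "flv" in formats)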
On other sites (6372)
-
Extremely slow rendering time using Moviepy
15 January 2024, by pacorisas
I'm trying to create the following: two stacked videos (one on top of the other) with subtitles from an SRT file (like those videos you see on TikTok). For this, I first take the top and bottom videos and create a CompositeVideoClip:


clips_array([[video_clip], [random_bottom_clip]])



Then I take this CompositeVideoClip and, using a generator, create the SubtitlesClip, which I then add to another CompositeVideoClip:


sub = SubtitlesClip(os.path.join(temp_directory, f"subtitles.srt"), generator)
final = CompositeVideoClip([myvideo, sub.set_position(('center', 'center'))]).set_duration("00:02:40")
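
The question does not show the generator; with MoviePy's SubtitlesClip it is usually a function that turns each subtitle string into a TextClip. A minimal sketch (font, size and colours are my own placeholders, and TextClip needs ImageMagick available):

from moviepy.editor import TextClip

# Hypothetical generator: SubtitlesClip calls it once for every subtitle line.
generator = lambda txt: TextClip(txt, font="Arial", fontsize=48, color="white",
                                 stroke_color="black", stroke_width=2)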



Lastly, I add some more text clips (just a small title for the video) and render:


video_with_text = CompositeVideoClip([final] + text_clips)
video_with_text.write_videofile(part_path, fps=30, threads=12, codec="h264_nvenc")



Here is the problem: I tried to render a 180-second (3-minute) video and it takes up to an hour and a half (80 minutes), which is wild. I tried some render settings, as you can see, like changing the codec and using all the threads of my CPU.
I tried not to use so many CompositeVideoClips; I read that when you nest them the final render suffers a lot, but I didn't manage to find a way to avoid using that many CompositeVideoClips. Any idea? (A sketch of one possible simplification follows at the end of this post.)


My PC is not that bad: 16 GB of RAM, an AMD Ryzen 5 5600 (6 cores) and an NVIDIA 1650 SUPER.


My goal is to at least bring the render down to under an hour. Right now it is running at about 1.23 s/it.
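
One possible simplification, sketched under assumptions (the file names, the generator from the sketch above and the 160-second duration standing in for "00:02:40" are mine, not the poster's): position the top clip, the bottom clip and the subtitles in a single CompositeVideoClip, so every frame is composited only once before it reaches the encoder.

from moviepy.editor import VideoFileClip, CompositeVideoClip
from moviepy.video.tools.subtitles import SubtitlesClip

top = VideoFileClip("top.mp4")        # hypothetical input files
bottom = VideoFileClip("bottom.mp4")
w, h = top.size

sub = SubtitlesClip("subtitles.srt", generator)   # generator as sketched above

# One CompositeVideoClip instead of several nested ones: the bottom clip is
# placed directly under the top one and the subtitles are centred over both.
final = CompositeVideoClip(
    [
        top.set_position((0, 0)),
        bottom.set_position((0, h)),
        sub.set_position(("center", "center")),
    ],
    size=(w, h + bottom.size[1]),
).set_duration(160)  # 160 s = "00:02:40"

# h264_nvenc needs an ffmpeg build with NVENC support; in MoviePy most of the
# wall-clock time usually goes into compositing frames in Python, so lowering
# fps or the output resolution tends to help more than adding threads.
final.write_videofile("out.mp4", fps=30, codec="h264_nvenc", audio_codec="aac")

Any extra title TextClips can simply be appended to the same list, which keeps everything in one compositing pass.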


-
No option name near , Error parsing a filter description around
8 August 2024, by Aqib Javed
I am trying to attach a captions file (SRT format) to a video.
If I run the same command on macOS or Windows it works with -vf or -filter_complex, but on Android/iOS it does not work and throws the same error (shown below) either way.


I am on Flutter and here is my code:


var tempDir = await getTemporaryDirectory();
var status = await Permission.storage.status;
if (!status.isGranted) {
  await Permission.storage.request();
}

final outputPath = "${tempDir.path}/outputWithCaptions.mp4";

const videoPath = "/storage/emulated/0/Download/english.mp4";
final subtitlePath = deepGramResponse.captionsPath.replaceAll('\'', '\\\'');
String command = "-y -i";
command = "$command '$videoPath'";
command = "$command -vf";
command = '$command "subtitles=\'$subtitlePath\'"';
command = "$command '$outputPath'";
if (File(videoPath).existsSync() && File(subtitlePath).existsSync()) {
  FFmpegKit.executeAsync(command, (session) async {
    final returnCode = await session.getReturnCode();
    final output = await session.getOutput();
    final error = await session.getFailStackTrace();
    final logs = await session
        .getAllLogs()
        .then((value) => value.map((e) => e.getMessage()).toList());
    log('FFmpeg command executed with return code: $returnCode');
    if (ReturnCode.isSuccess(returnCode)) {
      log('Captions attached successfully');
      deepGramResponse.copyWith(
        videoPath: outputPath,
      );
      Get.to(() => VideoPlayerScreen(videoPath: outputPath));
    } else {
      log('FFmpeg command failed');
      log('Error output: $output');
      log('Error details: $error');
      log('Logs: $logs');
      Fluttertoast.showToast(
          msg: 'Something went wrong, please try again later');
    }
  });
} else {
  log('One or more files do not exist');
  Fluttertoast.showToast(msg: 'Subtitle or video file not found');
  return;
}



and here is the error:


[AVFilterGraph @ 0x7b8a3dda90] No option name near '/data/user/0/com.example.blink/app_flutter/captions.srt'
[AVFilterGraph @ 0x7b8a3dda90] Error parsing a filter description around: 
[AVFilterGraph @ 0x7b8a3dda90] Error parsing filterchain 'subtitles='/data/user/0/com.example.blink/app_flutter/captions.srt'' around: 
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:0
Conversion failed!
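
A plausible reading of this error: FFmpegKit splits the command string itself rather than handing it to a shell, so the quotes wrapped around the subtitle path survive into the filtergraph and trip up the subtitles filter parser, which is why the same line behaves differently on macOS/Windows. The usual way around a quoting layer is to pass the arguments as a list (FFmpegKit's list-based executeWithArgumentsAsync does this); the same idea expressed as a plain Python sketch with hypothetical local paths:

import subprocess

# Hypothetical paths standing in for the Android ones in the question.
video_path = "english.mp4"
subtitle_path = "captions.srt"
output_path = "outputWithCaptions.mp4"

# With an argument list there is no shell or command-string splitter to
# re-interpret quotes, so the subtitles filter needs no extra quoting for an
# ordinary path; only characters special to the filtergraph parser (':', ',')
# inside the path would still need ffmpeg-level escaping.
cmd = [
    "ffmpeg", "-y",
    "-i", video_path,
    "-vf", f"subtitles={subtitle_path}",
    output_path,
]
subprocess.run(cmd, check=True)

In the Dart code above, the equivalent would be to build a List of argument strings and drop the inner quotes around the filter value.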



-
ffmpeg, how to concat two streams, one with and one without audio
3 January 2019, by chasep255
I have one clip filmed at 240 FPS. I want to slow it down 8x and concatenate the slow-motion version with the fast version. The fast version has audio but the slow one does not. When I open the finished movie using Totem on Ubuntu I get no sound; however, the sound appears to be correct when I use VLC. I think this is an issue with the sound not being the same length as the final movie, and that I somehow need to pad the sound to the length of the final movie. Does anyone know how to pad the audio, or a better way to do this?
ffmpeg -hwaccel cuda -i GX010071_1.MP4 -filter_complex "[0:v]setpts=8*PTS[s];[0:v]framerate=30[f]; [f] [s] concat=n=2 [c]" -map '[c]' -map 0:a -c:v hevc_nvenc SLOW.MP4
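
One direction, sketched rather than verified: pad the audio with silence via apad and let -shortest cut the output at the end of the (longer) concatenated video; the filtered audio then has to be re-encoded (aac here). The same command wrapped in Python:

import subprocess

# Same input and filters as the command above, plus [0:a]apad[a] to extend the
# audio with silence and -shortest to stop at the end of the video track.
cmd = [
    "ffmpeg", "-hwaccel", "cuda", "-i", "GX010071_1.MP4",
    "-filter_complex",
    "[0:v]setpts=8*PTS[s];[0:v]framerate=30[f];[f][s]concat=n=2[c];[0:a]apad[a]",
    "-map", "[c]", "-map", "[a]",
    "-c:v", "hevc_nvenc", "-c:a", "aac",
    "-shortest", "SLOW.MP4",
]
subprocess.run(cmd, check=True)

If a player still reports the wrong audio length, generating explicit silence with anullsrc for the slow half and concatenating the audio streams as well is another route.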