
Other articles (33)

  • The plugin: Podcasts.

    14 July 2010, by

    Podcasting is, once again, a problem that highlights the state of standardisation of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly geared towards iTunes, whose spec is here; and the "Media RSS Module" format, which is more "open" and is notably supported by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, the navigation menu gives you access to a "Language management" section where you can enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. In that case, it becomes greyed out in the configuration and (...)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, open a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (4562)

  • ffmpeg, how to concat two streams, one with and one without audio

    3 January 2019, by chasep255

    I have one clip filmed at 240 FPS. I want to slow it down 8x and concatenate the slow-motion version onto the full-speed version. The fast version has audio but the slow one does not. When I open the finished movie using Totem on Ubuntu I get no sound; however, the sound appears to be correct when I use VLC. I think the issue is that the audio is not the same length as the final movie, so I probably need to pad the audio to the length of the final movie. Does anyone know how to pad the audio, or a better way to do this? (A padded-audio variant is sketched after the command below.)

    ffmpeg -hwaccel cuda -i GX010071_1.MP4 -filter_complex "[0:v]setpts=8*PTS[s];[0:v]framerate=30[f]; [f] [s] concat=n=2 [c]" -map '[c]' -map 0:a -c:v hevc_nvenc SLOW.MP4
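
    One possible way to pad the audio, shown here only as an untested sketch derived from the command above (not from the original post): route the source audio through ffmpeg's apad filter so it is extended with silence, and add -shortest so the output stops when the concatenated video ends. The SLOW_PADDED.MP4 output name is illustrative.

    ffmpeg -hwaccel cuda -i GX010071_1.MP4 -filter_complex "[0:v]setpts=8*PTS[s];[0:v]framerate=30[f];[f][s]concat=n=2[c];[0:a]apad[a]" -map '[c]' -map '[a]' -shortest -c:v hevc_nvenc SLOW_PADDED.MP4
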
  • No option name near , Error parsing a filter description around

    8 August 2024, by Aqib Javed

    I am trying to attach a captions file (SRT format) to a video.
    If I run the same command on macOS or Windows it works with either -vf or -filter_complex, but on Android/iOS it does not work and always throws the same error (a quoting-free variant is sketched after the error output below).

    


    I am using Flutter and here is my code:

    


var tempDir = await getTemporaryDirectory();
var status = await Permission.storage.status;
if (!status.isGranted) {
  await Permission.storage.request();
}

final outputPath = "${tempDir.path}/outputWithCaptions.mp4";


const videoPath = "/storage/emulated/0/Download/english.mp4";
final subtitlePath = deepGramResponse.captionsPath.replaceAll('\'', '\\\'');
String command = "-y -i";
command = "$command '$videoPath'";
command = "$command -vf";
command = '$command "subtitles=\'$subtitlePath\'"';
command = "$command '$outputPath'";
if (File(videoPath).existsSync() && File(subtitlePath).existsSync()) {
  FFmpegKit.executeAsync(command, (session) async {
    final returnCode = await session.getReturnCode();
    final output = await session.getOutput();
    final error = await session.getFailStackTrace();
    final logs = await session
        .getAllLogs()
        .then((value) => value.map((e) => e.getMessage()).toList());
    log('FFmpeg command executed with return code: $returnCode');
    if (ReturnCode.isSuccess(returnCode)) {
      log('Captions attached successfully');
      deepGramResponse.copyWith(
        videoPath: outputPath,
      );
      Get.to(() => VideoPlayerScreen(videoPath: outputPath));
    } else {
      log('FFmpeg command failed');
      log('Error output: $output');
      log('Error details: $error');
      log('Logs: $logs');
      Fluttertoast.showToast(
          msg: 'Something went wrong, please try again later');
    }
  });
} else {
  log('One or more files do not exist');
  Fluttertoast.showToast(msg: 'Subtitle or video file not found');
  return;
}


    


    and here is the error:

    


    [AVFilterGraph @ 0x7b8a3dda90] No option name near '/data/user/0/com.example.blink/app_flutter/captions.srt'
    [AVFilterGraph @ 0x7b8a3dda90] Error parsing a filter description around:
    [AVFilterGraph @ 0x7b8a3dda90] Error parsing filterchain 'subtitles='/data/user/0/com.example.blink/app_flutter/captions.srt'' around:
    Error reinitializing filters!
    Failed to inject frame into filter network: Invalid argument
    Error while processing the decoded data for stream #0:0
    Conversion failed!
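
    The log shows the filterchain reaching ffmpeg with the literal single quotes still wrapped around the path, and the parser rejecting it. One thing worth trying, given only as a rough, untested sketch (assuming ffmpeg_kit_flutter's FFmpegKit.executeWithArgumentsAsync and reusing the videoPath, outputPath and captions-path values from the code above), is to pass the arguments as a list so that no shell-style quoting of the filtergraph is needed:

// Untested sketch: build the arguments as a list instead of one quoted string.
// Filtergraph-special characters in the path (such as ':' or ',') would still
// need escaping, but plain '/' and '.' do not.
final arguments = [
  "-y",
  "-i", videoPath,
  "-vf", "subtitles=$subtitlePath",
  outputPath,
];
FFmpegKit.executeWithArgumentsAsync(arguments, (session) async {
  final returnCode = await session.getReturnCode();
  // ... same success / failure handling as in the snippet above
});
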


    


  • Extremely slow rendering time using Moviepy

    15 January 2024, by pacorisas

    I'm trying to create the following: two stacked videos (one on top of the other) with subtitles from an SRT file (like those videos you see on TikTok). To do this, I first take the top and bottom videos and create a CompositeVideoClip:

    


    clips_array([[video_clip], [random_bottom_clip]])


    


    Then I take this CompositeVideoClip and, using a generator, create the SubtitlesClip, which I then add to another CompositeVideoClip:

    


sub = SubtitlesClip(os.path.join(temp_directory, f"subtitles.srt"), generator)
final = CompositeVideoClip([myvideo, sub.set_position(('center', 'center'))]).set_duration("00:02:40")


    


    Lastly, I add some more text clips (just a small title for the video) and render:

    


video_with_text = CompositeVideoClip([final] + text_clips)
video_with_text.write_videofile(part_path, fps=30,threads=12,codec="h264_nvenc")


    


    Here is the problem: rendering a 180-second (3-minute) video takes up to an hour and a half (80 minutes), which is wild. As you can see, I tried some render settings, such as changing the codec and using all the threads of my CPU. I also tried not to use so many CompositeVideoClips, since I read that nesting them makes the final render suffer a lot, but I didn't manage to find a way "not to use" that many CompositeVideoClips. Any idea? (A flattened-composite sketch is included at the end of this question.)

    


    My PC is not that bad: 16 GB of RAM, an AMD Ryzen 5 5600 6-core, and an NVIDIA 1650 SUPER.

    


    My goal is to at least bring the render down to under an hour. Right now it is running at about 1.23 s/it.
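
    On the nesting question, one thing that might help, given only as a rough, untested sketch (reusing the variable names from the snippets above; the 160-second duration comes from the "00:02:40" used earlier): merge the subtitle and title layers into one CompositeVideoClip on top of the stacked pair, instead of composing final and then video_with_text separately, which removes one level of per-frame compositing.

import os
from moviepy.editor import CompositeVideoClip, clips_array
from moviepy.video.tools.subtitles import SubtitlesClip

# Untested sketch: one flat composition instead of nested CompositeVideoClips.
stacked = clips_array([[video_clip], [random_bottom_clip]])
sub = SubtitlesClip(os.path.join(temp_directory, "subtitles.srt"), generator)

final = CompositeVideoClip(
    [stacked, sub.set_position(("center", "center"))] + text_clips
).set_duration(160)

final.write_videofile(part_path, fps=30, threads=12, codec="h264_nvenc")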