
Media (91)
-
Les Miserables
9 December 2019
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit il mes mails ?
4 February 2014
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013
Updated: October 2013
Language: French
Type: Image
Other articles (96)
-
A selection of projects using MediaSPIP
29 April 2011
The examples cited below are representative of specific uses of MediaSPIP in certain projects.
Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
Ferme MediaSPIP @ Infini
The Infini association develops activities around public reception, internet access points, training, the running of innovative projects in the field of information and communication technologies, and website hosting. It plays a unique role in this area (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it is activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature is immediately operational. No separate configuration step is therefore required. -
Managing creation and editing rights for objects
8 February 2011
By default, many features are restricted to administrators, but each one can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable through the form-template management; adding notes to articles; adding captions and annotations to images;
Sur d’autres sites (6187)
-
Interpreting ffmpeg output in Python
11 June 2020, by Luka Milivojevic
I have started working with FFmpeg and I want to build a list containing the start and end timestamps of silence intervals. I can already print these intervals using FFmpeg, but I need to format that output so it is a bit more readable, which is why I want to collect it into a list and print it with a custom function. I know I should use a regex here, but I am not sure how to write it, nor how to read FFmpeg's console output. My silence-detection function looks like:



import subprocess

def detect_silence_ffmpeg():
    # -af silencedetect reports silence intervals; -f null - discards the decoded output
    command = r"ffmpeg -i audio.wav -af silencedetect=n=-40dB:d=0.5 -f null - "
    subprocess.call(command, shell=True)




And the output of this function on a 7-second sample video is:



ffmpeg version git-2020-06-03-b6d7c4c Copyright (c) 2000-2020 the FFmpeg developers
 built with gcc 9.3.1 (GCC) 20200523
 configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
 libavutil 56. 49.100 / 56. 49.100
 libavcodec 58. 90.100 / 58. 90.100
 libavformat 58. 44.100 / 58. 44.100
 libavdevice 58. 9.103 / 58. 9.103
 libavfilter 7. 84.100 / 7. 84.100
 libswscale 5. 6.101 / 5. 6.101
 libswresample 3. 6.100 / 3. 6.100
 libpostproc 55. 6.100 / 55. 6.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from 'audio.wav':
 Metadata:
 encoder : Lavf58.44.100
 Duration: 00:00:07.34, bitrate: 1411 kb/s
 Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Stream mapping:
 Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, null, to 'pipe:':
 Metadata:
 encoder : Lavf58.44.100
 Stream #0:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
 Metadata:
 encoder : Lavc58.90.100 pcm_s16le
[silencedetect @ 00000202fc71e680] silence_start: 0
[silencedetect @ 00000202fc71e680] silence_end: 1.16374 | silence_duration: 1.16374
[silencedetect @ 00000202fc71e680] silence_start: 1.94558
[silencedetect @ 00000202fc71e680] silence_end: 3.41345 | silence_duration: 1.46787
[silencedetect @ 00000202fc71e680] silence_start: 3.8578
[silencedetect @ 00000202fc71e680] silence_end: 5.84844 | silence_duration: 1.99063
[silencedetect @ 00000202fc71e680] silence_start: 6.43653
size=N/A time=00:00:07.33 bitrate=N/A speed= 308x 
video:0kB audio:1264kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[silencedetect @ 00000202fc71e680] silence_end: 7.33868 | silence_duration: 0.902154




This code will eventually be used on videos around an hour long, so I really need to find a way to format this output better than this. That's about it, any help would be much appreciated :)



P.S.: the idea is that this should mainly work on Windows, but if a cross-platform solution is possible too, that would be great.
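Not part of the original post, but since the question asks for the regex: below is a minimal sketch of how the intervals could be collected into a list, assuming ffmpeg is on the PATH and the log format shown above (the function name detect_silence_intervals and its parameters are illustrative, not from the question). FFmpeg writes its log, including the silencedetect lines, to stderr, so the output has to be captured rather than printed:

import re
import subprocess

def detect_silence_intervals(path="audio.wav", noise="-40dB", duration=0.5):
    """Return a list of (start, end) silence intervals in seconds."""
    command = [
        "ffmpeg", "-i", path,
        "-af", f"silencedetect=n={noise}:d={duration}",
        "-f", "null", "-",
    ]
    # capture stderr, where ffmpeg prints the silencedetect report
    result = subprocess.run(command, capture_output=True, text=True)
    starts = [float(m) for m in re.findall(r"silence_start: (\S+)", result.stderr)]
    ends = [float(m) for m in re.findall(r"silence_end: (\S+)", result.stderr)]
    return list(zip(starts, ends))

for start, end in detect_silence_intervals():
    print(f"silence from {start:.3f}s to {end:.3f}s")

Passing the command as a list avoids shell=True, which also behaves the same way on Windows.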


-
ffmpeg running in cloudfunction silently fails/never finishes
19 June 2020, by Vojtěch
I am trying to implement a Cloud Function that runs ffmpeg on a Google bucket upload. I have been playing with a script based on https://kpetrovi.ch/2017/11/02/transcoding-videos-with-ffmpeg-in-google-cloud-functions.html


The original script needed a little tuning, as the library has evolved a bit. My current version is here:



const {Storage} = require('@google-cloud/storage');
const storage = new Storage();
const ffmpeg = require('fluent-ffmpeg');
const ffmpeg_static = require('ffmpeg-static');

console.log("Linking ffmpeg path to:", ffmpeg_static)
ffmpeg.setFfmpegPath(ffmpeg_static);

exports.transcodeVideo = (event, callback) => {
  const bucket = storage.bucket(event.bucket);
  console.log(event);
  if (event.name.indexOf('uploads/') === -1) {
    console.log("File " + event.name + " is not to be processed.")
    return;
  }

  // ensure that you only proceed if the file is newly created
  if (event.metageneration !== '1') {
    callback();
    return;
  }

  // Open write stream to new bucket, modify the filename as needed.
  const targetName = event.name.replace("uploads/", "").replace(/[.][a-z0-9]+$/, "");
  console.log("Target name will be: " + targetName);

  const remoteWriteStream = bucket.file("processed/" + targetName + ".mp4")
    .createWriteStream({
      metadata: {
        //metadata: event.metadata, // You may not need this, my uploads have associated metadata
        contentType: 'video/mp4', // This could be whatever else you are transcoding to
      },
    });

  // Open read stream to our uploaded file
  const remoteReadStream = bucket.file(event.name).createReadStream();

  // Transcode
  ffmpeg()
    .input(remoteReadStream)
    .outputOptions('-c:v copy') // Change these options to whatever suits your needs
    .outputOptions('-c:a aac')
    .outputOptions('-b:a 160k')
    .outputOptions('-f mp4')
    .outputOptions('-preset fast')
    .outputOptions('-movflags frag_keyframe+empty_moov')
    // https://github.com/fluent-ffmpeg/node-fluent-ffmpeg/issues/346#issuecomment-67299526
    .on('start', (cmdLine) => {
      console.log('Started ffmpeg with command:', cmdLine);
    })
    .on('end', () => {
      console.log('Successfully re-encoded video.');
      callback();
    })
    .on('error', (err, stdout, stderr) => {
      console.error('An error occurred during encoding', err.message);
      console.error('stdout:', stdout);
      console.error('stderr:', stderr);
      callback(err);
    })
    .pipe(remoteWriteStream, { end: true }); // end: true, emit end event when readable stream ends
};





This version runs correctly and I can see this in the logs:



2020-06-16 21:24:22.606 Function execution took 912 ms, finished with status: 'ok'
2020-06-16 21:24:52.902 Started ffmpeg with command: ffmpeg -i pipe:0 -c:v copy -c:a aac -b:a 160k -f mp4 -preset fast -movflags frag_keyframe+empty_moov pipe:1




It seems the function execution ends before the actual ffmpeg command, which then never finishes.



Is there a way to make ffmpeg "synchronous" or "blocking" so that it finishes before the function execution does?
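Not part of the original post, but the usual fix for this pattern: a background-triggered Cloud Function may be torn down as soon as its handler returns or invokes the callback, and here the handler returns while ffmpeg is still streaming. Returning a Promise that settles only once the upload has finished keeps the instance alive. A minimal sketch under those assumptions, in the same Node.js style as the script above (target path and options are simplified, so this is not a drop-in replacement):

const {Storage} = require('@google-cloud/storage');
const ffmpeg = require('fluent-ffmpeg');
const ffmpeg_static = require('ffmpeg-static');

const storage = new Storage();
ffmpeg.setFfmpegPath(ffmpeg_static);

exports.transcodeVideo = (event) => {
  const bucket = storage.bucket(event.bucket);
  const readStream = bucket.file(event.name).createReadStream();
  const writeStream = bucket.file('processed/out.mp4') // simplified target name
    .createWriteStream({metadata: {contentType: 'video/mp4'}});

  // The runtime waits for this Promise, so the function no longer
  // finishes with status 'ok' while ffmpeg is still running.
  return new Promise((resolve, reject) => {
    // resolve on the write stream's 'finish', i.e. after the output
    // object has actually been flushed to the bucket
    writeStream.on('finish', resolve).on('error', reject);

    ffmpeg()
      .input(readStream)
      .outputOptions(['-c:v copy', '-c:a aac', '-b:a 160k',
                      '-f mp4', '-movflags frag_keyframe+empty_moov'])
      .on('error', reject)
      .pipe(writeStream, {end: true});
  });
};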


-
Subtitle Overlay Isn't Working, how do I fix it? [closed]
27 October 2024, by michael tan
I'm trying to write a program that creates clips with subtitles, but instead of the overlaid subtitles syncing with the clip, they just start from the beginning of the movie.


import subprocess
from moviepy.editor import VideoFileClip

def parse_srt(srt_file):
    """Parse the SRT file and return a list of subtitles with their timestamps."""
    subtitles = []
    with open(srt_file, 'r') as f:
        content = f.read().strip().split('\n\n')
        for entry in content:
            lines = entry.split('\n')
            if len(lines) >= 3:
                index = lines[0]
                timestamps = lines[1]
                text = '\n'.join(lines[2:])
                start, end = timestamps.split(' --> ')
                subtitles.append((start.strip(), end.strip(), text.strip()))
    return subtitles

def print_subtitles_in_range(subtitles, start_time, end_time):
    """Print subtitles that fall within the given start and end times."""
    for start, end, text in subtitles:
        start_seconds = convert_srt_time_to_seconds(start)
        end_seconds = convert_srt_time_to_seconds(end)
        if start_seconds >= start_time and end_seconds <= end_time:
            print(f"{start} --> {end}: {text}")

def convert_srt_time_to_seconds(time_str):
    """Convert SRT time format to total seconds."""
    hours, minutes, seconds = map(float, time_str.replace(',', '.').split(':'))
    return hours * 3600 + minutes * 60 + seconds

def create_captioned_clip(input_file, start_time, end_time, srt_file, output_file):
    # Step 1: Extract the clip from the main video
    clip = VideoFileClip(input_file).subclip(start_time, end_time)
    print("Clip duration:", clip.duration)
    temp_clip_path = "temp_clip.mp4"
    clip.write_videofile(temp_clip_path, codec="libx264")

    # Step 2: Parse the SRT file to get subtitles
    subtitles = parse_srt(srt_file)

    # Step 3: Print subtitles that fall within the start and end times
    print("\nSubtitles for the selected clip:")
    print_subtitles_in_range(subtitles, start_time, end_time)

    # Step 4: Add subtitles using FFmpeg
    ffmpeg_command = [
        "ffmpeg",
        "-ss", str(start_time),  # Seek to the start time of the clip
        "-i", input_file,  # Use the original input file for subtitles
        "-vf", f"subtitles='{srt_file}:force_style=Alignment=10,TimeOffset={start_time}'",  # Overlay subtitles
        "-t", str(end_time - start_time),  # Set duration for the output
        output_file
    ]

    print("Running command:", ' '.join(ffmpeg_command))
    subprocess.run(ffmpeg_command, capture_output=True, text=True)

# Define input video and srt file
input_video = "Soul.2020.720p.BluRay.x264.AAC-[YTS.MX].mp4"
subtitle_file = "Soul.2020.720p.BluRay.x264.AAC-[YTS.MX].srt"

# Define multiple clips with start and end times
clips = [
    {"start": (5 * 60), "end": (5 * 60 + 30), "output": "output_folder/captioned_clip1.mp4"},
    {"start": (7 * 60), "end": (7 * 60 + 25), "output": "output_folder/captioned_clip2.mp4"},
]

# Process each clip
for clip_info in clips:
    create_captioned_clip(input_video, clip_info["start"], clip_info["end"], subtitle_file, clip_info["output"])


I thought the subtitles would sync with the clip automatically; after that didn't work, I tried to sync them manually by passing the start time, duration, and an offset, but it still didn't work. The subtitles still start from 0:00 of the movie. There's nothing wrong with the .srt file; it's formatted correctly.
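Not from the original post, but one way to make the overlay line up: the subtitles filter always reads the SRT from its own time zero, and as far as I can tell TimeOffset is not a field that force_style understands (force_style overrides ASS style properties such as Alignment or Fontsize). A sketch of an alternative, reusing parse_srt and convert_srt_time_to_seconds from the question: write a temporary SRT whose cues are rebased to the clip's start and burn it onto the already-extracted temp_clip.mp4. The helpers seconds_to_srt_time and write_shifted_srt are names introduced here for illustration:

import subprocess

def seconds_to_srt_time(total):
    """Format a number of seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(total * 1000))
    hours, ms = divmod(ms, 3600000)
    minutes, ms = divmod(ms, 60000)
    seconds, ms = divmod(ms, 1000)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d},{ms:03d}"

def write_shifted_srt(subtitles, start_time, end_time, out_path="temp_clip.srt"):
    """Keep only cues overlapping [start_time, end_time] and rebase them to 0."""
    with open(out_path, 'w') as f:
        index = 1
        for start, end, text in subtitles:
            s = convert_srt_time_to_seconds(start)  # helper from the question
            e = convert_srt_time_to_seconds(end)
            if e <= start_time or s >= end_time:
                continue  # cue lies entirely outside the clip
            f.write(f"{index}\n"
                    f"{seconds_to_srt_time(max(s - start_time, 0))} --> "
                    f"{seconds_to_srt_time(e - start_time)}\n"
                    f"{text}\n\n")
            index += 1
    return out_path

# Usage: burn the rebased subtitles onto the clip moviepy already cut
# subtitles = parse_srt(subtitle_file)
# srt_path = write_shifted_srt(subtitles, clip_info["start"], clip_info["end"])
# subprocess.run(["ffmpeg", "-y", "-i", "temp_clip.mp4",
#                 "-vf", f"subtitles={srt_path}", clip_info["output"]])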