
Media (91)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013
Updated: October 2013
Language: French
Type: Video
-
with chosen
13 September 2013
Updated: September 2013
Language: French
Type: Image
-
without chosen
13 September 2013
Updated: September 2013
Language: French
Type: Image
-
chosen config
13 September 2013
Updated: September 2013
Language: French
Type: Image
-
SPIP - plugins - embed code - Example
2 September 2013
Updated: September 2013
Language: French
Type: Image
-
GetID3 - File information block
9 April 2013
Updated: May 2013
Language: French
Type: Image
Other articles (92)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (4310)
-
ffmpeg running in cloudfunction silently fails/never finishes
19 June 2020, by Vojtěch
I am trying to implement a Cloud Function that runs ffmpeg on a Google bucket upload. I have been playing with a script based on https://kpetrovi.ch/2017/11/02/transcoding-videos-with-ffmpeg-in-google-cloud-functions.html


The original script needs a little tuning, as the library has evolved a bit. My current version is here:



const {Storage} = require('@google-cloud/storage');
const storage = new Storage();
const ffmpeg = require('fluent-ffmpeg');
const ffmpeg_static = require('ffmpeg-static');

console.log("Linking ffmpeg path to:", ffmpeg_static);
ffmpeg.setFfmpegPath(ffmpeg_static);

exports.transcodeVideo = (event, callback) => {
  const bucket = storage.bucket(event.bucket);
  console.log(event);
  if (event.name.indexOf('uploads/') === -1) {
    console.log("File " + event.name + " is not to be processed.");
    return;
  }

  // ensure that you only proceed if the file is newly created
  if (event.metageneration !== '1') {
    callback();
    return;
  }

  // Open write stream to new bucket, modify the filename as needed.
  const targetName = event.name.replace("uploads/", "").replace(/[.][a-z0-9]+$/, "");
  console.log("Target name will be: " + targetName);

  const remoteWriteStream = bucket.file("processed/" + targetName + ".mp4")
    .createWriteStream({
      metadata: {
        //metadata: event.metadata, // You may not need this, my uploads have associated metadata
        contentType: 'video/mp4', // This could be whatever else you are transcoding to
      },
    });

  // Open read stream to our uploaded file
  const remoteReadStream = bucket.file(event.name).createReadStream();

  // Transcode
  ffmpeg()
    .input(remoteReadStream)
    .outputOptions('-c:v copy') // Change these options to whatever suits your needs
    .outputOptions('-c:a aac')
    .outputOptions('-b:a 160k')
    .outputOptions('-f mp4')
    .outputOptions('-preset fast')
    .outputOptions('-movflags frag_keyframe+empty_moov')
    // https://github.com/fluent-ffmpeg/node-fluent-ffmpeg/issues/346#issuecomment-67299526
    .on('start', (cmdLine) => {
      console.log('Started ffmpeg with command:', cmdLine);
    })
    .on('end', () => {
      console.log('Successfully re-encoded video.');
      callback();
    })
    .on('error', (err, stdout, stderr) => {
      console.error('An error occurred during encoding', err.message);
      console.error('stdout:', stdout);
      console.error('stderr:', stderr);
      callback(err);
    })
    .pipe(remoteWriteStream, { end: true }); // end: true, emit end event when readable stream ends
};





This version runs correctly and I can see this in the logs:



2020-06-16 21:24:22.606 Function execution took 912 ms, finished with status: 'ok'
2020-06-16 21:24:52.902 Started ffmpeg with command: ffmpeg -i pipe:0 -c:v copy -c:a aac -b:a 160k -f mp4 -preset fast -movflags frag_keyframe+empty_moov pipe:1




It seems the function execution ends before the actual ffmpeg command, which then never finishes.



Is there a way to make ffmpeg "synchronous" or "blocking" so that it finishes before the function execution ends?
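
(For reference, a minimal sketch of one possible fix, not taken from the thread: background Cloud Functions stay alive as long as a returned Promise is pending, so returning a Promise that resolves only when the upload to the bucket finishes keeps the instance running until ffmpeg is done, instead of relying on the callback argument. The sketch reuses the same requires, storage and ffmpeg objects as the snippet above.)

exports.transcodeVideo = (event) => {
  const bucket = storage.bucket(event.bucket);
  if (event.name.indexOf('uploads/') === -1 || event.metageneration !== '1') {
    return Promise.resolve(); // nothing to do for this object
  }

  const targetName = event.name.replace("uploads/", "").replace(/[.][a-z0-9]+$/, "");
  const remoteWriteStream = bucket.file("processed/" + targetName + ".mp4")
    .createWriteStream({ metadata: { contentType: 'video/mp4' } });
  const remoteReadStream = bucket.file(event.name).createReadStream();

  // Resolve only once the transcoded file has been fully written to the bucket,
  // so the Functions runtime does not tear the instance down mid-transcode.
  return new Promise((resolve, reject) => {
    ffmpeg()
      .input(remoteReadStream)
      .outputOptions(['-c:v copy', '-c:a aac', '-b:a 160k', '-f mp4',
                      '-movflags frag_keyframe+empty_moov'])
      .on('error', reject)
      .pipe(remoteWriteStream, { end: true });

    remoteWriteStream.on('finish', resolve);
    remoteWriteStream.on('error', reject);
  });
};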


-
Subtitle Overlay Isn't Working, how do I fix it? [closed]
27 October 2024, by michael tan
I'm trying to make a program that creates clips with subtitles, but instead of the overlaid subtitles syncing with the clip, they just start from the beginning of the movie.


import subprocess
from moviepy.editor import VideoFileClip

def parse_srt(srt_file):
    """Parse the SRT file and return a list of subtitles with their timestamps."""
    subtitles = []
    with open(srt_file, 'r') as f:
        content = f.read().strip().split('\n\n')
        for entry in content:
            lines = entry.split('\n')
            if len(lines) >= 3:
                index = lines[0]
                timestamps = lines[1]
                text = '\n'.join(lines[2:])
                start, end = timestamps.split(' --> ')
                subtitles.append((start.strip(), end.strip(), text.strip()))
    return subtitles

def print_subtitles_in_range(subtitles, start_time, end_time):
    """Print subtitles that fall within the given start and end times."""
    for start, end, text in subtitles:
        start_seconds = convert_srt_time_to_seconds(start)
        end_seconds = convert_srt_time_to_seconds(end)
        if start_seconds >= start_time and end_seconds <= end_time:
            print(f"{start} --> {end}: {text}")


def convert_srt_time_to_seconds(time_str):
    """Convert SRT time format to total seconds."""
    hours, minutes, seconds = map(float, time_str.replace(',', '.').split(':'))
    return hours * 3600 + minutes * 60 + seconds

def create_captioned_clip(input_file, start_time, end_time, srt_file, output_file):
    # Step 1: Extract the clip from the main video
    clip = VideoFileClip(input_file).subclip(start_time, end_time)
    print("Clip duration:", clip.duration)
    temp_clip_path = "temp_clip.mp4"
    clip.write_videofile(temp_clip_path, codec="libx264")

    # Step 2: Parse the SRT file to get subtitles
    subtitles = parse_srt(srt_file)

    # Step 3: Print subtitles that fall within the start and end times
    print("\nSubtitles for the selected clip:")
    print_subtitles_in_range(subtitles, start_time, end_time)

    # Step 4: Add subtitles using FFmpeg
    ffmpeg_command = [
        "ffmpeg",
        "-ss", str(start_time),  # Seek to the start time of the clip
        "-i", input_file,        # Use the original input file for subtitles
        "-vf", f"subtitles='{srt_file}:force_style=Alignment=10,TimeOffset={start_time}'",  # Overlay subtitles
        "-t", str(end_time - start_time),  # Set duration for the output
        output_file
    ]

    print("Running command:", ' '.join(ffmpeg_command))
    subprocess.run(ffmpeg_command, capture_output=True, text=True)

# Define input video and srt file
input_video = "Soul.2020.720p.BluRay.x264.AAC-[YTS.MX].mp4"
subtitle_file = "Soul.2020.720p.BluRay.x264.AAC-[YTS.MX].srt"

# Define multiple clips with start and end times
clips = [
    {"start": (5 * 60), "end": (5 * 60 + 30), "output": "output_folder/captioned_clip1.mp4"},
    {"start": (7 * 60), "end": (7 * 60 + 25), "output": "output_folder/captioned_clip2.mp4"},
]

# Process each clip
for clip_info in clips:
    create_captioned_clip(input_video, clip_info["start"], clip_info["end"], subtitle_file, clip_info["output"])



I thought the subtitles would sync with the clip automatically; after that didn't work, I tried to sync them manually by setting the start time, duration, and an offset, but it still didn't work. The subtitles still start from 0:00 of the movie. There's nothing wrong with the .srt file; it's formatted correctly.
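
(A sketch of one possible workaround, not from the thread: the subtitles filter always uses the timestamps stored in the SRT file, so the cues have to be shifted back by the clip's start time and then overlaid on the already-extracted clip, which itself starts at 0. The shift_srt helper below is hypothetical and only illustrates the idea; "temp_clip.mp4" and the 5-minute offset come from the question's own code.)

import subprocess

def shift_srt(srt_file, offset_seconds, shifted_file):
    """Write a copy of the SRT with offset_seconds subtracted from every cue,
    dropping cues that end before the clip starts (hypothetical helper)."""
    def to_seconds(t):
        h, m, s = t.replace(',', '.').split(':')
        return int(h) * 3600 + int(m) * 60 + float(s)

    def to_srt(sec):
        h, rem = divmod(sec, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02}:{int(m):02}:{s:06.3f}".replace('.', ',')

    out_entries = []
    with open(srt_file, 'r', encoding='utf-8') as f:
        for entry in f.read().strip().split('\n\n'):
            lines = entry.split('\n')
            if len(lines) < 3:
                continue
            start, end = [t.strip() for t in lines[1].split(' --> ')]
            new_start = to_seconds(start) - offset_seconds
            new_end = to_seconds(end) - offset_seconds
            if new_end <= 0:
                continue
            cue = [str(len(out_entries) + 1),
                   f"{to_srt(max(new_start, 0))} --> {to_srt(new_end)}"] + lines[2:]
            out_entries.append('\n'.join(cue))
    with open(shifted_file, 'w', encoding='utf-8') as f:
        f.write('\n\n'.join(out_entries) + '\n')

# Overlay the shifted subtitles on the already-extracted clip, so both the
# video and the SRT now start at 0 and stay in sync.
shift_srt("movie.srt", 5 * 60, "clip1_shifted.srt")
subprocess.run([
    "ffmpeg", "-i", "temp_clip.mp4",
    "-vf", "subtitles=clip1_shifted.srt",
    "-c:a", "copy",
    "output_folder/captioned_clip1.mp4",
])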


-
How to read video frame by frame in C# .NET Framework 4.8?
27 September 2022, by Brendan Hill
I need to read a video file (.mkv format, h264, 1280x720) frame by frame for video processing.


I am stuck with .NET Framework 4.8 due to dependencies.


Having tried Accord.NET's VideoFileReader, I am unable to read frames due to:


[matroska,webm @ 0000024eb36e5c00] Could not find codec parameters for stream 0 (Video: h264, none(progressive), 1280x720): unspecified pixel format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Assertion desc failed at src/libswscale/swscale_internal.h:674



Unfortunately Accord.NET does not give me much control over the ffmpeg request and the project seems to have died (and .NET Framework has died (and C# has died)).


What are some alternatives that would allow me to read a .mkv file in h264 format frame by frame?


NOTE - FFmpeg.NET does not seem to support getting frame bitmaps; neither does MediaToolkit :(


NOTE - this page has the same problem (last comment 2020) but no resolution :( https://github.com/accord-net/framework/issues/713
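
(For reference, a sketch of the underlying technique most wrappers use, not from the thread: run ffmpeg as a child process, have it decode to raw frames on stdout, and read one fixed-size frame at a time. The example below is Python purely to illustrate the idea; the same pipe-based approach can be driven from .NET Framework 4.8 by launching ffmpeg with System.Diagnostics.Process and reading StandardOutput.BaseStream. "input.mkv" is a placeholder, and the frame size assumes the 1280x720 video mentioned in the question.)

import subprocess

WIDTH, HEIGHT = 1280, 720
FRAME_BYTES = WIDTH * HEIGHT * 3  # bgr24 = 3 bytes per pixel

# Decode the H.264 MKV to raw BGR frames written to stdout.
proc = subprocess.Popen(
    ["ffmpeg", "-i", "input.mkv",
     "-f", "rawvideo", "-pix_fmt", "bgr24", "-"],
    stdout=subprocess.PIPE,
    stderr=subprocess.DEVNULL,
)

frame_count = 0
while True:
    raw = proc.stdout.read(FRAME_BYTES)
    if len(raw) < FRAME_BYTES:
        break  # end of stream
    # `raw` now holds one 1280x720 BGR frame; process it here.
    frame_count += 1

proc.wait()
print("Frames read:", frame_count)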