
Other articles (53)
-
Updating from version 0.1 to 0.2
24 June 2013 — Explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...) -
Customising by adding your logo, banner or background image
5 September 2013 — Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013 — Present changes to your MediaSPIP, or news about your projects, using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the news-item creation form.
News-item creation form: for a document of type "news item", the default fields are: publication date (customise the publication date) (...)
On other sites (10935)
-
Is it possible to attach subtitles when streaming video (HLS, M3U8)?
12 April 2019, by Mikhail Petrov — I am trying to create a stream in which the tracks from the source file (in my case, .mkv converted to .m3u8) are played via m3u8. At the moment I can switch between multiple resolutions and even select the audio track, but the player does not see the subtitles at all.
Video works and audio tracks can be switched, but no subtitles are visible at all.
ffprobe of the source file:
https://paste2.org/czUePDPj
Next, I encode and split into tracks:
ffmpeg -i '/home/mishkapetran/Загрузки/Rick.mkv' \
-map 0:v:0 -c:v libx264 -profile:v baseline -preset:v superfast -strict -2 -s 426x240 -f hls -hls_time 10 -hls_list_size 0 -segment_list rick240p -hls_segment_filename '/home/mishkapetran/Загрузки/test/Rick240p_%d.ts' '/home/mishkapetran/Загрузки/test/Rick240p.m3u8' \
-map 0:a:0 -c:a aac -f hls -hls_time 10 -hls_list_size 0 -segment_list rick_ru -hls_segment_filename '/home/mishkapetran/Загрузки/test/RickTrack_ru_%d.aac' '/home/mishkapetran/Загрузки/test/RickTrack_ru.m3u8' \
-map 0:a:1 -c:a aac -f hls -hls_time 10 -hls_list_size 0 -segment_list rick_en -hls_segment_filename '/home/mishkapetran/Загрузки/test/RickTrack_en_%d.aac' '/home/mishkapetran/Загрузки/test/RickTrack_en.m3u8' \
-map 0:s:0 suben.vtt -f hls -hls_time 10 -hls_list_size 0 -segment_list en -hls_segment_filename '/home/mishkapetran/Загрузки/test/sub_en_%d.vtt' '/home/mishkapetran/Загрузки/test/sub_en.m3u8' \
-map 0:s:1 subru.vtt -f hls -hls_time 10 -hls_list_size 0 -segment_list ru -hls_segment_filename '/home/mishkapetran/Загрузки/test/sub_ru_%d.vtt' '/home/mishkapetran/Загрузки/test/sub_ru.m3u8'
Then, in the same folder, I create the master m3u8:
#EXTM3U
#EXT-X-VERSION:5
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",NAME="Russian",LANGUAGE="ru",AUTOSELECT=YES,URI="RickTrack_ru.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",NAME="English",LANGUAGE="en",AUTOSELECT=NO,URI="RickTrack_en.m3u8"
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="Russian",DEFAULT=YES,FORCED=NO,AUTOSELECT=YES,LANGUAGE="ru",URI="sub_ru.m3u8"
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="English",DEFAULT=NO,FORCED=NO,AUTOSELECT=YES,LANGUAGE="en",URI="sub_en.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=928000,CODECS="avc1.42c00d,mp4a.40.2",RESOLUTION=480x270,AUDIO="audio",SUBTITLES="subs"
Rick240p.m3u8
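A possible direction (not from the original post, and untested here): the subtitle outputs in the command above are probably not doing what was intended — the extra suben.vtt/subru.vtt names become separate outputs, and no subtitle codec is chosen. A commonly suggested workaround is to segment each subtitle track into WebVTT pieces with ffmpeg's segment muxer and let it emit the .m3u8 list, then reference that playlist from the hand-written master's SUBTITLES group. A minimal Python sketch, reusing the question's paths and 10 s segment length; everything else is an assumption, not a verified fix:

# Hedged sketch: produce sub_en.m3u8 plus sub_en_%d.vtt segments for the first
# subtitle track, using the segment muxer instead of -f hls.
import subprocess

src = "/home/mishkapetran/Загрузки/Rick.mkv"
out_dir = "/home/mishkapetran/Загрузки/test"

subprocess.run(
    [
        "ffmpeg", "-i", src,
        "-map", "0:s:0",                 # first subtitle stream
        "-c:s", "webvtt",                # convert the subtitles to WebVTT
        "-f", "segment",
        "-segment_time", "10",
        "-segment_format", "webvtt",
        "-segment_list_type", "m3u8",
        "-segment_list", f"{out_dir}/sub_en.m3u8",
        f"{out_dir}/sub_en_%d.vtt",
    ],
    check=True,
)

The generated sub_en.m3u8 would then be what the master playlist's #EXT-X-MEDIA:TYPE=SUBTITLES entry points to, as already written above.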
-
FFMPEG: Recording a video stream to disk in real time
31 March 2022, by Евгений Федоров — I record video with ffmpeg like this:


// Input: raw frames arrive on ffmpeg's stdin
var inputArgs = string.Format(CultureInfo.InvariantCulture,
    "-framerate {0} -f rawvideo -pix_fmt {3} -video_size {1}x{2} -i -",
    VideoFPS, VideoXRes, VideoYRes, VideoPXLFormat);

// Output: encoded video written to outputPath
var outputArgs = string.Format(CultureInfo.InvariantCulture,
    "-vcodec mpeg4 -crf {2} -pix_fmt yuv420p -preset {3} -shortest -r {1} \"{0}\"",
    outputPath, VideoFPS, VideoCRFValue, CompressionRate);

var ffmpegProcess = new Process
{
    StartInfo =
    {
        FileName = ffmpegPath,
        Arguments = $"{inputArgs} {outputArgs}",
        UseShellExecute = false,
        CreateNoWindow = true,
        RedirectStandardInput = true
    }
};

ffmpegProcess.Start();
var VideoRecordingFFmpegInput = ffmpegProcess.StandardInput.BaseStream;



Now, from time to time, I write the image's data bytes into the stream:


VideoRecordingFFmpegInput.Write(framesByteArray, 0, sizeOfpack);



When I have finished recording the frames:


VideoRecordingFFmpegInput.Flush();
 VideoRecordingFFmpegInput.Close();
 ffmpegProcess.WaitForExit();



Everything works fine, and it creates the file I want.


Everything works fine, and it creates the file I want.

When the process starts, it creates a file with a very small size.

While I am writing image bytes into the video stream, this size does not change (presumably the data is being held in RAM?).

But as soon as the ffmpeg process finishes (flush/close), the file size becomes normal.

The problem is that if I record for several hours, this could cause an OutOfMemory exception.

Is there any way to have the frames written so that the file size grows immediately, without endlessly filling RAM?
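(Not from the original post.) ffmpeg normally muxes to disk as it encodes rather than holding the whole recording in memory, so the unchanging size may simply be the directory listing not refreshing until the handle is closed, and the bytes written to stdin should not pile up in RAM as long as ffmpeg keeps up. A quick way to check is to poll the output file's size while piping frames. The sketch below does that in Python with a synthetic frame source and writes fragmented MP4 (-movflags +frag_keyframe+empty_moov) so the file is also playable if the process dies mid-recording; all names and parameters here are illustrative assumptions, not the question's code.

# Hedged sketch: pipe raw RGB frames to ffmpeg on stdin and watch the output
# file grow on disk while recording is still in progress.
import os
import subprocess
import time

W, H, FPS, SECONDS = 640, 480, 25, 10
OUT = "probe_growth.mp4"   # hypothetical output path

cmd = [
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "rgb24",
    "-video_size", f"{W}x{H}", "-framerate", str(FPS),
    "-i", "-",                                  # frames arrive on stdin
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "-g", str(FPS),
    "-movflags", "+frag_keyframe+empty_moov",   # write MP4 fragments as encoding proceeds
    OUT,
]
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stderr=subprocess.DEVNULL)

frame = bytes(W * H * 3)                        # one black RGB frame
for i in range(FPS * SECONDS):
    proc.stdin.write(frame)
    time.sleep(1 / FPS)                         # crude real-time pacing
    if i % FPS == 0 and os.path.exists(OUT):
        print(f"t={i // FPS:>2}s  size on disk = {os.path.getsize(OUT)} bytes")

proc.stdin.close()                              # equivalent of Flush()/Close() in the post
proc.wait()
print("final size:", os.path.getsize(OUT), "bytes")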


-
How to render two videos with alpha channel in real time in pygame with synced audio?
21 December 2024, by Francesco Calderone — I need to play two videos with synced sound in real time with Pygame.
Pygame does not currently support video streams, so I am using an ffmpeg subprocess.
The first video is a prores422_hq background video with no alpha channel.
The second video is a prores4444 overlay video with an alpha channel, and it needs to be played in real time on top of the first video (with transparency).
All of this needs synced sound, taken from the first (base) video only.


I have tried many libraries, including pymovie, pyav and opencv. The best result so far is to use a subprocess with ffmpeg.


ffmpeg -i testing/stefano_prores422_hq.mov -stream_loop -1 -i testing/key_prores4444.mov -filter_complex "[1:v]format=rgba,colorchannelmixer=aa=1.0[overlay];[0:v][overlay]overlay" -f nut pipe:1 | ffplay -


When running this in the terminal and playing with ffplay, everything is perfect: the overlay looks good, there are no dropped frames, and the sound is in sync.


However, feeding that to pygame via a subprocess gives either video delays and dropped frames, or audio that is out of sync.


EXAMPLE ONE:


# SOUND IS NOT SYNCHED - sound is played via ffplay
import pygame
import subprocess
import numpy as np
import sys

def main():
    pygame.init()
    screen_width, screen_height = 1920, 1080
    screen = pygame.display.set_mode((screen_width, screen_height))
    pygame.display.set_caption("PyGame + FFmpeg Overlay with Audio")
    clock = pygame.time.Clock()

    # LAUNCH AUDIO-ONLY SUBPROCESS
    audio_cmd = [
        "ffplay",
        "-nodisp",       # no video window
        "-autoexit",     # exit when video ends
        "-loglevel", "quiet",
        "testing/stefano_prores422_hq.mov"
    ]
    audio_process = subprocess.Popen(audio_cmd)

    # LAUNCH VIDEO-OVERLAY SUBPROCESS
    ffmpeg_command = [
        "ffmpeg",
        "-re",                    # read at native frame rate
        "-i", "testing/stefano_prores422_hq.mov",
        "-stream_loop", "-1",     # loop alpha video
        "-i", "testing/key_prores4444.mov",
        "-filter_complex",
        "[1:v]format=rgba,colorchannelmixer=aa=1.0[overlay];"  # ensure alpha channel
        "[0:v][overlay]overlay",  # overlay second input onto first
        "-f", "rawvideo",         # output raw video
        "-pix_fmt", "rgba",       # RGBA format
        "pipe:1"                  # write to STDOUT
    ]
    video_process = subprocess.Popen(
        ffmpeg_command,
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL
    )
    frame_size = screen_width * screen_height * 4  # RGBA = 4 bytes/pixel
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
                break

        raw_frame = video_process.stdout.read(frame_size)

        if len(raw_frame) < frame_size:
            running = False
            break
        # Convert raw bytes -> NumPy array -> PyGame surface
        frame_array = np.frombuffer(raw_frame, dtype=np.uint8)
        frame_array = frame_array.reshape((screen_height, screen_width, 4))
        frame_surface = pygame.image.frombuffer(frame_array.tobytes(),
                                                (screen_width, screen_height),
                                                "RGBA")
        screen.blit(frame_surface, (0, 0))
        pygame.display.flip()
        clock.tick(25)
    video_process.terminate()
    video_process.wait()
    audio_process.terminate()
    audio_process.wait()
    pygame.quit()
    sys.exit()

if __name__ == "__main__":
    main()




EXAMPLE TWO


# NO VIDEO OVERLAY - SOUND SYNCHED
import ffmpeg
import pygame
import sys
import numpy as np
import tempfile
import os

def extract_audio(input_file, output_file):
    """Extract audio from video file to temporary WAV file"""
    (
        ffmpeg
        .input(input_file)
        .output(output_file, acodec='pcm_s16le', ac=2, ar='44100')
        .overwrite_output()
        .run(capture_stdout=True, capture_stderr=True)
    )

def get_video_fps(input_file):
    probe = ffmpeg.probe(input_file)
    video_info = next(s for s in probe['streams'] if s['codec_type'] == 'video')
    fps_str = video_info.get('r_frame_rate', '25/1')
    num, den = map(int, fps_str.split('/'))
    return num / den

input_file = "testing/stefano_prores422_hq.mov"

# Create temporary WAV file
temp_audio = tempfile.NamedTemporaryFile(suffix='.wav', delete=False)
temp_audio.close()
extract_audio(input_file, temp_audio.name)

probe = ffmpeg.probe(input_file)
video_info = next(s for s in probe['streams'] if s['codec_type'] == 'video')
width = int(video_info['width'])
height = int(video_info['height'])
fps = get_video_fps(input_file)

process = (
    ffmpeg
    .input(input_file)
    .output('pipe:', format='rawvideo', pix_fmt='rgb24')
    .run_async(pipe_stdout=True)
)

pygame.init()
pygame.mixer.init(frequency=44100, size=-16, channels=2, buffer=4096)
clock = pygame.time.Clock()
screen = pygame.display.set_mode((width, height))

pygame.mixer.music.load(temp_audio.name)
pygame.mixer.music.play()

frame_count = 0
start_time = pygame.time.get_ticks()

while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.mixer.music.stop()
            os.unlink(temp_audio.name)
            sys.exit()

    in_bytes = process.stdout.read(width * height * 3)
    if not in_bytes:
        break

    # Calculate timing for synchronization
    expected_frame_time = frame_count * (1000 / fps)
    actual_time = pygame.time.get_ticks() - start_time

    if actual_time < expected_frame_time:
        pygame.time.wait(int(expected_frame_time - actual_time))

    in_frame = (
        np.frombuffer(in_bytes, dtype="uint8")
        .reshape([height, width, 3])
    )
    out_frame = pygame.surfarray.make_surface(np.transpose(in_frame, (1, 0, 2)))
    screen.blit(out_frame, (0, 0))
    pygame.display.flip()

    frame_count += 1

pygame.mixer.music.stop()
process.wait()
pygame.quit()
os.unlink(temp_audio.name)



I also tried using pygame.mixer and a separate mp3 audio file, but that didn't work either. Any help on how to sync the sound while keeping both videos playing at 25 FPS would be greatly appreciated!
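(Not from the original post; an untested sketch.) One direction would be to combine the two examples above: keep EXAMPLE ONE's overlay pipe for the video, but borrow EXAMPLE TWO's audio handling and frame pacing — pre-extract the base video's audio to WAV, let pygame.mixer play it, drop -re, and pace the blits against elapsed time rather than clock.tick alone. The file paths and the 25 FPS figure come from the question; everything else is an assumption, not a verified fix.

# Hedged sketch: overlay video from ffmpeg's stdout, audio via pygame.mixer,
# video paced against the wall clock started with the audio.
import subprocess
import sys
import numpy as np
import pygame

WIDTH, HEIGHT, FPS = 1920, 1080, 25
BASE = "testing/stefano_prores422_hq.mov"
OVERLAY = "testing/key_prores4444.mov"
AUDIO_WAV = "testing/stefano_audio.wav"   # hypothetical pre-extracted WAV

def main():
    # Extract the base video's audio once, so pygame.mixer can own playback
    # (same idea as EXAMPLE TWO, done here with a plain ffmpeg call).
    subprocess.run(
        ["ffmpeg", "-y", "-i", BASE, "-vn",
         "-acodec", "pcm_s16le", "-ac", "2", "-ar", "44100", AUDIO_WAV],
        check=True,
    )

    pygame.init()
    pygame.mixer.init(frequency=44100, size=-16, channels=2)
    screen = pygame.display.set_mode((WIDTH, HEIGHT))

    # Same filter graph as the working ffplay command, but raw RGBA to stdout
    # and no "-re": the loop below paces the reads instead.
    video_cmd = [
        "ffmpeg",
        "-i", BASE,
        "-stream_loop", "-1", "-i", OVERLAY,
        "-filter_complex",
        "[1:v]format=rgba,colorchannelmixer=aa=1.0[ovr];[0:v][ovr]overlay",
        "-f", "rawvideo", "-pix_fmt", "rgba", "pipe:1",
    ]
    video = subprocess.Popen(video_cmd, stdout=subprocess.PIPE,
                             stderr=subprocess.DEVNULL)

    frame_size = WIDTH * HEIGHT * 4
    frame_count = 0

    pygame.mixer.music.load(AUDIO_WAV)
    pygame.mixer.music.play()
    start = pygame.time.get_ticks()

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        raw = video.stdout.read(frame_size)
        if len(raw) < frame_size:
            break

        # Wait until this frame's presentation time before showing it.
        expected_ms = frame_count * (1000 / FPS)
        elapsed_ms = pygame.time.get_ticks() - start
        if elapsed_ms < expected_ms:
            pygame.time.wait(int(expected_ms - elapsed_ms))

        frame = np.frombuffer(raw, dtype=np.uint8).reshape((HEIGHT, WIDTH, 4))
        surface = pygame.image.frombuffer(frame.tobytes(), (WIDTH, HEIGHT), "RGBA")
        screen.blit(surface, (0, 0))
        pygame.display.flip()
        frame_count += 1

    video.terminate()
    video.wait()
    pygame.mixer.music.stop()
    pygame.quit()
    sys.exit()

if __name__ == "__main__":
    main()

Whether this holds 25 FPS will still depend on how fast ffmpeg can decode and composite the two ProRes streams on the target machine.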