
Media (91)
-
Spoon - Revenge!
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
My Morning Jacket - One Big Holiday
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Zap Mama - Wadidyusay?
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
David Byrne - My Fair Lady
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Beastie Boys - Now Get Busy
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
Other articles (66)
-
MediaSPIP version 0.1 Beta
16 April 2011 - MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in the standalone version.
To get a working installation, you need to manually install all of the software dependencies on the server.
If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)
-
MediaSPIP 0.1 Beta version
25 April 2011 - MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Improvements to the base version
13 September 2013 - Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen), enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
On other sites (6076)
-
FFmpeg get a crop without tying it to a specific time period
30 April 2021, by Дмитрий Варзанов

crop = sub.check_output("ffmpeg -i "+file+" -ss 5 -t 30 -vf cropdetect -f null - 2>&1 | awk '/crop/ { print $NF }' | tail -1", shell=True).decode().strip()



To avoid binding the detection to a specific time interval, one option is to analyze frames across the entire duration of the video file.


That would be a viable solution if the process were fast, but unfortunately it can take hours, depending on the duration of the video.


Another option is to run a command to get the duration of the video, then calculate and set a time window for the crop detection, but that means building a lot of extra scaffolding...


I am looking for a solution that does not require specifying a time window and, at the same time, takes as little time as possible.


Perhaps someone has a solution?
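
One possible direction, sketched here rather than taken from the original post (the file name is a placeholder): tell the decoder to skip everything except keyframes, so cropdetect samples the whole duration while decoding only a small fraction of the frames, and no time window needs to be chosen.

import subprocess as sub

file = "input.mp4"  # placeholder path

# -skip_frame nokey makes the decoder emit only keyframes, so cropdetect
# still sees samples from the whole file without the cost of a full decode.
cmd = ("ffmpeg -skip_frame nokey -i " + file +
       " -an -vf cropdetect -f null - 2>&1"
       " | awk '/crop/ { print $NF }' | tail -1")
crop = sub.check_output(cmd, shell=True).decode().strip()
print(crop)  # e.g. crop=1920:800:0:140

Only keyframes are inspected, which is usually enough for letterbox detection; if they turn out to be too sparse, the time-window approach from the original command remains the fallback.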


-
How to render two videos with alpha channel in real time in pygame with synched audio?
21 December 2024, by Francesco Calderone

I need to play two videos with synched sound in real time with Pygame.
Pygame does not currently support video streams, so I am using a ffmpeg subprocess.
The first video is a prores422_hq. This is a background video with no alpha channel.
The second video is a prores4444 overlay video with an alpha channel, and it needs to be played in real time on top of the first video (with transparency).
All of this needs synched sound from the first base video only.


I have tried many libraries, including pymovie, pyav and opencv. The best result so far is to use a subprocess with ffmpeg.


ffmpeg -i testing/stefano_prores422_hq.mov -stream_loop -1 -i testing/key_prores4444.mov -filter_complex "[1:v]format=rgba,colorchannelmixer=aa=1.0[overlay];[0:v][overlay]overlay" -f nut pipe:1 | ffplay -


When running this in the terminal and playing it with ffplay, everything is perfect: the overlay looks good, there are no dropped frames, and the sound is in synch.


However, feeding that to pygame via a subprocess produces either video delays and dropped frames, or audio that is out of synch.


EXAMPLE ONE:


# SOUND IS NOT SYNCHED - sound is played via ffplay
import pygame
import subprocess
import numpy as np
import sys

def main():
    pygame.init()
    screen_width, screen_height = 1920, 1080
    screen = pygame.display.set_mode((screen_width, screen_height))
    pygame.display.set_caption("PyGame + FFmpeg Overlay with Audio")
    clock = pygame.time.Clock()

    # LAUNCH AUDIO-ONLY SUBPROCESS
    audio_cmd = [
        "ffplay",
        "-nodisp",             # no video window
        "-autoexit",           # exit when video ends
        "-loglevel", "quiet",
        "testing/stefano_prores422_hq.mov"
    ]
    audio_process = subprocess.Popen(audio_cmd)

    # LAUNCH VIDEO-OVERLAY SUBPROCESS
    ffmpeg_command = [
        "ffmpeg",
        "-re",                         # read at native frame rate
        "-i", "testing/stefano_prores422_hq.mov",
        "-stream_loop", "-1",          # loop alpha video
        "-i", "testing/key_prores4444.mov",
        "-filter_complex",
        "[1:v]format=rgba,colorchannelmixer=aa=1.0[overlay];"  # ensure alpha channel
        "[0:v][overlay]overlay",       # overlay second input onto first
        "-f", "rawvideo",              # output raw video
        "-pix_fmt", "rgba",            # RGBA format
        "pipe:1"                       # write to STDOUT
    ]
    video_process = subprocess.Popen(
        ffmpeg_command,
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL
    )
    frame_size = screen_width * screen_height * 4  # RGBA = 4 bytes/pixel
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
                break

        raw_frame = video_process.stdout.read(frame_size)

        if len(raw_frame) < frame_size:
            running = False
            break
        # Convert raw bytes -> NumPy array -> PyGame surface
        frame_array = np.frombuffer(raw_frame, dtype=np.uint8)
        frame_array = frame_array.reshape((screen_height, screen_width, 4))
        frame_surface = pygame.image.frombuffer(frame_array.tobytes(),
                                                (screen_width, screen_height),
                                                "RGBA")
        screen.blit(frame_surface, (0, 0))
        pygame.display.flip()
        clock.tick(25)
    video_process.terminate()
    video_process.wait()
    audio_process.terminate()
    audio_process.wait()
    pygame.quit()
    sys.exit()

if __name__ == "__main__":
    main()




EXAMPLE TWO


# NO VIDEO OVERLAY - SOUND SYNCHED
import ffmpeg
import pygame
import sys
import numpy as np
import tempfile
import os

def extract_audio(input_file, output_file):
    """Extract audio from video file to temporary WAV file"""
    (
        ffmpeg
        .input(input_file)
        .output(output_file, acodec='pcm_s16le', ac=2, ar='44100')
        .overwrite_output()
        .run(capture_stdout=True, capture_stderr=True)
    )

def get_video_fps(input_file):
    probe = ffmpeg.probe(input_file)
    video_info = next(s for s in probe['streams'] if s['codec_type'] == 'video')
    fps_str = video_info.get('r_frame_rate', '25/1')
    num, den = map(int, fps_str.split('/'))
    return num / den

input_file = "testing/stefano_prores422_hq.mov"

# Create temporary WAV file
temp_audio = tempfile.NamedTemporaryFile(suffix='.wav', delete=False)
temp_audio.close()
extract_audio(input_file, temp_audio.name)

probe = ffmpeg.probe(input_file)
video_info = next(s for s in probe['streams'] if s['codec_type'] == 'video')
width = int(video_info['width'])
height = int(video_info['height'])
fps = get_video_fps(input_file)

process = (
    ffmpeg
    .input(input_file)
    .output('pipe:', format='rawvideo', pix_fmt='rgb24')
    .run_async(pipe_stdout=True)
)

pygame.init()
pygame.mixer.init(frequency=44100, size=-16, channels=2, buffer=4096)
clock = pygame.time.Clock()
screen = pygame.display.set_mode((width, height))

pygame.mixer.music.load(temp_audio.name)
pygame.mixer.music.play()

frame_count = 0
start_time = pygame.time.get_ticks()

while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.mixer.music.stop()
            os.unlink(temp_audio.name)
            sys.exit()

    in_bytes = process.stdout.read(width * height * 3)
    if not in_bytes:
        break

    # Calculate timing for synchronization
    expected_frame_time = frame_count * (1000 / fps)
    actual_time = pygame.time.get_ticks() - start_time

    if actual_time < expected_frame_time:
        pygame.time.wait(int(expected_frame_time - actual_time))

    in_frame = (
        np.frombuffer(in_bytes, dtype="uint8")
        .reshape([height, width, 3])
    )
    out_frame = pygame.surfarray.make_surface(np.transpose(in_frame, (1, 0, 2)))
    screen.blit(out_frame, (0, 0))
    pygame.display.flip()

    frame_count += 1

pygame.mixer.music.stop()
process.wait()
pygame.quit()
os.unlink(temp_audio.name)



I also tried using pygame mixer and a separate mp3 audio file, but that didn't work either. Any help on how to synch the sound while keeping both videos playing at 25 FPS would be greatly appreciated!
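
For what it is worth, one untested direction, sketched below by combining the two examples above (the WAV file name and the drop-late-frames logic are assumptions, not a verified fix): keep the ffmpeg overlay pipe from example one, play the base video's audio through pygame.mixer as in example two, and pace each frame against pygame.mixer.music.get_pos() instead of clock.tick(25), skipping frames that are already late.

import pygame
import subprocess
import numpy as np

WIDTH, HEIGHT, FPS = 1920, 1080, 25
FRAME_SIZE = WIDTH * HEIGHT * 4  # RGBA = 4 bytes/pixel

pygame.init()
pygame.mixer.init(frequency=44100, size=-16, channels=2)
screen = pygame.display.set_mode((WIDTH, HEIGHT))

# Same overlay pipe as example one, without -re: the blocking pipe reads
# below throttle ffmpeg, and pacing is done against the audio clock instead.
video = subprocess.Popen(
    ["ffmpeg",
     "-i", "testing/stefano_prores422_hq.mov",
     "-stream_loop", "-1", "-i", "testing/key_prores4444.mov",
     "-filter_complex",
     "[1:v]format=rgba,colorchannelmixer=aa=1.0[ov];[0:v][ov]overlay",
     "-f", "rawvideo", "-pix_fmt", "rgba", "pipe:1"],
    stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

# Audio extracted beforehand from the base video, as in example two.
pygame.mixer.music.load("testing/base_audio.wav")  # hypothetical file
pygame.mixer.music.play()

frame_no = 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    raw = video.stdout.read(FRAME_SIZE)
    if len(raw) < FRAME_SIZE:
        break

    pts_ms = frame_no * 1000 / FPS           # when this frame is due
    audio_ms = pygame.mixer.music.get_pos()  # where the audio clock is now
    if audio_ms <= pts_ms:
        pygame.time.wait(int(pts_ms - audio_ms))  # wait for the audio clock
        surf = pygame.image.frombuffer(raw, (WIDTH, HEIGHT), "RGBA")
        screen.blit(surf, (0, 0))
        pygame.display.flip()
    # else: this frame is already late, skip it to catch up with the audio
    frame_no += 1

video.terminate()
pygame.quit()

The idea is that the audio becomes the master clock: the display loop never free-runs, it either waits for the audio position to reach a frame's presentation time or drops that frame to catch up.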


-
FFMPEG timelapse with alternate time scale
24 June 2022, by Bill V

My home video system takes a snapshot every 15 seconds. At midnight I run a script that uses ffmpeg to create a time-lapse video at 8 frames/second. Playing the resulting time-lapse video takes 12 minutes. When I play it back, the timeline shows the time as 0-12:00. I'd like it to show the time of the snapshots - 0-23:59:45. Can that be done, and if so, how? Below is the ffmpeg command I use. Prior to running ffmpeg, the images are in files ordered 1.jpg to 5760.jpg. Thanks


ffmpeg -r 8 -s 1920x1080 -i %%d.jpg -vcodec mpeg4 -crf 25 -pix_fmt yuv420p temp.mp4
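
One workaround, sketched here and not verified against the poster's setup (the font size, placement and the x120 factor are assumptions based on 8 fps playback and 15-second capture intervals): a player's seek bar will always show the 12-minute playback duration, but the original capture time can be burned into each frame with drawtext. Scaling the timestamps by 120 (15 s of real time per 1/8 s of playback) makes %{pts} show the capture time, and scaling back afterwards keeps the file playing at 8 fps:

ffmpeg -r 8 -s 1920x1080 -i %d.jpg -vf "setpts=PTS*120,drawtext=fontcolor=white:fontsize=48:x=10:y=10:text='%{pts\:hms}',setpts=PTS/120" -vcodec mpeg4 -crf 25 -pix_fmt yuv420p temp.mp4

As in the original command, %d.jpg becomes %%d.jpg when run from a Windows batch file, and a fontfile= option may be needed if the ffmpeg build has no default font via fontconfig; %{pts\:hms} prints the time as HH:MM:SS.mmm.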