
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (41)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact your MediaSPIP administrator to find out. -
Other interesting software
13 April 2011. We don't claim to be the only ones doing what we do, and certainly not the best; we simply try to do it well and to keep getting better.
The following list covers software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to emulate.
We haven't used or tested them, but you can take a look.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...) -
Supported formats
28 January 2010. The following commands report which formats and codecs are handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
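For instance (an illustrative invocation, not from the original article; the output layout varies between ffmpeg versions), the list can be filtered for a single codec:
ffmpeg -codecs | grep -i h264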
Supported input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 Part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...)
On other sites (6561)
-
Decode Android's hardware-encoded H264 camera feed using ffmpeg in real time
31 October 2012, by user971871

I'm trying to use the hardware H264 encoder on Android to create video from the camera, and use FFmpeg to mux in audio (all on the Android phone itself).

What I've accomplished so far is packetizing the H264 video into rtsp packets and decoding it using VLC (over UDP), so I know the video is at least correctly formatted. However, I'm having trouble getting the video data to ffmpeg in a format it can understand.

I've tried sending the same rtsp packets to port 5006 on localhost (over UDP), then providing ffmpeg with the sdp file that tells it which local port the video stream is coming in on and how to decode the video, if I understand rtsp streaming correctly. However, this doesn't work, and I'm having trouble diagnosing why, as ffmpeg just sits there waiting for input.

For reasons of latency and scalability I can't just send the video and audio to the server and mux them there; it has to be done on the phone, in as lightweight a manner as possible.

What I guess I'm looking for are suggestions as to how this can be accomplished. The optimal solution would be sending the packetized H264 video to ffmpeg over a pipe, but then I can't send ffmpeg the sdp file parameters it needs to decode the video.

I can provide more information on request, like how ffmpeg is compiled for Android, but I doubt that's necessary.

Oh, and the way I start ffmpeg is through the command line; I would really rather avoid mucking about with JNI if that's at all possible.

Any help would be much appreciated, thanks.
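As a hedged illustration of the sdp route (not part of the original question): strictly speaking, the packets carried over UDP are RTP packets; RTSP is only the control protocol around them. Assuming H264 RTP arriving on local UDP port 5006 as described, with an illustrative payload type of 96, a minimal stream.sdp could look like this:

v=0
o=- 0 0 IN IP4 127.0.0.1
s=Android H264 stream
c=IN IP4 127.0.0.1
t=0 0
m=video 5006 RTP/AVP 96
a=rtpmap:96 H264/90000

ffmpeg would then read from it (newer builds also need the protocols whitelisted):

ffmpeg -protocol_whitelist file,udp,rtp -i stream.sdp -c copy out.mp4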
-
Controlling "Real-Time" sending rate in RTP streaming with FFmpeg
14 December 2020, by Robert_Ordis
I'm trying to build an experimental audio telephony system with ffmpeg that talks to a G.711 VoIP machine.


Then I tried this command:


.\ffmpeg.exe -re -f dshow -i audio="CABLE Output (VB-Audio Virtual Cable)" -ac 1 -ab 64k -ar 8000 -f mulaw -f rtp "rtp://192.168.3.175:4449?fifo_size=240&localrtpport=5100&pkt_size=240"



Capturing in Wireshark, the audio was indeed split into packets of about 30 ms each.


However, 17-18 packets were sent together in a burst once every 500 ms.


The audio itself was correct, but delivered this way, the peer machine can't handle it properly.


How do I get these packets sent at intervals well under 0.5 s?
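A hedged variant worth trying (my suggestion, not from the original post), assuming the bursts come from output buffering rather than from the encoder: -flush_packets 1 and -max_delay 0 are real ffmpeg muxing options that reduce output-side buffering, and the original -f mulaw is overridden by the later -f rtp anyway, so the codec is selected explicitly instead. Whether this smooths out this particular dshow-to-RTP path is untested:

.\ffmpeg.exe -re -f dshow -i audio="CABLE Output (VB-Audio Virtual Cable)" -ac 1 -ar 8000 -acodec pcm_mulaw -flush_packets 1 -max_delay 0 -f rtp "rtp://192.168.3.175:4449?fifo_size=240&localrtpport=5100&pkt_size=240"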


-
How to render two videos with alpha channel in real time in pygame with synched audio?
21 December 2024, by Francesco Calderone
I need to play two videos with synched sound in real time with Pygame.
Pygame does not currently support video streams, so I am using an ffmpeg subprocess.
The first video is prores422_hq. This is a background video with no alpha channel.
The second video is a prores4444 overlay video with an alpha channel, and it needs to be played in real time on top of the first video (with transparency).
All of this needs synched sound from the first (base) video only.


I have tried many libraries, including pymovie, pyav, and opencv. The best result so far is to use a subprocess with ffmpeg.


ffmpeg -i testing/stefano_prores422_hq.mov -stream_loop -1 -i testing/key_prores4444.mov -filter_complex "[1:v]format=rgba,colorchannelmixer=aa=1.0[overlay];[0:v][overlay]overlay" -f nut pipe:1 | ffplay -


When running this in the terminal and playing with ffplay, everything is perfect: the overlay looks good, no frames are dropped, and the sound is in synch.


However, feeding that to pygame via a subprocess produces either video delays and dropped frames, or audio out of synch.


EXAMPLE ONE:


# SOUND IS NOT SYNCHED - sound is played via ffplay
import pygame
import subprocess
import numpy as np
import sys

def main():
    pygame.init()
    screen_width, screen_height = 1920, 1080
    screen = pygame.display.set_mode((screen_width, screen_height))
    pygame.display.set_caption("PyGame + FFmpeg Overlay with Audio")
    clock = pygame.time.Clock()

    # LAUNCH AUDIO-ONLY SUBPROCESS
    audio_cmd = [
        "ffplay",
        "-nodisp",    # no video window
        "-autoexit",  # exit when video ends
        "-loglevel", "quiet",
        "testing/stefano_prores422_hq.mov"
    ]
    audio_process = subprocess.Popen(audio_cmd)

    # LAUNCH VIDEO-OVERLAY SUBPROCESS
    ffmpeg_command = [
        "ffmpeg",
        "-re",                 # read at native frame rate
        "-i", "testing/stefano_prores422_hq.mov",
        "-stream_loop", "-1",  # loop alpha video
        "-i", "testing/key_prores4444.mov",
        "-filter_complex",
        "[1:v]format=rgba,colorchannelmixer=aa=1.0[overlay];"  # ensure alpha channel
        "[0:v][overlay]overlay",  # overlay second input onto first
        "-f", "rawvideo",      # output raw video
        "-pix_fmt", "rgba",    # RGBA format
        "pipe:1"               # write to STDOUT
    ]
    video_process = subprocess.Popen(
        ffmpeg_command,
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL
    )

    frame_size = screen_width * screen_height * 4  # RGBA = 4 bytes/pixel
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
                break

        raw_frame = video_process.stdout.read(frame_size)
        if len(raw_frame) < frame_size:
            running = False
            break

        # Convert raw bytes -> NumPy array -> PyGame surface
        frame_array = np.frombuffer(raw_frame, dtype=np.uint8)
        frame_array = frame_array.reshape((screen_height, screen_width, 4))
        frame_surface = pygame.image.frombuffer(frame_array.tobytes(),
                                                (screen_width, screen_height),
                                                "RGBA")
        screen.blit(frame_surface, (0, 0))
        pygame.display.flip()
        clock.tick(25)

    video_process.terminate()
    video_process.wait()
    audio_process.terminate()
    audio_process.wait()
    pygame.quit()
    sys.exit()

if __name__ == "__main__":
    main()
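One way to attack the audio drift in Example One (a sketch of mine, not a tested fix): let the audio act as the master clock and pace the video against it. The sketch assumes the audio is played through pygame.mixer.music as in Example Two, so that pygame.mixer.music.get_pos() (milliseconds since playback started) is available; fps, frame_size, video_process, and shown are hypothetical loop variables taken from the examples.

import pygame

def target_frame(fps):
    """Frame index the video should be at, per the audio master clock."""
    audio_ms = pygame.mixer.music.get_pos()  # ms since the music started
    return int(audio_ms * fps / 1000)

def read_frame_synced(video_process, frame_size, fps, shown):
    """Read the next raw frame, dropping frames while video lags the audio."""
    raw = video_process.stdout.read(frame_size)
    shown += 1
    # Behind the audio clock: discard frames until we catch up.
    while len(raw) == frame_size and shown < target_frame(fps):
        raw = video_process.stdout.read(frame_size)
        shown += 1
    return raw, shown

When the video runs ahead of the clock instead, waiting with pygame.time.wait() as Example Two does keeps playback at 25 FPS.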




EXAMPLE TWO


# NO VIDEO OVERLAY - SOUND SYNCHED
import ffmpeg
import pygame
import sys
import numpy as np
import tempfile
import os

def extract_audio(input_file, output_file):
    """Extract audio from video file to temporary WAV file"""
    (
        ffmpeg
        .input(input_file)
        .output(output_file, acodec='pcm_s16le', ac=2, ar='44100')
        .overwrite_output()
        .run(capture_stdout=True, capture_stderr=True)
    )

def get_video_fps(input_file):
    probe = ffmpeg.probe(input_file)
    video_info = next(s for s in probe['streams'] if s['codec_type'] == 'video')
    fps_str = video_info.get('r_frame_rate', '25/1')
    num, den = map(int, fps_str.split('/'))
    return num / den

input_file = "testing/stefano_prores422_hq.mov"

# Create temporary WAV file
temp_audio = tempfile.NamedTemporaryFile(suffix='.wav', delete=False)
temp_audio.close()
extract_audio(input_file, temp_audio.name)

probe = ffmpeg.probe(input_file)
video_info = next(s for s in probe['streams'] if s['codec_type'] == 'video')
width = int(video_info['width'])
height = int(video_info['height'])
fps = get_video_fps(input_file)

process = (
    ffmpeg
    .input(input_file)
    .output('pipe:', format='rawvideo', pix_fmt='rgb24')
    .run_async(pipe_stdout=True)
)

pygame.init()
pygame.mixer.init(frequency=44100, size=-16, channels=2, buffer=4096)
clock = pygame.time.Clock()
screen = pygame.display.set_mode((width, height))

pygame.mixer.music.load(temp_audio.name)
pygame.mixer.music.play()

frame_count = 0
start_time = pygame.time.get_ticks()

while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.mixer.music.stop()
            os.unlink(temp_audio.name)
            sys.exit()

    in_bytes = process.stdout.read(width * height * 3)
    if not in_bytes:
        break

    # Calculate timing for synchronization
    expected_frame_time = frame_count * (1000 / fps)
    actual_time = pygame.time.get_ticks() - start_time
    if actual_time < expected_frame_time:
        pygame.time.wait(int(expected_frame_time - actual_time))

    in_frame = (
        np.frombuffer(in_bytes, dtype="uint8")
        .reshape([height, width, 3])
    )
    out_frame = pygame.surfarray.make_surface(np.transpose(in_frame, (1, 0, 2)))
    screen.blit(out_frame, (0, 0))
    pygame.display.flip()

    frame_count += 1

pygame.mixer.music.stop()
process.wait()
pygame.quit()
os.unlink(temp_audio.name)



I also tried using the pygame mixer with a separate mp3 audio file, but that didn't work either. Any help on how to synch the sound while keeping playback of both videos at 25 FPS would be greatly appreciated!
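For completeness, a hedged sketch that merges the two examples: build the overlay inside ffmpeg-python (its input(), filter() and overlay() helpers cover this filter graph) while keeping Example Two's mixer-driven audio and timing loop. The file names are the ones from the question; whether this sustains 25 FPS on a given machine is untested.

import ffmpeg

base_file = "testing/stefano_prores422_hq.mov"
key_file = "testing/key_prores4444.mov"

base = ffmpeg.input(base_file)
key = ffmpeg.input(key_file, stream_loop=-1)  # loop the alpha video, as in Example One

# Reproduce Example One's filter graph: force RGBA on the key, then overlay.
overlay = key.filter('format', 'rgba').filter('colorchannelmixer', aa=1.0)
video = ffmpeg.overlay(base, overlay)

# Raw RGB frames on stdout, consumed by Example Two's render loop;
# audio still comes from the extracted WAV via pygame.mixer.music.
process = (
    video
    .output('pipe:', format='rawvideo', pix_fmt='rgb24')
    .run_async(pipe_stdout=True)
)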