
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (73)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out. -
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, go to the "Administer" section of the site.
From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
Each newly added language can still be deactivated as long as no object has been created in that language; once one has, it becomes greyed out in the configuration and (...) -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (7134)
-
avcodec/libx265: ignore user set alpha x265-param
26 December 2024, by James Almer -
How to render a blend of two videos with alpha channel in Python in real time? [closed]
21 December 2024, by Francesco Calderone
I need to play two videos in real time in Python.
One video is a background video with no alpha channel. I am using H.264, but it can be any codec.
The second video is an overlay that needs to be played in real time, with its alpha channel, on top of the first. I am using QuickTime 444 with an alpha channel, but again, it can be any codec.


In terms of libraries, I tried combinations of OpenCV (cv2) and NumPy, pymovie, PyAV, ffmpeg... so far, all the results have been unsuccessful. Whenever the videos render, the frame rate drops well below 30 FPS and the resulting stream is glitchy.


I also tried rendering the overlay video without an alpha channel and performing green-screen chroma keying in real time. Needless to say, even worse.
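For reference, the chroma-key variant boils down to thresholding the green channel and copying background pixels wherever the overlay is keyed out. A minimal NumPy sketch of that idea (the threshold values are hypothetical and would need tuning per source):

```python
import numpy as np

def green_screen_mask(frame, g_min=120, r_max=100, b_max=100):
    """Return a boolean mask of 'green screen' pixels in an RGB frame.

    A pixel is keyed out when its green channel is high and its
    red/blue channels are low. Thresholds are illustrative only.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return (g >= g_min) & (r <= r_max) & (b <= b_max)

def chroma_key(base, overlay, mask):
    """Copy base pixels into the overlay wherever the mask keys it out."""
    out = overlay.copy()
    out[mask] = base[mask]
    return out
```

The hard edges this produces (no soft alpha falloff) are one reason the keying route tends to look worse than a true alpha channel.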


What solution can I use?


Here's my attempted code, using ffmpeg-python:


import ffmpeg
import cv2
import numpy as np

def decode_video_stream(video_path, pix_fmt, width, height, fps):
    # Spawn an ffmpeg process that decodes to raw frames on stdout
    process = (
        ffmpeg
        .input(video_path)
        .output('pipe:', format='rawvideo', pix_fmt=pix_fmt, s=f'{width}x{height}', r=fps)
        .run_async(pipe_stdout=True, pipe_stderr=True)
    )
    return process

def read_frame(process, width, height, channels):
    frame_size = width * height * channels
    raw_frame = process.stdout.read(frame_size)
    if len(raw_frame) < frame_size:  # end of stream or truncated frame
        return None
    return np.frombuffer(raw_frame, np.uint8).reshape((height, width, channels))

def play_videos_with_alpha(base_video_path, alpha_video_path, resolution=(1280, 720), fps=30):
    width, height = resolution
    frame_time = int(1000 / fps)  # Frame time in milliseconds

    # Initialize FFmpeg decoding processes
    base_process = decode_video_stream(base_video_path, 'rgb24', width, height, fps)
    alpha_process = decode_video_stream(alpha_video_path, 'rgba', width, height, fps)

    cv2.namedWindow("Blended Video", cv2.WINDOW_NORMAL)

    try:
        while True:
            # Read frames
            base_frame = read_frame(base_process, width, height, channels=3)
            alpha_frame = read_frame(alpha_process, width, height, channels=4)

            # Restart processes if end of video is reached
            if base_frame is None:
                base_process.stdout.close()
                base_process = decode_video_stream(base_video_path, 'rgb24', width, height, fps)
                base_frame = read_frame(base_process, width, height, channels=3)

            if alpha_frame is None:
                alpha_process.stdout.close()
                alpha_process = decode_video_stream(alpha_video_path, 'rgba', width, height, fps)
                alpha_frame = read_frame(alpha_process, width, height, channels=4)

            # Bail out if a stream still yields nothing after a restart
            if base_frame is None or alpha_frame is None:
                break

            # Separate RGB and alpha channels from the overlay video
            rgb_image = cv2.cvtColor(alpha_frame[:, :, :3], cv2.COLOR_RGB2BGR)
            alpha_channel = alpha_frame[:, :, 3] / 255.0  # Normalize alpha to [0, 1]

            # Convert base frame to BGR format for blending
            base_image = cv2.cvtColor(base_frame, cv2.COLOR_RGB2BGR)

            # Blend: out = base * (1 - a) + overlay * a  (straight-alpha "over")
            blended_image = (
                base_image * (1 - alpha_channel[:, :, None])
                + rgb_image * alpha_channel[:, :, None]
            ).astype(np.uint8)

            # Display the result
            cv2.imshow("Blended Video", blended_image)

            if cv2.waitKey(frame_time) & 0xFF == ord('q'):
                break

    except Exception as e:
        print("Error during playback:", e)

    finally:
        # Clean up
        base_process.stdout.close()
        alpha_process.stdout.close()
        cv2.destroyAllWindows()

base_video_path = "test.mp4"   # Background video
alpha_video_path = "test.mov"  # Overlay video with alpha
play_videos_with_alpha(base_video_path, alpha_video_path, resolution=(1280, 720), fps=30)
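One inexpensive change to the blend step above is to stay in integer arithmetic rather than normalising alpha to float, which avoids per-frame float conversions over the whole image. A sketch of this idea, under the same straight-alpha (unpremultiplied) assumption as the code above:

```python
import numpy as np

def blend_uint8(base, overlay_rgb, alpha):
    """Alpha-blend overlay_rgb over base using integer arithmetic only.

    base, overlay_rgb: (H, W, 3) uint8 arrays.
    alpha: (H, W) uint8 array, 0 = transparent, 255 = opaque.
    Straight (unpremultiplied) alpha is assumed.
    """
    a = alpha.astype(np.uint32)[..., None]  # promote once, add channel axis
    b = base.astype(np.uint32)
    o = overlay_rgb.astype(np.uint32)
    # (o*a + b*(255-a)) / 255, with rounding; uint32 avoids overflow
    return ((o * a + b * (255 - a) + 127) // 255).astype(np.uint8)
```

This computes the same "over" operator as the float version to within one count per channel, but the intermediate buffers stay integral.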



This is so far the version that drops the fewest frames. I've been thinking about threading, or using CUDA, but ideally I want something that runs on pretty much any machine. What would be the least computationally heavy approach, without reducing the frame size (1920x1080) and without pre-rendering the blend to a pre-blended file? Is there a way? Maybe I'm going about it all wrong. I feel lost. Please help. Thanks.


-
avformat/hevc: add support for writing alpha layer
7 December 2024, by Timo Rothenpieler