Other articles (69)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP. You can of course add yours via the form at the bottom of the page. -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (9411)
-
can't pipe in numpy arrays (images) to ffmpeg subprocess in python
21 June 2023, by Kevin
I'm trying to capture a webcam video stream using OpenCV, pipe the raw frames into an ffmpeg subprocess, apply a 3D .cube LUT, bring the LUT-applied frames back into OpenCV, and display them using cv2.imshow.


This is my code :


import cv2
import subprocess as sp
import numpy as np

lut_cmd = [
    'ffmpeg', '-f', 'rawvideo', '-pixel_format', 'bgr24', '-s', '1280x720',
    '-framerate', '30', '-i', '-', '-an', '-vf',
    'lut3d=file=lut/luts/lut.cube', '-f', 'rawvideo', 'pipe:1'
]

lut_process = sp.Popen(lut_cmd, stdin=sp.PIPE, stdout=sp.PIPE)

width = 1280
height = 720

video_capture = cv2.VideoCapture(0)

while True:
    ret, frame = video_capture.read()

    if not ret:
        break

    # Write raw video frame to input stream of ffmpeg sub-process.
    lut_process.stdin.write(frame.tobytes())
    lut_process.stdin.flush()
    print("flushed")

    # Read the processed frame from the ffmpeg subprocess
    raw_frame = lut_process.stdout.read(width * height * 3)
    print("read")
    frame = np.frombuffer(raw_frame, dtype=np.uint8).reshape(height, width, 3)

    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

lut_process.terminate()
video_capture.release()

cv2.destroyAllWindows()



The code gets stuck at the part that reads from ffmpeg:

raw_frame = lut_process.stdout.read(width * height * 3)

This is what I get when I run the code:


flushed
Input #0, rawvideo, from 'fd:':
 Duration: N/A, start: 0.000000, bitrate: 663552 kb/s
 Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 1280x720, 663552 kb/s, 30 tbr, 30 tbn
Stream mapping:
 Stream #0:0 -> #0:0 (rawvideo (native) -> rawvideo (native))
Output #0, rawvideo, to 'pipe:1':
 Metadata:
 encoder : Lavf60.3.100
 Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24(progressive), 1280x720, q=2-31, 663552 kb/s, 30 fps, 30 tbn
 Metadata:
 encoder : Lavc60.3.100 rawvideo
frame= 0 fps=0.0 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s speed=N/A 


"read" never gets printed. ffmpeg is stuck at 0fps. cv2.imshow doesn't show up.


I tried changing lut_process.stdin.write(frame.tobytes()) to lut_process.stdin.write(frame.tostring()), but the result was the same.

I tried adding a 3 second pause before the first write to ffmpeg, thinking that maybe ffmpeg was not ready to process frames, but the result was the same.


I'm sure that my webcam is working, and I know its video stream is 1280x720 at 30 fps.


I was successful at:

- displaying the webcam stream using OpenCV alone;
- setting ffmpeg's input directly to my webcam, reading the result with stdout.read, and displaying it using OpenCV.


I have no idea what I should try next.


I am using macOS 12.6, openCV 4.7.0, ffmpeg 6.0, python 3.10.11, and visual studio code.


Any help would be greatly appreciated.
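An editorial note, not part of the original question: a commonly suggested explanation for this kind of hang is that a single thread services both pipes. The Python side will not send more frames until the read returns, while ffmpeg may hold back output until it has buffered more input, so the two sides can block each other. The usual workaround is to decouple the two directions with a thread. A minimal sketch of that pattern, using cat as a stand-in for the ffmpeg command (pipe_through is a name made up for illustration, not from the question):

```python
import subprocess as sp
import threading

def pipe_through(cmd, chunks):
    # Run `cmd`, feeding it `chunks` of bytes from a background thread
    # while the calling thread reads stdout. Servicing both pipes from
    # a single thread (as in the question) can deadlock once an OS pipe
    # buffer fills up.
    proc = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE)

    def writer():
        for chunk in chunks:
            proc.stdin.write(chunk)
        proc.stdin.close()  # EOF lets the child flush and exit

    t = threading.Thread(target=writer, daemon=True)
    t.start()

    out = proc.stdout.read()  # in the real pipeline: read fixed-size frames in a loop
    t.join()
    proc.wait()
    return out
```

In the webcam pipeline the writer thread would push frame.tobytes() for each captured frame, and the main loop would keep calling stdout.read(width * height * 3) as before.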


-
Pipe opencv frames into ffmpeg
25 June 2023, by Dmytro Soltusyuk
I am trying to pipe OpenCV frames into ffmpeg, but it does not work.


After some research, I found this answer (https://stackoverflow.com/a/62807083/10676682) to work best for me, so I have the following:


def start_streaming_process(rtmp_url, width, height, fps):
    # fmt: off
    cmd = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           '-flvflags', 'no_duration_filesize',
           rtmp_url]
    # fmt: on

    return subprocess.Popen(cmd, stdin=subprocess.PIPE)


def main():
    width, height, fps = get_video_size(SOURCE_VIDEO_PATH)
    streaming_process = start_streaming_process(
        TARGET_VIDEO_PATH,
        width,
        height,
        fps,
    )

    model = load_yolo(WEIGHTS_PATH)
    frame_iterator = read_frames(video_source=SOURCE_VIDEO_PATH)
    processed_frames_iterator = process_frames(
        model, frame_iterator, ball_target_area=400
    )

    for processed_frame in processed_frames_iterator:
        streaming_process.communicate(processed_frame.tobytes())

    streaming_process.kill()


processed_frame here is an annotated OpenCV frame.

However, after my first streaming_process.communicate call, the ffmpeg process exits with code 0 (as if everything were fine), but it is not: I cannot feed the rest of the frames into ffmpeg, because the process has exited.

Here are the logs:


Input #0, rawvideo, from 'fd:':
 Duration: N/A, start: 0.000000, bitrate: 663552 kb/s
 Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 1280x720, 663552 kb/s, 30 tbr, 30 tbn
Stream mapping:
 Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
[libx264 @ 0x132e05570] using cpu capabilities: ARMv8 NEON
[libx264 @ 0x132e05570] profile High, level 3.1, 4:2:0, 8-bit
[libx264 @ 0x132e05570] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - h
ttp://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme
=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 
fast_pskip=1 chroma_qp_offset=-2 threads=15 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 inter
laced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=
1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbt
ree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'rtmp://global-live.mux.com:5222/app/9428e064-e5d3-0bee-dc67-974ba53ce164':
 Metadata:
 encoder : Lavf60.3.100
 Stream #0:0: Video: h264 ([7][0][0][0] / 0x0007), yuv420p(tv, progressive), 1280x720, q=2-31, 30 fps
, 1k tbn
 Metadata:
 encoder : Lavc60.3.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame= 1 fps=0.0 q=29.0 Lsize= 41kB time=00:00:00.00 bitrate=N/A speed= 0x eed=N/A 
video:40kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.678311%
[libx264 @ 0x132e05570] frame I:1 Avg QP:25.22 size: 40589
[libx264 @ 0x132e05570] mb I I16..4: 37.7% 33.4% 28.9%
[libx264 @ 0x132e05570] 8x8 transform intra:33.4%
[libx264 @ 0x132e05570] coded y,uvDC,uvAC intra: 51.1% 53.2% 14.4%
[libx264 @ 0x132e05570] i16 v,h,dc,p: 32% 38% 20% 10%
[libx264 @ 0x132e05570] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 36% 28% 3% 2% 2% 3% 3% 6%
[libx264 @ 0x132e05570] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 18% 37% 17% 4% 4% 4% 5% 4% 7%
[libx264 @ 0x132e05570] i8c dc,h,v,p: 46% 37% 12% 4%
[libx264 @ 0x132e05570] kb/s:9741.36


That's all. Exit code 0.
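An editorial note on the likely cause, not stated in the question: Popen.communicate is a one-shot call. It writes its argument, closes the child's stdin, and waits for the process to terminate, which matches the observed behavior exactly: ffmpeg sees EOF after the first frame, finalizes the FLV stream, and exits cleanly with code 0. A hedged sketch of the per-frame alternative (stream_frames is a hypothetical helper name, not from the question):

```python
import subprocess

def stream_frames(cmd, frames):
    # communicate() writes once, CLOSES stdin, and waits for exit,
    # so calling it per frame ends the ffmpeg process after frame one.
    # Writing to stdin directly keeps the pipe open between frames.
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    for frame_bytes in frames:
        proc.stdin.write(frame_bytes)  # one raw BGR frame per write
    proc.stdin.close()                 # EOF lets ffmpeg finalize the stream
    return proc.wait()
```

In the code above, the loop body would become streaming_process.stdin.write(processed_frame.tobytes()), with a single close/wait after the loop instead of kill().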


-
How to create a video from svg files using a pipe with ffmpeg ?
5 July 2023, by Max Chr.
I want to create a video from SVG graphics stored in a database. My procedure to achieve that would be the following:


- connect to the database
- create the ffmpeg command, which takes a pipe as input
- spawn the ffmpeg child process
- wait for the output of the process

Then start another thread, and for all SVG files:

- download the SVG from the database into a byte buffer
- write the byte buffer to stdin of the ffmpeg child process
When running my code I encounter a problem when piping the SVG files to ffmpeg. Another possibility would be to download all the SVG files to a temp directory and then run ffmpeg on those, but I want to avoid this.


I used ffmpeg -f image2pipe -c:v svg -i - -c:v libx264 -y Downloads/out.mp4, but ffmpeg gives me the following error:

[image2pipe @ 0x562ebd74c300] Could not find codec parameters for stream 0 (Video: svg (librsvg), none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options


I found out that there is a svg_pipe format in ffmpeg, so I tried that, without success: same error.

Do you have any ideas on how to make this work?
Do you have any ideas how to make this work ?