
Media (91)
-
Head down (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (87)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows media playback on major mobile platforms with the above (...) -
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information from the source video
First, an SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...) -
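The "retrieval of technical information" step described above can be sketched with ffprobe's JSON output (an editor's illustration of the general technique, not SPIPMotion's actual code; the helper name is invented):

```python
import json

def parse_stream_info(ffprobe_json_text):
    """Extract basic technical information from the output of
    `ffprobe -print_format json -show_streams <file>`.
    Illustrative helper only, not SPIPMotion code."""
    data = json.loads(ffprobe_json_text)
    info = {}
    for stream in data.get("streams", []):
        if stream.get("codec_type") == "video" and "video" not in info:
            info["video"] = {
                "codec": stream.get("codec_name"),
                "width": stream.get("width"),
                "height": stream.get("height"),
            }
        elif stream.get("codec_type") == "audio" and "audio" not in info:
            info["audio"] = {
                "codec": stream.get("codec_name"),
                "sample_rate": stream.get("sample_rate"),
            }
    return info
```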
Libraries and binaries specific to video and audio processing
31 January 2010, by
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries FFmpeg: the main encoder; transcodes almost any type of video or audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; MediaInfo: retrieves information from most video and audio formats;
Additional, optional binaries flvtool2: (...)
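As an illustration of the transcoding role FFmpeg plays in this stack (an editor's sketch; the file names and codec choices are invented, not MediaSPIP's actual settings):

```python
def webm_transcode_command(src, dst):
    """Build an ffmpeg command that transcodes a source file to WebM
    (VP8 video + Vorbis audio), one of the web-playable formats
    mentioned above. Hypothetical helper for illustration."""
    return ['ffmpeg', '-y', '-i', src,
            '-c:v', 'libvpx', '-b:v', '1M',   # VP8 video at ~1 Mbit/s
            '-c:a', 'libvorbis',              # Vorbis audio
            dst]

cmd = webm_transcode_command("source.mov", "output.webm")
```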
On other sites (11544)
-
Python opencv subprocess write return broken pipe
16 September 2021, by Vamsi
I want to read an RTSP video source, add overlay text, and push it to an RTMP endpoint. I am using VideoCapture to read the video source and a Python subprocess to write the frames back to the RTMP endpoint. I referred to this: FFmpeg stream video to rtmp from frames OpenCV python


import subprocess

import cv2

rtmp_url = "rtmp://127.0.0.1:1935/live/test"

path = 0
cap = cv2.VideoCapture("rtsp://10.0.1.7/media.sdp")

# gather video info for ffmpeg
fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

command = ['ffmpeg', '-i', '-', '-c', 'copy', '-f', 'flv', rtmp_url]
p = subprocess.Popen(command, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE)

font = cv2.FONT_HERSHEY_SIMPLEX
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        print("frame read failed")
        break

    cv2.putText(frame, 'TEXT ON VIDEO', (50, 50), font, 1, (0, 255, 255), 2, cv2.LINE_4)
    cv2.imshow('video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

    try:
        p.stdin.write(frame.tobytes())
    except Exception as e:
        print(e)

cap.release()
p.stdin.close()
p.stderr.close()
p.wait()



The Python script returns "[Errno 32] Broken pipe". Running the ffmpeg command in the terminal works fine.




ffmpeg -i rtsp://10.0.1.7/media.sdp -c copy -f flv rtmp://127.0.0.1:1935/live/test




The above command works fine, and I can push the input stream to the RTMP endpoint. But I can't write the processed frames to the subprocess in which ffmpeg is running.


Please let me know if I have missed anything.
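One likely culprit (an editor's note, not part of the original question): the frames written to stdin are raw BGR bytes, so ffmpeg must be told to expect rawvideo with the frame geometry, and raw frames cannot be stream-copied with `-c copy` — they have to be encoded. A minimal sketch of such a command, assuming the width/height/fps values gathered above; the helper name is invented:

```python
def build_ffmpeg_command(rtmp_url, width, height, fps):
    """Build an ffmpeg command that reads raw BGR frames from stdin
    and encodes them to FLV for an RTMP endpoint (hypothetical helper)."""
    return [
        'ffmpeg',
        '-f', 'rawvideo',            # stdin carries raw frames, not a container
        '-pix_fmt', 'bgr24',         # OpenCV frames are BGR, 8 bits per channel
        '-s', f'{width}x{height}',   # frame geometry must be declared for rawvideo
        '-r', str(fps),
        '-i', '-',                   # read from stdin
        '-c:v', 'libx264',           # raw input must be encoded; '-c copy' cannot apply
        '-pix_fmt', 'yuv420p',
        '-f', 'flv',
        rtmp_url,
    ]

cmd = build_ffmpeg_command("rtmp://127.0.0.1:1935/live/test", 640, 480, 25)
```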


-
Broken pipe error when using FFmpeg to stream camera video
14 September 2021, by Xiuyi Yang
I am trying to write frames captured from my laptop camera and then stream these images with FFmpeg. This is my code:


import subprocess
import cv2

rstp_url = "rtsp://localhost:31415/stream"

# On my Mac the webcam is 0; you can also set a video file name instead, for example "/home/user/demo.mp4"
path = 0
cap = cv2.VideoCapture(path)

# gather video info for ffmpeg
fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# command and params for ffmpeg
command = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-pix_fmt', 'bgr24',
           '-s', "{}x{}".format(width, height),
           '-r', str(fps),
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           '-preset', 'ultrafast',
           '-f', 'flv',
           rstp_url]

# use subprocess and a pipe to feed frame data
p = subprocess.Popen(command, stdin=subprocess.PIPE)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        print("frame read failed")
        break

    # YOUR CODE FOR PROCESSING FRAME HERE

    # write to pipe
    p.stdin.write(frame.tobytes())



After running this code, the following error occurs:




Input #0, rawvideo, from 'pipe:':
  Duration: N/A, start: 0.000000, bitrate: 221184 kb/s
  Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24, 640x480, 221184 kb/s, 30 tbr, 30 tbn, 30 tbc
rtsp://localhost:31415/stream: Protocol not found
Traceback (most recent call last):
  File "rtsp.py", line 42, in <module>
    p.stdin.write(frame.tobytes())
BrokenPipeError: [Errno 32] Broken pipe




Please help me fix this bug.
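An editorial aside: the "Protocol not found" line points at the output side — the command asks for the `flv` muxer but hands it an `rtsp://` URL, and RTSP output needs the `rtsp` muxer rather than a writable byte protocol. A hedged sketch of the output half of such a command (the exact flag values are an assumption, not the asker's code):

```python
def rtsp_output_args(rtsp_url):
    """Output arguments for publishing to an RTSP server (a sketch,
    not verified against the asker's setup): use the rtsp muxer, not flv."""
    return [
        '-c:v', 'libx264',
        '-pix_fmt', 'yuv420p',
        '-f', 'rtsp',                 # the rtsp muxer handles rtsp:// URLs
        '-rtsp_transport', 'tcp',     # TCP is often more robust than UDP here
        rtsp_url,
    ]

args = rtsp_output_args("rtsp://localhost:31415/stream")
```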



After the reply by Rotem, I modified the code as below:


import subprocess
import cv2
import pdb

rstp_url = "rtsp://localhost:31415/stream"

# On my Mac the webcam is 0; you can also set a video file name instead, for example "/home/user/demo.mp4"
path = 0
cap = cv2.VideoCapture(path)

# gather video info for ffmpeg
fps = int(cap.get(cv2.CAP_PROP_FPS))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# command and params for ffmpeg
command = ['ffmpeg',
           '-re',
           '-f', 'rawvideo',  # raw video as input - more efficient than encoding each frame to PNG
           '-s', f'{width}x{height}',
           '-pixel_format', 'bgr24',
           '-r', f'{fps}',
           '-i', '-',
           '-pix_fmt', 'yuv420p',
           '-c:v', 'libx264',
           '-bufsize', '64M',
           '-maxrate', '4M',
           '-rtsp_transport', 'udp',
           '-f', 'rtsp',
           #'-muxdelay', '0.1',
           rstp_url]

# use subprocess and a pipe to feed frame data
p = subprocess.Popen(command, stdin=subprocess.PIPE)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        print("frame read failed")
        break

    pdb.set_trace()
    # YOUR CODE FOR PROCESSING FRAME HERE

    # write to pipe
    p.stdin.write(frame.tobytes())

    cv2.imshow('current_img', frame)  # show image for testing

    # time.sleep(1/FPS)
    key = cv2.waitKey(int(round(1000/fps)))  # cv2.waitKey must be called after cv2.imshow

    if key == 27:  # press Esc to exit
        break

p.stdin.close()       # close stdin pipe
p.wait()              # wait for the FFmpeg sub-process to finish
cv2.destroyAllWindows()  # close the OpenCV window



The error occurs as below:




Connection to tcp://localhost:31415?timeout=0 failed: Connection refused
Could not write header for output file #0 (incorrect codec parameters?): Connection refused
Error initializing output stream 0:0 --
Conversion failed!
Qt: Session management error: None of the authentication protocols specified are supported.
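"Connection refused" usually means nothing is listening at `rtsp://localhost:31415` at all: when publishing with `-f rtsp`, ffmpeg acts as a client, and a separate RTSP server (mediamtx, formerly rtsp-simple-server, is a common choice) must already be running. A small editor's sketch for checking this before launching the pipeline; the host/port are the question's, the helper name is invented:

```python
import socket

def rtsp_server_reachable(host="localhost", port=31415, timeout=2.0):
    """Return True if something accepts TCP connections on the RTSP port.
    ffmpeg does not host an RTSP server itself when publishing with -f rtsp;
    a separate server process must already be listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```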




-
Split ffmpeg audio and video to different pipe outputs
8 September 2021, by Bryan Horna
I have a conference application using Jitsi Meet, and I use their Jibri tool for streaming to an RTMP source.

But the delay is intolerable as of now, so while looking for alternatives I found that I can use WebRTC.

It turns out that most of the tools I've found (Pion, and others) expect me to send separate video/audio streams.

And Jibri uses an ffmpeg command which outputs both streams joined, so I want to split them.

Here's the command in question:


ffmpeg \
-y -v info \
-f x11grab \
-draw_mouse 0 \
-r 30 \
-s 1920x1080 \
-thread_queue_size 4096 \
-i :0.0+0,0 \
-f alsa \
-thread_queue_size 4096 \
-i plug:bsnoop \
-acodec aac -strict -2 -ar 44100 -b:a 128k \
-af aresample=async=1 \
-c:v libx264 -preset veryfast \
-maxrate 2976k -bufsize 5952k \
-pix_fmt yuv420p -r 30 \
-crf 25 \
-g 60 -tune zerolatency \
-f flv rtmps://redacted.info



I'd want to have it output to at least two outputs (one for video and the other for audio), using pipe:x or rtp://.

It would be better if multiple output codecs were possible (audio Opus/AAC, video H.264/VP8), so that it supports most of WebRTC.

I've read the documentation on the ffmpeg website but still can't get to a command that does this job in just one invocation.

I know I could use -an and -vn with two different ffmpeg commands, but I guess that would consume more CPU.

Thanks in advance for your great help!
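For reference (an editor's sketch, not a verified answer): a single ffmpeg invocation can feed several outputs by repeating `-map` and the per-output options before each destination. Something along these lines, with the capture options passed in as-is and the rtp:// ports invented for illustration:

```python
def split_av_command(video_in_args, audio_in_args):
    """Sketch of one ffmpeg invocation with separate video-only and
    audio-only outputs. The input argument lists and the rtp://
    destinations are placeholders, not Jibri's real configuration."""
    return (
        ['ffmpeg', '-y']
        + video_in_args            # e.g. ['-f', 'x11grab', '-i', ':0.0+0,0']
        + audio_in_args            # e.g. ['-f', 'alsa', '-i', 'plug:bsnoop']
        # output 1: video only, H.264 over RTP
        + ['-map', '0:v', '-c:v', 'libx264', '-an',
           '-f', 'rtp', 'rtp://127.0.0.1:5004']
        # output 2: audio only, Opus over RTP
        + ['-map', '1:a', '-c:a', 'libopus', '-vn',
           '-f', 'rtp', 'rtp://127.0.0.1:5006']
    )

cmd = split_av_command(['-f', 'x11grab', '-i', ':0.0+0,0'],
                       ['-f', 'alsa', '-i', 'plug:bsnoop'])
```

Each `-map` selects one input stream for the output URL that follows it, so the video and audio land on different RTP ports from a single process.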