
Media (29)

- #7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio

- #6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio

- #5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio

- #3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio

- #4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio

- #2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (77)
- General document management
13 May 2011
MediaSPIP never modifies the original document that is uploaded.
For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original downloadable in case it cannot be read in a web browser; and retrieving the metadata of the original document in order to describe the file textually (a rough sketch of this two-step idea follows after the list of articles below).
The tables below explain what MediaSPIP can do (...)
- Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page.
- HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
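As a rough illustration of the two-step document handling described in "General document management" above, the sketch below shows the same idea with ffmpeg and ffprobe. It is only an assumption-laden sketch, not MediaSPIP's actual code: the file paths, encoder settings and output name are hypothetical.

import json
import subprocess

original = "upload/original.avi"  # hypothetical uploaded file; the original is never modified

# Step 1: create an additional, browser-friendly version alongside the original.
subprocess.run(
    ["ffmpeg", "-i", original, "-c:v", "libx264", "-c:a", "aac", "upload/original_web.mp4"],
    check=True,
)

# Step 2: read the original file's metadata so the document can be described textually.
probe = subprocess.run(
    ["ffprobe", "-v", "error", "-print_format", "json", "-show_format", original],
    capture_output=True, text=True, check=True,
)
print(json.loads(probe.stdout).get("format", {}).get("tags", {}))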
On other sites (12085)
- How to convert H264 RTP stream from PCAP to a playable video file
21 August 2014, by yoosha
I have captured a stream of H264 in PCAP files and am trying to create media files from the data. The container is not important (avi, mp4, mkv, …).
When I use videosnarf or rtpbreak (combined with Python code that adds 00 00 00 01 before each packet) and then ffmpeg, the result is OK only if the input frame rate is constant (or near constant). However, when the input is VFR, the result plays too fast (and in some rare cases too slow).
For example:
videosnarf -i captured.pcap -c
ffmpeg -i H264-media-1.264 output.avi
After doing some investigation of the issue, I now believe that since videosnarf (and rtpbreak) remove the RTP header from the packets, the timestamp is lost and ffmpeg treats the input data as CBR.
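For reference, the timing information is still present in the capture: each RTP header carries a 90 kHz timestamp that videosnarf/rtpbreak discard together with the header. Below is a minimal dpkt sketch (an illustration only, not a full solution) that prints those timestamps, assuming plain RTP over UDP/IPv4 with no VLAN tags, i.e. the same 42-byte header offset used in the script further below.

import sys
import dpkt

# Print the RTP sequence number, 90 kHz timestamp and payload type of every packet,
# assuming plain RTP over UDP/IPv4 with no VLAN tags (42 bytes of Ethernet/IP/UDP headers).
with open(sys.argv[1], "rb") as pcap_file:
    for capture_ts, data in dpkt.pcap.Reader(pcap_file):
        if len(data) < 54:  # too short for Ethernet + IPv4 + UDP + a 12-byte RTP header
            continue
        rtp = dpkt.rtp.RTP(data[42:])
        print("seq=%d ts=%d pt=%d" % (rtp.seq, rtp.ts, rtp.pt))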
- I would like to know if there is a way to pass (in a separate file?) the timestamp vector or any other information to ffmpeg so that the result will be created correctly.
- Is there any other way I can take the data out of the PCAP file and play it, or convert it and then play it?
- Since all the work is done in Python, any suggestion of libraries/modules that can help with the work (even if it requires some coding) is welcome as well.
Note: All work is done offline, with no limitations on the output. It can be CBR/VBR, any playable container, and transcoding is fine. The only "limitation" I have: it should all run on Linux…
Thanks,
Y

Some additional information:
Since nothing provides FFmpeg with the timestamp data, I decided to try a different approach: skip videosnarf and use Python code to pipe the packets directly to ffmpeg (using the "-f -i -" options), but then it refuses to accept the input unless I provide an SDP file...
How do I provide the SDP file? Is it an additional input file? ("-i config.sdp")
The following code is an unsuccessful attempt at doing the above:
import time
import sys
import shutil
import subprocess
import os
import dpkt

if len(sys.argv) < 2:
    print "argument required!"
    print "txpcap <pcap file>"
    sys.exit(2)
pcap_full_path = sys.argv[1]

# ffmpeg reads the stream description from the SDP file and the RTP packets from stdin
ffmp_cmd = ['ffmpeg', '-loglevel', 'debug', '-y', '-i', '109c.sdp', '-f', 'rtp', '-i', '-', '-na', '-vcodec', 'copy', 'p.mp4']
ffmpeg_proc = subprocess.Popen(ffmp_cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE)

with open(pcap_full_path, "rb") as pcap_file:
    pcapReader = dpkt.pcap.Reader(pcap_file)
    for ts, data in pcapReader:
        if len(data) < 49:
            continue
        # strip the 42 bytes of Ethernet/IP/UDP headers and pipe the RTP packet to ffmpeg
        ffmpeg_proc.stdin.write(data[42:])

sout, err = ffmpeg_proc.communicate()
print "stdout ---------------------------------------"
print sout
print "stderr ---------------------------------------"
print err
In general this will pipe the packets from the PCAP file to the following command:
ffmpeg -loglevel debug -y -i 109c.sdp -f rtp -i - -na -vcodec copy p.mp4
SDP file: [RTP includes dynamic payload type #109, H264]
v=0
o=- 0 0 IN IP4 ::1
s=No Name
c=IN IP4 ::1
t=0 0
a=tool:libavformat 53.32.100
m=video 0 RTP/AVP 109
a=rtpmap:109 H264/90000
a=fmtp:109 packetization-mode=1;profile-level-id=64000c;sprop-parameter-sets=Z2QADKwkpAeCP6wEQAAAAwBAAAAFI8UKkg==,aMvMsiw=;
b=AS:200

Results:
ffmpeg version 0.10.2 Copyright (c) 2000-2012 the FFmpeg developers
  built on Mar 20 2012 04:34:50 with gcc 4.4.6 20110731 (Red Hat 4.4.6-3)
  configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --enable-shared --enable-runtime-cpudetect --enable-gpl --enable-version3 --enable-postproc --enable-avfilter --enable-pthreads --enable-x11grab --enable-vdpau --disable-avisynth --enable-frei0r --enable-libopencv --enable-libdc1394 --enable-libdirac --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --disable-stripping
  libavutil      51. 35.100 / 51. 35.100
  libavcodec     53. 61.100 / 53. 61.100
  libavformat    53. 32.100 / 53. 32.100
  libavdevice    53.  4.100 / 53.  4.100
  libavfilter     2. 61.100 /  2. 61.100
  libswscale      2.  1.100 /  2.  1.100
  libswresample   0.  6.100 /  0.  6.100
  libpostproc    52.  0.100 / 52.  0.100
[sdp @ 0x15c0c00] Format sdp probed with size=2048 and score=50
[sdp @ 0x15c0c00] video codec set to: h264
[NULL @ 0x15c7240] RTP Packetization Mode: 1
[NULL @ 0x15c7240] RTP Profile IDC: 64 Profile IOP: 0 Level: c
[NULL @ 0x15c7240] Extradata set to 0x15c78e0 (size: 36)!
err_recognition separate: 1; 1
[h264 @ 0x15c7240] err_recognition combined: 1; 10001
[sdp @ 0x15c0c00] decoding for stream 0 failed
[sdp @ 0x15c0c00] Could not find codec parameters (Video: h264)
[sdp @ 0x15c0c00] Estimating duration from bitrate, this may be inaccurate
109c.sdp: could not find codec parameters
Traceback (most recent call last):
  File "./ffpipe.py", line 26, in <module>
    ffmpeg_proc.stdin.write(data[42:])
IOError: [Errno 32] Broken pipe

(forgive the mess above, the editor keeps complaining about code that is not indented properly)
I've been working on this issue for days... any help/suggestion/hint will be appreciated.
- RaspberryPi HLS streaming with nginx and ffmpeg; v4l2 error: ioctl(VIDIOC_STREAMON): Protocol error
22 January 2021, by Mirco Weber
I'm trying to build a baby monitor with a Raspberry Pi (Model 4B, 4 GB RAM) and an ordinary webcam (with an integrated mic).
I followed this tutorial: https://github.com/DeTeam/webcam-stream/blob/master/Tutorial.md


Shortly described:

- I installed and configured an nginx server with the rtmp module enabled.
- I installed ffmpeg with this configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi
- I tried to stream ;)

The configuration of nginx seems to be working (sometimes streaming works, the server starts without any complication and when the server is up and running, the webpage is displayed).
The configuration of ffmpeg seems to be fine as well, since streaming sometimes works...


I tried a couple of different ffmpeg commands; all of them sometimes work and sometimes result in an error.
The command looks like the following:


ffmpeg -re
-f v4l2
-i /dev/video0
-f alsa
-ac 1
-thread_queue_size 4096
-i hw:CARD=Camera,DEV=0
-profile:v high
-level:v 4.1
-vcodec h264_omx
-r 10
-b:v 512k
-s 640x360
-acodec aac
-strict
-2
-ac 2
-ab 32k
-ar 44100
-f flv
rtmp://localhost/show/stream;



Note: I rearranged the command to make it easier to read. In the terminal, it is all on one line.
Note: There is no difference when using -f video4linux2 instead of -f v4l2.


The camera is recognized by the system:


pi@raspberrypi:~ $ v4l2-ctl --list-devices
bcm2835-codec-decode (platform:bcm2835-codec):
 /dev/video10
 /dev/video11
 /dev/video12

bcm2835-isp (platform:bcm2835-isp):
 /dev/video13
 /dev/video14
 /dev/video15
 /dev/video16

HD Web Camera: HD Web Camera (usb-0000:01:00.0-1.2):
 /dev/video0
 /dev/video1



When only using -i /dev/video0, audio transmission never worked.
The output of arecord -L was:

pi@raspberrypi:~ $ arecord -L
default
 Playback/recording through the PulseAudio sound server
null
 Discard all samples (playback) or generate zero samples (capture)
jack
 JACK Audio Connection Kit
pulse
 PulseAudio Sound Server
usbstream:CARD=Headphones
 bcm2835 Headphones
 USB Stream Output
sysdefault:CARD=Camera
 HD Web Camera, USB Audio
 Default Audio Device
front:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Front speakers
surround21:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 2.1 Surround output to Front and Subwoofer speakers
surround40:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 4.0 Surround output to Front and Rear speakers
surround41:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 4.1 Surround output to Front, Rear and Subwoofer speakers
surround50:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 5.0 Surround output to Front, Center and Rear speakers
surround51:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 5.1 Surround output to Front, Center, Rear and Subwoofer speakers
surround71:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
iec958:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 IEC958 (S/PDIF) Digital Audio Output
dmix:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Direct sample mixing device
dsnoop:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Direct sample snooping device
hw:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Direct hardware device without any conversions
plughw:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Hardware device with all software conversions
usbstream:CARD=Camera
 HD Web Camera
 USB Stream Output



That's why I added -i hw:CARD=Camera,DEV=0.

As mentioned above, it worked very well a couple of times with this configuration and these commands.
But very often, I get the following error message when starting to stream:


pi@raspberrypi:~ $ ffmpeg -re -f video4linux2 -i /dev/video0 -f alsa -ac 1 -thread_queue_size 4096 -i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx -r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 -f flv rtmp://localhost/show/stream
ffmpeg version N-100673-g553eb07737 Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 8 (Raspbian 8.3.0-6+rpi1)
 configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi --extra-ldflags=-latomic
 libavutil 56. 63.101 / 56. 63.101
 libavcodec 58.117.101 / 58.117.101
 libavformat 58. 65.101 / 58. 65.101
 libavdevice 58. 11.103 / 58. 11.103
 libavfilter 7. 96.100 / 7. 96.100
 libswscale 5. 8.100 / 5. 8.100
 libswresample 3. 8.100 / 3. 8.100
 libpostproc 55. 8.100 / 55. 8.100
[video4linux2,v4l2 @ 0x2ea4600] ioctl(VIDIOC_STREAMON): Protocol error
/dev/video0: Protocol error



And when I switch to /dev/video1 (since this was also listed in the v4l2-ctl --list-devices output), I get the following error message:

pi@raspberrypi:~ $ ffmpeg -re -f v4l2 -i /dev/video1 -f alsa -ac 1 -thread_queue_size 4096 -i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx -r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 -f flv rtmp://localhost/show/stream
ffmpeg version N-100673-g553eb07737 Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 8 (Raspbian 8.3.0-6+rpi1)
 configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi --extra-ldflags=-latomic
 libavutil 56. 63.101 / 56. 63.101
 libavcodec 58.117.101 / 58.117.101
 libavformat 58. 65.101 / 58. 65.101
 libavdevice 58. 11.103 / 58. 11.103
 libavfilter 7. 96.100 / 7. 96.100
 libswscale 5. 8.100 / 5. 8.100
 libswresample 3. 8.100 / 3. 8.100
 libpostproc 55. 8.100 / 55. 8.100
[video4linux2,v4l2 @ 0x1aa4610] ioctl(VIDIOC_G_INPUT): Inappropriate ioctl for device
/dev/video1: Inappropriate ioctl for device



When using the video0 input, the webcam's LED that indicates access is constantly on. When using video1, it is not.

After hours and days of googling and tears and whiskey, for the sake of my liver, my marriage and my physical and mental health, I'm very sincerely asking for your help...
What the f**k is happening, and what can I do to make it work???


Thanks everybody :)


UPDATE 1:


- Using the full path to ffmpeg does not change anything...
- /dev/video0 and /dev/video1 have access rights for everybody; sudo ffmpeg ... does not change anything either.
- The problem seems to be at an "early stage": stripping the command down to ffmpeg -i /dev/video0 results in the same problem.

UPDATE 2:

It seems that everything works when I first start another application that needs access to the webcam and then ffmpeg...
It might be some driver issue, but when I look for loaded modules with lsmod, there is absolutely no change before and after I start the application...
Any help is still appreciated...

UPDATE 3:

I was checking the output of dmesg.

When I started the first application, I received this message:

uvcvideo: Failed to query (GET_DEF) UVC control 12 on unit 2: -32 (exp. 4).


And when I started ffmpeg, nothing happened, but everything worked...
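Based on the observation in UPDATE 2, a crude workaround could be to "warm up" the camera from Python before handing it to ffmpeg. The sketch below only mirrors that behaviour and is an assumption: it presumes OpenCV (cv2) is installed, and it reuses the device path and the exact ffmpeg command from the question.

import subprocess
import cv2  # assumed available; any application that opens the camera first seems to do the trick

# Open the webcam, grab a single frame and release it again, imitating the
# "first start another application" workaround from UPDATE 2.
cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)
if cap.isOpened():
    cap.read()
    cap.release()

# Then launch the exact streaming command from the question.
ffmpeg_cmd = (
    "ffmpeg -re -f v4l2 -i /dev/video0 -f alsa -ac 1 -thread_queue_size 4096 "
    "-i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx "
    "-r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 "
    "-f flv rtmp://localhost/show/stream"
)
subprocess.run(ffmpeg_cmd.split())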

- Best way to diagnose VideoCapture not opening the rtmp stream
8 January 2021, by Greg0ry
I have been pulling my hair out for a few days and I'm out of ideas.


I have two rtmp streams


- The first stream is transcoded directly by myself (I use ffmpeg to transcode) and then I attach to that stream with OpenCV - VideoCapture can open the stream with no problem.
- The second stream is transcoded by a 3rd party system (this system captures video through WebRTC and then encodes to h264) - this stream cannot be opened by VideoCapture no matter what I do.

I can attach with pure ffmpeg to that dodgy stream and I can restream it - but this is not ideal as it introduces extra network traffic and latency.


I was playing with the OPENCV_FFMPEG_CAPTURE_OPTIONS environment variable (I was trying to remove the audio stream, change the video codec, and play with rtmp options, like this: OPENCV_FFMPEG_CAPTURE_OPTIONS="loglevel;debug" python my_script.py) - no joy.

So I figured I am trying to solve this problem from the wrong end. I should somehow collect the underlying ffmpeg logs that should be available when calling VideoCapture. So I tried to set OPENCV_FFMPEG_CAPTURE_OPTIONS="v;debug" but I can see no ffmpeg output when calling VideoCapture.

This is the very simple Python 3 script I was using during my tests:


import cv2 as cv
dodgy_cap = cv.VideoCapture()
dodgy_cap.open('rtmp://my_local_ip_address/rtmp/dodgy_stream_name')
print(dodgy_cap.isOpened()) # always returns False
healthy_cap = cv.VideoCapture()
healthy_cap.open('rtmp://my_local_ip_address/rtmp/healthy_stream_name')
print(healthy_cap.isOpened()) # always returns True



I collected information about both streams with ffprobe, but even though they look different, I cannot see what the difference is that prevents OpenCV from opening a VideoCapture for the dodgy stream.
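Since ffprobe is already being used here, one low-effort way to compare the two streams is to dump ffprobe's view of each stream as JSON from Python and diff the outputs. This is only a diagnostic sketch: the URLs are the placeholders from the question and ffprobe is assumed to be on the PATH.

import json
import subprocess

def probe(url):
    # Ask ffprobe for machine-readable format and stream information.
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", url],
        capture_output=True, text=True, timeout=30,
    )
    return json.loads(result.stdout or "{}")

healthy = probe("rtmp://my_local_ip_address/rtmp/healthy_stream_name")
dodgy = probe("rtmp://my_local_ip_address/rtmp/dodgy_stream_name")

# Print both descriptions so they can be compared field by field.
print(json.dumps(healthy, indent=2, sort_keys=True))
print(json.dumps(dodgy, indent=2, sort_keys=True))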


This is a fragment from the (very) verbose log for the healthy stream:


RTMP_ClientPacket, received: notify 254 bytes 
(object begin) 
Property: 
Property: 
(object begin) 
Property: 
Property: 
Property: 
Property: 
Property: 
Property: 
Property: 
Property: 
Property: 
Property: 
(object end) 
(object end) 
Metadata:
 duration 0.00
 width 2048.00
 height 1536.00
 videodatarate 0.00
 framerate 6.00
 videocodecid 7.00
 title Session streamed by "preview"
 comment h264Preview_01_main
 encoder Lavf58.20.100
 filesize 0.00

(... raw network packets ...)

Input #0, flv, from 'rtmp://my_local_ip_address/rtmp/healthy_stream_name':
 Metadata:
 title : Session streamed by "preview"
 comment : h264Preview_01_main
 encoder : Lavf58.20.100
 Duration: 00:00:00.00, start: 159.743000, bitrate: N/A
 Stream #0:0, 41, 1/1000: Video: h264 (High), 1 reference frame, yuv420p(progressive), 2048x1536, 0/1, 6 fps, 6 tbr, 1k tbn




And this is the fragment for the dodgy stream:


RTMP_ClientPacket, received: notify 205 bytes 
(object begin) 
Property: 
(object begin) 
Property: 
Property: 
Property: 
Property: 
Property: 
Property: 
Property: 
Property: 
Property: 
(object end) 
(object end) 
RTMP_ReadPacket: fd=3 

(... raw network packets ...)

Input #0, flv, from 'rtmp://my_local_ip_address/rtmp/dodgy_stream_name':
 Duration: N/A, start: 4511.449000, bitrate: N/A
 Stream #0:0, 41, 1/1000: Video: h264 (High), 1 reference frame, yuv420p(progressive, left), 640x480, 0/1, 15.17 fps, 15.08 tbr, 1k tbn, 30 tbc
 Stream #0:1, 124, 1/1000: Audio: aac (LC), 48000 Hz, mono, fltp




Both streams don't require any authentication (they are not exposed to the outside world)


The dodgy stream contains audio, but I don't think it is the source of the problem, as I was able to connect to other healthy rtmp streams that contained audio.


I have no more ideas how I can debug this problem, please help...



I found in the VideoCapture documentation that I can enable exception mode; however, it did not help much (it says where it failed, but it does not say why):


>>> dodgy_stream = cv.VideoCapture()
>>> dodgy_stream.setExceptionMode(True)
>>> dodgy_stream.open("rtmp://my_local_ip_address/rtmp/dodgy_stream_name")
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
cv2.error: OpenCV(4.5.0) /tmp/pip-req-build-s_nildlw/opencv/modules/videoio/src/cap.cpp:177: error: (-2:Unspecified error) could not open 'rtmp://my_local_ip_address/rtmp/dodgy_stream_name' in function 'open'