
Other articles (27)
-
Libraries and binaries specific to video and audio processing
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries:
- FFMpeg: the main encoder, able to transcode almost every type of video and audio file into formats readable on the Internet (see this tutorial for its installation);
- Oggz-tools: inspection tools for ogg files;
- Mediainfo: retrieves information from most video and audio formats;
Complementary, optional binaries: flvtool2: (...)
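As a hedged illustration of the kind of transcoding FFMpeg performs for SPIPmotion (the file names, codecs and quality settings below are hypothetical, not taken from the plugin's actual configuration), a source file can be converted to web-readable Ogg and WebM variants like this:

ffmpeg -i source.avi -c:v libtheora -q:v 6 -c:a libvorbis -q:a 4 output.ogv
ffmpeg -i source.avi -c:v libvpx -b:v 1M -c:a libvorbis output.webm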
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer flash player is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully customised to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...)
-
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct steps.
Upload and retrieval of information about the source video
First, a SPIP article has to be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
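As a hedged sketch of the thumbnail-extraction step mentioned above (the file name, timestamp and size are hypothetical, not SPIPmotion's actual settings), a single frame can be pulled out of a source video with FFmpeg:

ffmpeg -i source.mp4 -ss 00:00:05 -frames:v 1 -vf scale=320:-1 thumbnail.jpg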
On other sites (4171)
-
FFplay requesting video via RTSP:// but receiving on multicast address
28 May 2014, by DavidG
First of all, I apologize for how long the supporting information in this post will be. This is my first post on this forum.
My issue is that I need to run the command-line version of ffmpeg to capture a video stream. However, as a proof of concept I'm first attempting to capture and view the video using ffplay (BTW, I have not had any success using ffmpeg or ffprobe). I'm running the ffplay command to read video from a Coretec video encoder that has multicast enabled.
Unicast address: 172.30.18.50
Multicast address: 239.130.18.50:4002
My question is: how can I request the unicast address but receive the video on the multicast address? (BTW, the ffplay operation does not work even if I replace the unicast address with the multicast address below.)
NOTE: After looking at the Wireshark trace, I see the video data has GSMTAP in the protocol column. When I run "ffmpeg -protocols", I see there is a decoder "gsm" which decodes raw GSM; however, when I use ffplay -f gsm ... I get "Protocol not found".
I am able to use VLC to view the video using the following command:
VLC rtsp://172.30.18.50
It appears from the Wireshark trace that the session is initiated on the Unicast address, but the video is streamed on the Multicast address. VLC is able to determine this and perform the appropriate operation. I don’t know what to add to ffplay to let it know that another stream will be carrying the video.
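One thing worth sketching here (untested against this encoder, so treat it as a hedged guess, not a confirmed fix): since the DESCRIBE response in the traces further down already contains the multicast address and payload description, the RTSP step can be bypassed entirely by saving those SDP lines to a local file, say stream.sdp (a hypothetical name):

v=0
o=- 1 1 IN IP4 50.18.30.172
s=Test
c=IN IP4 239.130.18.50/63
t=0 0
m=video 4002 RTP/AVP 96
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307

and then opening that file directly:

ffplay -protocol_whitelist file,udp,rtp -i stream.sdp

The -protocol_whitelist option only exists in FFmpeg builds newer than the 2014 snapshot listed further down; on an older build, ffplay -i stream.sdp alone should attempt the same thing.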
I am UNABLE to get any of the following ffplay commands to work:
ffplay -v debug rtsp://172.30.18.50
ffplay -v debug -rtsp_transport udp rtsp://172.30.18.50
ffplay -v debug -rtsp_transport udp_multicast rtsp://172.30.18.50
NOTE: I am able to get ffplay to launch, but the video is badly garbled. Maybe this bit of information will ring a bell for someone? The command I used was:
ffplay -v debug -i udp://239.130.18.50:4002?sources=172.30.18.50
The version of ffplay I'm using is:
ffplay version N-63439-g96470ca Copyright (c) 2003-2014 the FFmpeg developers
built on May 25 2014 22:09:07 with gcc 4.8.2 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-decklink --enable-zlib
libavutil 52. 86.100 / 52. 86.100
libavcodec 55. 65.100 / 55. 65.100
libavformat 55. 41.100 / 55. 41.100
libavdevice 55. 13.101 / 55. 13.101
libavfilter 4. 5.100 / 4. 5.100
libswscale 2. 6.100 / 2. 6.100
libswresample 0. 19.100 / 0. 19.100
libpostproc 52. 3.100 / 52. 3.100
The debug output for ffplay -v debug rtsp://172.30.18.50 is:
[rtsp @ 0000000002a8be80] SDP:= 0KB vq= 0KB sq= 0B f=0/0
v=0
o=- 1 1 IN IP4 50.18.30.172
s=Test
a=type:broadcast
t=0 0
c=IN IP4 239.130.18.50/63
m=video 4002 RTP/AVP 96
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307
a=control:track1
[rtsp @ 0000000002a8be80] video codec set to: mpeg4
[udp @ 0000000002a8bac0] end receive buffer size reported is 65536
[udp @ 0000000002aa1600] end receive buffer size reported is 65536
[rtsp @ 0000000002a8be80] Nonmatching transport in server reply/0
rtsp://172.30.18.50: Invalid data found when processing input
And the Wireshark trace output is:
OPTIONS rtsp://172.30.18.50:554 RTSP/1.0
CSeq: 1
User-Agent: Lavf55.41.100
RTSP/1.0 200 OK
CSeq: 1
Public: DESCRIBE, SETUP, TEARDOWN, PLAY
DESCRIBE rtsp://172.30.18.50:554 RTSP/1.0
Accept: application/sdp
CSeq: 2
User-Agent: Lavf55.41.100
RTSP/1.0 200 OK
CSeq: 2
Content-Type: application/sdp
Content-Length: 270
v=0
o=- 1 1 IN IP4 50.18.30.172
s=Test
a=type:broadcast
t=0 0
c=IN IP4 239.130.18.50/63
m=video 4002 RTP/AVP 96
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307
a=control:track1
SETUP rtsp://172.30.18.50:554 RTSP/1.0
Transport: RTP/AVP/UDP;unicast;client_port=9574-9575
CSeq: 3
User-Agent: Lavf55.41.100
RTSP/1.0 200 OK
CSeq: 3
Session: test
Transport: RTP/AVP;multicast;destination=;port=4002-4003;ttl=63
The debug output for ffplay -v debug -rtsp_transport udp rtsp://172.30.18.50 is:
[rtsp @ 0000000002c5c0a0] SDP:= 0KB vq= 0KB sq= 0B f=0/0
v=0
o=- 1 1 IN IP4 50.18.30.172
s=Test
a=type:broadcast
t=0 0
c=IN IP4 239.130.18.50/63
m=video 4002 RTP/AVP 96
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307
a=control:track1
[rtsp @ 0000000002c5c0a0] video codec set to: mpeg4
[udp @ 0000000002c62420] end receive buffer size reported is 65536
[udp @ 0000000002c726a0] end receive buffer size reported is 65536
[rtsp @ 0000000002c5c0a0] Nonmatching transport in server reply/0
rtsp://172.30.18.50: Invalid data found when processing input
And the Wireshark trace output is:
OPTIONS rtsp://172.30.18.50:554 RTSP/1.0
CSeq: 1
User-Agent: Lavf55.41.100
RTSP/1.0 200 OK
CSeq: 1
Public: DESCRIBE, SETUP, TEARDOWN, PLAY
DESCRIBE rtsp://172.30.18.50:554 RTSP/1.0
Accept: application/sdp
CSeq: 2
User-Agent: Lavf55.41.100
RTSP/1.0 200 OK
CSeq: 2
Content-Type: application/sdp
Content-Length: 270
v=0
o=- 1 1 IN IP4 50.18.30.172
s=Test
a=type:broadcast
t=0 0
c=IN IP4 239.130.18.50/63
m=video 4002 RTP/AVP 96
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307
a=control:track1
SETUP rtsp://172.30.18.50:554 RTSP/1.0
Transport: RTP/AVP/UDP;unicast;client_port=22332-22333
CSeq: 3
User-Agent: Lavf55.41.100
RTSP/1.0 200 OK
CSeq: 3
Session: test
Transport: RTP/AVP;multicast;destination=239.130.18.50;port=4002-4003;ttl=63
The debug output for ffplay -v debug -rtsp_transport udp_multicast is:
[rtsp @ 00000000002fc100] SDP:= 0KB vq= 0KB sq= 0B f=0/0
v=0
o=- 1 1 IN IP4 50.18.30.172
s=Test
a=type:broadcast
t=0 0
c=IN IP4 239.130.18.50/63
m=video 4002 RTP/AVP 96
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307
a=control:track1
[rtsp @ 00000000002fc100] video codec set to: mpeg4
nan : 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0
And the Wireshark trace output is:
OPTIONS rtsp://172.30.18.50:554 RTSP/1.0
CSeq: 1
User-Agent: Lavf55.41.100
RTSP/1.0 200 OK
CSeq: 1
Public: DESCRIBE, SETUP, TEARDOWN, PLAY
DESCRIBE rtsp://172.30.18.50:554 RTSP/1.0
Accept: application/sdp
CSeq: 2
User-Agent: Lavf55.41.100
RTSP/1.0 200 OK
CSeq: 2
Content-Type: application/sdp
Content-Length: 270
v=0
o=- 1 1 IN IP4 50.18.30.172
s=Test
a=type:broadcast
t=0 0
c=IN IP4 239.130.18.50/63
m=video 4002 RTP/AVP 96
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=245;config=000001B0F5000001B509000001000000012000C8F8A058BA9860FA616087828307
a=control:track1
SETUP rtsp://172.30.18.50:554 RTSP/1.0
Transport: RTP/AVP/UDP;multicast
CSeq: 3
User-Agent: Lavf55.41.100
Thank you in advance to whoever is willing to tackle this.
DavidG
-
How to Play Smooth 4K Video with FFplay? [closed]
14 May 2020, by Matan Marciano
I'm trying to play a 4K video with ffplay and the playback is not smooth.

Video file:
- Big Buck Bunny, 4K, 60 fps, 30 Mbps
- http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_2160p_60fps_stereo_abl.mp4

Hardware:
- Dell OptiPlex 7070 Micro (8 cores, 8 GB RAM, GPU: Intel Corporation UHD Graphics 630)
- Samsung U28E590D, 4K screen
- DisplayPort to HDMI cable

Software:
- Ubuntu 19.04
- ffplay with libmfx (according to this guide: https://github.com/Intel-Media-SDK/MediaSDK/wiki/Build-and-use-ffmpeg-with-MediaSDK)

The FFplay command I used is:
ffplay -vcodec h264_qsv -i input.ts



Notes:
- During playback, my CPU usage is around 25% (according to the 'top' command)
- FFplay logs indicate many frame drops
- I also tried VLC and the MPV player; VLC also looks bad, and MPV looks better but is still not smooth.

So, it looks like my hardware should be good enough for 4K playback. What am I missing?
How can I play smooth 4K video with FFplay?
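A hedged diagnostic sketch, not something tried in the question: benchmarking the QSV decode path on its own with ffmpeg, discarding the decoded frames, separates decoding speed from ffplay's display path (input.ts is the file name from the command above):

ffmpeg -benchmark -hwaccel qsv -c:v h264_qsv -i input.ts -f null -

If the reported speed stays well above 60 fps, the decoder is keeping up and the stutter is more likely to come from the player's rendering/display stage than from decoding.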


-
Using an actual audio recording to filter out noise from a video
9 March 2021, by user2751530
I use my laptop (an Ubuntu 18.04 LTS derivative on a Dell XPS13) to record videos (just narrated presentations) using OBS. After a presentation is done (.flv format), I process it with ffmpeg, using filters that try to reduce background noise, reduce the size of the video, re-encode to .mp4, insert a watermark, etc. Over several months, this system has worked well.


However, my laptop is now beginning to show its age (it is 4 years old). That means the fan becomes loud - loud enough to notice in a recording, though not loud enough to notice while you are working. So, even after filtering out low frequencies in ffmpeg, there are clicks and other types of sounds left in the video. I am a scientist, though not an audio/video expert. So I was thinking - is it possible to simply record the noise coming out of my machine when I am not presenting, and then use that recording to filter out the noise my machine makes during the presentation?


Blanket approaches like filtering out certain ranges of the audio spectrum are unlikely to work, as the power spectrum of the noise likely has many peaks, and these are likely to extend into the human voice range as well (I can hear them). Further, this is a moving target - the laptop is aging, and in any case the amount and type of noise it makes depend on the load and how long it has been on. Algorithm:


1. Record actual computer noise (with the added bonus of background noise) while I am not recording - ideally just before starting to record the presentation. This could take the form of a 1-2 minute audio sample.
2. Record the presentation on OBS.
3. Use 1 as a filter to get rid of the noise in 2. I imagine it would involve doing a Fourier analysis of 1 and then removing those peaks from the spectrum of 2 at each time epoch.

I have looked into sox, which is what people somewhat flippantly point you to without giving any details. I do not know how to separate the audio track out of a video and then interleave it back in (I am not an expert on the software here). Other than RTFM, is there any helpful advice anyone could offer? I have searched but have not been able to find a HOWTO. I expect that is probably the fault of my search, since I refuse to believe this is a new idea - it is a standard method used in many fields to get rid of noise, including astronomy.
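For what it's worth, a hedged sketch of a sox-based version of the algorithm above, with hypothetical file names (talk.flv for the OBS recording, noise.wav for a 1-2 minute noise-only sample captured just before recording):

ffmpeg -i talk.flv -vn -acodec pcm_s16le talk.wav
sox noise.wav -n noiseprof noise.prof
sox talk.wav talk_clean.wav noisered noise.prof 0.21
ffmpeg -i talk.flv -i talk_clean.wav -map 0:v -map 1:a -c:v copy -c:a aac talk_clean.mp4

The first command pulls the audio track out of the video, the two sox commands build a noise profile and subtract it (0.21 is a commonly suggested reduction amount; higher values remove more noise but risk eating into the voice), and the last command muxes the cleaned audio back with the untouched video stream.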