
Media (1)
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (96)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that publish documents of all types.
It creates "media" items, namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "media" article;
On other sites (7998)
-
How to Stream With FFmpeg and NGINX RTMP
2 October 2023, by willowen100
I'm trying to stream from OBS (Open Broadcaster Software) on my Windows PC to NGINX+RTMP, installed on the same PC. I have set a bitrate of 20,000 Kbps in OBS, which will be the foundation bitrate for the multiple streams I aim to set up within NGINX.



I would like to stream into NGINX and then use FFmpeg on the fly to transcode the stream to comply with the streaming site I intend to broadcast to, for example Twitch.tv.



I can view my stream via VLC if I use the network path rtmp://localhost/live/test. However, when I check Twitch's inspector site to see whether my stream is coming through, I'm not receiving anything. I have no idea whether my FFmpeg command is running or whether something is wrong with my NGINX configuration below.



If someone could shed some light on where I might be going wrong, that would be greatly appreciated.



nginx.conf



#user www-data;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server_tokens off;

    include mime.types;
    default_type application/octet-stream;
    sendfile off;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        # make an internal server page and put it in html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

rtmp {
    server {
        listen 1935;
        chunk_size 8192;

        application live {
            live on;
            #interleave on;
            #wait_video on;
            record off;

            # Twitch
            exec_push "D:\Users\Will\Downloads\ffmpeg\bin"
                -i rtmp://localhost/source/$name
                -c:v libx264
                -c:a copy
                -preset veryfast
                -profile:v high
                -level 4.1
                -x264-params "nal-hrd=cbr" "opencl=true"
                -b:v 8000K
                -minrate 8000K
                -maxrate 8000K
                -keyint 2
                -s 1920x1080
                push rtmp://live-lhr03.twitch.tv/app/STREAM_KEY;
        }
    }
}
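For reference, a minimal corrected sketch of the application block, under these assumptions: exec_push must name the executable itself, not its bin directory (ffmpeg.exe is assumed to live in the folder quoted above); the input URL should reference the live application that OBS publishes to, since no source application is defined; -keyint is not an FFmpeg option, so GOP size is set with -g (120 assumes 60 fps for a 2-second keyframe interval); the two -x264-params strings become one colon-separated argument; and the output is sent with -f flv, because push is a standalone nginx-rtmp directive, not part of exec_push. The -bufsize value is also an added assumption.

application live {
    live on;
    record off;

    # point exec_push at the executable itself and read back from this same app
    exec_push "D:\Users\Will\Downloads\ffmpeg\bin\ffmpeg.exe"
        -i rtmp://localhost/live/$name
        -c:v libx264 -preset veryfast -profile:v high -level 4.1
        -x264-params "nal-hrd=cbr:opencl=true"
        -b:v 8000K -minrate 8000K -maxrate 8000K -bufsize 16000K
        -g 120 -s 1920x1080
        -c:a copy
        -f flv rtmp://live-lhr03.twitch.tv/app/STREAM_KEY;
}

The exec_* directives are also widely reported to be unreliable in Windows builds of the RTMP module, so a quick sanity check is to run the equivalent command by hand in PowerShell while OBS is streaming:

ffmpeg -i rtmp://localhost/live/test `
    -c:v libx264 -preset veryfast -b:v 8000K -maxrate 8000K -bufsize 16000K `
    -g 120 -c:a copy -f flv rtmp://live-lhr03.twitch.tv/app/STREAM_KEY

If Twitch's inspector sees the stream then, the FFmpeg side is fine and the problem lies with exec_push.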




Many thanks



UPDATE 1



For the sake of simplicity I'm testing OBS, NGINX and FFmpeg all on the same physical computer, a Windows PC. Once everything is working I will port NGINX and FFmpeg to my Linux PC.



I'm using a pre-compiled version of NGINX with the RTMP module baked in. I've also downloaded the latest FFmpeg build and added it to the Windows PATH environment variable so that FFmpeg commands can be called from Command Prompt/PowerShell.



Here's the path I'm trying to take:



OBS is encoding x264 at 20,000 Kbps and its destination is an RTMP application in NGINX called 'live'. From here I want to encode that one OBS stream into several lower-bandwidth streams so that I can comply with the requirements of streaming services such as Twitch and Mixer, for example.
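A single FFmpeg process can produce several such renditions from the one input; a hedged sketch, in which the bitrate ladder and the twitch/mixer application names are illustrative only:

ffmpeg -i rtmp://localhost/live/test `
    -c:v libx264 -preset veryfast -b:v 8000K -maxrate 8000K -bufsize 16000K -g 120 -c:a copy `
    -f flv rtmp://localhost/twitch/test `
    -c:v libx264 -preset veryfast -s 1280x720 -b:v 4000K -maxrate 4000K -bufsize 8000K -g 120 -c:a copy `
    -f flv rtmp://localhost/mixer/test

Each group of output options applies to the -f flv destination that follows it, so every service can get its own resolution and bitrate.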



At the end of the FFmpeg parameters, do I push the output directly to Twitch, or do I take FFmpeg's output, send it back into a second RTMP application on NGINX, and push out to Twitch from there?



One advantage of pushing FFmpeg's output back into NGINX before it goes off to the external streaming service is that I can open the transcoded stream in an RTMP-capable player such as VLC, allowing me to view the compressed output.
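That two-hop layout maps naturally onto a second RTMP application whose only job is to relay to Twitch; a hedged sketch, assuming an application named twitch is added next to live and that ffmpeg is on the PATH or given as a full path:

application live {
    live on;
    record off;
    # transcode into the local twitch application instead of sending out directly
    exec_push ffmpeg -i rtmp://localhost/live/$name
        -c:v libx264 -preset veryfast -b:v 8000K -c:a copy
        -f flv rtmp://localhost/twitch/$name;
}

application twitch {
    live on;
    record off;
    # nginx-rtmp relays everything published here on to Twitch;
    # VLC can open rtmp://localhost/twitch/<name> to monitor the encode
    push rtmp://live-lhr03.twitch.tv/app/STREAM_KEY;
}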



Another question I have: can the FFmpeg parameters be put on separate lines, or do they all have to be on one line?



This is a really good site I have been referring back to





-
RTP/UDP or RTSP for accessing a stream and passing frames to OpenCV?
15 January 2020, by xor31four
Apologies for my inexperience in this domain. I am trying to implement an algorithm that detects the occurrence of a particular event in real time. The particular event is consecutive growth of motion across 5 consecutive frames, almost analogous to a growing sphere or beach ball.
I am able to detect the event on pre-recorded video in .avi format (MJPEG frames) with EmguCV (a C# wrapper for OpenCV). The method I use is based on background subtraction, outlined here: https://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/
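For context, the linked method boils down to maintaining a background model, subtracting each new frame from it, and measuring the foreground that remains; a minimal Python/OpenCV sketch of that idea (MOG2 is one stock subtractor; the linked post uses simple frame averaging, but the shape of the loop is the same):

import cv2

cap = cv2.VideoCapture("recorded.avi")                 # pre-recorded MJPEG .avi
subtractor = cv2.createBackgroundSubtractorMOG2(history=100)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                     # foreground (motion) mask
    motion_area = cv2.countNonZero(mask)               # crude per-frame motion measure
    # a motion_area that grows across 5 consecutive frames would flag the event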
The problem is that the live video transport stream is usually in the format rtsp://XXX.XXX.X.XX/stream1.sdp
EmguCV on Windows can't decode this H.264 stream for a reason I am still trying to figure out. I tried the same URL using Python and OpenCV and received a non-matching transport message from the server, similar to the one in "Nonmatching transport in server reply" when cv2.VideoCapture rtsp onvif camera, how to fix? - the answer there didn't work for me.
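For what it's worth, that error typically means the camera only accepts RTSP interleaved over TCP while FFmpeg's RTSP client defaults to UDP. With OpenCV's FFmpeg backend the transport can be forced through an environment variable that must be set before the capture is opened; a hedged sketch, keeping the placeholder URL from above:

import os
# force FFmpeg's RTSP demuxer to use TCP before OpenCV's backend reads the option
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;tcp"

import cv2
cap = cv2.VideoCapture("rtsp://XXX.XXX.X.XX/stream1.sdp", cv2.CAP_FFMPEG)

Whether EmguCV exposes an equivalent knob depends on the version; this only demonstrates the transport issue from Python.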
I can open the RTSP URL using VLCPlayer and its corresponding C# library - from my understanding it uses FFmpeg, although I may be wrong. FFmpeg on the command line can access the stream.
EmguCV also uses FFmpeg as a backend, which is why I am very confused as to why it can't open the RTSP URL.
Here is an image of the module tree when VLCPlayer opens the RTSP stream:
(image not included)
From my understanding, EmguCV doesn't use live555 or avcodec.
I've noticed that if I change the streamer configuration to use UDP or RTP rather than RTSP, EmguCV can access the H.264 URL, although the URL is now in the format rtp/udp://XXX.XXX.X.XX:XXXXX - no .sdp extension.
I would highly appreciate it if someone with more experience could give me some pointers.
I have a great deal to learn, even though I have spent a lot of time researching this topic. With regard to the detections remaining successful, would it be recommended to process H.264 frames with possible distortion, or MJPEG frames? I can't afford a delay longer than 1-2 seconds, and would ideally like to continue with the current method used to detect the event.
From my current understanding, here are the routes I can take:
1) Use RTP/UDP and process the H.264 video using EmguCV - there is some distortion in the video when there is a large amount of movement, and I also receive several H.264 error messages during the stream:
[h264 @ 00000124f13a5080] SPS unavailable in decode_picture_timing
[h264 @ 00000124f13a5080] non-existing PPS 0 referenced
[h264 @ 00000124f13a5080] decode_slice_header error
[h264 @ 00000124f13a5080] no frame!
[h264 @ 00000124f135eac0] Missing reference picture, default is 0
[h264 @ 00000124f135eac0] decode_slice_header error
[h264 @ 00000124f13a5080] cbp too large (6929) at 11 20
[h264 @ 00000124f13a5080] error while decoding MB 11 20
[h264 @ 00000124f135eac0] top block unavailable for requested intra mode -1
[h264 @ 00000124f135eac0] error while decoding MB 3 0
[h264 @ 00000124f124e580] cbp too large (96) at 33 0
[h264 @ 00000124f124e580] error while decoding MB 33 0
[h264 @ 00000124f19940c0] top block unavailable for requested intra mode
[h264 @ 00000124f19940c0] error while decoding MB 1 1
2) Keep the RTSP protocol and use libav to decode the frames and pass them to EmguCV, following this answer: https://www.raspberrypi.org/forums/viewtopic.php?t=83127 - I'm not sure whether this will introduce a huge delay.
3) Keep the RTSP protocol and use FFmpeg to convert the H.264 stream to MJPEG, then access that URL instead (see the sketch after this list)? Again, I'm not sure whether this will be a feasible solution or whether it will introduce a great delay.
4) Use a Linux machine rather than Windows and configure a GStreamer backend - not ideal.
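For option 3, FFmpeg can pull the RTSP feed and re-serve it as MJPEG over HTTP in a single process; a hedged sketch (the port, quality value, and single-client -listen server are illustrative choices, and the extra decode/encode hop does add some latency):

ffmpeg -rtsp_transport tcp -i rtsp://XXX.XXX.X.XX/stream1.sdp -an -c:v mjpeg -q:v 5 -f mpjpeg -listen 1 http://127.0.0.1:8090/feed

OpenCV or EmguCV could then open http://127.0.0.1:8090/feed as an ordinary MJPEG source.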
Thank you for taking the time to read this post.
-
fastest ffmpeg without caring about quality
31 May 2019, by RedDeath
I would like to convert any video into .mp4 as fast as possible without caring about quality loss. I have used the following options, with which I have been able to finish the process in 37 seconds for a 10-second video:
-vcodec h264
-crf 32
-preset ultrafast
However, 37 seconds is still too long for a 10-second video. Are there any improvements I can make to the command in order to reduce the execution time?
Edit (extra info):
I'm using FFmpeg Android (implementation 'com.writingminds:FFmpegAndroid:0.3.2'), even though commands usually work for any FFmpeg (with a few variants depending on the FFmpeg version). The command used in my case, which gave me the fastest result so far, is:
mFfmpeg.execute(
    arrayOf(
        "-i", videoCopy?.path,
        "-vcodec", "h264",
        "-crf", "32",
        "-preset", "ultrafast",
        "-y", uploadFile?.path),
    object : ExecuteBinaryResponseHandler() { ... })
Which as a regular FFmpeg command would be:
ffmpeg -i {video?.path} -vcodec h264 -crf 32 -preset ultrafast -y {uploadFile?.path}
where video is my original video File and uploadFile is the File where I want to save the result. On a Samsung J3 (SM-J320M; you can find its specifications online) this command takes the aforementioned 37 seconds.
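Since this build only includes libx264 for H.264 (so -vcodec h264 already selects the only encoder available, as the stream mapping in the log below confirms) and the preset is already ultrafast, the remaining cheap wins are encoding fewer pixels and not re-encoding the audio. A hedged variant of the same command (the 480-pixel scale target is an assumption, and -acodec copy only works because the source AAC track already fits the .mp4 container):

ffmpeg -i {video?.path} -vf scale=-2:480 -vcodec h264 -crf 32 -preset ultrafast -acodec copy -y {uploadFile?.path}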
After executing this command, the first onProgress message returned by FFmpeg prints:
ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (GCC)
configuration:
--target-os=linux
--cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi-
--arch=arm
--cpu=cortex-a8
--enable-runtime-cpudetect
--sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot
--enable-pic
--enable-libx264
--enable-libass
--enable-libfreetype
--enable-libfribidi
--enable-libmp3lame
--enable-fontconfig
--enable-pthreads
--disable-debug
--disable-ffserver
--enable-version3
--enable-hardcoded-tables
--disable-ffplay
--disable-ffprobe
--enable-gpl
--enable-yasm
--disable-doc
--disable-shared
--enable-static
--pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config
--prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a
--extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all'
--extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie'
--extra-libs='-lpng -lexpat -lm'
--extra-cxxflags=
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/DCIM/Yakatak/656.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2019-05-29 11:27:56
    location        : +51.5202-000.1435/
    location-eng    : +51.5202-000.1435/
  Duration: 00:00:09.47, start: 0.000000, bitrate: 12147 kb/s
    Stream #0:0(eng): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1280x720, 11899 kb/s, 30.02 fps, 30 tbr, 90k tbn, 180k tbc (default)
    Metadata:
      rotate          : 90
      creation_time   : 2019-05-29 11:27:56
      handler_name    : VideoHandle
    Side data:
      displaymatrix: rotation of -90.00 degrees
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 256 kb/s (default)
    Metadata:
      creation_time   : 2019-05-29 11:27:56
      handler_name    : SoundHandle
[libx264 @ 0xb5428800] using cpu capabilities: none!
[libx264 @ 0xb5428800] profile Constrained Baseline, level 3.1
[libx264 @ 0xb5428800] 264 - core 148 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=32.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
Output #0, mp4, to '/storage/emulated/0/DCIM/Yakatak/uploadFile.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    location-eng    : +51.5202-000.1435/
    location        : +51.5202-000.1435/
    encoder         : Lavf57.25.100
    Stream #0:0(eng): Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 720x1280, q=-1--1, 30 fps, 15360 tbn, 30 tbc (default)
    Metadata:
      handler_name    : VideoHandle
      creation_time   : 2019-05-29 11:27:56
      encoder         : Lavc57.24.102 libx264
    Side data:
      unknown side data type 10 (24 bytes)
    Stream #0:1(eng): Audio: aac (LC) ([64][0][0][0] / 0x0040), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      creation_time   : 2019-05-29 11:27:56
      handler_name    : SoundHandle
      encoder         : Lavc57.24.102 aac
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
frame=    0 fps=0.0