
Media (91)
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
-
Les Miserables
4 June 2012, by
Updated: February 2013
Language: English
Type: Text
-
Do not display certain information: home page
23 November 2011, by
Updated: November 2011
Language: French
Type: Image
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
-
Richard Stallman et la révolution du logiciel libre - Une biographie autorisée (version epub)
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (29)
-
Libraries and binaries specific to video and audio processing
31 January 2010, by
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFMpeg: the main encoder, able to transcode almost any type of video or audio file into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: inspection tools for ogg files; Mediainfo: retrieves information from most video and audio formats;
Complementary and optional binaries: flvtool2: (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player in use was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...) -
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information from the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
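The two extra actions mentioned in that last excerpt (probing the source's streams and extracting a thumbnail), together with the transcoding role FFMpeg plays for SPIPmotion, map onto command-line calls roughly like the following. This is only an illustrative sketch with hypothetical file names, not the commands SPIPmotion actually runs:

# Retrieve the technical information of the audio and video streams (JSON output)
ffprobe -v error -print_format json -show_format -show_streams source.mp4

# Extract a single frame a few seconds in to use as a thumbnail
ffmpeg -ss 5 -i source.mp4 -frames:v 1 -q:v 2 thumbnail.jpg

# Transcode to a web-readable format (H.264 video, AAC audio)
ffmpeg -i source.mp4 -c:v libx264 -preset medium -crf 23 -c:a aac -b:a 128k output.mp4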
On other sites (4673)
-
How to Stream With FFmpeg and NGINX RTMP
2 October 2023, by willowen100
I'm trying to stream from OBS (Open Broadcaster Software) on my Windows PC to NGINX+RTMP, also installed on the same PC. I have set a bitrate of 20,000 Kbps in OBS, which will be the foundation bitrate for the multiple streams I aim to set up within NGINX.



I would like to be able to stream into NGINX and then use FFmpeg on the fly to transcode the stream so that it complies with the streaming site I intend to broadcast to, for example Twitch.tv.



I can view my stream via VLC if I use the network path rtmp://localhost/live/test. However, when I'm on Twitch's inspector site to see if my stream is coming through, I'm not receiving anything. I have no idea whether my FFmpeg is working or whether there is something wrong with my NGINX configuration below.



If someone could shed some light on where I might be going wrong, that would be greatly appreciated.



nginx.conf



#user www-data;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server_tokens off;

    include mime.types;
    default_type application/octet-stream;
    sendfile off;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        # make a internal server page and put it in html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

rtmp {
    server {
        listen 1935;
        chunk_size 8192;

        application live {
            live on;
            #interleave on;
            #wait_video on;
            record off;

            # Twitch
            exec_push "D:\Users\Will\Downloads\ffmpeg\bin"
                -i rtmp://localhost/source/$name
                -c:v libx264
                -c:a copy
                -preset veryfast
                -profile:v high
                -level 4.1
                -x264-params "nal-hrd=cbr" "opencl=true"
                -b:v 8000K
                -minrate 8000K
                -maxrate 8000K
                -keyint 2
                -s 1920x1080
                push rtmp://live-lhr03.twitch.tv/app/STREAM_KEY;
        }
    }
}
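A few things in the exec_push line above are worth flagging: it points at the ffmpeg bin directory rather than the executable, its input URL names an application called source while the block is named live, -keyint is not an ffmpeg option (the keyframe interval is set with -g), and the final argument normally needs to be an output ffmpeg understands, typically -f flv followed by the ingest URL, rather than a push keyword. A hedged sketch of a more conventional shape for the directive, assuming ffmpeg.exe is on the PATH and the stream arrives in this same live application:

# sketch only: ffmpeg.exe assumed to be on the PATH; -g 120 is roughly a
# two-second keyframe interval at 60 fps
exec_push ffmpeg -i rtmp://localhost/live/$name
    -c:v libx264 -preset veryfast -profile:v high -level 4.1
    -b:v 8000k -minrate 8000k -maxrate 8000k -bufsize 16000k
    -g 120 -c:a copy
    -f flv rtmp://live-lhr03.twitch.tv/app/STREAM_KEY;

The stream key and Twitch ingest host are left exactly as in the question.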




Many thanks



UPDATE 1



For the sake of simplicity I'm testing OBS, NGINX and FFmpeg all on the same physical computer, a Windows PC. Once everything is working I will port NGINX and FFmpeg to my Linux PC.



I'm using a pre-compiled version of NGINX with the RTMP module baked in. I've also downloaded the latest FFmpeg build and added it to the Windows PATH environment variable so that FFmpeg commands can be called from Command Prompt/PowerShell.



Here's the path I'm trying to take:



OBS is encoding x264 at 20,000 Kbps and its destination is an RTMP application in NGINX called 'live'. From here I want to encode the one stream coming from OBS into several smaller-bandwidth streams so that I can comply with the requirements of streaming services such as Twitch and Mixer.
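A single ffmpeg process can produce several such renditions from the one incoming stream by repeating the output options for each output. The sketch below is illustrative only: the incoming stream name test comes from the question, but the bitrates, the restream application and the caret line continuation (cmd.exe syntax; PowerShell uses a backtick) are assumptions rather than a tested command:

ffmpeg -i rtmp://localhost/live/test ^
    -c:v libx264 -preset veryfast -b:v 6000k -maxrate 6000k -bufsize 12000k -g 120 -c:a copy ^
    -f flv rtmp://live-lhr03.twitch.tv/app/STREAM_KEY ^
    -c:v libx264 -preset veryfast -b:v 3500k -maxrate 3500k -bufsize 7000k -g 120 -c:a copy ^
    -f flv rtmp://localhost/restream/low

Pushing one of the outputs back into a second NGINX application (restream above) is also what would let VLC open the compressed copy, as mentioned further down.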



At the end of the FFmpeg parameters, do I push the output directly to Twitch, or do I take the output of FFmpeg, send it back into a second RTMP application on NGINX, and then push that out to Twitch?



One advantage of pushing FFmpeg's output back into NGINX before it goes off to the external streaming service is that I can open the transcoded stream in an RTMP-capable player such as VLC, allowing me to view the compressed output.



Another question I have is: can the FFmpeg parameters be put on separate lines, or do they all have to be on one line?



This is a really good site I have been referring back to





-
ffplay startup time proportional to specified framerate
17 December 2019, by bremen_matt
I am playing a video over HTTP using ffplay. The call I am using looks like this:
ffplay -framerate 30 -fflags nobuffer -flags low_delay -autoexit -i http://localhost:8880
The video is an H.264 encoding where (my understanding is a bit unclear here) it is something like a "raw" H.264 stream without timestamps.
My primary concern is to get video displayed with low latency. In that regard, the video is fine.
The issue is with the framerate and with the startup time.
The video source is emitting frames as soon as they are processed, so the frame rate is not constant. However, my experience is that as long as you specify a framerate larger than the max achievable frame rate of the source, then the viewer still looks fine. On the flipside, if the video source starts emitting frames at 60 fps, but I specify a framerate of 30, then the delay just sort of builds up in ffplay to the point where after 10 seconds, the video is 20 seconds behind. So the first question would be whether there is a way to get ffplay to use a variable framerate. The behavior I am looking for is "display a frame as soon as it is received over http".
In light of the aforementioned issue, the approach I have been taking is to simply specify a high framerate, which seems to work. However, there is an issue with this approach in the form of startup time. When I set the framerate to 10, the ffplay window starts in approx 3 seconds, but then quickly starts accruing a lag (so I can’t do this). When I set the framerate to 100, the ffplay window takes 30 seconds (literally 30 seconds) to start, but then will not have any lag.
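The long start-up at high -framerate values is usually dominated by input probing rather than playback itself, so it may be worth bounding how much data ffplay analyses before it opens the window. This is only a hedged suggestion, reusing the URL from the question (the -framerate value is just an example); whether it also helps with the variable frame rate depends on the source:

ffplay -probesize 32 -analyzeduration 0 -fflags nobuffer -flags low_delay -framerate 60 -autoexit -i http://localhost:8880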
I have seen that ffmpeg has a vsync option that on the surface seems like it would allow you to set a variable framerate. However, ffplay doesn't seem to recognize this. I would also be willing to pipe the output of ffmpeg to a different window (I am running Ubuntu 18.04) if that is what must be done, but I would prefer not to have to recompile ffplay. -
FFMPEG merge audio tracks into one and encode using NVENC
5 November 2019, by L0Lock
I often shoot films with several audio inputs, resulting in video files with multiple audio tracks that are supposed to be played together at the same time.
I usually edit those files, and there I can do whatever I want with them, but sometimes I would also like to send the files online right away without editing, in which case I would enjoy FFMPEG's fast, simple, quality encoding. But here's the catch: most online video streaming services don't support multiple audio tracks, so I have to merge them into one so we can hear everything.
I also want to upscale the video (it’s a little trick for the streaming service to trigger its higher quality encoding).
And finally, since it's just an encoding meant to be shared on a streaming service, I prefer a fast and light encoding over quality, which HEVC NVENC is good for. So far I've tried to use the amix advanced filter, and I use the Lanczos filter for upscaling, which seems to give a better result in my case.
The input file is quite simple:
Stream 0:0: video track
Stream 0:1: main audio recording
Stream 0:2: secondary audio recording
The audio tracks are at the correct volume, duration and position in time, so the only thing I really need is to turn them into one track:
ffmpeg -i "ow_raw.mp4" -filter_complex "[0:1][0:2]amix=inputs=2[a]" -map "0:0" -map "[a]" -c:v hevc_nvenc -preset fast -level 4.1 -pix_fmt yuv420p -vf scale=2560:1:flags=lanczos "ow_share.mkv" -y
But it doesn't work:
ffmpeg version N-94905-g8efc9fcc56 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9.1.1 (GCC) 20190807
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
libavutil 56. 35.100 / 56. 35.100
libavcodec 58. 56.101 / 58. 56.101
libavformat 58. 32.104 / 58. 32.104
libavdevice 58. 9.100 / 58. 9.100
libavfilter 7. 58.102 / 7. 58.102
libswscale 5. 6.100 / 5. 6.100
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'ow_raw.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
creation_time : 2019-11-02T16:43:32.000000Z
date : 2019
Duration: 00:15:49.79, start: 0.000000, bitrate: 30194 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/smpte170m/bt470m), 1920x1080 [SAR 1:1 DAR 16:9], 29805 kb/s, 60 fps, 60 tbr, 90k tbn, 120 tbc (default)
Metadata:
creation_time : 2019-11-02T16:43:32.000000Z
handler_name : VideoHandle
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 196 kb/s (default)
Metadata:
creation_time : 2019-11-02T16:43:32.000000Z
handler_name : SoundHandle
Stream #0:2(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 184 kb/s (default)
Metadata:
creation_time : 2019-11-02T16:43:32.000000Z
handler_name : SoundHandle
Stream mapping:
Stream #0:1 (aac) -> amix:input0 (graph 0)
Stream #0:2 (aac) -> amix:input1 (graph 0)
Stream #0:0 -> #0:0 (h264 (native) -> hevc (hevc_nvenc))
amix (graph 0) -> Stream #0:1 (libvorbis)
Press [q] to stop, [?] for help
[hevc_nvenc @ 000002287e34a040] InitializeEncoder failed: invalid param (8)
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!
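For what it's worth, two details of the command above stand out: -vf and -filter_complex are combined for the same video stream, and scale=2560:1 asks for a frame only one pixel high, either of which could plausibly make hevc_nvenc reject its parameters. A hedged variant that moves the scaling into the filter graph, keeps the height proportional and even, and drops the explicit -level so the encoder can pick one that fits the new resolution (this is only a sketch, not a verified fix):

ffmpeg -i "ow_raw.mp4" -filter_complex "[0:0]scale=2560:-2:flags=lanczos[v];[0:1][0:2]amix=inputs=2[a]" -map "[v]" -map "[a]" -c:v hevc_nvenc -preset fast -pix_fmt yuv420p "ow_share.mkv" -y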