
Media (1)
-
SWFUpload Process
6 September 2011, by
Updated: September 2011
Language: French
Type: Text
Other articles (111)
-
Publish on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Automatic installation script for MediaSPIP
25 April 2011, by
To work around installation difficulties caused mainly by server-side software dependencies, an "all in one" bash installation script has been created to make this step easier on a server running a compatible Linux distribution.
You need SSH access to your server and a "root" account in order to use it, so that the dependencies can be installed. Contact your hosting provider if you do not have this.
The documentation on how to use the installation script (...)
-
Adding user-specific information and other author-related behaviour changes
12 April 2011, by
The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the plugins "Champs Extras 2" and "Interface pour Champs Extras".
On other sites (10307)
-
Low latency routing of rtp input to rtsp output with ffmpeg on a server
29 May 2020, by guillefix
I want to be able to do some simple low-latency screen sharing. I know peer-to-peer would give the lowest latency, but using an intermediate server seems a lot easier to set up. I have found this awesome little library, which sets up an RTSP server and which I'm running on my DigitalOcean server. I then:

- Set up OBS to stream using RTP to the server on port 8558 with libx264 encoding.
- Run ffmpeg -re -stream_loop -1 -i rtp://127.0.0.1:8558 -c:v libx264 -c:a aac -f rtsp rtsp://localhost:8554/mystream on the server.
- Open rtsp://<server ip>:8554/mystream in VLC.

However, the latency seems to be quite high. With my crappy internet it must have been around half a minute; a friend with better internet saw it fluctuating between 4 and 15 seconds. Furthermore, there seem to be a lot of artifacts in the video (problems with the encoding? I'm not sure why these happen).



I attach my OBS settings below, and an example of the artifacts.



My question is: are there settings in OBS and in ffmpeg that would give as low an end-to-end latency as possible, while not producing too many bad artifacts? I'm not very well versed in video encoding and streaming, so this is all quite new to me. I'm willing to learn!
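
For reference, the kind of low-latency variant of the relay command I have in mind looks roughly like this (a sketch only, with guessed values: -re is dropped since the input is live, x264 is pinned to its ultrafast/zerolatency mode, and ffmpeg's input probing and buffering are cut down):

# sketch: same relay as above, minus -re, with reduced input probing/buffering
ffmpeg -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 \
       -i rtp://127.0.0.1:8558 \
       -c:v libx264 -preset ultrafast -tune zerolatency \
       -c:a aac \
       -f rtsp rtsp://localhost:8554/mystream

On the playback side, VLC keeps its own buffer too; its --network-caching option (in milliseconds) defaults to about a second, so lowering it, e.g. vlc --network-caching=100 rtsp://<server ip>:8554/mystream, may matter as much as the ffmpeg flags.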








-
Trying to capture display output for real-time analysis with OpenCV; I need help with interfacing with the OS for input
26 July 2024, by mirari
I want to apply operations from the OpenCV computer vision library, in real time, to video captured from my computer display.
The idea in this particular case is to detect interesting features during gameplay in a popular game and provide the user with an enhanced experience; but I could think of several other scenarios where one would want live access to this data as well.
At any rate, for the development phase it might be acceptable to use canned video, but for the final application performance and responsiveness are obviously critical.



I am trying to do this on Ubuntu 10.10 as of now, and would prefer to use a UNIX-like system, but any options are of interest.
My C skills are very limited, so whenever talking to OpenCV through Python is possible, I try to use that instead.
Please note that I am trying to capture NOT from a camera device, but from a live stream of display output; and I'm at a loss as to how to take the input. As far as I can tell, CaptureFromCAM works only for camera devices, and it seems to me that the requirement for real-time performance in the end result makes storing to a file and reading it back through CaptureFromFile a bad option.



The most promising route I have found so far seems to be using ffmpeg with the x11grab option to capture from an X11 display; e.g. the command
ffmpeg -f x11grab -sameq -r 25 -s wxga -i :0.0 out.mpg
captures 1366x768 of display 0 to 'out.mpg'.
I imagine it should be possible to treat the output stream from ffmpeg as a file to be read by OpenCV (presumably by using the CaptureFromFile function), maybe by using pipes; but this is all at a much higher level than I have ever dealt with before and I could really use some directions.
Do you think this approach is feasible? And more importantly, can you think of a better one? How would you do it?
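
Concretely, the pipe-based variant I have in mind would be something like the following (only a sketch: opencv_reader.py is a hypothetical script that consumes raw frames from its standard input, and bgr24 is chosen because OpenCV works with BGR byte order):

# sketch: grab the display as raw BGR frames on stdout instead of writing a file,
# then hand the pipe to a (hypothetical) Python/OpenCV reader consuming stdin
ffmpeg -f x11grab -r 25 -s 1366x768 -i :0.0 -f rawvideo -pix_fmt bgr24 pipe:1 | python opencv_reader.py

Each frame then arrives on the reader's standard input as a fixed-size block of 1366x768x3 bytes, which avoids going through a file and CaptureFromFile altogether.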


-
ffmpeg - caching ahead piped input as insurance while maintaining low latency and real-time output
5 December 2020, by hedgehog90
I'm piping a live transcoded stream into ffmpeg (simplified for brevity):


mpv playlist --o=- | ffmpeg -re -i - -tune zerolatency -f flv rtmp://blah.com/live


The piped input usually runs above 1x encoding speed, but every now and then it can run a little slower than real time (just a momentary dip to 0.99x or 0.98x).
When this happens, the output on the RTMP server (a popular streaming service with an audience) will usually pause for a couple of seconds.


To overcome this I want ffmpeg to cache a few seconds ahead, so that mpv's output (which is produced at whatever speed it is read, so potentially very fast) can supply ffmpeg with a little extra, and whenever mpv dips a little under 1x speed there is some insurance that ffmpeg has already cached away. This should be doable while maintaining the lowest possible latency.


Question is, how?
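
One direction that looks plausible is to put an explicit buffer between the two processes, so mpv can run ahead and fill it while ffmpeg keeps draining it at real-time speed. A sketch, assuming the mbuffer tool is available (pv with its -B buffer-size option would serve the same purpose; 64M is an arbitrary guess corresponding to a few seconds of my stream):

# sketch: an in-memory pipe buffer between mpv and ffmpeg; mpv fills it ahead of real time,
# and ffmpeg (still paced by -re) keeps draining it even when mpv briefly dips below 1x
mpv playlist --o=- | mbuffer -q -m 64M | ffmpeg -re -i - -tune zerolatency -f flv rtmp://blah.com/live

Because ffmpeg still reads at real-time speed due to -re, the buffer should only absorb mpv's jitter rather than add its own delay to the output.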