
Other articles (112)
-
Personalise by adding your logo, banner or background image
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (14640)
-
ffmpeg: What is the best practice to keep a live connection/socket with a camera, and save time on ffprobe
15 May 2022, by Jeff Strongman
Today I used the following command with subprocess.PIPE and subprocess.Popen in Python 3:

ffmpeg -i udp://{address_of_camera} \
 -vf "select='if(eq(pict_type,I),st(1,t),gt(t,ld(1)))',setpts=N/FRAME_RATE/TB" \
 -f rawvideo -an -vframes {NUM_WANTED_FRAMES} pipe:



This command helps me to capture NUM_WANTED_FRAMES frames from a live camera at a given moment.
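
Roughly, the Python side of that call could look like the sketch below; the resolution, the added -pix_fmt bgr24, and the frame-size arithmetic are illustrative assumptions rather than part of the original command.

import subprocess

CAMERA_URL = "udp://{address_of_camera}"      # same placeholder as above
NUM_WANTED_FRAMES = 25                        # illustrative value
WIDTH, HEIGHT = 1280, 720                     # assumed stream resolution
FRAME_SIZE = WIDTH * HEIGHT * 3               # bytes per raw bgr24 frame

cmd = [
    "ffmpeg", "-i", CAMERA_URL,
    "-vf", "select='if(eq(pict_type,I),st(1,t),gt(t,ld(1)))',setpts=N/FRAME_RATE/TB",
    "-f", "rawvideo", "-pix_fmt", "bgr24", "-an",
    "-vframes", str(NUM_WANTED_FRAMES),
    "pipe:",
]

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
frames = []
for _ in range(NUM_WANTED_FRAMES):
    data = proc.stdout.read(FRAME_SIZE)       # one decoded frame per read
    if len(data) < FRAME_SIZE:                # stream ended or ffmpeg failed
        break
    frames.append(data)
proc.wait()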

However... it takes me about 4 seconds to read the frames, and about 2.5 seconds to open a socket between my computer and the camera's computer.


Is there a way to keep a socket/connection always open between my computer and the camera's computer, to save the 2.5 seconds?


I read something about fifo_size and overrun_fatal. I thought that maybe I could set fifo_size equal to NUM_WANTED_FRAMES, and overrun_fatal to True. Would this solve my problem? Or is there a different, simpler/better solution?
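
(Side note, for the record: ffmpeg passes UDP protocol options as query parameters on the input URL; fifo_size is counted in 188-byte packets rather than frames, and the documented option for tolerating receive-buffer overruns is overrun_nonfatal. The values below are only illustrative.)

camera_url = "udp://{address_of_camera}?fifo_size=50000&overrun_nonfatal=1"   # illustrative values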

Should I record continuously (no -vframes flag), store the frames in a queue with a maximum size, and read from that queue buffer whenever I want to slice the video? Will that work well with the keyframes?
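
A minimal sketch of that queue idea, assuming a long-running ffmpeg process decoding to raw frames and a background thread that keeps only the most recent frames; all sizes and names here are illustrative.

import subprocess
import threading
from collections import deque

WIDTH, HEIGHT = 1280, 720                     # assumed resolution
FRAME_SIZE = WIDTH * HEIGHT * 3               # raw bgr24 frame size
MAX_BUFFERED_FRAMES = 250                     # illustrative cap

frame_buffer = deque(maxlen=MAX_BUFFERED_FRAMES)   # oldest frames drop automatically

proc = subprocess.Popen(
    ["ffmpeg", "-i", "udp://{address_of_camera}",
     "-f", "rawvideo", "-pix_fmt", "bgr24", "-an", "pipe:"],
    stdout=subprocess.PIPE,
)

def reader():
    # Keep pulling decoded frames from the always-open ffmpeg pipe.
    while True:
        data = proc.stdout.read(FRAME_SIZE)
        if len(data) < FRAME_SIZE:            # ffmpeg exited or the stream ended
            break
        frame_buffer.append(data)

threading.Thread(target=reader, daemon=True).start()

def grab_latest(n):
    # "Slicing" then just copies the most recent n frames out of the buffer.
    return list(frame_buffer)[-n:]

Because the buffered frames are already decoded raw pictures, every entry is a complete frame, so keyframe boundaries only matter if compressed packets are buffered instead.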

Also, what should I do when ffmpeg fails? Restart the ffmpeg command?
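
One possible pattern, sketched purely as an illustration: supervise the ffmpeg process and relaunch it whenever it exits. Here read_frames is a hypothetical stand-in for the reading loop above, and the back-off value is arbitrary.

import subprocess
import time

FFMPEG_CMD = ["ffmpeg", "-i", "udp://{address_of_camera}",
              "-f", "rawvideo", "-pix_fmt", "bgr24", "-an", "pipe:"]

while True:
    proc = subprocess.Popen(FFMPEG_CMD, stdout=subprocess.PIPE)
    read_frames(proc.stdout)     # hypothetical: the frame-reading loop from above
    proc.wait()                  # returns once ffmpeg has died or the stream ended
    time.sleep(2.0)              # brief back-off before restarting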


-
Creating a virtual microphone on Ubuntu 16.04 and streaming audio into it from an RTSP IP camera
14 January 2021, by sunsetjunks
I need to create both a virtual webcam and a virtual microphone on an Ubuntu 16.04 machine, for use in a web application using WebRTC through my web browser.



I need to feed video and audio to these two virtual devices from an IP camera (RTSP stream).
Playing the RTSP stream directly in VLC works fine, with both video and audio.



For this, I have created a /dev/video1 device with video4linux2.
I am able to feed the IP camera to /dev/video1 with:





ffmpeg -i rtsp://ip_address:554/streaming/channels/101/ -f v4l2 /dev/video1





If I look in VLC, I can select /dev/video1 as a video device, but the only audio device I have is "hw:0,0", which is my built-in microphone.



How do I properly feed such an RTSP stream to both the virtual webcam and a virtual microphone?
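
For the audio half, one approach often described is to create a PulseAudio null sink, feed the camera's audio into it, and expose the sink's monitor as a regular source that WebRTC applications can select. The sketch below, written in Python for consistency with the rest of this page, only shells out to pactl and ffmpeg; the sink/source names and the -device option for ffmpeg's pulse output are assumptions to verify against your own setup.

import subprocess

RTSP_URL = "rtsp://ip_address:554/streaming/channels/101/"

# Create a null sink; anything played into it shows up on camera_sink.monitor.
subprocess.run(["pactl", "load-module", "module-null-sink",
                "sink_name=camera_sink",
                "sink_properties=device.description=CameraSink"], check=True)

# Remap the monitor into a normal-looking source, since some browsers hide
# ".monitor" devices from their microphone list.
subprocess.run(["pactl", "load-module", "module-remap-source",
                "master=camera_sink.monitor",
                "source_name=camera_mic",
                "source_properties=device.description=CameraMic"], check=True)

# Send the camera's video to the v4l2 device and its audio to the null sink.
# The -device option for the pulse output is an assumption; check your ffmpeg build.
subprocess.Popen(["ffmpeg", "-i", RTSP_URL,
                  "-map", "0:v", "-f", "v4l2", "/dev/video1",
                  "-map", "0:a", "-f", "pulse", "-device", "camera_sink",
                  "camera-audio"])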