
Media (3)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (7)
-
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that publish documents of all types.
It creates "media" items, that is: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a "media" article; (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.
-
Custom menus
14 November 2010
MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This lets channel administrators fine-tune the configuration of these menus.
Menus created when the site is initialised
By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is usually inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based templates; (...)
On other sites (4336)
-
Recording RTSP stream with Python
6 May 2022, by ロジャー
Currently I am using MediaPipe with Python to monitor an RTSP stream from my camera, which works as a security camera. Whenever the MediaPipe holistic model detects a human, the script writes the frame to a file.


i.e. roughly (RTSP_URL, OUT_PATH and person_detected() below are placeholders):


import cv2

cap = cv2.VideoCapture(RTSP_URL)                       # RTSP_URL: camera stream address
writer = cv2.VideoWriter(OUT_PATH, fourcc, fps, size)  # output file settings
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if person_detected(frame):    # placeholder for the MediaPipe holistic check
        writer.write(frame)       # store the frame in the output file
writer.release()



Recently I wanted to add audio recording support. From my research, it is not possible to record audio with OpenCV; it has to be done with FFmpeg or PyAudio.


I am facing these difficulties.


-
When a person walks past in front of the camera, it may take less than 2 seconds. By the time the RTSP stream has been read by OpenCV, the human has been detected with MediaPipe, and FFmpeg has been started for recording, that person has already walked far away. So the FFmpeg method does not seem workable for me.

-
For the PyAudio method I am currently studying, I need to create 2 threads establishing individual RTSP connections. One thread is for video, read by OpenCV and MediaPipe. The other thread is for audio, recorded when the OpenCV thread notices that a human has been detected (see the sketch after this list). I have tried using several devices to read the RTSP streams; the devices show timestamps (watermarks on the video) that differ by several seconds, so I doubt I can keep the video from OpenCV and the audio from PyAudio in sync when merging them into one video.
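
A minimal sketch of that two-thread idea, assuming a shared threading.Event signals when a person is in view. RTSP_URL, the VideoWriter settings and the person_detected() stub are placeholders, and note that PyAudio records from an audio input device rather than from the RTSP URL itself:

import threading
import cv2
import pyaudio

RTSP_URL = "rtsp://example.local/stream"          # placeholder camera address
person_present = threading.Event()

def person_detected(frame):
    # Placeholder: replace with the MediaPipe holistic check from the existing script.
    return False

def video_worker():
    cap = cv2.VideoCapture(RTSP_URL)
    writer = cv2.VideoWriter("video.avi", cv2.VideoWriter_fourcc(*"XVID"),
                             15, (640, 480))       # placeholder output settings
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if person_detected(frame):
            person_present.set()
            writer.write(frame)
        else:
            person_present.clear()
    writer.release()

def audio_worker():
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=16000,
                     input=True, frames_per_buffer=1024)
    chunks = []
    while True:                                    # daemon thread; ends with the program
        data = stream.read(1024)
        if person_present.is_set():
            chunks.append(data)                    # keep audio only while a person is in view

video_thread = threading.Thread(target=video_worker)
audio_thread = threading.Thread(target=audio_worker, daemon=True)
video_thread.start()
audio_thread.start()
video_thread.join()

Writing the collected audio chunks to a WAV file and muxing them with video.avi (for example with ffmpeg) would happen once recording stops; tagging each audio chunk and each written frame with time.time() at capture at least makes the drift between the two sources measurable before muxing.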








Is there any suggestion on how to solve this problem?


Thanks.


-
Imagemagick & Pillow generate malformed GIF frames
1 June 2016, by RiTu
I need to extract the middle frame of a GIF animation.
ImageMagick:
convert C:\temp\orig.gif -coalesce C:\temp\frame.jpg
generates the frames properly.
However, when I extract a single frame:
convert C:\temp\orig.gif[4] -coalesce C:\temp\frame.jpg
the frame is malformed, as if the -coalesce option were ignored.
Extraction of individual frames with Pillow and ffmpeg also results in malformed frames, tested on a couple of GIFs.
Download the GIF: https://i.imgur.com/Aus8JpT.gif
I need to be able to extract the middle frame of every GIF with either PIL, ImageMagick or ffmpeg (ideally PIL).
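
A minimal Pillow sketch of one commonly suggested workaround, assuming the malformed frames come from GIF frames that store only the changed pixels: composite each frame onto an accumulated canvas and keep the middle one (the file names are placeholders):

from PIL import Image, ImageSequence

im = Image.open("orig.gif")                      # placeholder input path
canvas = None
composited = []
for frame in ImageSequence.Iterator(im):
    rgba = frame.convert("RGBA")
    if canvas is None:
        canvas = rgba
    else:
        canvas = canvas.copy()
        canvas.paste(rgba, (0, 0), rgba)         # overlay the partial frame onto the canvas
    composited.append(canvas)

middle = composited[len(composited) // 2]
middle.convert("RGB").save("frame.jpg")          # placeholder output path

On the ImageMagick side, -coalesce needs the earlier frames to reconstruct frame 4, which is likely why orig.gif[4] alone looks broken; selecting a range such as orig.gif[0-4] before -coalesce and keeping the last output image may avoid that.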
-
Extract frames as images from an RTMP stream in real-time
7 November 2014, by SoftForge
I am streaming short videos (4 or 5 seconds) encoded in H264 at 15 fps in VGA quality from different clients to a server using RTMP, which produces an FLV file. I need to analyse the frames from the video as images as soon as possible, so I need the frames to be written as PNG images as they are received.
Currently I use Wowza to receive the streams, and I have tried using the transcoder API to access the individual frames and write them to PNGs. This partially works, but there is about a second of delay before the transcoder starts processing, and when the stream ends Wowza flushes its buffers, so the last second does not get transcoded, meaning I can lose the last 25% of the video frames. I have tried to find a workaround, but Wowza says it is not possible to prevent the buffer from being flushed. It is also not the ideal solution because there is a 1 second delay before I start getting frames, and I have to re-encode the video when using the transcoder, which is computationally expensive and unnecessary for my needs.
I have also tried piping the video in real time to FFmpeg and getting it to produce the PNG images, but unfortunately it waits until it has received the entire video before producing the PNG frames.
How can I extract all of the frames from the stream as close to real time as possible? I don't mind what language or technology is used as long as it can run on a Linux server. I would be happy to use FFmpeg if I can find a way to get it to write the images while it is still receiving the video (see the sketch at the end), or even Wowza if I can find a way not to lose frames and not to re-encode.
Thanks for any help or suggestions.
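
A rough sketch of the FFmpeg route, assuming ffmpeg on the server can read the RTMP URL directly; the URL and the output pattern are placeholders. With the image2 muxer, ffmpeg writes each decoded frame to its own numbered PNG as it is decoded, so the images should appear while the stream is still being received rather than after it ends:

import subprocess

RTMP_URL = "rtmp://server/app/stream"      # placeholder stream address
cmd = [
    "ffmpeg",
    "-fflags", "nobuffer",                 # reduce input buffering latency
    "-i", RTMP_URL,
    "-vf", "fps=15",                       # one image per source frame at 15 fps
    "-f", "image2",
    "frames/frame_%05d.png",               # placeholder pattern; the frames/ directory must exist
]
subprocess.run(cmd, check=True)

This avoids re-encoding the video itself (only the per-frame PNG encode is paid for), though whether the startup and per-frame latency beat the Wowza transcoder approach would have to be measured.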