
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (73)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)
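A rough sketch of that schema, for illustration only: the column types are assumptions based on the field descriptions above, and sqlite3 stands in here for the MySQL database SPIP actually uses.

import sqlite3

# Illustrative schema for the SPIPmotion queue table.
# Column types are assumptions; SPIP itself targets MySQL.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE spip_spipmotion_attentes (
        id_spipmotion_attente INTEGER PRIMARY KEY,  -- unique id of the task to process
        id_document           INTEGER,              -- original document to encode
        id_objet              INTEGER,              -- object to attach the encoded document to
        objet                 TEXT                  -- type of that object
    )
""")
-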
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
MediaSPIP Core: Configuration
9 November 2010
MediaSPIP Core provides three configuration pages by default (these pages rely on the CFG configuration plugin): a page for the general configuration of the templates (squelettes); a page for configuring the site's home page; a page for configuring the site's sections.
It also provides an additional page that only appears when certain plugins are activated, allowing their specific display options and features to be controlled (...)
On other sites (7840)
-
Extracting each individual frame from an H264 stream for real-time analysis with OpenCV
5 May 2017, by exclmtnpt
Problem Outline
I have an h264 real-time video stream (I’ll call this "the stream") being captured in Process1. My goal is to extract each frame from the stream as it comes through and use Process2 to analyze it with OpenCV. (Process1 is nodejs, Process2 is Python)
Things I’ve tried, and their failure modes:
- Send the stream directly from Process1 to Process2 over a named FIFO pipe:
I succeeded in directing the stream from Process1 into the pipe. However, in Process2 (which is Python) I could not (a) extract individual frames from the stream, and (b) convert any extracted data from h264 into an OpenCV format (e.g. JPEG, numpy array).
I had hoped to use OpenCV’s VideoCapture() method, but it does not allow you to pass a FIFO pipe as an input. I was able to use VideoCapture by saving the h264 stream to a .h264 file and then passing that as the file path. This doesn’t help me, because I need to do my analysis in real time (i.e. I can’t save the stream to a file before reading it into OpenCV).
- Pipe the stream from Process1 to FFmpeg, use FFmpeg to change the stream format from h264 to MJPEG, then pipe the output to Process2:
I attempted this using the command:
cat pipeFromProcess1.fifo | ffmpeg -i pipe:0 -f h264 -f mjpeg pipe:1 | cat > pipeToProcess2.fifo
The biggest issue with this approach is that FFmpeg consumes input from Process1 until Process1 is killed, and only then does Process2 begin to receive the data.
Additionally, on the Process2 side, I still don’t understand how to extract individual frames from the data coming over the pipe. I open the pipe for reading (as "f") and then execute data = f.readline(). The size of data varies drastically (some reads have a length on the order of 100 bytes, others on the order of 1,000). When I use f.read() instead of f.readline(), the length is much larger, on the order of 100,000.
Even if I knew I was getting the correct-size chunk of data, I would still not know how to transform it into an OpenCV-compatible array, because I don’t understand the format it arrives in. It’s a string, but when I print it out it looks like this:
[several hundred bytes of unprintable binary data]
Converting from base64 doesn’t seem to help. I also tried:
array = np.fromstring(data, dtype=np.uint8)
which does convert to an array, but not one whose size makes sense given the 640x368x3 dimensions of the frames I’m trying to decode (a sketch of the fixed-size-read approach I have in mind follows this list).
- Using decoders such as Broadway.js to convert the h264 stream:
These seem to be focused on streaming to a website, and I did not have success trying to re-purpose them for my goal.
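To make the target concrete, here is a sketch of what I imagine a working pipeline would look like: let ffmpeg do the h264 decoding and emit raw BGR frames, so that every frame arriving on the pipe has a fixed, known size (640 x 368 x 3 = 706,560 bytes) and can be reshaped directly into an OpenCV-compatible array. This is untested; the FIFO name and frame dimensions come from above, everything else is illustrative.

import subprocess
import numpy as np
import cv2

WIDTH, HEIGHT = 640, 368
FRAME_SIZE = WIDTH * HEIGHT * 3  # bytes in one raw BGR frame: 706,560

# Let ffmpeg decode the incoming h264 and emit fixed-size raw frames.
# "pipeFromProcess1.fifo" is the named pipe that Process1 writes into.
proc = subprocess.Popen(
    ["ffmpeg",
     "-f", "h264",                  # input is a raw h264 elementary stream
     "-i", "pipeFromProcess1.fifo",
     "-f", "rawvideo",              # output raw frames, no container
     "-pix_fmt", "bgr24",           # OpenCV's native channel order
     "pipe:1"],
    stdout=subprocess.PIPE)

while True:
    raw = proc.stdout.read(FRAME_SIZE)  # blocks until one full frame (or EOF)
    if len(raw) < FRAME_SIZE:           # stream ended or pipe closed
        break
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))
    # "frame" is now an ordinary OpenCV image; analysis would go here, e.g.:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)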
Clarification about what I’m NOT trying to do:
I’ve found many related questions about streaming h264 video to a website. This is a solved problem, but none of the solutions help me extract individual frames and put them in an OpenCV-compatible format.
Also, I need to use the extracted frames in real time on a continual basis. So saving each frame as a .jpg is not helpful.
System Specs
Raspberry Pi 3 running Raspbian Jessie
Additional Detail
I’ve tried to generalize the problem in my question. If it’s useful to know, Process1 uses the node-bebop package to pull down the h264 stream (via drone.getVideoStream()) from a Parrot Bebop 2.0. I also tried the other video stream available through node-bebop (getMjpegStream()). That worked, but was not nearly real-time; I was getting very intermittent data. I’ve entered that specific problem as an issue in the node-bebop repository.
Thanks for reading; I really appreciate any help anyone can give!
-
Why does ffmpeg show 2 bitrates? How do I change the second one?
31 July 2016, by 5argon
I was investigating why my video plays poorly in my application, so I compared it with a video that works fine, using:
ffmpeg -i file
The one that works:
Duration: 00:02:19.96, start: 0.540000, bitrate: 1159 kb/s
Stream #0:0[0x1e0]: Video: mpeg1video, yuv420p(tv), 640x480 [SAR 1:1 DAR 4:3], 1150 kb/s, 25 fps, 25 tbr, 90k tbn, 25 tbc

The one that does not work:
Duration: 00:02:15.24, start: 0.533367, bitrate: 980 kb/s
Stream #0:0[0x1e0]: Video: mpeg1video, yuv420p(tv), 256x256 [SAR 1:1 DAR 1:1], 104857 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 29.97 tbc

I notice 104857 kb/s in my video, which probably causes the performance problem, but what is it? I could not find information about how to read this output. The bitrate seems to be 980 kb/s, so what is this other kb/s figure? When I specify a bitrate using -vb, it seems to affect only the first one.
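For reference, the first figure is the container-level (format) bitrate, while the figure inside the Stream line is the bitrate declared in that stream’s own headers. A small sketch of how one might print the two separately with ffprobe (assuming ffprobe is installed; the file name is a placeholder):

import json
import subprocess

# Query the container-level bitrate and each stream's declared bitrate.
out = subprocess.check_output(
    ["ffprobe", "-v", "error",
     "-show_entries", "format=bit_rate:stream=bit_rate",
     "-of", "json", "myvideo.mpg"])  # placeholder file name
info = json.loads(out)

print("container bitrate:", info["format"].get("bit_rate"))
for stream in info["streams"]:
    print("stream bitrate:", stream.get("bit_rate"))
-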
ffmpeg livestream only shows one frame at a time
28 October 2016, by user3308335
So, I’ve tried to turn one of my Pis into a silly "baby cam" for my pet, and I followed the tutorial made by Ustream.tv on how to do this.
This is the script I run to start the stream:
#!/bin/bash
RTMP_URL=<rtmpurl>
STREAM_KEY=<streamkey>
while :
do
    raspivid -n -hf -t 0 -w 640 -h 480 -fps 15 -b 400000 -o - | ffmpeg -i - -vcodec copy -an -f flv $RTMP_URL/$STREAM_KEY
    sleep 2
done

However, whenever I go to view the stream, it shows only one frame: the same frame until I refresh the browser, watch the ad again, and then it shows a new frame, which is again frozen.
Does anyone have an idea why this might be happening, or any troubleshooting tricks for me to try?
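One possible factor, offered as an assumption rather than a confirmed diagnosis: raspivid emits a raw H.264 elementary stream with no timestamps, and ffmpeg assumes a default input rate of 25 fps for such streams, which does not match the camera’s 15 fps, so the copied stream can end up with inconsistent timing. A Python sketch of the same loop with an explicit input rate (untested; the URL and key are placeholders):

import subprocess
import time

RTMP_URL = "rtmp://example.invalid/live"  # placeholder
STREAM_KEY = "xxxx"                       # placeholder

while True:
    # Camera out: raw h264 at 15 fps, written to stdout.
    raspivid = subprocess.Popen(
        ["raspivid", "-n", "-hf", "-t", "0", "-w", "640", "-h", "480",
         "-fps", "15", "-b", "400000", "-o", "-"],
        stdout=subprocess.PIPE)
    # "-r 15" before "-i -" tells ffmpeg the true input frame rate.
    ffmpeg = subprocess.Popen(
        ["ffmpeg", "-r", "15", "-i", "-", "-vcodec", "copy", "-an",
         "-f", "flv", RTMP_URL + "/" + STREAM_KEY],
        stdin=raspivid.stdout)
    ffmpeg.wait()
    time.sleep(2)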