Other articles (105)

  • Enhancing it visually

    10 April 2011

    MediaSPIP is based on a system of themes and templates ("squelettes"). The templates define where information is placed on the page, defining a specific use of the platform, while the themes provide the overall graphic design.
    Anyone can propose a new graphic theme or template and make it available to the community.

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • Adding user-specific information and other author-related behavior changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviors (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".

On other sites (8215)

  • Convert individual pixel values from RGB to YUV420 and save the frame - C++

    24 March 2014, by learner

    I have been working on RGB->YUV420 conversion for some time using the FFmpeg library. I have already tried the sws_scale functionality, but it is not working well. Now I have decided to convert each pixel individually, using colorspace conversion formulae. The following code gets me a few frames and lets me access the individual R, G, B values of each pixel:

    // Read frames and save first five frames to disk
       i=0;
       while((av_read_frame(pFormatCtx, &packet)>=0) && (i<5))
       {
           // Is this a packet from the video stream?
           if(packet.stream_index==videoStreamIdx)
           {  
               // Decode video frame
               avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

               // Did we get a video frame?
               if(frameFinished)
               {
                   i++;
                   sws_scale(img_convert_ctx, (const uint8_t * const *)pFrame->data,
                             pFrame->linesize, 0, pCodecCtx->height,
                             pFrameRGB->data, pFrameRGB->linesize);

                   int x, y, R, G, B;
                   for(y = 0; y < pCodecCtx->height; y++)
                   {
                       // Start each row at data[0] + y*linesize[0]; rows can be
                       // padded, so don't assume linesize equals width*3.
                       uint8_t *p = pFrameRGB->data[0] + y*pFrameRGB->linesize[0];
                       for(x = 0; x < pCodecCtx->width; x++)
                       {
                           R = *p++;
                           G = *p++;
                           B = *p++;
                           printf(" %d-%d-%d ", R, G, B);
                       }
                   }

                   SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);
               }
           }

           // Free the packet that was allocated by av_read_frame
           av_free_packet(&packet);
       }

    I read online that to convert RGB->YUV420, or vice versa, one should first convert to the YUV444 format; so the chain is RGB->YUV444->YUV420. How do I implement this in C++?
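
    A minimal sketch of that chain, under stated assumptions: the input is packed RGB24 with no row padding, width and height are even, and the integer coefficients are the widely used fixed-point BT.601 (studio-swing) approximations. The helper names rgb24_to_yuv420p and clamp8 are made up for illustration, not an FFmpeg API:

    #include <cstdint>
    #include <algorithm>

    // Clamp an int to the 0..255 byte range.
    static uint8_t clamp8(int v) { return (uint8_t)std::min(std::max(v, 0), 255); }

    // RGB24 -> YUV420P: a Y sample per pixel (the "YUV444" step), then one
    // U and one V per 2x2 block (the "420" subsampling step).
    // Y is w*h bytes; U and V are (w/2)*(h/2) bytes each.
    void rgb24_to_yuv420p(const uint8_t *rgb, int w, int h,
                          uint8_t *Y, uint8_t *U, uint8_t *V)
    {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                const uint8_t *px = rgb + (y*w + x)*3;
                int R = px[0], G = px[1], B = px[2];

                // Luma for every pixel.
                Y[y*w + x] = clamp8((( 66*R + 129*G +  25*B + 128) >> 8) + 16);

                // Chroma kept only for the top-left pixel of each 2x2 block;
                // averaging the four pixels gives slightly better quality.
                if ((y & 1) == 0 && (x & 1) == 0) {
                    U[(y/2)*(w/2) + x/2] = clamp8(((-38*R - 74*G + 112*B + 128) >> 8) + 128);
                    V[(y/2)*(w/2) + x/2] = clamp8(((112*R - 94*G -  18*B + 128) >> 8) + 128);
                }
            }
        }
    }

    For production use, sws_scale with an RGB24 -> YUV420P SwsContext does the same arithmetic in optimized form; a manual loop like this is mainly useful for understanding and debugging.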

    Also, here is the SaveFrame() function used above. I guess this will also have to change a little, since YUV420 stores data differently. How do I take care of that?

    void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame)
    {
       FILE *pFile;
       char szFilename[32];
       int  y;

       // Open file
       sprintf(szFilename, "frame%d.ppm", iFrame);
       pFile=fopen(szFilename, "wb");
       if(pFile==NULL)
           return;

       // Write header
       fprintf(pFile, "P6\n%d %d\n255\n", width, height);

       // Write pixel data
       for(y=0; y<height; y++)
           fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

       // Close file
       fclose(pFile);
    }

    Can somebody please suggest a solution? Many thanks!
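
    For reference, here is one way SaveFrame() could be adapted once the data is YUV420P. PPM can only hold packed RGB, so a YUV version has to dump the three planes raw instead; SaveFrameYUV420P is a made-up name for this sketch. The output can be checked with e.g. ffplay -f rawvideo -pixel_format yuv420p -video_size <width>x<height> frame1.yuv:

    // Hypothetical YUV420P variant of SaveFrame(): writes the three planes
    // as raw bytes, honouring each plane's own linesize (U and V are half
    // resolution in both directions).
    void SaveFrameYUV420P(AVFrame *pFrame, int width, int height, int iFrame)
    {
        char szFilename[32];
        int  y;

        sprintf(szFilename, "frame%d.yuv", iFrame);
        FILE *pFile = fopen(szFilename, "wb");
        if (pFile == NULL)
            return;

        // Y plane: width x height bytes
        for (y = 0; y < height; y++)
            fwrite(pFrame->data[0] + y*pFrame->linesize[0], 1, width, pFile);
        // U plane, then V plane: (width/2) x (height/2) bytes each
        for (y = 0; y < height/2; y++)
            fwrite(pFrame->data[1] + y*pFrame->linesize[1], 1, width/2, pFile);
        for (y = 0; y < height/2; y++)
            fwrite(pFrame->data[2] + y*pFrame->linesize[2], 1, width/2, pFile);

        fclose(pFile);
    }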

  • Extracting each individual frame from an H264 stream for real-time analysis with OpenCV

    11 March 2020, by exclmtnpt

    Problem Outline

    I have an h264 real-time video stream (I’ll call this "the stream") being captured in Process1. My goal is to extract each frame from the stream as it comes through and use Process2 to analyze it with OpenCV. (Process1 is nodejs, Process2 is Python)

    Things I’ve tried, and their failure modes:

    • Send the stream directly from Process1 to Process2 over a named FIFO pipe:

    I succeeded in directing the stream from Process1 into the pipe. However, in Process2 (which is Python) I could not (a) extract individual frames from the stream, or (b) convert any extracted data from h264 into an OpenCV format (e.g. a JPEG or numpy array).

    I had hoped to use OpenCV’s VideoCapture() method, but it does not allow you to pass a FIFO pipe as an input. I was able to use VideoCapture by saving the h264 stream to a .h264 file, and then passing that as the file path. This doesn’t help me, because I need to do my analysis in real time (i.e. I can’t save the stream to a file before reading it into OpenCV).

    • Pipe the stream from Process1 to FFMPEG, use FFMPEG to change the stream format from h264 to MJPEG, then pipe the output to Process2:

    I attempted this using the command:

    cat pipeFromProcess1.fifo | ffmpeg -i pipe:0 -f h264 -f mjpeg pipe:1 | cat > pipeToProcess2.fifo

    The biggest issue with this approach is that FFMPEG takes inputs from Process1 until Process1 is killed, and only then does Process2 begin to receive the data.

    Additionally, on the Process2 side, I still don’t understand how to extract individual frames from the data coming over the pipe. I open the pipe for reading (as "f") and then execute data = f.readline(). The size of data varies drastically (some reads have length on the order of 100, others length on the order of 1,000). When I use f.read() instead of f.readline(), the length is much larger, on the order of 100,000.

    Even if I knew I was getting a correctly sized chunk of data, I would still not know how to transform it into an OpenCV-compatible array, because I don’t understand the format it arrives in. It’s a string, but when I print it out it looks like this:

    ��_M 0A0����tQ,\%��e���f/�H�#Y�p�f#�Kus�} F����ʳa�G������+$x�%V�� }[����Wo �1’̶A���c����*�&=Z^�o’��Ͽ� SX-ԁ涶V&H|��$
     ��<�E�� ��>�����u���7�����cR� �f�=�9 ��fs�q�ڄߧ�9v�]�Ӷ���& gr]�n�IRܜ�檯����

    � ����+ �I��w�}� ��9�o��� �w��M�m���IJ ��� �m�=�Soՙ}S �>j �,�ƙ�’���tad =i ��WY�FeC֓z �2�g� ;EXX��S��Ҁ*, ���w� _|�&�y��H��=��)� ���Ɗ3@ �h���Ѻ�Ɋ��ZzR`��)�y�� c�ڋ.��v� !u���� �S�I#�$9R�Ԯ0py z ��8 #��A�q�� �͕� ijc �bp=��۹ c SqH

    Converting from base64 doesn’t seem to help. I also tried:

    array = np.fromstring(data, dtype=np.uint8)

    which does convert to an array, but not one whose size makes sense given the 640x368x3 dimensions of the frames I’m trying to decode. (A read-exact-frame-size alternative is sketched just after this list.)

    • Using decoders such as Broadway.js to convert the h264 stream:

    These seem to be focused on streaming to a website, and I did not have success trying to re-purpose them for my goal.
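
    One approach that seems workable, sketched under assumptions rather than tested: have ffmpeg emit fixed-size raw frames (-f rawvideo -pix_fmt bgr24), so that every frame on the pipe is exactly 640*368*3 bytes, then read exactly that many bytes per iteration. The sketch below is C++/OpenCV to match the other post on this page; in Python the same loop would be data = f.read(FRAME_SIZE) followed by np.frombuffer(data, np.uint8).reshape((368, 640, 3)). File names and dimensions are taken from the question:

    #include <cstdint>
    #include <cstdio>
    #include <vector>
    #include <opencv2/opencv.hpp>

    int main()
    {
        // Dimensions from the question; bgr24 means 3 bytes per pixel, so
        // every frame on the pipe is exactly this many bytes.
        const int W = 640, H = 368;
        const size_t FRAME_SIZE = (size_t)W * H * 3;

        // Assumes ffmpeg was started separately, decoding the h264 fifo into
        // fixed-size raw frames, e.g.:
        //   ffmpeg -i pipeFromProcess1.fifo -f rawvideo -pix_fmt bgr24 -y pipeToProcess2.fifo
        FILE *pipe = fopen("pipeToProcess2.fifo", "rb");
        if (!pipe) return 1;

        std::vector<uint8_t> buf(FRAME_SIZE);
        // Read exactly one frame's worth of bytes per iteration; fread blocks
        // until FRAME_SIZE bytes arrive or the writer closes the pipe.
        while (fread(buf.data(), 1, FRAME_SIZE, pipe) == FRAME_SIZE) {
            // Wrap the raw bytes as an OpenCV image (no copy) and analyze it.
            cv::Mat frame(H, W, CV_8UC3, buf.data());
            cv::imshow("frame", frame);   // stand-in for the real analysis
            if (cv::waitKey(1) == 27) break;
        }
        fclose(pipe);
        return 0;
    }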

    Clarification about what I’m NOT trying to do:

    I’ve found many related questions about streaming h264 video to a website. This is a solved problem, but none of the solutions help me extract individual frames and put them in an OpenCV-compatible format.

    Also, I need to use the extracted frames in real time on a continual basis. So saving each frame as a .jpg is not helpful.

    System Specs

    Raspberry Pi 3 running Raspbian Jessie

    Additional Detail

    I’ve tried to generalize the problem I’m having in my question. If it’s useful to know, Process1 is using the node-bebop package to pull down the h264 stream (using drone.getVideoStream()) from a Parrot Bebop 2.0. I tried using the other video stream available through node-bebop (getMjpegStream()). This worked, but was not nearly real-time; I was getting very intermittent data streams. I’ve entered that specific problem as an Issue in the node-bebop repository.

    Thanks for reading; I really appreciate any help anyone can give!
