
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (82)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet? Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
-
Use it, talk about it, critique it
10 April 2011
The first attitude to adopt is to talk about it, either directly with the people involved in its development, or with those around you, to convince new people to use it.
The larger the community, the faster it will evolve...
A mailing list is available for any exchange between users.
-
Retrieving information from the master site when installing an instance
26 November 2010, by
Purpose
On the main site, a pooled instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the pooled instance;
It can therefore be quite sensible to want to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)
On other sites (4541)
-
Extracting each individual frame from an H264 stream for real-time analysis with OpenCV
11 March 2020, by exclmtnpt
Problem Outline
I have an h264 real-time video stream (I’ll call this "the stream") being captured in Process1. My goal is to extract each frame from the stream as it comes through and use Process2 to analyze it with OpenCV. (Process1 is nodejs, Process2 is Python)
Things I’ve tried, and their failure modes:
- Send the stream directly from Process1 to Process2 over a named FIFO pipe:
I succeeded in directing the stream from Process1 into the pipe. However, in Process2 (which is Python) I could not (a) extract individual frames from the stream, and (b) convert any extracted data from h264 into an OpenCV format (e.g. JPEG, numpy array).
I had hoped to use OpenCV’s VideoCapture() method, but it does not allow you to pass a FIFO pipe as an input. I was able to use VideoCapture by saving the h264 stream to a .h264 file, and then passing that as the file path. This doesn’t help me, because I need to do my analysis in real time (i.e. I can’t save the stream to a file before reading it in to OpenCV).
- Pipe the stream from Process1 to FFMPEG, use FFMPEG to change the stream format from h264 to MJPEG, then pipe the output to Process2:
I attempted this using the command:
cat pipeFromProcess1.fifo | ffmpeg -i pipe:0 -f h264 -f mjpeg pipe:1 | cat > pipeToProcess2.fifo
The biggest issue with this approach is that FFMPEG takes inputs from Process1 until Process1 is killed, and only then does Process2 begin to receive the data.
Additionally, on the Process2 side, I still don’t understand how to extract individual frames from the data coming over the pipe. I open the pipe for reading (as "f") and then execute data = f.readline(). The size of data varies drastically (some reads have length on the order of 100, others length on the order of 1,000). When I use f.read() instead of f.readline(), the length is much larger, on the order of 100,000.
If I were to know that I was getting the correct size chunk of data, I would still not know how to transform it into an OpenCV-compatible array, because I don’t understand the format it’s coming over in. It’s a string, but when I print it out it looks like this:
[unprintable binary data]
Converting from base64 doesn’t seem to help. I also tried:
array = np.fromstring(data, dtype=np.uint8)
which does convert to an array, but not one whose size makes sense given the 640x368x3 dimensions of the frames I’m trying to decode (see the sketch after this list of attempts).
- Using decoders such as Broadway.js to convert the h264 stream:
These seem to be focused on streaming to a website, and I did not have success trying to re-purpose them for my goal.
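To make attempt 2 more concrete, here is the rough shape of what I think a fixed-size-frame version would look like: have ffmpeg decode the incoming h264 and emit raw BGR frames, then read exactly width*height*3 bytes per frame on the Python side. This is an untested sketch, not something I have working; the pipe name and the 640x368 frame size are just the ones from my setup, and I am assuming rawvideo/bgr24 output is the right way to get OpenCV-compatible bytes.

import subprocess
import numpy as np
import cv2

WIDTH, HEIGHT = 640, 368          # frame size of my stream
FRAME_SIZE = WIDTH * HEIGHT * 3   # bytes per BGR24 frame

# Have ffmpeg decode the h264 from the named pipe and emit fixed-size raw BGR frames on stdout
proc = subprocess.Popen(
    ["ffmpeg", "-f", "h264", "-i", "pipeFromProcess1.fifo",
     "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"],
    stdout=subprocess.PIPE)

while True:
    raw = proc.stdout.read(FRAME_SIZE)
    if len(raw) < FRAME_SIZE:
        break  # end of stream (or stream cut short)
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # any per-frame OpenCV analysis goes here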
Clarification about what I’m NOT trying to do:
I’ve found many related questions about streaming h264 video to a website. This is a solved problem, but none of the solutions help me extract individual frames and put them in an OpenCV-compatible format.
Also, I need to use the extracted frames in real time on a continual basis. So saving each frame as a .jpg is not helpful.
System Specs
Raspberry Pi 3 running Raspbian Jessie
Additional Detail
I’ve tried to generalize the problem I’m having in my question. If it’s useful to know, Process1 is using the node-bebop package to pull down the h264 stream (using drone.getVideoStream()) from a Parrot Bebop 2.0. I tried using the other video stream available through node-bebop (getMjpegStream()). This worked, but was not nearly real-time; I was getting very intermittent data streams. I’ve entered that specific problem as an Issue in the node-bebop repository.
Thanks for reading; I really appreciate any help anyone can give!
-
Convert individual pixel values from RGB to YUV420 and save the frame - C++
24 March 2014, by learner
I have been working with RGB->YUV420 conversion for some time using the FFmpeg library. I already tried the sws_scale functionality, but it’s not working well. Now I have decided to convert each pixel individually, using colorspace conversion formulae. So, the following is the code that gets me a few frames and allows me to access the individual R, G, B values of each pixel:
// Read frames and save first five frames to disk
i=0;
while((av_read_frame(pFormatCtx, &packet)>=0) && (i<5))
{
    // Is this a packet from the video stream?
    if(packet.stream_index==videoStreamIdx)
    {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        // Did we get a video frame?
        if(frameFinished)
        {
            i++;
            sws_scale(img_convert_ctx, (const uint8_t * const *)pFrame->data,
                      pFrame->linesize, 0, pCodecCtx->height,
                      pFrameRGB->data, pFrameRGB->linesize);
            int x, y, R, G, B;
            for(y = 0; y < pCodecCtx->height; y++)
            {
                // Start each row at data[0] + y*linesize[0] so any row padding is skipped
                uint8_t *p = pFrameRGB->data[0] + y * pFrameRGB->linesize[0];
                for(x = 0; x < pCodecCtx->width; x++)
                {
                    R = *p++;
                    G = *p++;
                    B = *p++;
                    printf(" %d-%d-%d ", R, G, B);
                }
            }
            SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);
        }
    }
    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
}
I read online that to convert RGB->YUV420 or vice versa, one should first convert to YUV444 format, so the path is RGB->YUV444->YUV420. How do I implement this in C++?
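To pin down what I mean by that intermediate step, here is my current understanding of the arithmetic as a Python/NumPy sketch (not the C++ I am asking for, just the formulae). I am assuming the usual full-range BT.601/JPEG coefficients; please correct me if the YUV444 step should use something else.

import numpy as np

def rgb_to_yuv420(rgb):
    # rgb: (height, width, 3) uint8 array with even height and width.
    # Returns the planar Y, U, V arrays of a YUV420 frame.
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)

    # RGB -> YUV444: one Y/U/V triple per pixel
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0

    # YUV444 -> YUV420: keep Y at full resolution, average each 2x2 block of U and V
    u420 = u.reshape(u.shape[0] // 2, 2, u.shape[1] // 2, 2).mean(axis=(1, 3))
    v420 = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2).mean(axis=(1, 3))

    clip = lambda plane: np.clip(plane + 0.5, 0, 255).astype(np.uint8)
    return clip(y), clip(u420), clip(v420)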
Also, here is the SaveFrame() function used above. I guess this will also have to change a little, since YUV420 stores data differently. How do I take care of that?
void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame)
{
    FILE *pFile;
    char szFilename[32];
    int y;
    // Open file
    sprintf(szFilename, "frame%d.ppm", iFrame);
    pFile=fopen(szFilename, "wb");
    if(pFile==NULL)
        return;
    // Write header
    fprintf(pFile, "P6\n%d %d\n255\n", width, height);
    // Write pixel data, one row at a time to respect the linesize padding
    for(y=0; y<height; y++)
        fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);
    // Close file
    fclose(pFile);
}
Can somebody please suggest? Many thanks!!!
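For the storage part, my understanding is that YUV420 is planar: instead of one interleaved buffer, a frame is the full-resolution Y plane followed by the quarter-size U and V planes, so the PPM (P6) header above only makes sense for RGB. A hypothetical helper in the same sketchy Python style as the conversion above, writing a headerless .yuv file instead:

def save_yuv420_frame(y, u, v, iframe):
    # y is width*height bytes; u and v are (width/2)*(height/2) bytes each
    with open("frame%d.yuv" % iframe, "wb") as f:
        f.write(y.tobytes())
        f.write(u.tobytes())
        f.write(v.tobytes())

If that is right, the result should be viewable with something like ffplay -f rawvideo -pixel_format yuv420p -video_size WIDTHxHEIGHT frame1.yuv, but I have not verified any of this.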
-
How to set individual image display durations with ffmpeg-python
20 September 2022, by tompi
I am using ffmpeg-python 0.2.0 with Python 3.10.0. Displaying videos in VLC 3.0.17.4.


I am making an animation from a set of images. Each image is displayed for a different amount of time.


I have the basics in place with inputting images and concatenating streams, but I can't figure out how to correctly set frame duration.


Consider the following example:


stream1 = ffmpeg.input(image1_file)
stream2 = ffmpeg.input(image2_file)
combined_streams = ffmpeg.concat(stream1, stream2)
output_stream = ffmpeg.output(combined_streams, output_file)
ffmpeg.run(output_stream)



With this I get a video with a duration of a split second that barely shows an image before ending, which is to be expected with two individual frames.


For this example, my goal is to have a video of 5 seconds total duration, showing the image in stream1 for 2 seconds and the image in stream2 for 3 seconds.


Attempt 1: Setting t for inputs

stream1 = ffmpeg.input(image1_file, t=2)
stream2 = ffmpeg.input(image2_file, t=3)
combined_streams = ffmpeg.concat(stream1, stream2)
output_stream = ffmpeg.output(combined_streams, output_file)
ffmpeg.run(output_stream)



With this, I get a video with the duration of a split second and no image displayed.


Attempt 2: Setting frames for inputs

stream1 = ffmpeg.input(image1_file, frames=48)
stream2 = ffmpeg.input(image2_file, frames=72)
combined_streams = ffmpeg.concat(stream1, stream2)
output_stream = ffmpeg.output(combined_streams, output_file, r=24)
ffmpeg.run(output_stream)



In this case, I get the following error from ffmpeg:


Option frames (set the number of frames to output) cannot be applied to input url ########## -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to.



I can't tell if this is a bug in ffmpeg-python or if I did it wrong.
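To debug this I may just inspect the command line that ffmpeg-python generates, which should show on which side of the input the frames option ended up. If I understand the library correctly, something like the following prints the argument list (treat this as a guess about the API rather than verified usage):

# Print the ffmpeg arguments that ffmpeg-python would run for this graph
print(ffmpeg.get_args(output_stream))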


Attempt 3: Setting framerate for inputs

stream1 = ffmpeg.input(image1_file, framerate=1/2)
stream2 = ffmpeg.input(image2_file, framerate=1/3)
combined_streams = ffmpeg.concat(stream1, stream2)
output_stream = ffmpeg.output(combined_streams, output_file)
ffmpeg.run(output_stream)



With this, I get a video with the duration of a split second and no image displayed. However, when I set both framerate values to 1/2, I get an animation of 4 seconds duration that displays the first image for two seconds and the second image for two seconds. This is the closest I got to a functional solution, but it is not quite there.


I am aware that multiple images can be globbed by input, but that would apply the same duration setting to all images, and my images each have different durations, so I am looking for a different solution.


Any ideas for how to get ffmpeg-python to do this would be much appreciated.
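One more direction I am considering but have not yet verified: treat each still image as a looped input and trim it to its own duration, so the per-image timing comes from the plain ffmpeg -loop 1 and -t input options rather than from the frame rate. A sketch, assuming the images share the same dimensions and that these options pass straight through as keyword arguments:

import ffmpeg

# Per-image display durations in seconds (the 2s/3s goal from above)
images = [(image1_file, 2), (image2_file, 3)]

# -loop 1 repeats the single image; -t trims that looped input to the wanted duration
streams = [
    ffmpeg.input(path, loop=1, t=duration, framerate=24)
    for path, duration in images
]

combined_streams = ffmpeg.concat(*streams)
output_stream = ffmpeg.output(combined_streams, output_file, r=24, pix_fmt="yuv420p")
ffmpeg.run(output_stream)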