
Media (91)
-
Les Miserables
9 December 2019, by
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019, by
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014, by
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014, by
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit il mes mails ?
4 February 2014, by
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013, by
Updated: October 2013
Language: French
Type: Image
Other articles (107)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
User profiles
12 April 2011, by
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...) -
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language. Once one has, the language becomes greyed out in the configuration and (...)
On other sites (6569)
-
How to set individual image display durations with ffmpeg-python
20 September 2022, by tompi
I am using ffmpeg-python 0.2.0 with Python 3.10.0, displaying videos in VLC 3.0.17.4.


I am making an animation from a set of images. Each image is displayed for different amount of time.


I have the basics in place with inputting images and concatenating streams, but I can't figure out how to correctly set frame duration.


Consider the following example:


stream1 = ffmpeg.input(image1_file)
stream2 = ffmpeg.input(image2_file)
combined_streams = ffmpeg.concat(stream1, stream2)
output_stream = ffmpeg.output(combined_streams, output_file)
ffmpeg.run(output_stream)



With this I get a video lasting a split second that barely shows an image before ending, which is to be expected from two individual frames.


For this example, my goal is to have a video of 5 seconds total duration, showing the image in stream1 for 2 seconds and the image in stream2 for 3 seconds.


Attempt 1: Setting t for inputs

stream1 = ffmpeg.input(image1_file, t=2)
stream2 = ffmpeg.input(image2_file, t=3)
combined_streams = ffmpeg.concat(stream1, stream2)
output_stream = ffmpeg.output(combined_streams, output_file)
ffmpeg.run(output_stream)



With this, I get a video with the duration of a split second and no image displayed.


Attempt 2: Setting frames for inputs

stream1 = ffmpeg.input(image1_file, frames=48)
stream2 = ffmpeg.input(image2_file, frames=72)
combined_streams = ffmpeg.concat(stream1, stream2)
output_stream = ffmpeg.output(combined_streams, output_file, r=24)
ffmpeg.run(output_stream)



In this case, I get the following error from ffmpeg:


Option frames (set the number of frames to output) cannot be applied to input url ########## -- you are trying to apply an input option to an output file or vice versa. Move this option before the file it belongs to.



I can't tell if this is a bug in ffmpeg-python or if I did it wrong.


Attempt 3: Setting framerate for inputs

stream1 = ffmpeg.input(image1_file, framerate=1/2)
stream2 = ffmpeg.input(image2_file, framerate=1/3)
combined_streams = ffmpeg.concat(stream1, stream2)
output_stream = ffmpeg.output(combined_streams, output_file)
ffmpeg.run(output_stream)



With this, I get a video with the duration of a split second and no image displayed. However, when I set both framerate values to 1/2, I get an animation of 4 seconds duration that displays the first image for two seconds and the second image for two seconds. This is the closest I got to a functional solution, but it is not quite there.
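
One hedged guess at why the mixed rates fail (untested): each still contributes a single frame whose display duration is 1/framerate, and the concat filter does not resample mismatched streams, so the 1/2 and 1/3 inputs disagree about timing. A possible sketch, normalizing both inputs to a common rate with the standard fps filter before concatenating (same variables as above):

# Sketch: duplicate each single long-duration frame onto a common 24 fps
# timeline before concat, so both streams agree on timing.
stream1 = ffmpeg.input(image1_file, framerate=1/2).filter('fps', fps=24)
stream2 = ffmpeg.input(image2_file, framerate=1/3).filter('fps', fps=24)
combined_streams = ffmpeg.concat(stream1, stream2)
output_stream = ffmpeg.output(combined_streams, output_file)
ffmpeg.run(output_stream)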


I am aware that multiple images can be globbed by input, but that would apply the same duration setting to all images, and my images each have different durations, so I am looking for a different solution.


Any ideas on how to get ffmpeg-python to do this would be much appreciated.
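
One more sketch, appended for reference: the classic command-line recipe for per-image durations loops each still at a normal frame rate and trims it with t, so every input already has ordinary timing before concat. In ffmpeg-python terms (loop, t, and framerate are standard ffmpeg input options passed through as keyword arguments; untested as written):

import ffmpeg

# Loop each still at 24 fps and trim each input to its own duration.
stream1 = ffmpeg.input(image1_file, loop=1, t=2, framerate=24)  # 2 seconds
stream2 = ffmpeg.input(image2_file, loop=1, t=3, framerate=24)  # 3 seconds
combined_streams = ffmpeg.concat(stream1, stream2)
# Note: concat requires all inputs to share resolution and pixel format;
# yuv420p keeps the result playable in most players, e.g. VLC.
output_stream = ffmpeg.output(combined_streams, output_file, pix_fmt='yuv420p')
ffmpeg.run(output_stream)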


-
Convert individual pixel values from RGB to YUV420 and save the frame - C++
24 March 2014, by learner
I have been working on RGB->YUV420 conversion for some time using the FFmpeg library. I have already tried the sws_scale functionality, but it is not working well. Now I have decided to convert each pixel individually, using colorspace conversion formulae. The following code gets me a few frames and lets me access the individual R, G, B values of each pixel:
// Read frames and save first five frames to disk
i = 0;
while((av_read_frame(pFormatCtx, &packet) >= 0) && (i < 5))
{
    // Is this a packet from the video stream?
    if(packet.stream_index == videoStreamIdx)
    {
        // Decode video frame
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        // Did we get a video frame?
        if(frameFinished)
        {
            i++;
            sws_scale(img_convert_ctx, (const uint8_t * const *)pFrame->data,
                      pFrame->linesize, 0, pCodecCtx->height,
                      pFrameRGB->data, pFrameRGB->linesize);
            int x, y, R, G, B;
            uint8_t *p = pFrameRGB->data[0];
            for(y = 0; y < h; y++)
            {
                for(x = 0; x < w; x++)
                {
                    R = *p++;
                    G = *p++;
                    B = *p++;
                    printf(" %d-%d-%d ", R, G, B);
                }
            }
            SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, i);
        }
    }
    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
}
I read online that to convert RGB->YUV420, or vice versa, one should first convert to YUV444, i.e. RGB->YUV444->YUV420. How do I implement this in C++?
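For reference, a minimal sketch of the per-pixel arithmetic, assuming full-range BT.601 coefficients (broadcast-range BT.601 uses different constants; which one you need depends on the target pixel format). It is written in Python/NumPy for brevity rather than C++, since the arithmetic ports line for line; rgb_to_yuv420 is an illustrative name, not an FFmpeg API:

import numpy as np

def rgb_to_yuv420(rgb):
    """rgb: (h, w, 3) uint8 array, with h and w even. Returns Y, U, V planes."""
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    # RGB -> YUV444 (full-range BT.601)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    # YUV444 -> YUV420: average each 2x2 block of the chroma planes
    u420 = (u[0::2, 0::2] + u[0::2, 1::2] + u[1::2, 0::2] + u[1::2, 1::2]) / 4.0
    v420 = (v[0::2, 0::2] + v[0::2, 1::2] + v[1::2, 0::2] + v[1::2, 1::2]) / 4.0
    to_u8 = lambda plane: np.clip(plane + 0.5, 0, 255).astype(np.uint8)
    return to_u8(y), to_u8(u420), to_u8(v420)

Averaging each 2x2 chroma block is the simplest YUV444->YUV420 downsampling; sws_scale applies more sophisticated filtering.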
Also, here is the SaveFrame() function used above. I guess this will also have to change a little, since YUV420 stores data differently. How do I take care of that?
void SaveFrame(AVFrame *pFrame, int width, int height, int iFrame)
{
    FILE *pFile;
    char szFilename[32];
    int y;
    // Open file
    sprintf(szFilename, "frame%d.ppm", iFrame);
    pFile = fopen(szFilename, "wb");
    if(pFile == NULL)
        return;
    // Write header
    fprintf(pFile, "P6\n%d %d\n255\n", width, height);
    // Write pixel data
    for(y = 0; y < height; y++)
        fwrite(pFrame->data[0] + y * pFrame->linesize[0], 1, width * 3, pFile);
    // Close file
    fclose(pFile);
}
Can somebody please suggest? Many thanks!
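On the saving side: PPM is an RGB-only format, so it no longer applies once the data is YUV420; planar 4:2:0 frames are conventionally dumped as raw .yuv files, one plane after another. A small sketch in the same Python style as the conversion helper above (save_yuv420 is again an illustrative name):

def save_yuv420(path, y, u, v):
    # I420 layout: full-size Y plane, then quarter-size U, then quarter-size V.
    with open(path, "wb") as f:
        f.write(y.tobytes())
        f.write(u.tobytes())
        f.write(v.tobytes())

A file written this way can be sanity-checked with ffplay, e.g. ffplay -f rawvideo -pixel_format yuv420p -video_size WIDTHxHEIGHT frame1.yuv, substituting the real dimensions.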
-
Extracting each individual frame from an H264 stream for real-time analysis with OpenCV
11 March 2020, by exclmtnpt
Problem Outline
I have an h264 real-time video stream (I’ll call this "the stream") being captured in Process1. My goal is to extract each frame from the stream as it comes through and use Process2 to analyze it with OpenCV. (Process1 is nodejs, Process2 is Python)
Things I’ve tried, and their failure modes:
- Send the stream directly from Process1 to Process2 over a named FIFO pipe:
I succeeded in directing the stream from Process1 into the pipe. However, in Process2 (which is Python) I could not (a) extract individual frames from the stream, and (b) convert any extracted data from h264 into an OpenCV format (e.g. JPEG, numpy array).
I had hoped to use OpenCV’s VideoCapture() method, but it does not allow you to pass a FIFO pipe as an input. I was able to use VideoCapture by saving the h264 stream to a .h264 file, and then passing that as the file path. This doesn’t help me, because I need to do my analysis in real time (i.e. I can’t save the stream to a file before reading it in to OpenCV).
- Pipe the stream from Process1 to FFMPEG, use FFMPEG to change the stream format from h264 to MJPEG, then pipe the output to Process2:
I attempted this using the command:
cat pipeFromProcess1.fifo | ffmpeg -i pipe:0 -f h264 -f mjpeg pipe:1 | cat > pipeToProcess2.fifo
The biggest issue with this approach is that FFMPEG takes inputs from Process1 until Process1 is killed, and only then does Process2 begin to receive the data.
Additionally, on the Process2 side, I still don’t understand how to extract individual frames from the data coming over the pipe. I open the pipe for reading (as "f") and then execute data = f.readline(). The size of data varies drastically (some reads have length on the order of 100, others length on the order of 1,000). When I use f.read() instead of f.readline(), the length is much larger, on the order of 100,000.
If I were to know that I was getting the correct size chunk of data, I would still not know how to transform it into an OpenCV-compatible array because I don’t understand the format it’s coming over in. It’s a string, but when I print it out it looks like this:
[several hundred bytes of unprintable binary data]
Converting from base64 doesn’t seem to help. I also tried:
array = np.fromstring(data, dtype=np.uint8)
which does convert to an array, but not one of a size that makes sense given the 640x368x3 dimensions of the frames I’m trying to decode (a way to read fixed-size decoded frames is sketched after this list).
- Using decoders such as Broadway.js to convert the h264 stream:
These seem to be focused on streaming to a website, and I did not have success trying to re-purpose them for my goal.
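A sketch of the fixed-size-frame idea referenced in attempt 2 above (my own illustration, not from the original post): let ffmpeg decode the h264 into raw BGR frames, then read exactly width*height*3 bytes per frame from its stdout and reshape into a NumPy array that OpenCV can use directly. The 640x368 frame size comes from the post; the FIFO name is reused from the command above:

import subprocess
import numpy as np

WIDTH, HEIGHT = 640, 368          # frame dimensions from the post
FRAME_SIZE = WIDTH * HEIGHT * 3   # bytes per raw BGR24 frame

# Decode the incoming h264 into fixed-size raw frames on stdout.
proc = subprocess.Popen(
    ['ffmpeg', '-i', 'pipeFromProcess1.fifo',
     '-f', 'rawvideo', '-pix_fmt', 'bgr24', 'pipe:1'],
    stdout=subprocess.PIPE)

while True:
    buf = proc.stdout.read(FRAME_SIZE)   # blocks until one full frame or EOF
    if len(buf) < FRAME_SIZE:
        break
    frame = np.frombuffer(buf, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))
    # frame is now an ordinary OpenCV-style BGR image; analyze it here.

Because every read returns a complete frame of known shape, this sidesteps both the "what size chunk" and the "what format" problems at once.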
Clarification about what I’m NOT trying to do:
I’ve found many related questions about streaming h264 video to a website. This is a solved problem, but none of the solutions help me extract individual frames and put them in an OpenCV-compatible format.
Also, I need to use the extracted frames in real time on a continual basis. So saving each frame as a .jpg is not helpful.
System Specs
Raspberry Pi 3 running Raspbian Jessie
Additional Detail
I’ve tried to generalize the problem I’m having in my question. If it’s useful to know, Process1 is using the node-bebop package to pull down the h264 stream (using drone.getVideoStream()) from a Parrot Bebop 2.0. I tried using the other video stream available through node-bebop (getMjpegStream()). This worked, but was not nearly real-time; I was getting very intermittent data streams. I’ve entered that specific problem as an Issue in the node-bebop repository.
Thanks for reading; I really appreciate any help anyone can give!