
Other articles (62)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their own information on the authors page -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out -
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that resulted in the problem; a link to the site / page in question.
If you think you have solved the bug, fill in a ticket and attach to it a corrective patch.
You may also (...)
On other sites (8693)
-
Extracting frames from a video does not work correctly [closed]
13 April 2024, by Al Tilmidh
Using the libraries libav and ffmpeg, I try to extract frames as .jpg files from a video.mp4; the problem is that my program crashes when I use the CV_8UC3 parameter, but by changing this parameter to CV_8UC1, the extracted images end up without color (grayscale). I don't really know what I missed; here is a minimal code to reproduce the two situations:

#include <opencv2/opencv.hpp>

extern "C"
{
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

int main()
{
 AVFormatContext *formatContext = nullptr;

 if (avformat_open_input(&formatContext, "video.mp4", nullptr, nullptr) != 0)
 {
 return -1;
 }

 if (avformat_find_stream_info(formatContext, nullptr) < 0)
 {
 return -1;
 }

 AVPacket packet;
 const AVCodec *codec = nullptr;
 AVCodecContext *codecContext = nullptr;

 int videoStreamIndex = av_find_best_stream(formatContext, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);
 if (videoStreamIndex < 0)
 {
 return -1;
 }

 codecContext = avcodec_alloc_context3(codec);
 avcodec_parameters_to_context(codecContext, formatContext->streams[videoStreamIndex]->codecpar);

 if (avcodec_open2(codecContext, codec, nullptr) < 0)
 {
 return -1;
 }

 AVFrame *frame = av_frame_alloc();

 while (av_read_frame(formatContext, &packet) >= 0)
 {
 if (packet.stream_index == videoStreamIndex)
 {
 int response = avcodec_send_packet(codecContext, &packet);
 
 if (response < 0)
 {
 break;
 }

 while (response >= 0)
 {
 response = avcodec_receive_frame(codecContext, frame);
 if (response == AVERROR(EAGAIN))
 {
 // NO FRAMES
 break;
 }

 else if (response == AVERROR_EOF)
 {
 // END OF FILE
 break;
 }

 else if (response < 0)
 {
 break;
 }

 //cv::Mat frameMat(frame->height, frame->width, CV_8UC3, frame->data[0]); // CV_8UC3 → THE PROGRAM CRASHES
 cv::Mat frameMat(frame->height, frame->width, CV_8UC1, frame->data[0]); // CV_8UC1 → WORKS, BUT THE IMAGES ARE GRAYSCALE
 cv::imwrite("frame_" + std::to_string(frame->pts) + ".jpg", frameMat);
 }
 }

 av_packet_unref(&packet);
 }

 av_frame_free(&frame);
 avcodec_free_context(&codecContext);
 avformat_close_input(&formatContext);

 return 0;
}
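
Decoded frames from libavcodec are normally in a planar YUV format such as AV_PIX_FMT_YUV420P, so frame->data[0] holds only the luma plane; wrapping it in a 3-channel cv::Mat therefore reads past the buffer (the CV_8UC3 crash), while CV_8UC1 shows just that grayscale plane. The usual fix is to convert each frame to packed BGR with libswscale before wrapping it. A minimal sketch, assuming the decode loop above (the helper name is illustrative and error handling is omitted):

extern "C"
{
#include <libswscale/swscale.h>
}

// Convert a decoded AVFrame (any source pixel format) into a BGR cv::Mat.
// Sketch only: in real code, create the SwsContext once and reuse it.
cv::Mat avframe_to_bgr(const AVFrame *frame)
{
    SwsContext *sws = sws_getContext(
        frame->width, frame->height, (AVPixelFormat)frame->format,
        frame->width, frame->height, AV_PIX_FMT_BGR24,
        SWS_BILINEAR, nullptr, nullptr, nullptr);

    cv::Mat bgr(frame->height, frame->width, CV_8UC3);
    uint8_t *dstData[4] = { bgr.data, nullptr, nullptr, nullptr };
    int dstLinesize[4] = { static_cast<int>(bgr.step), 0, 0, 0 };

    // Converts the planar YUV planes into the packed BGR buffer,
    // honouring the per-plane strides in frame->linesize.
    sws_scale(sws, frame->data, frame->linesize, 0, frame->height,
              dstData, dstLinesize);

    sws_freeContext(sws);
    return bgr;
}

With that helper, the line in the loop becomes cv::Mat frameMat = avframe_to_bgr(frame); and the CV_8UC3 layout is then actually backed by three channels of data.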



-
How to extract motion vectors from h264 without a full decode on the CPU
25 September 2020, by Adrian May
I'm trying to use my nose as a pointing device. The plan is to encode the video stream from a webcam pointed at my face as h264 or the like, get the motion vectors, cook the numbers a bit and chuck them into /dev/uinput to make the mouse pointer move about. The uinput bit was easy.


This has to work with zero discernible latency. This, for instance:


#!/bin/bash
[ -p pipe.mkv ] || mkfifo pipe.mkv
ffmpeg -y -rtbufsize 1M -s 640x360 -vcodec mjpeg -i /dev/video0 -c h264_nvenc pipe.mkv &
ffplay -flags2 +export_mvs -vf codecview=mv=pf+bf+bb pipe.mkv



shows that the vectors are there but with a latency of several seconds which is unusable in a mouse. I know that the first ffmpeg step is working very fast by using the GPU, so either the pipe or the h264 decode in the second step is introducing the latency.
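
One way to check, using standard ffplay options (these flags are not from the original pipeline), is to re-run the playback side with probing and buffering minimised:

ffplay -fflags nobuffer -probesize 32 -analyzeduration 0 -flags2 +export_mvs -vf codecview=mv=pf+bf+bb pipe.mkv

If the delay mostly disappears, it was the probe window rather than the pipe.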


I tried MV Tractus (same as mpegflow I think) in a similar pipe arrangement and it was also very slow. They do a full h264 decode on the CPU and I think that's the problem cos I can see them imposing a lot of load on one CPU. If the pipe had caused the delay by buffering badly then the CPU wouldn't have been loaded. I guess ffplay also did the decoding on the CPU and I couldn't persuade it not to, but it only wants to draw arrows which are no use to me.


I think there are several approaches, and I'd like advice on which would be best, or if there's something even better I don't know about. I could:


- Decode in hardware and get the motion vectors. So far this has failed. I tried combining ffmpeg's extract_mvs.c and hw_decode.c samples but no motion vectors turn up. vdpau is the only decoder I got working on my Linux box. I have an nvidia GPU.
- Do a minimal parse of the h264 to fish out the motion vectors only, ignoring all the other data. I think this would mean putting some kind of "motion only" option in libav's parser, but I'm not at all familiar with that code.
- Find some other h264 parsing library that has said option and also unpacks the container.
- Forget about hardware accelerated encoding and use a stripped-down encoder to make only the motion vectors on either CPU or GPU. I suspect this would be slow cos I think calculating the motion vectors is the hardest part of the algorithm.

I'm tending towards the second option but I need some help figuring out where in the libav code to do it.


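For what it's worth, the standard software route behind the first option is libavcodec's export_mvs flag, the same mechanism ffmpeg's extract_mvs.c sample uses: the decoder is opened with flags2=+export_mvs and each decoded frame then carries AV_FRAME_DATA_MOTION_VECTORS side data. It still implies a full CPU decode, which is what I'd like to avoid, but as a sketch of where the vectors surface in the API (assuming the usual avformat open/read loop around it):

extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavutil/motion_vector.h>
}

// Open the decoder with motion-vector export enabled.
AVDictionary *opts = nullptr;
av_dict_set(&opts, "flags2", "+export_mvs", 0);
avcodec_open2(codecContext, codec, &opts);
av_dict_free(&opts);

// After each successful avcodec_receive_frame(codecContext, frame):
AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
if (sd)
{
    const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
    size_t count = sd->size / sizeof(*mvs);
    for (size_t i = 0; i < count; i++)
    {
        // Each entry records a block's source and destination position;
        // the deltas are what a pointing device would integrate.
        int dx = mvs[i].dst_x - mvs[i].src_x;
        int dy = mvs[i].dst_y - mvs[i].src_y;
        // ... accumulate dx/dy, e.g. to feed /dev/uinput ...
    }
}
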
-
Playing RTSP in WPF application with low latency using FFMPEG / FFMediaElement (FFME)
22 March 2019, by Paboka
I'm trying to use the FFMediaElement (FFME, a WPF MediaElement replacement based on FFmpeg) component to play RTSP live video in my WPF application.
I have a good connection to my camera and I want to play it with minimum available latency.
I've reduced the latency by changing ProbeSize to its minimal value:

private void Media_OnMediaInitializing(object Sender, MediaInitializingRoutedEventArgs e)
{
    e.Configuration.GlobalOptions.ProbeSize = 32;
}

But I still have about 1 second of latency since the very beginning of the stream. I mean, when I start playing, I have to wait for 1 second till the video appears and then I have 1s of latency.
I've also tried to change the following parameters:

e.Configuration.GlobalOptions.EnableReducedBuffering = true;
e.Configuration.GlobalOptions.FlagNoBuffer = true;
e.Configuration.GlobalOptions.MaxAnalyzeDuration = TimeSpan.Zero;

but it gave no result.
I measured the time interval between FFmpeg output lines (the number in the first column is the time elapsed since the previous line, in ms):
---- OpenCommand: Entered
39 FFInterop.Initialize: FFmpeg v4.0
0 EVENT START: MediaInitializing
0 EVENT DONE : MediaInitializing
379 EVENT START: MediaOpening
0 EVENT DONE : MediaOpening
0 COMP VIDEO: Start Offset: 0,000; Duration: N/A
41 SYNC-BUFFER: Started.
609 SYNC-BUFFER: Finished. Clock set to 1534932751,634
0 EVENT START: MediaOpened
0 EVENT DONE : MediaOpened
0 EVENT START: BufferingStarted
0 EVENT DONE : BufferingStarted
0 OpenCommand: Completed
0 V BLK: 1534932751,634 | CLK: 1534932751,634 | DFT: 0 | IX: 0 | PQ: 0,0k | TQ: 0,0k
0 Command Queue (1 commands): Before ProcessNext
0 Play - ID: 404 Canceled: False; Completed: False; Status: WaitingForActivation; State:
94 V BLK: 1534932751,675 | CLK: 1534932751,699 | DFT: 24 | IX: 1 | PQ: 0,0k | TQ: 0,0k

So the "sync-buffering" step takes most of the time.
Is there any FFmpeg parameter that allows reducing the size of the buffer?
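
For reference, at the libavformat level the knobs that usually cut this start-up buffering are the demuxer options passed to avformat_open_input. The option names below are standard FFmpeg ones, while the surrounding code and the RTSP URL are only illustrative, and whether FFME forwards each of them depends on its configuration surface:

extern "C"
{
#include <libavformat/avformat.h>
}

AVFormatContext *fmt = nullptr;
AVDictionary *opts = nullptr;

av_dict_set(&opts, "fflags", "nobuffer", 0);    // don't buffer input packets for probing
av_dict_set(&opts, "probesize", "32", 0);       // the ProbeSize trick from above
av_dict_set(&opts, "analyzeduration", "0", 0);  // skip the stream-analysis window
av_dict_set(&opts, "max_delay", "0", 0);        // minimal demuxer reordering delay
av_dict_set(&opts, "rtsp_transport", "tcp", 0); // avoid stalls from lost UDP packets

if (avformat_open_input(&fmt, "rtsp://camera/stream", nullptr, &opts) == 0)
{
    // ... find streams and decode as usual ...
    avformat_close_input(&fmt);
}
av_dict_free(&opts);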