
Other articles (51)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
- the browser you are using, including its exact version
- as precise an explanation of the problem as possible
- if possible, the steps taken that lead to the problem
- a link to the site/page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
From upload to the final video [standalone version]
31 January 2010
The journey of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions in addition to the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (8002)
-
Segmentation fault with avcodec_encode_video2() while encoding H.264
16 July 2015, by Baris Demiray
I'm trying to convert a cv::Mat to an AVFrame in order to then encode it in H.264, and I wanted to start from a simple example, as I'm a newbie with both. So I first read in a JPEG file, then do the pixel format conversion with sws_scale() from AV_PIX_FMT_BGR24 to AV_PIX_FMT_YUV420P, keeping the dimensions the same, and it all goes fine until I call avcodec_encode_video2().
I read quite a few discussions regarding AVFrame allocation, and the question "segmentation fault while avcodec_encode_video2" seemed like a match, but I just can't see what I'm missing or getting wrong.
Here is minimal code with which you can reproduce the crash; it should be compiled with:
g++ -o OpenCV2FFmpeg OpenCV2FFmpeg.cpp -lopencv_imgproc -lopencv_highgui -lopencv_core -lswscale -lavutil -lavcodec -lavformat
Its output on my system:
cv::Mat [width=420, height=315, depth=0, channels=3, step=1260]
I'll soon crash..
Segmentation fault

And the sample.jpg file's details from the identify tool:

~temporary/sample.jpg JPEG 420x315 420x315+0+0 8-bit sRGB 38.3KB 0.000u 0:00.000
Please note that I’m trying to create a video out of a single image, just to keep things simple.
#include <iostream>
#include <cassert>
using namespace std;
extern "C" {
#include <libavcodec></libavcodec>avcodec.h>
#include <libswscale></libswscale>swscale.h>
#include <libavformat></libavformat>avformat.h>
}
#include <opencv2></opencv2>core/core.hpp>
#include <opencv2></opencv2>highgui/highgui.hpp>
const string TEST_IMAGE = "/home/baris/temporary/sample.jpg";
int main(int /*argc*/, char** argv)
{
av_register_all();
avcodec_register_all();
/**
* Initialise the encoder
*/
AVCodec *h264encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
AVFormatContext *cv2avFormatContext = avformat_alloc_context();
/**
* Create a stream and allocate frames
*/
AVStream *h264outputstream = avformat_new_stream(cv2avFormatContext, h264encoder);
avcodec_get_context_defaults3(h264outputstream->codec, h264encoder);
AVFrame *sourceAvFrame = av_frame_alloc(), *destAvFrame = av_frame_alloc();
int got_frame;
/**
* Pixel formats for the input and the output
*/
AVPixelFormat sourcePixelFormat = AV_PIX_FMT_BGR24;
AVPixelFormat destPixelFormat = AV_PIX_FMT_YUV420P;
/**
* Create cv::Mat
*/
cv::Mat cvFrame = cv::imread(TEST_IMAGE, CV_LOAD_IMAGE_COLOR);
int width = cvFrame.size().width, height = cvFrame.size().height;
cerr << "cv::Mat [width=" << width << ", height=" << height << ", depth=" << cvFrame.depth() << ", channels=" << cvFrame.channels() << ", step=" << cvFrame.step << "]" << endl;
h264outputstream->codec->pix_fmt = destPixelFormat;
h264outputstream->codec->width = cvFrame.cols;
h264outputstream->codec->height = cvFrame.rows;
/**
* Prepare the conversion context
*/
SwsContext *bgr2yuvcontext = sws_getContext(width, height,
sourcePixelFormat,
h264outputstream->codec->width, h264outputstream->codec->height,
h264outputstream->codec->pix_fmt,
SWS_BICUBIC, NULL, NULL, NULL);
/**
* Convert and encode frames
*/
for (uint i=0; i < 250; i++)
{
/**
* Allocate source frame, i.e. input to sws_scale()
*/
avpicture_alloc((AVPicture*)sourceAvFrame, sourcePixelFormat, width, height);
for (int h = 0; h < height; h++)
memcpy(&(sourceAvFrame->data[0][h*sourceAvFrame->linesize[0]]), &(cvFrame.data[h*cvFrame.step]), width*3);
/**
* Allocate destination frame, i.e. output from sws_scale()
*/
avpicture_alloc((AVPicture *)destAvFrame, destPixelFormat, width, height);
sws_scale(bgr2yuvcontext, sourceAvFrame->data, sourceAvFrame->linesize,
0, height, destAvFrame->data, destAvFrame->linesize);
/**
* Prepare an AVPacket for encoded output
*/
AVPacket avEncodedPacket;
av_init_packet(&avEncodedPacket);
avEncodedPacket.data = NULL;
avEncodedPacket.size = 0;
// av_free_packet(&avEncodedPacket); w/ or w/o result doesn't change
cerr << "I'll soon crash.." << endl;
if (avcodec_encode_video2(h264outputstream->codec, &avEncodedPacket, destAvFrame, &got_frame) < 0)
exit(1);
cerr << "Checking if we have a frame" << endl;
if (got_frame)
av_write_frame(cv2avFormatContext, &avEncodedPacket);
av_free_packet(&avEncodedPacket);
av_frame_free(&sourceAvFrame);
av_frame_free(&destAvFrame);
}
}
Thanks in advance!
EDIT: And the stack trace after the crash:
Thread 2 (Thread 0x7fffe5506700 (LWP 10005)):
#0 0x00007ffff4bf6c5d in poll () at /lib64/libc.so.6
#1 0x00007fffe9073268 in () at /usr/lib64/libusb-1.0.so.0
#2 0x00007ffff47010a4 in start_thread () at /lib64/libpthread.so.0
#3 0x00007ffff4bff08d in clone () at /lib64/libc.so.6
Thread 1 (Thread 0x7ffff7f869c0 (LWP 10001)):
#0 0x00007ffff5ecc7dc in avcodec_encode_video2 () at /usr/lib64/libavcodec.so.56
#1 0x00000000004019b6 in main(int, char**) (argv=0x7fffffffd3d8) at ../src/OpenCV2FFmpeg.cpp:99

EDIT2: The problem was that I hadn't avcodec_open2()'d the codec, as spotted by Ronald. The final version of the code is at https://github.com/barisdemiray/opencv2ffmpeg/, with leaks and probably other problems, which I hope to improve while learning both libraries.
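For reference, a minimal sketch of the missing step, under the same (now-deprecated) FFmpeg API the question uses: the stream's codec context must be opened with avcodec_open2() after its parameters are set and before avcodec_encode_video2() is called. The time_base here is an assumed 25 fps, not something taken from the original code.

// Sketch only: set the context parameters, then open the codec.
h264outputstream->codec->pix_fmt = AV_PIX_FMT_YUV420P;
h264outputstream->codec->width = cvFrame.cols;
h264outputstream->codec->height = cvFrame.rows;
h264outputstream->codec->time_base.num = 1;   // assumed frame rate of 25 fps
h264outputstream->codec->time_base.den = 25;

// Without this call, avcodec_encode_video2() dereferences
// uninitialised encoder state and crashes.
if (avcodec_open2(h264outputstream->codec, h264encoder, NULL) < 0) {
    cerr << "Could not open codec" << endl;
    exit(1);
}
-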
MJPEG decoding is 3x slower when opening a V4L2 input device [closed]
26 October 2024, by Xenonic
I'm trying to decode an MJPEG video stream coming from a webcam, but I'm hitting some performance blockers when using FFmpeg's C API in my application. I've recreated the problem using the example video decoder, where I simply open the V4L2 input device, read packets, and push them to the decoder. What's strange is that if I try to get my input packets from the V4L2 device instead of from a file, the avcodec_send_packet call to the decoder is nearly 3x slower (a sketch of that device-read path is given after the example below). After further poking around, I narrowed the issue down to whether or not I open the V4L2 device at all.

Let's look at a minimal example demonstrating this behavior:


extern "C"
{
#include <libavcodec></libavcodec>avcodec.h>
#include <libavformat></libavformat>avformat.h>
#include <libavutil></libavutil>opt.h>
#include <libavdevice></libavdevice>avdevice.h>
}

#define INBUF_SIZE 4096

static void decode(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt)
{
 if (avcodec_send_packet(dec_ctx, pkt) < 0)
 exit(1);
 
 int ret = 0;
 while (ret >= 0) {
 ret = avcodec_receive_frame(dec_ctx, frame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 return;
 else if (ret < 0)
 exit(1);

 // Here we'd save off the decoded frame, but that's not necessary for the example.
 }
}

int main(int argc, char **argv)
{
 const char *filename;
 const AVCodec *codec;
 AVCodecParserContext *parser;
 AVCodecContext *c= NULL;
 FILE *f;
 AVFrame *frame;
 uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
 uint8_t *data;
 size_t data_size;
 int ret;
 int eof;
 AVPacket *pkt;

 filename = argv[1];

 pkt = av_packet_alloc();
 if (!pkt)
 exit(1);

 /* set end of buffer to 0 (this ensures that no overreading happens for damaged MPEG streams) */
 memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);

 // Use MJPEG instead of the example's MPEG1
 //codec = avcodec_find_decoder(AV_CODEC_ID_MPEG1VIDEO);
 codec = avcodec_find_decoder(AV_CODEC_ID_MJPEG);
 if (!codec) {
 fprintf(stderr, "Codec not found\n");
 exit(1);
 }

 parser = av_parser_init(codec->id);
 if (!parser) {
 fprintf(stderr, "parser not found\n");
 exit(1);
 }

 c = avcodec_alloc_context3(codec);
 if (!c) {
 fprintf(stderr, "Could not allocate video codec context\n");
 exit(1);
 }

 // Set the expected pixel format before the codec is opened
 c->pix_fmt = AV_PIX_FMT_YUVJ422P;

 if (avcodec_open2(c, codec, NULL) < 0) {
 fprintf(stderr, "Could not open codec\n");
 exit(1);
 }

 f = fopen(filename, "rb");
 if (!f) {
 fprintf(stderr, "Could not open %s\n", filename);
 exit(1);
 }

 frame = av_frame_alloc();
 if (!frame) {
 fprintf(stderr, "Could not allocate video frame\n");
 exit(1);
 }

 avdevice_register_all();
 auto* inputFormat = av_find_input_format("v4l2");
 AVDictionary* options = nullptr;
 av_dict_set(&options, "input_format", "mjpeg", 0);
 av_dict_set(&options, "video_size", "1920x1080", 0);

 AVFormatContext* fmtCtx = nullptr;


 // Commenting this line out results in fast decoding!
 // Notice how fmtCtx is not even used anywhere; we still read packets from the file
 avformat_open_input(&fmtCtx, "/dev/video0", inputFormat, &options);


 // Just parse packets from a file and send them to the decoder.
 do {
 data_size = fread(inbuf, 1, INBUF_SIZE, f);
 if (ferror(f))
 break;
 eof = !data_size;

 data = inbuf;
 while (data_size > 0 || eof) {
 ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
 data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
 if (ret < 0) {
 fprintf(stderr, "Error while parsing\n");
 exit(1);
 }
 data += ret;
 data_size -= ret;

 if (pkt->size)
 decode(c, frame, pkt);
 else if (eof)
 break;
 }
 } while (!eof);

 return 0;
}
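For comparison, here is a minimal sketch (not the code above) of the device-read path mentioned at the top: pulling packets straight from the opened V4L2 device with av_read_frame() instead of fread() plus the parser. It assumes fmtCtx has been opened as in the example and reuses the example's decode() helper and opened decoder context c.

// Sketch only: read MJPEG packets directly from the opened device.
AVPacket *devPkt = av_packet_alloc();
if (!devPkt)
    exit(1);
while (av_read_frame(fmtCtx, devPkt) >= 0) {
    decode(c, frame, devPkt);  // same decode() helper as in the example
    av_packet_unref(devPkt);   // av_read_frame() allocates per-packet data
}
av_packet_free(&devPkt);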



Here's a histogram of the CPU time spent in that avcodec_send_packet function call, with and without opening the device (by commenting out that avformat_open_input call above).

Without opening the V4L2 device:

[histogram image]

With opening the V4L2 device:

[histogram image]
Interestingly, we can see a significant number of function calls in that 25 ms time bin! But most of them take 78 ms... why?


So what's going on here? Why does opening the device destroy my decode performance?


Additionally, if I try to run a seemingly equivalent pipeline through the ffmpeg tool itself, I don't hit this problem. Running this command:


ffmpeg -f v4l2 -input_format mjpeg -video_size 1920x1080 -r 30 -c:v mjpeg -i /dev/video0 -c:v copy out.mjpeg



is generating an output file with a reported speed of just barely over 1.0x, i.e. 30 FPS. Perfect, so why doesn't the C API give me the same results? One thing to note: I do get periodic errors from the MJPEG decoder (about every second), and I'm not sure whether these are a concern:


[mjpeg @ 0x5590d6b7b0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 27 >= 27
[mjpeg @ 0x5590d6b7b0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 30 >= 30
...



I'm running on a Raspberry Pi CM4 with FFmpeg 6.1.1.


-
aaccoder : Implement Perceptual Noise Substitution for AAC
15 April 2015, by Rostislav Pehlivanov

aaccoder: Implement Perceptual Noise Substitution for AAC

This commit implements the Perceptual Noise Substitution AAC extension. This is a proof-of-concept implementation and, as such, is not enabled by default. This is the fourth revision of this patch, made after some problems were pointed out. Any changes made since the previous revisions have been indicated.

In order to extend the encoder to use an additional codebook, the array holding each codebook has been modified with two additional entries: 13 for the NOISE_BT codebook, and 12, which holds a placeholder function. The cost system was modified to skip the 12th entry, using an array that maps its inputs and outputs. It also does not accept the 13th codebook for any band which is not marked as containing noise, thereby restricting its ability to choose that codebook arbitrarily for bands. The use of arrays allows the system to be easily extended to allow for intensity stereo encoding, which uses additional codebooks.

The 12th entry in the codebook function array points to a function which stops the execution of the program by calling an assert with an always-'false' argument. It was pointed out in an email discussion with Claudio Freire that having a 'NULL' entry can result in unexpected behaviour and could be used as a security hole. There is no danger of this function being called during encoding, thanks to the codebook maps introduced.

Another change from version 1 of the patch is the addition of an encoder argument, '-aac_pns', to enable and disable PNS. It currently defaults to disabled, as PNS is experimental; the switch will be removed in the future, once the algorithm that selects noise bands has been improved. The current algorithm simply compares a band's energy to its threshold (multiplied by a constant) to determine noise (a sketch of this decision is given at the end of this message); however, the FFPsyBand structure contains other useful figures with which to determine more accurately which bands carry noise.

Some of the sample files provided triggered an assertion when the parameter tuning the threshold was set to a value of '2.2'. Claudio Freire reported that the problem's source could be in the range of the scalefactor indices for noise, and advised measuring the minimal index and clipping anything above the maximum allowed value. This has been implemented, and all the files which used to trigger the assertion now encode without error.

The third revision of the patch also removes unneeded variables and comparisons. All of them were redundant and would have been of little use when the PNS implementation is extended.

The fourth revision moved the clipping of the noise scalefactors outside the second loop of the two-loop algorithm, to prevent their redundant recalculation. Also, freq_mult has been changed to a float variable, because rounding errors can prove to be a problem at low frequencies. Consideration was given to whether the entire expression could be evaluated inline, but in the end it was decided that it would be best if just the type of the variable were changed. Claudio Freire reported both problems. There is no change in functionality (except at low sampling frequencies), so the spectral demonstrations at the end of this commit message were not updated.

Finally, the way energy values are converted to scalefactor indices has changed since the first commit, as per the suggestion of Claudio Freire. This may still have some drawbacks but, unlike the first commit, it works without redundant offsets and outputs what the decoder expects in terms of the ranges of the scalefactor indices.

Some spectral comparisons:
https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/Original.png (original),
https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/PNS_NO.png (encoded without PNS),
https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/PNS1.2.png (encoded with PNS, const = 1.2),
https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/Difference1.png (spectral difference).

The constant is the value by which the threshold is multiplied when it is compared to the energy; larger values mean more of the signal will be substituted by PNS values. Example when const = 2.2:
https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/PNS_2.2.png

Reviewed-by: Claudio Freire <klaussfreire@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
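To illustrate the band decision described above, here is a minimal, hypothetical C sketch of the energy-versus-threshold test. The energy and threshold fields do exist on FFmpeg's FFPsyBand, but the constant, the function name, and the surrounding structure are illustrative assumptions, not the patch's actual code.

/* Hypothetical sketch, not the patch's code: a band whose energy falls
 * below its psychoacoustic threshold scaled by a tunable constant is
 * cheap to model as noise, so it is marked for the NOISE_BT codebook. */
#include "psymodel.h"  /* for FFPsyBand (energy, threshold) */

#define PNS_NOISE_CONST 1.2f  /* larger values substitute more bands */

static int band_is_noise(const FFPsyBand *band)
{
    return band->energy < band->threshold * PNS_NOISE_CONST;
}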