
Media (1)

Keyword: - Tags - / Christian Nold

Other articles (63)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    • implementation costs to be shared between several different projects / individuals
    • rapid deployment of multiple unique sites
    • creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact your MédiaSpip administrator to find out.

On other sites (10246)

  • aaccoder : Implement Perceptual Noise Substitution for AAC

    15 April 2015, by Rostislav Pehlivanov
    aaccoder : Implement Perceptual Noise Substitution for AAC
    

    This commit implements the perceptual noise substitution AAC extension. This is a proof-of-concept
    implementation, and as such, is not enabled by default. This is the fourth revision of this patch,
    made after some problems were pointed out. Any changes made since the previous revisions have been indicated.

    In order to extend the encoder to use an additional codebook, the array holding each codebook has been
    modified with two additional entries: 13 for the NOISE_BT codebook and 12, which has a placeholder function.
    The cost system was modified to skip the 12th entry, using arrays to map its inputs and outputs. It
    also does not accept the 13th codebook for any band which is not marked as containing noise, thereby
    restricting its ability to choose that codebook arbitrarily for bands. The use of arrays allows the system to be easily
    extended to allow for intensity stereo encoding, which uses additional codebooks.
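    As a rough sketch of the mapping idea (with hypothetical table names; the actual tables in aaccoder.c differ), the cost search walks an array listing only the usable codebooks, so the placeholder at index 12 can never be selected and NOISE_BT (13) is only considered for noise bands:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical sketch: instead of iterating raw codebook indices
     * 0..13 (which would hit the placeholder at 12), walk an array
     * that lists only the usable entries. */
    #define NUM_USABLE 13

    /* Raw indices of usable codebooks; 12 (placeholder) is absent,
     * 13 (NOISE_BT) is present. */
    static const int usable_cb[NUM_USABLE] =
        {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13};

    int main(void)
    {
        int band_is_noise = 0; /* this band is not marked as noise */

        for (int i = 0; i < NUM_USABLE; i++) {
            int cb = usable_cb[i];
            assert(cb != 12);      /* placeholder is never reachable */
            if (cb == 13 && !band_is_noise)
                continue;          /* NOISE_BT only for noise bands */
            /* ... evaluate the cost of codebook cb here ... */
        }
        printf("ok\n");
        return 0;
    }
    ```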

    The 12th entry in the codebook function array points to a function which stops the execution of the program
    by calling an assert with an always-’false’ argument. It was pointed out in an email discussion with
    Claudio Freire that having a ’NULL’ entry can result in unexpected behaviour and could be exploited as
    a security hole. There is no danger of this function being called during encoding, due to the codebook maps introduced.

    Another change from version 1 of the patch is the addition of an argument to the encoder, ’-aac_pns’, to
    enable and disable PNS. This currently defaults to disabled, as the feature is experimental.
    The switch will be removed in the future, when the algorithm to select noise bands has been improved.
    The current algorithm simply compares the energy to the threshold (multiplied by a constant) to determine
    noise; however, the FFPsyBand structure contains other useful figures for determining more accurately which bands carry noise.
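    The band-marking heuristic described above can be sketched as follows (field and function names are hypothetical; the real code reads the figures from FFPsyBand):

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Sketch of the current heuristic: a band is marked as noise when
     * its energy falls below the masking threshold scaled by a tunable
     * constant. */
    struct band { float energy, threshold; };

    static int is_noise(struct band b, float constant)
    {
        return b.energy < b.threshold * constant;
    }

    int main(void)
    {
        struct band quiet = { 0.5f, 1.0f };
        struct band loud  = { 9.0f, 1.0f };

        assert(is_noise(quiet, 1.2f));   /* energy below scaled threshold */
        assert(!is_noise(loud, 1.2f));   /* tonal band, keep coding it */
        assert(is_noise(loud, 10.0f));   /* larger constant marks more bands */
        printf("ok\n");
        return 0;
    }
    ```

    A larger constant marks more bands as noise, which matches the spectral examples at the end of the message.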

    Some of the sample files provided triggered an assertion when the parameter to tune the threshold was set to
    a value of ’2.2’. Claudio Freire reported that the problem’s source could be in the range of the scalefactor
    indices for noise and advised measuring the minimal index and clipping anything above the maximum allowed
    value. This has been implemented, and all the files which used to trigger the assertion now encode without error.
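    The clipping fix amounts to bounding each noise scalefactor index to the allowed range; a minimal sketch (the bounds here are placeholders, not the actual limits from the AAC specification):

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Clamp a scalefactor index into [sf_min, sf_max] so out-of-range
     * values can no longer trip the assertion downstream. */
    static int clip_sf(int sf, int sf_min, int sf_max)
    {
        if (sf < sf_min) return sf_min;
        if (sf > sf_max) return sf_max;
        return sf;
    }

    int main(void)
    {
        const int sf_min = 0, sf_max = 255;          /* placeholder range */
        assert(clip_sf(300, sf_min, sf_max) == 255); /* too high: clipped */
        assert(clip_sf(-4,  sf_min, sf_max) == 0);   /* too low: clipped  */
        assert(clip_sf(37,  sf_min, sf_max) == 37);  /* in range: kept    */
        printf("ok\n");
        return 0;
    }
    ```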

    The third revision of the patch also removes unneeded variables and comparisons. All of them were
    redundant and of little use for when the PNS implementation is extended.

    The fourth revision moves the clipping of the noise scalefactors outside the second loop of the two-loop
    algorithm, in order to prevent their redundant recalculation. Also, freq_mult has been changed to a float
    variable, because rounding errors can prove to be a problem at low frequencies.
    Consideration was given to whether the entire expression could be evaluated in floating point,
    but in the end it was decided that it would be best to change just the type of the variable.
    Claudio Freire reported both problems. There is no change of functionality
    (except at low sampling frequencies), so the spectral demonstrations at the end of this commit’s message were not updated.
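    The rounding issue is the usual integer-truncation effect; an illustrative sketch (the numbers are made up, not the encoder’s actual formula):

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* At low sampling rates the frequency ratio shrinks toward 1, so
     * truncating it to an integer loses a large fraction of its value. */
    int main(void)
    {
        int   num = 3, den = 2;
        int   freq_mult_int   = num / den;        /* truncates to 1 */
        float freq_mult_float = (float)num / den; /* keeps 1.5      */

        assert(freq_mult_int == 1);
        assert(freq_mult_float == 1.5f);
        printf("ok\n");
        return 0;
    }
    ```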

    Finally, the way energy values are converted to scalefactor indices has changed since the first commit,
    as per the suggestion of Claudio Freire. This may still have some drawbacks, but unlike the first commit
    it works without having redundant offsets and outputs what the decoder expects to have, in terms of the
    ranges of the scalefactor indices.

    Some spectral comparisons: https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/Original.png (original),
    https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/PNS_NO.png (encoded without PNS),
    https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/PNS1.2.png (encoded with PNS, const = 1.2),
    https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/Difference1.png (spectral difference).
    The constant is the value which multiplies the threshold when it is compared to the energy; larger
    values mean more noise will be substituted by PNS values. Example when const = 2.2:
    https://trac.ffmpeg.org/attachment/wiki/Encode/AAC/PNS_2.2.png

    Reviewed-by: Claudio Freire <klaussfreire@gmail.com>
    Signed-off-by: Michael Niedermayer <michaelni@gmx.at>

    • [DH] libavcodec/aaccoder.c
    • [DH] libavcodec/aacenc.c
    • [DH] libavcodec/aacenc.h
  • MJPEG decoding is 3x slower when opening a V4L2 input device [closed]

    26 October 2024, by Xenonic

    I'm trying to decode a MJPEG video stream coming from a webcam, but I'm hitting some performance blockers when using FFmpeg's C API in my application. I've recreated the problem using the example video decoder, where I just simply open the V4L2 input device, read packets, and push them to the decoder. What's strange is if I try to get my input packets from the V4L2 device instead of from a file, the avcodec_send_packet call to the decoder is nearly 3x slower. After further poking around, I narrowed the issue down to whether or not I open the V4L2 device at all.
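    In case it helps, here's roughly how I collect the per-call timings: a monotonic clock around the call. This is a sketch with a hypothetical stand-in function; in the real program the wrapped call is avcodec_send_packet.

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <time.h>

    /* Stand-in for avcodec_send_packet, just to show the measurement. */
    static void stand_in_for_send_packet(void)
    {
        volatile int x = 0;
        for (int i = 0; i < 1000; i++) x += i; /* pretend to decode */
    }

    static double elapsed_ms(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    int main(void)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        stand_in_for_send_packet();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = elapsed_ms(t0, t1);
        assert(ms >= 0.0); /* monotonic clock never goes backwards */
        printf("ok\n");
        return 0;
    }
    ```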

    Let's look at a minimal example demonstrating this behavior:

     extern "C"
     {
     #include <libavcodec/avcodec.h>
     #include <libavformat/avformat.h>
     #include <libavutil/opt.h>
     #include <libavdevice/avdevice.h>
     }

     #define INBUF_SIZE 4096

     static void decode(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt)
     {
         if (avcodec_send_packet(dec_ctx, pkt) < 0)
             exit(1);

         int ret = 0;
         while (ret >= 0) {
             ret = avcodec_receive_frame(dec_ctx, frame);
             if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                 return;
             else if (ret < 0)
                 exit(1);

             // Here we'd save off the decoded frame, but that's not necessary for the example.
         }
     }

     int main(int argc, char **argv)
     {
         const char *filename;
         const AVCodec *codec;
         AVCodecParserContext *parser;
         AVCodecContext *c = NULL;
         FILE *f;
         AVFrame *frame;
         uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
         uint8_t *data;
         size_t   data_size;
         int ret;
         int eof;
         AVPacket *pkt;

         filename = argv[1];

         pkt = av_packet_alloc();
         if (!pkt)
             exit(1);

         /* set end of buffer to 0 (this ensures that no overreading happens for damaged MPEG streams) */
         memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);

         // Use MJPEG instead of the example's MPEG1
         //codec = avcodec_find_decoder(AV_CODEC_ID_MPEG1VIDEO);
         codec = avcodec_find_decoder(AV_CODEC_ID_MJPEG);
         if (!codec) {
             fprintf(stderr, "Codec not found\n");
             exit(1);
         }

         parser = av_parser_init(codec->id);
         if (!parser) {
             fprintf(stderr, "parser not found\n");
             exit(1);
         }

         c = avcodec_alloc_context3(codec);
         if (!c) {
             fprintf(stderr, "Could not allocate video codec context\n");
             exit(1);
         }

         if (avcodec_open2(c, codec, NULL) < 0) {
             fprintf(stderr, "Could not open codec\n");
             exit(1);
         }

         c->pix_fmt = AV_PIX_FMT_YUVJ422P;

         f = fopen(filename, "rb");
         if (!f) {
             fprintf(stderr, "Could not open %s\n", filename);
             exit(1);
         }

         frame = av_frame_alloc();
         if (!frame) {
             fprintf(stderr, "Could not allocate video frame\n");
             exit(1);
         }

         avdevice_register_all();
         auto* inputFormat = av_find_input_format("v4l2");
         AVDictionary* options = nullptr;
         av_dict_set(&options, "input_format", "mjpeg", 0);
         av_dict_set(&options, "video_size", "1920x1080", 0);

         AVFormatContext* fmtCtx = nullptr;

         // Commenting this line out results in fast encoding!
         // Notice how fmtCtx is not even used anywhere, we still read packets from the file
         avformat_open_input(&fmtCtx, "/dev/video0", inputFormat, &options);

         // Just parse packets from a file and send them to the decoder.
         do {
             data_size = fread(inbuf, 1, INBUF_SIZE, f);
             if (ferror(f))
                 break;
             eof = !data_size;

             data = inbuf;
             while (data_size > 0 || eof) {
                 ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
                                        data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
                 if (ret < 0) {
                     fprintf(stderr, "Error while parsing\n");
                     exit(1);
                 }
                 data      += ret;
                 data_size -= ret;

                 if (pkt->size)
                     decode(c, frame, pkt);
                 else if (eof)
                     break;
             }
         } while (!eof);

         return 0;
     }

    Here's a histogram of the CPU time spent in the avcodec_send_packet call, with and without opening the device (by commenting out the avformat_open_input call above).

    Without opening the V4L2 device:

    [histogram image: fread_cpu]

    With opening the V4L2 device:

    [histogram image: webcam_cpu]

    Interestingly, we can see that a significant number of calls land in the 25 ms time bin! But most of them take 78 ms... why?

    So what's going on here? Why does opening the device destroy my decode performance?

    Additionally, if I run a seemingly equivalent pipeline through the ffmpeg tool itself, I don't hit this problem. Running this command:

     ffmpeg -f v4l2 -input_format mjpeg -video_size 1920x1080 -r 30 -c:v mjpeg -i /dev/video0 -c:v copy out.mjpeg

    generates an output file with a reported speed of just barely over 1.0x, i.e. 30 FPS. Perfect, so why doesn't the C API give me the same results? One thing to note: I do get periodic errors from the MJPEG decoder (about every second), though I'm not sure whether they are a concern:

     [mjpeg @ 0x5590d6b7b0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 27 >= 27
     [mjpeg @ 0x5590d6b7b0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 30 >= 30
     ...

    I'm running on a Raspberry Pi CM4 with FFmpeg 6.1.1.

  • Segmentation fault with avcodec_encode_video2() while encoding H.264

    16 July 2015, by Baris Demiray

    I’m trying to convert a cv::Mat to an AVFrame to encode it then in H.264 and wanted to start from a simple example, as I’m a newbie in both. So I first read in a JPEG file, and then do the pixel format conversion with sws_scale() from AV_PIX_FMT_BGR24 to AV_PIX_FMT_YUV420P keeping the dimensions the same, and it all goes fine until I call avcodec_encode_video2().

    I read quite a few discussions regarding AVFrame allocation, and the question ’segmetation fault while avcodec_encode_video2’ seemed like a match, but I just can’t see what I’m missing or getting wrong.

    Here is the minimal code with which you can reproduce the crash; it should be compiled with,

    g++ -o OpenCV2FFmpeg OpenCV2FFmpeg.cpp -lopencv_imgproc -lopencv_highgui -lopencv_core -lswscale -lavutil -lavcodec -lavformat

    Its output on my system,

    cv::Mat [width=420, height=315, depth=0, channels=3, step=1260]
    I'll soon crash..
    Segmentation fault

    And that sample.jpg file’s details by identify tool,

    ~temporary/sample.jpg JPEG 420x315 420x315+0+0 8-bit sRGB 38.3KB 0.000u 0:00.000

    Please note that I’m trying to create a video out of a single image, just to keep things simple.

    #include <iostream>
    #include <cassert>
    using namespace std;

    extern "C" {
        #include <libavcodec/avcodec.h>
        #include <libswscale/swscale.h>
        #include <libavformat/avformat.h>
    }

     #include <opencv2/core/core.hpp>
     #include <opencv2/highgui/highgui.hpp>

    const string TEST_IMAGE = "/home/baris/temporary/sample.jpg";

    int main(int /*argc*/, char** argv)
    {
       av_register_all();
       avcodec_register_all();

       /**
        * Initialise the encoder
        */
       AVCodec *h264encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
       AVFormatContext *cv2avFormatContext = avformat_alloc_context();

       /**
        * Create a stream and allocate frames
        */
       AVStream *h264outputstream = avformat_new_stream(cv2avFormatContext, h264encoder);
       avcodec_get_context_defaults3(h264outputstream->codec, h264encoder);
       AVFrame *sourceAvFrame = av_frame_alloc(), *destAvFrame = av_frame_alloc();
       int got_frame;

       /**
        * Pixel formats for the input and the output
        */
       AVPixelFormat sourcePixelFormat = AV_PIX_FMT_BGR24;
       AVPixelFormat destPixelFormat = AV_PIX_FMT_YUV420P;

       /**
        * Create cv::Mat
        */
       cv::Mat cvFrame = cv::imread(TEST_IMAGE, CV_LOAD_IMAGE_COLOR);
       int width = cvFrame.size().width, height = cvFrame.size().height;
        cerr << "cv::Mat [width=" << width << ", height=" << height << ", depth=" << cvFrame.depth() << ", channels=" << cvFrame.channels() << ", step=" << cvFrame.step << "]" << endl;

       h264outputstream->codec->pix_fmt = destPixelFormat;
       h264outputstream->codec->width = cvFrame.cols;
       h264outputstream->codec->height = cvFrame.rows;

       /**
        * Prepare the conversion context
        */
       SwsContext *bgr2yuvcontext = sws_getContext(width, height,
                                                   sourcePixelFormat,
                                                   h264outputstream->codec->width, h264outputstream->codec->height,
                                                   h264outputstream->codec->pix_fmt,
                                                   SWS_BICUBIC, NULL, NULL, NULL);

       /**
        * Convert and encode frames
        */
        for (uint i=0; i < 250; i++)
       {
           /**
            * Allocate source frame, i.e. input to sws_scale()
            */
           avpicture_alloc((AVPicture*)sourceAvFrame, sourcePixelFormat, width, height);

            for (int h = 0; h < height; h++)
                memcpy(&(sourceAvFrame->data[0][h*sourceAvFrame->linesize[0]]), &(cvFrame.data[h*cvFrame.step]), width*3);

           /**
            * Allocate destination frame, i.e. output from sws_scale()
            */
           avpicture_alloc((AVPicture *)destAvFrame, destPixelFormat, width, height);

           sws_scale(bgr2yuvcontext, sourceAvFrame->data, sourceAvFrame->linesize,
                     0, height, destAvFrame->data, destAvFrame->linesize);

           /**
            * Prepare an AVPacket for encoded output
            */
           AVPacket avEncodedPacket;
            av_init_packet(&avEncodedPacket);
            avEncodedPacket.data = NULL;
            avEncodedPacket.size = 0;
            // av_free_packet(&avEncodedPacket); w/ or w/o result doesn't change

           cerr &lt;&lt; "I'll soon crash.." &lt;&lt; endl;
           if (avcodec_encode_video2(h264outputstream->codec, &amp;avEncodedPacket, destAvFrame, &amp;got_frame) &lt; 0)
               exit(1);

           cerr &lt;&lt; "Checking if we have a frame" &lt;&lt; endl;
           if (got_frame)
                av_write_frame(cv2avFormatContext, &avEncodedPacket);

            av_free_packet(&avEncodedPacket);
           av_frame_free(&amp;sourceAvFrame);
           av_frame_free(&amp;destAvFrame);
       }
    }

    Thanks in advance!

    EDIT: And the stack trace after the crash,

    Thread 2 (Thread 0x7fffe5506700 (LWP 10005)):
    #0  0x00007ffff4bf6c5d in poll () at /lib64/libc.so.6
    #1  0x00007fffe9073268 in  () at /usr/lib64/libusb-1.0.so.0
    #2  0x00007ffff47010a4 in start_thread () at /lib64/libpthread.so.0
    #3  0x00007ffff4bff08d in clone () at /lib64/libc.so.6

    Thread 1 (Thread 0x7ffff7f869c0 (LWP 10001)):
    #0  0x00007ffff5ecc7dc in avcodec_encode_video2 () at /usr/lib64/libavcodec.so.56
    #1  0x00000000004019b6 in main(int, char**) (argv=0x7fffffffd3d8) at ../src/OpenCV2FFmpeg.cpp:99

    EDIT2: The problem was that I hadn’t avcodec_open2()’d the codec, as spotted by Ronald. The final version of the code is at https://github.com/barisdemiray/opencv2ffmpeg/, with leaks and probably other problems, hoping that I’ll improve it while learning both libraries.