Media (1)

Keyword: - Tags - /ticket

Other articles (94)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Writing a news item

    21 June 2013

    Present the changes on your MediaSPIP site, or news about your projects, through the news section.
    In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news item creation form.
    News item creation form: for a document of type news item, the default fields are: Publication date (customise the publication date) (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.

On other sites (5676)

  • MJPEG decoding is 3x slower when opening a V4L2 input device [closed]

    26 October 2024, by Xenonic

    I'm trying to decode an MJPEG video stream coming from a webcam, but I'm hitting some performance blockers when using FFmpeg's C API in my application. I've recreated the problem using the example video decoder, where I simply open the V4L2 input device, read packets, and push them to the decoder. What's strange is that if I get my input packets from the V4L2 device instead of from a file, the avcodec_send_packet call to the decoder is nearly 3x slower. After further poking around, I narrowed the issue down to whether or not I open the V4L2 device at all.

    Let's look at a minimal example demonstrating this behavior:

    extern "C"
    {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>
    #include <libavdevice/avdevice.h>
    }

    // Standard headers used below for fprintf, exit and memset.
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    #define INBUF_SIZE 4096

    static void decode(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt)
    {
        if (avcodec_send_packet(dec_ctx, pkt) < 0)
            exit(1);

        int ret = 0;
        while (ret >= 0) {
            ret = avcodec_receive_frame(dec_ctx, frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return;
            else if (ret < 0)
                exit(1);

            // Here we'd save off the decoded frame, but that's not necessary for the example.
        }
    }

    int main(int argc, char **argv)
    {
        const char *filename;
        const AVCodec *codec;
        AVCodecParserContext *parser;
        AVCodecContext *c = NULL;
        FILE *f;
        AVFrame *frame;
        uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
        uint8_t *data;
        size_t   data_size;
        int ret;
        int eof;
        AVPacket *pkt;

        filename = argv[1];

        pkt = av_packet_alloc();
        if (!pkt)
            exit(1);

        /* set end of buffer to 0 (this ensures that no overreading happens for damaged MPEG streams) */
        memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);

        // Use MJPEG instead of the example's MPEG1
        //codec = avcodec_find_decoder(AV_CODEC_ID_MPEG1VIDEO);
        codec = avcodec_find_decoder(AV_CODEC_ID_MJPEG);
        if (!codec) {
            fprintf(stderr, "Codec not found\n");
            exit(1);
        }

        parser = av_parser_init(codec->id);
        if (!parser) {
            fprintf(stderr, "parser not found\n");
            exit(1);
        }

        c = avcodec_alloc_context3(codec);
        if (!c) {
            fprintf(stderr, "Could not allocate video codec context\n");
            exit(1);
        }

        if (avcodec_open2(c, codec, NULL) < 0) {
            fprintf(stderr, "Could not open codec\n");
            exit(1);
        }

        c->pix_fmt = AV_PIX_FMT_YUVJ422P;

        f = fopen(filename, "rb");
        if (!f) {
            fprintf(stderr, "Could not open %s\n", filename);
            exit(1);
        }

        frame = av_frame_alloc();
        if (!frame) {
            fprintf(stderr, "Could not allocate video frame\n");
            exit(1);
        }

        avdevice_register_all();
        auto* inputFormat = av_find_input_format("v4l2");
        AVDictionary* options = nullptr;
        av_dict_set(&options, "input_format", "mjpeg", 0);
        av_dict_set(&options, "video_size", "1920x1080", 0);

        AVFormatContext* fmtCtx = nullptr;

        // Commenting this line out results in fast encoding!
        // Notice how fmtCtx is not even used anywhere, we still read packets from the file
        avformat_open_input(&fmtCtx, "/dev/video0", inputFormat, &options);

        // Just parse packets from a file and send them to the decoder.
        do {
            data_size = fread(inbuf, 1, INBUF_SIZE, f);
            if (ferror(f))
                break;
            eof = !data_size;

            data = inbuf;
            while (data_size > 0 || eof) {
                ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
                                       data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
                if (ret < 0) {
                    fprintf(stderr, "Error while parsing\n");
                    exit(1);
                }
                data      += ret;
                data_size -= ret;

                if (pkt->size)
                    decode(c, frame, pkt);
                else if (eof)
                    break;
            }
        } while (!eof);

        return 0;
    }

    Here's a histogram of the CPU time spent in that avcodec_send_packet call, with and without opening the device (i.e. with the avformat_open_input call above commented out).

    Without opening the V4L2 device:

    [histogram: fread_cpu]

    With the V4L2 device opened:

    [histogram: webcam_cpu]

    Interestingly, a significant number of calls do land in that 25 ms time bin, but most of them take 78 ms... why?

    So what's going on here? Why does opening the device destroy my decode performance?

    Additionally, if I run a seemingly equivalent pipeline through the ffmpeg tool itself, I don't hit this problem. Running this command:

    ffmpeg -f v4l2 -input_format mjpeg -video_size 1920x1080 -r 30 -c:v mjpeg -i /dev/video0 -c:v copy out.mjpeg

    generates an output file with a reported speed of just barely over 1.0x, i.e. 30 FPS. Perfect, so why doesn't the C API give me the same results? One thing to note is that I do get periodic errors from the MJPEG decoder (about one per second); I'm not sure whether these are a concern or not:

    [mjpeg @ 0x5590d6b7b0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 27 >= 27
    [mjpeg @ 0x5590d6b7b0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 30 >= 30
    ...

    I'm running on a Raspberry Pi CM4 with FFmpeg 6.1.1.

  • How to compress videos on AMD RX 7900 XTX with ffmpeg? [closed]

    19 November 2024, by cprn

    On Nvidia's GTX 1070 I used to run the command below, and it would produce a way smaller file (often less than half of the original) without any visible degradation (at least to my eyes):

    ffmpeg -hwaccel cuda -i file.mp4 -c:v hevc_nvenc -crf 20 file.small.mp4

    Now I switched to AMD's RX 7900 XTX and this command obviously doesn't work.

    What's the "equivalent" of that command? As in: getting a way smaller file with seemingly no quality loss.

    What I tried:

    1. av1_nvenc ends up in errors:

    ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i file.mp4 -c:v av1_nvenc file.small.mp4

    Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scale_0'
    [vf#0:0 @ 0x60f7c1aa7d80] Error reinitializing filters!
    [vf#0:0 @ 0x60f7c1aa7d80] Task finished with error code: -38 (Function not implemented)
    [vf#0:0 @ 0x60f7c1aa7d80] Terminating thread with return code -38 (Function not implemented)
    [vost#0:0/av1_nvenc @ 0x60f7c1b10040] Could not open encoder before EOF
    [vost#0:0/av1_nvenc @ 0x60f7c1b10040] Task finished with error code: -22 (Invalid argument)
    [vost#0:0/av1_nvenc @ 0x60f7c1b10040] Terminating thread with return code -22 (Invalid argument)
    [out#0/mp4 @ 0x60f7c1b0d680] Nothing was written into output file, because at least one of its streams received no packets.

    2. hevc_vaapi goes through the encoding, but produces 20% bigger files instead of smaller:

    ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i file.mp4 -c:v hevc_vaapi file.small.mp4

    &#xA;

    No idea what I'm doing. I know next to nothing about video encoding, just what I read in the documentation, and I'm stuck. Also, I'm on Linux, but this shouldn't matter, I think.
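
    For what it's worth, one plausible direction (a sketch only, untested on this card, and the QP value is a guess): hevc_vaapi doesn't use -crf; constant-quality output is requested through the CQP rate-control mode and -qp, where a lower QP means higher quality and a larger file.

    # Sketch, not verified on this hardware; tune the QP value to taste (lower = better quality, bigger file).
    ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i file.mp4 -c:v hevc_vaapi -rc_mode CQP -qp 25 file.small.mp4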

    FFmpeg info:

    ❯ ffmpeg -version
    ffmpeg version n7.1 Copyright (c) 2000-2024 the FFmpeg developers
    built with gcc 14.2.1 (GCC) 20240910
    configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-amf --enable-avisynth --enable-cuda-llvm --enable-lto --enable-fontconfig --enable-frei0r --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libdav1d --enable-libdrm --enable-libdvdnav --enable-libdvdread --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgsm --enable-libharfbuzz --enable-libiec61883 --enable-libjack --enable-libjxl --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libplacebo --enable-libpulse --enable-librav1e --enable-librsvg --enable-librubberband --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpl --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-nvdec --enable-nvenc --enable-opencl --enable-opengl --enable-shared --enable-vapoursynth --enable-version3 --enable-vulkan
    libavutil      59. 39.100 / 59. 39.100
    libavcodec     61. 19.100 / 61. 19.100
    libavformat    61.  7.100 / 61.  7.100
    libavdevice    61.  3.100 / 61.  3.100
    libavfilter    10.  4.100 / 10.  4.100
    libswscale      8.  3.100 /  8.  3.100
    libswresample   5.  3.100 /  5.  3.100
    libpostproc    58.  3.100 / 58.  3.100

  • FFmpeg H.264 NVENC - high444p profile not working at 1920x1080 via library but works via command line encoding

    10 December 2024, by Vivek Nathani

    I am building a real-time desktop streaming application in Rust and I am using FFmpeg (H.264 NVENC) to encode the raw frames captured from the desktop as BGRA.

    I am able to get it to work with profiles such as baseline, main, and high, but nothing seems to work with high444p, which is what I need for 4:4:4 chroma subsampling.

    Below is the setup I have for constructing the encoder. I am using the rust-ac-ffmpeg crate to do this.

    builder = builder
        .pixel_format("bgra")
        .height(1080)
        .width(1920)
        .set_option("profile", "high444p")
        .set_option("preset", "medium")
        .set_option("tune", "hq")
        .set_option("b:v", "30M")
        .set_option("maxrate", "30M")
        .set_option("bufsize", "60M")
        .set_option("framerate", "60")
        .set_option("pix_fmt", "yuv444p");

    No matter what combination of settings I try, I always get a single-line error:

    [h264_nvenc @ 0000020bf69f6680] InitializeEncoder failed: invalid param (8): Invalid Level.

    I have also tried setting the level explicitly: 4.1, 4.2, 5.1, 5.2, 6.1 and 6.2.

    The only case where this does work is when the resolution is low, say around 500x500 or 600x800, but that is not suitable for my use case.

    Moreover, the equivalent command just works right out of the box, on the same machine with the same ffmpeg version, running against an output.raw file that is just a dump of the frames captured by my screen-capture mechanism:

    ffmpeg -f rawvideo -pix_fmt bgra -s 1920x1080 -framerate 60 -i input.raw -c:v h264_nvenc -preset p1 -profile:v high444p -pix_fmt yuv444p -b:v 30M -maxrate 30M -bufsize 60M output.mp4
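
    For comparison, mirroring that working command's flags through the same builder would presumably look like the sketch below (untested; I'm assuming the "preset" and "level" option names pass through to h264_nvenc the same way the options already used above do).

    // Sketch only: map the working CLI flags onto the builder (untested assumption).
    builder = builder
        .pixel_format("bgra")
        .height(1080)
        .width(1920)
        .set_option("profile", "high444p")
        .set_option("preset", "p1")      // the working CLI run uses -preset p1
        .set_option("level", "auto")     // let NVENC pick the level instead of forcing one
        .set_option("b:v", "30M")
        .set_option("maxrate", "30M")
        .set_option("bufsize", "60M")
        .set_option("pix_fmt", "yuv444p");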
