Other articles (76)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Submit enhancements and plugins

    13 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know, and its integration into the core MediaSPIP functionality will be considered.
    You can use the development discussion list to request help with creating a plugin. As MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those used by the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides an API for validating fields (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (11743)

  • swscale/range_convert: fix mpeg ranges in yuv range conversion for non-8-bit pixel formats

    18 September 2024, by Ramiro Polla
    swscale/range_convert: fix mpeg ranges in yuv range conversion for non-8-bit pixel formats

    There is an issue with the constants used in YUV to YUV range conversion,
    where the upper bound is not respected when converting to mpeg range.

    With this commit, the constants are calculated at runtime, depending on
    the bit depth. This approach also allows us to more easily understand how
    the constants are derived.

    For bit depths <= 14, the number of fixed-point bits has been set to 14
    for all conversions, to simplify the code.
    For bit depths > 14, the number of fixed-point bits has been raised to
    18, so that the conversion is accurate enough for the mpeg range to be
    respected.
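
    As an illustration of the derivation (a minimal sketch, not the actual
    libswscale code; the names and layout here are hypothetical), the
    constants for a limited (mpeg) to full (jpeg) luma conversion at bit
    depth n could be computed like this:

    /* Hypothetical sketch: limited range is [16, 235] << (n - 8),
     * full range is [0, 2^n - 1]; valid for n <= 16. */
    #include <stdint.h>

    typedef struct {
        uint32_t coeff;  /* fixed-point multiplier */
        int64_t  offset; /* additive term, pre-scaled, includes rounding */
        int      shift;  /* fixed-point bits: 14, or 18 above 14-bit */
    } RangeConsts;

    static RangeConsts lum_limited_to_full(int n)
    {
        RangeConsts c;
        c.shift = (n <= 14) ? 14 : 18;
        int64_t ymin = 16  << (n - 8);   /* limited-range black */
        int64_t ymax = 235 << (n - 8);   /* limited-range white */
        int64_t vmax = (1 << n) - 1;     /* full-range maximum  */
        c.coeff  = (uint32_t)((vmax << c.shift) / (ymax - ymin));
        c.offset = -ymin * (int64_t)c.coeff + (1LL << (c.shift - 1));
        return c;  /* apply as: out = (in * coeff + offset) >> shift */
    }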

    The convert functions now take the conversion constants (coeff and offset)
    as function arguments.
    For bit depths <= 14, coeff is unsigned 16-bit and offset is 32-bit.
    For bit depths > 14, coeff is unsigned 32-bit and offset is 64-bit.
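
    For instance (a hypothetical shape only, not the real libswscale
    signature), a per-line helper taking the constants as arguments could
    look like the following; the real code distinguishes the narrower types
    for bit depths <= 14 as described above:

    /* Hypothetical per-line helper: the constants arrive as arguments,
     * so one body can serve every bit depth. */
    static void lumRangeConvert_line(int32_t *dst, int width,
                                     uint32_t coeff, int64_t offset, int shift)
    {
        for (int i = 0; i < width; i++)
            dst[i] = (int32_t)(((int64_t)dst[i] * coeff + offset) >> shift);
    }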

    x86_64:
    chrRangeFromJpeg8_1920_c: 2127.4 2125.0 (1.00x)
    chrRangeFromJpeg16_1920_c: 2325.2 2127.2 (1.09x)
    chrRangeToJpeg8_1920_c: 3166.9 3168.7 (1.00x)
    chrRangeToJpeg16_1920_c: 2152.4 3164.8 (0.68x)
    lumRangeFromJpeg8_1920_c: 1263.0 1302.5 (0.97x)
    lumRangeFromJpeg16_1920_c: 1080.5 1299.2 (0.83x)
    lumRangeToJpeg8_1920_c: 1886.8 2112.2 (0.89x)
    lumRangeToJpeg16_1920_c: 1077.0 1906.5 (0.56x)

    aarch64 A55:
    chrRangeFromJpeg8_1920_c: 28835.2 28835.6 (1.00x)
    chrRangeFromJpeg16_1920_c: 28839.8 32680.8 (0.88x)
    chrRangeToJpeg8_1920_c: 23074.7 23075.4 (1.00x)
    chrRangeToJpeg16_1920_c: 17318.9 24996.0 (0.69x)
    lumRangeFromJpeg8_1920_c: 15389.7 15384.5 (1.00x)
    lumRangeFromJpeg16_1920_c: 15388.2 17306.7 (0.89x)
    lumRangeToJpeg8_1920_c: 19227.8 19226.6 (1.00x)
    lumRangeToJpeg16_1920_c: 15387.0 21146.3 (0.73x)

    aarch64 A76:
    chrRangeFromJpeg8_1920_c: 6324.4 6268.1 (1.01x)
    chrRangeFromJpeg16_1920_c: 6339.9 11521.5 (0.55x)
    chrRangeToJpeg8_1920_c: 9656.0 9612.8 (1.00x)
    chrRangeToJpeg16_1920_c: 6340.4 11651.8 (0.54x)
    lumRangeFromJpeg8_1920_c: 4422.0 4420.8 (1.00x)
    lumRangeFromJpeg16_1920_c: 4420.9 5762.0 (0.77x)
    lumRangeToJpeg8_1920_c: 5949.1 5977.5 (1.00x)
    lumRangeToJpeg16_1920_c: 4446.8 5946.2 (0.75x)

    NOTE: all SIMD optimizations for range_convert have been disabled.
    They will be re-enabled once they are fixed for each architecture.

    NOTE2: the same issue still exists in the rgb2yuv conversions; it is
    not addressed in this commit.

    • [DH] libswscale/aarch64/swscale.c
    • [DH] libswscale/hscale.c
    • [DH] libswscale/swscale.c
    • [DH] libswscale/swscale_internal.h
    • [DH] libswscale/x86/swscale.c
    • [DH] tests/checkasm/sw_range_convert.c
    • [DH] tests/ref/fate/filter-alphaextract_alphamerge_rgb
    • [DH] tests/ref/fate/filter-pixdesc-gray10be
    • [DH] tests/ref/fate/filter-pixdesc-gray10le
    • [DH] tests/ref/fate/filter-pixdesc-gray12be
    • [DH] tests/ref/fate/filter-pixdesc-gray12le
    • [DH] tests/ref/fate/filter-pixdesc-gray14be
    • [DH] tests/ref/fate/filter-pixdesc-gray14le
    • [DH] tests/ref/fate/filter-pixdesc-gray16be
    • [DH] tests/ref/fate/filter-pixdesc-gray16le
    • [DH] tests/ref/fate/filter-pixdesc-gray9be
    • [DH] tests/ref/fate/filter-pixdesc-gray9le
    • [DH] tests/ref/fate/filter-pixdesc-ya16be
    • [DH] tests/ref/fate/filter-pixdesc-ya16le
    • [DH] tests/ref/fate/filter-pixdesc-yuvj411p
    • [DH] tests/ref/fate/filter-pixdesc-yuvj420p
    • [DH] tests/ref/fate/filter-pixdesc-yuvj422p
    • [DH] tests/ref/fate/filter-pixdesc-yuvj440p
    • [DH] tests/ref/fate/filter-pixdesc-yuvj444p
    • [DH] tests/ref/fate/filter-pixfmts-copy
    • [DH] tests/ref/fate/filter-pixfmts-crop
    • [DH] tests/ref/fate/filter-pixfmts-field
    • [DH] tests/ref/fate/filter-pixfmts-fieldorder
    • [DH] tests/ref/fate/filter-pixfmts-hflip
    • [DH] tests/ref/fate/filter-pixfmts-il
    • [DH] tests/ref/fate/filter-pixfmts-lut
    • [DH] tests/ref/fate/filter-pixfmts-null
    • [DH] tests/ref/fate/filter-pixfmts-pad
    • [DH] tests/ref/fate/filter-pixfmts-pullup
    • [DH] tests/ref/fate/filter-pixfmts-rotate
    • [DH] tests/ref/fate/filter-pixfmts-scale
    • [DH] tests/ref/fate/filter-pixfmts-swapuv
    • [DH] tests/ref/fate/filter-pixfmts-tinterlace_cvlpf
    • [DH] tests/ref/fate/filter-pixfmts-tinterlace_merge
    • [DH] tests/ref/fate/filter-pixfmts-tinterlace_pad
    • [DH] tests/ref/fate/filter-pixfmts-tinterlace_vlpf
    • [DH] tests/ref/fate/filter-pixfmts-transpose
    • [DH] tests/ref/fate/filter-pixfmts-vflip
    • [DH] tests/ref/fate/fitsenc-gray
    • [DH] tests/ref/fate/fitsenc-gray16be
    • [DH] tests/ref/fate/gifenc-gray
    • [DH] tests/ref/fate/idroq-video-encode
    • [DH] tests/ref/fate/jpg-icc
    • [DH] tests/ref/fate/sws-yuv-colorspace
    • [DH] tests/ref/fate/sws-yuv-range
    • [DH] tests/ref/fate/vvc-conformance-SCALING_A_1
    • [DH] tests/ref/lavf/gray16be.fits
    • [DH] tests/ref/lavf/gray16be.pam
    • [DH] tests/ref/lavf/gray16be.png
    • [DH] tests/ref/lavf/jpg
    • [DH] tests/ref/lavf/smjpeg
    • [DH] tests/ref/pixfmt/gbrp-gray
    • [DH] tests/ref/pixfmt/gbrp-gray10be
    • [DH] tests/ref/pixfmt/gbrp-gray10le
    • [DH] tests/ref/pixfmt/gbrp-gray12be
    • [DH] tests/ref/pixfmt/gbrp-gray12le
    • [DH] tests/ref/pixfmt/gbrp-gray16be
    • [DH] tests/ref/pixfmt/gbrp-gray16le
    • [DH] tests/ref/pixfmt/gbrp-yuvj420p
    • [DH] tests/ref/pixfmt/gbrp-yuvj422p
    • [DH] tests/ref/pixfmt/gbrp-yuvj440p
    • [DH] tests/ref/pixfmt/gbrp-yuvj444p
    • [DH] tests/ref/pixfmt/gbrp10-gray
    • [DH] tests/ref/pixfmt/gbrp10-gray10be
    • [DH] tests/ref/pixfmt/gbrp10-gray10le
    • [DH] tests/ref/pixfmt/gbrp10-gray12be
    • [DH] tests/ref/pixfmt/gbrp10-gray12le
    • [DH] tests/ref/pixfmt/gbrp10-gray16be
    • [DH] tests/ref/pixfmt/gbrp10-gray16le
    • [DH] tests/ref/pixfmt/gbrp10-yuvj420p
    • [DH] tests/ref/pixfmt/gbrp10-yuvj422p
    • [DH] tests/ref/pixfmt/gbrp10-yuvj440p
    • [DH] tests/ref/pixfmt/gbrp10-yuvj444p
    • [DH] tests/ref/pixfmt/gbrp12-gray
    • [DH] tests/ref/pixfmt/gbrp12-gray10be
    • [DH] tests/ref/pixfmt/gbrp12-gray10le
    • [DH] tests/ref/pixfmt/gbrp12-gray12be
    • [DH] tests/ref/pixfmt/gbrp12-gray12le
    • [DH] tests/ref/pixfmt/gbrp12-gray16be
    • [DH] tests/ref/pixfmt/gbrp12-gray16le
    • [DH] tests/ref/pixfmt/gbrp12-yuvj420p
    • [DH] tests/ref/pixfmt/gbrp12-yuvj422p
    • [DH] tests/ref/pixfmt/gbrp12-yuvj440p
    • [DH] tests/ref/pixfmt/gbrp12-yuvj444p
    • [DH] tests/ref/pixfmt/gbrp16-gray16be
    • [DH] tests/ref/pixfmt/gbrp16-gray16le
    • [DH] tests/ref/pixfmt/rgb24-gray
    • [DH] tests/ref/pixfmt/rgb24-gray10be
    • [DH] tests/ref/pixfmt/rgb24-gray10le
    • [DH] tests/ref/pixfmt/rgb24-gray12be
    • [DH] tests/ref/pixfmt/rgb24-gray12le
    • [DH] tests/ref/pixfmt/rgb24-gray16be
    • [DH] tests/ref/pixfmt/rgb24-gray16le
    • [DH] tests/ref/pixfmt/rgb24-yuvj420p
    • [DH] tests/ref/pixfmt/rgb24-yuvj422p
    • [DH] tests/ref/pixfmt/rgb24-yuvj440p
    • [DH] tests/ref/pixfmt/rgb24-yuvj444p
    • [DH] tests/ref/pixfmt/rgb48-gray
    • [DH] tests/ref/pixfmt/rgb48-gray10be
    • [DH] tests/ref/pixfmt/rgb48-gray10le
    • [DH] tests/ref/pixfmt/rgb48-gray12be
    • [DH] tests/ref/pixfmt/rgb48-gray12le
    • [DH] tests/ref/pixfmt/rgb48-gray16be
    • [DH] tests/ref/pixfmt/rgb48-gray16le
    • [DH] tests/ref/pixfmt/rgb48-yuvj420p
    • [DH] tests/ref/pixfmt/rgb48-yuvj422p
    • [DH] tests/ref/pixfmt/rgb48-yuvj440p
    • [DH] tests/ref/pixfmt/rgb48-yuvj444p
    • [DH] tests/ref/pixfmt/yuv444p-gray10be
    • [DH] tests/ref/pixfmt/yuv444p-gray10le
    • [DH] tests/ref/pixfmt/yuv444p-gray12be
    • [DH] tests/ref/pixfmt/yuv444p-gray12le
    • [DH] tests/ref/pixfmt/yuv444p-gray16be
    • [DH] tests/ref/pixfmt/yuv444p-gray16le
  • Problems using FFmpeg / libavfilter for adding overlay to grabbed frames

    21 November 2024, by Michael

    On Windows, with the latest FFmpeg / libav (full build, non-free), a C/C++ app reads YUV420P frames from a frame grabber card.

    A bitmap (BGR24) overlay image from file should be drawn on every frame for the first 20 seconds via libavfilter. First, the BGR24 overlay image is converted to YUV420P via the format filter. Then the YUV420P frame from the frame grabber and the YUV420P overlay frame are pushed into the overlay filter.
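
    For reference, the intended topology, written as a single filtergraph
    string (hypothetical pad labels, but the same filters as in the code
    below), would be:

    /* [in] = main video buffer source, [ov] = overlay buffer source */
    const char *graph_desc =
        "[ov] format=yuv420p [ovf];"
        "[in][ovf] overlay=W-w:H-h:enable='between(t,0,20)':format=yuv420 [out]";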

    FFmpeg / libavfilter does not report any errors or warnings in the console / log. Trying to get the filtered frame out of the graph via av_buffersink_get_frame results in an EAGAIN return code.
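
    For completeness, the pull side follows the usual drain-loop pattern (a
    sketch, assuming buffersink_ctx and a reusable filtered_frame):

    /* EAGAIN from the sink only means the graph needs more input on one of
     * its sources before it can emit a frame; it is not a hard error. */
    for (;;) {
        int ret = av_buffersink_get_frame(buffersink_ctx, filtered_frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;                      /* feed more input / end of stream */
        if (ret < 0)
            return ret;                 /* real error */
        /* ... consume filtered_frame ... */
        av_frame_unref(filtered_frame);
    }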

    The frames from the frame grabber card are fine; they can be encoded or written to a .yuv file. The overlay frame itself is fine too.

    This is the complete private code (prototype: no style, memory leaks, ...):

    #define __STDC_LIMIT_MACROS
    #define __STDC_CONSTANT_MACROS

    #include <cstdio>
    #include <cstdint>
    #include 

    #include "../fgproto/include/SDL/SDL_video.h"
    #include 

    using namespace _DSHOWLIB_NAMESPACE;

    #ifdef _WIN32
    //Windows
    extern "C" {
    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
    #include "libswscale/swscale.h"
    #include "libavdevice/avdevice.h"
    #include "libavfilter/avfilter.h"
    #include <libavutil/log.h>
    #include <libavutil/mem.h>
    #include "libavfilter/buffersink.h"
    #include "libavfilter/buffersrc.h"
    #include "libavutil/opt.h"
    #include "libavutil/hwcontext_qsv.h"
    #include "SDL/SDL.h"
    };
    #endif
    #include <iostream>
    #include <fstream>

    void uSleep(double waitTimeInUs, LARGE_INTEGER frequency)
    {
        LARGE_INTEGER startTime, currentTime;

        QueryPerformanceCounter(&startTime);

        if (waitTimeInUs > 16500.0)
            Sleep(1);

        do
        {
            YieldProcessor();
            //Sleep(0);
            QueryPerformanceCounter(&currentTime);
        }
        while (waitTimeInUs > (currentTime.QuadPart - startTime.QuadPart) * 1000000.0 / frequency.QuadPart);
    }

    void check_error(int ret)
    {
        if (ret < 0)
        {
            char errbuf[128];
            int tmp = errno;
            av_strerror(ret, errbuf, sizeof(errbuf));
            std::cerr << "Error: " << errbuf << '\n';
            //exit(1);
        }
    }

    bool _isRunning = true;

    void swap_uv_planes(AVFrame* frame)
    {
        uint8_t* temp_plane = frame->data[1];
        frame->data[1] = frame->data[2];
        frame->data[2] = temp_plane;
    }

    typedef struct
    {
        const AVClass* avclass;
    } MyFilterGraphContext;

    static constexpr AVClass my_filter_graph_class =
    {
        .class_name = "MyFilterGraphContext",
        .item_name = av_default_item_name,
        .option = NULL,
        .version = LIBAVUTIL_VERSION_INT,
    };

    MyFilterGraphContext* init_log_context()
    {
        MyFilterGraphContext* ctx = static_cast<MyFilterGraphContext*>(av_mallocz(sizeof(*ctx)));

        if (!ctx)
        {
            av_log(nullptr, AV_LOG_ERROR, "Unable to allocate MyFilterGraphContext\n");
            return nullptr;
        }

        ctx->avclass = &my_filter_graph_class;
        return ctx;
    }

    int init_overlay_filter(AVFilterGraph** graph, AVFilterContext** src_ctx, AVFilterContext** overlay_src_ctx,
                            AVFilterContext** sink_ctx)
    {
        AVFilterGraph* filter_graph;
        AVFilterContext* buffersrc_ctx;
        AVFilterContext* overlay_buffersrc_ctx;
        AVFilterContext* buffersink_ctx;
        AVFilterContext* overlay_ctx;
        AVFilterContext* format_ctx;

        const AVFilter* buffersrc, * buffersink, * overlay_buffersrc, * overlay_filter, * format_filter;
        int ret;

        // Create the filter graph
        filter_graph = avfilter_graph_alloc();
        if (!filter_graph)
        {
            fprintf(stderr, "Unable to create filter graph.\n");
            return AVERROR(ENOMEM);
        }

        // Create buffer source filter for main video
        buffersrc = avfilter_get_by_name("buffer");
        if (!buffersrc)
        {
            fprintf(stderr, "Unable to find buffer filter.\n");
            return AVERROR_FILTER_NOT_FOUND;
        }

        // Create buffer source filter for overlay image
        overlay_buffersrc = avfilter_get_by_name("buffer");
        if (!overlay_buffersrc)
        {
            fprintf(stderr, "Unable to find buffer filter.\n");
            return AVERROR_FILTER_NOT_FOUND;
        }

        // Create buffer sink filter
        buffersink = avfilter_get_by_name("buffersink");
        if (!buffersink)
        {
            fprintf(stderr, "Unable to find buffersink filter.\n");
            return AVERROR_FILTER_NOT_FOUND;
        }

        // Create overlay filter
        overlay_filter = avfilter_get_by_name("overlay");
        if (!overlay_filter)
        {
            fprintf(stderr, "Unable to find overlay filter.\n");
            return AVERROR_FILTER_NOT_FOUND;
        }

        // Create format filter
        format_filter = avfilter_get_by_name("format");
        if (!format_filter)
        {
            fprintf(stderr, "Unable to find format filter.\n");
            return AVERROR_FILTER_NOT_FOUND;
        }

        // Initialize the main video buffer source
        char args[512];

        // Initialize the overlay buffer source
        snprintf(args, sizeof(args), "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1");

        ret = avfilter_graph_create_filter(&overlay_buffersrc_ctx, overlay_buffersrc, nullptr, args, nullptr,
            filter_graph);

        if (ret < 0)
        {
            fprintf(stderr, "Unable to create buffer source filter for overlay.\n");
            return ret;
        }

        snprintf(args, sizeof(args), "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1");

        ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, nullptr, args, nullptr, filter_graph);

        if (ret < 0)
        {
            fprintf(stderr, "Unable to create buffer source filter for main video.\n");
            return ret;
        }

        // Initialize the format filter to convert overlay image to yuv420p
        snprintf(args, sizeof(args), "pix_fmts=yuv420p");

        ret = avfilter_graph_create_filter(&format_ctx, format_filter, nullptr, args, nullptr, filter_graph);

        if (ret < 0)
        {
            fprintf(stderr, "Unable to create format filter.\n");
            return ret;
        }

        // Initialize the overlay filter
        ret = avfilter_graph_create_filter(&overlay_ctx, overlay_filter, nullptr, "W-w:H-h:enable='between(t,0,20)':format=yuv420", nullptr, filter_graph);
        if (ret < 0)
        {
            fprintf(stderr, "Unable to create overlay filter.\n");
            return ret;
        }

        // Initialize the buffer sink
        ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, nullptr, nullptr, nullptr, filter_graph);
        if (ret < 0)
        {
            fprintf(stderr, "Unable to create buffer sink filter.\n");
            return ret;
        }

        // Connect the filters
        ret = avfilter_link(overlay_buffersrc_ctx, 0, format_ctx, 0);

        if (ret >= 0)
        {
            ret = avfilter_link(buffersrc_ctx, 0, overlay_ctx, 0);
        }
        else
        {
            fprintf(stderr, "Unable to configure filter graph.\n");
            return ret;
        }

        if (ret >= 0)
        {
            ret = avfilter_link(format_ctx, 0, overlay_ctx, 1);
        }
        else
        {
            fprintf(stderr, "Unable to configure filter graph.\n");
            return ret;
        }

        if (ret >= 0)
        {
            if ((ret = avfilter_link(overlay_ctx, 0, buffersink_ctx, 0)) < 0)
            {
                fprintf(stderr, "Unable to link filter graph.\n");
                return ret;
            }
        }
        else
        {
            fprintf(stderr, "Unable to configure filter graph.\n");
            return ret;
        }

        MyFilterGraphContext* log_ctx = init_log_context();

        // Configure the filter graph
        if ((ret = avfilter_graph_config(filter_graph, log_ctx)) < 0)
        {
            fprintf(stderr, "Unable to configure filter graph.\n");
            return ret;
        }

        *graph = filter_graph;
        *src_ctx = buffersrc_ctx;
        *overlay_src_ctx = overlay_buffersrc_ctx;
        *sink_ctx = buffersink_ctx;

        return 0;
    }

    int main(int argc, char* argv[])
    {
        unsigned int videoIndex = 0;

        avdevice_register_all();

        av_log_set_level(AV_LOG_TRACE);

        const AVInputFormat* pFrameGrabberInputFormat = av_find_input_format("dshow");

        constexpr int frameGrabberPixelWidth = 1920;
        constexpr int frameGrabberPixelHeight = 1080;
        constexpr int frameGrabberFrameRate = 25;
        constexpr AVPixelFormat frameGrabberPixelFormat = AV_PIX_FMT_YUV420P;

        char shortStringBuffer[32];

        AVDictionary* pFrameGrabberOptions = nullptr;

        _snprintf_s(shortStringBuffer, sizeof(shortStringBuffer), "%dx%d", frameGrabberPixelWidth, frameGrabberPixelHeight);
        av_dict_set(&pFrameGrabberOptions, "video_size", shortStringBuffer, 0);

        _snprintf_s(shortStringBuffer, sizeof(shortStringBuffer), "%d", frameGrabberFrameRate);

        av_dict_set(&pFrameGrabberOptions, "framerate", shortStringBuffer, 0);
        av_dict_set(&pFrameGrabberOptions, "pixel_format", "yuv420p", 0);
        av_dict_set(&pFrameGrabberOptions, "rtbufsize", "128M", 0);

        AVFormatContext* pFrameGrabberFormatContext = avformat_alloc_context();

        pFrameGrabberFormatContext->flags = AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS;

        if (avformat_open_input(&pFrameGrabberFormatContext, "video=MZ0380 PCI, Analog 01 Capture",
                                pFrameGrabberInputFormat, &pFrameGrabberOptions) != 0)
        {
            std::cerr << "Couldn't open input stream." << '\n';
            return -1;
        }

        if (avformat_find_stream_info(pFrameGrabberFormatContext, nullptr) < 0)
        {
            std::cerr << "Couldn't find stream information." << '\n';
            return -1;
        }

        bool foundVideoStream = false;

        for (unsigned int loop_videoIndex = 0; loop_videoIndex < pFrameGrabberFormatContext->nb_streams; loop_videoIndex++)
        {
            if (pFrameGrabberFormatContext->streams[loop_videoIndex]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
            {
                videoIndex = loop_videoIndex;
                foundVideoStream = true;
                break;
            }
        }

        if (!foundVideoStream)
        {
            std::cerr << "Couldn't find a video stream." << '\n';
            return -1;
        }

        const AVCodec* pFrameGrabberCodec = avcodec_find_decoder(
            pFrameGrabberFormatContext->streams[videoIndex]->codecpar->codec_id);

        AVCodecContext* pFrameGrabberCodecContext = avcodec_alloc_context3(pFrameGrabberCodec);

        if (pFrameGrabberCodec == nullptr)
        {
            std::cerr << "Codec not found." << '\n';
            return -1;
        }

        pFrameGrabberCodecContext->pix_fmt = frameGrabberPixelFormat;
        pFrameGrabberCodecContext->width = frameGrabberPixelWidth;
        pFrameGrabberCodecContext->height = frameGrabberPixelHeight;

        int ret = avcodec_open2(pFrameGrabberCodecContext, pFrameGrabberCodec, nullptr);

        if (ret < 0)
        {
            std::cerr << "Could not open pVideoCodec." << '\n';
            return -1;
        }

        const char* outputFilePath = "c:\\temp\\output.mp4";
        constexpr int outputWidth = frameGrabberPixelWidth;
        constexpr int outputHeight = frameGrabberPixelHeight;
        constexpr int outputFrameRate = frameGrabberFrameRate;

        SwsContext* img_convert_ctx = sws_getContext(frameGrabberPixelWidth, frameGrabberPixelHeight,
                                                     frameGrabberPixelFormat, outputWidth, outputHeight, AV_PIX_FMT_NV12,
                                                     SWS_BICUBIC, nullptr, nullptr, nullptr);

        constexpr double frameTimeinUs = 1000000.0 / frameGrabberFrameRate;

        LARGE_INTEGER frequency;
        LARGE_INTEGER lastTime, currentTime;

        QueryPerformanceFrequency(&frequency);
        QueryPerformanceCounter(&lastTime);

        //SDL----------------------------

        if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS))
        {
            printf("Could not initialize SDL - %s\n", SDL_GetError());
            return -1;
        }

        SDL_Window* screen = SDL_CreateWindow("3P FrameGrabber SuperApp", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                              frameGrabberPixelWidth, frameGrabberPixelHeight,
                                              SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL);

        if (!screen)
        {
            printf("SDL: could not set video mode - exiting:%s\n", SDL_GetError());
            return -1;
        }

        SDL_Renderer* renderer = SDL_CreateRenderer(screen, -1, 0);

        if (!renderer)
        {
            printf("SDL: could not create renderer - exiting:%s\n", SDL_GetError());
            return -1;
        }

        SDL_Texture* texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING,
                                                 frameGrabberPixelWidth, frameGrabberPixelHeight);

        if (!texture)
        {
            printf("SDL: could not create texture - exiting:%s\n", SDL_GetError());
            return -1;
        }

        SDL_Event event;

        //SDL End------------------------

        const AVCodec* pVideoCodec = avcodec_find_encoder_by_name("h264_qsv");

        if (!pVideoCodec)
        {
            std::cerr << "Codec not found" << '\n';
            return 1;
        }

        AVCodecContext* pVideoCodecContext = avcodec_alloc_context3(pVideoCodec);

        if (!pVideoCodecContext)
        {
            std::cerr << "Could not allocate video pVideoCodec context" << '\n';
            return 1;
        }

        AVBufferRef* pHardwareDeviceContextRef = nullptr;

        ret = av_hwdevice_ctx_create(&pHardwareDeviceContextRef, AV_HWDEVICE_TYPE_QSV,
                                     "PCI\\VEN_8086&DEV_5912&SUBSYS_310217AA&REV_04\\3&11583659&0&10", nullptr, 0);
        check_error(ret);

        pVideoCodecContext->bit_rate = static_cast<int64_t>(outputWidth * outputHeight) * 2;
        pVideoCodecContext->width = outputWidth;
        pVideoCodecContext->height = outputHeight;
        pVideoCodecContext->framerate = {outputFrameRate, 1};
        pVideoCodecContext->time_base = {1, outputFrameRate};
        pVideoCodecContext->pix_fmt = AV_PIX_FMT_QSV;
        pVideoCodecContext->max_b_frames = 0;

        AVBufferRef* pHardwareFramesContextRef = av_hwframe_ctx_alloc(pHardwareDeviceContextRef);

        AVHWFramesContext* pHardwareFramesContext = reinterpret_cast<AVHWFramesContext*>(pHardwareFramesContextRef->data);

        pHardwareFramesContext->format = AV_PIX_FMT_QSV;
        pHardwareFramesContext->sw_format = AV_PIX_FMT_NV12;
        pHardwareFramesContext->width = outputWidth;
        pHardwareFramesContext->height = outputHeight;
        pHardwareFramesContext->initial_pool_size = 20;

        ret = av_hwframe_ctx_init(pHardwareFramesContextRef);
        check_error(ret);

        pVideoCodecContext->hw_device_ctx = nullptr;
        pVideoCodecContext->hw_frames_ctx = av_buffer_ref(pHardwareFramesContextRef);

        ret = avcodec_open2(pVideoCodecContext, pVideoCodec, nullptr); //&pVideoOptionsDict);
        check_error(ret);

        AVFormatContext* pVideoFormatContext = nullptr;

        avformat_alloc_output_context2(&pVideoFormatContext, nullptr, nullptr, outputFilePath);

        if (!pVideoFormatContext)
        {
            std::cerr << "Could not create output context" << '\n';
            return 1;
        }

        const AVOutputFormat* pVideoOutputFormat = pVideoFormatContext->oformat;

        if (pVideoFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
        {
            pVideoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        }

        const AVStream* pVideoStream = avformat_new_stream(pVideoFormatContext, pVideoCodec);

        if (!pVideoStream)
        {
            std::cerr << "Could not allocate stream" << '\n';
            return 1;
        }

        ret = avcodec_parameters_from_context(pVideoStream->codecpar, pVideoCodecContext);

        check_error(ret);

        if (!(pVideoOutputFormat->flags & AVFMT_NOFILE))
        {
            ret = avio_open(&pVideoFormatContext->pb, outputFilePath, AVIO_FLAG_WRITE);
            check_error(ret);
        }

        ret = avformat_write_header(pVideoFormatContext, nullptr);

        check_error(ret);

        AVFrame* pHardwareFrame = av_frame_alloc();

        if (av_hwframe_get_buffer(pVideoCodecContext->hw_frames_ctx, pHardwareFrame, 0) < 0)
        {
            std::cerr << "Error allocating a hw frame" << '\n';
            return -1;
        }

        AVFrame* pFrameGrabberFrame = av_frame_alloc();
        AVPacket* pFrameGrabberPacket = av_packet_alloc();

        AVPacket* pVideoPacket = av_packet_alloc();
        AVFrame* pVideoFrame = av_frame_alloc();

        AVFrame* pSwappedFrame = av_frame_alloc();
        av_frame_get_buffer(pSwappedFrame, 32);

        INT64 frameCount = 0;

        pFrameGrabberCodecContext->time_base = {1, frameGrabberFrameRate};

        AVFilterContext* buffersrc_ctx = nullptr;
        AVFilterContext* buffersink_ctx = nullptr;
        AVFilterContext* overlay_src_ctx = nullptr;
        AVFilterGraph* filter_graph = nullptr;

        if ((ret = init_overlay_filter(&filter_graph, &buffersrc_ctx, &overlay_src_ctx, &buffersink_ctx)) < 0)
        {
            return ret;
        }

        // Load overlay image
        AVFormatContext* overlay_fmt_ctx = nullptr;
        AVCodecContext* overlay_codec_ctx = nullptr;
        const AVCodec* overlay_codec = nullptr;
        AVFrame* overlay_frame = nullptr;
        AVDictionary* overlay_options = nullptr;

        const char* overlay_image_filename = "c:\\temp\\overlay.bmp";

        av_dict_set(&overlay_options, "video_size", "165x165", 0);
        av_dict_set(&overlay_options, "pixel_format", "bgr24", 0);

        if ((ret = avformat_open_input(&overlay_fmt_ctx, overlay_image_filename, nullptr, &overlay_options)) < 0)
        {
            return ret;
        }

        if ((ret = avformat_find_stream_info(overlay_fmt_ctx, nullptr)) < 0)
        {
            return ret;
        }

        int overlay_video_stream_index = -1;

        for (int i = 0; i < overlay_fmt_ctx->nb_streams; i++)
        {
            if (overlay_fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
            {
                overlay_video_stream_index = i;
                break;
            }
        }

        if (overlay_video_stream_index == -1)
        {
            return -1;
        }

        overlay_codec = avcodec_find_decoder(overlay_fmt_ctx->streams[overlay_video_stream_index]->codecpar->codec_id);

        if (!overlay_codec)
        {
            fprintf(stderr, "Overlay codec not found.\n");
            return -1;
        }

        overlay_codec_ctx = avcodec_alloc_context3(overlay_codec);

        if (!overlay_codec_ctx)
        {
            fprintf(stderr, "Could not allocate overlay codec context.\n");
            return AVERROR(ENOMEM);
        }

        avcodec_parameters_to_context(overlay_codec_ctx, overlay_fmt_ctx->streams[overlay_video_stream_index]->codecpar);

        if ((ret = avcodec_open2(overlay_codec_ctx, overlay_codec, nullptr)) < 0)
        {
            return ret;
        }

        overlay_frame = av_frame_alloc();

        if (!overlay_frame)
        {
            fprintf(stderr, "Could not allocate overlay frame.\n");
            return AVERROR(ENOMEM);
        }

        AVPacket* overlay_packet = av_packet_alloc();

        // Read frames from the file
        while (av_read_frame(overlay_fmt_ctx, overlay_packet) >= 0)
        {
            if (overlay_packet->stream_index == overlay_video_stream_index)
            {
                ret = avcodec_send_packet(overlay_codec_ctx, overlay_packet);

                if (ret < 0)
                {
                    break;
                }

                ret = avcodec_receive_frame(overlay_codec_ctx, overlay_frame);
                if (ret >= 0)
                {
                    break; // We only need the first frame for the overlay
                }

                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                {
                    continue;
                }

                break;
            }

            av_packet_unref(overlay_packet);
        }

        av_packet_unref(overlay_packet);

        while (_isRunning)
        {
            while (SDL_PollEvent(&event) != 0)
            {
                switch (event.type)
                {
                case SDL_QUIT:
                    _isRunning = false;
                    break;
                case SDL_KEYDOWN:
                    if (event.key.keysym.sym == SDLK_ESCAPE)
                        _isRunning = false;
                    break;
                default: ;
                }
            }

            if (av_read_frame(pFrameGrabberFormatContext, pFrameGrabberPacket) == 0)
            {
                if (pFrameGrabberPacket->stream_index == videoIndex)
                {
                    ret = avcodec_send_packet(pFrameGrabberCodecContext, pFrameGrabberPacket);

                    if (ret < 0)
                    {
                        std::cerr << "Error sending a packet for decoding!" << '\n';
                        return -1;
                    }

                    ret = avcodec_receive_frame(pFrameGrabberCodecContext, pFrameGrabberFrame);

                    if (ret != 0)
                    {
                        std::cerr << "Receiving frame failed!" << '\n';
                        return -1;
                    }

                    if (ret == AVERROR(EAGAIN) || ret == AVERROR(AVERROR_EOF))
                    {
                        std::cout << "End of stream detected. Exiting now." << '\n';
                        return 0;
                    }

                    if (ret != 0)
                    {
                        std::cerr << "Decode Error!" << '\n';
                        return -1;
                    }

                    // Feed the frame into the filter graph
                    if (av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
                    {
                        fprintf(stderr, "Error while feeding the filtergraph\n");
                        break;
                    }

                    // Push the overlay frame to the overlay_src_ctx
                    ret = av_buffersrc_add_frame_flags(overlay_src_ctx, overlay_frame, AV_BUFFERSRC_FLAG_KEEP_REF);
                    if (ret < 0)
                    {
                        fprintf(stderr, "Error while feeding the filtergraph\n");
                        break;
                    }

                    // Pull filtered frame from the filter graph
                    AVFrame* filtered_frame = av_frame_alloc();

                    ret = av_buffersink_get_frame(buffersink_ctx, filtered_frame);

                    if (ret < 0)
                    {
                        check_error(ret);
                    }

                    QueryPerformanceCounter(&currentTime);

                    double elapsedTime = (currentTime.QuadPart - lastTime.QuadPart) * 1000000.0 / frequency.QuadPart;

                    if (elapsedTime > 0.0 && elapsedTime < frameTimeinUs)
                    {
                        uSleep(frameTimeinUs - elapsedTime, frequency);
                    }

                    SDL_UpdateTexture(texture, nullptr, filtered_frame->data[0], filtered_frame->linesize[0]);
                    SDL_RenderClear(renderer);
                    SDL_RenderCopy(renderer, texture, nullptr, nullptr);
                    SDL_RenderPresent(renderer);

                    QueryPerformanceCounter(&lastTime);

                    swap_uv_planes(filtered_frame);

                    ret = sws_scale_frame(img_convert_ctx, pVideoFrame, filtered_frame);

                    if (ret < 0)
                    {
                        std::cerr << "Scaling frame for Intel QS Encoder did fail!" << '\n';
                        return -1;
                    }

                    if (av_hwframe_transfer_data(pHardwareFrame, pVideoFrame, 0) < 0)
                    {
                        std::cerr << "Error transferring frame data to hw frame!" << '\n';
                        return -1;
                    }

                    pHardwareFrame->pts = frameCount++;

                    ret = avcodec_send_frame(pVideoCodecContext, pHardwareFrame);

                    if (ret < 0)
                    {
                        std::cerr << "Error sending a frame for encoding" << '\n';
                        check_error(ret);
                    }

                    av_packet_unref(pVideoPacket);

                    while (ret >= 0)
                    {
                        ret = avcodec_receive_packet(pVideoCodecContext, pVideoPacket);

                        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                        {
                            break;
                        }

                        if (ret < 0)
                        {
                            std::cerr << "Error during encoding" << '\n';
                            return 1;
                        }

                        av_packet_rescale_ts(pVideoPacket, pVideoCodecContext->time_base, pVideoStream->time_base);

                        pVideoPacket->stream_index = pVideoStream->index;

                        ret = av_interleaved_write_frame(pVideoFormatContext, pVideoPacket);

                        check_error(ret);

                        av_packet_unref(pVideoPacket);
                    }

                    av_packet_unref(pFrameGrabberPacket);
                    av_frame_free(&filtered_frame);
                }
            }
        }

        av_write_trailer(pVideoFormatContext);
        av_buffer_unref(&pHardwareDeviceContextRef);
        avcodec_free_context(&pVideoCodecContext);
        avio_closep(&pVideoFormatContext->pb);
        avformat_free_context(pVideoFormatContext);
        av_packet_free(&pVideoPacket);

        avcodec_free_context(&pFrameGrabberCodecContext);
        av_frame_free(&pFrameGrabberFrame);
        av_packet_free(&pFrameGrabberPacket);
        avformat_close_input(&pFrameGrabberFormatContext);

        return 0;
    }

    The console / log output when running the code:

    [in @ 00000288ee494f40] Setting 'video_size' to value '1920x1080'
    [in @ 00000288ee494f40] Setting 'pix_fmt' to value 'yuv420p'
    [in @ 00000288ee494f40] Setting 'time_base' to value '1/25'
    [in @ 00000288ee494f40] Setting 'pixel_aspect' to value '1/1'
    [in @ 00000288ee494f40] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
    [overlay_in @ 00000288ff1013c0] Setting 'video_size' to value '165x165'
    [overlay_in @ 00000288ff1013c0] Setting 'pix_fmt' to value 'bgr24'
    [overlay_in @ 00000288ff1013c0] Setting 'time_base' to value '1/25'
    [overlay_in @ 00000288ff1013c0] Setting 'pixel_aspect' to value '1/1'
    [overlay_in @ 00000288ff1013c0] w:165 h:165 pixfmt:bgr24 tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
    [format @ 00000288ff1015c0] Setting 'pix_fmts' to value 'yuv420p'
    [overlay @ 00000288ff101880] Setting 'x' to value 'W-w'
    [overlay @ 00000288ff101880] Setting 'y' to value 'H-h'
    [overlay @ 00000288ff101880] Setting 'enable' to value 'between(t,0,20)'
    [overlay @ 00000288ff101880] Setting 'format' to value 'yuv420'
    [auto_scale_0 @ 00000288ff101ec0] w:iw h:ih flags:'' interl:0
    [format @ 00000288ff1015c0] auto-inserting filter 'auto_scale_0' between the filter 'overlay_in' and the filter 'format'
    [auto_scale_1 @ 00000288ee4a4cc0] w:iw h:ih flags:'' interl:0
    [overlay @ 00000288ff101880] auto-inserting filter 'auto_scale_1' between the filter 'format' and the filter 'overlay'
    [AVFilterGraph @ 00000288ee495c80] query_formats: 5 queried, 6 merged, 6 already done, 0 delayed
    [auto_scale_0 @ 00000288ff101ec0] w:165 h:165 fmt:bgr24 csp:gbr range:pc sar:1/1 -> w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 flags:0x00000004
    [auto_scale_1 @ 00000288ee4a4cc0] w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 -> w:165 h:165 fmt:yuva420p csp:unknown range:unknown sar:1/1 flags:0x00000004
    [overlay @ 00000288ff101880] main w:1920 h:1080 fmt:yuv420p overlay w:165 h:165 fmt:yuva420p
    [overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Selected 1/25 time base
    [overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Sync level 2

    I tried changing the order in which the two frames are pushed into the filter graph. Once, I got a frame out of the graph, but it had the dimensions of the overlay image rather than those of the frame grabbed from the grabber card. So I suppose I am doing something wrong when building up the filter graph.
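
    One alternative I am considering (a sketch, under the assumption that
    the manual pad wiring is the problem) is to let libavfilter build the
    inner graph from a textual description via avfilter_graph_parse_ptr,
    keeping only the two buffer sources and the sink hand-made:

    AVFilterInOut *outputs = avfilter_inout_alloc();  /* feeds [in]  */
    AVFilterInOut *ov_out  = avfilter_inout_alloc();  /* feeds [ov]  */
    AVFilterInOut *inputs  = avfilter_inout_alloc();  /* reads [out] */

    outputs->name = av_strdup("in");
    outputs->filter_ctx = buffersrc_ctx;
    outputs->pad_idx = 0;
    outputs->next = ov_out;

    ov_out->name = av_strdup("ov");
    ov_out->filter_ctx = overlay_src_ctx;
    ov_out->pad_idx = 0;
    ov_out->next = NULL;

    inputs->name = av_strdup("out");
    inputs->filter_ctx = buffersink_ctx;
    inputs->pad_idx = 0;
    inputs->next = NULL;

    const char *desc =
        "[ov] format=yuv420p [ovf];"
        "[in][ovf] overlay=W-w:H-h:enable='between(t,0,20)':format=yuv420 [out]";

    int ret = avfilter_graph_parse_ptr(filter_graph, desc, &inputs, &outputs, NULL);
    if (ret >= 0)
        ret = avfilter_graph_config(filter_graph, NULL);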

    To verify that the FFmpeg build contains all the necessary modules, I ran the same procedure via the FFmpeg executable on the console; it worked, and the result was as expected.

    The command line producing the expected output is the following:

    ffmpeg -f dshow -i video="MZ0380 PCI, Analog 01 Capture" -video_size 1920x1080 -framerate 25 -pixel_format yuv420p -loglevel debug -i "C:\temp\overlay.bmp" -filter_complex "[0:v][1:v] overlay=W-w:H-h:enable='between(t,0,20)'" -pix_fmt yuv420p -c:a copy output.mp4

  • Problems using libavfilter for adding overlay to frames

    12 November 2024, by Michael Werner

    On Windows 11, with the latest libav (full build), a C/C++ app reads YUV420P frames from a frame grabber card.

    I want to draw a bitmap (BGR24) overlay image from file on every frame via libavfilter. First, I convert the BGR24 overlay image to YUV420P via the format filter. Then I feed the YUV420P frame from the frame grabber and the YUV420P overlay into the overlay filter.

    Everything seems to be fine, but when I try to get the frame out of the filter graph I always get a "Resource temporarily unavailable" (EAGAIN) return code, independent of how many frames I put into the graph.

    The frames from the frame grabber card are fine; I can encode them or write them to a .yuv file. The overlay frame looks fine too.

    My current initialization code is shown below. It does not report any errors or warnings, but when I try to get the filtered frame out of the graph via av_buffersink_get_frame I always get an EAGAIN return code.

    Here is my current initialization code:

    int init_overlay_filter(AVFilterGraph** graph, AVFilterContext** src_ctx, AVFilterContext** overlay_src_ctx,
                            AVFilterContext** sink_ctx)
    {
        AVFilterGraph* filter_graph;
        AVFilterContext* buffersrc_ctx;
        AVFilterContext* overlay_buffersrc_ctx;
        AVFilterContext* buffersink_ctx;
        AVFilterContext* overlay_ctx;
        AVFilterContext* format_ctx;
        const AVFilter *buffersrc, *buffersink, *overlay_buffersrc, *overlay_filter, *format_filter;
        int ret;

        // Create the filter graph
        filter_graph = avfilter_graph_alloc();
        if (!filter_graph)
        {
            fprintf(stderr, "Unable to create filter graph.\n");
            return AVERROR(ENOMEM);
        }

        // Create buffer source filter for main video
        buffersrc = avfilter_get_by_name("buffer");
        if (!buffersrc)
        {
            fprintf(stderr, "Unable to find buffer filter.\n");
            return AVERROR_FILTER_NOT_FOUND;
        }

        // Create buffer source filter for overlay image
        overlay_buffersrc = avfilter_get_by_name("buffer");
        if (!overlay_buffersrc)
        {
            fprintf(stderr, "Unable to find buffer filter.\n");
            return AVERROR_FILTER_NOT_FOUND;
        }

        // Create buffer sink filter
        buffersink = avfilter_get_by_name("buffersink");
        if (!buffersink)
        {
            fprintf(stderr, "Unable to find buffersink filter.\n");
            return AVERROR_FILTER_NOT_FOUND;
        }

        // Create overlay filter
        overlay_filter = avfilter_get_by_name("overlay");
        if (!overlay_filter)
        {
            fprintf(stderr, "Unable to find overlay filter.\n");
            return AVERROR_FILTER_NOT_FOUND;
        }

        // Create format filter
        format_filter = avfilter_get_by_name("format");
        if (!format_filter)
        {
            fprintf(stderr, "Unable to find format filter.\n");
            return AVERROR_FILTER_NOT_FOUND;
        }

        // Initialize the main video buffer source
        char args[512];
        snprintf(args, sizeof(args),
                 "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1");
        ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph);
        if (ret < 0)
        {
            fprintf(stderr, "Unable to create buffer source filter for main video.\n");
            return ret;
        }

        // Initialize the overlay buffer source
        snprintf(args, sizeof(args),
                 "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1");
        ret = avfilter_graph_create_filter(&overlay_buffersrc_ctx, overlay_buffersrc, "overlay_in", args, NULL,
                                           filter_graph);
        if (ret < 0)
        {
            fprintf(stderr, "Unable to create buffer source filter for overlay.\n");
            return ret;
        }

        // Initialize the format filter to convert overlay image to yuv420p
        snprintf(args, sizeof(args), "pix_fmts=yuv420p");
        ret = avfilter_graph_create_filter(&format_ctx, format_filter, "format", args, NULL, filter_graph);

        if (ret < 0)
        {
            fprintf(stderr, "Unable to create format filter.\n");
            return ret;
        }

        // Initialize the buffer sink
        ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph);
        if (ret < 0)
        {
            fprintf(stderr, "Unable to create buffer sink filter.\n");
            return ret;
        }

        // Initialize the overlay filter
        ret = avfilter_graph_create_filter(&overlay_ctx, overlay_filter, "overlay", "W-w:H-h:enable='between(t,0,20)':format=yuv420", NULL, filter_graph);
        if (ret < 0)
        {
            fprintf(stderr, "Unable to create overlay filter.\n");
            return ret;
        }

        // Connect the filters
        ret = avfilter_link(overlay_buffersrc_ctx, 0, format_ctx, 0);

        if (ret >= 0)
        {
            ret = avfilter_link(buffersrc_ctx, 0, overlay_ctx, 0);
        }
        else
        {
            fprintf(stderr, "Unable to configure filter graph.\n");
            return ret;
        }

        if (ret >= 0)
        {
            ret = avfilter_link(format_ctx, 0, overlay_ctx, 1);
        }
        else
        {
            fprintf(stderr, "Unable to configure filter graph.\n");
            return ret;
        }

        if (ret >= 0)
        {
            if ((ret = avfilter_link(overlay_ctx, 0, buffersink_ctx, 0)) < 0)
            {
                fprintf(stderr, "Unable to link filter graph.\n");
                return ret;
            }
        }
        else
        {
            fprintf(stderr, "Unable to configure filter graph.\n");
            return ret;
        }

        // Configure the filter graph
        if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
        {
            fprintf(stderr, "Unable to configure filter graph.\n");
            return ret;
        }

        *graph = filter_graph;
        *src_ctx = buffersrc_ctx;
        *overlay_src_ctx = overlay_buffersrc_ctx;
        *sink_ctx = buffersink_ctx;

        return 0;
    }

    Feeding the filter graph is done this way:

    av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame, AV_BUFFERSRC_FLAG_KEEP_REF)
    av_buffersink_get_frame(buffersink_ctx, filtered_frame)

    av_buffersink_get_frame always returns EAGAIN, no matter how many frames I feed into the graph. The frames themselves (from the frame grabber, and the overlay frame) look fine.
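
    One thing I am experimenting with (a sketch, based on my understanding
    that overlay is a framesync filter and waits until both of its inputs
    can be synchronized): give the single overlay frame a pts, then close
    that input, so the default repeatlast behaviour can repeat it:

    overlay_frame->pts = 0;
    ret = av_buffersrc_add_frame_flags(overlay_src_ctx, overlay_frame,
                                       AV_BUFFERSRC_FLAG_KEEP_REF);
    if (ret >= 0)
        ret = av_buffersrc_add_frame(overlay_src_ctx, NULL); /* signal EOF on the overlay input */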

    I set the libav logging level to maximum, but I do not see any warnings, errors, or helpful related information in the log.

    Here is the log output related to the filter configuration:

    [in @ 00000288ee494f40] Setting 'video_size' to value '1920x1080'
    [in @ 00000288ee494f40] Setting 'pix_fmt' to value 'yuv420p'
    [in @ 00000288ee494f40] Setting 'time_base' to value '1/25'
    [in @ 00000288ee494f40] Setting 'pixel_aspect' to value '1/1'
    [in @ 00000288ee494f40] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
    [overlay_in @ 00000288ff1013c0] Setting 'video_size' to value '165x165'
    [overlay_in @ 00000288ff1013c0] Setting 'pix_fmt' to value 'bgr24'
    [overlay_in @ 00000288ff1013c0] Setting 'time_base' to value '1/25'
    [overlay_in @ 00000288ff1013c0] Setting 'pixel_aspect' to value '1/1'
    [overlay_in @ 00000288ff1013c0] w:165 h:165 pixfmt:bgr24 tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
    [format @ 00000288ff1015c0] Setting 'pix_fmts' to value 'yuv420p'
    [overlay @ 00000288ff101880] Setting 'x' to value 'W-w'
    [overlay @ 00000288ff101880] Setting 'y' to value 'H-h'
    [overlay @ 00000288ff101880] Setting 'enable' to value 'between(t,0,20)'
    [overlay @ 00000288ff101880] Setting 'format' to value 'yuv420'
    [auto_scale_0 @ 00000288ff101ec0] w:iw h:ih flags:'' interl:0
    [format @ 00000288ff1015c0] auto-inserting filter 'auto_scale_0' between the filter 'overlay_in' and the filter 'format'
    [auto_scale_1 @ 00000288ee4a4cc0] w:iw h:ih flags:'' interl:0
    [overlay @ 00000288ff101880] auto-inserting filter 'auto_scale_1' between the filter 'format' and the filter 'overlay'
    [AVFilterGraph @ 00000288ee495c80] query_formats: 5 queried, 6 merged, 6 already done, 0 delayed
    [auto_scale_0 @ 00000288ff101ec0] w:165 h:165 fmt:bgr24 csp:gbr range:pc sar:1/1 -> w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 flags:0x00000004
    [auto_scale_1 @ 00000288ee4a4cc0] w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 -> w:165 h:165 fmt:yuva420p csp:unknown range:unknown sar:1/1 flags:0x00000004
    [overlay @ 00000288ff101880] main w:1920 h:1080 fmt:yuv420p overlay w:165 h:165 fmt:yuva420p
    [overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Selected 1/25 time base
    [overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Sync level 2
