
Other articles (100)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not activated by default when MediaSPIP is initialised.
    Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.

  • Enhancing its visual appearance

    10 April 2011

    MediaSPIP is based on a system of themes and templates ("squelettes"). The templates define where information is placed on the page, defining a specific use of the platform, while the themes define the overall graphic design.
    Anyone can propose a new graphic theme or a new template and make it available to the community.

On other sites (12395)

  • Problems using FFmpeg / libavfilter for adding overlay to grabbed frames

    21 November 2024, by Michael

    On Windows, with the latest FFmpeg / libav (full build, non-free), a C/C++ application reads YUV420P frames from a frame grabber card.

    


    A bitmap (BGR24) overlay image loaded from a file should be drawn on every frame for the first 20 seconds via libavfilter. First, the BGR24 overlay image is converted to YUV420P via the format filter. Then the YUV420P frame from the frame grabber and the YUV420P overlay frame are pushed into the overlay filter.
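
    Expressed as the kind of filtergraph description the ffmpeg CLI accepts, the intended topology would look roughly like the sketch below. The pad labels main and logo are placeholders introduced only for this illustration; they do not appear in the actual code.

// Rough sketch of the graph described above as a single filtergraph string.
// "main" = 1920x1080 YUV420P frames from the grabber, "logo" = the 165x165
// BGR24 bitmap; both pad labels are hypothetical.
const char* kOverlayGraphDesc =
    "[logo] format=yuv420p [logo_yuv];"
    "[main][logo_yuv] overlay=W-w:H-h:enable='between(t,0,20)'";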

    


    FFmpeg / libavfilter does not report any errors or warnings in the console / log. Trying to get the filtered frame out of the graph via av_buffersink_get_frame only results in an EAGAIN return code.
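
    For reference, AVERROR(EAGAIN) from av_buffersink_get_frame only means that the sink cannot produce a frame yet and more input has to be pushed into the buffer sources. A generic consumer loop (a sketch under that assumption, not code taken from the program below) usually looks like this:

// Generic buffersink drain loop (illustrative sketch).
// EAGAIN      -> feed more input frames before trying again
// AVERROR_EOF -> the graph is fully drained
// other < 0   -> a real error
AVFrame* filt = av_frame_alloc();
for (;;)
{
    int err = av_buffersink_get_frame(buffersink_ctx, filt);
    if (err == AVERROR(EAGAIN) || err == AVERROR_EOF)
        break;                  // nothing to pull right now
    if (err < 0)
    {
        check_error(err);       // real failure
        break;
    }
    /* ... consume filt (render / encode) ... */
    av_frame_unref(filt);
}
av_frame_free(&filt);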

    


    The frames from the frame grabber card are fine: they can be encoded or written to a .yuv file without problems. The overlay frame itself is fine too.
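
    (As a side note, dumping a YUV420P AVFrame to a raw .yuv file for inspection can be done plane by plane; the helper below is only an illustrative sketch, not part of the program shown further down.)

// Illustrative helper: write one YUV420P AVFrame to an already opened FILE*,
// respecting linesize padding. Chroma planes are half width and half height.
static void write_yuv420p_frame(const AVFrame* f, FILE* out)
{
    for (int p = 0; p < 3; p++)
    {
        const int w = p ? f->width / 2 : f->width;
        const int h = p ? f->height / 2 : f->height;
        for (int y = 0; y < h; y++)
            fwrite(f->data[p] + y * f->linesize[p], 1, w, out);
    }
}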

    


    This is the complete private code (prototype quality: no particular style, memory leaks, etc.):

    


#define __STDC_LIMIT_MACROS
#define __STDC_CONSTANT_MACROS

#include <cstdio>
#include <cstdint>
#include

#include "../fgproto/include/SDL/SDL_video.h"
#include

using namespace _DSHOWLIB_NAMESPACE;

#ifdef _WIN32
//Windows
extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavdevice/avdevice.h"
#include "libavfilter/avfilter.h"
#include <libavutil/log.h>
#include <libavutil/mem.h>
#include "libavfilter/buffersink.h"
#include "libavfilter/buffersrc.h"
#include "libavutil/opt.h"
#include "libavutil/hwcontext_qsv.h"
#include "SDL/SDL.h"
};
#endif
#include <iostream>
#include <fstream>

void uSleep(double waitTimeInUs, LARGE_INTEGER frequency)
{
    LARGE_INTEGER startTime, currentTime;

    QueryPerformanceCounter(&startTime);

    if (waitTimeInUs > 16500.0)
        Sleep(1);

    do
    {
        YieldProcessor();
        //Sleep(0);
        QueryPerformanceCounter(&currentTime);
    }
    while (waitTimeInUs > (currentTime.QuadPart - startTime.QuadPart) * 1000000.0 / frequency.QuadPart);
}

void check_error(int ret)
{
    if (ret < 0)
    {
        char errbuf[128];
        int tmp = errno;
        av_strerror(ret, errbuf, sizeof(errbuf));
        std::cerr << "Error: " << errbuf << '\n';
        //exit(1);
    }
}

bool _isRunning = true;

void swap_uv_planes(AVFrame* frame)
{
    uint8_t* temp_plane = frame->data[1];
    frame->data[1] = frame->data[2];
    frame->data[2] = temp_plane;
}

typedef struct
{
    const AVClass* avclass;
} MyFilterGraphContext;

static constexpr AVClass my_filter_graph_class =
{
    .class_name = "MyFilterGraphContext",
    .item_name = av_default_item_name,
    .option = NULL,
    .version = LIBAVUTIL_VERSION_INT,
};

MyFilterGraphContext* init_log_context()
{
    MyFilterGraphContext* ctx = static_cast<MyFilterGraphContext*>(av_mallocz(sizeof(*ctx)));

    if (!ctx)
    {
        av_log(nullptr, AV_LOG_ERROR, "Unable to allocate MyFilterGraphContext\n");
        return nullptr;
    }

    ctx->avclass = &my_filter_graph_class;
    return ctx;
}

int init_overlay_filter(AVFilterGraph** graph, AVFilterContext** src_ctx, AVFilterContext** overlay_src_ctx,
                        AVFilterContext** sink_ctx)
{
    AVFilterGraph* filter_graph;
    AVFilterContext* buffersrc_ctx;
    AVFilterContext* overlay_buffersrc_ctx;
    AVFilterContext* buffersink_ctx;
    AVFilterContext* overlay_ctx;
    AVFilterContext* format_ctx;

    const AVFilter* buffersrc, * buffersink, * overlay_buffersrc, * overlay_filter, * format_filter;
    int ret;

    // Create the filter graph
    filter_graph = avfilter_graph_alloc();
    if (!filter_graph)
    {
        fprintf(stderr, "Unable to create filter graph.\n");
        return AVERROR(ENOMEM);
    }

    // Create buffer source filter for main video
    buffersrc = avfilter_get_by_name("buffer");
    if (!buffersrc)
    {
        fprintf(stderr, "Unable to find buffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create buffer source filter for overlay image
    overlay_buffersrc = avfilter_get_by_name("buffer");
    if (!overlay_buffersrc)
    {
        fprintf(stderr, "Unable to find buffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create buffer sink filter
    buffersink = avfilter_get_by_name("buffersink");
    if (!buffersink)
    {
        fprintf(stderr, "Unable to find buffersink filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create overlay filter
    overlay_filter = avfilter_get_by_name("overlay");
    if (!overlay_filter)
    {
        fprintf(stderr, "Unable to find overlay filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create format filter
    format_filter = avfilter_get_by_name("format");
    if (!format_filter)
    {
        fprintf(stderr, "Unable to find format filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Initialize the main video buffer source
    char args[512];

    // Initialize the overlay buffer source
    snprintf(args, sizeof(args), "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1");

    ret = avfilter_graph_create_filter(&overlay_buffersrc_ctx, overlay_buffersrc, nullptr, args, nullptr,
        filter_graph);

    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer source filter for overlay.\n");
        return ret;
    }

    snprintf(args, sizeof(args), "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1");

    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, nullptr, args, nullptr, filter_graph);

    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer source filter for main video.\n");
        return ret;
    }

    // Initialize the format filter to convert overlay image to yuv420p
    snprintf(args, sizeof(args), "pix_fmts=yuv420p");

    ret = avfilter_graph_create_filter(&format_ctx, format_filter, nullptr, args, nullptr, filter_graph);

    if (ret < 0)
    {
        fprintf(stderr, "Unable to create format filter.\n");
        return ret;
    }

    // Initialize the overlay filter
    ret = avfilter_graph_create_filter(&overlay_ctx, overlay_filter, nullptr, "W-w:H-h:enable='between(t,0,20)':format=yuv420", nullptr, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create overlay filter.\n");
        return ret;
    }

    // Initialize the buffer sink
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, nullptr, nullptr, nullptr, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer sink filter.\n");
        return ret;
    }

    // Connect the filters
    ret = avfilter_link(overlay_buffersrc_ctx, 0, format_ctx, 0);

    if (ret >= 0)
    {
        ret = avfilter_link(buffersrc_ctx, 0, overlay_ctx, 0);
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    if (ret >= 0)
    {
        ret = avfilter_link(format_ctx, 0, overlay_ctx, 1);
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    if (ret >= 0)
    {
        if ((ret = avfilter_link(overlay_ctx, 0, buffersink_ctx, 0)) < 0)
        {
            fprintf(stderr, "Unable to link filter graph.\n");
            return ret;
        }
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    MyFilterGraphContext* log_ctx = init_log_context();

    // Configure the filter graph
    if ((ret = avfilter_graph_config(filter_graph, log_ctx)) < 0)
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    *graph = filter_graph;
    *src_ctx = buffersrc_ctx;
    *overlay_src_ctx = overlay_buffersrc_ctx;
    *sink_ctx = buffersink_ctx;

    return 0;
}

int main(int argc, char* argv[])
{
    unsigned int videoIndex = 0;

    avdevice_register_all();

    av_log_set_level(AV_LOG_TRACE);

    const AVInputFormat* pFrameGrabberInputFormat = av_find_input_format("dshow");

    constexpr int frameGrabberPixelWidth = 1920;
    constexpr int frameGrabberPixelHeight = 1080;
    constexpr int frameGrabberFrameRate = 25;
    constexpr AVPixelFormat frameGrabberPixelFormat = AV_PIX_FMT_YUV420P;

    char shortStringBuffer[32];

    AVDictionary* pFrameGrabberOptions = nullptr;

    _snprintf_s(shortStringBuffer, sizeof(shortStringBuffer), "%dx%d", frameGrabberPixelWidth, frameGrabberPixelHeight);
    av_dict_set(&pFrameGrabberOptions, "video_size", shortStringBuffer, 0);

    _snprintf_s(shortStringBuffer, sizeof(shortStringBuffer), "%d", frameGrabberFrameRate);

    av_dict_set(&pFrameGrabberOptions, "framerate", shortStringBuffer, 0);
    av_dict_set(&pFrameGrabberOptions, "pixel_format", "yuv420p", 0);
    av_dict_set(&pFrameGrabberOptions, "rtbufsize", "128M", 0);

    AVFormatContext* pFrameGrabberFormatContext = avformat_alloc_context();

    pFrameGrabberFormatContext->flags = AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS;

    if (avformat_open_input(&pFrameGrabberFormatContext, "video=MZ0380 PCI, Analog 01 Capture",
                            pFrameGrabberInputFormat, &pFrameGrabberOptions) != 0)
    {
        std::cerr << "Couldn't open input stream." << '\n';
        return -1;
    }

    if (avformat_find_stream_info(pFrameGrabberFormatContext, nullptr) < 0)
    {
        std::cerr << "Couldn't find stream information." << '\n';
        return -1;
    }

    bool foundVideoStream = false;

    for (unsigned int loop_videoIndex = 0; loop_videoIndex < pFrameGrabberFormatContext->nb_streams; loop_videoIndex++)
    {
        if (pFrameGrabberFormatContext->streams[loop_videoIndex]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            videoIndex = loop_videoIndex;
            foundVideoStream = true;
            break;
        }
    }

    if (!foundVideoStream)
    {
        std::cerr << "Couldn't find a video stream." << '\n';
        return -1;
    }

    const AVCodec* pFrameGrabberCodec = avcodec_find_decoder(
        pFrameGrabberFormatContext->streams[videoIndex]->codecpar->codec_id);

    AVCodecContext* pFrameGrabberCodecContext = avcodec_alloc_context3(pFrameGrabberCodec);

    if (pFrameGrabberCodec == nullptr)
    {
        std::cerr << "Codec not found." << '\n';
        return -1;
    }

    pFrameGrabberCodecContext->pix_fmt = frameGrabberPixelFormat;
    pFrameGrabberCodecContext->width = frameGrabberPixelWidth;
    pFrameGrabberCodecContext->height = frameGrabberPixelHeight;

    int ret = avcodec_open2(pFrameGrabberCodecContext, pFrameGrabberCodec, nullptr);

    if (ret < 0)
    {
        std::cerr << "Could not open pVideoCodec." << '\n';
        return -1;
    }

    const char* outputFilePath = "c:\\temp\\output.mp4";
    constexpr int outputWidth = frameGrabberPixelWidth;
    constexpr int outputHeight = frameGrabberPixelHeight;
    constexpr int outputFrameRate = frameGrabberFrameRate;

    SwsContext* img_convert_ctx = sws_getContext(frameGrabberPixelWidth, frameGrabberPixelHeight,
                                                 frameGrabberPixelFormat, outputWidth, outputHeight, AV_PIX_FMT_NV12,
                                                 SWS_BICUBIC, nullptr, nullptr, nullptr);

    constexpr double frameTimeinUs = 1000000.0 / frameGrabberFrameRate;

    LARGE_INTEGER frequency;
    LARGE_INTEGER lastTime, currentTime;

    QueryPerformanceFrequency(&frequency);
    QueryPerformanceCounter(&lastTime);

    //SDL----------------------------

    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS))
    {
        printf("Could not initialize SDL - %s\n", SDL_GetError());
        return -1;
    }

    SDL_Window* screen = SDL_CreateWindow("3P FrameGrabber SuperApp", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                          frameGrabberPixelWidth, frameGrabberPixelHeight,
                                          SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL);

    if (!screen)
    {
        printf("SDL: could not set video mode - exiting:%s\n", SDL_GetError());
        return -1;
    }

    SDL_Renderer* renderer = SDL_CreateRenderer(screen, -1, 0);

    if (!renderer)
    {
        printf("SDL: could not create renderer - exiting:%s\n", SDL_GetError());
        return -1;
    }

    SDL_Texture* texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING,
                                             frameGrabberPixelWidth, frameGrabberPixelHeight);

    if (!texture)
    {
        printf("SDL: could not create texture - exiting:%s\n", SDL_GetError());
        return -1;
    }

    SDL_Event event;

    //SDL End------------------------

    const AVCodec* pVideoCodec = avcodec_find_encoder_by_name("h264_qsv");

    if (!pVideoCodec)
    {
        std::cerr << "Codec not found" << '\n';
        return 1;
    }

    AVCodecContext* pVideoCodecContext = avcodec_alloc_context3(pVideoCodec);

    if (!pVideoCodecContext)
    {
        std::cerr << "Could not allocate video pVideoCodec context" << '\n';
        return 1;
    }

    AVBufferRef* pHardwareDeviceContextRef = nullptr;

    ret = av_hwdevice_ctx_create(&pHardwareDeviceContextRef, AV_HWDEVICE_TYPE_QSV,
                                 "PCI\\VEN_8086&DEV_5912&SUBSYS_310217AA&REV_04\\3&11583659&0&10", nullptr, 0);
    check_error(ret);

    pVideoCodecContext->bit_rate = static_cast<int64_t>(outputWidth * outputHeight) * 2;
    pVideoCodecContext->width = outputWidth;
    pVideoCodecContext->height = outputHeight;
    pVideoCodecContext->framerate = {outputFrameRate, 1};
    pVideoCodecContext->time_base = {1, outputFrameRate};
    pVideoCodecContext->pix_fmt = AV_PIX_FMT_QSV;
    pVideoCodecContext->max_b_frames = 0;

    AVBufferRef* pHardwareFramesContextRef = av_hwframe_ctx_alloc(pHardwareDeviceContextRef);

    AVHWFramesContext* pHardwareFramesContext = reinterpret_cast<AVHWFramesContext*>(pHardwareFramesContextRef->data);

    pHardwareFramesContext->format = AV_PIX_FMT_QSV;
    pHardwareFramesContext->sw_format = AV_PIX_FMT_NV12;
    pHardwareFramesContext->width = outputWidth;
    pHardwareFramesContext->height = outputHeight;
    pHardwareFramesContext->initial_pool_size = 20;

    ret = av_hwframe_ctx_init(pHardwareFramesContextRef);
    check_error(ret);

    pVideoCodecContext->hw_device_ctx = nullptr;
    pVideoCodecContext->hw_frames_ctx = av_buffer_ref(pHardwareFramesContextRef);

    ret = avcodec_open2(pVideoCodecContext, pVideoCodec, nullptr); //&pVideoOptionsDict);
    check_error(ret);

    AVFormatContext* pVideoFormatContext = nullptr;

    avformat_alloc_output_context2(&pVideoFormatContext, nullptr, nullptr, outputFilePath);

    if (!pVideoFormatContext)
    {
        std::cerr << "Could not create output context" << '\n';
        return 1;
    }

    const AVOutputFormat* pVideoOutputFormat = pVideoFormatContext->oformat;

    if (pVideoFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
    {
        pVideoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    const AVStream* pVideoStream = avformat_new_stream(pVideoFormatContext, pVideoCodec);

    if (!pVideoStream)
    {
        std::cerr << "Could not allocate stream" << '\n';
        return 1;
    }

    ret = avcodec_parameters_from_context(pVideoStream->codecpar, pVideoCodecContext);

    check_error(ret);

    if (!(pVideoOutputFormat->flags & AVFMT_NOFILE))
    {
        ret = avio_open(&pVideoFormatContext->pb, outputFilePath, AVIO_FLAG_WRITE);
        check_error(ret);
    }

    ret = avformat_write_header(pVideoFormatContext, nullptr);

    check_error(ret);

    AVFrame* pHardwareFrame = av_frame_alloc();

    if (av_hwframe_get_buffer(pVideoCodecContext->hw_frames_ctx, pHardwareFrame, 0) < 0)
    {
        std::cerr << "Error allocating a hw frame" << '\n';
        return -1;
    }

    AVFrame* pFrameGrabberFrame = av_frame_alloc();
    AVPacket* pFrameGrabberPacket = av_packet_alloc();

    AVPacket* pVideoPacket = av_packet_alloc();
    AVFrame* pVideoFrame = av_frame_alloc();

    AVFrame* pSwappedFrame = av_frame_alloc();
    av_frame_get_buffer(pSwappedFrame, 32);

    INT64 frameCount = 0;

    pFrameGrabberCodecContext->time_base = {1, frameGrabberFrameRate};

    AVFilterContext* buffersrc_ctx = nullptr;
    AVFilterContext* buffersink_ctx = nullptr;
    AVFilterContext* overlay_src_ctx = nullptr;
    AVFilterGraph* filter_graph = nullptr;

    if ((ret = init_overlay_filter(&filter_graph, &buffersrc_ctx, &overlay_src_ctx, &buffersink_ctx)) < 0)
    {
        return ret;
    }

    // Load overlay image
    AVFormatContext* overlay_fmt_ctx = nullptr;
    AVCodecContext* overlay_codec_ctx = nullptr;
    const AVCodec* overlay_codec = nullptr;
    AVFrame* overlay_frame = nullptr;
    AVDictionary* overlay_options = nullptr;

    const char* overlay_image_filename = "c:\\temp\\overlay.bmp";

    av_dict_set(&overlay_options, "video_size", "165x165", 0);
    av_dict_set(&overlay_options, "pixel_format", "bgr24", 0);

    if ((ret = avformat_open_input(&overlay_fmt_ctx, overlay_image_filename, nullptr, &overlay_options)) < 0)
    {
        return ret;
    }

    if ((ret = avformat_find_stream_info(overlay_fmt_ctx, nullptr)) < 0)
    {
        return ret;
    }

    int overlay_video_stream_index = -1;

    for (int i = 0; i < overlay_fmt_ctx->nb_streams; i++)
    {
        if (overlay_fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            overlay_video_stream_index = i;
            break;
        }
    }

    if (overlay_video_stream_index == -1)
    {
        return -1;
    }

    overlay_codec = avcodec_find_decoder(overlay_fmt_ctx->streams[overlay_video_stream_index]->codecpar->codec_id);

    if (!overlay_codec)
    {
        fprintf(stderr, "Overlay codec not found.\n");
        return -1;
    }

    overlay_codec_ctx = avcodec_alloc_context3(overlay_codec);

    if (!overlay_codec_ctx)
    {
        fprintf(stderr, "Could not allocate overlay codec context.\n");
        return AVERROR(ENOMEM);
    }

    avcodec_parameters_to_context(overlay_codec_ctx, overlay_fmt_ctx->streams[overlay_video_stream_index]->codecpar);

    if ((ret = avcodec_open2(overlay_codec_ctx, overlay_codec, nullptr)) < 0)
    {
        return ret;
    }

    overlay_frame = av_frame_alloc();

    if (!overlay_frame)
    {
        fprintf(stderr, "Could not allocate overlay frame.\n");
        return AVERROR(ENOMEM);
    }

    AVPacket* overlay_packet = av_packet_alloc();

    // Read frames from the file
    while (av_read_frame(overlay_fmt_ctx, overlay_packet) >= 0)
    {
        if (overlay_packet->stream_index == overlay_video_stream_index)
        {
            ret = avcodec_send_packet(overlay_codec_ctx, overlay_packet);

            if (ret < 0)
            {
                break;
            }

            ret = avcodec_receive_frame(overlay_codec_ctx, overlay_frame);
            if (ret >= 0)
            {
                break; // We only need the first frame for the overlay
            }

            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            {
                continue;
            }

            break;
        }

        av_packet_unref(overlay_packet);
    }

    av_packet_unref(overlay_packet);

    while (_isRunning)
    {
        while (SDL_PollEvent(&event) != 0)
        {
            switch (event.type)
            {
            case SDL_QUIT:
                _isRunning = false;
                break;
            case SDL_KEYDOWN:
                if (event.key.keysym.sym == SDLK_ESCAPE)
                    _isRunning = false;
                break;
            default: ;
            }
        }

        if (av_read_frame(pFrameGrabberFormatContext, pFrameGrabberPacket) == 0)
        {
            if (pFrameGrabberPacket->stream_index == videoIndex)
            {
                ret = avcodec_send_packet(pFrameGrabberCodecContext, pFrameGrabberPacket);

                if (ret < 0)
                {
                    std::cerr << "Error sending a packet for decoding!" << '\n';
                    return -1;
                }

                ret = avcodec_receive_frame(pFrameGrabberCodecContext, pFrameGrabberFrame);

                if (ret != 0)
                {
                    std::cerr << "Receiving frame failed!" << '\n';
                    return -1;
                }

                if (ret == AVERROR(EAGAIN) || ret == AVERROR(AVERROR_EOF))
                {
                    std::cout << "End of stream detected. Exiting now." << '\n';
                    return 0;
                }

                if (ret != 0)
                {
                    std::cerr << "Decode Error!" << '\n';
                    return -1;
                }

                // Feed the frame into the filter graph
                if (av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
                {
                    fprintf(stderr, "Error while feeding the filtergraph\n");
                    break;
                }

                // Push the overlay frame to the overlay_src_ctx
                ret = av_buffersrc_add_frame_flags(overlay_src_ctx, overlay_frame, AV_BUFFERSRC_FLAG_KEEP_REF);
                if (ret < 0)
                {
                    fprintf(stderr, "Error while feeding the filtergraph\n");
                    break;
                }

                // Pull filtered frame from the filter graph
                AVFrame* filtered_frame = av_frame_alloc();

                ret = av_buffersink_get_frame(buffersink_ctx, filtered_frame);

                if (ret < 0)
                {
                    check_error(ret);
                }

                QueryPerformanceCounter(&currentTime);

                double elapsedTime = (currentTime.QuadPart - lastTime.QuadPart) * 1000000.0 / frequency.QuadPart;

                if (elapsedTime > 0.0 && elapsedTime < frameTimeinUs)
                {
                    uSleep(frameTimeinUs - elapsedTime, frequency);
                }

                SDL_UpdateTexture(texture, nullptr, filtered_frame->data[0], filtered_frame->linesize[0]);
                SDL_RenderClear(renderer);
                SDL_RenderCopy(renderer, texture, nullptr, nullptr);
                SDL_RenderPresent(renderer);

                QueryPerformanceCounter(&lastTime);

                swap_uv_planes(filtered_frame);

                ret = sws_scale_frame(img_convert_ctx, pVideoFrame, filtered_frame);

                if (ret < 0)
                {
                    std::cerr << "Scaling frame for Intel QS Encoder did fail!" << '\n';
                    return -1;
                }

                if (av_hwframe_transfer_data(pHardwareFrame, pVideoFrame, 0) < 0)
                {
                    std::cerr << "Error transferring frame data to hw frame!" << '\n';
                    return -1;
                }

                pHardwareFrame->pts = frameCount++;

                ret = avcodec_send_frame(pVideoCodecContext, pHardwareFrame);

                if (ret < 0)
                {
                    std::cerr << "Error sending a frame for encoding" << '\n';
                    check_error(ret);
                }

                av_packet_unref(pVideoPacket);

                while (ret >= 0)
                {
                    ret = avcodec_receive_packet(pVideoCodecContext, pVideoPacket);

                    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                    {
                        break;
                    }

                    if (ret < 0)
                    {
                        std::cerr << "Error during encoding" << '\n';
                        return 1;
                    }

                    av_packet_rescale_ts(pVideoPacket, pVideoCodecContext->time_base, pVideoStream->time_base);

                    pVideoPacket->stream_index = pVideoStream->index;

                    ret = av_interleaved_write_frame(pVideoFormatContext, pVideoPacket);

                    check_error(ret);

                    av_packet_unref(pVideoPacket);
                }

                av_packet_unref(pFrameGrabberPacket);
                av_frame_free(&filtered_frame);
            }
        }
    }

    av_write_trailer(pVideoFormatContext);
    av_buffer_unref(&pHardwareDeviceContextRef);
    avcodec_free_context(&pVideoCodecContext);
    avio_closep(&pVideoFormatContext->pb);
    avformat_free_context(pVideoFormatContext);
    av_packet_free(&pVideoPacket);

    avcodec_free_context(&pFrameGrabberCodecContext);
    av_frame_free(&pFrameGrabberFrame);
    av_packet_free(&pFrameGrabberPacket);
    avformat_close_input(&pFrameGrabberFormatContext);

    return 0;
}


    The console / log output when running the code:


[in @ 00000288ee494f40] Setting 'video_size' to value '1920x1080'
[in @ 00000288ee494f40] Setting 'pix_fmt' to value 'yuv420p'
[in @ 00000288ee494f40] Setting 'time_base' to value '1/25'
[in @ 00000288ee494f40] Setting 'pixel_aspect' to value '1/1'
[in @ 00000288ee494f40] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[overlay_in @ 00000288ff1013c0] Setting 'video_size' to value '165x165'
[overlay_in @ 00000288ff1013c0] Setting 'pix_fmt' to value 'bgr24'
[overlay_in @ 00000288ff1013c0] Setting 'time_base' to value '1/25'
[overlay_in @ 00000288ff1013c0] Setting 'pixel_aspect' to value '1/1'
[overlay_in @ 00000288ff1013c0] w:165 h:165 pixfmt:bgr24 tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[format @ 00000288ff1015c0] Setting 'pix_fmts' to value 'yuv420p'
[overlay @ 00000288ff101880] Setting 'x' to value 'W-w'
[overlay @ 00000288ff101880] Setting 'y' to value 'H-h'
[overlay @ 00000288ff101880] Setting 'enable' to value 'between(t,0,20)'
[overlay @ 00000288ff101880] Setting 'format' to value 'yuv420'
[auto_scale_0 @ 00000288ff101ec0] w:iw h:ih flags:'' interl:0
[format @ 00000288ff1015c0] auto-inserting filter 'auto_scale_0' between the filter 'overlay_in' and the filter 'format'
[auto_scale_1 @ 00000288ee4a4cc0] w:iw h:ih flags:'' interl:0
[overlay @ 00000288ff101880] auto-inserting filter 'auto_scale_1' between the filter 'format' and the filter 'overlay'
[AVFilterGraph @ 00000288ee495c80] query_formats: 5 queried, 6 merged, 6 already done, 0 delayed
[auto_scale_0 @ 00000288ff101ec0] w:165 h:165 fmt:bgr24 csp:gbr range:pc sar:1/1 -> w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[auto_scale_1 @ 00000288ee4a4cc0] w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 -> w:165 h:165 fmt:yuva420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[overlay @ 00000288ff101880] main w:1920 h:1080 fmt:yuv420p overlay w:165 h:165 fmt:yuva420p
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Selected 1/25 time base
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Sync level 2


    I tried changing the order in which the two frames are pushed into the filter graph. Once I did get a frame out of the graph, but it had the dimensions of the overlay image rather than those of the grabbed frame from the grabber card. So I suppose I am doing something wrong when building up the filter graph.


    To verify that the FFmpeg build contains all the necessary modules, I ran the same procedure with the FFmpeg executable on the console; it worked, and the result was as expected.


    The command line that produces the expected output is:


    ffmpeg -f dshow -i video="MZ0380 PCI, Analog 01 Capture" -video_size 1920x1080 -framerate 25 -pixel_format yuv420p -loglevel debug -i "C:\temp\overlay.bmp" -filter_complex "[0:v][1:v] overlay=W-w:H-h:enable='between(t,0,20)'" -pix_fmt yuv420p -c:a copy output.mp4


  • How HSBC and ING are transforming banking with AI

    9 November 2024, by Daniel Crough — Banking and Financial Services, Featured Banking Content

    We recently partnered with FinTech Futures to produce an exciting webinar discussing how analytics leaders from two global banks are using AI to protect customers, streamline operations, and support environmental goals.

    Watch the on-demand webinar: Advancing analytics maturity.


    Meet the expert panel

    Roshini Johri heads ESG Analytics at HSBC, where she leads AI and remote sensing applications supporting the bank’s net zero goals. Her expertise spans climate tech and financial services, with a focus on scalable analytics solutions.

     

    Marco Li Mandri leads Advanced Analytics Strategy at ING, where he focuses on delivering high-impact solutions and strengthening analytics foundations. His background combines analytics, KYC operations, and AI strategy.

     

    Carmen Soini Tourres works as a Web Analyst Consultant at Matomo, helping financial organisations optimise their digital presence whilst maintaining privacy compliance.

     

    Key findings from the webinar

    The discussion highlighted four essential elements for advancing analytics capabilities:

    1. Strong data foundations matter most

    “It doesn’t matter how good the AI model is. It is garbage in, garbage out,”

    Johri explained. Banks need robust data governance that works across different regulatory environments.

    2. Transform rather than tweak

    Li Mandri emphasised the need to reconsider entire processes:

    “We try to look at the banking domain and processes and try to re-imagine how they should be done with AI.”

    3. Bridge technical and business understanding

    Both leaders stressed the value of analytics translators who understand both technology and business needs.

    “We’re investing in this layer we call product leads,”

    Li Mandri explained. These roles combine technical knowledge with business acumen – a rare but vital skill set.

    4. Consider production costs early

    Moving from proof-of-concept to production requires careful planning. As Johri noted:

    “The scale of doing things in production is quite massive and often doesn’t get accounted for in the cost.”

    This includes:

    • Ongoing monitoring requirements
    • Maintenance needs
    • Regulatory compliance checks
    • Regular model updates

    Real-world applications

    ING’s approach demonstrates how banks can transform their operations through thoughtful AI implementation. Li Mandri shared several areas where the bank has successfully deployed analytics solutions, each benefiting both the bank and its customers.

    Customer experience enhancement

    The bank’s implementation of AI-powered instant loan processing shows how analytics can transform traditional banking.

    “We know AI can make loans instant for the customer, that’s great. Clicking one button and adding a loan, that really changes things,”

    Li Mandri explained. This goes beyond automation – it represents a fundamental shift in how banks serve their customers.

    The system analyses customer data to make rapid lending decisions while maintaining strong risk assessment standards. For customers, this means no more lengthy waiting periods or complex applications. For the bank, it means more efficient resource use and better risk management.

    The bank also uses AI to personalise customer communications.

    “We’re using that to make certain campaigns more personalised, having a certain tone of voice,”

    noted Li Mandri. This particularly resonates with younger customers who expect relevant, personalised interactions from their bank.

    Operational efficiency transformation

    ING’s approach to Know Your Customer (KYC) processes shows how AI can transform resource-heavy operations.

    “KYC is a big area of cost for the bank. So we see massive value there, a lot of scale,”

    Li Mandri explained. The bank developed an AI-powered system that:

    • Automates document verification
    • Flags potential compliance issues for human review
    • Maintains consistent standards across jurisdictions
    • Reduces processing time while improving accuracy

    This implementation required careful consideration of regulations across different markets. The bank developed monitoring systems to ensure their AI models maintain high accuracy while meeting compliance standards.

    In the back office, ING uses AI to extract and process data from various documents, significantly reducing manual work. This automation lets staff focus on complex tasks requiring human judgment.

    Sustainable finance initiatives

    ING’s commitment to sustainable banking has driven innovative uses of AI in environmental assessment.

    “We have this ambition to be a sustainable bank. If you want to be a sustainable finance customer, that requires a lot of work to understand who the company is, always comparing against its peers.”

    The bank developed AI models that:

    • Analyse company sustainability metrics
    • Compare environmental performance against industry benchmarks
    • Assess transition plans for high-emission industries
    • Monitor ongoing compliance with sustainability commitments

    This system helps staff evaluate the environmental impact of potential deals quickly and accurately.

    “We are using AI there to help our frontline process customers to see how green that deal might be and then use that as a decision point,”

    Li Mandri noted.

    HSBC’s innovative approach

    Under Johri’s leadership, HSBC has developed several groundbreaking uses of AI and analytics, particularly in environmental monitoring and operational efficiency. Their work shows how banks can use advanced technology to address complex global challenges while meeting regulatory requirements.

    Environmental monitoring through advanced technology

    HSBC uses computer vision and satellite imagery analysis to measure environmental impact with new precision.

    “This is another big research area where we look at satellite images and we do what is called remote sensing, which is the study of a remote area,”

    Johri explained.

    The system provides several key capabilities:

    • Analysis of forest coverage and deforestation rates
    • Assessment of biodiversity impact in specific regions
    • Monitoring of environmental changes over time
    • Measurement of environmental risk in lending portfolios

    “We can look at distant images of forest areas and understand how much percentage deforestation is being caused in that area, and we can then measure our biodiversity impact more accurately,”

    Johri noted. This technology enables HSBC to:

    • Make informed lending decisions
    • Monitor environmental commitments of borrowers
    • Support sustainability-linked lending programmes
    • Provide accurate environmental impact reporting

    Transforming document analysis

    HSBC is tackling one of banking’s most time-consuming challenges: processing vast amounts of documentation.

    “Can we reduce the onus of human having to go and read 200 pages of sustainability reports each time to extract answers?”

    Johri asked. Their solution combines several AI technologies to make this process more efficient while maintaining accuracy.

    The bank’s approach includes:

    • Natural language processing to understand complex documents
    • Machine learning models to extract relevant information
    • Validation systems to ensure accuracy
    • Integration with existing compliance frameworks

    “We’re exploring solutions to improve our reporting, but we need to do it in a safe, robust and transparent way.”

    This careful balance between efficiency and accuracy exemplifies HSBC’s approach to AI.

    Building future-ready analytics capabilities

    Both banks emphasise that successful analytics requires a comprehensive, long-term approach. Their experiences highlight several critical considerations for financial institutions looking to advance their analytics capabilities.

    Developing clear governance frameworks

    “Understanding your AI risk appetite is crucial because banking is a highly regulated environment,”

    Johri emphasised. Banks need to establish governance structures that:

    • Define acceptable uses for AI
    • Establish monitoring and control mechanisms
    • Ensure compliance with evolving regulations
    • Maintain transparency in AI decision-making

    Creating solutions that scale

    Li Mandri stressed the importance of building systems that grow with the organisation:

    “When you try to prototype a model, you have to take care about the data safety, ethical consideration, you have to identify a way to monitor that model. You need model standard governance.”

    Successful scaling requires :

    • Standard approaches to model development
    • Clear evaluation frameworks
    • Simple processes for model updates
    • Strong monitoring systems
    • Regular performance reviews

    Investing in people and skills

    Both leaders highlighted how important skilled people are to analytics success.

    “Having a good hiring strategy as well as creating that data literacy is really important,”

    Johri noted. Banks need to:

    • Develop comprehensive training programmes
    • Create clear career paths for analytics professionals
    • Foster collaboration between technical and business teams
    • Build internal expertise in emerging technologies

    Planning for the future

    Looking ahead, both banks are preparing for increased regulation and growing demands for transparency. Key focus areas include:

    • Adapting to new privacy regulations
    • Making AI decisions more explainable
    • Improving data quality and governance
    • Strengthening cybersecurity measures

    Practical steps for financial institutions

    The experiences shared by HSBC and ING provide valuable insights for financial institutions at any stage of their analytics journey. Their successes and challenges outline a clear path forward.

    Key steps for success

    Financial institutions looking to enhance their analytics capabilities should:

    1. Start with strong foundations
      • Invest in clear data governance frameworks
      • Set data quality standards
      • Build thorough documentation processes
      • Create transparent data tracking
    2. Think strategically about AI implementation
      • Focus on transformative rather than small changes
      • Consider the full costs of AI projects
      • Build solutions that can grow
      • Balance innovation with risk management
    3. Invest in people and processes
      • Develop internal analytics expertise
      • Create clear paths for career growth
      • Foster collaboration between technical and business teams
      • Build a culture of data literacy
    4. Plan for scale
      • Establish monitoring systems
      • Create governance frameworks
      • Develop standard approaches to model development
      • Stay flexible for future regulatory changes

    Learn more

    Want to hear more insights from these industry leaders? Watch the complete webinar recording on demand. You’ll learn:

    • Detailed technical insights from both banks
    • Extended Q&A with the speakers
    • Additional case studies and examples
    • Practical implementation advice
     
     

  • avdevice/decklink: Fix compile breakage on OSX

    19 October 2018, by Devin Heitmueller
    avdevice/decklink: Fix compile breakage on OSX
    

    Make the function static, or else Clang complains with:

    error: no previous prototype for function 'decklink_get_attr_string' [-Werror,-Wmissing-prototypes]

    Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
    Signed-off-by: Marton Balint <cus@passwd.hu>

    • [DH] libavdevice/decklink_common.cpp
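
    For illustration, the kind of change described above looks like the sketch below; the helper name and signature are hypothetical, only the static storage class is the point.

// Illustrative sketch (not the actual patch). With -Wmissing-prototypes
// promoted to an error by -Werror, Clang rejects a non-static function that
// has no previous declaration. Giving the file-local helper internal linkage
// silences the warning:
//
//   before:  const char* get_attr_string(int attr_id) { ... }   // error
//   after:
static const char* get_attr_string(int attr_id)
{
    return attr_id ? "set" : "unset";   // placeholder body for the sketch
}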