Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (44)

  • No question of market, cloud, etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the fads that flourish merrily
    across web 2.0 and among the businesses that live off it.
    You are therefore invited to banish the use of terms such as "Brand", "Cloud", "Market", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages
    the sharing of creations on the Internet and lets authors keep the greatest possible autonomy.
    No "Gold or Premium contract" is therefore planned, no (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to retrieve the data needed for search-engine indexing, then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Improving the basic version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images that follow for a comparison.
    To use it, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen), activating Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (7700)

  • avdevice/decklink: Fix compile breakage on OSX

    19 October 2018, by Devin Heitmueller

    Make the function static, or else Clang complains with:

    error: no previous prototype for function 'decklink_get_attr_string' [-Werror,-Wmissing-prototypes]

    Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
    Signed-off-by: Marton Balint <cus@passwd.hu>

    • [DH] libavdevice/decklink_common.cpp
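
    For illustration, a minimal sketch of the warning class this commit silences (this is not the actual DeckLink code; the function name is borrowed from the quoted error message, and the body is a placeholder):

#include <stdio.h>

/* Compiled with -Wmissing-prototypes -Werror, a function with external
 * linkage and no prior declaration triggers exactly the error quoted above.
 * Marking it 'static' gives it internal linkage, so no prototype is expected. */
static const char *decklink_get_attr_string(void)
{
    return "attribute"; /* placeholder value for the sketch */
}

int main(void)
{
    puts(decklink_get_attr_string());
    return 0;
}
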
  • Problems using FFmpeg / libavfilter for adding overlay to grabbed frames

    21 November 2024, by Michael

    On Windows, with the latest FFmpeg / libav (full build, non-free), a C/C++ app reads YUV420P frames from a frame grabber card.

    A bitmap (BGR24) overlay image, loaded from a file, should be drawn on every frame for the first 20 seconds via libavfilter. First, the BGR24 overlay image is converted to YUV420P via a format filter. Then the YUV420P frame from the frame grabber and the YUV420P overlay frame are pushed into the overlay filter.
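
    For comparison, the chain described above can also be written as a single filter description and handed to avfilter_graph_parse_ptr, which inserts the conversions and links the pads itself. A rough sketch (my own, untested; the labels "main", "ovl" and "out" are arbitrary, and error handling is reduced to return codes):

extern "C" {
#include <libavfilter/avfilter.h>
#include <libavutil/error.h>
#include <libavutil/mem.h>
}

// Build: overlay source -> format=yuv420p -> overlay pad 1; main video -> overlay pad 0.
static int build_overlay_graph(AVFilterGraph** pgraph, AVFilterContext** pmain,
                               AVFilterContext** povl, AVFilterContext** psink)
{
    AVFilterGraph* graph = avfilter_graph_alloc();
    if (!graph)
        return AVERROR(ENOMEM);

    AVFilterContext* main_src = nullptr;
    AVFilterContext* ovl_src = nullptr;
    AVFilterContext* sink = nullptr;

    int ret = avfilter_graph_create_filter(&main_src, avfilter_get_by_name("buffer"), "main",
        "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1", nullptr, graph);
    if (ret >= 0)
        ret = avfilter_graph_create_filter(&ovl_src, avfilter_get_by_name("buffer"), "ovl",
            "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1", nullptr, graph);
    if (ret >= 0)
        ret = avfilter_graph_create_filter(&sink, avfilter_get_by_name("buffersink"), "out",
            nullptr, nullptr, graph);
    if (ret < 0)
        return ret;

    // "outputs" lists our buffer sources feeding the parsed graph's open inputs;
    // "inputs" lists the sink draining its open output (the same convention as
    // FFmpeg's own filtering examples).
    AVFilterInOut* outputs = avfilter_inout_alloc();
    AVFilterInOut* outputs2 = avfilter_inout_alloc();
    AVFilterInOut* inputs = avfilter_inout_alloc();

    outputs->name = av_strdup("main");
    outputs->filter_ctx = main_src;
    outputs->pad_idx = 0;
    outputs->next = outputs2;

    outputs2->name = av_strdup("ovl");
    outputs2->filter_ctx = ovl_src;
    outputs2->pad_idx = 0;
    outputs2->next = nullptr;

    inputs->name = av_strdup("out");
    inputs->filter_ctx = sink;
    inputs->pad_idx = 0;
    inputs->next = nullptr;

    const char* desc = "[ovl]format=yuv420p[ovl2];"
                       "[main][ovl2]overlay=W-w:H-h:enable='between(t,0,20)'[out]";

    ret = avfilter_graph_parse_ptr(graph, desc, &inputs, &outputs, nullptr);
    if (ret >= 0)
        ret = avfilter_graph_config(graph, nullptr);

    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);

    *pgraph = graph;
    *pmain = main_src;
    *povl = ovl_src;
    *psink = sink;
    return ret;
}
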

    FFmpeg / libavfilter does not report any errors or warnings in the console / log, but trying to get the filtered frame out of the graph via av_buffersink_get_frame results in an EAGAIN return code.
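
    As background (my reading of the API, not something stated in the post): EAGAIN from av_buffersink_get_frame is not a failure as such; it means the graph cannot emit a frame until more input arrives on one of its sources. A small sketch of that push/pull contract (the function name is mine):

extern "C" {
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/frame.h>
#include <libavutil/error.h>
}
#include <errno.h>

// Push one frame into a source, then drain everything the graph can emit.
// Returns 0 when the graph merely wants more input (EAGAIN), <0 on real errors.
static int push_and_drain(AVFilterContext* src, AVFilterContext* sink,
                          AVFrame* in, AVFrame* out)
{
    int ret = av_buffersrc_add_frame_flags(src, in, AV_BUFFERSRC_FLAG_KEEP_REF);
    if (ret < 0)
        return ret;

    for (;;) {
        ret = av_buffersink_get_frame(sink, out);
        if (ret == AVERROR(EAGAIN))
            return 0;   // not an error: feed more input (possibly the other source)
        if (ret < 0)
            return ret; // AVERROR_EOF or a genuine failure
        // ... consume 'out' here (display / encode) ...
        av_frame_unref(out);
    }
}
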

    The frames from the frame grabber card are fine; they can be encoded or written to a .yuv file. The overlay frame itself is fine too.

    This is the complete private code (a prototype - no style, memory leaks, ...):

#define __STDC_LIMIT_MACROS
#define __STDC_CONSTANT_MACROS

#include <cstdio>
#include <cstdint>
// #include <...>  (header name lost in the page source)

#include "../fgproto/include/SDL/SDL_video.h"
// #include <...>  (header name lost in the page source)

using namespace _DSHOWLIB_NAMESPACE;

#ifdef _WIN32
//Windows
extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavdevice/avdevice.h"
#include "libavfilter/avfilter.h"
#include <libavutil/log.h>
#include <libavutil/mem.h>
#include "libavfilter/buffersink.h"
#include "libavfilter/buffersrc.h"
#include "libavutil/opt.h"
#include "libavutil/hwcontext_qsv.h"
#include "SDL/SDL.h"
};
#endif
#include <iostream>
#include <fstream>

void uSleep(double waitTimeInUs, LARGE_INTEGER frequency)
{
    LARGE_INTEGER startTime, currentTime;

    QueryPerformanceCounter(&startTime);

    if (waitTimeInUs > 16500.0)
        Sleep(1);

    do
    {
        YieldProcessor();
        //Sleep(0);
        QueryPerformanceCounter(&currentTime);
    }
    while (waitTimeInUs > (currentTime.QuadPart - startTime.QuadPart) * 1000000.0 / frequency.QuadPart);
}

void check_error(int ret)
{
    if (ret < 0)
    {
        char errbuf[128];
        int tmp = errno;
        av_strerror(ret, errbuf, sizeof(errbuf));
        std::cerr << "Error: " << errbuf << '\n';
        //exit(1);
    }
}

bool _isRunning = true;

void swap_uv_planes(AVFrame* frame)
{
    uint8_t* temp_plane = frame->data[1];
    frame->data[1] = frame->data[2];
    frame->data[2] = temp_plane;
}

typedef struct
{
    const AVClass* avclass;
} MyFilterGraphContext;

static constexpr AVClass my_filter_graph_class =
{
    .class_name = "MyFilterGraphContext",
    .item_name = av_default_item_name,
    .option = NULL,
    .version = LIBAVUTIL_VERSION_INT,
};

MyFilterGraphContext* init_log_context()
{
    // Template argument reconstructed; it was eaten by the page's HTML.
    MyFilterGraphContext* ctx = static_cast<MyFilterGraphContext*>(av_mallocz(sizeof(*ctx)));

    if (!ctx)
    {
        av_log(nullptr, AV_LOG_ERROR, "Unable to allocate MyFilterGraphContext\n");
        return nullptr;
    }

    ctx->avclass = &my_filter_graph_class;
    return ctx;
}

int init_overlay_filter(AVFilterGraph** graph, AVFilterContext** src_ctx, AVFilterContext** overlay_src_ctx,
                        AVFilterContext** sink_ctx)
{
    AVFilterGraph* filter_graph;
    AVFilterContext* buffersrc_ctx;
    AVFilterContext* overlay_buffersrc_ctx;
    AVFilterContext* buffersink_ctx;
    AVFilterContext* overlay_ctx;
    AVFilterContext* format_ctx;

    const AVFilter* buffersrc, * buffersink, * overlay_buffersrc, * overlay_filter, * format_filter;
    int ret;

    // Create the filter graph
    filter_graph = avfilter_graph_alloc();
    if (!filter_graph)
    {
        fprintf(stderr, "Unable to create filter graph.\n");
        return AVERROR(ENOMEM);
    }

    // Create buffer source filter for main video
    buffersrc = avfilter_get_by_name("buffer");
    if (!buffersrc)
    {
        fprintf(stderr, "Unable to find buffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create buffer source filter for overlay image
    overlay_buffersrc = avfilter_get_by_name("buffer");
    if (!overlay_buffersrc)
    {
        fprintf(stderr, "Unable to find buffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create buffer sink filter
    buffersink = avfilter_get_by_name("buffersink");
    if (!buffersink)
    {
        fprintf(stderr, "Unable to find buffersink filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create overlay filter
    overlay_filter = avfilter_get_by_name("overlay");
    if (!overlay_filter)
    {
        fprintf(stderr, "Unable to find overlay filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create format filter
    format_filter = avfilter_get_by_name("format");
    if (!format_filter)
    {
        fprintf(stderr, "Unable to find format filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Initialize the main video buffer source
    char args[512];

    // Initialize the overlay buffer source
    snprintf(args, sizeof(args), "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1");

    ret = avfilter_graph_create_filter(&overlay_buffersrc_ctx, overlay_buffersrc, nullptr, args, nullptr,
        filter_graph);

    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer source filter for overlay.\n");
        return ret;
    }

    snprintf(args, sizeof(args), "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1");

    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, nullptr, args, nullptr, filter_graph);

    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer source filter for main video.\n");
        return ret;
    }

    // Initialize the format filter to convert overlay image to yuv420p
    snprintf(args, sizeof(args), "pix_fmts=yuv420p");

    ret = avfilter_graph_create_filter(&format_ctx, format_filter, nullptr, args, nullptr, filter_graph);

    if (ret < 0)
    {
        fprintf(stderr, "Unable to create format filter.\n");
        return ret;
    }

    // Initialize the overlay filter
    ret = avfilter_graph_create_filter(&overlay_ctx, overlay_filter, nullptr, "W-w:H-h:enable='between(t,0,20)':format=yuv420", nullptr, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create overlay filter.\n");
        return ret;
    }

    // Initialize the buffer sink
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, nullptr, nullptr, nullptr, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer sink filter.\n");
        return ret;
    }

    // Connect the filters
    ret = avfilter_link(overlay_buffersrc_ctx, 0, format_ctx, 0);

    if (ret >= 0)
    {
        ret = avfilter_link(buffersrc_ctx, 0, overlay_ctx, 0);
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    if (ret >= 0)
    {
        ret = avfilter_link(format_ctx, 0, overlay_ctx, 1);
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    if (ret >= 0)
    {
        if ((ret = avfilter_link(overlay_ctx, 0, buffersink_ctx, 0)) < 0)
        {
            fprintf(stderr, "Unable to link filter graph.\n");
            return ret;
        }
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    MyFilterGraphContext* log_ctx = init_log_context();

    // Configure the filter graph
    if ((ret = avfilter_graph_config(filter_graph, log_ctx)) < 0)
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    *graph = filter_graph;
    *src_ctx = buffersrc_ctx;
    *overlay_src_ctx = overlay_buffersrc_ctx;
    *sink_ctx = buffersink_ctx;

    return 0;
}

int main(int argc, char* argv[])
{
    unsigned int videoIndex = 0;

    avdevice_register_all();

    av_log_set_level(AV_LOG_TRACE);

    const AVInputFormat* pFrameGrabberInputFormat = av_find_input_format("dshow");

    constexpr int frameGrabberPixelWidth = 1920;
    constexpr int frameGrabberPixelHeight = 1080;
    constexpr int frameGrabberFrameRate = 25;
    constexpr AVPixelFormat frameGrabberPixelFormat = AV_PIX_FMT_YUV420P;

    char shortStringBuffer[32];

    AVDictionary* pFrameGrabberOptions = nullptr;

    _snprintf_s(shortStringBuffer, sizeof(shortStringBuffer), "%dx%d", frameGrabberPixelWidth, frameGrabberPixelHeight);
    av_dict_set(&pFrameGrabberOptions, "video_size", shortStringBuffer, 0);

    _snprintf_s(shortStringBuffer, sizeof(shortStringBuffer), "%d", frameGrabberFrameRate);

    av_dict_set(&pFrameGrabberOptions, "framerate", shortStringBuffer, 0);
    av_dict_set(&pFrameGrabberOptions, "pixel_format", "yuv420p", 0);
    av_dict_set(&pFrameGrabberOptions, "rtbufsize", "128M", 0);

    AVFormatContext* pFrameGrabberFormatContext = avformat_alloc_context();

    pFrameGrabberFormatContext->flags = AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS;

    if (avformat_open_input(&pFrameGrabberFormatContext, "video=MZ0380 PCI, Analog 01 Capture",
                            pFrameGrabberInputFormat, &pFrameGrabberOptions) != 0)
    {
        std::cerr << "Couldn't open input stream." << '\n';
        return -1;
    }

    if (avformat_find_stream_info(pFrameGrabberFormatContext, nullptr) < 0)
    {
        std::cerr << "Couldn't find stream information." << '\n';
        return -1;
    }

    bool foundVideoStream = false;

    for (unsigned int loop_videoIndex = 0; loop_videoIndex < pFrameGrabberFormatContext->nb_streams; loop_videoIndex++)
    {
        if (pFrameGrabberFormatContext->streams[loop_videoIndex]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            videoIndex = loop_videoIndex;
            foundVideoStream = true;
            break;
        }
    }

    if (!foundVideoStream)
    {
        std::cerr << "Couldn't find a video stream." << '\n';
        return -1;
    }

    const AVCodec* pFrameGrabberCodec = avcodec_find_decoder(
        pFrameGrabberFormatContext->streams[videoIndex]->codecpar->codec_id);

    AVCodecContext* pFrameGrabberCodecContext = avcodec_alloc_context3(pFrameGrabberCodec);

    if (pFrameGrabberCodec == nullptr)
    {
        std::cerr << "Codec not found." << '\n';
        return -1;
    }

    pFrameGrabberCodecContext->pix_fmt = frameGrabberPixelFormat;
    pFrameGrabberCodecContext->width = frameGrabberPixelWidth;
    pFrameGrabberCodecContext->height = frameGrabberPixelHeight;

    int ret = avcodec_open2(pFrameGrabberCodecContext, pFrameGrabberCodec, nullptr);

    if (ret < 0)
    {
        std::cerr << "Could not open pVideoCodec." << '\n';
        return -1;
    }

    const char* outputFilePath = "c:\\temp\\output.mp4";
    constexpr int outputWidth = frameGrabberPixelWidth;
    constexpr int outputHeight = frameGrabberPixelHeight;
    constexpr int outputFrameRate = frameGrabberFrameRate;

    SwsContext* img_convert_ctx = sws_getContext(frameGrabberPixelWidth, frameGrabberPixelHeight,
                                                 frameGrabberPixelFormat, outputWidth, outputHeight, AV_PIX_FMT_NV12,
                                                 SWS_BICUBIC, nullptr, nullptr, nullptr);

    constexpr double frameTimeinUs = 1000000.0 / frameGrabberFrameRate;

    LARGE_INTEGER frequency;
    LARGE_INTEGER lastTime, currentTime;

    QueryPerformanceFrequency(&frequency);
    QueryPerformanceCounter(&lastTime);

    //SDL----------------------------

    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS))
    {
        printf("Could not initialize SDL - %s\n", SDL_GetError());
        return -1;
    }

    SDL_Window* screen = SDL_CreateWindow("3P FrameGrabber SuperApp", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                          frameGrabberPixelWidth, frameGrabberPixelHeight,
                                          SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL);

    if (!screen)
    {
        printf("SDL: could not set video mode - exiting:%s\n", SDL_GetError());
        return -1;
    }

    SDL_Renderer* renderer = SDL_CreateRenderer(screen, -1, 0);

    if (!renderer)
    {
        printf("SDL: could not create renderer - exiting:%s\n", SDL_GetError());
        return -1;
    }

    SDL_Texture* texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING,
                                             frameGrabberPixelWidth, frameGrabberPixelHeight);

    if (!texture)
    {
        printf("SDL: could not create texture - exiting:%s\n", SDL_GetError());
        return -1;
    }

    SDL_Event event;

    //SDL End------------------------

    const AVCodec* pVideoCodec = avcodec_find_encoder_by_name("h264_qsv");

    if (!pVideoCodec)
    {
        std::cerr << "Codec not found" << '\n';
        return 1;
    }

    AVCodecContext* pVideoCodecContext = avcodec_alloc_context3(pVideoCodec);

    if (!pVideoCodecContext)
    {
        std::cerr << "Could not allocate video pVideoCodec context" << '\n';
        return 1;
    }

    AVBufferRef* pHardwareDeviceContextRef = nullptr;

    ret = av_hwdevice_ctx_create(&pHardwareDeviceContextRef, AV_HWDEVICE_TYPE_QSV,
                                 "PCI\\VEN_8086&DEV_5912&SUBSYS_310217AA&REV_04\\3&11583659&0&10", nullptr, 0);
    check_error(ret);

    // Cast target reconstructed (bit_rate is int64_t); it was eaten by the page's HTML.
    pVideoCodecContext->bit_rate = static_cast<int64_t>(outputWidth * outputHeight) * 2;
    pVideoCodecContext->width = outputWidth;
    pVideoCodecContext->height = outputHeight;
    pVideoCodecContext->framerate = {outputFrameRate, 1};
    pVideoCodecContext->time_base = {1, outputFrameRate};
    pVideoCodecContext->pix_fmt = AV_PIX_FMT_QSV;
    pVideoCodecContext->max_b_frames = 0;

    AVBufferRef* pHardwareFramesContextRef = av_hwframe_ctx_alloc(pHardwareDeviceContextRef);

    AVHWFramesContext* pHardwareFramesContext = reinterpret_cast<AVHWFramesContext*>(pHardwareFramesContextRef->data);

    pHardwareFramesContext->format = AV_PIX_FMT_QSV;
    pHardwareFramesContext->sw_format = AV_PIX_FMT_NV12;
    pHardwareFramesContext->width = outputWidth;
    pHardwareFramesContext->height = outputHeight;
    pHardwareFramesContext->initial_pool_size = 20;

    ret = av_hwframe_ctx_init(pHardwareFramesContextRef);
    check_error(ret);

    pVideoCodecContext->hw_device_ctx = nullptr;
    pVideoCodecContext->hw_frames_ctx = av_buffer_ref(pHardwareFramesContextRef);

    ret = avcodec_open2(pVideoCodecContext, pVideoCodec, nullptr); //&pVideoOptionsDict);
    check_error(ret);

    AVFormatContext* pVideoFormatContext = nullptr;

    avformat_alloc_output_context2(&pVideoFormatContext, nullptr, nullptr, outputFilePath);

    if (!pVideoFormatContext)
    {
        std::cerr << "Could not create output context" << '\n';
        return 1;
    }

    const AVOutputFormat* pVideoOutputFormat = pVideoFormatContext->oformat;

    if (pVideoFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
    {
        pVideoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    const AVStream* pVideoStream = avformat_new_stream(pVideoFormatContext, pVideoCodec);

    if (!pVideoStream)
    {
        std::cerr << "Could not allocate stream" << '\n';
        return 1;
    }

    ret = avcodec_parameters_from_context(pVideoStream->codecpar, pVideoCodecContext);

    check_error(ret);

    if (!(pVideoOutputFormat->flags & AVFMT_NOFILE))
    {
        ret = avio_open(&pVideoFormatContext->pb, outputFilePath, AVIO_FLAG_WRITE);
        check_error(ret);
    }

    ret = avformat_write_header(pVideoFormatContext, nullptr);

    check_error(ret);

    AVFrame* pHardwareFrame = av_frame_alloc();

    if (av_hwframe_get_buffer(pVideoCodecContext->hw_frames_ctx, pHardwareFrame, 0) < 0)
    {
        std::cerr << "Error allocating a hw frame" << '\n';
        return -1;
    }

    AVFrame* pFrameGrabberFrame = av_frame_alloc();
    AVPacket* pFrameGrabberPacket = av_packet_alloc();

    AVPacket* pVideoPacket = av_packet_alloc();
    AVFrame* pVideoFrame = av_frame_alloc();

    AVFrame* pSwappedFrame = av_frame_alloc();
    av_frame_get_buffer(pSwappedFrame, 32);

    INT64 frameCount = 0;

    pFrameGrabberCodecContext->time_base = {1, frameGrabberFrameRate};

    AVFilterContext* buffersrc_ctx = nullptr;
    AVFilterContext* buffersink_ctx = nullptr;
    AVFilterContext* overlay_src_ctx = nullptr;
    AVFilterGraph* filter_graph = nullptr;

    if ((ret = init_overlay_filter(&filter_graph, &buffersrc_ctx, &overlay_src_ctx, &buffersink_ctx)) < 0)
    {
        return ret;
    }

    // Load overlay image
    AVFormatContext* overlay_fmt_ctx = nullptr;
    AVCodecContext* overlay_codec_ctx = nullptr;
    const AVCodec* overlay_codec = nullptr;
    AVFrame* overlay_frame = nullptr;
    AVDictionary* overlay_options = nullptr;

    const char* overlay_image_filename = "c:\\temp\\overlay.bmp";

    av_dict_set(&overlay_options, "video_size", "165x165", 0);
    av_dict_set(&overlay_options, "pixel_format", "bgr24", 0);

    if ((ret = avformat_open_input(&overlay_fmt_ctx, overlay_image_filename, nullptr, &overlay_options)) < 0)
    {
        return ret;
    }

    if ((ret = avformat_find_stream_info(overlay_fmt_ctx, nullptr)) < 0)
    {
        return ret;
    }

    int overlay_video_stream_index = -1;

    for (int i = 0; i < overlay_fmt_ctx->nb_streams; i++)
    {
        if (overlay_fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            overlay_video_stream_index = i;
            break;
        }
    }

    if (overlay_video_stream_index == -1)
    {
        return -1;
    }

    overlay_codec = avcodec_find_decoder(overlay_fmt_ctx->streams[overlay_video_stream_index]->codecpar->codec_id);

    if (!overlay_codec)
    {
        fprintf(stderr, "Overlay codec not found.\n");
        return -1;
    }

    overlay_codec_ctx = avcodec_alloc_context3(overlay_codec);

    if (!overlay_codec_ctx)
    {
        fprintf(stderr, "Could not allocate overlay codec context.\n");
        return AVERROR(ENOMEM);
    }

    avcodec_parameters_to_context(overlay_codec_ctx, overlay_fmt_ctx->streams[overlay_video_stream_index]->codecpar);

    if ((ret = avcodec_open2(overlay_codec_ctx, overlay_codec, nullptr)) < 0)
    {
        return ret;
    }

    overlay_frame = av_frame_alloc();

    if (!overlay_frame)
    {
        fprintf(stderr, "Could not allocate overlay frame.\n");
        return AVERROR(ENOMEM);
    }

    AVPacket* overlay_packet = av_packet_alloc();

    // Read frames from the file
    while (av_read_frame(overlay_fmt_ctx, overlay_packet) >= 0)
    {
        if (overlay_packet->stream_index == overlay_video_stream_index)
        {
            ret = avcodec_send_packet(overlay_codec_ctx, overlay_packet);

            if (ret < 0)
            {
                break;
            }

            ret = avcodec_receive_frame(overlay_codec_ctx, overlay_frame);
            if (ret >= 0)
            {
                break; // We only need the first frame for the overlay
            }

            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            {
                continue;
            }

            break;
        }

        av_packet_unref(overlay_packet);
    }

    av_packet_unref(overlay_packet);

    while (_isRunning)
    {
        while (SDL_PollEvent(&event) != 0)
        {
            switch (event.type)
            {
            case SDL_QUIT:
                _isRunning = false;
                break;
            case SDL_KEYDOWN:
                if (event.key.keysym.sym == SDLK_ESCAPE)
                    _isRunning = false;
                break;
            default: ;
            }
        }

        if (av_read_frame(pFrameGrabberFormatContext, pFrameGrabberPacket) == 0)
        {
            if (pFrameGrabberPacket->stream_index == videoIndex)
            {
                ret = avcodec_send_packet(pFrameGrabberCodecContext, pFrameGrabberPacket);

                if (ret < 0)
                {
                    std::cerr << "Error sending a packet for decoding!" << '\n';
                    return -1;
                }

                ret = avcodec_receive_frame(pFrameGrabberCodecContext, pFrameGrabberFrame);

                if (ret != 0)
                {
                    std::cerr << "Receiving frame failed!" << '\n';
                    return -1;
                }

                if (ret == AVERROR(EAGAIN) || ret == AVERROR(AVERROR_EOF))
                {
                    std::cout << "End of stream detected. Exiting now." << '\n';
                    return 0;
                }

                if (ret != 0)
                {
                    std::cerr << "Decode Error!" << '\n';
                    return -1;
                }

                // Feed the frame into the filter graph
                if (av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
                {
                    fprintf(stderr, "Error while feeding the filtergraph\n");
                    break;
                }

                // Push the overlay frame to the overlay_src_ctx
                ret = av_buffersrc_add_frame_flags(overlay_src_ctx, overlay_frame, AV_BUFFERSRC_FLAG_KEEP_REF);
                if (ret < 0)
                {
                    fprintf(stderr, "Error while feeding the filtergraph\n");
                    break;
                }

                // Pull filtered frame from the filter graph
                AVFrame* filtered_frame = av_frame_alloc();

                ret = av_buffersink_get_frame(buffersink_ctx, filtered_frame);

                if (ret < 0)
                {
                    check_error(ret);
                }

                QueryPerformanceCounter(&currentTime);

                double elapsedTime = (currentTime.QuadPart - lastTime.QuadPart) * 1000000.0 / frequency.QuadPart;

                if (elapsedTime > 0.0 && elapsedTime < frameTimeinUs)
                {
                    uSleep(frameTimeinUs - elapsedTime, frequency);
                }

                SDL_UpdateTexture(texture, nullptr, filtered_frame->data[0], filtered_frame->linesize[0]);
                SDL_RenderClear(renderer);
                SDL_RenderCopy(renderer, texture, nullptr, nullptr);
                SDL_RenderPresent(renderer);

                QueryPerformanceCounter(&lastTime);

                swap_uv_planes(filtered_frame);

                ret = sws_scale_frame(img_convert_ctx, pVideoFrame, filtered_frame);

                if (ret < 0)
                {
                    std::cerr << "Scaling frame for Intel QS Encoder did fail!" << '\n';
                    return -1;
                }

                if (av_hwframe_transfer_data(pHardwareFrame, pVideoFrame, 0) < 0)
                {
                    std::cerr << "Error transferring frame data to hw frame!" << '\n';
                    return -1;
                }

                pHardwareFrame->pts = frameCount++;

                ret = avcodec_send_frame(pVideoCodecContext, pHardwareFrame);

                if (ret < 0)
                {
                    std::cerr << "Error sending a frame for encoding" << '\n';
                    check_error(ret);
                }

                av_packet_unref(pVideoPacket);

                while (ret >= 0)
                {
                    ret = avcodec_receive_packet(pVideoCodecContext, pVideoPacket);

                    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                    {
                        break;
                    }

                    if (ret < 0)
                    {
                        std::cerr << "Error during encoding" << '\n';
                        return 1;
                    }

                    av_packet_rescale_ts(pVideoPacket, pVideoCodecContext->time_base, pVideoStream->time_base);

                    pVideoPacket->stream_index = pVideoStream->index;

                    ret = av_interleaved_write_frame(pVideoFormatContext, pVideoPacket);

                    check_error(ret);

                    av_packet_unref(pVideoPacket);
                }

                av_packet_unref(pFrameGrabberPacket);
                av_frame_free(&filtered_frame);
            }
        }
    }

    av_write_trailer(pVideoFormatContext);
    av_buffer_unref(&pHardwareDeviceContextRef);
    avcodec_free_context(&pVideoCodecContext);
    avio_closep(&pVideoFormatContext->pb);
    avformat_free_context(pVideoFormatContext);
    av_packet_free(&pVideoPacket);

    avcodec_free_context(&pFrameGrabberCodecContext);
    av_frame_free(&pFrameGrabberFrame);
    av_packet_free(&pFrameGrabberPacket);
    avformat_close_input(&pFrameGrabberFormatContext);

    return 0;
}

    The console / log output when running the code:

[in @ 00000288ee494f40] Setting 'video_size' to value '1920x1080'
[in @ 00000288ee494f40] Setting 'pix_fmt' to value 'yuv420p'
[in @ 00000288ee494f40] Setting 'time_base' to value '1/25'
[in @ 00000288ee494f40] Setting 'pixel_aspect' to value '1/1'
[in @ 00000288ee494f40] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[overlay_in @ 00000288ff1013c0] Setting 'video_size' to value '165x165'
[overlay_in @ 00000288ff1013c0] Setting 'pix_fmt' to value 'bgr24'
[overlay_in @ 00000288ff1013c0] Setting 'time_base' to value '1/25'
[overlay_in @ 00000288ff1013c0] Setting 'pixel_aspect' to value '1/1'
[overlay_in @ 00000288ff1013c0] w:165 h:165 pixfmt:bgr24 tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[format @ 00000288ff1015c0] Setting 'pix_fmts' to value 'yuv420p'
[overlay @ 00000288ff101880] Setting 'x' to value 'W-w'
[overlay @ 00000288ff101880] Setting 'y' to value 'H-h'
[overlay @ 00000288ff101880] Setting 'enable' to value 'between(t,0,20)'
[overlay @ 00000288ff101880] Setting 'format' to value 'yuv420'
[auto_scale_0 @ 00000288ff101ec0] w:iw h:ih flags:'' interl:0
[format @ 00000288ff1015c0] auto-inserting filter 'auto_scale_0' between the filter 'overlay_in' and the filter 'format'
[auto_scale_1 @ 00000288ee4a4cc0] w:iw h:ih flags:'' interl:0
[overlay @ 00000288ff101880] auto-inserting filter 'auto_scale_1' between the filter 'format' and the filter 'overlay'
[AVFilterGraph @ 00000288ee495c80] query_formats: 5 queried, 6 merged, 6 already done, 0 delayed
[auto_scale_0 @ 00000288ff101ec0] w:165 h:165 fmt:bgr24 csp:gbr range:pc sar:1/1 -> w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[auto_scale_1 @ 00000288ee4a4cc0] w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 -> w:165 h:165 fmt:yuva420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[overlay @ 00000288ff101880] main w:1920 h:1080 fmt:yuv420p overlay w:165 h:165 fmt:yuva420p
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Selected 1/25 time base
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Sync level 2

    I tried changing the index / order in which the two different frames are pushed into the filter graph. Once I got a frame out of the graph, but it had the dimensions of the overlay image rather than those of the frame from the grabber card. So I suppose I am doing something wrong in building up the filter graph.
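
    One thing that might be worth ruling out (my assumption, not something confirmed by the post): the overlay filter pairs its two inputs by timestamp via framesync, so if the pushed frames carry no usable pts in the 1/25 time base declared to the buffer sources, the sink can keep answering EAGAIN while it waits for matching frames. A sketch of stamping both inputs in the main loop, reusing the post's variable names:

// Hypothetical fix sketch: give both filter inputs monotonically increasing
// pts in the 1/25 time base declared to the buffer sources, so framesync can
// pair them.
static int64_t filterFrameIndex = 0;

pFrameGrabberFrame->pts = filterFrameIndex; // one tick per frame at 1/25
overlay_frame->pts = filterFrameIndex;      // re-stamp the single overlay frame

ret = av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame, AV_BUFFERSRC_FLAG_KEEP_REF);
if (ret >= 0)
    ret = av_buffersrc_add_frame_flags(overlay_src_ctx, overlay_frame, AV_BUFFERSRC_FLAG_KEEP_REF);

filterFrameIndex++;
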

    To verify that the FFmpeg build contains all the necessary modules, I ran the same procedure via the FFmpeg executable in the console; it worked, and the result was as expected.

    The command line producing the expected output is the following:

ffmpeg -f dshow -i video="MZ0380 PCI, Analog 01 Capture" -video_size 1920x1080 -framerate 25 -pixel_format yuv420p -loglevel debug -i "C:\temp\overlay.bmp" -filter_complex "[0:v][1:v] overlay=W-w:H-h:enable='between(t,0,20)'" -pix_fmt yuv420p -c:a copy output.mp4

  • Can ffmpeg concatenate mp3 files using the process at audio-joiner.com?

    7 June 2020, by Ed999

    I have a dozen or more mp3 audio files, which I need to concatenate into a single mp3 file. The files all have the same bitrate (320 kbps) and sample rate (44.1 kHz), but all of them have differing durations.

    I have studied the three methods of concatenation recommended on stackoverflow (How to concatenate two MP4 files using FFmpeg). One method actually works, but when I play back the output file I find that there are noticeable audio artifacts (audible glitches) at each join point.

    I've been told that this problem is caused by the input files not having identical duration. This seems likely, because I've had some successes in concatenating audio files with identical bit rate, sample rate, and duration.
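
    For what it's worth, MP3's frame structure makes duration mismatches a plausible culprit. MPEG-1 Layer III codes audio in fixed frames of 1152 samples, so at 44.1 kHz each frame lasts

        t_frame = 1152 / 44100 Hz ≈ 26.12 ms

    A file's coded length is therefore always a whole number of frames: the encoder pads the final frame with silence, and typically also inserts a few hundred samples of encoder delay at the start. Concatenating files byte-wise leaves that padding and delay in the middle of the joined stream, which is audible as a click or gap; decoding to PCM, joining the samples, and re-encoding once avoids it.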

    I have seen, online, some much more complex scripts which are, at present, miles beyond my understanding. One solution I was directed to required a fairly deep knowledge of Python!

    However, my research also included a site at audio-joiner.com - and this had the only completely successful method I've yet found, for files of non-identical duration. That site processed some of my input files, joined the multiple files into one, and the concatenated output file it produced did not have any audible glitches at the joins.

    I looked into the process the site was using, hoping to get a clue as to where I've been going wrong, but the script on the site (which looks like ajax-based javascript) is too complex for me to follow.

    Because the process seemed to take quite a long time, I wouldn't be too surprised to learn that the mp3 input files are being converted to some other audio format, joined, then converted back to mp3 for the output. But if so, that wouldn't put me off using the process.

    Is anyone familiar with the approach being used, and can say whether it might be reproducible using ffmpeg?


    ADDED -

    There are 7 scripts, in all, listed in the source of the relevant page:

    https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js
    https://cdnjs.cloudflare.com/ajax/libs/jquery/1.12.0/jquery.min.js
    https://static.123apps.com/js/socket.io.js
    https://static.123apps.com/js/shared_0.js
    https://static.123apps.com/js/shared_1.js
    https://static.123apps.com/js/ajoiner.js
    https://ajax.googleapis.com/ajax/libs/swfobject/2.2/swfobject.js


    ADDED -

    The successful (javascript) function seems to be this, but it isn't obvious to me why it is succeeding (too complex for me!). Can anyone suggest what approach it is taking? For example, is it transcoding the mp3 files to an intermediate format, and concatenating the intermediate files?

function start_join(e){
  l("start_join():"),l(e);
  var t;
  return(t=$.parseJSON(e)) && $("#ajoiner").ajoiner("set_params",t),!0
}

function cancel_join(e){
  return l("cancel_join():"),l(e),!0
}

!function(o){
  var t={
    init:function(e){
      var t=o(this),n=o.extend({lang:{cancel:"Cancel",download:"Download"}},e);
      o(this).data("ajoiner",{o:n,tmp_i:1,pid:-1,params:{}});
      t.data("ajoiner");
      t.ajoiner("_connect"),o("body").bind("socket_connected",function(){t.ajoiner("_connect")})
    },set_params:function(e){
      var t=o(this).data("ajoiner");
      isset(e)?(e.uid=Cookies.get("uid"),t.params=e,t.params.lang_id=lang_id,t.params.host=location.hostname,t.params.hostprotocol=location.protocol,l("socket emit join:"),l(t.params),socket.emit("join",t.params)):error("set_params: params not set")
    },_connect:function(){

      var t=o(this).data("ajoiner");

      l("_connect"),socket.on("join",function(e){
        "progress"==e.message_type?(t.tmp_i,t.tmp_i++,void 0!==getObj("theSWF")&&(getObj("theSWF").set_join_progress(parseInt(e.progress_value)),l("SWF.set_join_progress("+parseInt(e.progress_value)+")")),isset(e.pid)&&(t.pid=e.pid)):"final_result"==e.message_type?(void(e.tmp_i=0)!==getObj("theSWF")&&(getObj("theSWF").join_finished(o.stringifyJSON(e)),l("SWF.join_finished('"+o.stringifyJSON(e)+"')")),last_conv_result=e):"error"==e.message_type&&l(e.error_desc)
      }
    )},_cancel_convert:function(){
      var e=o(this).data("ajoiner");
      0 /* (the remainder of this function is cut off in the page source) */
