
Media (3)
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (28)
-
The MediaSPIP configuration area
29 November 2010 — The MediaSPIP configuration area is reserved for administrators. A menu link "administrer" is usually displayed at the top of the page [1].
It lets you configure your site in detail.
Navigation in this configuration area is divided into three parts: the general site configuration, which notably lets you modify the main information concerning the site (...) -
Accepted formats
28 January 2010 — The following commands give information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats used: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
As a first step, we (...) -
Adding notes and captions to images
7 February 2011 — To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only the site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (9146)
-
Problems using FFmpeg / libavfilter for adding overlay to grabbed frames
21 November 2024, by Michael
On Windows, with the latest FFmpeg / libav (full build, non-free), a C/C++ app reads YUV420P frames from a frame grabber card.


A bitmap (BGR24) overlay image loaded from a file should be drawn on every frame for the first 20 seconds via libavfilter. First, the BGR24 overlay image is converted to YUV420P via a format filter. Then the YUV420P frame from the frame grabber and the YUV420P overlay frame are pushed into the overlay filter.


FFmpeg / libavfilter does not report any errors or warnings in the console / log. Trying to get the filtered frame out of the graph via av_buffersink_get_frame results in an EAGAIN return code.

The frames from the frame grabber card are fine; they can be encoded or written to a .yuv file. The overlay frame itself is fine too.
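
For reference, AVERROR(EAGAIN) from av_buffersink_get_frame is not a hard failure: it means the sink has no frame ready yet and the graph needs more input before it can produce one. A minimal sketch of the usual feed/drain pattern (src_ctx, sink_ctx, in_frame and out_frame are placeholder names, not taken from the code below):

// Sketch: EAGAIN from the sink only means "feed more input"; treat any other
// negative value as a real error.
ret = av_buffersrc_add_frame_flags(src_ctx, in_frame, AV_BUFFERSRC_FLAG_KEEP_REF);
if (ret < 0)
    return ret;

while ((ret = av_buffersink_get_frame(sink_ctx, out_frame)) >= 0)
{
    // ... use out_frame ...
    av_frame_unref(out_frame);
}

if (ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
    return ret; // genuine failure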


This is the complete code (a quick prototype, so no polish and some memory leaks):


#define __STDC_LIMIT_MACROS
#define __STDC_CONSTANT_MACROS

#include <cstdio>
#include <cstdint>
#include 

#include "../fgproto/include/SDL/SDL_video.h"
#include 

using namespace _DSHOWLIB_NAMESPACE;

#ifdef _WIN32
//Windows
extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavdevice/avdevice.h"
#include "libavfilter/avfilter.h"
#include <libavutil/log.h>
#include <libavutil/mem.h>
#include "libavfilter/buffersink.h"
#include "libavfilter/buffersrc.h"
#include "libavutil/opt.h"
#include "libavutil/hwcontext_qsv.h"
#include "SDL/SDL.h"
};
#endif
#include <iostream>
#include <fstream>

void uSleep(double waitTimeInUs, LARGE_INTEGER frequency)
{
 LARGE_INTEGER startTime, currentTime;

 QueryPerformanceCounter(&startTime);

 if (waitTimeInUs > 16500.0)
 Sleep(1);

 do
 {
 YieldProcessor();
 //Sleep(0);
 QueryPerformanceCounter(&currentTime);
 }
 while (waitTimeInUs > (currentTime.QuadPart - startTime.QuadPart) * 1000000.0 / frequency.QuadPart);
}

void check_error(int ret)
{
 if (ret < 0)
 {
 char errbuf[128];
 int tmp = errno;
 av_strerror(ret, errbuf, sizeof(errbuf));
 std::cerr << "Error: " << errbuf << '\n';
 //exit(1);
 }
}

bool _isRunning = true;

void swap_uv_planes(AVFrame* frame)
{
 uint8_t* temp_plane = frame->data[1]; 
 frame->data[1] = frame->data[2]; 
 frame->data[2] = temp_plane; 
}

typedef struct
{
 const AVClass* avclass;
} MyFilterGraphContext;

static constexpr AVClass my_filter_graph_class = 
{
 .class_name = "MyFilterGraphContext",
 .item_name = av_default_item_name,
 .option = NULL,
 .version = LIBAVUTIL_VERSION_INT,
};

MyFilterGraphContext* init_log_context()
{
 MyFilterGraphContext* ctx = static_cast<MyFilterGraphContext*>(av_mallocz(sizeof(*ctx)));

 if (!ctx)
 {
 av_log(nullptr, AV_LOG_ERROR, "Unable to allocate MyFilterGraphContext\n");
 return nullptr;
 }

 ctx->avclass = &my_filter_graph_class;
 return ctx;
}

int init_overlay_filter(AVFilterGraph** graph, AVFilterContext** src_ctx, AVFilterContext** overlay_src_ctx,
 AVFilterContext** sink_ctx)
{
 AVFilterGraph* filter_graph;
 AVFilterContext* buffersrc_ctx;
 AVFilterContext* overlay_buffersrc_ctx;
 AVFilterContext* buffersink_ctx;
 AVFilterContext* overlay_ctx;
 AVFilterContext* format_ctx;

 const AVFilter* buffersrc, * buffersink, * overlay_buffersrc, * overlay_filter, * format_filter;
 int ret;

 // Create the filter graph
 filter_graph = avfilter_graph_alloc();
 if (!filter_graph)
 {
 fprintf(stderr, "Unable to create filter graph.\n");
 return AVERROR(ENOMEM);
 }

 // Create buffer source filter for main video
 buffersrc = avfilter_get_by_name("buffer");
 if (!buffersrc)
 {
 fprintf(stderr, "Unable to find buffer filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create buffer source filter for overlay image
 overlay_buffersrc = avfilter_get_by_name("buffer");
 if (!overlay_buffersrc)
 {
 fprintf(stderr, "Unable to find buffer filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create buffer sink filter
 buffersink = avfilter_get_by_name("buffersink");
 if (!buffersink)
 {
 fprintf(stderr, "Unable to find buffersink filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create overlay filter
 overlay_filter = avfilter_get_by_name("overlay");
 if (!overlay_filter)
 {
 fprintf(stderr, "Unable to find overlay filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create format filter
 format_filter = avfilter_get_by_name("format");
 if (!format_filter)
 {
 fprintf(stderr, "Unable to find format filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Initialize the main video buffer source
 char args[512];

 // Initialize the overlay buffer source
 snprintf(args, sizeof(args), "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1"); 

 ret = avfilter_graph_create_filter(&overlay_buffersrc_ctx, overlay_buffersrc, nullptr, args, nullptr,
 filter_graph);

 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer source filter for overlay.\n");
 return ret;
 }

 snprintf(args, sizeof(args), "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1");

 ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, nullptr, args, nullptr, filter_graph);

 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer source filter for main video.\n");
 return ret;
 }

 // Initialize the format filter to convert overlay image to yuv420p
 snprintf(args, sizeof(args), "pix_fmts=yuv420p");

 ret = avfilter_graph_create_filter(&format_ctx, format_filter, nullptr, args, nullptr, filter_graph);

 if (ret < 0)
 {
 fprintf(stderr, "Unable to create format filter.\n");
 return ret;
 }

 // Initialize the overlay filter
 ret = avfilter_graph_create_filter(&overlay_ctx, overlay_filter, nullptr, "W-w:H-h:enable='between(t,0,20)':format=yuv420", nullptr, filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create overlay filter.\n");
 return ret;
 }

 // Initialize the buffer sink
 ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, nullptr, nullptr, nullptr, filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer sink filter.\n");
 return ret;
 }

 // Connect the filters
 ret = avfilter_link(overlay_buffersrc_ctx, 0, format_ctx, 0);

 if (ret >= 0)
 {
 ret = avfilter_link(buffersrc_ctx, 0, overlay_ctx, 0);
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }


 if (ret >= 0)
 {
 ret = avfilter_link(format_ctx, 0, overlay_ctx, 1);
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 if (ret >= 0)
 {
 if ((ret = avfilter_link(overlay_ctx, 0, buffersink_ctx, 0)) < 0)
 {
 fprintf(stderr, "Unable to link filter graph.\n");
 return ret;
 }
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 MyFilterGraphContext* log_ctx = init_log_context();

 // Configure the filter graph
 if ((ret = avfilter_graph_config(filter_graph, log_ctx)) < 0)
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 *graph = filter_graph;
 *src_ctx = buffersrc_ctx;
 *overlay_src_ctx = overlay_buffersrc_ctx;
 *sink_ctx = buffersink_ctx;

 return 0;
}

int main(int argc, char* argv[])
{
 unsigned int videoIndex = 0;

 avdevice_register_all();

 av_log_set_level(AV_LOG_TRACE);

 const AVInputFormat* pFrameGrabberInputFormat = av_find_input_format("dshow");

 constexpr int frameGrabberPixelWidth = 1920;
 constexpr int frameGrabberPixelHeight = 1080;
 constexpr int frameGrabberFrameRate = 25;
 constexpr AVPixelFormat frameGrabberPixelFormat = AV_PIX_FMT_YUV420P;

 char shortStringBuffer[32];

 AVDictionary* pFrameGrabberOptions = nullptr;

 _snprintf_s(shortStringBuffer, sizeof(shortStringBuffer), "%dx%d", frameGrabberPixelWidth, frameGrabberPixelHeight);
 av_dict_set(&pFrameGrabberOptions, "video_size", shortStringBuffer, 0);

 _snprintf_s(shortStringBuffer, sizeof(shortStringBuffer), "%d", frameGrabberFrameRate);

 av_dict_set(&pFrameGrabberOptions, "framerate", shortStringBuffer, 0);
 av_dict_set(&pFrameGrabberOptions, "pixel_format", "yuv420p", 0);
 av_dict_set(&pFrameGrabberOptions, "rtbufsize", "128M", 0);

 AVFormatContext* pFrameGrabberFormatContext = avformat_alloc_context();

 pFrameGrabberFormatContext->flags = AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS;

 if (avformat_open_input(&pFrameGrabberFormatContext, "video=MZ0380 PCI, Analog 01 Capture",
 pFrameGrabberInputFormat, &pFrameGrabberOptions) != 0)
 {
 std::cerr << "Couldn't open input stream." << '\n';
 return -1;
 }

 if (avformat_find_stream_info(pFrameGrabberFormatContext, nullptr) < 0)
 {
 std::cerr << "Couldn't find stream information." << '\n';
 return -1;
 }

 bool foundVideoStream = false;

 for (unsigned int loop_videoIndex = 0; loop_videoIndex < pFrameGrabberFormatContext->nb_streams; loop_videoIndex++)
 {
 if (pFrameGrabberFormatContext->streams[loop_videoIndex]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 videoIndex = loop_videoIndex;
 foundVideoStream = true;
 break;
 }
 }

 if (!foundVideoStream)
 {
 std::cerr << "Couldn't find a video stream." << '\n';
 return -1;
 }

 const AVCodec* pFrameGrabberCodec = avcodec_find_decoder(
 pFrameGrabberFormatContext->streams[videoIndex]->codecpar->codec_id);

 AVCodecContext* pFrameGrabberCodecContext = avcodec_alloc_context3(pFrameGrabberCodec);

 if (pFrameGrabberCodec == nullptr)
 {
 std::cerr << "Codec not found." << '\n';
 return -1;
 }

 pFrameGrabberCodecContext->pix_fmt = frameGrabberPixelFormat;
 pFrameGrabberCodecContext->width = frameGrabberPixelWidth;
 pFrameGrabberCodecContext->height = frameGrabberPixelHeight;

 int ret = avcodec_open2(pFrameGrabberCodecContext, pFrameGrabberCodec, nullptr);

 if (ret < 0)
 {
 std::cerr << "Could not open pVideoCodec." << '\n';
 return -1;
 }

 const char* outputFilePath = "c:\\temp\\output.mp4";
 constexpr int outputWidth = frameGrabberPixelWidth;
 constexpr int outputHeight = frameGrabberPixelHeight;
 constexpr int outputFrameRate = frameGrabberFrameRate;

 SwsContext* img_convert_ctx = sws_getContext(frameGrabberPixelWidth, frameGrabberPixelHeight,
 frameGrabberPixelFormat, outputWidth, outputHeight, AV_PIX_FMT_NV12,
 SWS_BICUBIC, nullptr, nullptr, nullptr);

 constexpr double frameTimeinUs = 1000000.0 / frameGrabberFrameRate;

 LARGE_INTEGER frequency;
 LARGE_INTEGER lastTime, currentTime;

 QueryPerformanceFrequency(&frequency);
 QueryPerformanceCounter(&lastTime);

 //SDL----------------------------

 if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS))
 {
 printf("Could not initialize SDL - %s\n", SDL_GetError());
 return -1;
 }

 SDL_Window* screen = SDL_CreateWindow("3P FrameGrabber SuperApp", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
 frameGrabberPixelWidth, frameGrabberPixelHeight,
 SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL);

 if (!screen)
 {
 printf("SDL: could not set video mode - exiting:%s\n", SDL_GetError());
 return -1;
 }

 SDL_Renderer* renderer = SDL_CreateRenderer(screen, -1, 0);

 if (!renderer)
 {
 printf("SDL: could not create renderer - exiting:%s\n", SDL_GetError());
 return -1;
 }

 SDL_Texture* texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING,
 frameGrabberPixelWidth, frameGrabberPixelHeight);

 if (!texture)
 {
 printf("SDL: could not create texture - exiting:%s\n", SDL_GetError());
 return -1;
 }

 SDL_Event event;

 //SDL End------------------------

 const AVCodec* pVideoCodec = avcodec_find_encoder_by_name("h264_qsv");

 if (!pVideoCodec)
 {
 std::cerr << "Codec not found" << '\n';
 return 1;
 }

 AVCodecContext* pVideoCodecContext = avcodec_alloc_context3(pVideoCodec);

 if (!pVideoCodecContext)
 {
 std::cerr << "Could not allocate video pVideoCodec context" << '\n';
 return 1;
 }

 AVBufferRef* pHardwareDeviceContextRef = nullptr;

 ret = av_hwdevice_ctx_create(&pHardwareDeviceContextRef, AV_HWDEVICE_TYPE_QSV,
 "PCI\\VEN_8086&DEV_5912&SUBSYS_310217AA&REV_04\\3&11583659&0&10", nullptr, 0);
 check_error(ret);

 pVideoCodecContext->bit_rate = static_cast<int64_t>(outputWidth * outputHeight) * 2;
 pVideoCodecContext->width = outputWidth;
 pVideoCodecContext->height = outputHeight;
 pVideoCodecContext->framerate = {outputFrameRate, 1};
 pVideoCodecContext->time_base = {1, outputFrameRate};
 pVideoCodecContext->pix_fmt = AV_PIX_FMT_QSV;
 pVideoCodecContext->max_b_frames = 0;

 AVBufferRef* pHardwareFramesContextRef = av_hwframe_ctx_alloc(pHardwareDeviceContextRef);

 AVHWFramesContext* pHardwareFramesContext = reinterpret_cast<AVHWFramesContext*>(pHardwareFramesContextRef->data);

 pHardwareFramesContext->format = AV_PIX_FMT_QSV;
 pHardwareFramesContext->sw_format = AV_PIX_FMT_NV12;
 pHardwareFramesContext->width = outputWidth;
 pHardwareFramesContext->height = outputHeight;
 pHardwareFramesContext->initial_pool_size = 20;

 ret = av_hwframe_ctx_init(pHardwareFramesContextRef);
 check_error(ret);

 pVideoCodecContext->hw_device_ctx = nullptr;
 pVideoCodecContext->hw_frames_ctx = av_buffer_ref(pHardwareFramesContextRef);

 ret = avcodec_open2(pVideoCodecContext, pVideoCodec, nullptr); //&pVideoOptionsDict);
 check_error(ret);

 AVFormatContext* pVideoFormatContext = nullptr;

 avformat_alloc_output_context2(&pVideoFormatContext, nullptr, nullptr, outputFilePath);

 if (!pVideoFormatContext)
 {
 std::cerr << "Could not create output context" << '\n';
 return 1;
 }

 const AVOutputFormat* pVideoOutputFormat = pVideoFormatContext->oformat;

 if (pVideoFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
 {
 pVideoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 }

 const AVStream* pVideoStream = avformat_new_stream(pVideoFormatContext, pVideoCodec);

 if (!pVideoStream)
 {
 std::cerr << "Could not allocate stream" << '\n';
 return 1;
 }

 ret = avcodec_parameters_from_context(pVideoStream->codecpar, pVideoCodecContext);

 check_error(ret);

 if (!(pVideoOutputFormat->flags & AVFMT_NOFILE))
 {
 ret = avio_open(&pVideoFormatContext->pb, outputFilePath, AVIO_FLAG_WRITE);
 check_error(ret);
 }

 ret = avformat_write_header(pVideoFormatContext, nullptr);

 check_error(ret);

 AVFrame* pHardwareFrame = av_frame_alloc();

 if (av_hwframe_get_buffer(pVideoCodecContext->hw_frames_ctx, pHardwareFrame, 0) < 0)
 {
 std::cerr << "Error allocating a hw frame" << '\n';
 return -1;
 }

 AVFrame* pFrameGrabberFrame = av_frame_alloc();
 AVPacket* pFrameGrabberPacket = av_packet_alloc();

 AVPacket* pVideoPacket = av_packet_alloc();
 AVFrame* pVideoFrame = av_frame_alloc();

 AVFrame* pSwappedFrame = av_frame_alloc();
 av_frame_get_buffer(pSwappedFrame, 32);

 INT64 frameCount = 0;

 pFrameGrabberCodecContext->time_base = {1, frameGrabberFrameRate};

 AVFilterContext* buffersrc_ctx = nullptr;
 AVFilterContext* buffersink_ctx = nullptr;
 AVFilterContext* overlay_src_ctx = nullptr;
 AVFilterGraph* filter_graph = nullptr;

 if ((ret = init_overlay_filter(&filter_graph, &buffersrc_ctx, &overlay_src_ctx, &buffersink_ctx)) < 0)
 {
 return ret;
 }

 // Load overlay image
 AVFormatContext* overlay_fmt_ctx = nullptr;
 AVCodecContext* overlay_codec_ctx = nullptr;
 const AVCodec* overlay_codec = nullptr;
 AVFrame* overlay_frame = nullptr;
 AVDictionary* overlay_options = nullptr;

 const char* overlay_image_filename = "c:\\temp\\overlay.bmp";

 av_dict_set(&overlay_options, "video_size", "165x165", 0);
 av_dict_set(&overlay_options, "pixel_format", "bgr24", 0);

 if ((ret = avformat_open_input(&overlay_fmt_ctx, overlay_image_filename, nullptr, &overlay_options)) < 0)
 {
 return ret;
 }

 if ((ret = avformat_find_stream_info(overlay_fmt_ctx, nullptr)) < 0)
 {
 return ret;
 }

 int overlay_video_stream_index = -1;

 for (int i = 0; i < overlay_fmt_ctx->nb_streams; i++)
 {
 if (overlay_fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 overlay_video_stream_index = i;
 break;
 }
 }

 if (overlay_video_stream_index == -1)
 {
 return -1;
 }

 overlay_codec = avcodec_find_decoder(overlay_fmt_ctx->streams[overlay_video_stream_index]->codecpar->codec_id);

 if (!overlay_codec)
 {
 fprintf(stderr, "Overlay codec not found.\n");
 return -1;
 }

 overlay_codec_ctx = avcodec_alloc_context3(overlay_codec);

 if (!overlay_codec_ctx)
 {
 fprintf(stderr, "Could not allocate overlay codec context.\n");
 return AVERROR(ENOMEM);
 }

 avcodec_parameters_to_context(overlay_codec_ctx, overlay_fmt_ctx->streams[overlay_video_stream_index]->codecpar);

 if ((ret = avcodec_open2(overlay_codec_ctx, overlay_codec, nullptr)) < 0)
 {
 return ret;
 }

 overlay_frame = av_frame_alloc();

 if (!overlay_frame)
 {
 fprintf(stderr, "Could not allocate overlay frame.\n");
 return AVERROR(ENOMEM);
 }

 AVPacket* overlay_packet = av_packet_alloc();

 // Read frames from the file
 while (av_read_frame(overlay_fmt_ctx, overlay_packet) >= 0)
 {
 if (overlay_packet->stream_index == overlay_video_stream_index)
 {
 ret = avcodec_send_packet(overlay_codec_ctx, overlay_packet);

 if (ret < 0)
 {
 break;
 }

 ret = avcodec_receive_frame(overlay_codec_ctx, overlay_frame);
 if (ret >= 0)
 {
 
 break; // We only need the first frame for the overlay
 }

 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 {
 continue;
 }

 break;
 }

 av_packet_unref(overlay_packet);
 }

 av_packet_unref(overlay_packet);

 while (_isRunning)
 {
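 // Main loop: read a packet from the grabber, decode it, push it and the overlay frame
 // into the filter graph, display the filtered frame via SDL, then encode with QSV and mux to MP4.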
 while (SDL_PollEvent(&event) != 0)
 {
 switch (event.type)
 {
 case SDL_QUIT:
 _isRunning = false;
 break;
 case SDL_KEYDOWN:
 if (event.key.keysym.sym == SDLK_ESCAPE)
 _isRunning = false;
 break;
 default: ;
 }
 }

 if (av_read_frame(pFrameGrabberFormatContext, pFrameGrabberPacket) == 0)
 {
 if (pFrameGrabberPacket->stream_index == videoIndex)
 {
 ret = avcodec_send_packet(pFrameGrabberCodecContext, pFrameGrabberPacket);

 if (ret < 0)
 {
 std::cerr << "Error sending a packet for decoding!" << '\n';
 return -1;
 }

 ret = avcodec_receive_frame(pFrameGrabberCodecContext, pFrameGrabberFrame);

 if (ret != 0)
 {
 std::cerr << "Receiving frame failed!" << '\n';
 return -1;
 }

 if (ret == AVERROR(EAGAIN) || ret == AVERROR(AVERROR_EOF))
 {
 std::cout << "End of stream detected. Exiting now." << '\n';
 return 0;
 }

 if (ret != 0)
 {
 std::cerr << "Decode Error!" << '\n';
 return -1;
 }

 // Feed the frame into the filter graph
 if (av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
 {
 fprintf(stderr, "Error while feeding the filtergraph\n");
 break;
 }

 // Push the overlay frame to the overlay_src_ctx
 ret = av_buffersrc_add_frame_flags(overlay_src_ctx, overlay_frame, AV_BUFFERSRC_FLAG_KEEP_REF);
 if (ret < 0)
 {
 fprintf(stderr, "Error while feeding the filtergraph\n");
 break;
 } 

 // Pull filtered frame from the filter graph
 AVFrame* filtered_frame = av_frame_alloc();

 ret = av_buffersink_get_frame(buffersink_ctx, filtered_frame);

 if (ret < 0)
 {
 check_error(ret);
 }

 QueryPerformanceCounter(&currentTime);

 double elapsedTime = (currentTime.QuadPart - lastTime.QuadPart) * 1000000.0 / frequency.QuadPart;

 if (elapsedTime > 0.0 && elapsedTime < frameTimeinUs)
 {
 uSleep(frameTimeinUs - elapsedTime, frequency);
 }

 SDL_UpdateTexture(texture, nullptr, filtered_frame->data[0], filtered_frame->linesize[0]);
 SDL_RenderClear(renderer);
 SDL_RenderCopy(renderer, texture, nullptr, nullptr);
 SDL_RenderPresent(renderer);

 QueryPerformanceCounter(&lastTime);

 swap_uv_planes(filtered_frame);

 ret = sws_scale_frame(img_convert_ctx, pVideoFrame, filtered_frame);

 if (ret < 0)
 {
 std::cerr << "Scaling frame for Intel QS Encoder did fail!" << '\n';
 return -1;
 }

 if (av_hwframe_transfer_data(pHardwareFrame, pVideoFrame, 0) < 0)
 {
 std::cerr << "Error transferring frame data to hw frame!" << '\n';
 return -1;
 }

 pHardwareFrame->pts = frameCount++;

 ret = avcodec_send_frame(pVideoCodecContext, pHardwareFrame);

 if (ret < 0)
 {
 std::cerr << "Error sending a frame for encoding" << '\n';
 check_error(ret);
 }

 av_packet_unref(pVideoPacket);

 while (ret >= 0)
 {
 ret = avcodec_receive_packet(pVideoCodecContext, pVideoPacket);

 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 {
 break;
 }

 if (ret < 0)
 {
 std::cerr << "Error during encoding" << '\n';
 return 1;
 }

 av_packet_rescale_ts(pVideoPacket, pVideoCodecContext->time_base, pVideoStream->time_base);

 pVideoPacket->stream_index = pVideoStream->index;

 ret = av_interleaved_write_frame(pVideoFormatContext, pVideoPacket);

 check_error(ret);

 av_packet_unref(pVideoPacket);
 }

 av_packet_unref(pFrameGrabberPacket);
 av_frame_free(&filtered_frame);
 }
 }
 }

 av_write_trailer(pVideoFormatContext);
 av_buffer_unref(&pHardwareDeviceContextRef);
 avcodec_free_context(&pVideoCodecContext);
 avio_closep(&pVideoFormatContext->pb);
 avformat_free_context(pVideoFormatContext);
 av_packet_free(&pVideoPacket);

 avcodec_free_context(&pFrameGrabberCodecContext);
 av_frame_free(&pFrameGrabberFrame);
 av_packet_free(&pFrameGrabberPacket);
 avformat_close_input(&pFrameGrabberFormatContext);

 return 0;
}



The console / log output when running the code:


[in @ 00000288ee494f40] Setting 'video_size' to value '1920x1080'
[in @ 00000288ee494f40] Setting 'pix_fmt' to value 'yuv420p'
[in @ 00000288ee494f40] Setting 'time_base' to value '1/25'
[in @ 00000288ee494f40] Setting 'pixel_aspect' to value '1/1'
[in @ 00000288ee494f40] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[overlay_in @ 00000288ff1013c0] Setting 'video_size' to value '165x165'
[overlay_in @ 00000288ff1013c0] Setting 'pix_fmt' to value 'bgr24'
[overlay_in @ 00000288ff1013c0] Setting 'time_base' to value '1/25'
[overlay_in @ 00000288ff1013c0] Setting 'pixel_aspect' to value '1/1'
[overlay_in @ 00000288ff1013c0] w:165 h:165 pixfmt:bgr24 tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[format @ 00000288ff1015c0] Setting 'pix_fmts' to value 'yuv420p'
[overlay @ 00000288ff101880] Setting 'x' to value 'W-w'
[overlay @ 00000288ff101880] Setting 'y' to value 'H-h'
[overlay @ 00000288ff101880] Setting 'enable' to value 'between(t,0,20)'
[overlay @ 00000288ff101880] Setting 'format' to value 'yuv420'
[auto_scale_0 @ 00000288ff101ec0] w:iw h:ih flags:'' interl:0
[format @ 00000288ff1015c0] auto-inserting filter 'auto_scale_0' between the filter 'overlay_in' and the filter 'format'
[auto_scale_1 @ 00000288ee4a4cc0] w:iw h:ih flags:'' interl:0
[overlay @ 00000288ff101880] auto-inserting filter 'auto_scale_1' between the filter 'format' and the filter 'overlay'
[AVFilterGraph @ 00000288ee495c80] query_formats: 5 queried, 6 merged, 6 already done, 0 delayed
[auto_scale_0 @ 00000288ff101ec0] w:165 h:165 fmt:bgr24 csp:gbr range:pc sar:1/1 -> w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[auto_scale_1 @ 00000288ee4a4cc0] w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 -> w:165 h:165 fmt:yuva420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[overlay @ 00000288ff101880] main w:1920 h:1080 fmt:yuv420p overlay w:165 h:165 fmt:yuva420p
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Selected 1/25 time base
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Sync level 2



I tried changing the order in which the two frames are pushed into the filter graph. Once I did get a frame out of the graph, but with the dimensions of the overlay image rather than those of the grabbed frame from the grabber card. So I suppose I am doing something wrong when building up the filter graph.
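
One thing worth checking (an assumption on my side, not a confirmed fix): overlay is a framesync-based filter, so it pairs its two inputs by timestamp. Both buffer sources therefore need consistent, increasing PTS values in the time_base they were declared with (1/25 here); a frame decoded from the dshow stream usually carries a PTS in the stream's own time base, and the single decoded BMP frame may carry none at all, which can leave the sink returning EAGAIN. A minimal sketch, reusing the variable names from the code above:

// Hedged sketch: give both inputs matching PTS in the 1/25 time base declared
// for the buffer sources, so the overlay's framesync can pair the frames.
pFrameGrabberFrame->pts = frameCount; // simple per-frame counter in 1/25 units
ret = av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame, AV_BUFFERSRC_FLAG_KEEP_REF);
check_error(ret);

overlay_frame->pts = pFrameGrabberFrame->pts; // keep the overlay input in step
ret = av_buffersrc_add_frame_flags(overlay_src_ctx, overlay_frame, AV_BUFFERSRC_FLAG_KEEP_REF);
check_error(ret);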


To verify that the FFmpeg build contains all the necessary modules, I ran the same procedure with the FFmpeg executable on the command line; it worked and the result was as expected.


The command line producing the expected output is the following:


ffmpeg -f dshow -i video="MZ0380 PCI, Analog 01 Capture" -video_size 1920x1080 -framerate 25 -pixel_format yuv420p -loglevel debug -i "C:\temp\overlay.bmp" -filter_complex "[0:v][1:v] overlay=W-w:H-h:enable='between(t,0,20)'" -pix_fmt yuv420p -c:a copy output.mp4
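
Since that command line works, one way to take the manual pad linking out of the equation is to let avfilter_graph_parse_ptr build the middle of the graph from essentially the same filter string. This is only a sketch, assuming the two buffer sources and the buffer sink are created exactly as in init_overlay_filter above and that error handling is added as needed:

// Sketch: describe the graph with the same string the working CLI uses and let
// libavfilter wire it up; the labels are matched against the names set below.
AVFilterInOut* out_main = avfilter_inout_alloc(); // open output pad of the main buffer source
AVFilterInOut* out_ovl = avfilter_inout_alloc();  // open output pad of the overlay buffer source
AVFilterInOut* in_sink = avfilter_inout_alloc();  // open input pad of the buffer sink

out_main->name = av_strdup("main");
out_main->filter_ctx = buffersrc_ctx;
out_main->pad_idx = 0;
out_main->next = out_ovl;

out_ovl->name = av_strdup("ovl");
out_ovl->filter_ctx = overlay_buffersrc_ctx;
out_ovl->pad_idx = 0;
out_ovl->next = nullptr;

in_sink->name = av_strdup("out");
in_sink->filter_ctx = buffersink_ctx;
in_sink->pad_idx = 0;
in_sink->next = nullptr;

const char* graph_desc =
    "[ovl]format=yuv420p[ovl_yuv];"
    "[main][ovl_yuv]overlay=W-w:H-h:enable='between(t,0,20)'[out]";

ret = avfilter_graph_parse_ptr(filter_graph, graph_desc, &in_sink, &out_main, nullptr);
if (ret >= 0)
    ret = avfilter_graph_config(filter_graph, nullptr);

avfilter_inout_free(&in_sink);
avfilter_inout_free(&out_main);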



-
Streaming video from nodejs to an open player
27 August 2013, by Matthew Young
Oddball question from somebody just getting started with HTML5 players and streaming video...
When using YouTube, long videos can be scrubbed towards the end and then played from there. I assume YouTube first pulls down metadata such as the total video start/stop points and a bunch of thumbnails for scrubbing.
Is this possible with an open HTML5 video player (like projekkter)? The reason I'm asking is that I have video data inside a Mongo database that I would like to stream similarly to the YouTube player.
Inside Mongo I have a bunch of smaller h264 files, each in its own document: the actual raw h264 (usually 1000 kB, max 2 seconds), a creation timestamp (long), and potentially a converted format (like mp4) for known clients. The idea is to query over a time range, order by creation time, then pipe the results into a readable stream. There is a nice ffmpeg module to take streams and reformat them if needed. I thought about piping the stream to the client with binaryjs and appending it into the player.
But the source directives in the documentation are usually URLs, plus I need to lock down the start/stop points for the total video being played, plus the thumbnails.
-
fluent-ffmpeg generating incorrect framerate
25 November 2013, by Zak Thompson
I'm having a strange issue converting images to a video. I am using the excellent fluent-ffmpeg module for a node.js server. I have 179 jpg images which I wish to convert to a 30 fps video (which should be about 6 s long). I have successfully done so using the following ffmpeg command:
ffmpeg -r 30 -i frame%03d.jpg -c:v libx264 out.mp4
This outputs the following when inspected by ffmpeg:
ffmpeg -i out.mp4
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'out.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf54.63.104
Duration: 00:00:06.00, start: 0.000000, bitrate: 1631 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuvj420p, 640x480 [SAR 1:1 DAR 4:3], 1627 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc
Metadata:
handler_name : VideoHandler

Now, I am attempting to do the same thing with fluent-ffmpeg:
var proc = new ffmpeg({ source: 'frame%03d.jpg', nolog: true })
.addOptions(['-c:v libx264','-r 30'])
.saveToFile('test.mp4', function(retcode, error){
console.log('file has been converted successfully');
});

Should be exactly the same, right? But here is what I'm getting:
ffmpeg -i test.mp4
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf54.63.104
Duration: 00:00:07.20, start: 0.000000, bitrate: 1556 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuvj420p, 640x480 [SAR 1:1 DAR 4:3], 1553 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc
Metadata:
handler_name : VideoHandler

Now, what's most interesting here is that although both were made from the same set of images and both supposedly have the same frame rate, the one made with fluent-ffmpeg has a duration of 7.20 s, a full 1.20 s longer than the first one. Comparing the two videos, it seems the fluent-ffmpeg one is actually at 25 fps even though it reports 30.
Note that I have tried properly adding the two flags using the methods (.withVideoCodec, .withFps) with the same result; I merely resorted to adding the arguments manually in an attempt to make it exactly the same as my original command.

If anybody here has experience with this module and/or has any suggestions, it would be greatly appreciated!