
Media (91)
-
Les Miserables
9 December 2019
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit il mes mails ?
4 February 2014
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013
Updated: October 2013
Language: French
Type: Image
Other articles (97)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it is activated, a preconfiguration is automatically put in place by MediaSPIP init, so the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (7101)
-
Combining JavaCV and OpenCV
13 June 2014, by mister-viper
I have the following problem:
I have an Android application which uses native OpenCV code. Initially, the frames edited by OpenCV came from the camera; they were processed and then drawn on the display. However, my requirements have now changed: the frames to be edited now come from a video file stored on the SD card, must be processed by the OpenCV code, and then stored in a new video file.
After a lot of reading, I realized that Android has nothing built in for reading a video file frame by frame and processing the frames along the way. On a desktop, OpenCV offers the VideoCapture class, but this does not work on Android, as the Android build of OpenCV does not ship with FFmpeg.
Reading further, I found that JavaCV comes with an FFmpegFrameGrabber and an FFmpegFrameRecorder. So I implemented everything, which now allows me to grab single frames from a video, obtain an IplImage frame, and store this frame in a new video.
Now the problem:
Between grabbing and storing, the IplImage frame must be processed using the original OpenCV code, as it is not feasible to port the complete code to JavaCV. So as a first step I wrote a small test JNI function which takes the address of a Mat object and draws a small circle on it.
extern "C" {
JNIEXPORT void JNICALL Java_de_vion_postprocessing_step2_EyeTracking_editFrame(
JNIEnv*, jobject, jlong thiz, jlong addrRgba) {
//Convert the mat addresses into the objects
Mat& rgbFrame = *(Mat*) addrRgba;
Point2i scaledSmoothPoint(100,100);
circle(rgbFrame, scaledSmoothPoint, 20, YELLOW, -1);
}As I read that IplImage extends CvArr I just call the function within in my code as follows :
captured_frame = grabber.grab();
if (captured_frame == null) {
    // no new frames
    break;
}
editFrame(captured_frame.address());

However, I now get the following error:
06-12 18:58:23.135: E/cv::error()(6498): OpenCV Error: Assertion failed (cn <= 4) in
void cv::scalarToRawData(const Scalar&, void*, int, int), file
/home/reports/ci/slave_desktop/50-SDK/opencv/modules/core/src/matrix.cpp, line 845
06-12 18:58:23.135: A/libc(6498): Fatal signal 6 (SIGABRT) at 0x00001962 (code=-6),
thread 6526 (AsyncTask #1)

Finally, my question:
How can I process the IplImage frame using native OpenCV and then store this IplImage frame in the video recorder? I am also open to other ideas which do not necessarily require JavaCV, as long as I do not have to write the FrameGrabber and FrameRecorder myself.
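For reference, here is a minimal sketch of one way the native side could treat the incoming address. It assumes the jlong passed from Java really is the address of the IplImage returned by the grabber; the parameter name addrIplImage and the colour value are placeholders, and cv::cvarrToMat() is used to build a Mat header over the IplImage data without copying:

#include <jni.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

extern "C" {
JNIEXPORT void JNICALL Java_de_vion_postprocessing_step2_EyeTracking_editFrame(
        JNIEnv*, jobject, jlong thiz, jlong addrIplImage) {
    // Assumption: addrIplImage is the native address of the IplImage
    // obtained from the grabber (passed via captured_frame.address()).
    IplImage* ipl = (IplImage*) addrIplImage;
    // cvarrToMat() wraps the IplImage data in a Mat header without copying,
    // so drawing into the Mat modifies the grabbed frame in place.
    Mat rgbFrame = cvarrToMat(ipl);
    circle(rgbFrame, Point2i(100, 100), 20, Scalar(0, 255, 255), -1); // yellow, assuming BGR order
}
}

Because the drawing happens in place, the same captured_frame could then still be handed on to the recorder afterwards.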
Best regards,
André
-
Problems using libavfilter for adding overlay to frames
12 November 2024, by Michael Werner
On Windows 11, with the latest libav (full build), a C/C++ app reads YUV420P frames from a frame grabber card.


I want to draw a bitmap (BGR24) overlay image, loaded from a file, on every frame via libavfilter. First I convert the BGR24 overlay image to YUV420P via the format filter. Then I feed the YUV420P frame from the frame grabber and the YUV420P overlay into the overlay filter.


Everything seems to be fine, but when I try to get the frame out of the filter graph I always get a "Resource temporarily unavailable" (EAGAIN) return code, no matter how many frames I put into the graph.


The frames from the frame grabber card are fine; I can encode them or write them to a .yuv file. The overlay frame looks fine too.


My current initialization code is shown below. It does not report any errors or warnings, but when I try to get the filtered frame out of the graph via av_buffersink_get_frame I always get an EAGAIN return code.

Here is my current initialization code:


int init_overlay_filter(AVFilterGraph** graph, AVFilterContext** src_ctx, AVFilterContext** overlay_src_ctx,
 AVFilterContext** sink_ctx)
{
 AVFilterGraph* filter_graph;
 AVFilterContext* buffersrc_ctx;
 AVFilterContext* overlay_buffersrc_ctx;
 AVFilterContext* buffersink_ctx;
 AVFilterContext* overlay_ctx;
 AVFilterContext* format_ctx;
 const AVFilter *buffersrc, *buffersink, *overlay_buffersrc, *overlay_filter, *format_filter;
 int ret;

 // Create the filter graph
 filter_graph = avfilter_graph_alloc();
 if (!filter_graph)
 {
 fprintf(stderr, "Unable to create filter graph.\n");
 return AVERROR(ENOMEM);
 }

 // Create buffer source filter for main video
 buffersrc = avfilter_get_by_name("buffer");
 if (!buffersrc)
 {
 fprintf(stderr, "Unable to find buffer filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create buffer source filter for overlay image
 overlay_buffersrc = avfilter_get_by_name("buffer");
 if (!overlay_buffersrc)
 {
 fprintf(stderr, "Unable to find buffer filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create buffer sink filter
 buffersink = avfilter_get_by_name("buffersink");
 if (!buffersink)
 {
 fprintf(stderr, "Unable to find buffersink filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create overlay filter
 overlay_filter = avfilter_get_by_name("overlay");
 if (!overlay_filter)
 {
 fprintf(stderr, "Unable to find overlay filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create format filter
 format_filter = avfilter_get_by_name("format");
 if (!format_filter) 
 {
 fprintf(stderr, "Unable to find format filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Initialize the main video buffer source
 char args[512];
 snprintf(args, sizeof(args),
 "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1");
 ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer source filter for main video.\n");
 return ret;
 }

 // Initialize the overlay buffer source
 snprintf(args, sizeof(args),
 "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1");
 ret = avfilter_graph_create_filter(&overlay_buffersrc_ctx, overlay_buffersrc, "overlay_in", args, NULL,
 filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer source filter for overlay.\n");
 return ret;
 }

 // Initialize the format filter to convert overlay image to yuv420p
 snprintf(args, sizeof(args), "pix_fmts=yuv420p");
 ret = avfilter_graph_create_filter(&format_ctx, format_filter, "format", args, NULL, filter_graph);

 if (ret < 0) 
 {
 fprintf(stderr, "Unable to create format filter.\n");
 return ret;
 }

 // Initialize the buffer sink
 ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer sink filter.\n");
 return ret;
 }

 // Initialize the overlay filter
 ret = avfilter_graph_create_filter(&overlay_ctx, overlay_filter, "overlay", "W-w:H-h:enable='between(t,0,20)':format=yuv420", NULL, filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create overlay filter.\n");
 return ret;
 }

 // Connect the filters
 ret = avfilter_link(overlay_buffersrc_ctx, 0, format_ctx, 0);

 if (ret >= 0)
 {
 ret = avfilter_link(buffersrc_ctx, 0, overlay_ctx, 0);
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }


 if (ret >= 0) 
 {
 ret = avfilter_link(format_ctx, 0, overlay_ctx, 1);
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 if (ret >= 0) 
 {
 if ((ret = avfilter_link(overlay_ctx, 0, buffersink_ctx, 0)) < 0)
 {
 fprintf(stderr, "Unable to link filter graph.\n");
 return ret;
 }
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 // Configure the filter graph
 if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 *graph = filter_graph;
 *src_ctx = buffersrc_ctx;
 *overlay_src_ctx = overlay_buffersrc_ctx;
 *sink_ctx = buffersink_ctx;

 return 0;
}



Feeding the filter graph is done this way:


av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame, AV_BUFFERSRC_FLAG_KEEP_REF)
av_buffersink_get_frame(buffersink_ctx, filtered_frame)



av_buffersink_get_frame always returns EAGAIN, no matter how many frames I feed into the graph. The frames themselves (from the frame grabber and the overlay frame) look fine.
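For comparison, here is a minimal sketch of a feed-and-drain cycle under one common assumption: the overlay filter synchronises its two inputs, so the sink cannot produce output until the second (overlay) input has also received data; since there is only a single overlay image, it can be sent once and that input then closed via av_buffersrc_close() so the framesync stops waiting for further overlay frames. The helper name and pOverlayFrame are placeholders, not code from this post:

#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/frame.h>
#include <libavutil/error.h>

/* Sketch: push one grabbed frame (plus, once, the overlay image), then drain the sink. */
static int feed_and_drain(AVFilterContext* buffersrc_ctx,
                          AVFilterContext* overlay_buffersrc_ctx,
                          AVFilterContext* buffersink_ctx,
                          AVFrame* pFrameGrabberFrame,
                          AVFrame* pOverlayFrame, /* placeholder: the prepared overlay frame */
                          int* overlay_sent)
{
    int ret = av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame,
                                           AV_BUFFERSRC_FLAG_KEEP_REF);
    if (ret < 0)
    {
        return ret;
    }

    if (!*overlay_sent)
    {
        /* The overlay filter waits for frames on both of its inputs. Send the
           single overlay image once, then close that input (EOF) so the
           framesync does not keep waiting for further overlay frames. */
        ret = av_buffersrc_add_frame_flags(overlay_buffersrc_ctx, pOverlayFrame,
                                           AV_BUFFERSRC_FLAG_KEEP_REF);
        if (ret < 0)
        {
            return ret;
        }
        av_buffersrc_close(overlay_buffersrc_ctx, pOverlayFrame->pts, 0);
        *overlay_sent = 1;
    }

    AVFrame* filtered_frame = av_frame_alloc();
    if (!filtered_frame)
    {
        return AVERROR(ENOMEM);
    }

    /* EAGAIN from the sink only means "feed more input"; it is not a failure. */
    while ((ret = av_buffersink_get_frame(buffersink_ctx, filtered_frame)) >= 0)
    {
        /* ... encode / display / store the filtered frame ... */
        av_frame_unref(filtered_frame);
    }
    av_frame_free(&filtered_frame);

    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}

Whether a never-fed overlay input is really the cause here cannot be seen from the log alone, but it is a common reason for a buffersink that only ever returns EAGAIN when a two-input filter such as overlay is in the graph.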

I set the libav logging level to maximum, but I do not see any warnings, errors, or otherwise helpful related information in the log.


Here is the log output related to the filter configuration:


[in @ 00000288ee494f40] Setting 'video_size' to value '1920x1080'
[in @ 00000288ee494f40] Setting 'pix_fmt' to value 'yuv420p'
[in @ 00000288ee494f40] Setting 'time_base' to value '1/25'
[in @ 00000288ee494f40] Setting 'pixel_aspect' to value '1/1'
[in @ 00000288ee494f40] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[overlay_in @ 00000288ff1013c0] Setting 'video_size' to value '165x165'
[overlay_in @ 00000288ff1013c0] Setting 'pix_fmt' to value 'bgr24'
[overlay_in @ 00000288ff1013c0] Setting 'time_base' to value '1/25'
[overlay_in @ 00000288ff1013c0] Setting 'pixel_aspect' to value '1/1'
[overlay_in @ 00000288ff1013c0] w:165 h:165 pixfmt:bgr24 tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[format @ 00000288ff1015c0] Setting 'pix_fmts' to value 'yuv420p'
[overlay @ 00000288ff101880] Setting 'x' to value 'W-w'
[overlay @ 00000288ff101880] Setting 'y' to value 'H-h'
[overlay @ 00000288ff101880] Setting 'enable' to value 'between(t,0,20)'
[overlay @ 00000288ff101880] Setting 'format' to value 'yuv420'
[auto_scale_0 @ 00000288ff101ec0] w:iw h:ih flags:'' interl:0
[format @ 00000288ff1015c0] auto-inserting filter 'auto_scale_0' between the filter 'overlay_in' and the filter 'format'
[auto_scale_1 @ 00000288ee4a4cc0] w:iw h:ih flags:'' interl:0
[overlay @ 00000288ff101880] auto-inserting filter 'auto_scale_1' between the filter 'format' and the filter 'overlay'
[AVFilterGraph @ 00000288ee495c80] query_formats: 5 queried, 6 merged, 6 already done, 0 delayed
[auto_scale_0 @ 00000288ff101ec0] w:165 h:165 fmt:bgr24 csp:gbr range:pc sar:1/1 -> w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[auto_scale_1 @ 00000288ee4a4cc0] w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 -> w:165 h:165 fmt:yuva420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[overlay @ 00000288ff101880] main w:1920 h:1080 fmt:yuv420p overlay w:165 h:165 fmt:yuva420p
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Selected 1/25 time base
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Sync level 2



-
Need help using libavfilter for adding overlay to frames [closed]
30 July 2024, by Michael Werner
Hello gentlemen and ladies,


I am working with libavfilter and I am going crazy.


On Windows 11, with the latest libav (full build), a C/C++ app reads YUV420P frames from a frame grabber card. I want to draw a bitmap (BGR24) overlay image, loaded from a file, on every frame via libavfilter (...)