
FFMPEG MKV -> MP4 Batch Conversion
15 July 2024, by blaziken386
I'm trying to write a program that lets me convert a series of .mkv files with subtitle tracks into .mp4 files with the subs hardcoded.


Right now, the script I use is


ffmpeg -i input.mkv -vf subtitles=input.mkv output.mp4




This works, but it only converts one file at a time, which is kind of a hassle: I have to come back and fiddle with it every few minutes to set up the next one.


I have another script I use for converting .flac files to .mp3 files, which is


@ECHO OFF

FOR %%f IN (*.flac) DO (
echo Converting: %%f
ffmpeg -i "%%f" -ab 320k -map_metadata 0 "%%~nf.mp3"
)

echo Finished

PAUSE




Running that converts every single .flac file into an .mp3 equivalent, with the same filename and everything.


I've tried to combine the above scripts into something like this:


@ECHO OFF

FOR %%f IN (*.mkv) DO (
echo Converting: %%f
ffmpeg -i "%%f" -vf subtitles=%%f "%%~nf.mp4"
)

echo Finished

PAUSE



but every time I do so, it returns errors like "Invalid argument", "Unable to find a suitable output format", "Error initializing filters", or "2 frames left in the queue on closing", or something along those lines. I've swapped out subtitles=%%f for "subtitles-%%f" or subtitles="%%f.mkv" and so on and so forth, and none of those give me what I want either. Sometimes it creates empty .mp4 containers with nothing in them; sometimes it does nothing at all.
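For reference, the usual culprit with this pattern is quoting: the filename is parsed twice, once by cmd.exe and once by ffmpeg's filtergraph parser, so spaces or special characters in %%f break the subtitles filter unless the whole -vf argument is quoted. A sketch of the commonly suggested form (untested against exotic filenames):

```
@ECHO OFF

FOR %%f IN (*.mkv) DO (
    echo Converting: %%f
    REM Double-quote the whole -vf argument for cmd.exe, and single-quote the
    REM filename inside it for ffmpeg's filtergraph parser, so names with
    REM spaces survive both levels of parsing.
    ffmpeg -i "%%f" -vf "subtitles='%%f'" "%%~nf.mp4"
)

echo Finished

PAUSE
```

Filenames that themselves contain single quotes, colons, or brackets may still need the filtergraph's own escaping rules on top of this.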


I don't really understand what exactly is happening under the hood in that flac-to-mp3 code, because I grabbed it from a different Stack Overflow post years ago. All I know is that copying that code and repurposing it for something else doesn't work. Is this just an issue where I've messed up the formatting of the code and not realized it, or is this an "ffmpeg can't actually do that because of a weird technical issue" thing?


I also tried the code listed here, when Stack Overflow flagged that as a possible duplicate, but it gave me similar errors, and I don't really understand why!


Also, if it's relevant, I'm running Windows.


-
Up-to-date example for libavfilter usage [closed]
25 July 2024, by Michael Werner
I am looking for an example of how to use the latest FFmpeg libavfilter functions to add a bitmap overlay image to a video. I can only find examples that are more than ten years old and based on old versions of libavfilter.


What I am trying to do:
On Windows, with the latest libav (full build), a C++ app reads YUV420P frames from a frame grabber card. I want to draw a Windows bitmap BGR24 overlay image (loaded from a file) onto every frame via libavfilter. First I convert the BGR24 overlay image to YUV420P via the format filter. Then I feed the YUV420P frame from the frame grabber and the YUV420P overlay into the overlay filter. Everything seems to be fine, but when I try to get a frame out of the filter graph I always get a "Resource temporarily unavailable" (EAGAIN) error, no matter how many frames I put into the graph.


My initialization code reports no errors or warnings, but when I try to pull the filtered frame out of the graph via av_buffersink_get_frame, I always get an EAGAIN return code.

Here is my current initialization code:


int init_overlay_filter(AVFilterGraph** graph, AVFilterContext** src_ctx, AVFilterContext** overlay_src_ctx,
                        AVFilterContext** sink_ctx)
{
    AVFilterGraph* filter_graph;
    AVFilterContext* buffersrc_ctx;
    AVFilterContext* overlay_buffersrc_ctx;
    AVFilterContext* buffersink_ctx;
    AVFilterContext* overlay_ctx;
    AVFilterContext* format_ctx;
    const AVFilter *buffersrc, *buffersink, *overlay_buffersrc, *overlay_filter, *format_filter;
    int ret;

    // Create the filter graph
    filter_graph = avfilter_graph_alloc();
    if (!filter_graph)
    {
        fprintf(stderr, "Unable to create filter graph.\n");
        return AVERROR(ENOMEM);
    }

    // Create buffer source filter for main video
    buffersrc = avfilter_get_by_name("buffer");
    if (!buffersrc)
    {
        fprintf(stderr, "Unable to find buffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create buffer source filter for overlay image
    overlay_buffersrc = avfilter_get_by_name("buffer");
    if (!overlay_buffersrc)
    {
        fprintf(stderr, "Unable to find buffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create buffer sink filter
    buffersink = avfilter_get_by_name("buffersink");
    if (!buffersink)
    {
        fprintf(stderr, "Unable to find buffersink filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create overlay filter
    overlay_filter = avfilter_get_by_name("overlay");
    if (!overlay_filter)
    {
        fprintf(stderr, "Unable to find overlay filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create format filter
    format_filter = avfilter_get_by_name("format");
    if (!format_filter)
    {
        fprintf(stderr, "Unable to find format filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Initialize the main video buffer source
    char args[512];
    snprintf(args, sizeof(args),
             "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1");
    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer source filter for main video.\n");
        return ret;
    }

    // Initialize the overlay buffer source
    snprintf(args, sizeof(args),
             "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1");
    ret = avfilter_graph_create_filter(&overlay_buffersrc_ctx, overlay_buffersrc, "overlay_in", args, NULL,
                                       filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer source filter for overlay.\n");
        return ret;
    }

    // Initialize the format filter to convert overlay image to yuv420p
    snprintf(args, sizeof(args), "pix_fmts=yuv420p");
    ret = avfilter_graph_create_filter(&format_ctx, format_filter, "format", args, NULL, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create format filter.\n");
        return ret;
    }

    // Initialize the buffer sink
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer sink filter.\n");
        return ret;
    }

    // Initialize the overlay filter
    ret = avfilter_graph_create_filter(&overlay_ctx, overlay_filter, "overlay",
                                       "W-w:H-h:enable='between(t,0,20)':format=yuv420", NULL, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create overlay filter.\n");
        return ret;
    }

    // Connect the filters
    ret = avfilter_link(overlay_buffersrc_ctx, 0, format_ctx, 0);
    if (ret >= 0)
    {
        ret = avfilter_link(buffersrc_ctx, 0, overlay_ctx, 0);
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    if (ret >= 0)
    {
        ret = avfilter_link(format_ctx, 0, overlay_ctx, 1);
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    if (ret >= 0)
    {
        if ((ret = avfilter_link(overlay_ctx, 0, buffersink_ctx, 0)) < 0)
        {
            fprintf(stderr, "Unable to link filter graph.\n");
            return ret;
        }
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    // Configure the filter graph
    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    *graph = filter_graph;
    *src_ctx = buffersrc_ctx;
    *overlay_src_ctx = overlay_buffersrc_ctx;
    *sink_ctx = buffersink_ctx;

    return 0;
}
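For context on the EAGAIN itself: av_buffersink_get_frame returns AVERROR(EAGAIN) whenever the sink has nothing ready, and the overlay filter only produces output once both of its inputs have received frames with comparable timestamps. A minimal feed-and-drain sketch using the contexts returned by init_overlay_filter; this is illustrative, not a verified fix, and the function name and error handling are hypothetical:

```c
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/frame.h>

// Push one grabber frame plus one overlay frame, then try to drain the sink.
// Both input frames should carry pts values in the 1/25 time_base declared
// at graph setup, or the overlay filter will wait for synchronization.
int filter_one_frame(AVFilterContext* src_ctx, AVFilterContext* overlay_src_ctx,
                     AVFilterContext* sink_ctx,
                     AVFrame* video_frame, AVFrame* overlay_frame, AVFrame* out)
{
    int ret;

    // Feed the main video input.
    ret = av_buffersrc_add_frame_flags(src_ctx, video_frame, AV_BUFFERSRC_FLAG_KEEP_REF);
    if (ret < 0)
        return ret;

    // Feed the overlay input too: until BOTH inputs of the overlay filter
    // have data, the sink keeps returning EAGAIN.
    ret = av_buffersrc_add_frame_flags(overlay_src_ctx, overlay_frame, AV_BUFFERSRC_FLAG_KEEP_REF);
    if (ret < 0)
        return ret;

    // Drain: EAGAIN here means "feed more input", not a fatal error.
    ret = av_buffersink_get_frame(sink_ctx, out);
    if (ret == AVERROR(EAGAIN))
        return 0; // no filtered frame available yet
    return ret;   // 0 on success, or a real error code
}
```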



-
Construct fictitious P-frames from just I-frames [closed]
25 July 2024, by nilgirian
Some context: I recently saw this video, https://youtu.be/zXTpASSd9xE?si=5alGvZ_e13w0Ahmb, a continuous zoom into a fractal.


I've been thinking a whole lot about how they created this video 9 years ago. The problem is that these frames were mathematically intensive to calculate back then, and they are still fairly hard to compute today.


He states in the video it took him 33 hours to generate 1 keyframe.


I was wondering how I would replicate that work. I know that by brute force I could generate a series of image files (essentially each image would be an I-frame) and then ask ffmpeg to compress them into an mp4 (where it would convert most of those images into P-frames). I know that. But if I did it that way, I calculated it would take me 6.5 years to render that 9-minute video (at 30 fps, on hardware from 9 years ago).


So I imagine he only generated I-frames to cut down on time, and then somehow created fictitious P-frames in between. Given that adjacent frames are similar, this seems like it should be doable, since you're just zooming in. If he generated I-frames only at every 1 second of video (at 30 fps), that work could be cut down to just 82 days.


So if I only want to generate the images that will be used as I-frames, could ffmpeg or some other program automatically make a best guess and generate the fictitious P-frames for me?
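What the question describes is essentially motion-compensated frame interpolation, which ffmpeg exposes as the minterpolate filter. A sketch of how it might be applied, assuming the rendered keyframes are numbered PNGs (the filenames and rates here are hypothetical, and interpolated frames are estimates rather than true fractal renders):

```
# Treat the generated keyframes as a 1 fps input sequence, then let
# minterpolate synthesize the in-between frames up to 30 fps using
# motion-compensated interpolation (mi_mode=mci).
ffmpeg -framerate 1 -i keyframe_%05d.png \
       -vf "minterpolate=fps=30:mi_mode=mci" \
       -c:v libx264 -pix_fmt yuv420p zoom.mp4
```

How well this works depends heavily on the content; for a smooth continuous zoom the motion estimation has a reasonable chance, but artifacts around fine fractal detail are likely.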