
Media (91)

Other articles (100)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a given "media" article;

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8617)

  • Construct fictitious P-frames from just I-frames [closed]

    25 July 2024, by nilgirian

    Some context: I recently saw this video, https://youtu.be/zXTpASSd9xE?si=5alGvZ_e13w0Ahmb, a continuous zoom into a fractal.

    I've been thinking a lot about how they created this video 9 years ago. The problem is that these frames were mathematically intensive to calculate back then, and they are still fairly hard to compute today.

    He states in the video that it took him 33 hours to generate one keyframe.

    I was wondering how I would replicate that work. I know that by brute force I can generate many image files (essentially each image would be an I-frame) and then ask ffmpeg to compress them into an mp4 (where it will convert most of those images into P-frames). But if I did it that way, I calculated it would take me 6.5 years to render that 9-minute video (at 30fps, on hardware from 9 years ago).

    So I imagine he only generated I-frames to cut down on time, and then somehow created fictitious P-frames in between. Given that consecutive frames are similar, this seems like it should be doable, since you're just zooming in. If he generated an I-frame only once per second (of 30fps output), the work could be cut down to just 82 days.
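
    The arithmetic above can be sanity-checked (assuming a 9-minute clip at 30 fps and the quoted 6.5-year brute-force estimate):

    ```python
    # Sanity-check the time estimates in the question.
    fps = 30
    duration_s = 9 * 60               # 9-minute video
    total_frames = fps * duration_s   # every frame rendered by brute force

    brute_force_days = 6.5 * 365      # quoted brute-force estimate, in days
    per_frame_days = brute_force_days / total_frames

    # Render only one I-frame per second of output instead of every frame.
    iframes_only = duration_s
    iframe_days = per_frame_days * iframes_only

    print(total_frames)        # 16200
    print(round(iframe_days))  # 79
    ```

    That lands at roughly 79 days, in the same ballpark as the 82 days quoted in the question (the small gap presumably comes from rounding in the original estimate).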

    So if I only generate the images that will be used as I-frames, could ffmpeg or some other program automatically make a best guess and generate the fictitious P-frames in between for me?
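
    One off-the-shelf option (not mentioned in the post) is ffmpeg's minterpolate filter, which synthesizes intermediate frames by motion estimation; the filenames and rates here are illustrative:

    ```shell
    # Assume frame_0001.png, frame_0002.png, ... are the expensive keyframes,
    # rendered one per second of output video.  minterpolate then
    # motion-compensates between them to reach 30 fps.
    ffmpeg -framerate 1 -i frame_%04d.png \
           -vf "minterpolate=fps=30:mi_mode=mci:mc_mode=aobmc:me_mode=bidir" \
           -c:v libx264 -pix_fmt yuv420p zoom.mp4
    ```

    Note that minterpolate invents content by motion compensation between neighbouring frames; for a pure zoom this can look smooth, but it cannot reproduce true fractal detail that only appears at the deeper zoom level.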

  • lavu/riscv : grok B as an extension

    22 July 2024, by Rémi Denis-Courmont
    lavu/riscv : grok B as an extension

    The RISC-V B bit manipulation extension was ratified only two months ago.
    But it is strictly equivalent to the union of the zba, zbb and zbs
    extensions which were defined almost 3 years earlier. Rather than require
    new assembler, we can just match the extension name manually and translate
    it into its constituent parts.

    • [DH] libavutil/riscv/asm.S
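
    The trick the commit describes can be sketched roughly as follows (hypothetical names; the real code lives in libavutil/riscv/asm.S and differs): when the "b" extension name is seen, expand it into its constituent zba, zbb and zbs feature bits instead of requiring an assembler that knows "b".

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical feature bits, loosely mirroring how runtime CPU-feature
     * flags are usually kept; the real names in libavutil differ. */
    #define RV_ZBA (1 << 0)
    #define RV_ZBB (1 << 1)
    #define RV_ZBS (1 << 2)

    /* Map one extension name to feature bits.  The point of the commit:
     * "b" is strictly the union of zba, zbb and zbs, so it can be matched
     * manually and translated into its constituent parts. */
    static unsigned rv_ext_flags(const char *name)
    {
        if (strcmp(name, "b") == 0)
            return RV_ZBA | RV_ZBB | RV_ZBS;  /* expand into constituents */
        if (strcmp(name, "zba") == 0) return RV_ZBA;
        if (strcmp(name, "zbb") == 0) return RV_ZBB;
        if (strcmp(name, "zbs") == 0) return RV_ZBS;
        return 0;
    }

    int main(void)
    {
        assert(rv_ext_flags("b") == (RV_ZBA | RV_ZBB | RV_ZBS));
        printf("b -> 0x%x\n", rv_ext_flags("b"));  /* prints "b -> 0x7" */
        return 0;
    }
    ```
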
  • Up-to-date example for libavfilter usage [closed]

    25 July 2024, by Michael Werner

    I am looking for an example of how to use the latest FFmpeg libavfilter functions to add a bitmap overlay image to a video. I can only find examples that are more than ten years old and based on old versions of libavfilter.

    What I am trying to do:
    On Windows with the latest libav (full build), a C++ app reads YUV420P frames from a frame grabber card. I want to draw a Windows bitmap BGR24 overlay image (from a file) onto every frame via libavfilter. First I convert the BGR24 overlay image to YUV420P via the format filter. Then I feed the YUV420P frame from the frame grabber and the YUV420P overlay into the overlay filter. Everything seems to be fine, but when I try to get the frame out of the filter graph I always get a "Resource temporarily unavailable" (EAGAIN) error, no matter how many frames I put into the graph.

    My current initialization code does not report any errors or warnings, but when I try to get the filtered frame out of the graph via av_buffersink_get_frame I always get an EAGAIN return code.

    Here is my current initialization code:

int init_overlay_filter(AVFilterGraph** graph, AVFilterContext** src_ctx, AVFilterContext** overlay_src_ctx,
                        AVFilterContext** sink_ctx)
{
    AVFilterGraph* filter_graph;
    AVFilterContext* buffersrc_ctx;
    AVFilterContext* overlay_buffersrc_ctx;
    AVFilterContext* buffersink_ctx;
    AVFilterContext* overlay_ctx;
    AVFilterContext* format_ctx;
    const AVFilter *buffersrc, *buffersink, *overlay_buffersrc, *overlay_filter, *format_filter;
    int ret;

    // Create the filter graph
    filter_graph = avfilter_graph_alloc();
    if (!filter_graph)
    {
        fprintf(stderr, "Unable to create filter graph.\n");
        return AVERROR(ENOMEM);
    }

    // Create buffer source filter for main video
    buffersrc = avfilter_get_by_name("buffer");
    if (!buffersrc)
    {
        fprintf(stderr, "Unable to find buffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create buffer source filter for overlay image
    overlay_buffersrc = avfilter_get_by_name("buffer");
    if (!overlay_buffersrc)
    {
        fprintf(stderr, "Unable to find buffer filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create buffer sink filter
    buffersink = avfilter_get_by_name("buffersink");
    if (!buffersink)
    {
        fprintf(stderr, "Unable to find buffersink filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create overlay filter
    overlay_filter = avfilter_get_by_name("overlay");
    if (!overlay_filter)
    {
        fprintf(stderr, "Unable to find overlay filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Create format filter
    format_filter = avfilter_get_by_name("format");
    if (!format_filter) 
    {
        fprintf(stderr, "Unable to find format filter.\n");
        return AVERROR_FILTER_NOT_FOUND;
    }

    // Initialize the main video buffer source
    char args[512];
    snprintf(args, sizeof(args),
             "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1");
    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer source filter for main video.\n");
        return ret;
    }

    // Initialize the overlay buffer source
    snprintf(args, sizeof(args),
             "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1");
    ret = avfilter_graph_create_filter(&overlay_buffersrc_ctx, overlay_buffersrc, "overlay_in", args, NULL,
                                       filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer source filter for overlay.\n");
        return ret;
    }

    // Initialize the format filter to convert overlay image to yuv420p
    snprintf(args, sizeof(args), "pix_fmts=yuv420p");
    ret = avfilter_graph_create_filter(&format_ctx, format_filter, "format", args, NULL, filter_graph);

    if (ret < 0) 
    {
        fprintf(stderr, "Unable to create format filter.\n");
        return ret;
    }

    // Initialize the buffer sink
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create buffer sink filter.\n");
        return ret;
    }

    // Initialize the overlay filter
    ret = avfilter_graph_create_filter(&overlay_ctx, overlay_filter, "overlay", "W-w:H-h:enable='between(t,0,20)':format=yuv420", NULL, filter_graph);
    if (ret < 0)
    {
        fprintf(stderr, "Unable to create overlay filter.\n");
        return ret;
    }

    // Connect the filters
    ret = avfilter_link(overlay_buffersrc_ctx, 0, format_ctx, 0);

    if (ret >= 0)
    {
        ret = avfilter_link(buffersrc_ctx, 0, overlay_ctx, 0);
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }


    if (ret >= 0) 
    {
        ret = avfilter_link(format_ctx, 0, overlay_ctx, 1);
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    if (ret >= 0) 
    {
        if ((ret = avfilter_link(overlay_ctx, 0, buffersink_ctx, 0)) < 0)
        {
            fprintf(stderr, "Unable to link filter graph.\n");
            return ret;
        }
    }
    else
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    // Configure the filter graph
    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
    {
        fprintf(stderr, "Unable to configure filter graph.\n");
        return ret;
    }

    *graph = filter_graph;
    *src_ctx = buffersrc_ctx;
    *overlay_src_ctx = overlay_buffersrc_ctx;
    *sink_ctx = buffersink_ctx;

    return 0;
}