
Media (91)

Other articles (64)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site and around MediaSPIP in general aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

On other sites (9604)

  • avcodec/dvdsub : Fix warning about incompatible pointer type

    14 February 2020, by Andreas Rheinhardt
    avcodec/dvdsub: Fix warning about incompatible pointer type
    

    Fixes "passing argument 2 of ‘strtoul’ from incompatible pointer
    type [-Wincompatible-pointer-types]" ("expected ‘char ** restrict’ but
    argument is of type ‘const char **’") for GCC and "passing 'const char
    **' to parameter of type 'char **' discards qualifiers in nested pointer
    types [-Wincompatible-pointer-types-discards-qualifiers]" for Clang.

    The cast itself is safe; it is only needed because strtoul itself is not
    const-correct.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
    Reviewed-by: Paul B Mahol <onemda@gmail.com>
    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] libavcodec/dvdsub.c
  • How to force A/V sync using mkvmerge and external timecodes?

    19 April 2017, by b..

    Background

    I’m working on a project where video and audio are algorithmic interpretations of an MKV source file where I use ffmpeg -ss and -t to extract a particular region of audio and video to separate files. I use scene changes in the video in the audio process (i.e. the audio changes on video scene change), so sync is crucial.

    Audio is 48 kHz, using 512-sample blocks.
    Video is 23.976 fps (I also tried 24).

    I store the frame onset of sceneChanges in a file in terms of cumulative blocks:

    blocksPerFrame = (48000 / 512) / 23.976
     sceneOnsetBlock = sceneOnsetFrame * blocksPerFrame

    I use these blocks in my audio code to treat the samples associated with each scene as a group.

    When I combine the audio and video back together (currently using ffmpeg to generate mp4 (v) and mp3 (a) in an MKV container), the audio and video start off in sync but increasingly drift until the audio ends up 35 seconds off. The worst part is that the audio lag is nonlinear! By non-linear, I mean that if I plot the lag against the location of that lag in time, I don't get a line, but what you see in the image below. I can't just shift or scale the audio to fit the video because of this nonlinearity. I cannot figure out the cause of this nonlinearly increasing audio delay; I've double- and triple-checked my math.

    Cumulative lag against time

    Since I know the exact timing of scene changes, I should be able to generate "external timecodes" (from the blocks above) for mkvmerge to perfectly sync the output!

    Subquestions:

    1. Is this the best approach (beyond trying to figure out what went wrong in the first place)? As I'm using my video frames as a reference, if I use the scene changes as timecodes for the audio, will it force the video to match the audio or vice versa? I'm much less concerned with the duration than the sync. The video was much more laborious to produce, so I'd rather lose some sound than some frames.

    2. I'm not clear on what numbers to use in the timecodes file. According to the mkvmerge documentation, "For video this is exactly one frame, for audio this is one packet of the specific audio type." Since I'm using MP3, what is the packet size? Ideally, I could specify a packet size (in the audio encoder?) that matches my block size (512) to keep things consistent and simple. Can I do this with ffmpeg?

    Thank you!

  • Could someone please explain this filter graph?

    9 May, by Wynell

    https://www.ffmpeg.org/doxygen/trunk/filtering_video_8c-example.html

        filter_graph = avfilter_graph_alloc();
        // ...
        const AVFilter *buffersrc  = avfilter_get_by_name("buffer");
        const AVFilter *buffersink = avfilter_get_by_name("buffersink");
        // ...
        /* buffer video source: the decoded frames from the decoder will be inserted here. */
        snprintf(args, sizeof(args),
                "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
                time_base.num, time_base.den,
                dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);

        ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                           args, NULL, filter_graph);
        /* buffer video sink: to terminate the filter chain. */
        ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                           NULL, NULL, filter_graph);
        // ...
        /*
         * Set the endpoints for the filter graph. The filter_graph will
         * be linked to the graph described by filters_descr.
         */

        /*
         * The buffer source output must be connected to the input pad of
         * the first filter described by filters_descr; since the first
         * filter input label is not specified, it is set to "in" by
         * default.
         */
        outputs->name       = av_strdup("in");
        outputs->filter_ctx = buffersrc_ctx;
        outputs->pad_idx    = 0;
        outputs->next       = NULL;

        /*
         * The buffer sink input must be connected to the output pad of
         * the last filter described by filters_descr; since the last
         * filter output label is not specified, it is set to "out" by
         * default.
         */
        inputs->name       = av_strdup("out");
        inputs->filter_ctx = buffersink_ctx;
        inputs->pad_idx    = 0;
        inputs->next       = NULL;

        if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
                                        &inputs, &outputs, NULL)) < 0)

    So I see these parts separately, but I can't combine them in my head.
    Are these in and out filters the same everywhere?
    What filter or graph do these inputs and outputs belong to? If the parsed graph takes its input from the in filter, why is it (the in filter) in the outputs variable (and vice versa)? What even is the role of these variables?
    Could you please explain, step by step, how this code works?
