Advanced search

Media (1)

Keyword: - Tags -/blender

Other articles (52)

  • What is an editorial

    21 June 2013

    Write your point of view in an article. It will be filed in a section set aside for this purpose.
    An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. A single editorial is featured on the home page; to read previous ones, browse the dedicated section.
    You can customise the form used to create an editorial.
    Editorial creation form: in the case of an editorial-type document, the (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Possibility of deployment as a farm

    12 April 2011

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by many different sites.
    This makes it possible, for example: to share set-up costs between several projects or individuals; to deploy a large number of unique sites quickly; and to avoid having to dump all the creations into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)

On other sites (7274)

  • How can I get the number of frames in a GIF file using ffmpeg's AVFormatContext struct?

    17 December 2015, by user1400047

    I searched through all the subnodes of AVFormatContext, but only found the fps; there is no frame count or duration info at all. Who can help me?
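
    One way to approach this with libavformat (a minimal sketch assuming a recent FFmpeg; count_gif_frames and the error handling are illustrative, not from the original post): check AVStream.nb_frames first, but the GIF demuxer typically leaves it at 0, so the reliable fallback is to demux the file and count video packets.

        /* Illustrative sketch: count the frames of a GIF with libavformat.
         * Older FFmpeg releases would also need av_register_all() first. */
        #include <stdio.h>
        #include <libavformat/avformat.h>

        static int64_t count_gif_frames(const char *path)  /* hypothetical helper */
        {
            AVFormatContext *fmt = NULL;
            int64_t frames = 0;

            if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
                return -1;
            if (avformat_find_stream_info(fmt, NULL) < 0) {
                avformat_close_input(&fmt);
                return -1;
            }

            int video = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
            if (video < 0) {
                avformat_close_input(&fmt);
                return -1;
            }

            /* Some demuxers fill this in directly; for GIF it is usually 0.
             * (The container duration, when known, is fmt->duration,
             * expressed in AV_TIME_BASE units.) */
            frames = fmt->streams[video]->nb_frames;

            if (frames == 0) {
                /* Fallback: read every packet and count the video ones
                 * (the GIF demuxer emits one packet per frame). */
                AVPacket *pkt = av_packet_alloc();
                while (pkt && av_read_frame(fmt, pkt) >= 0) {
                    if (pkt->stream_index == video)
                        frames++;
                    av_packet_unref(pkt);
                }
                av_packet_free(&pkt);
            }

            avformat_close_input(&fmt);
            return frames;
        }

        int main(int argc, char **argv)
        {
            if (argc < 2) {
                fprintf(stderr, "usage: %s file.gif\n", argv[0]);
                return 1;
            }
            printf("%lld frames\n", (long long)count_gif_frames(argv[1]));
            return 0;
        }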

  • vc-1: Optimise parser (with special attention to ARM)

    23 April 2014, by Ben Avison
    

    The previous implementation of the parser made four passes over each input
    buffer (reduced to two if the container format already guaranteed the input
    buffer corresponded to frames, such as with MKV). But these buffers are
    often 200K in size, certainly enough to flush the data out of L1 cache, and
    for many CPUs, all the way out to main memory. The passes were:

    1) locate frame boundaries (not needed for MKV etc)
    2) copy the data into a contiguous block (not needed for MKV etc)
    3) locate the start codes within each frame
    4) unescape the data between start codes

    After this, the unescaped data was parsed to extract certain header fields,
    but because the unescape operation was so large, this was usually also
    effectively operating on uncached memory. Most of the unescaped data was
    simply thrown away and never processed further. Only step 2 - because it
    used memcpy - was using prefetch, making things even worse.
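
    For reference, the "unescape" in step 4 amounts to stripping VC-1's emulation-prevention bytes from the whole buffer. A minimal sketch, assuming the usual 00 00 03 escaping convention (illustrative only, not the FFmpeg implementation):

        #include <stdint.h>
        #include <stddef.h>

        /* Remove VC-1 escape bytes: a 0x03 that follows two zero bytes and
         * precedes a byte smaller than 4 was inserted by the encoder only to
         * avoid false start codes, and must be dropped before parsing. */
        size_t unescape_vc1(const uint8_t *src, size_t size, uint8_t *dst)
        {
            size_t di = 0;

            for (size_t i = 0; i < size; i++) {
                if (i >= 2 && i + 1 < size &&
                    src[i] == 0x03 && !src[i - 1] && !src[i - 2] && src[i + 1] < 4)
                    continue;              /* drop the escape byte itself */
                dst[di++] = src[i];
            }
            return di;                     /* number of bytes written to dst */
        }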

    This patch reorganises these steps so that, aside from the copying, the
    operations are performed in parallel, maximising cache utilisation. No more
    than the worst-case number of bytes needed for header parsing is unescaped.
    Most of the data is, in practice, only read in order to search for a start
    code, for which optimised implementations already existed in the H264 codec
    (notably the ARM version uses prefetch, so we end up doing both remaining
    passes at maximum speed). For MKV files, we know when we’ve found the last
    start code of interest in a given frame, so we are able to avoid doing even
    that one remaining pass for most of the buffer.
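
    A rough sketch of that reorganised flow (illustrative only: parse_frame and MAX_HDR_BYTES are made-up names, and the real code lives in libavcodec/vc1_parser.c) is to locate each start code and unescape no more than the worst-case number of bytes that header parsing can need, reusing the helper sketched above:

        #include <stdint.h>
        #include <stddef.h>

        #define MAX_HDR_BYTES 64   /* assumed worst case needed for header fields */

        size_t unescape_vc1(const uint8_t *src, size_t size, uint8_t *dst);

        void parse_frame(const uint8_t *buf, size_t size)
        {
            uint8_t hdr[MAX_HDR_BYTES];

            for (size_t i = 0; i + 2 < size; i++) {
                /* Step 3: look for a 00 00 01 start-code prefix. */
                if (buf[i] || buf[i + 1] || buf[i + 2] != 0x01)
                    continue;

                /* Step 4, bounded: unescape only as much payload as header
                 * parsing could possibly need, not the whole unit. */
                size_t avail = size - (i + 3);
                size_t want  = avail < MAX_HDR_BYTES ? avail : MAX_HDR_BYTES;
                size_t got   = unescape_vc1(buf + i + 3, want, hdr);

                /* hdr[0..got) now holds enough unescaped data to read this
                 * unit's header fields; field extraction itself is omitted. */
                (void)got;
                i += 2;   /* resume the scan just after the start-code prefix */
            }
        }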

    In some use-cases (such as the Raspberry Pi) video decode is handled by the
    GPU, but the entire elementary stream is still fed through the parser to
    pick out certain elements of the header which are necessary to manage the
    decode process. As you might expect, in these cases, the performance of the
    parser is significant.

    To measure parser performance, I used the same VC-1 elementary stream in
    either an MPEG-2 transport stream or an MKV file, and fed it through ffmpeg
    with -c:v copy -c:a copy -f null. These are the gperftools counts for
    those streams, both filtered to only include vc1_parse() and its callees,
    and unfiltered (to include the whole binary). Lower numbers are better:

                       Before          After
    File  Filtered   Mean  StdDev    Mean  StdDev   Confidence    Change
    M2TS  No        861.7     8.2   650.5     8.1       100.0%    +32.5%
    MKV   No        868.9     7.4   731.7     9.0       100.0%    +18.8%
    M2TS  Yes       250.0    11.2    27.2     3.4       100.0%   +817.9%
    MKV   Yes       149.0    12.8     1.7     0.8       100.0%  +8526.3%

    Yes, that last case shows vc1_parse() running 86 times faster! The M2TS
    case does show a larger absolute improvement though, since it was worse
    to begin with.

    This patch has been tested with the FATE suite (albeit on x86 for speed).

    Signed-off-by: Michael Niedermayer <michaelni@gmx.at>

    • [DH] libavcodec/vc1_parser.c
  • lavu/libm: add exp10 support

    22 December 2015, by Ganesh Ajjanagadde
    

    exp10 is a function available in GNU libm. Looks like no other common
    libm has it. This adds support for it to FFmpeg.

    There are essentially 2 ways of handling the fallback:
    1. Using pow(10, x)
    2. Using exp2(M_LOG2_10 * x).

    The first one represents a Pareto improvement, with no speed or accuracy
    regression anywhere, but the speed improvement is limited to GNU libm.

    The second one represents a slight accuracy loss (relative error 1e-13)
    for non-GNU libm. A speedup of > 2x is obtained on non-GNU libm platforms,
    and 30% on GNU libm. These are "average case" numbers; another benefit is
    that the well-known, terribly slow worst-case paths through pow are never
    triggered.

    Based on reviews, second one chosen. Comment added accordingly.
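
    A minimal sketch of what that chosen fallback looks like (the names here are placeholders; the real definition lives in libavutil/libm.h behind a configure check for a native exp10):

        #include <math.h>

        #ifndef M_LOG2_10
        #define M_LOG2_10 3.32192809488736234787   /* log_2(10) */
        #endif

        /* 10^x == 2^(x * log2(10)): a single exp2 call, with roughly 1e-13
         * relative error versus a correctly rounded exp10. */
        static inline double fallback_exp10(double x)
        {
            return exp2(M_LOG2_10 * x);
        }

        static inline float fallback_exp10f(float x)
        {
            return exp2f(M_LOG2_10 * x);
        }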

    Reviewed-by: Hendrik Leppkes <h.leppkes@gmail.com>
    Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
    Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
    Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>

    • [DH] configure
    • [DH] libavutil/libm.h