
Media (91)

Other articles (31)

  • HTML5 audio and video support

13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document in SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
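
    As a rough illustration of the first of those extra actions (SPIPmotion itself relies on external binaries rather than on code like this), a minimal libavformat probe might look like the sketch below; the file name is taken from the command line:

    // Minimal sketch only: print basic stream information for a media file,
    // roughly the kind of data SPIPmotion records when a source document is attached.
    extern "C" {
    #include <libavformat/avformat.h>
    }
    #include <cstdio>

    int main(int argc, char** argv) {
        if (argc < 2) { std::fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

        AVFormatContext* fmt = nullptr;
        if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0) return 1;
        if (avformat_find_stream_info(fmt, nullptr) < 0) { avformat_close_input(&fmt); return 1; }

        std::printf("duration: %.2f s\n", fmt->duration / (double)AV_TIME_BASE);
        for (unsigned i = 0; i < fmt->nb_streams; i++) {
            const AVCodecParameters* par = fmt->streams[i]->codecpar;
            if (par->codec_type == AVMEDIA_TYPE_VIDEO)
                std::printf("stream %u: video %dx%d\n", i, par->width, par->height);
            else if (par->codec_type == AVMEDIA_TYPE_AUDIO)
                std::printf("stream %u: audio %d Hz\n", i, par->sample_rate);
        }
        avformat_close_input(&fmt);
        return 0;
    }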

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFMpeg: the main encoder, able to transcode almost any type of video or audio file into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieval of information from most video and audio formats;
    Complementary and optional binaries: flvtool2: (...)

On other sites (7064)

  • Getting silence level to be used with silencedetect automatically

    3 May 2017, by P. Dee

    My goal is to find the average or maximum silence level in dB to use it with silencedetect. I found volumedetect and thought of using the histogram results to find the lowest dB values (low as in -40dB, -50dB, etc.) with a high number of occurrences.

    Is there a better idea? Can it be combined with the silencedetect command, so I don't need to enter the dB value at all?
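
    One way to avoid typing the dB value by hand is to run volumedetect first, parse its report, and feed a derived threshold to silencedetect. The sketch below is only one possible approach, not a built-in FFmpeg feature; input.wav and the 10 dB offset below the mean are placeholder assumptions:

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <iostream>
    #include <string>

    int main() {
        // volumedetect writes its report to stderr, so merge stderr into the pipe (POSIX popen).
        FILE* pipe = popen("ffmpeg -hide_banner -i input.wav -af volumedetect -f null - 2>&1", "r");
        if (!pipe) return 1;

        double meanVolume = 0.0;
        bool found = false;
        char line[512];
        while (fgets(line, sizeof(line), pipe)) {
            const char* p = std::strstr(line, "mean_volume:");
            if (p && std::sscanf(p, "mean_volume: %lf dB", &meanVolume) == 1)
                found = true;
        }
        pclose(pipe);
        if (!found) { std::cerr << "no volumedetect report found" << std::endl; return 1; }

        // Arbitrary heuristic: treat anything 10 dB below the mean volume as silence.
        std::string cmd = "ffmpeg -hide_banner -i input.wav -af silencedetect=n=" +
                          std::to_string(meanVolume - 10.0) + "dB:d=0.5 -f null -";
        std::cout << "running: " << cmd << std::endl;
        return std::system(cmd.c_str());
    }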

  • vc-1: Optimise parser (with special attention to ARM)

    21 July 2014, by Ben Avison
    

    The previous implementation of the parser made four passes over each input
    buffer (reduced to two if the container format already guaranteed the input
    buffer corresponded to frames, such as with MKV). But these buffers are
    often 200K in size, certainly enough to flush the data out of L1 cache, and
    for many CPUs, all the way out to main memory. The passes were:

    1) locate frame boundaries (not needed for MKV etc)
    2) copy the data into a contiguous block (not needed for MKV etc)
    3) locate the start codes within each frame
    4) unescape the data between start codes

    After this, the unescaped data was parsed to extract certain header fields,
    but because the unescape operation was so large, this was usually also
    effectively operating on uncached memory. Most of the unescaped data was
    simply thrown away and never processed further. Only step 2 - because it
    used memcpy - was using prefetch, making things even worse.

    This patch reorganises these steps so that, aside from the copying, the
    operations are performed in parallel, maximising cache utilisation. No more
    than the worst-case number of bytes needed for header parsing is unescaped.
    Most of the data is, in practice, only read in order to search for a start
    code, for which optimised implementations already existed in the H264 codec
    (notably the ARM version uses prefetch, so we end up doing both remaining
    passes at maximum speed). For MKV files, we know when we’ve found the last
    start code of interest in a given frame, so we are able to avoid doing even
    that one remaining pass for most of the buffer.
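
    To make passes 3 and 4 concrete, here is a deliberately simplified sketch of start-code scanning and unescaping. It only illustrates the general technique; it is not the libavcodec implementation, which uses optimised per-architecture routines:

    #include <cstdint>
    #include <cstddef>
    #include <vector>

    // Find the next 00 00 01 start-code prefix at or after pos, or return len if none exists.
    static size_t next_start_code(const uint8_t* buf, size_t len, size_t pos) {
        for (size_t i = pos; i + 2 < len; ++i)
            if (buf[i] == 0x00 && buf[i + 1] == 0x00 && buf[i + 2] == 0x01)
                return i;
        return len;
    }

    // Unescape at most max_bytes starting at pos, dropping the 0x03 emulation-prevention
    // byte that follows two zero bytes. Capping max_bytes mirrors the patch's idea of
    // unescaping no more than header parsing could possibly need.
    static std::vector<uint8_t> unescape(const uint8_t* buf, size_t pos, size_t end,
                                         size_t max_bytes) {
        std::vector<uint8_t> out;
        int zeros = 0;
        for (size_t i = pos; i < end && out.size() < max_bytes; ++i) {
            if (zeros >= 2 && buf[i] == 0x03) { zeros = 0; continue; }  // skip escape byte
            zeros = (buf[i] == 0x00) ? zeros + 1 : 0;
            out.push_back(buf[i]);
        }
        return out;
    }

    int main() {
        // Tiny demo: a start code, an escaped payload (00 00 03 01), then another start code.
        const uint8_t data[] = {0x00, 0x00, 0x01, 0x0D, 0x00, 0x00, 0x03, 0x01,
                                0x00, 0x00, 0x01, 0x0F};
        size_t sc = next_start_code(data, sizeof(data), 0);
        std::vector<uint8_t> hdr = unescape(data, sc + 3, sizeof(data), 32);
        return hdr.size() == 8 ? 0 : 1;  // 9 payload bytes minus the one escape byte
    }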

    In some use-cases (such as the Raspberry Pi) video decode is handled by the
    GPU, but the entire elementary stream is still fed through the parser to
    pick out certain elements of the header which are necessary to manage the
    decode process. As you might expect, in these cases, the performance of the
    parser is significant.

    To measure parser performance, I used the same VC-1 elementary stream in
    either an MPEG-2 transport stream or a MKV file, and fed it through avconv
    with -c:v copy -c:a copy -f null. These are the gperftools counts for
    those streams, both filtered to only include vc1_parse() and its callees,
    and unfiltered (to include the whole binary). Lower numbers are better:

    File  Filtered  Mean(before)  StdDev(before)  Mean(after)  StdDev(after)  Confidence     Change
    M2TS  No               861.7             8.2        650.5           8.1       100.0%     +32.5%
    MKV   No               868.9             7.4        731.7           9.0       100.0%     +18.8%
    M2TS  Yes              250.0            11.2         27.2           3.4       100.0%    +817.9%
    MKV   Yes              149.0            12.8          1.7           0.8       100.0%  +8526.3%

    Yes, that last case shows vc1_parse() running 86 times faster! The M2TS
    case does show a larger absolute improvement though, since it was worse
    to begin with.

    This patch has been tested with the FATE suite (albeit on x86 for speed).

    Signed-off-by: Luca Barbato <lu_zero@gentoo.org>

    • [DBH] libavcodec/vc1_parser.c
  • Copy frame-specific properties (e.g. width and height)

    5 August 2014, by gkuczera

    It's not that something is not working; I just can't figure out how to copy frame properties (e.g. height and width) after using the sws_scale function - this function doesn't copy them into the destination frame.

    Why do I need this? After scaling, my frame becomes the input for a filter, and its source properties have to be set to specific numbers rather than accepting anything (so a frame with width and height equal to zero, which is what I get after scaling, is not an option).

    I tried to use

    av_frame_copy_props

    but even this function's documentation mentions that it will not copy these fields.

    Here is the code:

    AVFrame* tOwnersFrame = pOwner->getFrame();
    AVFrame* tResizedFrame = avcodec_alloc_frame();
    int tResizedFrameWidth = pMaxFrameWidth;
    int tResizedFrameHeight = pMaxFrameHeight;

    if (!tResizedFrame)
    {
       cout << "Couldn't allocate the frame!" << endl;
       return;
    }

    uint8_t* tBuffer;
    int tBytesNeeded;

    tBytesNeeded = avpicture_get_size(PIX_FMT_RGB24, tResizedFrameWidth, tResizedFrameHeight);
    tBuffer = (uint8_t*)av_malloc(tBytesNeeded * sizeof(uint8_t));
    avpicture_fill((AVPicture*)tResizedFrame, tBuffer, PIX_FMT_RGB24, tResizedFrameWidth, tResizedFrameHeight);

    mSwsContext = sws_getCachedContext(mSwsContext, pOwner->getFrameWidth(), pOwner->getFrameHeight(), AV_PIX_FMT_BGR24, tResizedFrameWidth, tResizedFrameHeight, PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(mSwsContext, (const uint8_t* const *)tOwnersFrame->data, tOwnersFrame->linesize, 0, pOwner->getFrameHeight(), tResizedFrame->data, tResizedFrame->linesize);
    cout << "FramesMerger::resizeFrameMax - arg frame size: " << pOwner->getFrame()->width << ", " << pOwner->getFrame()->height << endl;
    cout << "FramesMerger::resizeFrameMax - resized frame size: " << tResizedFrame->width << ", " << tResizedFrame->height << endl;
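
    For what it is worth, one likely fix (my reading of the problem, not a verified answer) is to set the destination frame's geometry and format by hand, since neither sws_scale nor av_frame_copy_props writes those fields:

    // Record the fields sws_scale does not fill in: the scaled dimensions and the
    // destination pixel format are already known here, so set them explicitly.
    tResizedFrame->width  = tResizedFrameWidth;
    tResizedFrame->height = tResizedFrameHeight;
    tResizedFrame->format = PIX_FMT_RGB24;   // same format passed to avpicture_fill above

    // Optionally carry over metadata (timestamps, aspect ratio, ...) from the source frame;
    // this call still leaves width/height/format untouched, which is why they are set above.
    av_frame_copy_props(tResizedFrame, tOwnersFrame);

    Note also that in more recent FFmpeg versions avcodec_alloc_frame() and avpicture_fill() are deprecated in favour of av_frame_alloc() and av_frame_get_buffer(), which require width, height and format to be set on the frame before allocation anyway.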