
Other articles (45)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Making the files available

    14 April 2011, by

    By default, when it is initialised, MediaSPIP does not let visitors download the files, whether they are the originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    All of this is done in the template configuration page. You need to go to the channel's administration area and choose in the navigation (...)

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing their functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

On other sites (6511)

  • Copy frame-specific properties (e.g. width and height)

    5 August 2014, by gkuczera

    It's not that something is broken; I just can't figure out how to copy frame properties (e.g. width and height) after calling sws_scale, since this function does not copy them into the destination frame.

    Why do I need this? After scaling, my frame becomes the input for a filter, and its source properties have to be set to specific values rather than accepting anything (so a frame with width and height equal to zero, which is what I get after scaling, is not an option).

    I tried to use

    av_frame_copy_props

    but even the function's documentation mentions that it does not copy these fields.

    Here is the code:

    AVFrame* tOwnersFrame = pOwner->getFrame();
    AVFrame* tResizedFrame = avcodec_alloc_frame();
    int tResizedFrameWidth = pMaxFrameWidth;
    int tResizedFrameHeight = pMaxFrameHeight;

    if (!tResizedFrame)
    {
       cout << "Couldn't allocate the frame!" << endl;
       return;
    }

    uint8_t* tBuffer;
    int tBytesNeeded;

    tBytesNeeded = avpicture_get_size(PIX_FMT_RGB24, tResizedFrameWidth, tResizedFrameHeight);
    tBuffer = (uint8_t*)av_malloc(tBytesNeeded * sizeof(uint8_t));
    avpicture_fill((AVPicture*)tResizedFrame, tBuffer, PIX_FMT_RGB24, tResizedFrameWidth, tResizedFrameHeight);

    mSwsContext = sws_getCachedContext(mSwsContext, pOwner->getFrameWidth(), pOwner->getFrameHeight(), AV_PIX_FMT_BGR24, tResizedFrameWidth, tResizedFrameHeight, PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(mSwsContext, (const uint8_t* const *)tOwnersFrame->data, tOwnersFrame->linesize, 0, pOwner->getFrameHeight(), tResizedFrame->data, tResizedFrame->linesize);
    cout << "FramesMerger::resizeFrameMax - arg frame size: " << pOwner->getFrame()->width << ", " << pOwner->getFrame()->height << endl;
    cout << "FramesMerger::resizeFrameMax - resized frame size: " << tResizedFrame->width << ", " << tResizedFrame->height << endl;
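
    One way around this, sketched below under the same (old) FFmpeg API as the snippet above: sws_scale only writes pixel data through data/linesize and never fills in the destination frame's width, height or format, and av_frame_copy_props deliberately skips those fields as well, so they can simply be set by hand after scaling. The helper name finishScaledFrame is invented for this illustration, and the usage line reuses the variable names from the question.

    extern "C" {
    #include <libavcodec/avcodec.h>   // AVFrame, av_frame_copy_props, pixel formats
    }

    // Hypothetical helper: set the geometry/format fields that sws_scale
    // leaves untouched, then copy the remaining metadata (pts and similar),
    // which av_frame_copy_props handles but which excludes width/height/format.
    static void finishScaledFrame(AVFrame* dst, const AVFrame* src,
                                  int dstWidth, int dstHeight, int dstFormat)
    {
        dst->width  = dstWidth;
        dst->height = dstHeight;
        dst->format = dstFormat;           // e.g. the format passed to avpicture_fill
        av_frame_copy_props(dst, src);     // metadata only; does not touch dimensions
    }

    Called right after sws_scale in the code above, that would be:

    finishScaledFrame(tResizedFrame, tOwnersFrame, tResizedFrameWidth, tResizedFrameHeight, PIX_FMT_RGB24);
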
  • vc-1: Optimise parser (with special attention to ARM)

    21 July 2014, by Ben Avison

    The previous implementation of the parser made four passes over each input
    buffer (reduced to two if the container format already guaranteed the input
    buffer corresponded to frames, such as with MKV). But these buffers are
    often 200K in size, certainly enough to flush the data out of L1 cache, and
    for many CPUs, all the way out to main memory. The passes were:

    1) locate frame boundaries (not needed for MKV etc)
    2) copy the data into a contiguous block (not needed for MKV etc)
    3) locate the start codes within each frame
    4) unescape the data between start codes

    After this, the unescaped data was parsed to extract certain header fields,
    but because the unescape operation was so large, this was usually also
    effectively operating on uncached memory. Most of the unescaped data was
    simply thrown away and never processed further. Only step 2 - because it
    used memcpy - was using prefetch, making things even worse.

    This patch reorganises these steps so that, aside from the copying, the
    operations are performed in parallel, maximising cache utilisation. No more
    than the worst-case number of bytes needed for header parsing is unescaped.
    Most of the data is, in practice, only read in order to search for a start
    code, for which optimised implementations already existed in the H264 codec
    (notably the ARM version uses prefetch, so we end up doing both remaining
    passes at maximum speed). For MKV files, we know when we’ve found the last
    start code of interest in a given frame, so we are able to avoid doing even
    that one remaining pass for most of the buffer.
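
    As a rough illustration of that combined pass (not the actual libavcodec code; the function name, the MAX_HEADER_BYTES bound and the simplified escaping rules are all assumptions of this sketch), a single scan that both locates start codes and unescapes only a bounded number of header bytes could look like this:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Stand-in for the worst-case number of unescaped bytes the header
    // parser can ever need after one start code.
    static const size_t MAX_HEADER_BYTES = 64;

    static void parse_headers_single_pass(const uint8_t* buf, size_t size)
    {
        for (size_t i = 0; i + 2 < size; ++i) {
            if (buf[i] != 0 || buf[i + 1] != 0 || buf[i + 2] != 1)
                continue;                        // most bytes are only ever scanned, never copied
            // Start code found: unescape at most MAX_HEADER_BYTES after it.
            std::vector<uint8_t> hdr;
            size_t j = i + 3;
            while (j < size && hdr.size() < MAX_HEADER_BYTES) {
                if (j + 2 < size && buf[j] == 0 && buf[j + 1] == 0 && buf[j + 2] == 1)
                    break;                       // next start code reached; header is complete
                if (j + 2 < size && buf[j] == 0 && buf[j + 1] == 0 && buf[j + 2] == 3) {
                    hdr.push_back(0);
                    hdr.push_back(0);
                    j += 3;                      // drop the 0x03 emulation-prevention byte
                } else {
                    hdr.push_back(buf[j++]);
                }
            }
            // hdr now holds just enough unescaped data to read the header
            // fields; actual field extraction would happen here.
            i += 2;                              // resume scanning right after the start code
        }
    }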

    In some use-cases (such as the Raspberry Pi) video decode is handled by the
    GPU, but the entire elementary stream is still fed through the parser to
    pick out certain elements of the header which are necessary to manage the
    decode process. As you might expect, in these cases, the performance of the
    parser is significant.

    To measure parser performance, I used the same VC-1 elementary stream in
    either an MPEG-2 transport stream or a MKV file, and fed it through avconv
    with -c:v copy -c:a copy -f null. These are the gperftools counts for
    those streams, both filtered to only include vc1_parse() and its callees,
    and unfiltered (to include the whole binary). Lower numbers are better:

                        Before          After
    File  Filtered   Mean   StdDev   Mean   StdDev  Confidence    Change
    M2TS  No         861.7    8.2    650.5    8.1     100.0%      +32.5%
    MKV   No         868.9    7.4    731.7    9.0     100.0%      +18.8%
    M2TS  Yes        250.0   11.2     27.2    3.4     100.0%     +817.9%
    MKV   Yes        149.0   12.8      1.7    0.8     100.0%    +8526.3%

    Yes, that last case shows vc1_parse() running 86 times faster! The M2TS
    case does show a larger absolute improvement though, since it was worse
    to begin with.

    This patch has been tested with the FATE suite (albeit on x86 for speed).

    Signed-off-by: Luca Barbato <lu_zero@gentoo.org>

    • [DBH] libavcodec/vc1_parser.c
  • ffmpeg | x265 VS ffmpeg

    5 October 2016, by Kdmeizk

    What are the advantages (a better result, for example) of using

    ffmpeg -i input -pix_fmt yuv444p -f yuv4mpegpipe - | x265.exe --input - --y4m --output output

    instead of

    ffmpeg -i input -pix_fmt yuv444p -c:v libx265 output

    I often see the first command, but after a few tests the results are exactly the same in terms of I/P/B frame counts, and differ only by 20 B/s for the I frames (and I assume that difference should not be taken into account anyway).