
Media (91)

Other articles (72)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To do so, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune the configuration of these menus.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

On other sites (10640)

  • Produce waveform video from audio using FFMPEG

    27 April 2017, by RhythmicDevil

    I am trying to create a waveform video from audio. My goal is to produce a video that looks something like this

    [image: example of the desired waveform video]

    For my test I have an mp3 that plays a short clipped sound. There are 4 bars of 1/4 notes and 4 bars of 1/8 notes played at 120 bpm. I am having some trouble coming up with the right combination of preprocessing and filtering to produce a video that looks like the image. The colors don't have to be exact; I am more concerned with the shape of the beats. I tried a couple of different approaches using showwaves and showspectrum. I can't quite wrap my head around why the beats go past so quickly when using showwaves, yet showspectrum produces a video where I can see each individual beat.

    ShowWaves

    ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showwaves=s=1280x100:mode=cline:rate=25:scale=sqrt,format=yuv420p[v]" -map "[v]" -map 0:a output_wav.mp4

    This link will download the output of that command.

    ShowSpectrum

    ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showspectrum=s=1280x100:mode=combined:color=intensity:saturation=5:slide=1:scale=cbrt,format=yuv420p[v]" -map "[v]" -an -map 0:a output_spec.mp4

    This link will download the output of that command.

    I posted the simple examples because I didn’t want to confuse the issue by adding all the variations I have tried.

    In practice I suppose I can get away with the output from showspectrum but I’d like to understand where/how I am thinking about this incorrectly. Thanks for any advice.

    Here is a link to the source audio file.
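    As a point of comparison (this sketch is not from the original question), the showfreqs filter draws one frequency plot per output frame rather than streaming raw samples past, so a short percussive hit stays visible as a distinct burst of bars; the option names below follow the FFmpeg filter documentation and may need adjusting for older builds:

    ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showfreqs=s=1280x100:mode=bar:ascale=sqrt,format=yuv420p[v]" -map "[v]" -map 0:a output_freq.mp4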

  • lavu/frame: Add Dolby Vision metadata side data type

    3 January 2022, by Niklas Haas

    In order to be able to extend this struct later (as the Dolby Vision RPU
    evolves), all of the 'container' structs are considered extensible, and
    the individual constituent fields must instead be accessed via offsets.
    The precedent for this style of access is set in
    <libavutil/detection_bbox.h>; a short sketch of the pattern follows the list of changed files below.

    Signed-off-by: Niklas Haas <git@haasn.dev>
    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] doc/APIchanges
    • [DH] libavutil/dovi_meta.c
    • [DH] libavutil/dovi_meta.h
    • [DH] libavutil/frame.c
    • [DH] libavutil/frame.h
    • [DH] libavutil/version.h
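    To make the offset-based layout concrete, here is a minimal sketch of the pattern the commit describes, assuming the container struct and accessor are shaped roughly as in libavutil/dovi_meta.h (the exact definitions should be checked against that header):

    /* The container struct stores offsets rather than embedding the
     * constituent structs directly, so new members can be appended later
     * without breaking the ABI; callers reach the parts through helpers. */
    typedef struct AVDOVIMetadata {
        size_t header_offset;   /* offset to the RPU data header     */
        size_t mapping_offset;  /* offset to the data-mapping struct */
        size_t color_offset;    /* offset to the color metadata      */
    } AVDOVIMetadata;

    static inline AVDOVIRpuDataHeader *
    av_dovi_get_header(const AVDOVIMetadata *data)
    {
        return (AVDOVIRpuDataHeader *)((uint8_t *) data + data->header_offset);
    }
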
  • FFmpeg Has A Native VP8 Decoder

    24 June 2010, by Multimedia Mike — VP8

    Thanks to David Conrad and Ronald Bultje who committed their native VP8 video decoder to the FFmpeg codebase yesterday. At this point, it can decode 14/17 of the VP8 test vectors that Google released during the initial open sourcing event. Work is ongoing on those 3 non-passing samples (missing bilinear filter). Meanwhile, FFmpeg’s optimization-obsessive personalities are hard at work optimizing the native decoder. The current decoder is already profiled to be faster than Google/On2’s official libvpx.

    Testing
    So it falls to FATE to test this on the ridiculous diversity of platforms that FFmpeg supports. I staged individual test specs for each of the 17 test vectors: vp8-test-vector-001 ... vp8-test-vector-017. After the samples have propagated through to the various FATE installations, I’ll activate the 14 test specs that are currently passing.

    Initial Testing Methodology
    Inspired by Ronald Bultje’s idea, I built the latest FFmpeg-SVN with libvpx enabled. Then I selected between the reference and native decoders as such:

    $ for i in 001 002 003 004 005 006 007 008 009 \
     010 011 012 013 014 015 016 017
    do
      echo vp80-00-comprehensive-$i.ivf
      ffmpeg -vcodec libvpx -i \
        /path/to/vp8-test-vectors-r1/vp80-00-comprehensive-$i.ivf \
        -f framemd5 - 2> /dev/null
    done > refs.txt
    

    $ for i in 001 002 003 004 005 006 007 008 009 \
     010 011 012 013 014 015 016 017
    do
      echo vp80-00-comprehensive-$i.ivf
      ffmpeg -vcodec vp8 -i \
        /path/to/vp8-test-vectors-r1/vp80-00-comprehensive-$i.ivf \
        -f framemd5 - 2> /dev/null
    done > native.txt

    $ diff -u refs.txt native.txt

    That reveals precisely which files differ.