Other articles (62)

  • Using and configuring the script

    19 January 2011, by

    Information specific to the Debian distribution
    If you use this distribution, you will need to enable the "debian-multimedia" repositories as explained here (a rough sketch is given below this excerpt):
    Since version 0.3.1 of the script, the repository can be enabled automatically in response to a prompt.
    Retrieving the script
    The installation script can be retrieved in two different ways.
    Via svn, using the following command to fetch the up-to-date source code:
    svn co (...)
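
    As an illustration (not part of the original article), enabling the repository by hand would look roughly like this on a Debian squeeze system of that era; the repository URL and suite name are assumptions to adapt to your release:

    # Add the multimedia repository to APT's sources, then refresh the package index
    echo 'deb http://www.debian-multimedia.org squeeze main non-free' >> /etc/apt/sources.list
    apt-get update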

  • The farm's recurring Cron tasks

    1 December 2010, by

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this makes it possible to generate regular visits to the various sites and prevent the tasks of rarely visited sites from being too (...)
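
    As an illustration (not from the original article), the system Cron on the central site could be a crontab entry that fetches the farm's home page every minute; the central URL here is a hypothetical placeholder:

    # /etc/cron.d/mediaspip-ferme -- hit the central site every minute so its Cron runs
    * * * * * www-data wget -q -O /dev/null http://ferme.example.org/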

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

On other sites (4149)

  • FFmpeg CRF control using x264 vs libvpx-vp9

    19 October 2016, by igon

    I have some experience using ffmpeg with x264 and I wanted to do a comparison with libvpx-vp9. I tested a simple single-pass encoding of a raw video, varying the CRF settings and presets with both x264 and libvpx-vp9. I am new to libvpx and I followed this and this carefully, but I might still have specified a wrong combination of parameters, since the results I get do not make much sense to me.
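
    For reference, the numbers below can be gathered with a loop along these lines (a simplified sketch, not the exact script; file names are placeholders):

    for crf in 20 30; do
      for preset in fast medium slow; do
        # time the encode (GNU time), keeping a single thread as in the tests below
        /usr/bin/time -f "%e s" \
          ffmpeg -y -i test_video.y4m -c:v libx264 -threads 1 \
                 -crf "$crf" -preset "$preset" output.mkv
        # compare the encode against the source and print the PSNR summary line
        ffmpeg -i output.mkv -i test_video.y4m -lavfi psnr -f null - 2>&1 | grep PSNR
      done
    done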

    For x264 I did:

    ffmpeg -i test_video.y4m -c:v libx264 -threads 1 -crf <crf> -preset <preset> -y output.mkv

    and obtained the following results:

    codec  , settings                        , time        , PSNR      ,bitrate
    libx264,['-crf', '20', '-preset', 'fast'],13.1897280216, 42.938337 ,15728
    libx264,['-crf', '20', '-preset', 'medium'],16.80494689, 42.879753 ,15287
    libx264,['-crf', '20', '-preset', 'slow'],25.1142120361, 42.919206 ,15400
    libx264,['-crf', '30', '-preset', 'fast'],8.79047083855, 37.975141 ,4106
    libx264,['-crf', '30', '-preset', 'medium'],9.936599016, 37.713778 ,3749
    libx264,['-crf', '30', '-preset', 'slow'],13.0959510803, 37.569511 ,3555

    This makes sense to me: for a given CRF value you get a certain PSNR, and changing the preset can decrease the bitrate at the cost of a longer encode time.

    For libvpx-vp9 I did:

    ffmpeg -i test_video.y4m -c:v libvpx-vp9 -threads 1 -crf <crf> -cpu-used <effort> -y output.mkv

    First of all, I gathered from online tutorials that the -cpu-used option is equivalent to -preset in x264. Is that correct? If so, what is the difference with -quality? Furthermore, since the range goes from -8 to 8, I assumed that negative values were the fast options while positive values were the slowest. The results I get are very confusing though:

    codec     , settings                      , time        , PSNR     ,bitrate
    libvpx-vp9,['-crf', '20', '-cpu-used', '-2'],19.6644911766,32.54317,571
    libvpx-vp9,['-crf', '20', '-cpu-used', '0'],176.670887947,32.69899,564
    libvpx-vp9,['-crf', '20', '-cpu-used', '2'],20.0206270218,32.54317,571
    libvpx-vp9,['-crf', '30', '-cpu-used', '-2'],19.7931578159,32.54317,571
    libvpx-vp9,['-crf', '30', '-cpu-used', '0'],176.587754965,32.69899,564
    libvpx-vp9,['-crf', '30', '-cpu-used', '2'],19.8394429684,32.54317,571

    The bitrate is very low, and the PSNR seems unaffected by the CRF setting (and very low compared to x264). The -cpu-used setting has very minimal impact, and it also seems that -2 and 2 are the same option. What am I missing? I expected libvpx to take more time to encode (which is definitely true) but at the same time to produce higher-quality transcodes. What parameters should I use to have a fair comparison with x264?
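
    (Side note: as far as I can tell, -quality is simply an alias for -deadline, which takes best, good or realtime, and -cpu-used is a speed knob within the chosen deadline, so a run in that style would look like the sketch below; the values are only an example, not a verified equivalent of any x264 preset.)

    ffmpeg -i test_video.y4m -c:v libvpx-vp9 -threads 1 \
           -crf 30 -deadline good -cpu-used 2 -y output.mkv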

    Edit: Thanks to @mulvya and this doc, I figured out that to work in CRF mode with libvpx I have to add -b:v 0. I re-ran my tests and I get:

    codec     , settings                                 , time        , PSNR     ,bitrate
    libvpx-vp9,['-crf', '20', '-b:v', '0', '-cpu-used', '-2'],57.6835780144,45.111158,17908
    libvpx-vp9,['-crf', '20', '-b:v', '0', '-cpu-used', '0'] ,401.360313892,45.285367,17431
    libvpx-vp9,['-crf', '20', '-b:v', '0', '-cpu-used', '2'] ,57.4941239357,45.111158,17908
    libvpx-vp9,['-crf', '30', '-b:v', '0', '-cpu-used', '-2'],49.175855875,42.588178,11085
    libvpx-vp9,['-crf', '30', '-b:v', '0', '-cpu-used', '0'] ,347.158324957,42.782194,10935
    libvpx-vp9,['-crf', '30', '-b:v', '0', '-cpu-used', '2'] ,49.1892938614,42.588178,11085

    PSNR and bitrate went up significantly by adding -b:v 0.
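
    In other words, the constant-quality invocation needs to look something like this (a sketch using the values from the tables above; as I understand it, without -b:v 0 the crf value only acts as a quality cap on top of the default target bitrate, hence the tiny bitrates in the first run):

    ffmpeg -i test_video.y4m -c:v libvpx-vp9 -threads 1 \
           -crf 30 -b:v 0 -cpu-used 2 -y output.mkv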

  • avformat/matroskaenc: Don't waste bytes writing level 1 elements

    20 April 2019, by Andreas Rheinhardt
    avformat/matroskaenc: Don't waste bytes writing level 1 elements
    

    Up until now, the length field of most level 1 elements has been written
    using eight bytes, although it is known in advance how much space the
    content of said elements will take up so that it would be possible to
    determine the minimal amount of bytes for the length field. This
    commit changes this.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
    Signed-off-by: James Almer <jamrial@gmail.com>

    • [DH] libavformat/matroskaenc.c
    • [DH] tests/fate/matroska.mak
    • [DH] tests/fate/wavpack.mak
    • [DH] tests/ref/fate/aac-autobsf-adtstoasc
    • [DH] tests/ref/fate/binsub-mksenc
    • [DH] tests/ref/fate/rgb24-mkv
    • [DH] tests/ref/lavf/mka
    • [DH] tests/ref/lavf/mkv
    • [DH] tests/ref/lavf/mkv_attachment
    • [DH] tests/ref/seek/lavf-mkv
  • ffmpeg: use vidstabtransform to overlay it over blurred background

    5 November 2023, by konewka

    I am using ffmpeg to concatenate multiple video clips taken of the same object over multiple timeframes. To make sure the videos are properly aligned (and therefore show the object in roughly the same position), I manually identify two points in the first frame of each clip, and use that to calculate the scaling and positioning necessary for proper alignment. I'm using Python for this, and it also generates the ffmpeg command for me. When it has calculated that the appropriate scale of the video is less than 100%, some parts of the frame will become black. To counter that, I overlay the scaled and positioned video over a blurred version of the original video (like this effect).

    Additionally, some of the video clips are a bit shaky, so my flow now first applies the vidstabdetect and vidstabtransform filters and uses the transformed, stabilized version as input for my final command. However, if the shaking is significant, vidstabtransform will zoom in, so I either lose some of the detail around the edges or a black border is created around them. Since I later include the stabilized version in the concatenation, with the possibility of it shrinking, I would rather perform the vidstabtransform step inside my command and feed its output directly into the overlay over the blurred version. That way, the clip would rotate across the frame as it is stabilized while being shown over the blurred background. Is it possible to achieve this using ffmpeg, or am I trying to stretch it too far?

    As a minimal example, these are my commands:

    ffmpeg -i video1.mp4 -vf vidstabdetect=output=transform.trf -f null -

    ffmpeg -i video1.mp4 -vf vidstabtransform=input=transform.trf video1_stabilized.mp4

    # same for video2.mp4

    ffmpeg -i video1_stabilized.mp4 -i video2_stabilized.mp4 -filter_complex "
        [0:v]split=2[v0blur][v0scale];
        [v0blur]gblur=sigma=50[v0blur];  // blur the video
        [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];  // scale the video
        [v0blur][v0scale]overlay=x=100:y=200[v0];  // overlay the scaled video over the blur at a specific location
        [1:v]split=2[v1blur][v1scale];
        [v1blur]gblur=sigma=50[v1blur];
        [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
        [v1blur][v1scale]overlay=x=150:y=150[v1];
        [v0][v1]concat=n=2  // concatenate the two clips" \
    -c:v libx264 -r 30 out.mp4

    So, I know I can put the vidstabtransform step into the filter_complex graph (I'll still do the detection in a separate step), but can I also use it in such a way that the stabilization happens over the blurred background, with the clip moving around the frame as it is stabilized?

    EDIT: to include vidstabtransform in the filter graph, it would then look like this:

    ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex "
        [0:v]vidstabtransform=input=transform1.trf[v0stab];
        [v0stab]split=2[v0blur][v0scale];
        [v0blur]gblur=sigma=50[v0blur];
        [v0scale]scale=round(iw*0.8/2)*2:round(ih*0.8/2)*2[v0scale];
        [v0blur][v0scale]overlay=x=100:y=200[v0];
        [1:v]vidstabtransform=input=transform2.trf[v1stab];
        [v1stab]split=2[v1blur][v1scale];
        [v1blur]gblur=sigma=50[v1blur];
        [v1scale]scale=round(iw*0.9/2)*2:round(ih*0.9/2)*2[v1scale];
        [v1blur][v1scale]overlay=x=150:y=150[v1];
        [v0][v1]concat=n=2" \
    -c:v libx264 -r 30 out.mp4
