Other articles (72)

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News creation form: for a document of type "news", the fields offered by default are: Publication date (customize the publication date) (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (10163)

  • Normalizing audio in ffmpeg - how?

    11 November 2020, by Betty Crokker

    I'm creating one of those "Brady Bunch" videos for a choir, using a C# application I'm writing that uses ffmpeg for all the heavy lifting. For the most part it's working great, but I'm having trouble getting the audio levels just right.

    What I'm doing right now is first "normalizing" the audio from the individual singers like this:

    • Extract the audio into a WAV file using ffmpeg
    • Load the WAV file into my application using NAudio
    • Find the maximum 16-bit sample value
    • When I create the merged video, specify a volume for this stream that boosts the maximum value to 32767

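    For reference, the extraction step can be a plain ffmpeg call along these lines (file names here are placeholders):

    ffmpeg -i singer1.mp4 -vn -acodec pcm_s16le singer1.wav
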
    So, for example, if I have 3 streams: stream A's maximum audio is already 32767, stream B's maximum is 32000, and stream C's maximum is 16000. When I merge these videos I will specify:

    [0:a]volume=1.0,aresample=async=1:first_pts=0[aud0]
[1:a]volume=1.02,aresample=async=1:first_pts=0[aud1]
[2:a]volume=2.05,aresample=async=1:first_pts=0[aud2]
[aud0][aud1][aud2]amix=inputs=3[a]
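
    (Each factor is simply 32767 divided by that stream's measured peak:)

    gain = 32767 / peak
    stream B: 32767 / 32000 ≈ 1.02
    stream C: 32767 / 16000 ≈ 2.05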

    (I have an additional "volume tweak" that lets me adjust the volume level of individual singers as necessary, but we can ignore that for this question)

    I am reading the ffmpeg wiki on Audio Volume Manipulation, and I will implement that next, but I don't know what to do with the output it generates. It looks like I'm going to get mean and max volume levels in dB, and while I understand decibels in a "yeah, I learned about those in college 30 years ago" kind of way, I don't know how to use those values to normalize the audio of my input videos.

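    For what it's worth, the volumedetect pass that wiki describes looks like the sketch below; one way to use its output is to boost each clip by the negative of its reported max_volume, so the peak lands at 0 dBFS (the numbers are illustrative and the log lines abbreviated):

    ffmpeg -i singer1.mp4 -af volumedetect -f null -
    ... mean_volume: -23.5 dB
    ... max_volume: -5.1 dB

    ffmpeg -i singer1.mp4 -af volume=5.1dB ...
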
    The problem is that in the ffmpeg output video, the audio level is quite low. If I apply the same extract-the-audio-and-inspect-the-WAV process to the merged video that ffmpeg generated, the maximum value is only 4904.

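    One likely culprit, worth verifying against your ffmpeg version: by default the amix filter scales each input down, roughly by 1/n for n inputs (readjusting as inputs end), so that the mixed sum cannot clip. That makes large mixes come out quiet. Recent ffmpeg builds can disable this with amix's normalize option:

    [aud0][aud1][aud2]amix=inputs=3:normalize=0[a]
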
    How do I implement an algorithm that automatically sets the output volume to a "reasonable" level? I realize I can simply add a manual volume filter and have a human set the level, but that means a lot of back and forth: generating the merged video, listening to it, adjusting the level, merging again, and so on. I want my application to figure out an appropriate output volume on its own (possibly with human adjustment allowed).

    EDIT

    Asking ffmpeg to determine the mean and max volume of each clip does provide mean and max volume in dB, and I can then use those values to scale each input clip:

    [0:a]volume=3.40dB,aresample=async=1:first_pts=0[aud0]
[1:a]volume=3.90dB,aresample=async=1:first_pts=0[aud1]
[2:a]volume=4.40dB,aresample=async=1:first_pts=0[aud2]
[3:a]volume=-0.00dB,aresample=async=1:first_pts=0[aud3]
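
    (The dB and linear forms of the volume filter are interchangeable, since linear gain = 10^(dB/20); these are exactly the factors that show up in the larger filter graph further below:)

    linear = 10^(dB / 20)
    10^(3.40 / 20) ≈ 1.48
    10^(3.90 / 20) ≈ 1.57
    10^(4.40 / 20) ≈ 1.66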

    But my final video is still strangely quiet. For now, I've added a manually entered volume factor that gets applied at the very end:

    [aud0][aud1][aud2]amix=inputs=3[a]
[a]volume=volume=3.00[b]

    So my question is, in effect: how do I determine algorithmically what this final volume factor needs to be?

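    If targeting a fixed loudness is acceptable, ffmpeg also ships a loudnorm filter (EBU R128 loudness normalization) that can replace the hand-tuned factor entirely; a hedged sketch, appended after the mix (the I/TP/LRA targets here are common example values, not a recommendation):

    [aud0][aud1][aud2]amix=inputs=3[a]
    [a]loudnorm=I=-16:TP=-1.5:LRA=11[b]
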
    MORE EDIT

    There's something deeper going on here: I just set the volume filter to 100 and the output is only slightly louder. Here are my filters, and the relevant portions of the command line:

    color=size=1920x1080:c=0x0000FF [base];
[0:v] scale=576x324 [clip0];
[0:a]volume=1.48,aresample=async=1:first_pts=0[aud0];
[1:v] crop=808:1022:202:276,scale=384x486 [clip1];
[1:a]volume=1.57,aresample=async=1:first_pts=0[aud1];
[2:v] crop=1160:1010:428:70,scale=558x486 [clip2];
[2:a]volume=1.66,aresample=async=1:first_pts=0[aud2];
[3:v] crop=1326:1080:180:0,scale=576x469 [clip3];
[3:a]volume=1.70,aresample=async=1:first_pts=0[aud3];
[4:a]volume=0.20,aresample=async=1:first_pts=0[aud4];
[5:a]volume=0.73,aresample=async=1:first_pts=0[aud5];
[6:v] crop=1326:1080:276:0,scale=576x469 [clip4];
[6:a]volume=1.51,aresample=async=1:first_pts=0[aud6];
[base][clip0] overlay=shortest=1:x=32:y=158 [tmp0];
[tmp0][clip1] overlay=shortest=1:x=768:y=27 [tmp1];
[tmp1][clip2] overlay=shortest=1:x=1321:y=27 [tmp2];
[tmp2][clip3] overlay=shortest=1:x=32:y=625 [tmp3];
[tmp3][clip4] overlay=shortest=1:x=672:y=625 [tmp4];
[aud0][aud1][aud2][aud3][aud4][aud5][aud6]amix=inputs=7[a];
[a]adelay=delays=200:all=1[b];
[b]volume=volume=100.00[c];
[c]asplit[a1][a2];

ffmpeg -y ....
   -map "[tmp4]" -map "[a1]" -c:v libx264 "D:\voutput.mp4" 
   -map "[a2]" "D:\aoutput.mp3"

    When I do this, the audio I want is louder (loud enough to clip and get distorted), but definitely not 100x louder.

  • Android Java (ffmpeg-kit): assistance (opinion) with combining 4 ffmpeg commands together

    26 November 2022, by D-MAN

    I have the following 4 ffmpeg commands, each of which works on its own:

    1. Adds a png frame (border) over the entire length of the video.
    2. Creates a boomerang effect.
    3. Adds an outro jpeg to the last 2 seconds of the video.
    4. Adds an intro jpeg to the first 2 seconds of the video.

    My aim is to combine all of these individual commands into one command that produces a single, fully edited video containing all of these elements.

    Your assistance is greatly appreciated.

    /**
     * (Middle overlay filter)
     * String exe = "-i " + input_video_uri + " -framerate 60 -i " + frame + " -filter_complex [0]pad=" + mVideoWidth + ":" + mVideoHeight + ":576:0[vid];[vid][1]overlay -c:a copy -vcodec mpeg4 -crf 0 -preset ultrafast -qscale 0 " + file2.getAbsolutePath();
     *
     * (Boomerang effect)
     * String exe = "-y -i " + input_video_uri + " -filter_complex [0]reverse[r];[0][r][0]concat=n=3,setpts=0.5*PTS " + file2.getAbsolutePath();
     *
     * (Put image at end of video)
     * String exe = "-i " + input_video_uri + " -i " + frame + " -filter_complex \"[0:v][1:v] overlay=0:0:enable='between(t," + (msec - 2) + "," + msec + ")'\" -pix_fmt yuv420p -c:a copy " + file2.getAbsolutePath();
     *
     * (Put image at start of video)
     * String exe = "-i " + input_video_uri + " -i " + frame + " -filter_complex \"[0:v][1:v] overlay=0:0:enable='between(t,0,2)'\" -pix_fmt yuv420p -c:a copy " + file2.getAbsolutePath();
     */

    Being new to ffmpeg, I am limited in knowledge. However, I have tried '&&', which produced an "unrecognized" error from the ffmpeg library. (A sketch of one possible combined command follows below.)

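    Since ffmpeg-kit hands the whole string to a single ffmpeg invocation, shell operators like '&&' are never interpreted; the usual route is to chain all four stages inside one -filter_complex by naming intermediate pads. A rough, untested sketch of the idea (line breaks added for readability; WIDTH, HEIGHT, and END are placeholders the app must substitute, END being the sped-up video's duration in seconds, analogous to msec above; audio is left out, as in the boomerang command):

    -y -i input.mp4 -i frame.png -i intro.jpg -i outro.jpg -filter_complex
      "[0:v]reverse[r];
       [0:v][r][0:v]concat=n=3,setpts=0.5*PTS[boom];
       [boom]pad=WIDTH:HEIGHT:576:0[padded];
       [padded][1:v]overlay[framed];
       [framed][2:v]overlay=0:0:enable='between(t,0,2)'[withintro];
       [withintro][3:v]overlay=0:0:enable='between(t,END-2,END)'[v]"
      -map "[v]" -pix_fmt yuv420p output.mp4
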
  • x86/tx_float: remove vgatherdpd usage

    20 May 2022, by Lynne

    Its performance loss ranges from being just as fast as individual loads
    (Skylake), to a few percent slower (Alderlake), to 8% slower (Zen 3), to
    completely disastrous (older/other CPUs).

    Sadly, gathers never panned out to be fast on x86, even with the benefit of
    time and implementation experience.

    This also saves a register, as there's no need to fill out an additional
    register mask.

    Zen 3 (16384-point transform):
    Before: 1561050 decicycles in av_tx (fft), 131072 runs, 0 skips
    After: 1449621 decicycles in av_tx (fft), 131072 runs, 0 skips

    Alderlake:
    2% slower on big transforms (65536), 1% slower at 131072, and a few percent
    slower for smaller sizes.

    • [DH] libavutil/x86/tx_float.asm
    • [DH] libavutil/x86/tx_float_init.c