Keyword: getid3

Other articles (72)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, the navigation menu gives you access to a "Language management" section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language. In that case, it becomes greyed out in the configuration and (...)

  • Improvements to the base version

    13 September 2013

    A nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below to compare.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (9534)

  • Feed output of one filter to the input of one filter multiple times with ffmpeg

    27 August 2021, by kentcdodds

    I have the following ffmpeg commands to create a podcast episode:

    # remove all silence at start and end of the audio files
ffmpeg -i call.mp3 -af silenceremove=1:0:-50dB call1.mp3
ffmpeg -i response.mp3 -af silenceremove=1:0:-50dB response1.mp3

# remove silence longer than 1 second anywhere within the audio files
ffmpeg -i call1.mp3 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB call2.mp3
ffmpeg -i response1.mp3 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB response2.mp3

# normalize audio files
ffmpeg -i call2.mp3 -af loudnorm=I=-16:LRA=11:TP=0.0 call3.mp3
ffmpeg -i response2.mp3 -af loudnorm=I=-16:LRA=11:TP=0.0 response3.mp3

# cross fade audio files with intro/interstitial/outro
ffmpeg -i intro.mp3 -i call3.mp3 -i interstitial.mp3 -i response3.mp3 -i outro.mp3 \
  -filter_complex "[0][1]acrossfade=d=1:c2=nofade[a01];
                   [a01][2]acrossfade=d=1:c1=nofade[a02];
                   [a02][3]acrossfade=d=1:c2=nofade[a03];
                   [a03][4]acrossfade=d=1:c1=nofade" \
  output.mp3

    While this "works" fine, I can't help but feel it would be more efficient to do this all in one ffmpeg command. Based on what I found online this should be possible, but I don't understand the syntax well enough to make it work. Here's what I tried:

    ffmpeg -i intro.mp3 -i call.mp3 -i interstitial.mp3 -i response.mp3 -i outro.mp3
       -af [1]silenceremove=1:0:-50dB[trimmedCall]
       -af [3]silenceremove=1:0:-50dB[trimmedResponse]
       -af [trimmedCall]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceCall]
       -af [trimmedResponse]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceResponse]
       -af [noSilenceCall]loudnorm=I=-16:LRA=11:TP=0.0[call]
       -af [noSilenceResponse]loudnorm=I=-16:LRA=11:TP=0.0[response]
      -filter_complex "[0][call]acrossfade=d=1:c2=nofade[a01];
                       [a01][2]acrossfade=d=1:c1=nofade[a02];
                       [a02][response]acrossfade=d=1:c2=nofade[a03];
                       [a03][4]acrossfade=d=1:c1=nofade"
  output.mp3

    But I suspect I have a fundamental misunderstanding here, because I got this error, which I don't understand:

    Stream specifier 'call' in filtergraph description 
[0][call]acrossfade=d=1:c2=nofade[a01];
[a01][2]acrossfade=d=1:c1=nofade[a02];
[a02][response]acrossfade=d=1:c2=nofade[a03];
[a03][4]acrossfade=d=1:c1=nofade
       matches no streams.

    For added context, I'm running all these commands through @ffmpeg/ffmpeg, so that last command actually looks like this (in JavaScript):

    await ffmpeg.run(
  '-i', 'intro.mp3',
  '-i', 'call.mp3',
  '-i', 'interstitial.mp3',
  '-i', 'response.mp3',
  '-i', 'outro.mp3',
  '-af', '[1]silenceremove=1:0:-50dB[trimmedCall]',
  '-af', '[3]silenceremove=1:0:-50dB[trimmedResponse]',
  '-af', '[trimmedCall]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceCall]',
  '-af', '[trimmedResponse]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceResponse]',
  '-af', '[noSilenceCall]loudnorm=I=-16:LRA=11:TP=0.0[call]',
  '-af', '[noSilenceResponse]loudnorm=I=-16:LRA=11:TP=0.0[response]',
  '-filter_complex', `
[0][call]acrossfade=d=1:c2=nofade[a01];
[a01][2]acrossfade=d=1:c1=nofade[a02];
[a02][response]acrossfade=d=1:c2=nofade[a03];
[a03][4]acrossfade=d=1:c1=nofade
  `,
  'output.mp3',
)
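
    The labels created with -af are not shared with -filter_complex: each -af defines a separate simple filtergraph applied to one output stream (and repeated -af options for the same stream override each other), so inside the complex graph [call] is read as a stream specifier and matches no input stream. As a rough, untested sketch reusing the filenames and filter settings above, a single -filter_complex that chains everything could look like this:

# trim, de-silence and normalize call/response, then cross fade everything in one pass
ffmpeg -i intro.mp3 -i call.mp3 -i interstitial.mp3 -i response.mp3 -i outro.mp3 \
  -filter_complex "[1]silenceremove=1:0:-50dB,silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB,loudnorm=I=-16:LRA=11:TP=0.0[call];
                   [3]silenceremove=1:0:-50dB,silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB,loudnorm=I=-16:LRA=11:TP=0.0[response];
                   [0][call]acrossfade=d=1:c2=nofade[a01];
                   [a01][2]acrossfade=d=1:c1=nofade[a02];
                   [a02][response]acrossfade=d=1:c2=nofade[a03];
                   [a03][4]acrossfade=d=1:c1=nofade" \
  output.mp3

    A single pass like this would also avoid re-encoding the intermediate MP3 files between steps.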

  • How to generate a video of set duration in frames from image sequence

    4 September 2021, by Sherali Karimov

    I have a sequence of images, one per frame. Sometimes frames are missing at random points in the sequence.

    Is it possible to instruct ffmpeg to generate a video of a pre-set length, in number of frames, from this sequence, replacing missing frames with a black screen?

    This would allow us to notice missing frames during visual inspection.

    I tried overlaying the image stream on a loop of another, black, image, but I don't know how to force it to keep looping for N frames...

    The OS is CentOS 8. Here is how I am combining them now:

    ffmpeg -y -nostdin -hide_banner -thread_queue_size 4096 \
      -start_number 1 -framerate 25 -apply_trc iec61966_2_1 -i /data/OUT/RENDER/MG013/MG013_007/beauty/MG013_007.Beauty.%04d.exr \
      -start_number 1 -framerate 25 -apply_trc iec61966_2_1 -i /data/OUT/RENDER/MG013/MG013_007/chars/MG013_007.Beauty.%04d.exr \
      -filter_complex "[0:v]copy[out];[out][1:v]overlay=x=(W-w)/2:y=(H-h)/2[out];[out][out]" -c:v libx264 -pix_fmt yuv420p \
      -profile:v baseline -refs 2 -crf 18 -map '[out]' -s 1920x1080 /data/MG/episodes/MG013/story/playblasts/render/MG013_007.mov

    This is working, except for the cases of missing frames...

    Ideally I would love to instruct it to generate, for instance, 450 frames and have it fill in the missing frames with blanks.

    Thank you!
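
    One way to get a fixed-length output is to generate a black base with the lavfi color source, overlay the image sequence on it with eof_action=pass so the base shows through once the sequence ends, and cap the output with -frames:v. The sketch below is untested and uses only the beauty sequence for brevity; the chars overlay from the command above could be chained into the same filtergraph. Note that the %04d pattern normally stops at the first missing frame number, so a gap in the middle would still end the image stream there and everything after it would come out black.

    # black 1920x1080 base at 25 fps; overlay the EXR sequence on it; stop after exactly 450 frames
    ffmpeg -y -nostdin -hide_banner \
      -f lavfi -i color=c=black:s=1920x1080:r=25 \
      -start_number 1 -framerate 25 -apply_trc iec61966_2_1 -i /data/OUT/RENDER/MG013/MG013_007/beauty/MG013_007.Beauty.%04d.exr \
      -filter_complex "[0:v][1:v]overlay=x=(W-w)/2:y=(H-h)/2:eof_action=pass[padded]" \
      -map '[padded]' -frames:v 450 -c:v libx264 -pix_fmt yuv420p \
      -profile:v baseline -refs 2 -crf 18 /data/MG/episodes/MG013/story/playblasts/render/MG013_007.mov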