Other articles (106)

  • Submitting bugs and patches

    10 April 2011

    Unfortunately, no piece of software is ever perfect...
    If you think you have found a bug, report it in our ticket system, taking care to give us the relevant information: the browser type and exact version with which you see the anomaly; as precise a description of the problem as possible; if possible, the steps to reproduce the problem; a link to the site / page in question;
    If you think you have fixed the bug yourself (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, OGV and WebM for HTML5 playback, with MP4 also used for Flash playback.
    Audio files are encoded as MP3 and Ogg for HTML5 playback, with MP3 also used for Flash playback.
    Where possible, text documents are analyzed to extract the data needed by search engines, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
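
    As an illustration (this example is not taken from MediaSPIP itself), conversions of this kind are typically done with a tool such as ffmpeg; a minimal sketch producing the web formats mentioned above, with placeholder filenames:

    # Sketch only: convert an uploaded file into the web formats listed above.
    ffmpeg -i upload.avi -c:v libx264 -c:a aac upload.mp4                    # MP4 (H.264/AAC)
    ffmpeg -i upload.avi -c:v libtheora -q:v 6 -c:a libvorbis upload.ogv     # OGV (Theora/Vorbis)
    ffmpeg -i upload.avi -c:v libvpx -b:v 1M -c:a libvorbis upload.webm      # WebM (VP8/Vorbis)
    ffmpeg -i upload.wav -c:a libmp3lame -q:a 2 upload.mp3                   # MP3
    ffmpeg -i upload.wav -c:a libvorbis upload.ogg                           # Ogg Vorbis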

On other sites (8470)

  • Convert DTS to AC3 but only if there is no AC3 track already present in container

    24 March, by Domagoj

    I have a sound system that does not support DTS, only AC3. I'm automating the process with a bash script that detects when a movie is added to a folder, downloads subtitles and converts the audio track to AC3 using this command (one part of it):

    


    ffmpeg -i "{{episode}}" -map 0:v -map 0:a:0 -map 0:a -map 0:s -c:v copy -c:a copy -c:s copy -c:a:0 ac3 -b:a:0 640k "{{directory}}"/{{episode_name}}temp2.mkv

    


    This works without issue and I end up with an .mkv file that contains the original DTS audio track and the newly created AC3 audio track. The issue is that some files already contain both AC3 and DTS tracks, and in those cases I end up with two AC3 tracks and one DTS track. Another issue is that this command is triggered every time there is an update to the subtitles, so it's possible that the command will execute multiple times over a few days and the container will end up with X number of AC3 tracks.

    


    I need a way to detect whether the file already contains an AC3 track before I run the command above, but I'm not sure what that command would be. Any help is appreciated!
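
    One common approach (an addition, not part of the original question) is to list the audio codecs with ffprobe and only convert when no AC3 stream is reported; a minimal sketch, reusing the {{episode}} placeholder from above:

    # Sketch: skip the conversion if an AC3 audio stream already exists.
    if ffprobe -v error -select_streams a -show_entries stream=codec_name -of csv=p=0 "{{episode}}" | grep -qx "ac3"; then
        echo "AC3 track already present, skipping conversion"
    else
        ffmpeg -i "{{episode}}" -map 0:v -map 0:a:0 -map 0:a -map 0:s -c:v copy -c:a copy -c:s copy -c:a:0 ac3 -b:a:0 640k "{{directory}}"/{{episode_name}}temp2.mkv
    fi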

    


  • FFMPEG stream video to Youtube Live

    13 June 2022, by BlessedHIT

    I have a MOV file and I'm using ffmpeg to stream it to YouTube Live using the following command:

    


    ffmpeg -re -i "episode.mov" -pix_fmt yuvj420p -x264-params keyint=48:min-keyint=48:scenecut=-1 -b:v 4500k -b:a 128k -ar 44100 -acodec aac -vcodec libx264 -preset medium -crf 28 -threads 4 -f flv "rtmp://a.rtmp.youtube.com/live2/YOUTUBE.LIVESTREAM.KEY"


    


    But I'm getting the following message on YouTube:

    


    YouTube is not receiving enough video to maintain smooth streaming. As such, viewers will experience buffering


    


    My ffmpeg output showed my bitrate sitting between 800 and 1000 kbps, way lower than what I specified in my ffmpeg command.

    


    I am using a not-so-powerful virtual machine, so I thought this might be why I am not getting the desired bitrate.

    


    To overcome my hardware limitations, I then decided to encode the file for streaming using this command:

    


    ffmpeg -i episode.mov -c:v libx264 -preset medium -b:v 4500k -maxrate 4500k -bufsize 6000k -vf "scale=1280:-1,format=yuv420p" -g 50 -c:a aac -b:a 128k -ac 2 -ar 44100 episode.flv


    


    Then I stream-copy the file using:

    


    ffmpeg -re -i episode.flv -c copy -f flv "rtmp://a.rtmp.youtube.com/live2/YOUTUBE.LIVESTREAM.KEY"


    


    And that seems to give me a stream that YouTube is happy with.

    


    My question is: is there a way I can rewrite my ffmpeg command to livestream at the desired bitrate without first encoding my MOV to another file, or is adding more memory the only way forward here?
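
    One detail worth noting (an editorial addition, not part of the original question): with libx264, specifying -crf puts the encoder in constant-quality mode and the -b:v target is effectively ignored, so the first command never actually aims for 4500k. A hedged sketch that drops -crf and uses a faster preset to ease the CPU load, assuming the virtual machine can keep up at veryfast:

    ffmpeg -re -i "episode.mov" -c:v libx264 -preset veryfast -b:v 4500k -maxrate 4500k -bufsize 9000k -pix_fmt yuv420p -g 48 -c:a aac -b:a 128k -ar 44100 -f flv "rtmp://a.rtmp.youtube.com/live2/YOUTUBE.LIVESTREAM.KEY"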

    


  • Feed output of one filter to the input of one filter multiple times with ffmpeg

    27 August 2021, by kentcdodds

    I have the following ffmpeg commands to create a podcast episode:

    


    # remove all silence at start and end of the audio files
ffmpeg -i call.mp3 -af silenceremove=1:0:-50dB call1.mp3
ffmpeg -i response.mp3 -af silenceremove=1:0:-50dB response1.mp3

# remove silence longer than 1 second anywhere within the audio files
ffmpeg -i call1.mp3 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB call2.mp3
ffmpeg -i response1.mp3 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB response2.mp3

# normalize audio files
ffmpeg -i call2.mp3 -af loudnorm=I=-16:LRA=11:TP=0.0 call3.mp3
ffmpeg -i response2.mp3 -af loudnorm=I=-16:LRA=11:TP=0.0 response3.mp3

# cross fade audio files with intro/interstitial/outro
ffmpeg -i intro.mp3 -i call3.mp3 -i interstitial.mp3 -i response3.mp3 -i outro.mp3 \
  -filter_complex "[0][1]acrossfade=d=1:c2=nofade[a01];
                   [a01][2]acrossfade=d=1:c1=nofade[a02];
                   [a02][3]acrossfade=d=1:c2=nofade[a03];
                   [a03][4]acrossfade=d=1:c1=nofade" \
  output.mp3


    


    While this "works" fine, I can't help but feel it would be more efficient to do this all in one ffmpeg command. Based on what I found online this should be possible, but I don't understand the syntax well enough to know how to make it work. Here's what I tried:

    


    ffmpeg -i intro.mp3 -i call.mp3 -i interstitial.mp3 -i response.mp3 -i outro.mp3 \
       -af [1]silenceremove=1:0:-50dB[trimmedCall] \
       -af [3]silenceremove=1:0:-50dB[trimmedResponse] \
       -af [trimmedCall]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceCall] \
       -af [trimmedResponse]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceResponse] \
       -af [noSilenceCall]loudnorm=I=-16:LRA=11:TP=0.0[call] \
       -af [noSilenceResponse]loudnorm=I=-16:LRA=11:TP=0.0[response] \
      -filter_complex "[0][call]acrossfade=d=1:c2=nofade[a01];
                       [a01][2]acrossfade=d=1:c1=nofade[a02];
                       [a02][response]acrossfade=d=1:c2=nofade[a03];
                       [a03][4]acrossfade=d=1:c1=nofade" \
  output.mp3


    


    But I have a feeling I have a fundamental misunderstanding of this, because I got this error which I don't understand:

    


    Stream specifier 'call' in filtergraph description 
[0][call]acrossfade=d=1:c2=nofade[a01];
[a01][2]acrossfade=d=1:c1=nofade[a02];
[a02][response]acrossfade=d=1:c2=nofade[a03];
[a03][4]acrossfade=d=1:c1=nofade
       matches no streams.


    


    For added context, I'm running all these commands through @ffmpeg/ffmpeg, so that last command actually looks like this (in JavaScript):

    


    await ffmpeg.run(
  '-i', 'intro.mp3',
  '-i', 'call.mp3',
  '-i', 'interstitial.mp3',
  '-i', 'response.mp3',
  '-i', 'outro.mp3',
  '-af', '[1]silenceremove=1:0:-50dB[trimmedCall]',
  '-af', '[3]silenceremove=1:0:-50dB[trimmedResponse]',
  '-af', '[trimmedCall]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceCall]',
  '-af', '[trimmedResponse]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceResponse]',
  '-af', '[noSilenceCall]loudnorm=I=-16:LRA=11:TP=0.0[call]',
  '-af', '[noSilenceResponse]loudnorm=I=-16:LRA=11:TP=0.0[response]',
  '-filter_complex', `
[0][call]acrossfade=d=1:c2=nofade[a01];
[a01][2]acrossfade=d=1:c1=nofade[a02];
[a02][response]acrossfade=d=1:c2=nofade[a03];
[a03][4]acrossfade=d=1:c1=nofade
  `,
  'output.mp3',
)
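
    For reference, an editorial note that is not part of the original question: -af defines a simple filtergraph applied to a single output stream, so labels such as [call] created there are never visible to -filter_complex, which is why ffmpeg reports that the stream specifier 'call' matches no streams. Moving every step into one -filter_complex avoids the intermediate files entirely; a hedged sketch, assuming the same filter chains as above:

    ffmpeg -i intro.mp3 -i call.mp3 -i interstitial.mp3 -i response.mp3 -i outro.mp3 \
      -filter_complex "[1:a]silenceremove=1:0:-50dB,silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB,loudnorm=I=-16:LRA=11:TP=0.0[call];
                       [3:a]silenceremove=1:0:-50dB,silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB,loudnorm=I=-16:LRA=11:TP=0.0[response];
                       [0:a][call]acrossfade=d=1:c2=nofade[a01];
                       [a01][2:a]acrossfade=d=1:c1=nofade[a02];
                       [a02][response]acrossfade=d=1:c2=nofade[a03];
                       [a03][4:a]acrossfade=d=1:c1=nofade" \
      output.mp3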