

Other articles (54)

  • Participate in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (10293)

  • FFMPEG: Generate a 7.1 channel audio file with the duration of the longest input file

    10 June 2020, by Anthony

    I want to use ffmpeg to generate a 7.1 channel audio file from 8 different audio files.
    But I found that the output file's duration is determined by the input file with the shortest duration.
    I couldn't find any parameter to auto-pad the shorter audio files or to use the longest duration as the final duration.

    I have already looked through the official documentation below.
    https://ffmpeg.org/ffmpeg-all.html
    https://trac.ffmpeg.org/wiki/AudioChannelManipulation
    But nothing there was helpful.

    This is the command I am using right now:

    ffmpeg -i fl.wav -i fr.wav -i fc.wav -i lfe.wav -i bl.wav -i bl.wav -i sl.wav -i sr.wav -filter_complex "[0:a][1:a][2:a][3:a][4:a][5:a][6:a][7:a]join=inputs=8:channel_layout=7.1[a]" -map "[a]" output.wav

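    One possible workaround (a sketch, not part of the original question) is to pad every input with silence using the apad filter, so that join never runs out of any input, and then cut the output at the longest input's duration with -t. The 60 below is a placeholder for that duration, which could be read beforehand with ffprobe, and br.wav stands in for the back-right channel:

    ffmpeg -i fl.wav -i fr.wav -i fc.wav -i lfe.wav -i bl.wav -i br.wav -i sl.wav -i sr.wav -filter_complex "[0:a]apad[a0];[1:a]apad[a1];[2:a]apad[a2];[3:a]apad[a3];[4:a]apad[a4];[5:a]apad[a5];[6:a]apad[a6];[7:a]apad[a7];[a0][a1][a2][a3][a4][a5][a6][a7]join=inputs=8:channel_layout=7.1[a]" -map "[a]" -t 60 output.wav
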
  • Python: Passing complex (ffmpeg) arguments to Popen

    25 June 2016, by xaccrocheur

    This ffmpeg Popen invocation works:

    command = ['ffmpeg', '-y',
              '-i', filename,
              '-filter_complex', 'showwavespic',
              '-colorkey', 'red',
              '-frames:v', '1',
              '-s', '800:30',
              '-vsync', '2',
              '/tmp/waveform.png']
    process = sp.Popen( command, stdin=sp.PIPE, stderr=sp.PIPE)
    process.wait()

    But I need to use 'compand, showwavespic', and this comma seems to be blocking the execution. I also need to pass all sorts of special characters, like colons and, well, everything you can find in a CLI invocation.

    How can I pass complex arguments?
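
    As a point of comparison, here is a minimal sketch (not from the original post) with the chained filter passed as a single list element; since Popen is called without shell=True, the comma, colons and other characters are passed through verbatim and need no quoting or escaping. The input file name is hypothetical:

    import subprocess as sp

    filename = 'input.wav'  # hypothetical input file
    # The whole filter chain, comma included, is ONE list element;
    # with the default shell=False there is no shell to interpret it.
    command = ['ffmpeg', '-y',
               '-i', filename,
               '-filter_complex', 'compand,showwavespic',
               '-frames:v', '1',
               '/tmp/waveform.png']
    process = sp.Popen(command, stdin=sp.PIPE, stderr=sp.PIPE)
    process.communicate()  # drain stderr so the pipe cannot fill up and block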

  • Transcode HLS Segments individually using FFMPEG

    27 May 2013, by rayh

    I am recording a continuous live stream to a high-bitrate HLS stream. I then want to asynchronously transcode this into different formats/bitrates. I have this mostly working, except that audio artefacts appear between segments (gaps and pops).

    Here is an example ffmpeg command line:

    ffmpeg -threads 1 -nostdin -loglevel verbose \
      -nostdin -y -i input.ts -c:a libfdk_aac \
      -ac 2 -b:a 64k -y -metadata -vn output.ts

    Inspecting an example sound file shows that there is a gap at the end of the audio:

    [waveform screenshot: end of the transcoded segment]

    And the start of the file looks suspiciously attenuated (although this may not be an issue):

    [waveform screenshot: start of the transcoded segment]

    My suspicion is that these artefacts appear because the transcoding occurs without the context of the stream as a whole.

    Any ideas on how to convince FFMPEG to produce audio that will fit back into an HLS stream?

    ** UPDATE 1 **

    Here are the start and end of the original segment. As you can see, the start still appears the same, but the end finishes cleanly at 30 s. I expect some degree of padding with lossy encoding, but presumably there is some way that HLS manages to do gapless playback (is this related to the iTunes method with custom metadata?)

    [waveform screenshot: original segment start]
    [waveform screenshot: original segment end]

    ** UPDATE 2 **

    So, I converted both the original (128k AAC in an MPEG-2 TS) and the transcoded version (64k AAC in an AAC/ADTS container) to WAV and put the two side by side. This is the result:

    [waveform screenshot: side-by-side comparison, start]
    [waveform screenshot: side-by-side comparison, end]

    I'm not sure if this is representative of how a client will play it back, but it seems a bit odd that decoding the transcoded one introduces a gap at the start and makes the segment longer. Given that both are lossy encodings, I would have expected padding to be equally present in both (if at all).

    ** UPDATE 3 **

    According to http://en.wikipedia.org/wiki/Gapless_playback, only a handful of encoders support gapless playback. For MP3, I've switched to lame in ffmpeg, and the problem, so far, appears to be gone.

    For AAC (see http://en.wikipedia.org/wiki/FAAC), I have tried libfaac (as opposed to libfdk_aac) and it also seems to produce gapless audio. However, libfaac's quality isn't that great and I'd rather use libfdk_aac if possible.
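
    For reference, a sketch of the segment transcode above with libfaac substituted for libfdk_aac, as described in update 3. This is untested and assumes an FFmpeg build of that era with libfaac enabled (libfaac support was later removed from FFmpeg); the dangling -metadata flag from the original command is omitted:

    ffmpeg -threads 1 -nostdin -loglevel verbose \
      -y -i input.ts -c:a libfaac \
      -ac 2 -b:a 64k -vn output.ts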