Media (5)

Other articles (58)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (10846)

  • Why am I getting the FFmpeg "cannot set sample format 0x10000 2 (Invalid argument)" error? [duplicate]

    30 March 2020, by Embedded

    I'm trying to stream audio input from the Seeed Studio 4-Mic-Array ReSpeaker to a desired IP address and port on a Raspberry Pi 4.

    The command I'm using is:

    ffmpeg -re -f alsa -i hw:1,0 -ac 4 -ar 16000 -f rtp rtp://10.0.0.1:1234

    The error I'm getting is:

    cannot set sample format 0x10000 2 (Invalid argument)

    The result of the arecord --dump-hw-params -D hw:1,0 command is:

    Recording WAVE 'stdin' : Unsigned 8 bit, Rate 8000 Hz, Mono
    HW Params of device "hw:1,0":
    --------------------
    ACCESS:  MMAP_INTERLEAVED RW_INTERLEAVED
    FORMAT:  S32_LE
    SUBFORMAT:  STD
    SAMPLE_BITS: 32
    FRAME_BITS: 128
    CHANNELS: 4
    RATE: [8000 48000]
    PERIOD_TIME: (333 2048000]
    PERIOD_SIZE: [16 16384]
    PERIOD_BYTES: [256 262144]
    PERIODS: [2 2048]
    BUFFER_TIME: (666 4096000]
    BUFFER_SIZE: [32 32768]
    BUFFER_BYTES: [512 524288]
    TICK_TIME: ALL
    --------------------
    arecord: set_params:1299: Sample format non available
    Available formats:
    - S32_LE
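
    One common resolution (a sketch, not part of the original post): the hardware dump shows the device only supports S32_LE, while ffmpeg's ALSA input negotiates a 16-bit sample format by default. Opening the device through ALSA's "plug" layer lets ALSA convert the sample format transparently:

    ```shell
    # plughw (instead of hw) inserts ALSA's format-conversion plugin,
    # so the S32_LE-only hardware no longer rejects ffmpeg's request.
    ffmpeg -re -f alsa -i plughw:1,0 -ac 4 -ar 16000 -f rtp rtp://10.0.0.1:1234
    ```

    The trade-off is that the conversion happens in software; addressing the raw hw device would require the capture side to use 32-bit samples natively.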
  • How to Locate Timestamps of a Subset of a Video

    29 May 2020, by Zhanwen Chen

    I have never done any video-based programming before, and although this SuperUser post provides a way to do it on the command line, I prefer a programmatic approach, preferably with Python.

    I have a bunch of sub-videos. Suppose one of them is called 1234_trimmed.mp4, which is a short segment cut from the original, much longer video 1234.mp4. How can I figure out the start and end timestamps of 1234_trimmed.mp4 inside 1234.mp4?

    FYI, the videos are all originally on YouTube anyway ("1234" corresponds to the YouTube video ID) if there's any shortcut that way.

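
    One possible programmatic approach (a sketch built on assumptions not in the original post): extract a per-frame signature, such as the mean pixel intensity, from both videos (e.g. with OpenCV or ffmpeg), then locate the trimmed clip's signature sequence inside the full video's sequence. Dividing the matched frame index by the frame rate gives the start timestamp; adding the clip's length gives the end.

    ```python
    def find_offset(full_sigs, clip_sigs, tol=1.0):
        """Return the frame index in full_sigs where clip_sigs best matches,
        or -1 if no window matches within tol mean absolute error."""
        n, m = len(full_sigs), len(clip_sigs)
        best_idx, best_err = -1, tol
        for i in range(n - m + 1):
            err = sum(abs(a - b) for a, b in zip(full_sigs[i:i + m], clip_sigs)) / m
            if err < best_err:
                best_idx, best_err = i, err
        return best_idx

    # Synthetic per-frame signatures (mean intensities) for illustration;
    # in practice these would come from decoding the two videos.
    full = [10, 12, 50, 52, 54, 30, 20]
    clip = [50, 52, 54]              # trimmed segment starts at frame 2
    start_frame = find_offset(full, clip)
    fps = 25.0                       # hypothetical frame rate
    start_seconds = start_frame / fps
    ```

    The tolerance absorbs small re-encoding differences between the trimmed file and the original; exact equality would rarely hold after lossy compression.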
  • FFmpeg : stream audio playlist, normalize loudness and generate spectrogram and waveform

    23 February 2021, by watain

    I would like to use FFmpeg to stream a playlist containing multiple audio files (mainly FLAC and MP3). During playback, I would like FFmpeg to normalize the loudness of the audio signal and to generate a spectrogram and a waveform of the signal separately. The spectrogram and waveform should serve as an audio-stream monitor. The final audio stream, spectrogram, and waveform outputs will be sent to a browser, which plays the audio stream and continuously renders the spectrogram and waveform "images". I would also like to be able to add and remove audio files from the playlist during playback.

    As a first step, I would like to use the ffmpeg command to achieve the desired result, before I try to write code that does the same programmatically.

    (Side note: I've discovered libgroove, which basically does what I want, but I would like to understand the FFmpeg internals and write my own piece of software. The target language is Go, and either the goav or go-libav libraries might do the job. However, I might end up writing the code in C and then creating Go language bindings, instead of relying on one of the named libraries.)

    Here's a little overview :

    playlist (input) --> loudnorm --> split --> spectrogram --> separate output
                                        |
                                        +----> waveform ----> separate output
                                        |
                                        +----> encode ------> audio stream output

    For the loudness normalization, I intend to use the loudnorm filter, which implements the EBU R128 algorithm.
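
    As a sketch (the filenames are placeholders), a basic one-pass loudnorm invocation looks like:

    ```shell
    # One-pass EBU R128 normalization: -16 LUFS integrated loudness,
    # -1.5 dBTP true peak ceiling, 11 LU loudness range.
    ffmpeg -i input.flac -af loudnorm=I=-16:TP=-1.5:LRA=11 normalized.flac
    ```

    For better accuracy, loudnorm is usually run in two passes: a first pass with print_format=json to measure the input's loudness, and a second pass that feeds the measured values back via the measured_* options.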

    For the spectrogram, I intend to use the showspectrum or showspectrumpic filter. Since I want the spectrogram to be "streamable", I'm not really sure how to do this. Maybe there's a way to output segments step by step? Or maybe there's a way to output some sort of representation (JSON or any other format) step by step?

    For the waveform, I intend to use the showwaves or showwavespic filter. The same considerations as for the spectrogram apply here, since the output should be "streamable".

    I'm having a little trouble to achieve what I want using the ffmpeg command. Here's what I have so far :

    ffmpeg \
    -re -i input.flac \
    -filter_complex "
      [0:a] loudnorm [ln]; \
      [ln] asplit [a][b]; \
      [a] showspectrumpic=size=640x518:mode=combined [ss]; \
      [b] showwavespic=size=1280x202 [sw]
    " \
    -map '[ln]' -map '[ss]' -map '[sw]' \
    -f tee \
    -acodec libmp3lame -ab 128k -ac 2 -ar 44100 \
    "
      [aselect='ln'] rtp://127.0.0.1:1234 | \
      [aselect='ss'] ss.png | \
      [aselect='sw'] sw.png
    "

    Currently, I get the following error:

    Output with label 'ln' does not exist in any defined filter graph, or was already used elsewhere.

    Also, I'm not sure whether aselect is the correct functionality to use. Any Hints ?