Other articles (76)

  • List of compatible distributions

    26 April 2011, by

    The table below is the list of Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name            Version number
    Debian              Squeeze                 6.x.x
    Debian              Wheezy                  7.x.x
    Debian              Jessie                  8.x.x
    Ubuntu              The Precise Pangolin    12.04 LTS
    Ubuntu              The Trusty Tahr         14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...)

On other sites (10785)

  • Using FFmpeg to split a 16-channel audio input source into 4 separate 4-channel audio feeds for streaming

    30 December 2019, by Mathew Knight

    I hope someone can help

    I am currently trying to split a 16-channel Dante audio feed from a separate machine into 4 different audio streams that I can then transmit via RTMP to Wowza for MPEG-DASH encoding. At present I am just trying to split them into files; I will add the RTMP streaming later.

    The biggest issue I am encountering at the moment is that FFmpeg returns the following error for my command:

    Filter channelsplit:WR has an unconnected output

    Here is my current command:

    ffmpeg -f dshow -i audio="Dante Via Receive (Dante Via)" -filter_complex "[0:a]channelsplit=channel_layout=hexadecagonal[FL][FR][FC][BL][BR][BC][SL][SR][TFL][TFC][TFR][TBL][TBC][TBR][WL][WR]" -map "[FL][FR][FC][BL]" 1-4.wav -map "[BR][BC][SL][SR]" 5-8.wav -map "[TFL][TFC][TFR][TBL]" 9-12.wav -map "[TBC][TBR][WL][WR]" 13-16.wav

    And here is the full FFmpeg output:

    ffmpeg version git-2019-12-26-b0d0d7e Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 9.2.1 (GCC) 20191125
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
     libavutil      56. 37.100 / 56. 37.100
     libavcodec     58. 65.100 / 58. 65.100
     libavformat    58. 35.101 / 58. 35.101
     libavdevice    58.  9.101 / 58.  9.101
     libavfilter     7. 69.101 /  7. 69.101
     libswscale      5.  6.100 /  5.  6.100
     libswresample   3.  6.100 /  3.  6.100
     libpostproc    55.  6.100 / 55.  6.100
    Guessed Channel Layout for Input Stream #0.0 : stereo
    Input #0, dshow, from 'audio=Dante Via Receive (Dante Via)':
     Duration: N/A, start: 103082.790000, bitrate: 1411 kb/s
       Stream #0:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
    File '1-4.wav' already exists. Overwrite? [y/N] y
    File '5-8.wav' already exists. Overwrite? [y/N] y
    File '9-12.wav' already exists. Overwrite? [y/N] y
    File '13-16.wav' already exists. Overwrite? [y/N] y
    Filter channelsplit:WR has an unconnected output

    I am also getting the issue where FFmpeg guesses that the channel count is stereo, which is incorrect, but I am having problems figuring out how to define the input stream as 16 channels of audio.

    Any help with this would be greatly received.

    Cheers

    M
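
    A possible direction (an untested sketch, not from the original question): -map cannot combine several filter pads into one audio stream, so most of the channelsplit outputs end up unconnected, which is what the error reports. One way to keep every pad connected is to regroup the split channels into four 4-channel streams with amerge inside the same filter graph; the dshow channels option, plus an aformat in front of channelsplit, can be used to ask for 16 input channels instead of the guessed stereo layout, assuming the Dante Via device actually exposes them:

    ffmpeg -f dshow -channels 16 -i audio="Dante Via Receive (Dante Via)" -filter_complex "[0:a]aformat=channel_layouts=hexadecagonal,channelsplit=channel_layout=hexadecagonal[FL][FR][FC][BL][BR][BC][SL][SR][TFL][TFC][TFR][TBL][TBC][TBR][WL][WR];[FL][FR][FC][BL]amerge=inputs=4[g1];[BR][BC][SL][SR]amerge=inputs=4[g2];[TFL][TFC][TFR][TBL]amerge=inputs=4[g3];[TBC][TBR][WL][WR]amerge=inputs=4[g4]" -map "[g1]" 1-4.wav -map "[g2]" 5-8.wav -map "[g3]" 9-12.wav -map "[g4]" 13-16.wav

    Each of the [g1]..[g4] labels then carries a 4-channel stream, which can later be mapped to an RTMP output instead of a WAV file.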

  • FFmpeg detect silence command runs correctly but does not give the silence duration

    7 January 2020, by Aizayousaf

    I have a .wav audio file and I need to extract the silence/pause durations in this file. I am using ffmpeg with the silencedetect filter, but I cannot understand why it does not give the silence durations for this file while it gives results for other files. Can anyone help me understand, from the output below, why it is not showing any detected silences?

    Input command:

    ffmpeg -i "input.wav" -af silencedetect=noise=-30dB:d=0.5 -f null -

    Output:

    ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
    built with gcc 9.1.1 (GCC) 20190807
    configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt

    libavutil      56. 31.100 / 56. 31.100
    libavcodec     58. 54.100 / 58. 54.100
    libavformat    58. 29.100 / 58. 29.100
    libavdevice    58.  8.100 / 58.  8.100
    libavfilter     7. 57.100 /  7. 57.100
    libswscale      5.  5.100 /  5.  5.100
    libswresample   3.  5.100 /  3.  5.100
    libpostproc    55.  5.100 / 55.  5.100

    Guessed Channel Layout for Input Stream #0.0 : stereo
    Input #0, wav, from 'D:\Research\PhD\Carolina\AD\wav\media.io_Wakeman_Rhyne_001_01.wav':
    Duration: 00:17:38.04, bitrate: 1411 kb/s
    Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Stream mapping:
    Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_s16le (native))
    Press [q] to stop, [?] for help
    Output #0, null, to 'pipe:':
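
    A hedged note, not part of the original question: silencedetect writes its silence_start / silence_end / silence_duration messages to stderr, mixed in with the log above, and only does so when it finds a stretch of at least d seconds below the noise threshold. If nothing appears after the output header, it usually means that no portion of the file stays below -30dB for 0.5 seconds, for example because of low-level background noise. Raising the noise floor, shortening the minimum duration, and keeping the log in a file makes this easier to verify:

    ffmpeg -i "input.wav" -af silencedetect=noise=-25dB:d=0.3 -f null - 2> silence.log

    The resulting silence.log can then be searched for lines containing "silence_" (grep on Linux, findstr on Windows) to confirm whether the filter reports anything with the looser settings.
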
  • PyQt thread. Get output dynamically

    1 January 2020, by ZPro

    I use PyQt threads for parallel conversion of mp3 files to aac via ffmpeg.
    Here is my code:

    class SubprocessThread(QThread):
       signal = pyqtSignal('PyQt_PyObject')

       def __init__(self, command, args):
           QThread.__init__(self)
           self.command = command
           self.args = args

       def __del__(self):
           self.wait()

       def run(self):
           output = subprocess.check_output('{0} {1}'.format(self.command, self.args), shell=True).split()
           self.signal.emit(output)

    And here is an example of usage:

    threads = []

    for part in parts.keys():
        args = "-i \'{0}.mp3\' -c:a aac -b:a {1}k \'{2}.m4a\'".format(
            os.path.join(tmp_dir, str(part)),
            int(self.bitrate_cbx.currentText()),
            os.path.join(tmp_dir, str(part)))
        print(args)  # debug
        ffmpeg_thread = SubprocessThread('ffmpeg', args)
        ffmpeg_thread.signal.connect(self.on_data_ready)
        threads.append(ffmpeg_thread)
        ffmpeg_thread.start()
        self.threads_count += 1

    I want to build a progress bar based on the conversion progress, but ffmpeg keeps rewriting the last line of its output while the conversion is in progress.
    Here is an example of ffmpeg output while files are converting:

    user@host$ ffmpeg -i '/home/user/001.mp3' -c:a aac -b:a 128k -vn '/home/user/test.m4a'
    ffmpeg version n4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 9.2.0 (GCC)
     configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-fontconfig --enable-gmp --enable-gnutls --enable-gpl --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libiec61883 --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxml2 --enable-libxvid --enable-nvdec --enable-nvenc --enable-omx --enable-shared --enable-version3
     libavutil      56. 31.100 / 56. 31.100
     libavcodec     58. 54.100 / 58. 54.100
     libavformat    58. 29.100 / 58. 29.100
     libavdevice    58.  8.100 / 58.  8.100
     libavfilter     7. 57.100 /  7. 57.100
     libswscale      5.  5.100 /  5.  5.100
     libswresample   3.  5.100 /  3.  5.100
     libpostproc    55.  5.100 / 55.  5.100
    Input #0, mp3, from '/home/user/001.mp3':
     Metadata:
       encoder         : Lavf57.41.100
       title           : test
       artist          : test
       album_artist    : test
       album           : test
       composer        : test
       genre           : test
       date            : 2018
     Duration: 00:12:38.02, start: 0.025056, bitrate: 192 kb/s
       Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s
       Metadata:
         encoder         : Lavc57.48
       Stream #0:1: Video: mjpeg (Baseline), yuvj420p(pc, bt470bg/unknown/unknown), 500x500 [SAR 1:1 DAR 1:1], 90k tbr, 90k tbn, 90k tbc (attached pic)
       Metadata:
         comment         : Cover (front)
    Stream mapping:
     Stream #0:0 -> #0:0 (mp3 (mp3float) -> aac (native))
    Press [q] to stop, [?] for help
    Output #0, ipod, to '/home/user/test.m4a':
     Metadata:
       date            : test
       title           : test
       artist          : test
       album_artist    : test
       album           : test
       composer        : test
       genre           : test
       encoder         : Lavf58.29.100
       Stream #0:0: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s
       Metadata:
         encoder         : Lavc58.54.100 aac
    size=   12107kB time=00:12:38.01 bitrate= 130.8kbits/s speed=79.2x

    How can I receive this data (the line that begins with "size=...") from my parallel QThreads so that I can calculate the overall progress?
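
    One possible reworking (an untested sketch; the PyQt5 import path and the StreamingSubprocessThread name are assumptions, not from the original code): subprocess.check_output only returns once ffmpeg has exited, so the signal fires a single time at the very end. Reading ffmpeg's stderr inside run() and emitting each completed "size=... time=..." update lets the GUI receive progress while the conversion is still running:

    import subprocess

    from PyQt5.QtCore import QThread, pyqtSignal  # assumption: PyQt5 is the binding in use


    class StreamingSubprocessThread(QThread):
        # hypothetical variant of the question's SubprocessThread
        signal = pyqtSignal('PyQt_PyObject')

        def __init__(self, command, args):
            QThread.__init__(self)
            self.command = command
            self.args = args

        def run(self):
            # ffmpeg prints its log and progress on stderr; the progress line
            # is rewritten in place, terminated by '\r' rather than '\n'.
            proc = subprocess.Popen(
                '{0} {1}'.format(self.command, self.args),
                shell=True,
                stderr=subprocess.PIPE,
                universal_newlines=True)
            line = ''
            while True:
                ch = proc.stderr.read(1)
                if not ch:          # stderr closed: ffmpeg has exited
                    break
                if ch in ('\r', '\n'):
                    if line.startswith('size='):
                        # e.g. "size=   12107kB time=00:12:38.01 bitrate= 130.8kbits/s speed=79.2x"
                        self.signal.emit(line)
                    line = ''
                else:
                    line += ch
            proc.wait()

    The slot connected to the signal (on_data_ready in the question) can then parse the time= value out of each emitted line and compare it with the duration of the corresponding part to compute an overall percentage; ffmpeg's -progress option, which writes machine-readable key=value progress lines to a file or pipe, is an alternative worth considering.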