Advanced search

Media (1)

Keyword: - Tags -/blender

Other articles (36)

  • Changing the publication date

    21 June 2013

    How do you change the publication date of a media item?
    You first need to add a "Publication date" field to the appropriate form template:
    Administer > Form template configuration > Select "A media item"
    In the "Fields to add" section, check "Publication date"
    Click Save at the bottom of the page

  • The plugin: Podcasts.

    14 July 2010

    Podcasting is once again an issue that highlights the standardization of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, heavily geared toward iTunes, whose SPEC is available here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following types in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
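    As a minimal Python sketch, the extension/MIME-type pairs spelled out in the excerpt above can be kept in a small lookup table; the list is truncated in the excerpt, so anything beyond the pairs it names is left out rather than guessed:

      # Extension -> MIME type pairs named in the excerpt above.
      # The source list is truncated ("..."), so only the pairs it spells out appear here.
      APPLE_PODCAST_ENCLOSURE_TYPES = {
          ".mp3": "audio/mpeg",
          ".m4a": "audio/x-m4a",
          # ".mp4" is also named, but its MIME type is cut off in the excerpt.
      }

      def has_supported_extension(filename: str) -> bool:
          # True if the file uses one of the extensions listed above.
          return filename.lower().endswith(tuple(APPLE_PODCAST_ENCLOSURE_TYPES))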

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has been, it becomes greyed out in the configuration and (...)

On other sites (9786)

  • OSX MistServer/FFMPEG: RTMP Input/Output error

    28 September 2017, by brewcrazy

    I have an IP camera that outputs an RTSP stream, which I’m trying to use to display a live feed on my website. This is a small site that only my wife and I will access, so I’m trying to use a free streaming service. For that reason, I’ve decided to try MistServer’s open source option.

    I currently have MistServer downloaded and running without installation on my Mac (sudo ./MistController). With MistServer running, I have a stream set up and the default protocols configured. The stream is configured as follows:

    stream name: ipcam
    source: push://

    The configuration page gives me the following source to push to:

    RTMP full url: rtmp://127.0.0.1/live/ipcam
    RTMP url: rtmp://127.0.0.1/live/
    RTMP stream key: ipcam

    In the streams view, the stream’s status is unavailable, but I’m assuming this is because it isn’t receiving an input. I haven’t been able to confirm this via documentation.

    Here is the FFMPEG command that I am running and the error that I’m getting:

    ffmpeg -rtsp_transport tcp -i rtsp://<user>:@:554/live0.264 -acodec copy -vcodec copy -f flv rtmp://127.0.0.1/live/ipcam

    ffmpeg version 3.3.3 Copyright (c) 2000-2017 the FFmpeg developers
     built with Apple LLVM version 8.1.0 (clang-802.0.42)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/3.3.3 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --enable-videotoolbox --disable-lzma --enable-vda
     libavutil      55. 58.100 / 55. 58.100
     libavcodec     57. 89.100 / 57. 89.100
     libavformat    57. 71.100 / 57. 71.100
     libavdevice    57.  6.100 / 57.  6.100
     libavfilter     6. 82.100 /  6. 82.100
     libavresample   3.  5.  0 /  3.  5.  0
     libswscale      4.  6.100 /  4.  6.100
     libswresample   2.  7.100 /  2.  7.100
     libpostproc    54.  5.100 / 54.  5.100
    Guessed Channel Layout for Input Stream #0.1 : mono
    Input #0, rtsp, from 'rtsp://admin:@192.168.10.112:554/live0.264':
     Metadata:
       title           : Session Streamed by LIBZRTSP
       comment         : live0.264
     Duration: N/A, start: 0.242000, bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1280x720, 25 fps, 24.83 tbr, 90k tbn, 50 tbc
       Stream #0:1: Audio: pcm_mulaw, 8000 Hz, mono, s16, 64 kb/s
    rtmp://127.0.0.1/live/ipcam: Input/output error

    I can’t determine from this error if the issue is the FFMPEG command or my MistServer configuration.
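    One way to narrow this down is to exercise each leg of the pipeline on its own: record a short clip from the RTSP input, and push a synthetic test pattern to the RTMP ingest URL so a camera problem cannot mask an ingest problem. Below is a minimal sketch, assuming ffmpeg is on PATH and reusing the URLs quoted above:

      # Sketch: isolate whether the RTSP pull or the RTMP push is the failing leg.
      import subprocess

      RTSP_IN = "rtsp://admin:@192.168.10.112:554/live0.264"   # input URL from the log above
      RTMP_OUT = "rtmp://127.0.0.1/live/ipcam"                  # MistServer "RTMP full url"

      # 1) Can the camera be read at all? Record ~10 seconds of video to a local file
      #    (video only, to keep the container choice simple).
      subprocess.run([
          "ffmpeg", "-nostdin", "-rtsp_transport", "tcp",
          "-i", RTSP_IN, "-c:v", "copy", "-an", "-t", "10", "rtsp_check.mp4",
      ], check=True)

      # 2) Does MistServer accept a push on that URL? Send a generated test pattern
      #    instead of the camera feed.
      subprocess.run([
          "ffmpeg", "-nostdin", "-re",
          "-f", "lavfi", "-i", "testsrc=size=1280x720:rate=25",
          "-c:v", "libx264", "-pix_fmt", "yuv420p",
          "-f", "flv", RTMP_OUT,
      ], check=True)

    If step 1 fails, the problem is on the camera/RTSP side; if step 2 is rejected with the same Input/output error, the MistServer push configuration is the more likely culprit.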

  • Why is ffmpeg’s hstack so much slower than overlay and pad?

    27 January 2021, by cgenco

    I’m using ffmpeg to stitch together two videos of people chatting into a single video with the two of them side by side, like this:


    left.mp4 + right.mp4 = out.mp4


    Here’s the command I’m currently using to get this done, which runs at 2.5x on my 13" M1 MacBook Pro:


    ffmpeg -y -i left.mp4 -i right.mp4 -filter_complex "
      [0:v] crop=w=in_w/2 [croppedLeft];
      [1:v][1:v] overlay=x=overlay_w/4 [shiftedRight];
      [shiftedRight][croppedLeft] overlay [vout];
      [0:a][1:a] amix [aout]
    " -map "[vout]" -map "[aout]" -ac 2 out.mp4


    This command crops the left video to half of its original width (cropping so the video is centered), then shifts the right video a quarter of its width to the right, then overlays the left video on the left half of the output merged with the shifted right video.


    One day, on my weekly fun-time read-through of the FFmpeg filters documentation, I stumbled on a filter named hstack, which is described as being "faster than using overlay and pad filter to create same output."


    My ex-wife can affirm that there are few higher priorities in my life than going faster, so I altered my ffmpeg script to use hstack instead of two overlays:


    ffmpeg -y -i left.mp4 -i right.mp4 -filter_complex "
      [0:v] crop=w=in_w/2 [croppedLeft];
      [1:v] crop=w=in_w/2 [croppedRight];
      [croppedLeft][croppedRight] vstack [vout];
      [0:a][1:a] amix [aout]
    " -map "[vout]" -map "[aout]" -ac 2 out.mp4


    ...but that command runs painfully slowly, like 0.1x. It takes multiple minutes to render a single second.


    So uhhh what’s going on here? Why is hstack taking so long when it’s supposed to be faster?


    I’ve tried this on both the M1 native build from OSXExperts (version N-99816-g3da35b7) and the standard ffmpeg from brew, and hstack is just as slow on each.

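    For comparison, a side-by-side layout built on hstack crops each input to half width and then stacks the two halves horizontally; note that the command quoted above actually calls vstack (vertical stacking) rather than hstack. A minimal sketch of the hstack variant, wrapped in subprocess and using the file names from the question:

      # Sketch: crop both inputs to half width and place them side by side with hstack.
      import subprocess

      filter_complex = (
          "[0:v] crop=w=in_w/2 [croppedLeft];"
          "[1:v] crop=w=in_w/2 [croppedRight];"
          "[croppedLeft][croppedRight] hstack=inputs=2 [vout];"
          "[0:a][1:a] amix [aout]"
      )

      subprocess.run([
          "ffmpeg", "-y", "-i", "left.mp4", "-i", "right.mp4",
          "-filter_complex", filter_complex,
          "-map", "[vout]", "-map", "[aout]", "-ac", "2", "out.mp4",
      ], check=True)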

  • Python get audio data from rtsp stream

    22 April 2021, by smashedbotatos

    I am trying to get audio data from an RTSP stream that is encoded as mu-law PCM, using Python 3.7. I want to be able to place it in a numpy array, like I can do with PyAudio, and then record whenever there is sound; the stream doesn't always carry audio.


    This is how I coded it with PyAudio using a physical input. Basically, I want to do the same thing, but with an RTSP stream from a URL instead.


    # Excerpt from the asker's class-based script: constants such as FORMAT, CHANNELS,
    # RATE, chunk, Threshold, TIMEOUT_LENGTH, swidth and SHORT_NORMALIZE are defined
    # elsewhere in it.
    import math
    import struct
    import time

    import pyaudio

    p = pyaudio.PyAudio()
    stream = self.p.open(format=FORMAT,
                         channels=CHANNELS,
                         rate=RATE,
                         input=True,
                         output=True,
                         frames_per_buffer=chunk)

    def listen(self):
      print('Listening beginning')
      while True:
          input = self.stream.read(chunk)
          rms_val = self.rms(input)
          if rms_val > Threshold:
              record()

    def record():
        print('Noise detected, recording beginning')
        rec = []
        rec_start = time.time()
        current = time.time()
        end = time.time() + TIMEOUT_LENGTH

        while current <= end:

            data = self.stream.read(chunk)
            if rms(data) >= Threshold: end = time.time() + 2

            current = time.time()
            rec.append(data)

    def rms(frame):
        count = len(frame) / swidth
        format = "%dh" % (count)
        shorts = struct.unpack(format, frame)
        sum_squares = 0.0
        for sample in shorts:
            n = sample * SHORT_NORMALIZE
            sum_squares += n * n
        rms = math.pow(sum_squares / count, 0.5)
        return rms * 1000


    Here is what I have tried with ffmpeg, but it just freezes with no error and doesn't print any data. It even crashes the IoT device that serves the RTSP stream. Is there a way I can do this with urllib or requests, or even with an ffmpeg command opened through subprocess?


    import ffmpeg

    packet_size = 4096

    process = ffmpeg.input('rtsp://192.168.1.122:554/au:scanner.au').output('-', format='mulaw').run_async(pipe_stdout=True)
    packet = process.stdout.read(packet_size)

    while process.poll() is None:
        packet = process.stdout.read(packet_size)
        print(packet)


    My end goal is to do two things: one, record a WAV file when there is audio; two, convert the recorded WAV to Opus and MP3 and upload the results over SFTP.

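    Following the subprocess route mentioned in the question, a minimal sketch is to have ffmpeg decode the RTSP audio to raw signed 16-bit PCM on stdout and run an RMS check on each chunk. The URL is the one quoted above; the sample rate and threshold are assumptions to tune:

      # Sketch: read RTSP audio through an ffmpeg subprocess and gate on RMS level.
      import audioop      # stdlib RMS helper (available in Python 3.7)
      import subprocess

      RTSP_URL = "rtsp://192.168.1.122:554/au:scanner.au"  # URL from the attempt above
      CHUNK = 4096        # bytes of raw PCM per read
      THRESHOLD = 500     # assumed starting point; tune by watching printed values

      # Decode the mu-law stream to mono 16-bit PCM and write it to stdout.
      proc = subprocess.Popen(
          [
              "ffmpeg", "-nostdin", "-loglevel", "error",
              "-rtsp_transport", "tcp", "-i", RTSP_URL,
              "-vn", "-acodec", "pcm_s16le", "-ac", "1", "-ar", "8000",
              "-f", "s16le", "-",
          ],
          stdout=subprocess.PIPE,
      )

      while True:
          data = proc.stdout.read(CHUNK)
          if not data:
              break                      # stream ended or ffmpeg exited
          level = audioop.rms(data, 2)   # 2 = bytes per sample for s16le
          print(level)
          if level > THRESHOLD:
              pass                       # start/extend a recording here, as in the PyAudio version

    The threshold-triggered recording and the later WAV-to-Opus/MP3 conversion can then reuse the same pattern as the PyAudio version above.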