
Other articles (103)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in OGG (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Monitoring MediaSPIP farms (and SPIP too, while we're at it)

    31 May 2013, by

    When you manage several (or even several dozen) MediaSPIP instances on the same installation, it can be very handy to see certain information at a glance.
    This article documents the Munin monitoring scripts developed with the help of Infini.
    These scripts are installed automatically by the automatic installation script if a Munin installation is detected.
    Description of the scripts
    Three Munin scripts have been developed:
    1. mediaspip_medias
    A script that (...)

On other sites (10681)

  • How to pipe multiple inputs to ffmpeg through python subprocess?

    14 April 2021, by D. Ramsook

    I have an ffmpeg command that uses two different input files (stored in 'input_string'). In ffmpeg, a '-' after '-i' means that input is read from stdin.

    


    I currently use the setup below to collect the output of the command. How can it be modified so that fileA and fileB, which are stored as bytes, are piped as the inputs to the ffmpeg command?

    


from subprocess import run, PIPE
import shlex

with open('fileA.yuv', 'rb') as f:
    fileA = f.read()

with open('fileB.yuv', 'rb') as f:
    fileB = f.read()


input_string = """ffmpeg -f rawvideo -s:v 1280x720 -pix_fmt yuv420p -r 24 -i - -f rawvideo -s:v 50x50 -pix_fmt yuv420p -r 24 -i - -lavfi  ...#remainder of command"""
# a plain command string needs shell=True; a token list from shlex.split() does not
result = run(shlex.split(input_string), stdout=PIPE, stderr=PIPE, universal_newlines=True)
result = result.stderr.splitlines()
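One way to get two separate in-memory inputs into a single ffmpeg process is to give each one its own extra pipe and reference it with ffmpeg's `pipe:N` protocol (e.g. `-i pipe:3` instead of `-i -`). A minimal sketch, assuming a POSIX system; the helper name and the `{fd_a}`/`{fd_b}` placeholders are mine, not part of the original command:

```python
import os
import subprocess
import threading


def run_with_two_piped_inputs(cmd_template, data_a, data_b):
    """Run cmd_template with two extra read-end pipes; the placeholders
    {fd_a} and {fd_b} in the template are replaced by the descriptor
    numbers the child inherits (usable as ffmpeg pipe:N inputs)."""
    read_a, write_a = os.pipe()
    read_b, write_b = os.pipe()
    cmd = [arg.format(fd_a=read_a, fd_b=read_b) for arg in cmd_template]
    proc = subprocess.Popen(cmd, pass_fds=(read_a, read_b),
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    os.close(read_a)   # the child owns its copies now
    os.close(read_b)

    # Feed each input from its own thread, so a full pipe buffer on one
    # input cannot deadlock the other.
    def feed(fd, data):
        with os.fdopen(fd, 'wb') as pipe_end:
            pipe_end.write(data)

    writers = [threading.Thread(target=feed, args=(write_a, data_a)),
               threading.Thread(target=feed, args=(write_b, data_b))]
    for t in writers:
        t.start()
    out, err = proc.communicate()
    for t in writers:
        t.join()
    return out, err
```

The command above would then replace its two `-i -` entries with `-i pipe:{fd_a}` and `-i pipe:{fd_b}` and be called as `run_with_two_piped_inputs(shlex.split(input_string), fileA, fileB)`; stderr can be split into lines exactly as before.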


    


  • Broken pipe when closing subprocess pipe with FFmpeg

    5 April 2021, by Shawn

    First, I'm a total noob with ffmpeg. That said, similar to this question, I'm trying to read a video file as bytes and extract one frame as a thumbnail image, also as bytes, so I can upload the thumbnail to AWS S3. I don't want to save files to disk and then have to delete them. I modified the accepted answer in the aforementioned question for my purposes, which are to handle different file formats, not just video. Image files work just fine with this code, but an mp4 breaks the pipe at byte_command.stdin.close(). I'm sure I'm missing something simple, but I can't figure it out.

    


    The input bytes are a valid mp4, as I'm getting the following in the terminal:

    


  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42isom
  Duration: 00:02:48.48, start: 0.000000, bitrate: N/A
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 640x480, 486 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc (default)
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default)
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))


    


    from FFmpeg when I write to stdin.

    



    


    The FFmpeg command I'm passing in:
ffmpeg -i /dev/stdin -f image2pipe -frames:v 1 -

    


    I've tried numerous variations of this command (with -f nut, -f ..., etc.), to no avail.

    



    


    At the command line, without using Python or subprocess, I've tried:
ffmpeg -i /var/www/app/thumbnail/movie.mp4 -frames:v 1 output.png and I get a nice png image of the video.

    



    


    My method:

    


import shlex
import subprocess


def get_converted_bytes_from_bytes(input_bytes: bytes, command: str) -> bytes or None:
    byte_command = subprocess.Popen(
        shlex.split(command),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        shell=False,
        bufsize=10 ** 8,
    )
    b = b""

    byte_command.stdin.write(input_bytes)
    byte_command.stdin.close()
    while True:
        output = byte_command.stdout.read()
        if len(output) > 0:
            b += output
        else:
            error_msg = byte_command.poll()
            if error_msg is not None:
                break
    return b


    


    What am I missing? Thank you!

    



    


    UPDATE, AS REQUESTED:

    


    Code Sample:

    


import shlex
import subprocess


def get_converted_bytes_from_bytes(input_bytes: bytes, command: str) -> bytes or None:
    byte_command = subprocess.Popen(
        shlex.split(command),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        shell=False,
        bufsize=10 ** 8,
    )
    b = b""
    # write bytes to the process's stdin and close the pipe to pass
    # the data on to the piped process
    byte_command.stdin.write(input_bytes)
    byte_command.stdin.close()
    while True:
        output = byte_command.stdout.read()
        if len(output) > 0:
            b += output
        else:
            error_msg = byte_command.poll()
            if error_msg is not None:
                break
    return b


def run():
    subprocess.run(
        shlex.split(
            "ffmpeg -y -f lavfi -i testsrc=size=640x480:rate=1 -vcodec libx264 -pix_fmt yuv420p -crf 23 -t 5 test.mp4"
        )
    )
    with open("test.mp4", "rb") as mp4:
        b1 = mp4.read()
        b = get_converted_bytes_from_bytes(
            b1,
            "ffmpeg -y -loglevel error -i /dev/stdin -f image2pipe -frames:v 1 -",
        )
        print(b)


if __name__ == "__main__":
    run()
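For what it's worth, one common cause of this pattern breaking is that the parent only starts reading stdout after writing all of stdin: if the child fills its output pipe, or exits early (e.g. because an mp4 arriving on a non-seekable stdin cannot be parsed), the write raises BrokenPipeError. A hedged sketch of a variant that lets `communicate()` do the writing and reading concurrently; the function name is mine, not the asker's:

```python
import shlex
import subprocess
from typing import Optional


def get_converted_bytes_from_bytes_safe(input_bytes: bytes, command: str) -> Optional[bytes]:
    # communicate() writes stdin and drains stdout/stderr concurrently,
    # so the child can never block on a full output pipe while we are
    # still writing, and an early child exit surfaces as a return code
    # rather than an unhandled BrokenPipeError.
    proc = subprocess.Popen(
        shlex.split(command),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    output, error = proc.communicate(input=input_bytes)
    if proc.returncode != 0:
        # error holds the child's diagnostics (ffmpeg logs to stderr)
        return None
    return output
```

It is called exactly like the original. Note that mp4 over stdin can still fail if the file's moov atom sits at the end of the container, since stdin is not seekable; that is a property of the input file, not of the Python code.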



    


  • Create silent wav and pipe it

    2 April 2021, by Ícaro Erasmo

    I've been through many Stack Overflow pages and forums trying to find the answer I want.
I created a virtual microphone and I'm trying to pipe some wav sounds created with FFmpeg into it.

    


    When I want to pipe a keyboard noise, I pipe the sound to my virtual sound capture device like this:

    ffmpeg -fflags +discardcorrupt -i <keyboard sound path> -f s16le -ar 44100 -ac 1 - > /tmp/gapFakeMic

    And when I want to pipe some synthesized voice sound to my virtual microphone using eSpeak, I do this:

    espeak -vbrazil-mbrola-4 <some random text> --stdout | ffmpeg -fflags +discardcorrupt -i pipe:0 -f s16le -ar 44100 -ac 1 - > /tmp/gapFakeMic

    The problem is that my capture device doesn't behave like a normal recorder, which keeps recording even when no sound is being transmitted to it. So I'm trying to append silence to the wav that is being created while my application is running. Whenever I try to send the silence to the buffer, FFmpeg returns the following response:

    [NULL @ 0x5579f7921a00] Unable to find a suitable output format for 'pipe:'

    FFmpeg is a powerful tool, but its documentation is of little use to newbies like me. So I'd appreciate it if anyone could answer this, or at least give me some direction or a resource where I can find a way of achieving this.

    EDIT:

    Here's how I'm producing the silence for my virtual microphone:

    ffmpeg -f lavfi -i anullsrc=channel_layout=mono:sample_rate=44100 -t <time in seconds> - > /tmp/gapFakeMic

    Here's the full log:

    ffmpeg version 4.1.6-1~deb10u1 Copyright (c) 2000-2020 the FFmpeg developers
      built with gcc 8 (Debian 8.3.0-6)
      configuration: --prefix=/usr --extra-version='1~deb10u1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
      libavutil      56. 22.100 / 56. 22.100
      libavcodec     58. 35.100 / 58. 35.100
      libavformat    58. 20.100 / 58. 20.100
      libavdevice    58.  5.100 / 58.  5.100
      libavfilter     7. 40.101 /  7. 40.101
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  3.100 /  5.  3.100
      libswresample   3.  3.100 /  3.  3.100
      libpostproc    55.  3.100 / 55.  3.100
    Input #0, lavfi, from 'anullsrc=channel_layout=mono:sample_rate=44100':
      Duration: N/A, start: 0.000000, bitrate: 352 kb/s
        Stream #0:0: Audio: pcm_u8, 44100 Hz, mono, u8, 352 kb/s
    [NULL @ 0x560516626f40] Unable to find a suitable output format for 'pipe:'
    pipe:: Invalid argument

    EDIT 2:

    After Gyan provided a solution in the comments, the error above no longer appears, but the resulting audio is broken and doesn't come out as expected. The command that generates and appends the silent audio is now:

    ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -t <time in seconds> -f s16le - > /tmp/gapFakeMic

    EDIT 3:

    I've made some changes to the command I'm using to pipe silence to the virtual mic. I think the pipe is breaking because of some incompatibility between the audio formats; I hope to find a solution in the next few days. With every little change I see some improvement: I can now hear the silence between the key sounds, but it isn't recording all the audio I'm passing to it. Here's the command now:

    ffmpeg -f lavfi -i anullsrc=channel_layout=mono:sample_rate=44100 -t <time in seconds> -f s16le -ar 44100 -ac 1 - > /home/icaroerasmo/gapFakeMic

    I also realized that the audio quality improves when I pipe the sound to a pipe file created inside my home folder.

    EDIT 4:

    After all this struggle, it's now clear that the named pipe is breaking the second time it's called. I've googled how to flush a named pipe, but I haven't found anything that works.
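A named pipe delivers EOF to its reader as soon as the last writer closes it, so launching a fresh ffmpeg process per clip closes and "breaks" the pipe between clips. One sketch of a workaround, assuming a Linux FIFO like the one above, is to keep a single long-lived write descriptor open and append raw s16le silence directly (silence in s16le is just zero bytes, so no ffmpeg invocation is needed for the gaps). The helper names here are hypothetical:

```python
import os


def open_mic_pipe(path):
    # O_RDWR on a FIFO does not block waiting for a reader and, because
    # this descriptor stays open, the reading side never sees EOF between
    # clips (Linux-specific behaviour; an assumption, not from the question).
    return os.open(path, os.O_RDWR)


def append_silence(fd, seconds, rate=44100, channels=1, sample_width=2):
    # Raw s16le silence is zero bytes, matching the -f s16le -ar 44100
    # -ac 1 stream that the keyboard sounds are written in.
    os.write(fd, b"\x00" * (int(seconds * rate) * channels * sample_width))
```

The keyboard and eSpeak clips would still be written through the same long-lived descriptor, so the reader sees one continuous stream of sound and silence.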
