Advanced search

Media (0)

Keyword: - Tags -/protocoles

No media matching your criteria is available on this site.

Other articles (59)

  • List of compatible distributions

26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name | Version name         | Version number
    Debian            | Squeeze              | 6.x.x
    Debian            | Wheezy               | 7.x.x
    Debian            | Jessie               | 8.x.x
    Ubuntu            | The Precise Pangolin | 12.04 LTS
    Ubuntu            | The Trusty Tahr      | 14.04

    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Contribute to documentation

13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

  • Use, discuss, criticize

13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

On other sites (4474)

  • ffmpeg pipe Invalid data found when processing input

28 March 2021, by Ankit Maheshwari
    Here is my configuration.

    const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
    const spawn = require('child_process').spawn;

    ffmpeg = spawn(ffmpegPath, [
      
// Remove this line, as well as `-shortest`, if you send audio from the browser.
      // '-f', 'lavfi', '-i', 'anullsrc',

      // FFmpeg will read input video from STDIN
      '-i', '-',

      // -re flag means to Read input at native frame rate.
      '-re', '-y',

      // thread_queue_size added to avoid err: Thread message queue blocking; consider raising the thread_queue_size option, required before each input - this is for image2
      // '-thread_queue_size', '2048',

      // REF TO OVERLAY 
      // https://stackoverflow.com/questions/10438713/overlay-animated-images-with-transparency-over-a-static-background-image-using-f?rq=1
      // -loop loops the background image input so that we don't just have one frame, crucial!

      // The image file muxer writes video frames to image files, http://underpop.online.fr/f/ffmpeg/help/image2-1.htm.gz
      '-f', 'image2',

      // The -loop option is specific to the image file demuxer and gif muxer, so it can't be used for typical video files, but it can be used to infinitely loop a series of input images.
      '-loop', '1',

      // pattern_type is used to determine the format of the images contained in the files.
      // Read images matching the "*.png" glob pattern, that is files terminating with the ".png" suffix
      '-pattern_type', 'glob',

      // '-i', `images/${streamConfigData.youtube_key}/destination/image-*.png`,

      '-i', `images/${streamConfigData.youtube_key}/overlay/abc.png`,

      // '-vf', 'scale=1920x1080:flags=lanczos',
      
      // -shortest ends encoding when the shortest input ends, which is necessary as looping the background means that that input will never end.
      // 'overlay=shortest=1',

      "-filter_complex", "[1:v]format=rgba,colorchannelmixer=aa=1[fg];[0][fg]overlay",
      
      // Because we're using a generated audio source which never ends,
      // specify that we'll stop at end of other input.  Remove this line if you
      // send audio from the browser.
      // '-shortest',
      
      // If we're encoding H.264 in-browser, we can set the video codec to 'copy'
      // so that we don't waste any CPU and quality with unnecessary transcoding.
      // If the browser doesn't support H.264, set the video codec to 'libx264'
      // or similar to transcode it to H.264 here on the server.
      // '-vcodec', 'libx264',
      // it is not possible to filter and stream copy the same stream at the same time. https://stackoverflow.com/a/53526514/4057143
      '-vcodec', 'copy',
      
      // if the browser doesn't support AAC encoding, we must transcode the audio to AAC here on the server.
      // '-acodec', 'aac',

      // Use this rate control mode if you want to keep the best quality and care less about the file size. CRF scale is 0–51, where 0 is lossless, 23 is the default, and 51 is worst quality possible. A lower value generally leads to higher quality, https://trac.ffmpeg.org/wiki/Encode/H.264
      '-crf', '23',

      // preset provide a certain encoding speed to compression ratio. A slower preset will provide better compression. medium – default preset, https://trac.ffmpeg.org/wiki/Encode/H.264
      '-preset', 'ultrafast',

      // -r sets the frame rate. Use the fps filter when you need to change framerate before applying further filters.
      // '-r', '30',
      // '-framerate', '30',

      //debug level logs
      '-loglevel', 'debug',
      '-v', 'verbose',

      // -g GOP_LEN_IN_FRAMES, -g sets the keyframe interval. https://superuser.com/a/908325
      '-g', '60',

      // video timescale, not sure what it is!
      '-video_track_timescale', '1000',

      // a live stream with more/less constant bit rate, to be able to control the bandwidth used.
      // a live stream with limited bit rate
      '-b:v', '15000k',
      // '-maxrate', '4000k',
      // '-bufsize', '8000k',

      // FLV is the container format used in conjunction with RTMP
      '-f', 'flv',
      
      // The output RTMP URL.
      // For debugging, you could set this to a filename like 'test.flv', and play
      // the resulting file with VLC.
      rtmpUrl 
    ], {
      env: {
          NODE_ENV: 'production',
          PATH: process.env.PATH
      }
    });

  • Create silent wav and pipe it

2 April 2021, by Ícaro Erasmo

    I've been through many Stack Overflow pages and forums trying to find the answer I want.
    I created a virtual microphone and I'm trying to pipe some wav sounds, created using FFmpeg, to it.

    When I want to pipe a keyboard noise, I pipe the sound to my virtual sound capture device like this:

    ffmpeg -fflags +discardcorrupt -i <keyboard sound path> -f s16le -ar 44100 -ac 1 - > /tmp/gapFakeMic

    And when I want to pipe some synthesized voice sound using Espeak to my virtual microphone, I do this:

    espeak -vbrazil-mbrola-4 <some random text> --stdout | ffmpeg -fflags +discardcorrupt -i pipe:0 -f s16le -ar 44100 -ac 1 - > /tmp/gapFakeMic

    The problem is my capture device doesn't record the sound like a normal recorder, which keeps recording even when no sound is being transmitted to it. So I'm trying to append silence to the wav that is being created while my application is running. Whenever I try to send the silence to the buffer, FFmpeg returns the following response:

    [NULL @ 0x5579f7921a00] Unable to find a suitable output format for 'pipe:'

    FFmpeg is a powerful tool, but its documentation isn't very helpful for newbies like me. I'd appreciate it if anyone could answer this, or at least point me to some resource where I can find a way of achieving this.

    EDIT:

    Here's how I'm producing the silence for my virtual microphone:

    ffmpeg -f lavfi -i anullsrc=channel_layout=mono:sample_rate=44100 -t <time in seconds> - > /tmp/gapFakeMic

    Here's the full log:

    ffmpeg version 4.1.6-1~deb10u1 Copyright (c) 2000-2020 the FFmpeg developers
      built with gcc 8 (Debian 8.3.0-6)
      configuration: --prefix=/usr --extra-version='1~deb10u1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared

      libavutil      56. 22.100 / 56. 22.100
      libavcodec     58. 35.100 / 58. 35.100
      libavformat    58. 20.100 / 58. 20.100
      libavdevice    58.  5.100 / 58.  5.100
      libavfilter     7. 40.101 /  7. 40.101
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  3.100 /  5.  3.100
      libswresample   3.  3.100 /  3.  3.100
      libpostproc    55.  3.100 / 55.  3.100

    Input #0, lavfi, from 'anullsrc=channel_layout=mono:sample_rate=44100':
      Duration: N/A, start: 0.000000, bitrate: 352 kb/s
        Stream #0:0: Audio: pcm_u8, 44100 Hz, mono, u8, 352 kb/s

    [NULL @ 0x560516626f40] Unable to find a suitable output format for 'pipe:'
    pipe:: Invalid argument

    EDIT 2:

    After Gyan provided a solution in the comments, the error above no longer shows, but the resulting audio is broken and doesn't come out as expected. Now the command that generates and appends the silent audio looks like this:

    ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -t <time in seconds> -f s16le - > /tmp/gapFakeMic

    EDIT 3:

    I've made some changes to the command I'm using to pipe silence to the virtual mic. I think the pipe is breaking because of some incompatibility between audio formats. I hope I can find a solution in the next few days; after every little change I see some improvement. Now I can hear the silence between the key sounds, but it isn't recording all the audio I'm passing to it. Here's the command now:

    ffmpeg -f lavfi -i anullsrc=channel_layout=mono:sample_rate=44100 -t <time in seconds> -f s16le -ar 44100 -ac 1 - > /home/icaroerasmo/gapFakeMic

    I also realized that when I pipe the sound to a pipe file created inside my home folder, the audio quality improves.

    EDIT 4:

    After all this struggle, it's clear now that the named pipe breaks the second time it's called. I've already googled how to flush a named pipe, but I haven't found anything that worked.
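Since every command above writes the same raw format (s16le, 44100 Hz, mono), the silence gaps themselves don't strictly require ffmpeg at all. A minimal Python sketch that produces the equivalent bytes; the helper name and the commented-out pipe path are illustrative, not part of the post:

```python
def s16le_silence(seconds: float, sample_rate: int = 44100, channels: int = 1) -> bytes:
    """Generate raw s16le PCM silence matching '-f s16le -ar 44100 -ac 1'.

    Each signed 16-bit little-endian sample of silence is two zero bytes.
    """
    n_samples = int(seconds * sample_rate) * channels
    return b"\x00\x00" * n_samples


gap = s16le_silence(0.5)
# To feed the named pipe from the post, one could write:
#     with open("/tmp/gapFakeMic", "wb") as pipe:
#         pipe.write(gap)
print(len(gap))  # 44100 bytes for half a second of mono audio
```
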

  • Broken pipe when closing subprocess pipe with FFmpeg

5 April 2021, by Shawn

    First, I'm a total noob with ffmpeg. That said, similar to this question, I'm trying to read a video file as bytes and extract 1 frame as a thumbnail image, as bytes, so I can upload the thumbnail to AWS S3. I don't want to save files to disk and then have to delete them. I modified the accepted answer in the aforementioned question for my purposes, which is to handle different file formats, not just video. Image files work just fine with this code, but an mp4 breaks the pipe at byte_command.stdin.close(). I'm sure I'm missing something simple, but can't figure it out.


    The input bytes are a valid mp4, as I'm getting the following in the Terminal:

      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42isom
      Duration: 00:02:48.48, start: 0.000000, bitrate: N/A
        Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 640x480, 486 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc (default)
        Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default)
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))

    from FFmpeg when I write to stdin.




    The FFmpeg command I'm passing in:

    ffmpeg -i /dev/stdin -f image2pipe -frames:v 1 -

    I've tried numerous variations of this command (-f nut, -f ..., etc.), to no avail.



    At the command line, without using Python or subprocess, I've tried:

    ffmpeg -i /var/www/app/thumbnail/movie.mp4 -frames:v 1 output.png

    and I get a nice png image of the video.



    My method:

    def get_converted_bytes_from_bytes(input_bytes: bytes, command: str) -> bytes or None:
        byte_command = subprocess.Popen(
            shlex.split(command),
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            shell=False,
            bufsize=10 ** 8,
        )
        b = b""

        byte_command.stdin.write(input_bytes)
        byte_command.stdin.close()
        while True:
            output = byte_command.stdout.read()
            if len(output) > 0:
                b += output
            else:
                error_msg = byte_command.poll()
                if error_msg is not None:
                    break
        return b

    What am I missing? Thank you!



    UPDATE, AS REQUESTED:

    Code sample:

    import shlex
    import subprocess


    def get_converted_bytes_from_bytes(input_bytes: bytes, command: str) -> bytes or None:
        byte_command = subprocess.Popen(
            shlex.split(command),
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            shell=False,
            bufsize=10 ** 8,
        )
        b = b""
        # write bytes to the process's stdin and close the pipe to pass
        # data to the piped process
        byte_command.stdin.write(input_bytes)
        byte_command.stdin.close()
        while True:
            output = byte_command.stdout.read()
            if len(output) > 0:
                b += output
            else:
                error_msg = byte_command.poll()
                if error_msg is not None:
                    break
        return b


    def run():
        subprocess.run(
            shlex.split(
                "ffmpeg -y -f lavfi -i testsrc=size=640x480:rate=1 -vcodec libx264 -pix_fmt yuv420p -crf 23 -t 5 test.mp4"
            )
        )
        with open("test.mp4", "rb") as mp4:
            b1 = mp4.read()
            b = get_converted_bytes_from_bytes(
                b1,
                "ffmpeg -y -loglevel error -i /dev/stdin -f image2pipe -frames:v 1 -",
            )
            print(b)


    if __name__ == "__main__":
        run()
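For what it's worth, the usual cause of this symptom: ffmpeg stops reading stdin as soon as it has the one frame it needs, and the still-running `stdin.write(input_bytes)` then hits a broken pipe. The standard remedy is `Popen.communicate()`, which writes stdin and drains stdout concurrently and tolerates the child closing its end early. A hedged sketch of that pattern, demonstrated with a portable command instead of ffmpeg so it runs anywhere:

```python
import shlex
import subprocess


def pipe_bytes_through(input_bytes: bytes, command: str) -> bytes:
    """Feed input_bytes to a subprocess and collect its stdout.

    communicate() writes stdin and reads stdout at the same time, so the
    child can stop reading early (as ffmpeg does after grabbing one frame)
    without raising BrokenPipeError in the parent.
    """
    proc = subprocess.Popen(
        shlex.split(command),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
    stdout, _ = proc.communicate(input=input_bytes)
    return stdout


# Demonstrated with a portable filter; with ffmpeg installed, the same call
# would be pipe_bytes_through(mp4_bytes, "ffmpeg -i /dev/stdin -f image2pipe -frames:v 1 -").
result = pipe_bytes_through(b"hello pipe", "tr a-z A-Z")
print(result)  # b'HELLO PIPE'
```
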