Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • FFMPEG: How to extract a multichannel track from an m4v, mix it down, and save the stereo downmix as "left" and "right"?

    1 June 2017, by chillynilly

    Just like the title already says: I want to extract a multichannel (5.1) track from an .m4v file, mix this track down, and save the output as separate files, so that in the end I have something like 'downmix_left.wav' and 'downmix_right.wav'. I know how to do a downmix and I know how to split the audio, but I do not know how to do both in one step, which would save me a lot of time.

    This is the command I use for splitting:

    ffmpeg -i "video.m4v" -vn -filter_complex \
    "[0:2]channelsplit=channel_layout=5.1(side)[FL][FR][FC][LFE][SL][SR]" \
    -map "[FL]" video_left.wav \
    -map "[FR]" video_right.wav \
    -map "[FC]" video_center.wav \
    -map "[LFE]" video_lfe.wav \
    -map "[SL]" video_back_left.wav \
    -map "[SR]" video_back_right.wav
    

    And this is the command for the downmix of a multichannel track:

    ffmpeg -i "video.m4v" -vn -map 0:2 -ac 2 \
    -af "aresample=matrix_encoding=dplii" video_downmix.wav
    

    Is it possible to combine these and, if so, how can it be done? :D I would appreciate it very much if you could help me out here.
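
    A sketch of a possible one-step combination (untested; it assumes the same stream index 0:2 as above, and that the aresample filter accepts the swresample out_channel_layout and matrix_encoding options inside a filter graph, mirroring the -ac/-af form of the second command): downmix to stereo inside the filter graph, then channelsplit the stereo pair.

    ```shell
    # Downmix 5.1 (stream 0:2) to Dolby Pro Logic II stereo, then split
    # the stereo result into separate left/right mono WAV files.
    ffmpeg -i "video.m4v" -vn -filter_complex \
    "[0:2]aresample=out_channel_layout=stereo:matrix_encoding=dplii,channelsplit=channel_layout=stereo[L][R]" \
    -map "[L]" downmix_left.wav \
    -map "[R]" downmix_right.wav
    ```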

  • Convert raw RTP packets from mediasoup to frames

    31 May 2017, by user2205763

    I am using mediasoup as a WebRTC server. In mediasoup, you can intercept raw RTP packets on the server side using the following code:

    peer
      .on('newrtpreceiver', (rtpReceiver) => {
        rtpReceiver.on('rtpraw', (packet) => {
          // do something with this packet
        })
      })
    

    These packets are VP8-encoded. I want to pass them into FFmpeg and convert them to a stream of frames, which I can then send to an OpenCV service for real-time analysis.

    My first attempt at doing this used the following procedure:

    • Turn the rtpReceiver.on('rtpraw') event into a Readable stream.
    • Use that readable stream as input into ffmpeg.
    • Set the output to a write stream.

    Here is an example of the code:

    import {Readable, Writable} from 'stream'
    import * as ffmpeg from 'fluent-ffmpeg'
    
    peer
      .on('newrtpreceiver', (rtpReceiver) => {
        if (rtpReceiver.kind !== 'video') 
          return
    
        let readStream = new Readable({
          objectMode: false,
          read(size) { return true }
        })
    
        let writeStream = new Writable({
          objectMode: false,
          write(frame, encoding, done) {
            // send frame somewhere
          }
        })
    
        let ffmpegStream = ffmpeg(readStream)
          .noAudio()
          .videoCodec('libvpx')
          .size('640x?')
          .format('webm')
          .on('start', (cmdline) => {
            console.log('Command line: ' + cmdline)
          })
          .on('progress', (progress) => {
            console.log('Processing: ' + progress.percent + '% done')
          })
          .on('stderr', (stderrLine) => {
            console.log('Stderr output: ' + stderrLine)
          })
          .on('error', (err, stdout, stderr) => {
            console.log('Cannot process video: ' + err.message)
          })
          .on('end', () => {
            console.log('Finished processing')
          })
          .pipe(writeStream)
    
        rtpReceiver
          .on('rtpraw', (packet) => {
            readStream.push(packet)
          })
          .on('close', () => { 
            readStream.push(null) 
          })
      })
    

    When I run this, I get the error "Invalid data found when processing input". Here are the console logs:

    Command line: ffmpeg -i pipe:0 -y -an -vcodec libvpx -filter:v scale=w=640:h=trunc(ow/a/2)*2 -f mp4 mymov.mp4
    Stderr output: ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
    Stderr output:   built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
    Stderr output:   configuration: --prefix=/usr/local/Cellar/ffmpeg/3.2.4 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-frei0r --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-libopenjpeg --disable-decoder=jpeg2000 --extra-cflags=-I/usr/local/Cellar/openjpeg/2.1.2/include/openjpeg-2.1 --enable-nonfree --enable-vda
    Stderr output:   libavutil      55. 34.101 / 55. 34.101
    Stderr output:   libavcodec     57. 64.101 / 57. 64.101
    Stderr output:   libavformat    57. 56.101 / 57. 56.101
    Stderr output:   libavdevice    57.  1.100 / 57.  1.100
    Stderr output:   libavfilter     6. 65.100 /  6. 65.100
    Stderr output:   libavresample   3.  1.  0 /  3.  1.  0
    Stderr output:   libswscale      4.  2.100 /  4.  2.100
    Stderr output:   libswresample   2.  3.100 /  2.  3.100
    Stderr output:   libpostproc    54.  1.100 / 54.  1.100
    Stderr output: pipe:0: Invalid data found when processing input
    Stderr output: 
    Cannot process video: ffmpeg exited with code 1: pipe:0: Invalid data found when processing input
    

    Thank you for all your help!
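
    One workaround (a sketch, not tested against mediasoup) would be to avoid pushing raw RTP into ffmpeg's stdin at all: ffmpeg cannot parse bare RTP packets arriving on a pipe, but it can consume an RTP session described by an SDP file. The 'rtpraw' handler would forward each packet to a local UDP port, and ffmpeg would read a matching SDP description; the port (5004) and payload type (101) below are assumptions that must match what mediasoup actually negotiates.

    ```shell
    # Minimal SDP describing the forwarded VP8 RTP stream
    # (address, port, and payload type are assumptions to adjust).
    cat > stream.sdp <<'EOF'
    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=mediasoup-vp8
    c=IN IP4 127.0.0.1
    t=0 0
    m=video 5004 RTP/AVP 101
    a=rtpmap:101 VP8/90000
    EOF

    # Read the RTP session via the SDP description and emit WebM on stdout.
    ffmpeg -protocol_whitelist file,udp,rtp -i stream.sdp \
      -an -c:v libvpx -f webm pipe:1
    ```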

  • ffmpeg's av_parser_init(AV_CODEC_ID_V210) returns null

    31 May 2017, by VorpalSword

    I'm trying to read in a .mov file that has video encoded in V210 pixel format (AKA: uncompressed, YCbCr, 10 bits per component) for some image quality tests I'm doing.

    My tech stack is ffmpeg 3.3.1 / gcc / Darwin.

    The decode_video.c example compiles, links & runs just fine but it has the codec ID hard-coded as AV_CODEC_ID_MPEG1VIDEO. I reasonably/naïvely thought that changing this to AV_CODEC_ID_V210 would get me a long way to decoding my test files.

    Unfortunately not. The call to av_parser_init returns null.

    Can anyone tell me why? And how to fix this? Thanks.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    
    #include <libavcodec/avcodec.h>
    
    ... // irrelevant code omitted, see linked example for details
    
    avcodec_register_all();
    
    pkt = av_packet_alloc();
    if (!pkt)
        exit(1);
    
    /* set end of buffer to 0 (this ensures that no overreading happens for damaged MPEG streams) */
    memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);
    
    /* find the MPEG-1 video decoder */
    // codec = avcodec_find_decoder(AV_CODEC_ID_MPEG1VIDEO); this works!
    codec = avcodec_find_decoder(AV_CODEC_ID_V210);  // this injects my problem
    if (!codec) {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }
    
    printf ("codec->id: %d, %d\n", AV_CODEC_ID_V210, codec->id); // codec->id: 128, 128
    
    parser = av_parser_init(codec->id);
    if (!parser) {
        fprintf(stderr, "parser not found\n");
        exit(1);  // program exits here when AV_CODEC_ID_V210 used
    }
    
  • FFmpeg - Calculating with Framerate?

    31 May 2017, by dazzafact

    I have to cut my video exactly. The video could have 25 fps or 30 fps (I don't know which). Is there a variable in ffmpeg for the frame rate, so I could calculate the frame at which I have to cut? I only have the video's length in seconds. Something like this (for an 80-second video, for example):

    -vf "fade=in:0:12,fade=out:(80*r):12"

    -vf "fade=in:0:12,fade=out:2500:12"
    
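    As a sketch (untested): ffprobe can report the stream's frame rate, but the fade filter also accepts time-based options (t, st, d, in seconds), which sidesteps the frame arithmetic entirely. The file name and fade durations below are placeholders.

    ```shell
    # Query the frame rate of the first video stream (output like "25/1").
    ffprobe -v error -select_streams v:0 \
      -show_entries stream=r_frame_rate -of csv=p=0 input.mp4

    # Time-based fades: no frame count needed for an 80-second video.
    ffmpeg -i input.mp4 \
      -vf "fade=t=in:st=0:d=0.5,fade=t=out:st=79.5:d=0.5" out.mp4
    ```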
  • Install ffmpeg on openshift v3 (nodejs) [on hold]

    31 May 2017, by Kirk

    I'm looking to install ffmpeg in my OpenShift application, but I really don't know how. At first I just tried adding ffmpeg-binaries to my dependencies, but obviously that didn't work. I also tried installing it globally using the web console they provide, with the same result. With ffmpeg-binaries excluded from my dependencies, everything runs fine on OpenShift.

    I'm just trying to learn how to install ffmpeg; it doesn't necessarily have to be the ffmpeg-binaries package.

    Edit: The error I'm receiving in the openshift logs is

    ...
    Pulling image "registry.access.redhat.com/rhscl/nodejs-6-rhel7@...
    ---> Installing application source
    ---> Building your Node application from source
    npm WARN deprecated node-uuid@1.4.8: Use uuid module instead
    
    > ref@1.3.4 install /opt/app-root/src/node_modules/ref
    > node-gyp rebuild
    
    /usr/libexec/s2i/assemble: line 51:    13 Killed                  npm install
    error: build error: non-zero (13) exit code from registry.access.redhat.com/rhscl/nodejs-6-rhel7@...
    

    I simply removed ffmpeg-binaries from my dependencies and it built fine. Then I went to the web console and tried to install it with npm i -g ffmpeg-binaries, but that also just says "Killed".
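
    One common workaround (a sketch; the download URL, archive layout, and tar flags are assumptions to verify) is to bypass npm entirely and drop a prebuilt static ffmpeg binary into the application, since a "Killed" during npm install usually means the build container ran out of memory compiling native modules.

    ```shell
    # Fetch a static Linux x86_64 ffmpeg build into the app directory.
    curl -L -o ffmpeg.tar.xz \
      https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz

    # Extract just the ffmpeg binary (GNU tar; pattern is an assumption
    # about the archive's top-level directory layout).
    tar -xJf ffmpeg.tar.xz --strip-components=1 --wildcards '*/ffmpeg'

    # Verify it runs inside the container.
    ./ffmpeg -version
    ```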