Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Some bugs when using SDL2 and ffmpeg to make a video player

    1 June 2017, by trycatch

    I used sws_scale to resize the image and SDL 2.0 to render it; the pixel format is YUV420P.

    1. When I create the SDL renderer with SDL_RENDERER_SOFTWARE, I get a black image with a green point in the top-left corner. But when I create the renderer with SDL_RENDERER_ACCELERATED, I see a normal image.

    2. When I resize the window, the player crashes in SDL_UpdateYUVTexture. I have checked the width, height and linesize of the data, and everything looks OK.

    Why does this happen?

    Key Code:

    if (swscale_.frame.width % 2) 
      swscale_.frame.width -= 1;
    
    swscale_.free();
    swscale_.context = sws_getContext(
            vcodec->width, vcodec->height, vcodec->pix_fmt,
            swscale_.frame.width, swscale_.frame.height,
            AV_PIX_FMT_YUV420P, SWS_BICUBIC, nullptr, nullptr, nullptr);
    
    if (!swscale_.context)
        throw std::runtime_error("sws_getContext.");
    
    if (swscale_.frame.avframe)
      av_frame_free(&swscale_.frame.avframe);
    swscale_.frame.avframe = av_frame_alloc();
    // Over-allocate by 16 bytes, then round the pointer up to a 16-byte boundary.
    unsigned char* out_buffer = (unsigned char *)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_YUV420P, swscale_.frame.width, swscale_.frame.height, 1)+16);
    unsigned char* tmp = &out_buffer[16 - (unsigned long long)out_buffer % 16];
    av_image_fill_arrays(swscale_.frame.avframe->data, swscale_.frame.avframe->linesize, tmp,
      AV_PIX_FMT_YUV420P, swscale_.frame.width, swscale_.frame.height, 1);
    
    sws_scale(swscale_.context, (const unsigned char* const*)vframe_->data, vframe_->linesize, 
      0, vframe_->height,
      swscale_.frame.avframe->data, swscale_.frame.avframe->linesize);
    
    self.onRenderHandle(swscale_.frame);
    

    Code of SDL rendering:

    void Player::onRender(const frame& frame)
    {
      if (sdl_lock_.try_lock()) {
        ::ValidateRect(native_window_, NULL);
        if (frame.width != scr_width_ || frame.height != scr_height_) {
          scr_width_ = frame.width; scr_height_ = frame.height;
          SDL_SetWindowSize(screen_, scr_width_ + 2, scr_height_ + 2);
          SDL_DestroyRenderer(sdl_renderer_);
          sdl_renderer_ = SDL_CreateRenderer(screen_, -1, 0);
          video_texture_ = SDL_CreateTexture(sdl_renderer_, SDL_PIXELFORMAT_IYUV,
            SDL_TEXTUREACCESS_STREAMING, scr_width_, scr_height_);
        }
        SDL_UpdateYUVTexture(video_texture_, NULL,
          frame.avframe->data[0], frame.avframe->linesize[0],
          frame.avframe->data[1], frame.avframe->linesize[1],
          frame.avframe->data[2], frame.avframe->linesize[2]);
        sdl_lock_.unlock();
        ::InvalidateRect(native_window_, NULL, false);
      }
    }
    void Player::onPaint()
    {
      std::lock_guard lock(sdl_lock_);
      SDL_RenderClear(sdl_renderer_);
      SDL_RenderCopy(sdl_renderer_, video_texture_, NULL, NULL);
      SDL_RenderPresent(sdl_renderer_);
    }
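
    One hedged guess at the resize crash: SDL_DestroyRenderer also frees the renderer's textures, so a frame of the previous size can still reach SDL_UpdateYUVTexture through a stale video_texture_. Below is a minimal sketch of a more defensive onRender, assuming sdl_lock_ is a std::mutex and reusing the member names above; it is a guess at a fix, not a confirmed one:

    void Player::onRender(const frame& frame)
    {
      std::lock_guard<std::mutex> lock(sdl_lock_);
      if (frame.width != scr_width_ || frame.height != scr_height_) {
        scr_width_ = frame.width; scr_height_ = frame.height;
        SDL_SetWindowSize(screen_, scr_width_, scr_height_);
        // Destroy only the old texture; keep the renderer alive.
        if (video_texture_) SDL_DestroyTexture(video_texture_);
        video_texture_ = SDL_CreateTexture(sdl_renderer_, SDL_PIXELFORMAT_IYUV,
          SDL_TEXTUREACCESS_STREAMING, scr_width_, scr_height_);
      }
      // The texture now always matches the incoming frame's dimensions.
      if (video_texture_)
        SDL_UpdateYUVTexture(video_texture_, NULL,
          frame.avframe->data[0], frame.avframe->linesize[0],
          frame.avframe->data[1], frame.avframe->linesize[1],
          frame.avframe->data[2], frame.avframe->linesize[2]);
      ::InvalidateRect(native_window_, NULL, false);
    }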
    
  • FFMPEG: How to extract a multichannel track from an m4v, mix it down and save the stereo downmix as "left" and "right"?

    1 June 2017, by chillynilly

    Just like the title already says: I want to extract a multichannel (5.1) track from an .m4v, mix this track down, and save the output as separate files, so in the end I have something like 'downmix_left.wav' and 'downmix_right.wav'. I know how to do a downmix and I know how to split the audio, but I do not know how to do both in one step, which would save me a lot of time.

    This is the command I use for splitting:

    ffmpeg -i "video.m4v" -vn -filter_complex \
    "[0:2]channelsplit=channel_layout=5.1(side)[FL][FR][FC][LFE][SL][SR]" \
    -map "[FL]" video_left.wav \
    -map "[FR]" video_right.wav \
    -map "[FC]" video_center.wav \
    -map "[LFE]" video_lfe.wav \
    -map "[SL]" video_back_left.wav \
    -map "[SR]" video_back_right.wav
    

    And this is the command for the downmix of a multichannel track:

    ffmpeg -i "video.m4v" -vn -map 0:2 -ac 2 \
    -af "aresample=matrix_encoding=dplii" video_downmix.wav
    

    Is it possible to combine these, and if so, how can it be done? :D I would appreciate it very much if you could help me out here.
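
    A hedged sketch of a combined command (untested; it assumes the aresample filter forwards the swresample options ocl and matrix_encoding to libswresample, and that the 5.1 track is stream 0:2 as in the commands above): downmix to stereo inside the filtergraph, then split the stereo pair.

    ffmpeg -i "video.m4v" -vn -filter_complex \
    "[0:2]aresample=ocl=stereo:matrix_encoding=dplii,channelsplit=channel_layout=stereo[left][right]" \
    -map "[left]" downmix_left.wav \
    -map "[right]" downmix_right.wav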

  • Convert raw RTP packets from mediasoup to frames

    31 May 2017, by user2205763

    I am using mediasoup as a WebRTC server. In mediasoup, you can intercept raw RTP packets on the server side using the following code:

    peer
      .on('newrtpreceiver', (rtpReceiver) => {
        rtpReceiver.on('rtpraw', (packet) => {
          // do something with this packet
        })
      })
    

    These packets are VP8-encoded. I want to pass the packets into FFmpeg and convert them to a stream of frames. I can then send these frames to an OpenCV service for analysis in real time.

    My first attempt at doing this used the following procedure:

    • Turn the rtpReceiver.on('rtpraw') event into a Readable stream.
    • Use that readable stream as the input to ffmpeg.
    • Set the output to a write stream.

    Here is an example of the code:

    import {Readable, Writable} from 'stream'
    import * as ffmpeg from 'fluent-ffmpeg'
    
    peer
      .on('newrtpreceiver', (rtpReceiver) => {
        if (rtpReceiver.kind !== 'video') 
          return
    
        let readStream = new Readable({
          objectMode: false,
          read(size) { return true }
        })
    
        let writeStream = new Writable({
          objectMode: false,
          write(frame, encoding, done) {
            // send frame somewhere
          }
        })
    
        let ffmpegStream = ffmpeg(readStream)
          .noAudio()
          .videoCodec('libvpx')
          .size('640x?')
          .format('webm')
          .on('start', (cmdline) => {
            console.log('Command line: ' + cmdline)
          })
          .on('progress', (progress) => {
            console.log('Processing: ' + progress.percent + '% done')
          })
          .on('stderr', (stderrLine) => {
            console.log('Stderr output: ' + stderrLine)
          })
          .on('error', (err, stdout, stderr) => {
            console.log('Cannot process video: ' + err.message)
          })
          .on('end', () => {
            console.log('Finished processing')
          })
          .pipe(writeStream)
    
        rtpReceiver
          .on('rtpraw', (packet) => {
            readStream.push(packet)
          })
          .on('close', () => { 
            readStream.push(null) 
          })
      })
    

    When I run this, I get the error Invalid data found when processing input. Here are the console logs:

    Command line: ffmpeg -i pipe:0 -y -an -vcodec libvpx -filter:v scale=w=640:h=trunc(ow/a/2)*2 -f mp4 mymov.mp4
    Stderr output: ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
    Stderr output:   built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
    Stderr output:   configuration: --prefix=/usr/local/Cellar/ffmpeg/3.2.4 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-frei0r --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-libopenjpeg --disable-decoder=jpeg2000 --extra-cflags=-I/usr/local/Cellar/openjpeg/2.1.2/include/openjpeg-2.1 --enable-nonfree --enable-vda
    Stderr output:   libavutil      55. 34.101 / 55. 34.101
    Stderr output:   libavcodec     57. 64.101 / 57. 64.101
    Stderr output:   libavformat    57. 56.101 / 57. 56.101
    Stderr output:   libavdevice    57.  1.100 / 57.  1.100
    Stderr output:   libavfilter     6. 65.100 /  6. 65.100
    Stderr output:   libavresample   3.  1.  0 /  3.  1.  0
    Stderr output:   libswscale      4.  2.100 /  4.  2.100
    Stderr output:   libswresample   2.  3.100 /  2.  3.100
    Stderr output:   libpostproc    54.  1.100 / 54.  1.100
    Stderr output: pipe:0: Invalid data found when processing input
    Stderr output: 
    Cannot process video: ffmpeg exited with code 1: pipe:0: Invalid data found when processing input
    

    Thank you for all your help!
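
    One hedged reading of the error: bare RTP payloads pushed into pipe:0 are not a byte stream that any ffmpeg demuxer recognizes, so ffmpeg usually needs an RTP session described by an SDP file instead. A sketch of that workaround follows, where the port 5004 and payload type 101 are placeholders that must match mediasoup's actual parameters:

    import * as dgram from 'dgram'

    // Relay each raw RTP packet to a local UDP port instead of piping to stdin.
    const sock = dgram.createSocket('udp4')
    rtpReceiver.on('rtpraw', (packet) => {
      sock.send(packet, 5004, '127.0.0.1')
    })

    With an input.sdp along these lines:

    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=mediasoup VP8
    c=IN IP4 127.0.0.1
    t=0 0
    m=video 5004 RTP/AVP 101
    a=rtpmap:101 VP8/90000

    ffmpeg could then join the session with something like:

    ffmpeg -protocol_whitelist file,udp,rtp -i input.sdp -an -c:v libvpx out.webm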

  • ffmpeg's av_parser_init(AV_CODEC_ID_V210) returns null

    31 May 2017, by VorpalSword

    I'm trying to read in a .mov file that has video encoded in V210 pixel format (AKA: uncompressed, YCbCr, 10 bits per component) for some image quality tests I'm doing.

    My tech stack is ffmpeg 3.3.1 / gcc / Darwin.

    The decode_video.c example compiles, links and runs just fine, but it has the codec ID hard-coded as AV_CODEC_ID_MPEG1VIDEO. I reasonably/naïvely thought that changing this to AV_CODEC_ID_V210 would get me a long way towards decoding my test files.

    Unfortunately not. The call to av_parser_init returns null.

    Can anyone tell me why? And how to fix this? Thanks.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    
    #include <libavcodec/avcodec.h>
    
    ... // irrelevant code omitted, see linked example for details
    
    avcodec_register_all();
    
    pkt = av_packet_alloc();
    if (!pkt)
        exit(1);
    
    /* set end of buffer to 0 (this ensures that no overreading happens for damaged MPEG streams) */
    memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);
    
    /* find the MPEG-1 video decoder */
    // codec = avcodec_find_decoder(AV_CODEC_ID_MPEG1VIDEO); this works!
    codec = avcodec_find_decoder(AV_CODEC_ID_V210);  // this injects my problem
    if (!codec) {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }
    
    printf ("codec->id: %d, %d\n", AV_CODEC_ID_V210, codec->id); // codec->id: 128, 128
    
    parser = av_parser_init(codec->id);
    if (!parser) {
        fprintf(stderr, "parser not found\n");
        exit(1);  // program exits here when AV_CODEC_ID_V210 used
    }
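
    A hedged explanation: libavcodec registers no AVCodecParser for V210; raw, fixed-size pixel formats generally have none, so av_parser_init() returning NULL is expected here rather than a bug. Below is a sketch of decoding without a parser, assuming the codec context c (with width and height set before avcodec_open2), frame, and the input FILE* f are set up as in decode_video.c, and that whole frames are read at a time:

    /* V210 packs 6 pixels into 16 bytes and pads each line to a multiple of
     * 48 pixels, so the exact frame size is computable from width and height. */
    int aligned_width = ((width + 47) / 48) * 48;
    int frame_size    = aligned_width * height * 8 / 3;

    AVPacket *pkt = av_packet_alloc();
    if (!pkt || av_new_packet(pkt, frame_size) < 0)
        exit(1);
    while (fread(pkt->data, 1, frame_size, f) == (size_t)frame_size) {
        /* No parser needed: each packet is already exactly one frame. */
        if (avcodec_send_packet(c, pkt) >= 0)
            while (avcodec_receive_frame(c, frame) >= 0) {
                /* frame->data / frame->linesize hold the decoded picture */
            }
    }
    av_packet_free(&pkt);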
    
  • FFmpeg - Calculating with Framerate?

    31 May 2017, by dazzafact

    I have to cut my video exactly. The video could have 25 fps or 30 fps (I don't know which). Is there a variable in ffmpeg for the frame rate, so I could calculate which frame I have to cut at? I only know the length of my video in seconds. I want something like this (for example, for an 80-second video):

    vf "fade=in:0:12,fade=out:(80*r):12"

    -vf "fade=in:0:12,fade=out:2500:12"
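
    Two hedged sketches. The actual frame rate can be read with ffprobe, and the fade filter also accepts time-based options (t, st, d, all in seconds), which sidesteps frame counting entirely; the 80-second length and half-second fades below are placeholders:

    ffprobe -v 0 -select_streams v:0 -show_entries stream=r_frame_rate -of csv=p=0 input.mp4

    ffmpeg -i input.mp4 -vf "fade=t=in:st=0:d=0.5,fade=t=out:st=79.5:d=0.5" output.mp4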