Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • real time decoding of SHVC bit streams

    8 May 2018, by userDtrm

    Does anyone know of an open-source decoder that can perform real-time SHVC bit stream decoding? OpenHEVC states that it can decode scalable HEVC bit streams, but I was not able to decode an SHVC bit stream generated by the SHM 7.0 reference encoder.

    Also, does ffmpeg support the scalable extension of HEVC?

    Thanks.

  • Get correct framerate from ffmpeg

    8 May 2018, by JaSHin

    Good day,

    I have a problem: I need to get the correct framerate from the ffmpeg libraries.

    I tried to use

    pFormatCtx->streams[videoStream]->avg_frame_rate.num
    

    The value of avg_frame_rate.num is 2997, but when I dumped the meta info I got:

    Input #0, avi, from '/test.avi':
      Metadata:
        encoder         : MEncoder SVN-r33883(20110719-gcc4.5.2)
      Duration: 00:49:47.70, start: 0.000000, bitrate: 1294 kb/s
        Stream #0:0: Video: mpeg4 (Advanced Simple Profile) (XVID / 0x44495658), yuv420p, 856x480 [SAR 1:1 DAR 107:60], 1090 kb/s, SAR 491520:492521 DAR 8192:4603, 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
        Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 48000 Hz, stereo, s16p, 192 kb/s
    2015-09-20 15:47:02.377 TV3[21607:769601] ready to start audio
    

    The dumped info shows a frame rate of 23.98 fps. Which value is correct, and why are they different?
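
    A minimal sketch of the likely explanation, assuming pFormatCtx and videoStream are set up as in the snippet above: avg_frame_rate is an AVRational, so .num alone is only the numerator. Dividing the numerator by the denominator (which av_q2d() does) gives the frames-per-second value the dump prints, e.g. 2997/125 ≈ 23.976 (the exact denominator here is an assumption, not taken from the question).

    extern "C" {                       // ffmpeg headers are C; needed when compiling as C++
    #include <libavformat/avformat.h>  // AVFormatContext, AVStream
    #include <libavutil/rational.h>    // AVRational, av_q2d()
    }

    // Sketch: return the stream's frame rate as a double rather than just the numerator.
    static double stream_fps(const AVFormatContext *fmt, int videoStream)
    {
        AVRational fr = fmt->streams[videoStream]->avg_frame_rate;
        if (fr.num == 0 || fr.den == 0)                 // avg_frame_rate may be unset
            fr = fmt->streams[videoStream]->r_frame_rate;
        return av_q2d(fr);                              // num / den, e.g. 2997/125 ≈ 23.976
    }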

  • Undefined symbols av_register_all()

    8 May 2018, by JaSHin

    Good day,

    I am a beginner with Objective-C and the Xcode IDE. I am trying to use ffmpeg in my iOS application. I cloned https://github.com/kewlbear/FFmpeg-iOS-build-script and built for arm64 and x86_64.

    When I try to build the app, linking fails with

    Ld /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator/CPP3.app/CPP3 normal x86_64
    cd /Volumes/sedy/xcode/CPP3
    export IPHONEOS_DEPLOYMENT_TARGET=9.1
    export PATH="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
    /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -arch x86_64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator9.1.sdk -L/Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator -L/Volumes/sedy/xcode/CPP3/CPP3/ffmpeg/lib -F/Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator -filelist /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Intermediates/CPP3.build/Debug-iphonesimulator/CPP3.build/Objects-normal/x86_64/CPP3.LinkFileList -Xlinker -rpath -Xlinker @executable_path/Frameworks -mios-simulator-version-min=9.1 -Xlinker -objc_abi_version -Xlinker 2 -stdlib=libc++ -fobjc-arc -fobjc-link-runtime -lavcodec -lavdevice -lavfilter -lavformat -lavutil -lswresample -lswscale -framework AVFoundation -liconv -lbz2 -Xlinker -dependency_info -Xlinker /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Intermediates/CPP3.build/Debug-iphonesimulator/CPP3.build/Objects-normal/x86_64/CPP3_dependency_info.dat -o /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator/CPP3.app/CPP3
    
    Undefined symbols for architecture x86_64:
      "av_register_all()", referenced from:
          Decoder::Decoder() in ViewController.o
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    

    There is a zipped project on OneDrive at http://1drv.ms/1KkPAia, because that is the best way to show my problem.

    Please help me and explain why this problem arose.

    Thanks very much.
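
    A likely cause, offered as a sketch rather than a confirmed diagnosis: the missing symbol is printed as av_register_all(), i.e. with C++ name mangling, which usually means the ffmpeg headers were included from a C++ or Objective-C++ translation unit (here ViewController, where Decoder::Decoder() lives) without C linkage. Wrapping the includes in extern "C" lets the call resolve against the plain C symbol exported by the static libraries. Decoder below is a hypothetical stand-in, only to show where the call sits:

    extern "C" {                       // give the C headers C linkage in a .mm / .cpp file
    #include <libavformat/avformat.h>
    }

    // Hypothetical minimal Decoder, assumed to resemble the one in ViewController.
    struct Decoder {
        Decoder() { av_register_all(); }   // now links against the C symbol, not a mangled one
    };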

  • Can't read video using VideoCapture in Opencv

    8 May 2018, by batuman

    I have installed OpenCV 2.4.13.6 on Ubuntu 16.04. I have ffmpeg, and during the OpenCV build I set WITH_FFMPEG to ON. My ffmpeg works: if I type ffmpeg in a terminal, I get

    ffmpeg version N-90982-gb995ec0 Copyright (c) 2000-2018 the FFmpeg developers
    
      built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.9) 20160609
      configuration: --prefix=/home/nyan/ffmpeg_build --enable-shared --extra-cflags=-I/home/nyan/ffmpeg_build/include --extra-ldflags=-L/home/nyan/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/home/nyan/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
      libavutil      56. 18.100 / 56. 18.100
      libavcodec     58. 19.100 / 58. 19.100
      libavformat    58. 13.101 / 58. 13.101
      libavdevice    58.  4.100 / 58.  4.100
      libavfilter     7. 21.100 /  7. 21.100
      libswscale      5.  2.100 /  5.  2.100
      libswresample   3.  2.100 /  3.  2.100
      libpostproc    55.  2.100 / 55.  2.100
    Hyper fast Audio and Video encoder
    usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
    

    Then I added the ffmpeg paths to .bashrc as

    export PATH=/home/bin${PATH:+:${PATH}}
    export PATH=/home/ffmpeg_build${PATH:+:${PATH}}
    export PATH=/home/ffmpeg_build/include${PATH:+:${PATH}}
    export LD_LIBRARY_PATH=/home/ffmpeg_build/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
    

    Among my OpenCV libraries I have libopencv_video.so, so video input/output should be fine.

    The following program gives me "can't read video". What could be the reason?

    I also tried VideoCapture cap(0); it gives me the same error. What is wrong?

    #include <opencv2/opencv.hpp>
    #include <iostream>
    using namespace cv;
    using namespace std;

    int main(void){

        VideoCapture cap("IMG_5715.MOV"); // open the video file
        if(!cap.isOpened())  // check if we succeeded
        {
            cout << "can't read video" << endl;
            return -1;
        }

        while(1){
            Mat frame;
            cap >> frame;          // capture frame-by-frame
            if (frame.empty())     // if the frame is empty, break immediately
                break;
            imshow("Frame", frame);
            waitKey(1);
        }

        cap.release();
        return 0;
    }
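
    One quick check worth running alongside this (a sketch, not part of the original program): VideoCapture can only read the .MOV file through ffmpeg if the FFMPEG backend was actually compiled into the OpenCV libraries, and cv::getBuildInformation() reports that directly.

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        // Print the build summary; the "Video I/O" section should list FFMPEG as YES.
        // If it says NO, VideoCapture cannot open the file via ffmpeg even though the
        // ffmpeg command-line tool works on its own.
        std::cout << cv::getBuildInformation() << std::endl;
        return 0;
    }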
    
  • How to fetch both live video frame and timestamp from ffmpeg to python on Windows

    8 May 2018, by vijiboy

    OpenCV does not provide timestamps for a live camera stream on Windows, and my computer vision algorithm needs them, so I searched for an alternative and found ffmpeg together with this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/ The solution there reads video frames from ffmpeg's standard output (stdout) stream; I extended it to read the standard error (stderr) stream as well.

    Working through the Python code on Windows, I do receive the video frames from ffmpeg's stdout, but stderr freezes after delivering the showinfo video-filter details (the timestamp) for the first frame.

    I recall seeing somewhere on an ffmpeg forum that video filters like showinfo are bypassed when output is redirected. Is this why the following code does not work as expected?

    Expected: it should write the video frames to disk and print the timestamp details.
    Actual: it writes the video frames but does not get the timestamp (showinfo) details.

    Here's the code I tried:

    import subprocess as sp
    import numpy
    import cv2
    
    command = [ 'ffmpeg', 
                '-i', r'e:\sample.wmv', # raw string so the backslash is not treated as an escape
                '-pix_fmt', 'rgb24',
                '-vcodec', 'rawvideo',
                '-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
                '-an','-sn', #-an, -sn disables audio and sub-title processing respectively
                '-f', 'image2pipe', '-'] # we need to output to a pipe
    
    pipe = sp.Popen(command, stdout = sp.PIPE, stderr = sp.PIPE) # TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes??? 
    
    for i in range(10):
        raw_image = pipe.stdout.read(1280*720*3)
        img_info = pipe.stderr.read(244) # 244 characters is the current output of showinfo video filter
        print "showinfo output", img_info
        image1 =  numpy.fromstring(raw_image, dtype='uint8')
        image2 = image1.reshape((720,1280,3))  
    
        # write video frame to file just to verify
        videoFrameName = 'Video_Frame{0}.png'.format(i)
        cv2.imwrite(videoFrameName,image2)
    
        # throw away the data in the pipe's buffer.
        pipe.stdout.flush()
        pipe.stderr.flush()
    

    So how can I still get the frame timestamps from ffmpeg into the Python code, so that they can be used in my computer vision algorithm?