Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Capturing an image from an IP camera through ffmpeg on the command line or OpenCV gives a gray image

    4 December 2015, by rockycai

    I have an IP camera. I tried to grab an image through ffmpeg (like this: ffmpeg -rtsp_transport tcp -i "my rtsp stream address" -y -f image2 test.jpg). That works. I tried the same through OpenCV: no problem either. But when I open the stream in VLC and capture an image at the same time, I just get a gray image. Why? If I open the stream in VLC twice, that also works. It is only when viewing the RTSP stream and capturing an image together that I get a gray image. Could the IP camera be the reason? (normal image, gray image)

  • FFMPEG Understanding AVFrame::linesize (Audio)

    4 December 2015, by user3584691

    As per the documentation of AVFrame, for audio, linesize is the size in bytes of each plane, and only linesize[0] may be set. However, I am unsure whether linesize[0] holds the per-plane buffer size, or the complete buffer size that must be divided by the number of channels to get the per-plane size.

    For example, when I call int data_size = av_samples_get_buffer_size(NULL, iDesiredNoOfChannels, iAudioSamples, (AVSampleFormat)iDesiredFormat, 0); with iDesiredNoOfChannels = 2, iAudioSamples = 1024 and iDesiredFormat = AV_SAMPLE_FMT_FLTP, I get data_size = 8192. That is straightforward: each sample is 4 bytes and there are 2 channels, so the total memory is (1024 * 4 * 2) bytes. As such, linesize[0] should be 4096 for planar audio, and data[0] and data[1] should each be of size 4096. However, pFrame->linesize[0] is giving 8192, so to get the size per plane I have to compute pFrame->linesize[0] / pFrame->channels. Isn't this behaviour different from what the documentation suggests, or is my understanding of the documentation wrong?
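The arithmetic in the question can be sketched in plain Java (the class and method names here are illustrative, not part of any FFmpeg binding; the byte counts match AV_SAMPLE_FMT_FLTP, i.e. 4-byte float samples with one plane per channel):

```java
// Mirrors the buffer-size arithmetic described in the question for
// planar float audio: total size across all channels vs. one plane.
public class PlanarBufferMath {

    // Total bytes across all channels, as av_samples_get_buffer_size reports.
    static int totalBufferSize(int channels, int samples, int bytesPerSample) {
        return channels * samples * bytesPerSample;
    }

    // Bytes in a single plane (one channel's worth of samples).
    static int perPlaneSize(int samples, int bytesPerSample) {
        return samples * bytesPerSample;
    }

    public static void main(String[] args) {
        int channels = 2, samples = 1024, bytesPerSample = 4; // AV_SAMPLE_FMT_FLTP
        System.out.println("total     = " + totalBufferSize(channels, samples, bytesPerSample)); // 8192
        System.out.println("per plane = " + perPlaneSize(samples, bytesPerSample));              // 4096
    }
}
```

This is exactly the 8192-vs-4096 discrepancy the question describes: the observed linesize[0] equals the total, while the per-plane figure is total divided by the channel count.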

  • JavaCV 1.0 webcam video capture: Frame cannot be converted to IplImage

    4 December 2015, by user17795

    I am not able to record webcam video (i.e. capture and save an .avi or .mp4 file) using JavaCV / OpenCV / FFmpeg. What am I doing wrong?

    Version used (all 64-bit)

    Win 7, NetBeans 8.0.2, jdk1.7.0_10, JavaCV 1.0, OpenCV 3.0.0, ffmpeg-2.1.1-win64-shared.

    My system variables are set to

    C:\Program Files\Java\jdk1.7.0_10;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files (x86)\Intel\OpenCL SDK\2.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL SDK\2.0\bin\x64;C:\Program Files (x86)\MySQL\MySQL Fabric 1.5.4 & MySQL Utilities 1.5.4 1.5\;C:\Program Files (x86)\MySQL\MySQL Fabric 1.5.4 & MySQL Utilities 1.5.4 1.5\Doctrine extensions for PHP\;C:\opencv\build\x64\vc11\bin;C:\ffmpeg\bin

    After downloading and setting path variables I added jar files to Netbeans project

    C:\opencv\build\java\opencv-300.jar
    C:\javacv-1.0-bin\javacv-bin\videoinput.jar
    C:\javacv-1.0-bin\javacv-bin\videoinput-windows-x86_64.jar
    C:\javacv-1.0-bin\javacv-bin\videoinput-windows-x86.jar
    C:\javacv-1.0-bin\javacv-bin\opencv.jar
    C:\javacv-1.0-bin\javacv-bin\opencv-windows-x86_64.jar
    C:\javacv-1.0-bin\javacv-bin\opencv-windows-x86.jar
    C:\javacv-1.0-bin\javacv-bin\libfreenect.jar
    C:\javacv-1.0-bin\javacv-bin\libfreenect-windows-x86_64.jar
    C:\javacv-1.0-bin\javacv-bin\libfreenect-windows-x86.jar
    C:\javacv-1.0-bin\javacv-bin\libdc1394.jar
    C:\javacv-1.0-bin\javacv-bin\junit.jar
    C:\javacv-1.0-bin\javacv-bin\javacv.jar
    C:\javacv-1.0-bin\javacv-bin\javacpp.jar
    C:\javacv-1.0-bin\javacv-bin\hamcrest-core.jar
    C:\javacv-1.0-bin\javacv-bin\flycapture.jar
    C:\javacv-1.0-bin\javacv-bin\flycapture-windows-x86_64.jar
    C:\javacv-1.0-bin\javacv-bin\flycapture-windows-x86.jar
    C:\javacv-1.0-bin\javacv-bin\flandmark.jar
    C:\javacv-1.0-bin\javacv-bin\flandmark-windows-x86_64.jar
    C:\javacv-1.0-bin\javacv-bin\flandmark-windows-x86.jar
    C:\javacv-1.0-bin\javacv-bin\ffmpeg.jar
    C:\javacv-1.0-bin\javacv-bin\ffmpeg-windows-x86_64.jar
    C:\javacv-1.0-bin\javacv-bin\ffmpeg-windows-x86.jar
    C:\javacv-1.0-bin\javacv-bin\artoolkitplus.jar
    C:\javacv-1.0-bin\javacv-bin\artoolkitplus-windows-x86_64.jar
    C:\javacv-1.0-bin\javacv-bin\artoolkitplus-windows-x86.jar

    Problem 1:
    First program to capture webcam video (display and save to output.avi file) is as given below.

    It displays the webcam feed and creates output.avi. But after terminating the program, when I open output.avi in Media Player it doesn't display anything :)

    It doesn't work

    import java.io.File;
    import java.net.URL;
    import org.bytedeco.javacv.*;
    import org.bytedeco.javacpp.*;
    import org.bytedeco.javacpp.indexer.*;
    import static org.bytedeco.javacpp.opencv_core.*;
    import static org.bytedeco.javacpp.opencv_imgproc.*;
    import static org.bytedeco.javacpp.opencv_calib3d.*;
    import static org.bytedeco.javacpp.opencv_objdetect.*;
    
    public class JCVdemo3 {
        public static void main(String[] args) throws Exception {
    
            // Preload the opencv_objdetect module to work around a known bug.
            Loader.load(opencv_objdetect.class);
    
            // The available FrameGrabber classes include OpenCVFrameGrabber (opencv_videoio),
            // DC1394FrameGrabber, FlyCaptureFrameGrabber, OpenKinectFrameGrabber,
            // PS3EyeFrameGrabber, VideoInputFrameGrabber, and FFmpegFrameGrabber.
            FrameGrabber grabber = FrameGrabber.createDefault(0);
            grabber.start();
    
            // CanvasFrame, FrameGrabber, and FrameRecorder use Frame objects to communicate image data.
            // We need a FrameConverter to interface with other APIs (Android, Java 2D, or OpenCV).
            OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();
    
    
            IplImage grabbedImage = converter.convert(grabber.grab());
            int width  = grabbedImage.width();
            int height = grabbedImage.height();
    
    
            FrameRecorder recorder = FrameRecorder.createDefault("output.avi", width, height);
            recorder.start();
    
    
            CanvasFrame frame = new CanvasFrame("Some Title");
    
            while (frame.isVisible() && (grabbedImage = converter.convert(grabber.grab())) != null) {
               // cvWarpPerspective(grabbedImage, rotatedImage, randomR);
    
                Frame rotatedFrame = converter.convert(grabbedImage);
    
                //opencv_core.IplImage grabbedImage = grabber.grab();
                frame.showImage(rotatedFrame);
                recorder.record(rotatedFrame);
            }
            frame.dispose();
            recorder.stop();
            grabber.stop();
        }
    }
    

    Problem 2: When I run the following code

    opencv_core.IplImage grabbedImage = grabber.grab();  
    

    the message incompatible types: Frame cannot be converted to IplImage appears

    import java.io.File;
    import java.net.URL;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import org.bytedeco.javacv.*;
    import org.bytedeco.javacpp.*;
    import org.bytedeco.javacpp.indexer.*;
    import static org.bytedeco.javacpp.opencv_core.*;
    import static org.bytedeco.javacpp.opencv_imgproc.*;
    import static org.bytedeco.javacpp.opencv_calib3d.*;
    import static org.bytedeco.javacpp.opencv_objdetect.*;
    
    public class Demo {
        public static void main(String[] args) {  
         try {  
           OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);  
           grabber.start();  
           opencv_core.IplImage grabbedImage = grabber.grab();  
           CanvasFrame canvasFrame = new CanvasFrame("Video with JavaCV");  
           canvasFrame.setCanvasSize(grabbedImage.width(), grabbedImage.height());  
           grabber.setFrameRate(grabber.getFrameRate());  
    
           FFmpegFrameRecorder recorder = new FFmpegFrameRecorder("mytestvideo.mp4", grabber.getImageWidth(), grabber.getImageHeight());
           recorder.setFormat("mp4");  
           recorder.setFrameRate(30);  
           recorder.setVideoBitrate(10 * 1024 * 1024);  
    
    
           recorder.start();  
           while (canvasFrame.isVisible() && (grabbedImage = grabber.grab()) != null) {  
             canvasFrame.showImage(grabbedImage);  
             recorder.record(grabbedImage);  
           }  
           recorder.stop();  
           grabber.stop();  
           canvasFrame.dispose();  
    
         } catch (FrameGrabber.Exception ex) {  
           Logger.getLogger(JCVdemo.class.getName()).log(Level.SEVERE, null, ex);  
         } catch (FrameRecorder.Exception ex) {  
           Logger.getLogger(JCVdemo.class.getName()).log(Level.SEVERE, null, ex);  
         }  
       }
    }  
    

    The question is: what am I doing wrong?

    I am not able to record any sort of video, no matter what version of JavaCV/OpenCV I use.

    Please show me a working example that records video from a webcam, and tell me which JavaCV/OpenCV/FFmpeg versions are compatible with each other.

  • How to decode an audio stream to play it with AudioUnits?

    4 December 2015, by sadhi

    I have a PCM µ-law stream that I receive and want to play on iOS. To play audio in my app I made an AudioUnit implementation, but since it requires linear PCM, I must decode the stream first. For this I use ffmpeg with the following code:

            AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_PCM_MULAW);
    
            self.audio_codec_context = avcodec_alloc_context3(codec);
            self.audio_codec_context->codec_type = AVMEDIA_TYPE_AUDIO;
            self.audio_codec_context->sample_fmt = *codec->sample_fmts;
            self.audio_codec_context->sample_rate = 48000;
            self.audio_codec_context->channels    = 1;
            //open codec
            int result = avcodec_open2(self.audio_codec_context, codec,NULL);            
    
            //this should hold the raw data
            AVFrame * audioFrm = av_frame_alloc();
    
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.data = (unsigned char*)buf;
            pkt.size = ret;
            pkt.flags = AV_PKT_FLAG_KEY;
    
            int got_packet;
            result = avcodec_decode_audio4(self.audio_codec_context, audioFrm, &got_packet, &pkt);
    
            AVPacket encodedPkt;
            av_init_packet(&encodedPkt);
            encodedPkt.size = 0;
            encodedPkt.data = NULL;
    
            if (audioFrm != NULL) {
                self.audio_codec_context = NULL;
                AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_PCM_S16LE);
    
                self.audio_codec_context = avcodec_alloc_context3(codec);
                self.audio_codec_context->codec_type = AVMEDIA_TYPE_AUDIO;
                self.audio_codec_context->sample_fmt = *codec->sample_fmts;
                self.audio_codec_context->bit_rate = 64000;
                self.audio_codec_context->sample_rate = 48000;
                self.audio_codec_context->channels    = 1;
    
    
                int result = avcodec_open2(self.audio_codec_context, codec,NULL);
                if (result < 0) {
                    NSLog(@"avcodec_open2 returned %i", result);
                }
    
    
                result = avcodec_encode_audio2(self.audio_codec_context, &encodedPkt, audioFrm, &got_packet);
    
                if (result < 0) {
                    NSLog(@"avcodec_encode_audio2 returned %s", av_err2str (result));
                    continue;
                }
            }
    

    For some reason, no matter what I do, the audio that comes out at the end is all noise. So my question is: how should I decode my audio stream to play it with AudioUnits?
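For reference, expanding a single G.711 µ-law byte to 16-bit linear PCM (the decode step the question is after) is simple arithmetic. A minimal sketch of the standard G.711 expansion, independent of ffmpeg, is given below; the class name is illustrative:

```java
// Standard G.711 mu-law to 16-bit linear PCM expansion. BIAS is the
// 0x84 offset used by G.711's segmented companding scheme.
public class MuLaw {
    private static final int BIAS = 0x84;

    static short decode(byte muLawByte) {
        int u = ~muLawByte & 0xFF;      // mu-law bytes are transmitted complemented
        int exponent = (u >> 4) & 0x07; // 3-bit segment number
        int mantissa = u & 0x0F;        // 4-bit step within the segment
        int pcm = (((mantissa << 3) + BIAS) << exponent) - BIAS;
        return (short) ((u & 0x80) != 0 ? -pcm : pcm);
    }

    public static void main(String[] args) {
        System.out.println(decode((byte) 0xFF)); // encoded silence -> 0
        System.out.println(decode((byte) 0x80)); // positive full scale -> 32124
    }
}
```

A decoder like this can serve as a sanity check against whatever ffmpeg's AV_CODEC_ID_PCM_MULAW path produces: if the raw bytes expanded this way sound correct but the ffmpeg output is noise, the problem is in the codec-context setup rather than the stream itself.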

  • Combine 2 .FLV videos

    4 December 2015, by Rune

    For the last 4 hours I've been trying to combine 2 .flv files into one using ffmpeg (or well, just C# in general).

    Here's what I've got so far: I converted both videos to .mp4:

    "-i " + videoFileLocation + " -c copy -copyts " + newConvertedFileLocation
    

    I then combined the two .mp4 files into a single one using the following (txtPath is the text file listing the two mp4 file locations):

    "-f concat -i " + txtPath + " -c copy " + saveLocation
    

    This ends up with an mp4 file which contains the combination of both the videos BUT with the following fault:

    The length of the first video is 0:05

    The length of the second video is 6:11

    However, the length of the combined video is for some reason 07:51, so the video runs at a slower pace than it should.

    Furthermore, the audio is out of sync with the video.

    What am I doing wrong here?

    I haven't used ffmpeg before and I just want to get this working.

    Any help is greatly appreciated!

    As requested here is the output from running 'ffmpeg -i input1.flv -i input2.flv':

    ffmpeg version 2.7 Copyright (c) 2000-2015 the FFmpeg developers
      built with gcc 4.9.2 (GCC)...

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'filepath\input1.flv':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        creation_time   : 1970-01-01 00:00:00
        encoder         : Lavf53.24.2
      Duration: 00:00:05.31, start: 0.000000, bitrate: 1589 kb/s
        Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1205 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
        Metadata:
          creation_time   : 1970-01-01 00:00:00
          handler_name    : VideoHandler
        Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 384 kb/s (default)
        Metadata:
          creation_time   : 1970-01-01 00:00:00
          handler_name    : SoundHandler
    Input #1, flv, from 'filepath\input2.flv':
      Metadata:
        audiosize       : 4476626
        canSeekToEnd    : true
        datasize        : 23876671
        videosize       : 19004263
        hasAudio        : true
        hasCuePoints    : false
        hasKeyframes    : true
        hasMetadata     : true
        hasVideo        : true
        lasttimestamp   : 372
        metadatacreator : flvtool++ (Facebook, Motion project, dweatherford)
        totalframes     : 9298
        encoder         : Lavf56.36.100
      Duration: 00:06:11.92, start: 0.080000, bitrate: 513 kb/s
        Stream #1:0: Video: h264 (High), yuv420p, 646x364 [SAR 1:1 DAR 323:182], 400 kb/s, 25 fps, 25 tbr, 1k tbn, 50 tbc
        Stream #1:1: Audio: aac (LC), 44100 Hz, stereo, fltp, 96 kb/s
    At least one output file must be specified
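Taking the two input durations from the probe output (00:00:05.31 and 00:06:11.92), a straight concatenation should simply sum them; the sketch below quantifies how far the reported 07:51 result drifts from that expectation (values from the question; the class name is illustrative):

```java
// Checks the concat duration arithmetic from the question: inputs of
// 5.31 s and 371.92 s should concatenate to about 377 s (~06:17),
// not the observed 07:51 (471 s).
public class ConcatDurationCheck {
    static double expectedSeconds(double firstInput, double secondInput) {
        return firstInput + secondInput;
    }

    public static void main(String[] args) {
        double expected = expectedSeconds(5.31, 371.92); // ~377.23 s, i.e. about 06:17
        double observed = 7 * 60 + 51;                   // 471.0 s reported in the question
        System.out.printf("expected %.2f s, observed %.2f s, drift %.2f s%n",
                expected, observed, observed - expected);
    }
}
```

A drift of roughly 94 seconds on a 377-second timeline is consistent with the question's observation that playback runs slower than it should and the audio falls out of sync.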