Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • Parsing the STDERR output of node.js child_process line by line

    3 January 2012, by primer

    I'm writing a simple online conversion tool using FFmpeg and Node.js. I'm trying to figure out how to parse each line of the conversion output received from FFmpeg and only display pertinent results client-side in the browser. In my case I want the encoding time counter that FFmpeg prints on the command line.

    My function thus far is:

    function metric(ffmpeg, res) {
    
      var temp = '';
    
      ffmpeg.stdout.on('data', function(data) {
         res.writeHead(200, {'content-type': 'text/html'});
         res.write('received upload:\n\n');
         console.log(data);
      });
    
      ffmpeg.stderr.on('data', function (data) {
         temp += data.toString();
         var lines = temp.split('\n');
    
         //for debugging purposes
         for (var i = 0; i < lines.length; i++) {
            console.log(lines[i]);
         }
      });
    }

    What this ends up returning is multiple arrays, each of which includes the data from the previous array as well as the next data chunk. For example, the function returns array 1:{0=>A, 1=>B}, array 2:{0=>A, 1=>B, 2=>C}, array 3:{0=>A, 1=>B, 2=>C, 3=>D}, and so on.

    I'm quite new to Node so I'm probably missing something simple. Any guidance would be much appreciated!
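
    One way to avoid the accumulation is to keep only the unfinished chunk in an outer buffer and process completed lines as they arrive. Below is a minimal sketch (variable names are illustrative, and note that FFmpeg separates its progress updates with carriage returns rather than newlines):

        // Sketch: buffer stderr chunks and handle one complete line at a time.
        var buffer = '';

        ffmpeg.stderr.on('data', function (data) {
          buffer += data.toString();
          var lines = buffer.split(/\r\n|\r|\n/);
          buffer = lines.pop();                    // keep the trailing partial line

          lines.forEach(function (line) {
            var match = line.match(/time=(\S+)/);  // the encoding time counter
            if (match) {
              res.write('progress: ' + match[1] + '\n');
            }
          });
        });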

  • How to encode video with ffmpeg for playback on Android?

    3 January 2012, by Sean

    I've got a C++ library that encodes video in real time from webcams to MP4 files (H.264). The settings I've got are as follows:

        codecContex->profile=FF_PROFILE_H264_BASELINE; //Baseline
        codecContex->gop_size=250;
        codecContex->max_b_frames=0;
        codecContex->max_qdiff=4;
        codecContex->me_method=libffmpeg::ME_HEX;
        codecContex->me_range=16;
        codecContex->qmin=10;
        codecContex->qmax=51;
        codecContex->qcompress=0.6;
        codecContex->keyint_min=10;
        codecContex->trellis=0;
        codecContex->level=13; //Level 1.3
        codecContex->weighted_p_pred = 2;
        codecContex->flags2|=CODEC_FLAG2_WPRED+CODEC_FLAG2_8X8DCT;
    

    This creates MP4 files that play on iOS devices and on Windows Phone 7 devices but not on Android devices. I've read that Android only supports movies encoded with the baseline profile. These settings should produce a baseline movie but when I look at the generated MP4 file with MediaInfo it says it's AVC(High@L1.3). This might be why it's not working but I can't seem to get it to generate something with AVC(Baseline@L1.3)...

    If I remove the last line:

    codecContex->flags2|=CODEC_FLAG2_WPRED+CODEC_FLAG2_8X8DCT;
    

    Then MediaInfo reports the file as being "AVC(Main@L1.3)" instead - but those flags are part of the Baseline profile!
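
    For reference, below is a minimal sketch of how one might ask the libx264 wrapper for a strict Baseline stream. It assumes a libavcodec build that exposes the private "profile" option, and uses ctx as a stand-in for the question's codecContex; x264 reports Main or High as soon as features outside Baseline (weighted prediction, CABAC, B-frames, the 8x8 transform) are enabled.

        // Sketch only: ctx stands in for the question's codecContex.
        extern "C" {
        #include <libavcodec/avcodec.h>
        #include <libavutil/opt.h>
        }

        static void request_baseline(AVCodecContext *ctx)
        {
            ctx->profile         = FF_PROFILE_H264_BASELINE;
            ctx->level           = 13;                     // Level 1.3
            ctx->max_b_frames    = 0;                      // Baseline has no B-frames
            ctx->coder_type      = FF_CODER_TYPE_VLC;      // CAVLC; CABAC needs Main/High
            ctx->weighted_p_pred = 0;                      // weighted P-pred is Main/High
            // Keep the weighted-prediction and 8x8-transform flags cleared.
            ctx->flags2 &= ~(CODEC_FLAG2_WPRED | CODEC_FLAG2_8X8DCT);

            // If available, the private "profile" option constrains the encoder too.
            av_opt_set(ctx->priv_data, "profile", "baseline", 0);
        }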

  • Creating thumbnails with FFmpeg

    3 January 2012, by Calin-Andrei Burloiu

    I am using FFmpeg to extract thumbnails from specific positions of video files.

    I found on the web two approaches to do this:

    1. With the -ss (seek) parameter before the -i (input) parameter:

      ffmpeg -y -ss $SEEK_POINT -i input.ogv -vcodec mjpeg -vframes 1 -an -s 120x90 -f rawvideo output.jpg

    2. With the -ss (seek) parameter after the -i (input) parameter:

      ffmpeg -y -i input.ogv -vcodec mjpeg -ss $SEEK_POINT -vframes 1 -an -s 120x90 -f rawvideo output.jpg

    The first method generates a bad thumbnail with gray spots, but works very fast. The error returned is [theora @ 0x8097240] vp3: first frame not a keyframe.

    The second method always works, but it shows an error which causes the extraction to take a lot of time. The amount of time is not fixed and, as I noticed, depends on the seek point. Sometimes it takes a few seconds and other times several minutes to extract a thumbnail. I get the error Buffering several frames is not supported. Please consume all available frames before adding a new one. in the following output:

    Input #0, ogg, from 'input.ogv':
      Duration: 00:21:52.76, start: 0.000000, bitrate: 844 kb/s
        Stream #0.0: Video: theora, yuv420p, 800x600 [PAR 4:3 DAR 16:9], 25 fps, 25 tbr, 25 tbn, 25 tbc
        Stream #0.1: Audio: vorbis, 44100 Hz, stereo, s16, 192 kb/s
        Metadata:
          ENCODER         : Lavf52.102.0
    Incompatible pixel format 'yuv420p' for codec 'mjpeg', auto-selecting format 'yuvj420p'                                                                         
    [buffer @ 0x9250840] w:800 h:600 pixfmt:yuv420p                                 
    [scale @ 0x92508a0] w:800 h:600 fmt:yuv420p -> w:120 h:90 fmt:yuvj420p flags:0x4
    Output #0, rawvideo, to 'output.jpg':
      Metadata:
        encoder         : Lavf53.2.0
        Stream #0.0: Video: mjpeg, yuvj420p, 120x90 [PAR 4:3 DAR 16:9], q=2-31, 200 kb/s, 90k tbn, 25 tbc
    Stream mapping:
      Stream #0.0 -> #0.0
    Press ctrl-c to stop encoding
    [buffer @ 0x9250840] Buffering several frames is not supported. Please consume all available frames before adding a new one.                                    
    frame=    0 fps=  0 q=0.0 size=       0kB time=10000000000.00 bitrate=   0.0kbit
    Last message repeated 15448 times
    frame=    1 fps=  0 q=3.4 Lsize=       3kB time=0.04 bitrate= 598.8kbits/s    
    video:3kB audio:0kB global headers:0kB muxing overhead 0.000000%
    

    How can I extract thumbnails without any problems using FFmpeg from a custom position of a video regardless of the input format?
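
    A commonly suggested compromise is to split the seek: a coarse, fast -ss before -i followed by a short, accurate -ss after it. A sketch, where $COARSE and $FINE are hypothetical placeholders that together add up to $SEEK_POINT (for example, $COARSE a few seconds before the target and $FINE the remainder):

      ffmpeg -y -ss $COARSE -i input.ogv -vcodec mjpeg -ss $FINE -vframes 1 -an -s 120x90 -f rawvideo output.jpg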

  • Choppy video playback

    2 January 2012, by Satheesh

    I have developed a live wallpaper application for Android that uses a video as the wallpaper. Since a video cannot be set as a wallpaper directly, I decode it with the ffmpeg library and render it frame by frame with OpenGL; all of this works fine. Now I want to play two different videos, one for landscape and one for portrait orientation. When the device is tilted and the wallpaper has to switch to the other video, playback is choppy for about a second. How can I resolve this?

  • Monitor FFmpeg encoding in Java

    1 January 2012, by Ron

    I have a video that I want to encode with FFmpeg, and I want to drive the encoding from Java so that I can monitor the encoding progress.

    How can I run an FFmpeg process from Java and monitor its output (to calculate the progress of the encoding)?
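
    Below is a minimal sketch of one way to do this with ProcessBuilder, assuming ffmpeg is on the PATH and that input.mp4/output.mp4 stand in for the real files. FFmpeg writes its progress (the time= counter) to stderr, separated by carriage returns, so the reader treats both \r and \n as line boundaries; to turn the counter into a percentage you would compare it against the input duration.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class FfmpegMonitor {
            // Matches the "time=..." field in FFmpeg's progress lines.
            private static final Pattern TIME = Pattern.compile("time=(\\S+)");

            public static void main(String[] args) throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder(
                        "ffmpeg", "-y", "-i", "input.mp4", "-vcodec", "libx264", "output.mp4");
                pb.redirectErrorStream(true);            // read progress and output together
                Process p = pb.start();

                BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
                StringBuilder line = new StringBuilder();
                int c;
                while ((c = in.read()) != -1) {
                    if (c == '\r' || c == '\n') {        // a status line is complete
                        Matcher m = TIME.matcher(line);
                        if (m.find()) {
                            System.out.println("encoded so far: " + m.group(1));
                        }
                        line.setLength(0);
                    } else {
                        line.append((char) c);
                    }
                }
                System.out.println("ffmpeg exited with code " + p.waitFor());
            }
        }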