Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • FFmpeg 1.0 causing audio playback issues

    7 November 2013, by Jona

    I have an audio streamer based on ffplay. It works great using ffmpeg 0.11, but when I use ffmpeg 1.0 or the latest 1.2 release, the audio seems to be decoded or played back weirdly.

    Essentially, mp3 streams sound like chipmunks, and with aac streams I hear tons of static, can barely hear the actual stream, and what does come through sounds slow.

    Any ideas about the possible changes in ffmpeg that could have caused these types of issues?

    A similar issue was posted here, but with no actual answer about what is going on. Supposedly this code reproduces the same issue.

    UPDATE 1:
    I have done a step-by-step copy from ffplay and still no luck! :/ The channel count and sampling rate look correct, so something internal must be returning a weird decoded format?
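
    One thing worth checking (a guess, not a confirmed diagnosis): starting around FFmpeg 0.11/1.0 the native mp3 and aac decoders switched to returning planar float samples (fltp) instead of packed 16-bit, and playback code that still copies frame->data[0] straight to the audio device produces exactly this kind of chipmunk/static output. A quick probe shows what the decoder reports; "input.mp3" below is just a placeholder for whatever file or URL the streamer opens:

    # sample_fmt "fltp" means planar float; older builds reported a packed format here
    ffprobe -v error -select_streams a:0 -show_streams input.mp3 | grep -E 'sample_fmt|sample_rate|channels'

    If the format really is planar, ffplay in those releases converts it with libswresample before handing samples to SDL, so that conversion step would be the first place to compare against.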

  • Lossless video codec squashing dim pixels using avconv

    7 November 2013, by Noah

    I am using avconv to convert a raw grayscale avi video to huffyuv in an mkv container. I've read that huffyuv is "mathematically lossless", which is precisely what I want. avprobe on the input file gives

    Input #0, avi, from 'myvid.avi':   Duration: 00:00:32.94, start: 0.000000, bitrate: 129167 kb/s
        Stream #0.0: Video: rawvideo, pal8, 328x246, 200 fps, 0.08 tbr, 200 tbn, 200 tbc
    

    The movie has high intensity (approx 150-250 in 8 bits) and low intensity (1-9) elements that I would like to preserve. However if I run

    avconv -y -an -i myvid.avi -r 200 -c:v huffyuv av_test.mkv
    

    I get an av_test.mkv where the low-intensity details have vanished. In fact I was able to plot a comparison for the two videos (linked image: "Squashing of dim pixels").

    So avconv is deciding I don't need those critical dim pixels. I could just add, say, 15 to all pixel values, but then I would saturate my bright pixels and there's no guarantee the cutoff value is the same for all videos. I do some downstream processing on the output where I really need pixel values to not change when I convert video formats. Any insights as to how to get avconv or huffyuv to actually save my video without loss?
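
    A possible explanation (unverified): the loss may come not from huffyuv's entropy coding but from the pixel-format conversion in front of it. The pal8 input has to be converted to something the encoder accepts (typically yuv422p), and a limited-range YUV conversion can clip or squash the lowest code values. A sketch of what one might try, assuming the source really is plain 8-bit grayscale; the exact flags depend on the local avconv build:

    # keep the data out of YUV entirely (if this build's huffyuv encoder accepts an RGB format)
    avconv -y -an -i myvid.avi -r 200 -c:v huffyuv -pix_fmt rgb24 av_test.mkv
    # or use ffv1, which can store 8-bit grayscale directly
    avconv -y -an -i myvid.avi -r 200 -c:v ffv1 -pix_fmt gray av_test.mkv

    # then verify bit-exactness by decoding both to the same raw format and comparing
    avconv -y -an -i myvid.avi -f rawvideo -pix_fmt gray orig.raw
    avconv -y -an -i av_test.mkv -f rawvideo -pix_fmt gray conv.raw
    cmp orig.raw conv.raw && echo bit-exact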

  • ffmpeg : save separate frames as still gifs

    7 November 2013, by Eugene M

    The question is simple: I don't want ffmpeg to create an animated GIF from a given video stream; I want separate frames, each in GIF format. But when I set the output file to something like frame%09d.gif, ffmpeg tends to create an animation (and stores it in a file literally named frame%09d.gif). The same happens with the -f gif option.

    Of course, I could save PNGs and use ImageMagick's convert utility to transform them to GIFs, but I don't want any additional invocation overhead because I'm dealing with live streams and am going to crunch large amounts of data.

    Here is what I do, nothing special:

    ffmpeg -i http://brightcove03-f.akamaihd.net/valgbodmandag1378107345_1_300k@80362 -f gif -y frame_%09d.gif
    
    ffmpeg version N-54643-g15cee5e Copyright (c) 2000-2013 the FFmpeg developers
      built on Jul 11 2013 03:35:11 with gcc 4.7.3 (GCC)
      configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
      libavutil      52. 39.100 / 52. 39.100
      libavcodec     55. 18.102 / 55. 18.102
      libavformat    55. 12.101 / 55. 12.101
      libavdevice    55.  3.100 / 55.  3.100
      libavfilter     3. 80.100 /  3. 80.100
      libswscale      2.  3.100 /  2.  3.100
      libswresample   0. 17.102 /  0. 17.102
      libpostproc    52.  3.100 / 52.  3.100
    [flv @ 00000000002cb700] Stream discovered after head already parsed
    Input #0, flv, from 'http://brightcove03-f.akamaihd.net/valgbodmandag1378107345_1_300k@80362':
      Metadata:
        encoder         : Lavf54.6.100
      Duration: 00:00:00.00, start: 0.000000, bitrate: N/A
        Stream #0:0: Video: h264 (Constrained Baseline), yuv420p, 480x270 [SAR 1:1 DAR 16:9], 25 tbr, 1k tbn, 50 tbc
        Stream #0:1: Audio: aac, 44100 Hz, stereo, fltp, 128 kb/s
        Stream #0:2: Data: none
    [swscaler @ 0000000004d051e0] No accelerated colorspace conversion found from yuv420p to bgr8.
    Output #0, gif, to 'frame_%09d.gif':
      Metadata:
        encoder         : Lavf55.12.101
        Stream #0:0: Video: gif, bgr8, 480x270 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 100 tbn, 25 tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 -> gif)
    Press [q] to stop, [?] for help
    frame=  141 fps=130 q=-1.0 Lsize=    4960kB time=00:00:05.68 bitrate=7153.1kbits/s
    video:5100kB audio:0kB subtitle:0 global headers:0kB muxing overhead -2.743247%
    

    In the end I get a single file named "frame_%09d.gif", but instead I want several files: "frame_000000001.gif", "frame_000000002.gif", etc.

    Any ideas? Thanks in advance.
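
    For what it's worth, a hedged guess at the behaviour: the gif muxer always writes a single (possibly animated) stream to one output file and treats the %09d in the name literally, while per-frame files come from the image2 muxer, which expands the pattern itself. Something along these lines might do it (untested against this particular stream):

    ffmpeg -i http://brightcove03-f.akamaihd.net/valgbodmandag1378107345_1_300k@80362 -f image2 -c:v gif -y frame_%09d.gif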

  • Android MediaRecorder setCaptureRate() and video playback speed

    7 November 2013, by spitzanator

    I've got a MediaRecorder recording video, and I'm very confused by the effect of setCaptureRate().

    Specifically, I prepare my MediaRecorder as follows:

    mMediaRecorder = new MediaRecorder();
    mCamera.stopPreview();
    mCamera.unlock();
    mMediaRecorder.setCamera(mCamera);
    mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    mMediaRecorder.setProfile(CamcorderProfile.QUALITY_TIME_LAPSE_480P);
    mMediaRecorder.setCaptureRate(30f);
    mMediaRecorder.setOrientationHint(270);
    mMediaRecorder.setOutputFile(...);
    mMediaRecorder.setPreviewDisplay(...);
    mMediaRecorder.prepare();
    

    I record for five seconds (with a CountDownTimer, but that's irrelevant), and this is the file that gets generated:

    $ ffmpeg -i ~/CaptureRate30fps.mp4 
    ...
    Seems stream 0 codec frame rate differs from container frame rate: 180000.00 (180000/1) -> 30.00 (30/1)
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/mspitz/CaptureRate30fps.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 0
        compatible_brands: isom3gp4
        creation_time   : 2013-06-04 00:52:00
      Duration: 00:00:02.59, start: 0.000000, bitrate: 5238 kb/s
        Stream #0.0(eng): Video: h264 (Baseline), yuv420p, 720x480, 5235 kb/s, PAR 65536:65536 DAR 3:2, 30 fps, 30 tbr, 90k tbn, 180k tbc
        Metadata:
          creation_time   : 2013-06-04 00:52:00
    

    Note that the Duration is just about 3 seconds. The video also plays much faster, as if it were 5 seconds of video crammed into 3.

    Now, if I record by preparing my MediaRecorder exactly as above but omitting the setCaptureRate(30f) line, I get a file like this:

    $ ffmpeg -i ~/NoSetCaptureRate.mp4 
    ...
    Seems stream 0 codec frame rate differs from container frame rate: 180000.00 (180000/1) -> 90000.00 (180000/2)
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/mspitz/NoSetCaptureRate.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 0
        compatible_brands: isom3gp4
        creation_time   : 2013-06-04 00:50:41
      Duration: 00:00:04.87, start: 0.000000, bitrate: 2803 kb/s
        Stream #0.0(eng): Video: h264 (Baseline), yuv420p, 720x480, 2801 kb/s, PAR 65536:65536 DAR 3:2, 16.01 fps, 90k tbr, 90k tbn, 180k tbc
        Metadata:
          creation_time   : 2013-06-04 00:50:41
    

    Note that the Duration is as expected, about 5 seconds. The video also plays at a normal speed.

    I'm using setCaptureRate(30f) because 30 frames per second is the value of my CamcorderProfile's videoFrameRate. On my Galaxy Nexus S2 (4.2.1), omitting setCaptureRate() is fine, but when I tested on a Galaxy Nexus S3 (4.1.1), omitting setCaptureRate() results in the ever-helpful "start failed -22" error when I call mMediaRecorder.start().

    So, what am I missing? I thought that the capture rate and the video frame rate were independent, but it's clear that they're not. Is there a way to determine programmatically what I need to set the capture rate to in order to ensure that my video plays back at 1x speed?
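
    One way to read the two ffmpeg dumps above (a back-of-the-envelope interpretation, not a verified answer): both files seem to hold roughly the same number of frames (2.59 s * 30 fps ≈ 78 and 4.87 s * 16.01 fps ≈ 78), which suggests the camera delivers about 16 fps either way. setCaptureRate(30f) would then merely stamp the stream as 30 fps, so those ~78 frames play back in 2.6 s instead of 4.9 s, a speed-up of roughly videoFrameRate / actual capture rate = 30 / 16 ≈ 1.9x. If that reading is right, the value to feed setCaptureRate() is the rate the camera actually achieves, which can at least be estimated after the fact:

    # rough check; nb_frames comes from the mp4 container, the filename is the one from above
    ffprobe -v error -select_streams v:0 -show_streams CaptureRate30fps.mp4 | grep -E 'nb_frames|avg_frame_rate|duration'
    # effective capture rate = nb_frames / wall-clock recording time  (about 78 / 5 s ≈ 16 fps)
    # playback speed         = declared frame rate / effective capture rate  (about 30 / 16 ≈ 1.9x)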

  • c++ ffmpeg decode into raw audio which cannot be read afterwards

    7 November 2013, by noobed

    Since I don't want to flood with code here I'll reference a link: https://gist.github.com/anonymous/7344267

    Could whoever understands the ffmpeg documentation (I find it somewhat insufficient) be so kind as to take a look at my convert.cpp?

    Q: Is there an easier way to take an audio stream, decode it somehow, and encode it into mp3?

    After getting the raw audio into myOutputTest.mp3, I'm trying to put together some code based on the audio_encode_example (here: http://ffmpeg.org/doxygen/trunk/doc_2examples_2decoding_encoding_8c-example.html).

    The way I see it, I should open the audio/video file using avformat_open_input, but it throws "Invalid data found when processing input".

    Q: Should I approach audio encoding differently? (But then where do I get all the frame and packet information if not from myOutputTest.mp3?)
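
    Two hedged notes that might narrow this down: the myOutputTest.mp3 produced by convert.cpp apparently holds raw decoded PCM, and raw PCM has no container or header, so avformat_open_input() has nothing to probe; it can only read such a file if the demuxer format and sample parameters are forced. And if the end goal is simply "audio stream in, mp3 out", the ffmpeg command line already chains demuxing, decoding, resampling, and libmp3lame encoding in one pass. Sketches below; the file names and the s16le/44100/stereo parameters are assumptions, not values taken from the gist:

    # one-step decode + mp3 encode ("input.mp4" is a placeholder for the real source)
    ffmpeg -i input.mp4 -vn -c:a libmp3lame -q:a 2 output.mp3

    # reading the raw PCM dump back requires forcing its format explicitly
    ffmpeg -f s16le -ar 44100 -ac 2 -i myOutputTest.mp3 -c:a libmp3lame real_output.mp3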