Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • MP3 files created using FFmpeg do not start playback in the browser immediately. Is there any major difference between FFmpeg and avconv?

    23 January 2019, by AR5

    I am working on a website that streams music. We recently moved from a Debian server (with avconv) to a CentOS 7 server (with FFmpeg). The MP3 files created on the Debian server start playback in the browser (I have tested Chrome and Firefox) almost as soon as they begin loading (I used the Network tab in Developer Tools to monitor this).

    After the switch to the CentOS/FFmpeg server, the files created there show a strange behavior: playback only starts after about 1 MB has been loaded into the browser.

    I have used identical settings for converting the original file to MP3 with both avconv and FFmpeg, but only the files created with FFmpeg show this issue. Is there something that might be causing it? Are there differences in audio conversion between avconv and FFmpeg?

    I have already tried

    I first found that the files created on the old server (Debian/avconv) were VBR (variable bitrate) and the ones created on the new server were CBR (constant bitrate), so I tried switching to VBR, but the issue persisted.

    I checked the MP3 files with the MediaInfo app and there seems to be no difference between them.

    I also checked whether both files were being served with 206 Partial Content responses, and they both are.

    My goal is to create MP3 files with FFmpeg that behave exactly like the ones previously created with avconv, so that the streaming site works on the new server. I am trying to figure out what I might be doing wrong, or whether a difference between avconv and FFmpeg is causing this issue.

    I am really stuck on this issue; any help would be greatly appreciated.


    Edit

    I don't have access to the old server anymore, so I couldn't retrieve avconv's log output. The command I was using was as follows:

    avconv -y -i "/test/Track 01.mp3" -ac 2 -ar 44100 -acodec libmp3lame -b:a 128k "/test/Track 01 (converted).mp3"
    

    Here is the command and log output from the new server:

    ffmpeg -y -i "/test/Track 01.mp3" -ac 2 -ar 44100 -acodec libmp3lame -b:a 128k "/test/Track 01 (converted).mp3"
    ffmpeg version 2.8.15 Copyright (c) 2000-2018 the FFmpeg developers
      built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-28)
      configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --extra-ldflags='-Wl,-z,relro ' --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-amrwbenc --enable-version3 --enable-bzlib --disable-crystalhd --enable-gnutls --enable-ladspa --enable-libass --enable-libcdio --enable-libdc1394 --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libx264 --enable-libx265 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
      libavutil      54. 31.100 / 54. 31.100
      libavcodec     56. 60.100 / 56. 60.100
      libavformat    56. 40.101 / 56. 40.101
      libavdevice    56.  4.100 / 56.  4.100
      libavfilter     5. 40.101 /  5. 40.101
      libavresample   2.  1.  0 /  2.  1.  0
      libswscale      3.  1.101 /  3.  1.101
      libswresample   1.  2.101 /  1.  2.101
      libpostproc    53.  3.100 / 53.  3.100
    [mp3 @ 0xd60be0] Skipping 0 bytes of junk at 240044.
    Input #0, mp3, from '/test/Track 01.mp3':
      Metadata:
        album           : Future Hndrxx Presents: The WIZRD
        artist          : Future
        genre           : Hip-Hop
        title           : Never Stop
        track           : 1
        lyrics-eng      : rgf.is
        WEB SITE        : rgf.is
        TAGGINGTIME     : rgf.is
        WEB             : rgf.is
        date            : 2019
        encoder         : Lavf56.40.101
      Duration: 00:04:51.40, start: 0.025056, bitrate: 121 kb/s
        Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 114 kb/s
        Metadata:
          encoder         : Lavc56.60
        Stream #0:1: Video: png, rgb24(pc), 333x333 [SAR 1:1 DAR 1:1], 90k tbr, 90k tbn, 90k tbc
        Metadata:
          comment         : Cover (front)
    [mp3 @ 0xd66ec0] Frame rate very high for a muxer not efficiently supporting it.
    Please consider specifying a lower framerate, a different muxer or -vsync 2
    Output #0, mp3, to '/test/Track 01 (converted).mp3':
      Metadata:
        TALB            : Future Hndrxx Presents: The WIZRD
        TPE1            : Future
        TCON            : Hip-Hop
        TIT2            : Never Stop
        TRCK            : 1
        lyrics-eng      : rgf.is
        WEB SITE        : rgf.is
        TAGGINGTIME     : rgf.is
        WEB             : rgf.is
        TDRC            : 2019
        TSSE            : Lavf56.40.101
        Stream #0:0: Video: png, rgb24, 333x333 [SAR 1:1 DAR 1:1], q=2-31, 200 kb/s, 90k fps, 90k tbn, 90k tbc
        Metadata:
          comment         : Cover (front)
          encoder         : Lavc56.60.100 png
        Stream #0:1: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s16p, 128 kb/s
        Metadata:
          encoder         : Lavc56.60.100 libmp3lame
    Stream mapping:
      Stream #0:1 -> #0:0 (png (native) -> png (native))
      Stream #0:0 -> #0:1 (mp3 (native) -> mp3 (libmp3lame))
    Press [q] to stop, [?] for help
    [libmp3lame @ 0xd9b0c0] Trying to remove 1152 samples, but the queue is emptys/s
    frame=    1 fps=0.1 q=-0.0 Lsize=    4788kB time=00:04:51.39 bitrate= 134.6kbits/s
    video:234kB audio:4553kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.014809%
    

    Samples of MP3 files

    I have uploaded samples of mp3 files created using both avconv and FFmpeg. Please find these here: https://drive.google.com/drive/folders/1gRTmMM2iSK0VWQ4Zaf_iBNQe5laFJl08?usp=sharing
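
    A detail worth checking in the log above: the embedded PNG cover art is picked up as a video stream, re-encoded, and written into the output ahead of the audio (see the "Stream mapping" section and the muxer warning about the very high frame rate). A large cover/metadata block at the head of the file could be part of why the browser waits before starting playback. This is only a guess, but a minimal variant to test it is to drop the cover stream entirely with -vn (or keep just the audio with -map 0:a):

    ffmpeg -y -i "/test/Track 01.mp3" -vn -ac 2 -ar 44100 -acodec libmp3lame -b:a 128k "/test/Track 01 (converted).mp3"

    If the resulting file starts playing immediately in the browser, the difference lies in the cover/metadata layout rather than in the audio encoding itself.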

  • Concatenate/Synchronize timestamp and video FFMPEG raspberry

    23 January 2019, by Andrea

    I have a Raspberry Pi 3 B+ and a Logitech C920 webcam. I need to stream the webcam feed from the Raspberry Pi along with a timestamp for each frame, while maintaining 30 fps.

    I used the following command:

    ffmpeg -ar 44100 -ac 2 -f alsa -i hw:1,0 -f v4l2 -codec:v h264 -framerate 30 -video_size 1920x1080 -itsoffset 0.5 -vsync 0 -copyts -i /dev/video0 -copyinkf -codec:v copy -use_wallclock_as_timestamps 1 -flags +global_header -codec:a aac -ab 128k -g 10 -vsync 0 -f mpegts udp://ip_of_my_machine:1234 -vsync 0 -f mkvtimestamp_v2 udp://ip_of_my_machine:1235
    

    This generates two UDP streams, but when I read the output with mplayer and netcat respectively, I see an offset of about 2 seconds between the timestamp shown in mplayer and the streamed timestamp.

    I need the two pieces of information kept separate because I have to process them, and to maintain 30 fps I cannot re-encode the video output.

    Is there a way to do it?
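
    One thing that stands out in the command: -use_wallclock_as_timestamps is an input (demuxer) option, so placing it after the inputs most likely has no effect on how the frames from /dev/video0 are timestamped. A rearranged, untested sketch that requests H.264 from the camera via -input_format h264 (matching the intent of -codec:v h264 / -codec:v copy in the original), sets the wallclock option on the video input, and keeps -copyts on both outputs might reduce the offset between the two streams:

    ffmpeg \
        -f alsa -ar 44100 -ac 2 -i hw:1,0 \
        -f v4l2 -input_format h264 -framerate 30 -video_size 1920x1080 \
        -use_wallclock_as_timestamps 1 -i /dev/video0 \
        -map 1:v -map 0:a -c:v copy -c:a aac -b:a 128k -copyts \
        -flags +global_header -f mpegts udp://ip_of_my_machine:1234 \
        -map 1:v -c:v copy -copyts -f mkvtimestamp_v2 udp://ip_of_my_machine:1235

    Adding -c:v copy on the second output is intended to avoid a re-encode for the timestamp stream as well (untested; the mkvtimestamp_v2 muxer only writes per-packet timestamps), which matters for holding 30 fps.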

  • Access pixel data of each frame of a video in Ruby

    23 January 2019, by Robin

    I want to read/stream a video with Ruby on Mac and get access to the pixel data for each frame.

    What I've tried

    • https://github.com/streamio/streamio-ffmpeg

      It's good at splitting the video into frames, but I don't know how to get access to the pixel data without saving each frame as an image first (or if it's possible at all).


    require 'streamio-ffmpeg'
    movie = FFMPEG::Movie.new("flakes.mp4")

    # Saves a series of cropped JPEG screenshots; pixel data then has to be read back from the image files.
    movie.screenshot("screenshots/screenshot_%d.jpg", { custom: %w(-vf crop=60:60:10:10), vframes: (movie.duration).to_i, frame_rate: movie.frame_rate/24 }, { validate: false })
    

    • https://github.com/wedesoft/hornetseye-ffmpeg

      This seemed to have so much potential, but I don't think it's maintained anymore, and it's not meant to be used on MacOS really, so I'm having issues installing it there (headers not found and such, and no way to configure it afaik).

    Any idea what tool or method I could use for this use case?
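
    One approach that sidesteps both gems (a sketch, assuming ffmpeg/ffprobe are installed and on the PATH): have ffmpeg decode the video to raw rgb24 frames on stdout and read fixed-size chunks from that pipe in Ruby (for example with IO.popen). Each frame is then width * height * 3 bytes of packed RGB:

    # get the frame dimensions first
    ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=p=0 flakes.mp4

    # stream raw rgb24 frames to stdout; the Ruby process reads width*height*3 bytes per frame
    ffmpeg -loglevel error -i flakes.mp4 -f rawvideo -pix_fmt rgb24 pipe:1

    This avoids writing intermediate image files, at the cost of handling the raw bytes yourself.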

  • ffmpeg nvenc_h264 streaming uhd2160 rawvideo. YouTube buffering

    23 January 2019, by Alex Deroza

    I use this batch file for starting my stream:

    echo starting transcoding from native uhd2160 to fhd format.. Host: YouTube
    ffmpeg ^
        -loglevel -8 ^
        -f rawvideo -s:v 1920x1080 -r 60 -pix_fmt nv12 ^
        -f dshow -i video="Game Capture 4K60 Pro Video 01":audio="Game Capture 4K60 Pro Audio 01" ^
        -c:v h264_nvenc ^
        -level:v 4.2 ^
        -profile:v high ^
        -preset:v hq ^
        -b:v 8.8M ^
        -color_range 2 ^
        -colorspace bt709 ^
        -bf 0 ^
        -g 60 ^
        -2pass 1 ^
        -rc:v cbr ^
        -coder cabac ^
        -acodec aac ^
        -ab 128k ^
        -movflags +faststart ^
        -f flv rtmp://a.rtmp.youtube.com/live2/xxxx-xxxx-xxxx-xxxx && pause
    

    The stream on YouTube starts fine and plays with good quality, but it stops once the "Buffer Health" empties. At that point YouTube reports "Video output low": "YouTube is not receiving enough video to maintain smooth streaming. As such, viewers will experience buffering."

    I can't understand why this happens, because I have a good ISP with over 30 Mbps of upload to YouTube's services and about 90 ms of latency to them.

    YouTube is not receiving enough video to maintain smooth streaming. As such, viewers will experience buffering.

    Your encoder is sending data faster than realtime (multiple seconds of video each second). You must rate limit your live video upload to approximately 1 second of video each second.

    The stream's current bitrate (5730.00 Kbps) is lower than the recommended bitrate. We recommend that you use a stream bitrate of 4500 Kbps.
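
    The two messages point in opposite directions (data arriving faster than realtime, yet an average bitrate below the recommendation), which can happen when the CBR target is not actually being enforced. A trimmed, untested variant of the command that drops the -f rawvideo block in front of the dshow input (dshow takes -video_size/-framerate/-pixel_format directly), removes -loglevel -8 so warnings stay visible, and pins the rate with an explicit -maxrate/-bufsize pair might behave more predictably; the stream key placeholder is kept as in the original:

    ffmpeg ^
        -f dshow -video_size 1920x1080 -framerate 60 -pixel_format nv12 ^
        -i video="Game Capture 4K60 Pro Video 01":audio="Game Capture 4K60 Pro Audio 01" ^
        -c:v h264_nvenc -preset:v hq -profile:v high -level:v 4.2 ^
        -rc:v cbr -b:v 8.8M -maxrate 8.8M -bufsize 17.6M -bf 0 -g 120 ^
        -c:a aac -b:a 128k ^
        -f flv rtmp://a.rtmp.youtube.com/live2/xxxx-xxxx-xxxx-xxxx

    Here -g 120 keeps a keyframe every 2 seconds at 60 fps, in line with YouTube's live ingest guidelines, and -movflags +faststart was dropped because it has no effect on an FLV/RTMP output.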