Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • ffmpeg video segment conflicts with frame_skip setpts filters

    January 30, by Jeff Hansen

    My goal is to record an RTSP stream into 5-second segments, but I want those segments to keep only every 5th frame and be as compact as possible. Essentially, each resulting video will be a 1-second video that shows 5 seconds of real-time content.

    I have found that ffmpeg's video segment muxing works really well. I have also found that I can apply a -filter:v select=not(mod(n\,{frame_skip})),setpts=PTS/{frame_skip} filter to keep only every 5th frame and to ensure that the resulting mp4 is 1 second long instead of 5 seconds. This time reduction saves space, AND it also ensures that there aren't duplicate frames stored and displayed in the mp4.

    This is the segment muxing command that works:

    sec_overlap = 1
    segment_time = 5
    ffmpeg_command = (
        f'ffmpeg -rtsp_transport tcp -i "{rtsp_url}" '
        f'-c:v libx264 -preset ultrafast -tune zerolatency -an '
        f'-vf "scale=iw/10:ih/10" '
        f'-f segment '
        f'-segment_time {segment_time} '
        f'-reset_timestamps 1 '
        f'-segment_time_delta {sec_overlap} '
        # The backslash must be doubled so Python emits a literal "\" in the
        # Windows path (a bare "\{" is an invalid escape sequence).
        f'-strftime 1 "{directory}\\{cam_label}_%Y-%m-%d-%H-%M-%S.mp4"'
    )
    

    And this is the filter command that successfully keeps only every 5th frame and shrinks the video length by 5x:

    sec_overlap = 1
    frame_skip = 5
    segment_time = 5  # 5 / frame_skip**2
    ffmpeg_command = (
        f'ffmpeg -rtsp_transport tcp -i "{rtsp_url}" '
        f'-c:v libx264 -preset ultrafast -tune zerolatency -an '
        f"-filter:v '"
            f"select=not(mod(n\\,{frame_skip})),"  # keep only every 5th frame
            f"setpts=PTS/{frame_skip},"            # compress the timeline 5x
            f"fps={frame_skip*frame_skip}"
        f"' "
        f'-f segment '
        f'-segment_time {segment_time} '
        f'-reset_timestamps 1 '
        f'-segment_time_delta {sec_overlap} '
        f'-strftime 1 '
        f'"{directory}\\{cam_label}_%Y-%m-%d-%H-%M-%S.mp4"'
    )
    

    THIS IS MY PROBLEM. The resulting video ends up being a 10-second video that covers 50 SECONDS of real-time RTSP footage. It appears that adjusting the PTS causes the segment_time calculation to record for much longer.

    Does anyone know how to get 1-second videos covering 5 seconds of footage while still applying this video filter?
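
    If the segment muxer measures -segment_time against the timestamps coming out of the filter graph (an assumption, not something confirmed in the question), then setpts=PTS/{frame_skip} has already compressed those timestamps 5x, so each segment covers 5x more real-time input. A minimal sketch of a fix under that assumption is to scale segment_time down by the same factor:

    frame_skip = 5
    # If segmentation happens on post-filter (compressed) timestamps,
    # one output second corresponds to frame_skip seconds of input,
    # so request 5 / frame_skip = 1 second per segment file.
    segment_time = 5 / frame_skip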

  • Getting ffmpeg to capture the full screen of an xvfb-run screen running a puppeteer script, and send it over rtmp

    January 30, by james

    My problem

    I can't get ffmpeg or xvfb-run to stream the full screen to ffplay/VLC; it only captures part of the screen.

    Update 2

    I answered the question myself in a follow-up answer; hopefully it can be useful for someone else with the same problem.

    Update 1

    So the problem is definitely with xvfb-run, since the two following commands still produce a webm file that only shows part of the screen:

    ffmpeg -f x11grab -i :99 -g 50 -b:v 4000k -maxrate 4000k -bufsize 8000k -f webm -s 384x216 "blank.webm"

    xvfb-run -n 99 -a --server-args="-screen 0 1024x8000x24 -ac -nolisten tcp -dpi 96 +extension RANDR" "node index.js"
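
    One assumption worth testing (not stated in the question): x11grab does not necessarily grab the whole virtual display, so pinning the capture geometry to the Xvfb screen size with -video_size may be enough. A hedged variant of the recording command above:

    ffmpeg -f x11grab -video_size 1024x8000 -i :99 -g 50 -b:v 4000k -maxrate 4000k -bufsize 8000k -f webm -s 384x216 "blank.webm"

    Here 1024x8000 simply mirrors the -screen 0 1024x8000x24 geometry passed to xvfb-run; if the intended screen size is different, both values should change together.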

    What I've tried

    • I've tried changing the ffmpeg and xvfb-run commands, for example adding -vf format=yuv420p or -filter:v "crop=iw-400:ih-40,scale=960:720" to the ffmpeg command
    • I've tried showing other applications under xvfb-run instead of puppeteer (Chrome, ...)
    • I've tried recording the screen with ffmpeg and saving it to a file, to see if the problem is with the rtmp stream

    But still no luck. That's why I'm reaching out to the stackoverflow community.

    xvfb-run command

    xvfb-run -n 99 -a --server-args="-screen 0 1024x8000x24 -ac -nolisten tcp -dpi 96 +extension RANDR" "node index.js"

    ffmpeg command to capture xvfb-run virtual screen

    ffmpeg -f x11grab -i :99 -f pulse -i default -c:v libx264 -c:a aac -g 50 -b:v 4000k -maxrate 4000k -bufsize 8000k -f flv -listen 1 rtmp://localhost:4444/stream

    And finally to show the rtmp stream

    ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 -i rtmp://localhost:4444/stream

    The puppeteer script (index.js) which xvfb-run runs

    
    
    import puppeteer from 'puppeteer';

    let initPuppeteer = async () => {
      const launchArgs = [
        //'--window-size=1280,1024',
        '--disable-web-security',
        '--disable-features=IsolateOrigins',
        '--disable-site-isolation-trials',
        '--app',
        '--kiosk',
      ];
      // Keep the browser handle; the original used this.browser, which is
      // undefined inside a module-scope arrow function.
      const browser = await puppeteer.launch({headless: false, ignoreDefaultArgs: ["--enable-automation"], args: launchArgs});
      const page = await browser.newPage();
      const device = puppeteer.devices['Nexus 10'];
      await page.emulate(device);
      await page.goto("https://google.com");
    };

    initPuppeteer();
    
  • How to split an incoming stream into multiple streams with different resolutions?

    January 30, by Cobalt

    How can I split an incoming stream into several streams with different resolutions on a MediaMTX server? There is one incoming RTSP stream, and it needs to be split into several channels with different resolutions. I'm thinking of using ffmpeg for the transcoding, but I don't understand how to set this up in the mediamtx settings.
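
    A sketch of one common pattern, assuming MediaMTX's runOnReady hook (the path name cam, the _720p suffix, and the 720-line target are illustrative, not from the question): when the source stream is published, launch an ffmpeg process that pulls it, rescales it, and re-publishes the variant under a second path.

    paths:
      cam:
        # When the incoming stream is ready, re-publish a downscaled copy
        # of it back to this server under another path (cam_720p).
        runOnReady: >
          ffmpeg -i rtsp://localhost:$RTSP_PORT/$MTX_PATH
          -vf scale=-2:720 -c:v libx264 -preset veryfast -an
          -f rtsp rtsp://localhost:$RTSP_PORT/${MTX_PATH}_720p
        runOnReadyRestart: yes

    More resolutions can be produced by adding output branches to that single ffmpeg invocation; clients then read rtsp://server:8554/cam_720p and so on.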

  • FFmpeg move overlay from one pixel coordinate to another

    January 30, by occvtech

    I'm trying to write a command in FFmpeg that will first overlay one video stream on top of another. Then I want the overlaid video to start moving from one pixel coordinate at a known time and end at another pixel coordinate at a later time.

    I'm comfortable with the basics of -filter_complex, but I cannot figure out how to apply any arithmetic expressions, like the ones referenced here: https://www.ffmpeg.org/ffmpeg-utils.html#Expression-Evaluation

    Here's an example of something I'd like to accomplish.

    ffmpeg -i [INPUT1] -i [INPUT2] -filter_complex "[0:v][1:v]overlay=shortest=1:x=720:y=0:enable='between(t,10,20)'[overlay];...
    

    In this example the overlay is stationary at pixel coordinate (720, 0) from the 10th second to the 20th second.

    However, I'd like to have it move to a new location in a linear fashion and end at a different pixel coordinate.

    For example, during that 10-second overlay, I'd like to have it start at (720, 0) but end at (1000, 100).

    Is this possible?
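
    The overlay filter's x and y options accept the expression language linked above, and the variable t (time in seconds) is available in them, so a linear interpolation over the 10-20 second window is one way to do this. A minimal sketch using the question's numbers (inputs and the rest of the filter graph elided, as in the snippet above):

    ffmpeg -i [INPUT1] -i [INPUT2] -filter_complex "[0:v][1:v]overlay=shortest=1:eval=frame:enable='between(t,10,20)':x='720+(1000-720)*(t-10)/10':y='0+(100-0)*(t-10)/10'[overlay];...

    eval=frame makes the x/y expressions be re-evaluated on every frame (recent FFmpeg builds already default to this).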

  • python converting video to audio

    January 29, by programmer44

    We are working on a project to convert video to audio, and this is the sample code:

    from converter import Converter
    from moviepy.editor import *
    c = Converter()
    clipv = 'g.mp4'
    clipc = VideoFileClip(clipv).subclip(0,20)
    conv = c.convert(clipc, 'clip5.mp3', {'format':'mp3','audio':{'codec': 'mp3','bitrate':'22050','channels':1}})
    for timecode in conv:
        pass    
    

    However, it gives me this error:

    Traceback (most recent call last):
      File "del.py", line 7, in <module>
        for timecode in conv:
      File "/usr/local/lib/python2.7/dist-packages/converter/__init__.py", line 181, in convert
        if not os.path.exists(infile):
      File "/usr/lib/python2.7/genericpath.py", line 18, in exists
        os.stat(path)
    TypeError: coercing to Unicode: need string or buffer, instance found
    

    Of course, the other alternative is to use ffmpeg directly, but the problem is that the video in this case is an object instance, and so far I have not found a way of passing object instances from Python to bash.

    The video object could be written out as a video file, but that would waste a lot of time, since the conversion takes place inside a loop.

    It is quite time-consuming to have to write the video file again and again just to extract the audio from it.

    I would highly appreciate any solution that either helps me get around the above error or allows me to pass the video clip object to ffmpeg as a variable.
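
    One way around both problems is to skip the converter library entirely: a moviepy clip already carries its audio track, and write_audiofile writes it out without an extra video round trip. A minimal sketch (g.mp4 and the 0-20 second window taken from the code above):

    from moviepy.editor import VideoFileClip

    clip = VideoFileClip('g.mp4').subclip(0, 20)
    # The clip object exposes its audio track directly, so there is no need
    # to hand the in-memory object to an external ffmpeg process.
    clip.audio.write_audiofile('clip5.mp3')

    This also explains the original TypeError: converter's convert() expects a filename string for its input, and os.path.exists() fails when handed a VideoFileClip instance instead.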