Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the website

  • ffmpeg timelapse resets PTS on frame size change

    19 April, by Ral

    I have 601 sequential images, they change size and aspect ratio on frame 36 and 485, creating 3 distinct sizes of images in the set.

    I want to create a timelapse, shaving off the first 200 frames and showing only the remaining 401. But if I apply a trim filter to the input, the filter treats each of the 3 sections of differently sized frames as a separate 'stream' with its own reset PTS, all of which start at the exact same time. This means the final output of the command below is only 249 frames long instead of 401.

    How can I fix this so I just get the final 401 frames?

    ffmpeg \
    -framerate 60 \
    -i "./%07d.jpg" \
    -filter_complex "
     [0]scale=1000:1000[in1];
     [in1]trim=start_frame=200[in2];
     [in2]setpts=PTS-STARTPTS
     " \
    -r 60 -y trimmed.webm
    

    Filters like setpts=N/(60*TB) or setpts=PTS-STARTPTS after the scale, intended to fix the frame times, also seem to change nothing.

    If I remove the trim and the PTS reset, the timelapse exports all 601 frames perfectly. If I remove the PTS reset and leave the trim, it exports 449 frames starting at frame 0.

    There are no errors or warnings associated with the problem, other than the debug log stating that the input reaches EOF at frame 449 (which is 485-36, the two section lengths, for some reason).

    I understand pre-processing the image sizes is a way to fix this, but I'd like to understand why this isn't possible in one command.

    This is ffmpeg version 6.0-6ubuntu1, but it also happens on 6.1 and 5.1.

    Even if I whittle it down to the bare minimum, it still incorrectly exports 450 frames:

    ffmpeg -i "./%07d.jpg" -filter setpts=PTS-STARTPTS -y tpad.webm
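    Since the input is a numbered image sequence, one workaround worth trying (a sketch, untested; it assumes the filenames really follow the `%07d.jpg` pattern above, numbered from 0) is to skip the first 200 images at the demuxer with the image2 option `-start_number`, so no trim or PTS reset is needed in the filter graph at all:

```shell
# Start reading at 0000200.jpg instead of trimming in the filter graph;
# scale then normalizes the remaining frames to one size, and setsar
# normalizes the sample aspect ratio across the differently sized sections.
ffmpeg \
-framerate 60 \
-start_number 200 \
-i "./%07d.jpg" \
-vf "scale=1000:1000,setsar=1" \
-r 60 -y trimmed.webm
```

    Adjust the start number if your sequence begins at 0000001 rather than 0000000.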

  • How to create an encoding ladder for any aspect ratio?

    18 April, by volume one

    For a given video uploaded by a user, I need to create four versions of it to cover standard definition (SD), high definition (HD), full high definition (FHD), and ultra high definition (UHD, e.g. 4K). There are well-known "resolution/encoding ladders" for standard aspect ratios like 16:9 and 4:3.

    For 4:3 we might have:

    640 x 480
    960 x 720
    1440 x 1080
    2880 x 2160
    

    For 16:9 we might have:

    854 x 480
    1280 x 720
    1920 x 1080
    3840 x 2160
    

    If a user uploads a file in either of those aspect ratios, we can create the four different versions because the resolution standards are known.

    However, if a user uploads a video with an unforeseen aspect ratio, say 23:19, how would you go about formatting that video into SD, HD, FHD, and UHD versions?

    If a 23:19 video is indeed uploaded, I am not looking to resize it to fit a different 'standard' aspect ratio. It must remain the same aspect ratio, but have four quality versions. The problem is what width and height to choose for non-standard resolutions.

    I have already come across many aspect ratios like 16:10, 21:9, 1.85:1, and 2.39:1. How could I take care of making quality variations of those?

    I am using Node.js and FFmpeg for video processing.
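    One common approach, sketched below (the function name and the even-rounding rule are my assumptions, not an established standard), is to keep the ladder's heights fixed at 480/720/1080/2160 and derive each width from the source aspect ratio, rounded to the nearest even number since H.264/H.265 encoders generally require even dimensions:

```python
def ladder(aspect_w, aspect_h, heights=(480, 720, 1080, 2160)):
    """Derive an encoding ladder for an arbitrary aspect ratio.

    Heights follow the SD/HD/FHD/UHD convention; each width is the
    aspect-correct value rounded to the nearest even integer.
    """
    rungs = []
    for h in heights:
        w = round(h * aspect_w / aspect_h / 2) * 2  # nearest even width
        rungs.append((w, h))
    return rungs

print(ladder(23, 19))  # [(582, 480), (872, 720), (1308, 1080), (2614, 2160)]
print(ladder(16, 9))   # [(854, 480), (1280, 720), (1920, 1080), (3840, 2160)]
```

    For 16:9 this reproduces the standard ladder from the question, and the same arithmetic ports directly to Node.js. The resulting aspect ratio can drift by a fraction of a percent at low heights because of the even rounding, which is normally imperceptible.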

  • ffmpeg: `ffmpeg -i "/video.mp4" -i "/audio.m4a" -c copy -map 0:v:0 -map 1:a:0 -shortest "/nu.mp4"` truncates, how to loop audio to match videos? [closed]

    18 April, by Swudu Susuwu

    This is with "FFmpeg Media Encoder" from the Google Play Store (Linux-based Android OS), but it has all the commands of ffmpeg on normal Linux.

    -shortest truncates the video to match the audio, and -longest leaves the last half of the video without audio (for a video twice as long as the audio).

    What should I use to loop the audio to match the length of the video?

    The video length is 15:02, so I used `ffmpeg -i "/audio.m4a" -c copy -map 0:a:0 "/audionew.m4a" -t 15:02 -stream_loop -1`, but got errors.
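    A sketch of the usual fix (untested on the Android build; filenames taken from the question): `-stream_loop` is an input option, so it must appear before the `-i` it applies to. Looping the audio input forever and muxing it with the video, `-shortest` then ends the output when the video does:

```shell
# -stream_loop -1 repeats the audio input indefinitely;
# -shortest stops writing when the shorter (video) stream ends.
ffmpeg -stream_loop -1 -i "/audio.m4a" -i "/video.mp4" \
  -c copy -map 1:v:0 -map 0:a:0 -shortest "/nu.mp4"
```

    Note that with -c copy, -shortest cuts on packet boundaries, so the looped audio may run a fraction of a second past the video.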

  • Replacing avcodec_decode_video/avcodec_decode_video2 with currently supported functions

    18 April, by Silverstock

    I am trying to build a C++ program whose creator used deprecated functions. I have been replacing them fairly easily, but I am not totally sure how to replace these.

    This is a sample from the decode file:

    #if defined HAVE_AVCODEC_SEND_PACKET && defined HAVE_AVCODEC_RECEIVE_FRAME
        AVPacket* pkt = av_packet_alloc();
        if (!pkt)
            return 0;
        pkt->data=data;
        pkt->size=datalen;
        int ret = avcodec_send_packet(codecContext, pkt);
        while (ret == 0)
        {
            ret = avcodec_receive_frame(codecContext,frameIn);
            if (ret != 0)
            {
                if (ret != AVERROR(EAGAIN))
                {
                LOG(LOG_INFO,"not decoded:"<<ret);
            }
        }
        else
        {
            // (some lines were lost in the paste here: a timestamp `time` is
            //  derived from frameIn->pts, treating frameIn->pts==(int64_t)AV_NOPTS_VALUE
            //  || frameIn->pts==0 as a special case)
            if (time != UINT32_MAX)
                copyFrameToBuffers(frameIn, time);
            }
        }
    #ifdef HAVE_AV_PACKET_UNREF
        av_packet_unref(pkt);
    #else
        av_free_packet(pkt);
    #endif
        av_packet_free(&pkt);
    #else
        int frameOk=0;
    #if HAVE_AVCODEC_DECODE_VIDEO2
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data=data;
        pkt.size=datalen;
        int ret=avcodec_decode_video2(codecContext, frameIn, &frameOk, &pkt);
    #else
        int ret=avcodec_decode_video(codecContext, frameIn, &frameOk, data, datalen);
    #endif
    

    I ran emmake make (I am building with Emscripten, and all dependencies are static), and this is my output (trimmed for relevance):

    /workspaces/WasmFlash/lightspark/src/backends/decoder.cpp:188:3: warning: 'avcodec_close' is deprecated [-Wdeprecated-declarations]
      188 |                 avcodec_close(codecContext);
          |                 ^
    /workspaces/WasmFlash/lightspark/src/backends/../../../PKGCONFIG/FFmpeg/build/include/libavcodec/avcodec.h:2386:1: note: 'avcodec_close' has been explicitly marked deprecated here
     2386 | attribute_deprecated
          | ^
    /workspaces/WasmFlash/lightspark/src/backends/../../../PKGCONFIG/FFmpeg/build/include/libavcodec/../libavutil/attributes.h:100:49: note: expanded from macro 'attribute_deprecated'
      100 | #    define attribute_deprecated __attribute__((deprecated))
          |                                                 ^
    /workspaces/WasmFlash/lightspark/src/backends/decoder.cpp:338:2: warning: 'avcodec_close' is deprecated [-Wdeprecated-declarations]
      338 |         avcodec_close(codecContext);
          |         ^
    /workspaces/WasmFlash/lightspark/src/backends/../../../PKGCONFIG/FFmpeg/build/include/libavcodec/avcodec.h:2386:1: note: 'avcodec_close' has been explicitly marked deprecated here
     2386 | attribute_deprecated
          | ^
    /workspaces/WasmFlash/lightspark/src/backends/../../../PKGCONFIG/FFmpeg/build/include/libavcodec/../libavutil/attributes.h:100:49: note: expanded from macro 'attribute_deprecated'
      100 | #    define attribute_deprecated __attribute__((deprecated))
          |                                                 ^
    /workspaces/WasmFlash/lightspark/src/backends/decoder.cpp:483:10: error: use of undeclared identifier 'avcodec_decode_video'
      483 |         int ret=avcodec_decode_video(codecContext, frameIn, &frameOk, data, datalen);
          |                 ^
    /workspaces/WasmFlash/lightspark/src/backends/decoder.cpp:547:10: error: use of undeclared identifier 'avcodec_decode_video'
      547 |         int ret=avcodec_decode_video(codecContext, frameIn, &frameOk, pkt->data, pkt->size);
    

    I attempted to compile, and I expected the compilation to run without error.
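    The `#if defined HAVE_AVCODEC_SEND_PACKET` branch in the snippet is already the modern replacement: `avcodec_decode_video`/`avcodec_decode_video2` map onto the `avcodec_send_packet` + `avcodec_receive_frame` pair. A hedged sketch of a compat wrapper follows (the function name is mine, not lightspark's; it mimics the old one-packet-in, at-most-one-frame-out contract):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}

// Stand-in for avcodec_decode_video2(): send one packet, try to receive
// one frame. Sets *got_frame to 1 when frameOut holds a picture.
// Returns 0 on success (even if no frame was produced yet) or a negative
// AVERROR code on a real decoding failure.
static int decode_video_compat(AVCodecContext* ctx, AVFrame* frameOut,
                               int* got_frame, const AVPacket* pkt)
{
    *got_frame = 0;
    int ret = avcodec_send_packet(ctx, pkt);
    if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
        return ret;
    ret = avcodec_receive_frame(ctx, frameOut);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        return 0;  // decoder needs more input; not an error
    if (ret < 0)
        return ret;
    *got_frame = 1;
    return 0;
}
```

    Note that the new API can buffer frames, so after the last packet a real port should also drain the decoder by sending a NULL packet and receiving until AVERROR_EOF. As for the avcodec_close warnings, recent FFmpeg expects the context to be freed with avcodec_free_context(&codecContext) instead of closed.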

  • Splitting my code into OpenCV and UDP and the differences between using OpenCV and UDP

    18 April, by Sagiv Shaniv

    1) I wrote Python code to receive video in real time, compress it, duplicate the stream, and then send it to OpenCV and over UDP using ffmpeg. I would like to know how I can duplicate the stream to send it to both UDP and OpenCV (without sending it to another device) without affecting the frame rate.

    This is the code I used so far:

    import subprocess
    import cv2
    
    # Start ffmpeg process to capture video from USB, encode it in H.264, and send it over UDP and to virtual video device
    ffmpeg_cmd = [
        'ffmpeg',
        '-f', 'v4l2',            # Input format for USB camera
        '-video_size', '1920x1080', # Video size
        '-i', '/dev/video2',     # USB device
        '-c:v', 'libx264',       # H.264 codec
        '-preset', 'ultrafast',  # Preset for speed
        '-tune', 'zerolatency',  # Tune for zero latency
        '-b:v', '2M',            # Bitrate
        '-bufsize', '5M',        # Buffer size
        '-pix_fmt', 'yuv420p',   # Specify pixel format
        '-filter_complex', '[0:v]split=2[out1][out2]',  # Split the video stream
        '-map', '[out1]',        # Map the first output to UDP
        '-f', 'mpegts',          # Output format for UDP
        'udp://192.168.1.100:8000',  # UDP destination
        '-map', '[out2]',        # Map the second output to virtual video device
        '-f', 'v4l2',            # Output format for virtual video device
        '-video_size', '1920x1080', # Video size for virtual video device
        '-pix_fmt', 'yuv420p',   # Specify pixel format for virtual video device
        '/dev/video1'            # Virtual video device
    ]
    
    ffmpeg_process = subprocess.Popen(ffmpeg_cmd)
    
    v4l2_cap = cv2.VideoCapture(1)
    
    while True:
        ret, frame = v4l2_cap.read()  # Read frame from virtual video device
        if not ret:
            break
        cv2.imshow('Frame', frame)  # Display frame
        if cv2.waitKey(1) & 0xFF == ord('q'):  # Exit on 'q' key press
            break
    
    # Clean up
    cv2.destroyAllWindows()
    ffmpeg_process.terminate()
    

    2) When I get the video straight from the device and send it over UDP I get 25 FPS with this code:

    import cv2
    import subprocess
    import time
    import os
    
    width = 1920
    height  = 1080
    fps = 40
    proc = None
    os.environ['LD_LIBRARY_PATH'] = '/opt/vc/lib'  # Set the library path
    
    def stream_video():
        command = [
        'ffmpeg',
        '-f', 'v4l2',
        '-input_format', 'mjpeg',
        '-video_size', '1920x1080',
        '-framerate', '30',
        '-thread_queue_size', '512',
        '-i', '/dev/video2',
        '-f', 'lavfi',
        '-i', 'sine=frequency=440:sample_rate=48000',
        '-pix_fmt', 'yuvj420p',
        '-c:v', 'libx264',
        '-c:a', 'aac',
        '-b:v', '5000k',
        '-b:a', '128k',
        '-profile:v', 'baseline',
        '-preset', 'ultrafast',
         '-x264-params', 'tune=zerolatency',
        '-g', '60',
        '-f', 'mpegts',
        'udp://192.168.1.100:8000'
    ]
    
    
        try:
            proc = subprocess.Popen(command, stdin=subprocess.PIPE)
        finally:
            proc.stdin.close()
            proc.wait()
    
    if __name__ == '__main__':
        stream_video()
    

    However, when I get the video from OpenCV, I get 9 FPS with this code:

    import subprocess
    import cv2
    import numpy as np
    
    width = 1920
    height = 1080
    fps = 30
    
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    cap.set(cv2.CAP_PROP_FPS, fps)
    
    command = [
        'ffmpeg',
       
        '-f', 'rawvideo',
        '-pix_fmt', 'bgr24',      # raw BGR frames as produced by OpenCV
        '-s', f'{width}x{height}',
        '-r', str(fps),
        '-i', '-',  # Read from stdin
        '-f', 'lavfi',
        '-i', 'sine=frequency=440:sample_rate=48000',
        '-pix_fmt', 'yuv420p',
        '-c:v', 'libx264',
        '-c:a', 'aac',
        '-b:v', '5M',
        '-b:a', '128k',
        '-profile:v', 'baseline',
        '-preset', 'ultrafast',
        '-x264-params', 'tune=zerolatency',
        '-g', '60',
        '-f', 'mpegts',
        'udp://192.168.1.100:8000'
    ]
    
    try:
        proc = subprocess.Popen(command, stdin=subprocess.PIPE)
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            proc.stdin.write(frame.tobytes())
    finally:
        cap.release()
        proc.stdin.close()
        proc.wait()
    

    How can I receive video from OpenCV without affecting the frame rate (as I need it for later image processing) and without compromising on quality or resolution?

    I have tried capturing video frames using OpenCV and sending them over UDP using ffmpeg. I expected to maintain the original frame rate of the video without compromising on quality or resolution. However, I noticed a significant drop in frame rate when using OpenCV compared to directly capturing from the device. Specifically, I achieved 25 FPS when capturing and sending directly from the device using ffmpeg, but only 9 FPS when capturing frames using OpenCV and then sending them over UDP using ffmpeg.
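    One thing worth trying (a sketch under assumptions, not a guaranteed fix): decouple capture from encoding so `cap.read()` is never blocked by backpressure on ffmpeg's stdin pipe. A background writer thread with a bounded queue keeps the capture loop running at camera speed. The helper below is self-contained, with a dummy byte sink standing in for `proc.stdin.write`:

```python
import queue
import threading

def start_writer(write_fn, maxsize=64):
    """Run the encoder writes on a background thread so the capture loop
    is never blocked by backpressure on the ffmpeg stdin pipe."""
    q = queue.Queue(maxsize=maxsize)

    def worker():
        while True:
            chunk = q.get()
            if chunk is None:  # sentinel: shut down
                break
            write_fn(chunk)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return q, t

# Self-contained demo with dummy frame bytes; in the real script write_fn
# would be proc.stdin.write and each chunk would be frame.tobytes().
written = []
q, t = start_writer(written.append)
for _ in range(10):
    q.put(b"\x00" * 48)  # stand-in for one raw BGR frame
q.put(None)
t.join()
print(len(written))  # prints 10
```

    In the real script you would call `q.put(frame.tobytes())` in the capture loop and pass `proc.stdin.write` as `write_fn`. If 9 FPS persists even then, the bottleneck is more likely the `cap.read()` path itself (e.g. MJPEG decoding in OpenCV) than the pipe.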

    Thank you