Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • ffmpeg fast seek not working for MP4 over HTTP

    27 July, by Gmanicus

    I'm attempting to download snapshots from a video provided by the U.S House of Representatives:

    https://houseliveprod-f9h4cpb9dyb8gegg.a01.azurefd.net/east/2024-04-11T08-55-12_Download/video_3000000_1.mp4
    

    I am using fluent-ffmpeg in Node to execute this command:

    ffmpeg('https://houseliveprod-f9h4cpb9dyb8gegg.a01.azurefd.net/east/2024-04-11T08-55-12_Download/video_3000000_1.mp4')
      .inputOption(`-ss 03:33:33`)
      .outputOptions([
         '-vframes 1'
      ])
      .output('test.png')
    
    // Effectively:
    // ffmpeg -ss 03:33:33 -i <input URL> -y -vframes 1 test.png
    

    My intention is to fast-seek to the desired timestamp and take a snapshot over HTTP. However, the performance is poor: a snapshot takes about 10 seconds per 3 hours of video, and the cost seems to grow roughly linearly at that rate.

    However, when using ffmpeg on the same video locally, it's very fast: under 500 ms regardless of the desired timestamp.

    Is there some magic that could be done via ffmpeg options or perhaps some sort of technique with manual requests to get a snapshot at the desired segment of video more efficiently?
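For fast seeking, `-ss` really does need to sit before `-i`: an input-side seek jumps via the container index, while an output-side `-ss` decodes everything up to the timestamp, which produces exactly this kind of linear cost. A minimal sketch of the two placements, with an illustrative URL (not the one from the question); `-seekable` is a real ffmpeg HTTP protocol option, but whether it helps depends on the server honoring range requests:

```python
def snapshot_cmd(url: str, timestamp: str, out: str, input_side_seek: bool = True) -> list[str]:
    """Build an ffmpeg argv that grabs one frame at `timestamp`."""
    if input_side_seek:
        # -ss before -i: the demuxer seeks via the MP4 index (fast);
        # -seekable 1 tells the HTTP handler the stream supports range requests
        return ["ffmpeg", "-seekable", "1", "-ss", timestamp, "-i", url,
                "-vframes", "1", "-y", out]
    # -ss after -i: ffmpeg decodes from the start up to the timestamp (slow, linear)
    return ["ffmpeg", "-i", url, "-ss", timestamp, "-vframes", "1", "-y", out]

fast = snapshot_cmd("https://example.com/video.mp4", "03:33:33", "test.png")
slow = snapshot_cmd("https://example.com/video.mp4", "03:33:33", "test.png",
                    input_side_seek=False)
assert fast.index("-ss") < fast.index("-i")
assert slow.index("-i") < slow.index("-ss")
```

If the input-side seek is already in effect and it is still slow, the bottleneck is likely the HTTP layer, e.g. the server streaming the whole file instead of serving range requests; checking the server's Accept-Ranges behavior would be a reasonable next step.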

  • FFmpeg not copying all audio streams [closed]

    26 July, by Dotl

    I'm having trouble getting ffmpeg to copy all audio streams from a .mp4 file. After hours of searching online, it appears this should copy all streams (as shown in example 4 here):

    ffmpeg -i in.mp4 -map 0 -c copy out.mp4
    

    in.mp4 contains 3 streams:

    • Video
    • Audio track 1
    • Audio track 2

    out.mp4 (which should be identical to in.mp4) contains only 2 streams:

    • Video
    • Audio track 1

    FFmpeg does appear to correctly identify all 3 streams, but doesn't copy all of them over. Output from FFmpeg:

    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #0:1 -> #0:1 (copy)
      Stream #0:2 -> #0:2 (copy)
    

    Edit: Output from ffmpeg -v 9 -loglevel 99 -i in.mp4:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'in.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf57.36.100
      Duration: 00:00:06.03, start: 0.000000, bitrate: 5582 kb/s
        Stream #0:0(und), 1, 1/15360: Video: h264 (Main), 1 reference frame (avc1 /
    0x31637661), yuv420p(tv, bt470bg/unknown/unknown, left), 1920x1080 (0x0) [SAR 1:
    1 DAR 16:9], 0/1, 5317 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
        Metadata:
          handler_name    : VideoHandler
        Stream #0:1(und), 1, 1/48000: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz,
     stereo, fltp, 128 kb/s (default)
        Metadata:
          handler_name    : SoundHandler
        Stream #0:2(und), 1, 1/48000: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz,
     stereo, fltp, 128 kb/s
        Metadata:
          handler_name    : SoundHandler
    Successfully opened the file.
    At least one output file must be specified
    [AVIOContext @ 0000000001c2b9e0] Statistics: 153350 bytes read, 2 seeks
    

    Edit 2 (solved): I managed to find the correct syntax in this ticket. For anyone else who is interested, the correct syntax is:

    ffmpeg -i in.mp4 -vcodec copy -c:a copy -map 0 out.mp4

    This will copy all streams.
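One way to check whether all streams survived the copy is to count them in ffprobe's JSON output (`ffprobe -v quiet -print_format json -show_streams out.mp4`). A minimal sketch of counting streams per type from that JSON; the sample data below is synthetic, shaped like the question's in.mp4, not taken from a real run:

```python
import json
from collections import Counter

def stream_counts(ffprobe_json: str) -> Counter:
    """Count streams per codec_type in `ffprobe -print_format json -show_streams` output."""
    streams = json.loads(ffprobe_json).get("streams", [])
    return Counter(s.get("codec_type") for s in streams)

# Synthetic ffprobe output: 1 video stream plus 2 audio streams
sample = json.dumps({"streams": [
    {"index": 0, "codec_type": "video", "codec_name": "h264"},
    {"index": 1, "codec_type": "audio", "codec_name": "aac"},
    {"index": 2, "codec_type": "audio", "codec_name": "aac"},
]})

assert stream_counts(sample) == Counter({"audio": 2, "video": 1})
```

Running the same count against in.mp4 and out.mp4 makes a dropped audio track immediately visible.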

  • Trying to capture display output for real-time analysis with OpenCV; I need help with interfacing with the OS for input

    26 July, by mirari

    I want to apply operations from the OpenCV computer vision library, in real time, to video captured from my computer display. The idea in this particular case is to detect interesting features during gameplay in a popular game and provide the user with an enhanced experience; but I could think of several other scenarios where one would want to have live access to this data as well. At any rate, for the development phase it might be acceptable using canned video, but for the final application performance and responsiveness are obviously critical.

    I am trying to do this on Ubuntu 10.10 as of now, and would prefer to use a UNIX-like system, but any options are of interest. My C skills are very limited, so whenever talking to OpenCV through Python is possible, I try to use that instead. Please note that I am trying to capture NOT from a camera device, but from a live stream of display output; and I'm at a loss as to how to take the input. As far as I can tell, CaptureFromCAM works only for camera devices, and it seems to me that the requirement for real-time performance in the end result makes storage in file and reading back through CaptureFromFile a bad option.

    The most promising route I have found so far seems to be using ffmpeg with the x11grab option to capture from an X11 display; (e.g. the command ffmpeg -f x11grab -sameq -r 25 -s wxga -i :0.0 out.mpg captures 1366x768 of display 0 to 'out.mpg'). I imagine it should be possible to treat the output stream from ffmpeg as a file to be read by OpenCV (presumably by using the CaptureFromFile function) maybe by using pipes; but this is all on a much higher level than I have ever dealt with before and I could really use some directions. Do you think this approach is feasible? And more importantly can you think of a better one? How would you do it?
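Piping ffmpeg's rawvideo output into the analysis process is indeed a common approach: ffmpeg writes fixed-size raw frames to stdout, and the consumer reads exactly width × height × 3 bytes per frame. A sketch of the frame-splitting side in Python; the capture command is only indicated in a comment, and the OpenCV/numpy wiring is left out:

```python
# Consumer side of an ffmpeg rawvideo pipe, e.g.:
#   ffmpeg -f x11grab -r 25 -s 1366x768 -i :0.0 -f rawvideo -pix_fmt bgr24 -
# Each frame arrives as exactly width * height * 3 bytes (one byte per
# B, G, R channel); the reader just slices the stream at that stride.

def split_frames(raw: bytes, width: int, height: int) -> list[bytes]:
    """Split a raw bgr24 byte stream into complete frames; drop any trailing partial frame."""
    frame_size = width * height * 3
    n_complete = len(raw) // frame_size
    return [raw[i * frame_size:(i + 1) * frame_size] for i in range(n_complete)]

# Tiny synthetic stream: two complete 2x2 frames plus a truncated third
w, h = 2, 2
stream = bytes(range(12)) + bytes(range(12)) + b"\x00\x01"
frames = split_frames(stream, w, h)
assert len(frames) == 2
assert frames[0] == bytes(range(12))
```

In a real pipeline, each frame's bytes would be reshaped into a (height, width, 3) array for OpenCV; the same stride logic applies when reading from the subprocess's stdout in a loop.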

  • FFMPEG C++/Kotlin Encode data from ByteArray (ByteBuffer) to subtitles and mux it with video

    26 July, by Алексей Красноноженко

    I need to mux some byte data, encoded as a base64 String, into an MPEG-TS container. I can't find any way to encode this data into an AVPacket. Using avcodec_encode_subtitle requires an AVSubtitle with proper data, but how can I create/allocate an AVSubtitle with my data?

    Update: the only variant I managed to create (in Kotlin, as my app is KMP) is the following, but it gives the error "Invalid data found when processing input" when encoding:

    // Build a single ASS subtitle rect around the raw buffer
    val data = AVSubtitleRect()
    data.type(SUBTITLE_ASS)
    data.ass(BytePointer(buffer))
    val subtitle = AVSubtitle()
    val pointerPointer = PointerPointer(1)
    pointerPointer.put(data)
    subtitle.rects(pointerPointer)
    subtitle.pts(packet.pts())
    
    // Encode the subtitle into a raw output buffer
    val bufferSize = 1024 * 1024
    val encodedBuffer = BytePointer(av_malloc(bufferSize.toLong()))
    
    val result = avcodec_encode_subtitle(subtitleContext, encodedBuffer, bufferSize, subtitle)
    
    if (result >= 0) {
        // Wrap the encoded bytes in a packet and mux it into the output
        val outSubtitlePacket = AVPacket()
        outSubtitlePacket.apply {
            data(encodedBuffer)
            size(result)
            stream_index(subtitleStreamIndex)
            duration(packet.duration())
            dts(packet.dts())
            pts(packet.pts())
            pos(packet.pos())
        }
    
        av_interleaved_write_frame(avOutputCtx, outSubtitlePacket)
    
        av_packet_unref(outSubtitlePacket)
        av_free(encodedBuffer)
    }
    
  • How do I include FFmpeg for development in a Windows program?

    26 July, by Lama Mao

    I'm trying to follow a tutorial about using the FFMpeg libraries to extract frames from videos in C++ (by Bartholomew).

    According to the tutorial, the way to include the libraries is to use pkg-config to find them and gather them so that CMake can include them. In the video he uses Homebrew to install FFmpeg, and pkg-config is then able to find the header files (libavcodec, libavformat, libavdevice, ...).

    The problem is I don't know how to get the ffmpeg libraries installed so that pkg-config can find them.

    On Windows, I've tried installing the compiled Windows binaries, and I've tried installing the ffmpeg-full package using Chocolatey. However, I fail to see where the header files are installed.

    There's an ffmpeg folder in C:/ffmpeg/, but looking inside it there are no header files or libraries, just the binaries. Perhaps I need to clone the entire source project, but then how is pkg-config supposed to find them? When I try to compile I get this output:

     Found PkgConfig: C:/Strawberry/perl/bin/pkg-config.bat (found version "0.26") 
     Checking for module 'libavcodec'
     Can't find libavcodec.pc in any of C:/Strawberry/c/lib/pkgconfig
     use the PKG_CONFIG_PATH environment variable, or
     specify extra search paths via 'search_paths'
     CMake Error at C:/Program Files/CMake/share/cmake-3.24/Modules/FindPkgConfig.cmake:607 (message):
     A required package was not found
     Call Stack (most recent call first):
     C:/Program Files/CMake/share/cmake-3.24/Modules/FindPkgConfig.cmake:829 (_pkg_check_modules_internal)
     lib/FFMpeg/CMakeLists.txt:5 (pkg_check_modules)
     Checking for module 'libavcodec'        <----- here
     Can't find libavcodec.pc in any of C:/Strawberry/c/lib/pkgconfig
     use the PKG_CONFIG_PATH environment variable, or
     specify extra search paths via 'search_paths'
     CMake Error at C:/Program Files/CMake/share/cmake-3.24/Modules/FindPkgConfig.cmake:607 (message):
     A required package was not found
     Call Stack (most recent call first):
     C:/Program Files/CMake/share/cmake-3.24/Modules/FindPkgConfig.cmake:829 (_pkg_check_modules_internal)
     lib/FFMpeg/CMakeLists.txt:5 (pkg_check_modules)
    

    The contents of C:/Strawberry/c/lib/pkgconfig seem to be a whole load of libraries none of which are from ffmpeg.
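pkg-config only looks in its built-in directories plus whatever is on PKG_CONFIG_PATH, so an FFmpeg build that ships `lib/pkgconfig/*.pc` files (dev/shared builds do; plain binary zips typically don't) must have that directory added to PKG_CONFIG_PATH before CMake can find it. A rough sketch of how that lookup behaves, with made-up paths:

```python
import os
import tempfile
from pathlib import Path

def find_pc_file(module: str, pkg_config_path: str):
    """Mimic pkg-config's search: return the first <module>.pc found on the path, else None."""
    for entry in pkg_config_path.split(os.pathsep):
        candidate = Path(entry) / f"{module}.pc"
        if candidate.is_file():
            return candidate
    return None

# Simulate an FFmpeg dev install exposing libavcodec.pc
with tempfile.TemporaryDirectory() as tmp:
    pc_dir = Path(tmp) / "ffmpeg" / "lib" / "pkgconfig"
    pc_dir.mkdir(parents=True)
    (pc_dir / "libavcodec.pc").write_text("Name: libavcodec\n")

    assert find_pc_file("libavcodec", str(pc_dir)) is not None

# A path with no FFmpeg .pc files fails, as in the error output above
assert find_pc_file("libavcodec", "/definitely/not/a/real/dir") is None
```

This mirrors the failure in the log: Strawberry Perl's pkg-config only searched C:/Strawberry/c/lib/pkgconfig, which contains no FFmpeg .pc files.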

    In case the problem is in my CMakeLists file, here are the contents of the subdirectory file:

    cmake_minimum_required(VERSION 3.9)
    project(FFMpeg)
    
    find_package(PkgConfig REQUIRED)
    pkg_check_modules(AVCODEC    REQUIRED    IMPORTED_TARGET libavcodec)
    pkg_check_modules(AVFORMAT   REQUIRED    IMPORTED_TARGET libavformat)
    pkg_check_modules(AVDEVICE   REQUIRED    IMPORTED_TARGET libavdevice)
    pkg_check_modules(AVUTIL     REQUIRED    IMPORTED_TARGET libavutil)
    pkg_check_modules(SWRESAMPLE REQUIRED    IMPORTED_TARGET libswresample)
    pkg_check_modules(SWSCALE    REQUIRED    IMPORTED_TARGET libswscale)
    
    add_library(FFMpeg INTERFACE IMPORTED GLOBAL)
    
    target_include_directories(
        FFMpeg INTERFACE
        ${AVCODEC_INCLUDE_DIRS}
        ${AVFORMAT_INCLUDE_DIRS}
        ${AVDEVICE_INCLUDE_DIRS}
        ${AVUTIL_INCLUDE_DIRS}
        ${SWRESAMPLE_INCLUDE_DIRS}
        ${SWSCALE_INCLUDE_DIRS}
    )
    
    target_link_options(
        FFMpeg INTERFACE
        ${AVCODEC_LDFLAGS}
        ${AVFORMAT_LDFLAGS}
        ${AVDEVICE_LDFLAGS}
        ${AVUTIL_LDFLAGS}
        ${SWRESAMPLE_LDFLAGS}
        ${SWSCALE_LDFLAGS}
    )
    

    What have I done wrong, or what am I not understanding?