Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • FFmpeg Drawtext Function to output Pixel Positions of Rendered Characters?

    13 May 2019, by user1656125

    I would like to create a Python function that takes a drawtext filter call and outputs the pixel box of each rendered character.

    e.g.

    ffmpeg.drawtext(stream, text='C Illiterate.', fontfile=fontfile, fontcolor='white', fontsize=24, x=0, y=0, escape_text=True)
    
    -->
    
    C[(x,y)(x,y)],  [(x,y)(x,y)], I[(x,y)(x,y)], l[(x,y)(x,y)], l[(x,y)(x,y)], i[(x,y)(x,y)], t[(x,y)(x,y)], e[(x,y)(x,y)], r[(x,y)(x,y)], a[(x,y)(x,y)], t[(x,y)(x,y)], e[(x,y)(x,y)], .[(x,y)(x,y)]
    

    I believe the function inputs would be:

    "max_glyph_a", "ascent",  ///< max glyph ascent
    "max_glyph_d", "descent", ///< min glyph descent
    "max_glyph_h",            ///< max glyph height
    "max_glyph_w",            ///< max glyph width
    ...
    "text_h", "th",           ///< height of the rendered text
    "text_w", "tw",           ///< width  of the rendered text
    "x",
    "y",
    ...
    int use_kerning;                ///< font kerning is used - true/false
    

    as specified by file: https://github.com/FFmpeg/FFmpeg/blob/1a31c2b5df1179fdc1b8e84c8fa89d853e517309/libavfilter/vf_drawtext.c

    Would you be able to help me construct this function and decipher the inputs? It's for a machine vision application.
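
    Not from the question, but a minimal sketch of the kind of function being asked for, using Pillow's FreeType bindings as a stand-in for drawtext's own layout code. The font path is a placeholder, and vf_drawtext's kerning, shaping and expansion options may shift the real boxes slightly:

    from PIL import ImageFont

    def drawtext_char_boxes(text, fontfile, fontsize, x=0, y=0):
        """Approximate the pixel box of each character that drawtext would
        render with the given fontfile/fontsize starting at (x, y).
        This is only an approximation: vf_drawtext's own kerning and
        shaping can differ from Pillow's metrics."""
        font = ImageFont.truetype(fontfile, fontsize)
        boxes = []
        pen_x = x
        for ch in text:
            # glyph bounding box relative to the current pen position
            left, top, right, bottom = font.getbbox(ch)
            boxes.append((ch, (pen_x + left, y + top), (pen_x + right, y + bottom)))
            # advance the pen by the glyph's horizontal advance width
            pen_x += font.getlength(ch)
        return boxes

    # hypothetical usage
    for ch, top_left, bottom_right in drawtext_char_boxes('C Illiterate.', 'font.ttf', 24):
        print(ch, top_left, bottom_right)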

  • Is it possible to cut an online streaming video with ffmpeg?

    12 May 2019, by Sam Zorn

    Is ffmpeg able to cut out a certain part of an online video stream? For example, I would like to cut out only minutes 40 to 55 from a one-hour online streaming video.

    I am aware of how I can do this with a local file, for example:

    ffmpeg -ss [start] -i in.mp4 -t [duration] -c:v libx264 -c:a aac -strict experimental -b:a 128k out.mp4

    The little bit I know about codecs etc. tells me that it would either take disproportionately long or not be possible at all...

    If someone could give me some advice, maybe even a short explanation, I would be very grateful... :)
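
    Not from the original post, but as a hedged sketch: the same trimming flags can usually be pointed at a network URL directly (the HLS address below is only a placeholder), e.g. to take minutes 40-55:

    ffmpeg -ss 00:40:00 -i "https://example.com/stream/index.m3u8" -t 00:15:00 -c:v libx264 -c:a aac -b:a 128k out.mp4

    Whether seeking to 40:00 is fast depends on the protocol: with segmented formats such as HLS, ffmpeg can often skip ahead to the relevant segments, while with a plain progressive download it may have to read from the start.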

  • Is it possible to split an AAC file frame-accurately using FFmpeg?

    12 May 2019, by CAHbKA

    What I did:

    # create source material
    ffmpeg -y -i some.file -c:a libfdk_aac -profile:a aac_he -b:a 128k -ar 44100 source.m4a
    
    # split into two parts 
    ffmpeg -y -ss 00:00:00 -i source.m4a -to 6 -c copy part1.m4a
    ffmpeg -y -ss 00:00:06 -i source.m4a -c copy part2.m4a
    
    # re-encode only the first part with the same settings as the source file
    ffmpeg -y -i part1.m4a -c:a libfdk_aac -profile:a aac_he -b:a 128k -ar 44100 part1reencoded.m4a
    
    # create file list to be concatenated
    echo 'ffconcat version 1.0
    file part1reencoded.m4a
    file part2.m4a' > my.list
    
    # finally concatenate both parts
    ffmpeg -y -f concat -safe 0 -i my.list -c copy parts.m4a
    
    # play the result
    ffplay parts.m4a
    

    Unfortunately, the resulting file has audible noise at 00:00:06.

    Is it possible to split an AAC file frame-accurately using FFmpeg?
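
    Not part of the question, but for context: AAC packets hold a fixed number of samples (1024 per frame for AAC-LC), so a stream-copy cut can only land on a packet boundary, and encoder priming/padding adds further offsets at joins. One hedged alternative sketch is to trim with sample accuracy and re-encode both halves instead of copying:

    ffmpeg -y -i source.m4a -af atrim=start=0:end=6 -c:a libfdk_aac -profile:a aac_he -b:a 128k -ar 44100 part1.m4a
    ffmpeg -y -i source.m4a -af atrim=start=6 -c:a libfdk_aac -profile:a aac_he -b:a 128k -ar 44100 part2.m4a

    The concat step would stay the same; the join may still not be perfectly gapless, because each encode carries its own priming samples.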

  • How to extract anamorphic video frames correctly using ffmpeg?

    12 May 2019, by user10246830

    I can extract the frames with the ffmpeg command below, but they come out as 720x576 with square pixels instead of the anamorphic, non-square-pixel 1024x576. How do I output the 720x576 rectangular-pixel frames as they are shown on TV?

    How do I deinterlace the frames, since the output is interlaced?

    ffmpeg -i Midnight.vob -vf fps=1,setdar=16:9 -q:v 2 Midnight%06d.jpg
    

    How do I deal with the warning below in ffmpeg?

    [swscaler @ 0000000002a8ec40] deprecated pixel format used, make sure you did set range correctly. Video: mjpeg, yuvj420p(pc).

    Am I to understand that the video colour format is out of date and that (pc) is the range 0-255 for colours?

    ffmpeg -i Midnight.vob -vf fps=1,setdar=16:9 -q:v 2 Midnight%06d.jpg

    ffmpeg version N-93828-g68bac50604 Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 8.3.1 (GCC) 20190414
      configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
      libavutil      56. 26.101 / 56. 26.101
      libavcodec     58. 52.101 / 58. 52.101
      libavformat    58. 27.103 / 58. 27.103
      libavdevice    58.  7.100 / 58.  7.100
      libavfilter     7. 50.100 /  7. 50.100
      libswscale      5.  4.100 /  5.  4.100
      libswresample   3.  4.100 /  3.  4.100
      libpostproc    55.  4.100 / 55.  4.100
    Input #0, mpeg, from 'Midnight.vob':
      Duration: 00:42:04.58, start: 0.287267, bitrate: 5829 kb/s
        Stream #0:0[0x1bf]: Data: dvd_nav_packet
        Stream #0:1[0x1e0]: Video: mpeg2video (Main), yuv420p(tv, top first), 720x576 [SAR 64:45 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
        Stream #0:2[0x80]: Audio: ac3, 48000 Hz, 5.1(side), fltp, 384 kb/s
        Stream #0:3[0x81]: Audio: ac3, 48000 Hz, 5.1(side), fltp, 384 kb/s
        Stream #0:4[0x82]: Audio: ac3, 48000 Hz, mono, fltp, 192 kb/s
        Stream #0:5[0x22]: Subtitle: dvd_subtitle
        Stream #0:6[0x24]: Subtitle: dvd_subtitle
        Stream #0:7[0x25]: Subtitle: dvd_subtitle
        Stream #0:8[0x26]: Subtitle: dvd_subtitle
        Stream #0:9[0x28]: Subtitle: dvd_subtitle
        Stream #0:10[0x21]: Subtitle: dvd_subtitle
        Stream #0:11[0x23]: Subtitle: dvd_subtitle
    Stream mapping:
      Stream #0:1 -> #0:0 (mpeg2video (native) -> mjpeg (native))
    Press [q] to stop, [?] for help
    [swscaler @ 000000000295ec40] deprecated pixel format used, make sure you did set range correctly
    Output #0, image2, to 'Midnight%06d.jpg':
      Metadata:
        encoder         : Lavf58.27.103
        Stream #0:0: Video: mjpeg, yuvj420p(pc), 720x576 [SAR 36:5 DAR 9:1], q=2-31, 200 kb/s, 1 fps, 1 tbn, 1 tbc
        Metadata:
          encoder         : Lavc58.52.101 mjpeg
        Side data:
          cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
    [mpeg @ 0000000000400340] New subtitle stream 0:12 at pos:7458830 and DTS:12.4873s
    [mpeg @ 0000000000400340] New subtitle stream 0:13 at pos:7475214 and DTS:12.4873s
    frame=  951 fps=115 q=2.0 Lsize=N/A time=00:15:51.00 bitrate=N/A speed= 115x
    video:49190kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    

    I settled on rendering a large JPG and then shrinking it to 1024x576. If you do not need deinterlacing, remove yadif=1.

    ffmpeg -i input.vob -vf yadif=1,scale=4096x2304,setdar=16:9 -qmin 1 -q:v 1 output%06d.jpg
    

    This JPG comes out at the original display size, 1024x576 with square pixels.

    ffmpeg -i input.vob -vf yadif=1,fps=1,scale=iw*sar:ih,setsar=1 -qmin 1 -q:v 1 output%06d.jpg
    

    PNG gives better quality than JPG.
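
    A sketch of that PNG variant, with the same filter chain and the same hypothetical input name:

    ffmpeg -i input.vob -vf yadif=1,fps=1,scale=iw*sar:ih,setsar=1 output%06d.png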

    Thanks for your contributions.

  • Is it possible to pull an RTMP stream from one server and broadcast it to another?

    12 May 2019, by Naftuli Kay

    I have a situation where I need to pull a stream from one Wowza media server and publish it to a Red5 or Flash Media Server instance with FFmpeg. Is there a command to do this? I'm essentially looking for something like this:

    while [ true ]; do 
        ffmpeg -i rtmp://localhost:2000/vod/streamName.flv rtmp://localhost:1935/live/streamName
    done
    

    Is this currently possible with FFmpeg? I remember reading about something like this, but I can't remember exactly how to do it.
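
    Not from the question, but one hedged variation on that loop: stream-copy instead of re-encoding and force the flv muxer, which RTMP outputs generally expect (the URLs are the same placeholders as above):

    while true; do
        # -re reads the VOD input at its native rate; -c copy avoids re-encoding
        ffmpeg -re -i rtmp://localhost:2000/vod/streamName.flv -c copy -f flv rtmp://localhost:1935/live/streamName
        # brief pause so a failing connection doesn't spin in a tight loop
        sleep 1
    done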