Advanced search

Media (1)

Keyword: - Tags -/artwork

Other articles (40)

  • Request to create a channel

    12 March 2010, by

    Depending on the platform's configuration, the user may have two different ways to request the creation of a channel. The first is at the moment of registration; the second, after registration, by filling in a request form.
    Both approaches ask for the same information and work in much the same way: the prospective user fills in a series of form fields that first of all give the administrators information about (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance can be fully customized to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

On other sites (5793)

  • FFmpeg saturates memory + CPU usage drops to 0% during very basic conversion of PNG files to MP4 video

    7 August 2022, by mattze_frisch

    I have this Python function that runs ffmpeg with minimal options from the Windows command line:

    


    def run_ffmpeg(frames_path, ffmpeg_path=notebook_directory):
        '''
        This function runs ffmpeg.exe to convert PNG image files into an MP4 video.

        Parameters
        ----------
        frames_path : string
            Absolute path to the PNG files
        ffmpeg_path : string
            Absolute path to the FFmpeg executable (ffmpeg.exe)
        '''
        import os
        from subprocess import check_call

        check_call(
            [
                os.path.join(ffmpeg_path, 'ffmpeg'),
                '-y',                  # Overwrite output files without asking
                '-report',             # Write logfile to current working directory
                '-framerate', '60',    # Input frame rate
                '-i', os.path.join(frames_path, 'frame%05d.png'),    # Path to input frames
                os.path.join(frames_path, 'video.mp4')               # Path to store output video
            ]
        )
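    Not part of the original question: for input frames this large, two standard ffmpeg options, a scale filter and an explicit pixel format, are commonly used to shrink the encoder's working set. A sketch that only builds the argument list (the paths and the helper name are placeholders, not from the post):

```python
import os

def build_ffmpeg_cmd(frames_path, ffmpeg_path, scale='iw/4:-2'):
    # Same command as above, plus a downscale filter and an explicit
    # pixel format -- both standard ffmpeg options that reduce the
    # per-frame memory footprint for very large inputs.
    return [
        os.path.join(ffmpeg_path, 'ffmpeg'),
        '-y',
        '-framerate', '60',
        '-i', os.path.join(frames_path, 'frame%05d.png'),
        '-vf', 'scale=' + scale,   # shrink frames before encoding ( -2 keeps height even)
        '-pix_fmt', 'yuv420p',     # 1.5 bytes/pixel instead of 3 (yuv444p)
        os.path.join(frames_path, 'video.mp4'),
    ]

cmd = build_ffmpeg_cmd(r'C:\frames', r'C:\ffmpeg\bin')
```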


    


    When running it from a Jupyter notebook over 2500 PNG files (RGBA, ca. 600-700 kB each, 9000 x 13934 pixels), CPU usage briefly peaks to 100% before dropping to 0%, while memory usage quickly saturates to 100% and stays there, slowing the system down almost to a freeze, so I need to terminate ffmpeg from the task manager:

    


    Screenshot

    


    The generated video file has a size of only 48 bytes and contains just a black frame when viewed in the VLC player.

    


    This is the ffmpeg log output:

    


    ffmpeg started on 2022-08-05 at 17:17:55
Report written to "ffmpeg-20220805-171755.log"
Log level: 48
Command line:
"C:\\Users\\Username\\Desktop\\folder\\ffmpeg" -y -report -framerate 60 -i "C:\\Users\\Username\\Desktop\\e\\frame%05d.png" "C:\\Users\\Username\\Desktop\\e\\video.mp4"
ffmpeg version 2022-07-14-git-882aac99d2-full_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 12.1.0 (Rev2, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libshaderc --enable-vulkan --enable-libplacebo --ena  libavutil      57. 29.100 / 57. 29.100
  libavcodec     59. 38.100 / 59. 38.100
  libavformat    59. 28.100 / 59. 28.100
  libavdevice    59.  8.100 / 59.  8.100
  libavfilter     8. 45.100 /  8. 45.100
  libswscale      6.  8.100 /  6.  8.100
  libswresample   4.  8.100 /  4.  8.100
  libpostproc    56.  7.100 / 56.  7.100
Splitting the commandline.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
Reading option '-framerate' ... matched as AVOption 'framerate' with argument '60'.
Reading option '-i' ... matched as input url with argument 'C:\Users\Username\Desktop\e\frame%05d.png'.
Reading option 'C:\Users\Username\Desktop\e\video.mp4' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option y (overwrite output files) with argument 1.
Applying option report (generate a report) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input url C:\Users\Username\Desktop\e\frame%05d.png.
Successfully parsed a group of options.
Opening an input file: C:\Users\Username\Desktop\e\frame%05d.png.
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00000.png' for reading
[file @ 0000000000425680] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042d800] Statistics: 668318 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00001.png' for reading
[file @ 000000000042dac0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042d6c0] Statistics: 668371 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00002.png' for reading
[file @ 000000000042d6c0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042dac0] Statistics: 669177 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00003.png' for reading
[file @ 000000000042dac0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 0000000000437a40] Statistics: 684594 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00004.png' for reading
[file @ 0000000000437a40] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 0000000000437c00] Statistics: 703014 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00005.png' for reading
[file @ 0000000000437c00] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 0000000000437d00] Statistics: 721604 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00006.png' for reading
[file @ 0000000000437cc0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 0000000000437f40] Statistics: 739761 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00007.png' for reading
[file @ 0000000000437f40] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 0000000000438040] Statistics: 757327 bytes read, 0 seeks
[image2 @ 000000000041ff80] Probe buffer size limit of 5000000 bytes reached
Input #0, image2, from 'C:\Users\Username\Desktop\e\frame%05d.png':
  Duration: 00:00:41.67, start: 0.000000, bitrate: N/A
  Stream #0:0, 8, 1/60: Video: png, rgba(pc), 9000x13934 [SAR 29528:29528 DAR 4500:6967], 60 fps, 60 tbr, 60 tbn
Successfully opened the file.
Parsing a group of options: output url C:\Users\Username\Desktop\e\video.mp4.
Successfully parsed a group of options.
Opening an output file: C:\Users\Username\Desktop\e\video.mp4.
[file @ 000000002081e3c0] Setting default whitelist 'file,crypto,data'
Successfully opened the file.
detected 12 logical cores
Stream mapping:
  Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00008.png' for reading
[file @ 00000000024ad980] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 00000000004379c0] Statistics: 767857 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00009.png' for reading
[file @ 000000000042d600] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 00000000004379c0] Statistics: 774848 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00010.png' for reading
[file @ 00000000004379c0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042da00] Statistics: 787178 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00011.png' for reading
[file @ 00000000004379c0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042da00] Statistics: 797084 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00012.png' for reading
[file @ 0000000000437a80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042da00] Statistics: 802870 bytes read, 0 seeks
[graph 0 input from stream 0:0 @ 00000000208bf800] Setting 'video_size' to value '9000x13934'
[graph 0 input from stream 0:0 @ 00000000208bf800] Setting 'pix_fmt' to value '26'
[graph 0 input from stream 0:0 @ 00000000208bf800] Setting 'time_base' to value '1/60'
[graph 0 input from stream 0:0 @ 00000000208bf800] Setting 'pixel_aspect' to value '29528/29528'
[graph 0 input from stream 0:0 @ 00000000208bf800] Setting 'frame_rate' to value '60/1'
[graph 0 input from stream 0:0 @ 00000000208bf800] w:9000 h:13934 pixfmt:rgba tb:1/60 fr:60/1 sar:29528/29528
[format @ 00000000025ef840] Setting 'pix_fmts' to value 'yuv420p|yuvj420p|yuv422p|yuvj422p|yuv444p|yuvj444p|nv12|nv16|nv21|yuv420p10le|yuv422p10le|yuv444p10le|nv20le|gray|gray10le'
[auto_scale_0 @ 00000000025efe40] w:iw h:ih flags:'' interl:0
[format @ 00000000025ef840] auto-inserting filter 'auto_scale_0' between the filter 'Parsed_null_0' and the filter 'format'
[AVFilterGraph @ 000000000042da00] query_formats: 4 queried, 3 merged, 1 already done, 0 delayed
[auto_scale_0 @ 00000000025efe40] picking yuv444p out of 13 ref:rgba alpha:1
[auto_scale_0 @ 00000000025efe40] w:9000 h:13934 fmt:rgba sar:29528/29528 -> w:9000 h:13934 fmt:yuv444p sar:1/1 flags:0x0
[auto_scale_0 @ 00000000025efe40] w:9000 h:13934 fmt:rgba sar:29528/29528 -> w:9000 h:13934 fmt:yuv444p sar:1/1 flags:0x0
[auto_scale_0 @ 00000000025efe40] w:9000 h:13934 fmt:rgba sar:29528/29528 -> w:9000 h:13934 fmt:yuv444p sar:1/1 flags:0x0
[auto_scale_0 @ 00000000025efe40] w:9000 h:13934 fmt:rgba sar:29528/29528 -> w:9000 h:13934 fmt:yuv444p sar:1/1 flags:0x0
[libx264 @ 000000002081d280] using mv_range_thread = 376
[libx264 @ 000000002081d280] using SAR=1/1
[libx264 @ 000000002081d280] frame MB size (563x871) > level limit (139264)
[libx264 @ 000000002081d280] DPB size (4 frames, 1961492 mbs) > level limit (1 frames, 696320 mbs)
[libx264 @ 000000002081d280] MB rate (29422380) > level limit (16711680)
[libx264 @ 000000002081d280] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 000000002081d280] profile High 4:4:4 Predictive, level 6.2, 4:4:4, 8-bit
[libx264 @ 000000002081d280] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=18 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'C:\Users\Username\Desktop\e\video.mp4':
  Metadata:
    encoder         : Lavf59.28.100
  Stream #0:0, 0, 1/15360: Video: h264 (avc1 / 0x31637661), yuv444p(tv, progressive), 9000x13934 [SAR 1:1 DAR 4500:6967], q=2-31, 60 fps, 15360 tbn
    Metadata:
      encoder         : Lavc59.38.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
Clipping frame in rate conversion by 0.000008
frame=    1 fps=0.8 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00013.png' for reading
[file @ 000000000a6a2180] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 810395 bytes read, 0 seeks
frame=    2 fps=0.8 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00014.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 818213 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00015.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 817936 bytes read, 0 seeks
frame=    4 fps=1.2 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00016.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 817014 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00017.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 828088 bytes read, 0 seeks
frame=    6 fps=1.5 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00018.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 831007 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00019.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 845203 bytes read, 0 seeks
frame=    8 fps=1.7 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00020.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 851548 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00021.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 847629 bytes read, 0 seeks
frame=   10 fps=1.8 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00022.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 860169 bytes read, 0 seeks
frame=   11 fps=1.4 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00023.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 857243 bytes read, 0 seeks
frame=   12 fps=1.2 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x    
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00024.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 835155 bytes read, 0 seeks


    


    What is the problem?

    


    By the way, the color mode of the image files was confirmed by doing:

    


    from PIL import Image


img = Image.open('C:\\Users\\EPI-SMLM\\Desktop\\e\\frame00000.png')
img.mode
-------------------------------------------------------------------
C:\Program Files\Python38\lib\site-packages\PIL\Image.py:3035: DecompressionBombWarning: Image size (125406000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(

'RGBA'


    


    The "decompression bomb warning" appears to be a false alarm/bug.
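    The numbers in the warning can be checked directly: 9000 x 13934 pixels genuinely exceeds Pillow's default MAX_IMAGE_PIXELS limit, so the check fires on any image this large, malicious or not:

```python
# Dimensions reported by ffmpeg for the input frames
width, height = 9000, 13934
pixels = width * height

# Pillow's default MAX_IMAGE_PIXELS, as quoted in the warning message
PIL_DEFAULT_LIMIT = 89478485

print(pixels)                      # 125406000, the number in the warning
print(pixels > PIL_DEFAULT_LIMIT)  # True: any image this large trips the check
```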

    


    UPDATE: I can confirm that this happens even when there are only 50 image files, i.e. 50 x 700 kB = 35 MB in total size. ffmpeg still gobbles up all available memory (almost 60 GB of private bytes!!!).
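    Not in the original post, but a back-of-the-envelope estimate based on the log above (rc_lookahead=40, threads=18) suggests why even 50 small PNGs can exhaust memory: it is the decoded frames, not the compressed files, that get buffered. The frames-in-flight count below is a rough assumption:

```python
width, height = 9000, 13934

rgba_frame = width * height * 4      # decoded PNG: 4 bytes per pixel
yuv444_frame = width * height * 3    # frame handed to libx264: 3 bytes per pixel

# x264 options from the log: rc_lookahead=40, threads=18, bframes=3.
# The lookahead queue alone holds ~40 frames; add a rough allowance for
# per-thread, reference, and B-frame buffers (assumed, not measured).
frames_in_flight = 40 + 10
total = frames_in_flight * yuv444_frame

print(round(rgba_frame / 2**20))   # ~478 MiB per decoded RGBA frame
print(round(total / 2**30))        # ~18 GiB just for buffered frames
```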

    


    And it also happens if ffmpeg is run from the command line.

    


    This must be a bug!

    


  • Libav (ffmpeg) copying decoded video timestamps to encoder

    31 October 2016, by Jason C

    I am writing an application that decodes a single video stream from an input file (any codec, any container), does a bunch of image processing, and encodes the results to an output file (single video stream, Quicktime RLE, MOV). I am using ffmpeg’s libav 3.1.5 (Windows build for now, but the application will be cross-platform).

    There is a 1:1 correspondence between input and output frames and I want the frame timing in the output to be identical to the input. I am having a really, really hard time accomplishing this. So my general question is: How do I reliably (as in, in all cases of inputs) set the output frame timing identical to the input?

    It took me a very long time to slog through the API and get to the point I am at now. I put together a minimal test program to work with:

    #include <cstdio>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/avutil.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }

    using namespace std;


    struct DecoderStuff {
       AVFormatContext *formatx;
       int nstream;
       AVCodec *codec;
       AVStream *stream;
       AVCodecContext *codecx;
       AVFrame *rawframe;
       AVFrame *rgbframe;
       SwsContext *swsx;
    };


    struct EncoderStuff {
       AVFormatContext *formatx;
       AVCodec *codec;
       AVStream *stream;
       AVCodecContext *codecx;
    };


    template <typename T>
    static void dump_timebase (const char *what, const T *o) {
       if (o)
           printf("%s timebase: %d/%d\n", what, o->time_base.num, o->time_base.den);
       else
           printf("%s timebase: null object\n", what);
    }


    // reads next frame into d.rawframe and d.rgbframe. returns false on error/eof.
    static bool read_frame (DecoderStuff &d) {

       AVPacket packet;
       int err = 0, haveframe = 0;

       // read
       while (!haveframe && err >= 0 && ((err = av_read_frame(d.formatx, &packet)) >= 0)) {
          if (packet.stream_index == d.nstream) {
              err = avcodec_decode_video2(d.codecx, d.rawframe, &haveframe, &packet);
          }
          av_packet_unref(&packet);
       }

       // error output
       if (!haveframe && err != AVERROR_EOF) {
           char buf[500];
           av_strerror(err, buf, sizeof(buf) - 1);
           buf[499] = 0;
           printf("read_frame: %s\n", buf);
       }

       // convert to rgb
       if (haveframe) {
           sws_scale(d.swsx, d.rawframe->data, d.rawframe->linesize, 0, d.rawframe->height,
                     d.rgbframe->data, d.rgbframe->linesize);
       }

       return haveframe;

    }


    // writes an output frame, returns false on error.
    static bool write_frame (EncoderStuff &e, AVFrame *inframe) {

       // see note in so post about outframe here
       AVFrame *outframe = av_frame_alloc();
       outframe->format = inframe->format;
       outframe->width = inframe->width;
       outframe->height = inframe->height;
       av_image_alloc(outframe->data, outframe->linesize, outframe->width, outframe->height,
                      AV_PIX_FMT_RGB24, 1);
       //av_frame_copy(outframe, inframe);
       static int count = 0;
       for (int n = 0; n < outframe->width * outframe->height; ++ n) {
           outframe->data[0][n*3+0] = ((n+count) % 100) ? 0 : 255;
           outframe->data[0][n*3+1] = ((n+count) % 100) ? 0 : 255;
           outframe->data[0][n*3+2] = ((n+count) % 100) ? 0 : 255;
       }
       ++ count;

       AVPacket packet;
       av_init_packet(&packet);
       packet.size = 0;
       packet.data = NULL;

       int err, havepacket = 0;
       if ((err = avcodec_encode_video2(e.codecx, &packet, outframe, &havepacket)) >= 0 && havepacket) {
           packet.stream_index = e.stream->index;
           err = av_interleaved_write_frame(e.formatx, &packet);
       }

       if (err < 0) {
           char buf[500];
           av_strerror(err, buf, sizeof(buf) - 1);
           buf[499] = 0;
           printf("write_frame: %s\n", buf);
       }

       av_packet_unref(&packet);
       av_freep(&outframe->data[0]);
       av_frame_free(&amp;outframe);

       return err >= 0;

    }


    int main (int argc, char *argv[]) {

       const char *infile = "wildlife.wmv";
       const char *outfile = "test.mov";
       DecoderStuff d = {};
       EncoderStuff e = {};

       av_register_all();

       // decoder
       avformat_open_input(&d.formatx, infile, NULL, NULL);
       avformat_find_stream_info(d.formatx, NULL);
       d.nstream = av_find_best_stream(d.formatx, AVMEDIA_TYPE_VIDEO, -1, -1, &d.codec, 0);
       d.stream = d.formatx->streams[d.nstream];
       d.codecx = avcodec_alloc_context3(d.codec);
       avcodec_parameters_to_context(d.codecx, d.stream->codecpar);
       avcodec_open2(d.codecx, NULL, NULL);
       d.rawframe = av_frame_alloc();
       d.rgbframe = av_frame_alloc();
       d.rgbframe->format = AV_PIX_FMT_RGB24;
       d.rgbframe->width = d.codecx->width;
       d.rgbframe->height = d.codecx->height;
       av_frame_get_buffer(d.rgbframe, 1);
       d.swsx = sws_getContext(d.codecx->width, d.codecx->height, d.codecx->pix_fmt,
                               d.codecx->width, d.codecx->height, AV_PIX_FMT_RGB24,
                               SWS_POINT, NULL, NULL, NULL);
       //av_dump_format(d.formatx, 0, infile, 0);
       dump_timebase("in stream", d.stream);
       dump_timebase("in stream:codec", d.stream->codec); // note: deprecated
       dump_timebase("in codec", d.codecx);

       // encoder
       avformat_alloc_output_context2(&e.formatx, NULL, NULL, outfile);
       e.codec = avcodec_find_encoder(AV_CODEC_ID_QTRLE);
       e.stream = avformat_new_stream(e.formatx, e.codec);
       e.codecx = avcodec_alloc_context3(e.codec);
       e.codecx->bit_rate = 4000000; // arbitrary for qtrle
       e.codecx->width = d.codecx->width;
       e.codecx->height = d.codecx->height;
       e.codecx->gop_size = 30; // 99% sure this is arbitrary for qtrle
       e.codecx->pix_fmt = AV_PIX_FMT_RGB24;
       e.codecx->time_base = d.stream->time_base; // ???
       e.codecx->flags |= (e.formatx->flags & AVFMT_GLOBALHEADER) ? AV_CODEC_FLAG_GLOBAL_HEADER : 0;
       avcodec_open2(e.codecx, NULL, NULL);
       avcodec_parameters_from_context(e.stream->codecpar, e.codecx);
       //av_dump_format(e.formatx, 0, outfile, 1);
       dump_timebase("out stream", e.stream);
       dump_timebase("out stream:codec", e.stream->codec); // note: deprecated
       dump_timebase("out codec", e.codecx);

       // open file and write header
       avio_open(&e.formatx->pb, outfile, AVIO_FLAG_WRITE);
       avformat_write_header(e.formatx, NULL);

       // frames
       while (read_frame(d) && write_frame(e, d.rgbframe))
           ;

       // write trailer and close file
       av_write_trailer(e.formatx);
       avio_closep(&amp;e.formatx->pb);

    }

    A few notes about that:

    • Since all of my attempts at frame timing so far have failed, I’ve removed almost all timing-related stuff from this code to start with a clean slate.
    • Almost all error checking and cleanup omitted for brevity.
    • The reason I allocate a new output frame with a new buffer in write_frame, rather than using inframe directly, is because this is more representative of what my real application is doing. My real app also uses RGB24 internally, hence the conversions here.
    • The reason I generate a weird pattern in outframe, rather than using e.g. av_copy_frame, is because I just wanted a test pattern that compressed well with Quicktime RLE (my test input ends up generating a 1.7GB output file otherwise).
    • The input video I am using, "wildlife.wmv", can be found here. I’ve hard-coded the filenames.
    • I am aware that avcodec_decode_video2 and avcodec_encode_video2 are deprecated, but don’t care. They work fine, I’ve already struggled too much getting my head around the latest version of the API, ffmpeg changes their API with nearly every release, and I really don’t feel like dealing with avcodec_send_* and avcodec_receive_* right now.
    • I think I’m supposed to be finishing off by passing a NULL frame to avcodec_encode_video2 to flush some buffers or something but I’m a bit confused about that. Unless somebody feels like explaining that let’s ignore it for now, it’s a separate question. The docs are as vague about this point as they are about everything else.
    • My test input file’s frame rate is 29.97.

    Now, as for my current attempts. The following timing-related fields are present in the above code, with details/confusion noted. There are a lot of them, because the API is mind-bogglingly convoluted:

    • main: d.stream->time_base: Input video stream time base. For my test input file this is 1/1000.
    • main: d.stream->codec->time_base: Not sure what this is (I never could make sense of why AVStream has an AVCodecContext field when you always use your own new context anyways) and also the codec field is deprecated. For my test input file this is 1/1000.
    • main: d.codecx->time_base: Input codec context time base. For my test input file this is 0/1. Am I supposed to set it?
    • main: e.stream->time_base: Time base of the output stream I create. What do I set this to?
    • main: e.stream->codec->time_base: Time base of the deprecated and mysterious codec field of the output stream I create. Do I set this to anything?
    • main: e.codecx->time_base: Time base of the encoder context I create. What do I set this to?
    • read_frame: packet.dts: Decoding timestamp of packet read.
    • read_frame: packet.pts: Presentation timestamp of packet read.
    • read_frame: packet.duration: Duration of packet read.
    • read_frame: d.rawframe->pts: Presentation timestamp of raw frame decoded. This is always 0. Why isn't it read by the decoder...?
    • read_frame: d.rgbframe->pts / write_frame: inframe->pts: Presentation timestamp of decoded frame converted to RGB. Not set to anything currently.
    • read_frame: d.rawframe->pkt_*: Fields copied from packet, discovered after reading this post. They are set correctly but I don't know if they are useful.
    • write_frame: outframe->pts: Presentation timestamp of frame being encoded. Should I set this to something?
    • write_frame: outframe->pkt_*: Timing fields from a packet. Should I set these? They seem to be ignored by the encoder.
    • write_frame: packet.dts: Decoding timestamp of packet being encoded. What do I set it to?
    • write_frame: packet.pts: Presentation timestamp of packet being encoded. What do I set it to?
    • write_frame: packet.duration: Duration of packet being encoded. What do I set it to?
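    For orientation (general FFmpeg behaviour, not something stated in the post): every one of these fields is an integer counted in some time base, and converting a timestamp between two of them is a pure rescale, which is what libav's av_rescale_q helper does. A sketch of that arithmetic, using the post's 1/1000 input time base and an assumed 1/30000 output time base:

```python
from fractions import Fraction

def rescale(ts, tb_src, tb_dst):
    # Equivalent of libav's av_rescale_q: express the same instant,
    # ts * tb_src seconds, in units of tb_dst (rounded to nearest).
    return round(ts * tb_src / tb_dst)

tb_in = Fraction(1, 1000)     # input stream time base from the post
tb_out = Fraction(1, 30000)   # hypothetical output time base

# One frame at 29.97 fps lasts 1001/30000 s, i.e. ~33 ticks of the
# 1/1000 input time base:
pts_in = 33                   # a pts as read from an input packet
print(rescale(pts_in, tb_in, tb_out))   # 990 ticks in the output time base
```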

    I have tried the following, with the described results. Note that inframe is d.rgbframe:

    1.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.codecx->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set outframe->pts = inframe->pts in write_frame
      • Result : Warning that encoder time base is not set (since d.codecx->time_base was 0/1), seg fault.
    2.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set outframe->pts = inframe->pts in write_frame
      • Result : No warnings, but VLC reports the frame rate as 480.048 (no idea where this number came from) and the file plays too fast. Also, the encoder sets all the timing fields in packet to 0, which was not what I expected. (Edit : it turns out this is because av_interleaved_write_frame, unlike av_write_frame, takes ownership of the packet and swaps it with a blank one, and I was printing the values after that call. So they are not ignored.)
    3.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set any of pts/dts/duration in packet in write_frame to anything.
      • Result : Warnings about packet timestamps not being set. The encoder seems to reset all packet timing fields to 0, so none of this has any effect.
    4.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • I found these fields, pkt_pts, pkt_dts, and pkt_duration in AVFrame after reading this post, so I tried copying those all the way through to outframe.
      • Result : Really had my hopes up, but ended up with the same results as attempt 3 (packet timestamp not set warning, incorrect results).

    I tried various other hand-wavey permutations of the above and nothing worked. What I want to do is create an output file that plays back with the same timing and frame rate as the input (29.97 constant frame rate in this case).

    So how do I do this? Of the zillions of timing-related fields here, what do I set to make the output match the input? And how do I do it in a way that handles arbitrary input formats, which may store their timestamps and time bases in different places? I need this to always work.


    For reference, here is a table of all the packet and frame timestamps read from the video stream of my test input file, to give a sense of what my test file looks like. None of the input packets’ pts are set, nor are the frame pts, and for some reason the duration of the first 108 frames is 0. VLC plays the file fine and reports the frame rate as 29.9700089 :

  • FFmpeg filtergraph memory leak

    5 July 2017, by Leif Andersen

    I have an FFmpeg program that :

    1. Demuxes and decodes a video file.
    2. Passes it through a filtergraph.
    3. Encodes and muxes the new video.

    The filtergraph itself is rather complex, and can be run directly from the command line like so:

    ffmpeg -i demo.mp4 -filter_complex \
    "[audio3]atrim=end=30:start=10[audio2];\
     [video5]trim=end=30:start=10[video4];[audio2]anull[audio6];\
     [video4]scale=width=1920:height=1080[video7];[audio6]anull[audio8];\
     [video7]fps=fps=30[video9];[audio8]anull[audio10];\
     [video9]format=pix_fmts=yuv420p[video11];\
     [audio10]asetpts=expr=PTS-STARTPTS[audio12];\
     [video11]setpts=expr=PTS-STARTPTS[video13];\
     [audio15]concat=v=0:a=1:n=1[audio14];\
     [video17]concat=v=1:a=0:n=1[video16];\
     [audio12]afifo[audio15];[video13]fifo[video17];\
     [audio14]afifo[audio18];[video16]fifo[video19];\
     [audio18]anull[audio20];\
     [video19]pad=width=1920:height=1080[video21];\
     [audio20]anull[audio22];[video21]fps=fps=25[video23];\
     [audio22]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo[fa];\
     [video23]format=pix_fmts=yuv420p[fv];[0:a]afifo[audio3];\
     [0:v]fifo[video5]" \
    -map "[fv]" -map "[fa]" out.mp4

    I realize this is a massive filtergraph with a lot of no-op filters; it was autogenerated rather than being hand-written. Here is a cleaner version of the graph. (It’s a Graphviz file; you can render it from the command line or here.)

    Anyway, when I run the program that uses this filtergraph, my memory usage spikes. I end up using about 7 GB of RAM for a 30-second clip. However, when I run the ffmpeg command above, it peaks at about 600 MB of RAM. This leads me to believe that the problem is not the ungodly size of the filtergraph, but how my program is using it.

    The program sets up the filtergraph (using avfilter_graph_parse_ptr, giving it the filtergraph string shown above), encoder, muxer, decoder, and demuxer, then spawns two threads: one that sends frames into the filtergraph, and one that receives them. The thread that sends them looks something like:

    void decode () {
       while(... more_frames ...) {
           AVFrame *frame = av_frame_alloc();
           ... fill next frame of stream ...
           av_buffersrc_write_frame(ctx, frame);
           av_frame_free(&frame);
       }
    }

    (I have elided the avcodec_send_packet/avcodec_receive_frame calls as they don’t seem to be leaking memory. I have also elided the process of flushing the buffersrc, as that won’t happen until the end, and the memory spikes long before that.)

    And the encoder thread looks similar :

    void encode() {
       while(... nodes_in_graph ...) {
           AVFrame *frame = av_frame_alloc();
           av_buffersink_get_frame(ctx, frame);
           ... ensure frame actually was filled ...
           ... send frame to encoder ...
           av_frame_free(&amp;frame);
       }
    }

    As with the decoder, I have elided the send_frame/receive_packet combo as they don’t seem to be leaking memory. Additionally I have elided the details of ensuring that the frame actually was filled. The code loops until the frame eventually does get filled.

    Every frame I allocate I deallocate fairly quickly. I have additionally handled all of the error cases that FFmpeg can report (elided in the example).

    I have also tried having only one frame for the encoder and one for the decoder (and calling av_frame_unref in each iteration of the loop).

    Am I forgetting to free something, or am I just using the libavfilter calls incorrectly, such that it has to buffer all of the data? I don’t think the leak is caused by the filtergraph itself, because running it from the command line doesn’t seem to cause the same memory explosion.

    FWIW, the actual code is here, although it’s written in Racket. I do have a minimal example that also seems to duplicate this behavior (modified from the doc/examples/filtering_video.c file in the FFmpeg source):

    #include <stdio.h>
    #include <stdlib.h>

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavfilter/avfiltergraph.h>
    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>
    #include <libavutil/opt.h>

    const char *filter_descr = "trim=start=10:end=30,scale=78:24,transpose=cclock";

    static AVFormatContext *fmt_ctx;
    static AVCodecContext *dec_ctx;
    AVFilterContext *buffersink_ctx;
    AVFilterContext *buffersrc_ctx;
    AVFilterGraph *filter_graph;
    static int video_stream_index = -1;
    static int64_t last_pts = AV_NOPTS_VALUE;

    static int open_input_file(const char *filename)
    {
       int ret;
       AVCodec *dec;

       if ((ret = avformat_open_input(&fmt_ctx, filename, NULL, NULL)) < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
           return ret;
       }

       if ((ret = avformat_find_stream_info(fmt_ctx, NULL)) < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
           return ret;
       }

       /* select the video stream */
       ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
       if (ret < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot find a video stream in the input file\n");
           return ret;
       }
       video_stream_index = ret;

       /* create decoding context */
       dec_ctx = avcodec_alloc_context3(dec);
       if (!dec_ctx)
           return AVERROR(ENOMEM);
       avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[video_stream_index]->codecpar);
       av_opt_set_int(dec_ctx, "refcounted_frames", 1, 0);

       /* init the video decoder */
       if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot open video decoder\n");
           return ret;
       }

       return 0;
    }

    static int init_filters(const char *filters_descr)
    {
       char args[512];
       int ret = 0;
       AVFilter *buffersrc  = avfilter_get_by_name("buffer");
       AVFilter *buffersink = avfilter_get_by_name("buffersink");
       AVFilterInOut *outputs = avfilter_inout_alloc();
       AVFilterInOut *inputs  = avfilter_inout_alloc();
       AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;
       enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE };

       filter_graph = avfilter_graph_alloc();
       if (!outputs || !inputs || !filter_graph) {
           ret = AVERROR(ENOMEM);
           goto end;
       }

       /* buffer video source: the decoded frames from the decoder will be inserted here. */
       snprintf(args, sizeof(args),
               "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
               dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
               time_base.num, time_base.den,
               dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);

       ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                      args, NULL, filter_graph);
       if (ret < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
           goto end;
       }

       /* buffer video sink: to terminate the filter chain. */
       ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                          NULL, NULL, filter_graph);
       if (ret < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
           goto end;
       }

       ret = av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts,
                                 AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
       if (ret < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
           goto end;
       }

       outputs->name       = av_strdup("in");
       outputs->filter_ctx = buffersrc_ctx;
       outputs->pad_idx    = 0;
       outputs->next       = NULL;
       inputs->name       = av_strdup("out");
       inputs->filter_ctx = buffersink_ctx;
       inputs->pad_idx    = 0;
       inputs->next       = NULL;

       if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
                                       &inputs, &outputs, NULL)) < 0)
           goto end;

       if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
           goto end;

    end:
       avfilter_inout_free(&inputs);
       avfilter_inout_free(&outputs);

       return ret;
    }

    int main(int argc, char **argv)
    {
       int ret;
       AVPacket packet;
       AVFrame *frame = av_frame_alloc();
       AVFrame *filt_frame = av_frame_alloc();

       if (!frame || !filt_frame) {
           perror("Could not allocate frame");
           exit(1);
       }
       if (argc != 2) {
           fprintf(stderr, "Usage: %s file\n", argv[0]);
           exit(1);
       }

       av_register_all();
       avfilter_register_all();

       if ((ret = open_input_file(argv[1])) < 0)
           goto end;
       if ((ret = init_filters(filter_descr)) < 0)
           goto end;

       /* read all packets */
       while (1) {
           if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
               break;

           if (packet.stream_index == video_stream_index) {
               ret = avcodec_send_packet(dec_ctx, &packet);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Error while sending a packet to the decoder\n");
                   break;
               }

               while (ret >= 0) {
                   ret = avcodec_receive_frame(dec_ctx, frame);
                   if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                       break;
                   } else if (ret < 0) {
                       av_log(NULL, AV_LOG_ERROR, "Error while receiving a frame from the decoder\n");
                       goto end;
                   }

                   if (ret >= 0) {
                       frame->pts = av_frame_get_best_effort_timestamp(frame);

                       /* push the decoded frame into the filtergraph */
                       if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
                           av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
                           break;
                       }

                       /* pull filtered frames from the filtergraph */
                       while (1) {
                           ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
                           if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                               break;
                           if (ret < 0)
                               goto end;
                           av_frame_unref(filt_frame);
                       }
                       av_frame_unref(frame);
                   }
               }
           }
           av_packet_unref(&packet);
       }
    end:
       avfilter_graph_free(&filter_graph);
       avcodec_free_context(&dec_ctx);
       avformat_close_input(&fmt_ctx);
       av_frame_free(&frame);
       av_frame_free(&filt_frame);

       return ret;
    }