
Other articles (10)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative elements of MediaSPIP specific uses for specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of this kind. Its members (...)

  • Monitoring MediaSPIP farms (and SPIP too, while we're at it)

    31 May 2013

    When managing several (or even several dozen) MediaSPIP sites on the same installation, it can be very handy to get certain information at a glance.
    This article documents the Munin monitoring scripts developed with the help of Infini.
    These scripts are installed automatically by the automatic installation script if a Munin installation is detected.
    Description of the scripts
    Three Munin scripts have been developed:
    1. mediaspip_medias
    A script for (...)

On other sites (3210)

  • Getting shifted timestamps when encoding a fragmented h264 mp4 with ffmpeg

    14 September 2022, by Martin Castin

    I am trying to encode a fragmented h264 mp4 with ffmpeg. I tried the following command:

    


    ffmpeg -i input.mp4 -movflags +frag_keyframe+separate_moof+omit_tfhd_offset+empty_moov output.mp4


    


    It does give me a fragmented mp4 but the timestamps of the frames seem to be shifted by 0.04s when I read the video with mpv. The first frame has a timestamp of 0.04s instead of 0s, as in the input video (1920x1080, 50 fps). I encountered the problem both with ffmpeg 5.1 and ffmpeg 3.4.11.

    


    I tried to add several flags, such as -avoid_negative_ts make_zero or -copyts -output_ts_offset -0.04, but it did not help.
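
    For reference, written out as complete commands (the exact flag placement is my assumption; the file names are the same placeholders as above), these attempts correspond to:

    ffmpeg -i input.mp4 -movflags +frag_keyframe+separate_moof+omit_tfhd_offset+empty_moov -avoid_negative_ts make_zero output.mp4
    ffmpeg -i input.mp4 -movflags +frag_keyframe+separate_moof+omit_tfhd_offset+empty_moov -copyts -output_ts_offset -0.04 output.mp4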

    


    I am also trying to achieve this using the ffmpeg libav libraries in C++ but did not get a better result. Here are the code fragments I used.

    


// allocate the output (muxer) context for the given file name
avformat_alloc_output_context2(&oc, NULL, NULL, filename);

// emit global headers when the container format requires them
if (oc_->oformat->flags & AVFMT_GLOBALHEADER) {
    codecCtx_->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
...
// request fragmented-MP4 muxing before writing the header
AVDictionary* opts = NULL;
av_dict_set(&opts, "movflags", "frag_keyframe+separate_moof+omit_tfhd_offset+empty_moov", 0);
ret = avformat_write_header(oc_, &opts);
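
    As a side note, the -avoid_negative_ts make_zero attempt from the command line also has a libav-side equivalent; a minimal sketch, reusing the oc_ and opts variables from the fragment above:

// Sketch: libav equivalent of the command-line "-avoid_negative_ts make_zero" attempt.
// Either set it directly on the muxer context before avformat_write_header() ...
oc_->avoid_negative_ts = AVFMT_AVOID_NEG_TS_MAKE_ZERO;
// ... or pass it through the same options dictionary as the movflags:
// av_dict_set(&opts, "avoid_negative_ts", "make_zero", 0);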


    


    Do you know how to avoid this timestamp shift for fragmented mp4, either with ffmpeg or libav?

    


    Edit: example videos and complete code example

    


    I also tried with the following ffmpeg build

    


    ffmpeg version 5.0.1-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 8 (Debian 8.3.0-6)
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
libavutil      57. 17.100 / 57. 17.100
libavcodec     59. 18.100 / 59. 18.100
libavformat    59. 16.100 / 59. 16.100
libavdevice    59.  4.100 / 59.  4.100
libavfilter     8. 24.100 /  8. 24.100
libswscale      6.  4.100 /  6.  4.100
libswresample   4.  3.100 /  4.  3.100
libpostproc    56.  3.100 / 56.  3.100


    


    and with the Sintel trailer as input video, which is 24 fps, and I thus get a time shift of 83 ms. Here is the output I get.

    


    Here is a complete code example, slightly adapted from the muxing.c ffmpeg example (audio removed and adapted for C++). This code shows exactly the same problem.

    


    You can simply comment out line 383 (the call to av_dict_set) to switch back to a non-fragmented mp4 that does not show the timestamp shift.

    


/*
 * Copyright (c) 2003 Fabrice Bellard
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

/**
 * @file
 * libavformat API example.
 *
 * Output a media file in any supported libavformat format. The default
 * codecs are used.
 * @example muxing.c
 */

#include <cstdlib>
#include <cstdio>
#include <cstring>
#include <cmath>

extern "C"
{
#define __STDC_CONSTANT_MACROS
#include <libavutil/avassert.h>
#include <libavutil/channel_layout.h>
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>
#include <libavutil/timestamp.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
}

#define STREAM_DURATION   10.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */

#define SCALE_FLAGS SWS_BICUBIC

// a wrapper around a single output AVStream
typedef struct OutputStream {
  AVStream *st;
  AVCodecContext *enc;

  /* pts of the next frame that will be generated */
  int64_t next_pts;
  int samples_count;

  AVFrame *frame;
  AVFrame *tmp_frame;

  AVPacket *tmp_pkt;

  float t, tincr, tincr2;

  struct SwsContext *sws_ctx;
  struct SwrContext *swr_ctx;
} OutputStream;

static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
{
  AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;

//  printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
//         av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
//         av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
//         av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
//         pkt->stream_index);
}

static int write_frame(AVFormatContext *fmt_ctx, AVCodecContext *c,
                       AVStream *st, AVFrame *frame, AVPacket *pkt)
{
  int ret;

  // send the frame to the encoder
  ret = avcodec_send_frame(c, frame);
  if (ret < 0) {
    fprintf(stderr, "Error sending a frame to the encoder");
    exit(1);
  }

  while (ret >= 0) {
    ret = avcodec_receive_packet(c, pkt);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
      break;
    else if (ret < 0) {
      fprintf(stderr, "Error encoding a frame\n");
      exit(1);
    }

    /* rescale output packet timestamp values from codec to stream timebase */
    av_packet_rescale_ts(pkt, c->time_base, st->time_base);
    pkt->stream_index = st->index;

    /* Write the compressed frame to the media file. */
    log_packet(fmt_ctx, pkt);
    ret = av_interleaved_write_frame(fmt_ctx, pkt);
    /* pkt is now blank (av_interleaved_write_frame() takes ownership of
     * its contents and resets pkt), so that no unreferencing is necessary.
     * This would be different if one used av_write_frame(). */
    if (ret < 0) {
      fprintf(stderr, "Error while writing output packet\n");
      exit(1);
    }
  }

  return ret == AVERROR_EOF ? 1 : 0;
}

/* Add an output stream. */
static void add_stream(OutputStream *ost, AVFormatContext *oc,
                       const AVCodec **codec,
                       enum AVCodecID codec_id)
{
  AVCodecContext *c;
  int i;

  /* find the encoder */
  *codec = avcodec_find_encoder(codec_id);
  if (!(*codec)) {
    fprintf(stderr, "Could not find encoder for '%s'\n",
            avcodec_get_name(codec_id));
    exit(1);
  }

  ost->tmp_pkt = av_packet_alloc();
  if (!ost->tmp_pkt) {
    fprintf(stderr, "Could not allocate AVPacket\n");
    exit(1);
  }

  ost->st = avformat_new_stream(oc, NULL);
  if (!ost->st) {
    fprintf(stderr, "Could not allocate stream\n");
    exit(1);
  }
  ost->st->id = oc->nb_streams-1;
  c = avcodec_alloc_context3(*codec);
  if (!c) {
    fprintf(stderr, "Could not alloc an encoding context\n");
    exit(1);
  }
  ost->enc = c;

  switch ((*codec)->type) {
    case AVMEDIA_TYPE_VIDEO:
      c->codec_id = codec_id;

      c->bit_rate = 400000;
      /* Resolution must be a multiple of two. */
      c->width    = 352;
      c->height   = 288;
      /* timebase: This is the fundamental unit of time (in seconds) in terms
       * of which frame timestamps are represented. For fixed-fps content,
       * timebase should be 1/framerate and timestamp increments should be
       * identical to 1. */
      ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
      c->time_base       = ost->st->time_base;

      c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
      c->pix_fmt       = STREAM_PIX_FMT;
      if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
        /* just for testing, we also add B-frames */
        c->max_b_frames = 2;
      }
      if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
        /* Needed to avoid using macroblocks in which some coeffs overflow.
         * This does not happen with normal video, it just happens here as
         * the motion of the chroma plane does not match the luma plane. */
        c->mb_decision = 2;
      }
      break;

    default:
      break;
  }

  /* Some formats want stream headers to be separate. */
  if (oc->oformat->flags & AVFMT_GLOBALHEADER)
    c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}

/**************************************************************/
/* video output */

static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
{
  AVFrame *picture;
  int ret;

  picture = av_frame_alloc();
  if (!picture)
    return NULL;

  picture->format = pix_fmt;
  picture->width  = width;
  picture->height = height;

  /* allocate the buffers for the frame data */
  ret = av_frame_get_buffer(picture, 0);
  if (ret < 0) {
    fprintf(stderr, "Could not allocate frame data.\n");
    exit(1);
  }

  return picture;
}

static void open_video(AVFormatContext *oc, const AVCodec *codec,
                       OutputStream *ost, AVDictionary *opt_arg)
{
  int ret;
  AVCodecContext *c = ost->enc;
  AVDictionary *opt = NULL;

  av_dict_copy(&opt, opt_arg, 0);

  /* open the codec */
  ret = avcodec_open2(c, codec, &opt);
  av_dict_free(&opt);
  if (ret < 0) {
    fprintf(stderr, "Could not open video codec\n");
    exit(1);
  }

  /* allocate and init a re-usable frame */
  ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
  if (!ost->frame) {
    fprintf(stderr, "Could not allocate video frame\n");
    exit(1);
  }

  /* If the output format is not YUV420P, then a temporary YUV420P
   * picture is needed too. It is then converted to the required
   * output format. */
  ost->tmp_frame = NULL;
  if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
    ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
    if (!ost->tmp_frame) {
      fprintf(stderr, "Could not allocate temporary picture\n");
      exit(1);
    }
  }

  /* copy the stream parameters to the muxer */
  ret = avcodec_parameters_from_context(ost->st->codecpar, c);
  if (ret < 0) {
    fprintf(stderr, "Could not copy the stream parameters\n");
    exit(1);
  }
}

/* Prepare a dummy image. */
static void fill_yuv_image(AVFrame *pict, int frame_index,
                           int width, int height)
{
  int x, y, i;

  i = frame_index;

  /* Y */
  for (y = 0; y < height; y++)
    for (x = 0; x < width; x++)
      pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;

  /* Cb and Cr */
  for (y = 0; y < height / 2; y++) {
    for (x = 0; x < width / 2; x++) {
      pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
      pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
    }
  }
}

static AVFrame *get_video_frame(OutputStream *ost)
{
  AVCodecContext *c = ost->enc;

  /* check if we want to generate more frames */
  if (av_compare_ts(ost->next_pts, c->time_base,
                    STREAM_DURATION, (AVRational){ 1, 1 }) > 0)
    return NULL;

  /* when we pass a frame to the encoder, it may keep a reference to it
   * internally; make sure we do not overwrite it here */
  if (av_frame_make_writable(ost->frame) < 0)
    exit(1);

  if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
    /* as we only generate a YUV420P picture, we must convert it
     * to the codec pixel format if needed */
    if (!ost->sws_ctx) {
      ost->sws_ctx = sws_getContext(c->width, c->height,
                                    AV_PIX_FMT_YUV420P,
                                    c->width, c->height,
                                    c->pix_fmt,
                                    SCALE_FLAGS, NULL, NULL, NULL);
      if (!ost->sws_ctx) {
        fprintf(stderr,
                "Could not initialize the conversion context\n");
        exit(1);
      }
    }
    fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);
    sws_scale(ost->sws_ctx, (const uint8_t * const *) ost->tmp_frame->data,
              ost->tmp_frame->linesize, 0, c->height, ost->frame->data,
              ost->frame->linesize);
  } else {
    fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);
  }

  ost->frame->pts = ost->next_pts++;

  return ost->frame;
}

/*
 * encode one video frame and send it to the muxer
 * return 1 when encoding is finished, 0 otherwise
 */
static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
{
  return write_frame(oc, ost->enc, ost->st, get_video_frame(ost), ost->tmp_pkt);
}

static void close_stream(AVFormatContext *oc, OutputStream *ost)
{
  avcodec_free_context(&ost->enc);
  av_frame_free(&ost->frame);
  av_frame_free(&ost->tmp_frame);
  av_packet_free(&ost->tmp_pkt);
  sws_freeContext(ost->sws_ctx);
  swr_free(&ost->swr_ctx);
}

/**************************************************************/
/* media file output */

int main(int argc, char **argv)
{
  OutputStream video_st = { 0 }, audio_st = { 0 };
  const AVOutputFormat *fmt;
  const char *filename;
  AVFormatContext *oc;
  const AVCodec *audio_codec, *video_codec;
  int ret;
  int have_video = 0, have_audio = 0;
  int encode_video = 0, encode_audio = 0;
  AVDictionary *opt = NULL;
  int i;

  if (argc < 2) {
    printf("usage: %s output_file\n"
           "API example program to output a media file with libavformat.\n"
           "This program generates a synthetic audio and video stream, encodes and\n"
           "muxes them into a file named output_file.\n"
           "The output format is automatically guessed according to the file extension.\n"
           "Raw images can also be output by using '%%d' in the filename.\n"
           "\n", argv[0]);
    return 1;
  }

  filename = argv[1];

  av_dict_set(&opt, "movflags", "frag_keyframe+separate_moof+omit_tfhd_offset+empty_moov", 0);

  /* allocate the output media context */
  avformat_alloc_output_context2(&oc, NULL, NULL, filename);
  if (!oc) {
    printf("Could not deduce output format from file extension: using MPEG.\n");
    avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
  }
  if (!oc)
    return 1;

  fmt = oc->oformat;

  /* Add the audio and video streams using the default format codecs
   * and initialize the codecs. */
  if (fmt->video_codec != AV_CODEC_ID_NONE) {
    add_stream(&video_st, oc, &video_codec, fmt->video_codec);
    have_video = 1;
    encode_video = 1;
  }

  /* Now that all the parameters are set, we can open the audio and
   * video codecs and allocate the necessary encode buffers. */
  if (have_video)
    open_video(oc, video_codec, &video_st, opt);


  av_dump_format(oc, 0, filename, 1);

  /* open the output file, if needed */
  if (!(fmt->flags & AVFMT_NOFILE)) {
    ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
    if (ret < 0) {
      fprintf(stderr, "Could not open '%s'\n", filename);
      return 1;
    }
  }

  /* Write the stream header, if any. */
  ret = avformat_write_header(oc, &opt);
  if (ret < 0) {
    fprintf(stderr, "Error occurred when opening output file\n");
    return 1;
  }

  while (encode_video || encode_audio) {
    /* select the stream to encode */
    if (encode_video &&
        (!encode_audio || av_compare_ts(video_st.next_pts, video_st.enc->time_base,
                                        audio_st.next_pts, audio_st.enc->time_base) <= 0)) {
      encode_video = !write_video_frame(oc, &video_st);
    }
  }

  av_write_trailer(oc);

  /* Close each codec. */
  if (have_video)
    close_stream(oc, &video_st);
  if (have_audio)
    close_stream(oc, &audio_st);

  if (!(fmt->flags & AVFMT_NOFILE))
    /* Close the output file. */
    avio_closep(&oc->pb);

  /* free the stream */
  avformat_free_context(oc);

  return 0;
}


  • How to convert Image RAW10 format to cv::Mat

    14 August 2022, by ZeusBios

    I'm trying to convert a RAW10 image to cv::Mat, but in the end I always get a gray square. The conversion I do looks like:


void ConvertToMat(unsigned char * buffer, unsigned int width, unsigned int height) {
    cv::Mat img;
    cv::Mat raw(height, width, CV_8UC1, buffer);
    raw.convertTo(img, CV_32FC1, 255.0); // -> don't work
    cv::imwrite("myimage.png", img);

    cv::cvtColor(raw, raw, cv::COLOR_BayerGR2RGBA); //-> also don't work
    cv::imwrite("myimage.png", raw);
}
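
    (Note that RAW10 data is normally 10-bit packed; for MIPI-style RAW10, every 4 pixels occupy 5 bytes, so interpreting the buffer directly as CV_8UC1 cannot give a meaningful picture. A rough sketch of unpacking to 16 bit before demosaicing, assuming MIPI packing, a GRBG Bayer layout and a width that is a multiple of 4 (all guesses about this particular sensor), could look like this.)

#include <opencv2/imgproc.hpp>
#include <cstdint>

// Hypothetical helper: unpack MIPI RAW10 (4 pixels in 5 bytes) into a 16-bit
// single-channel image, then demosaic to BGR. Packing and Bayer order are assumptions.
cv::Mat Raw10ToBgr(const uint8_t* buffer, int width, int height)
{
    cv::Mat raw16(height, width, CV_16UC1);
    const uint8_t* src = buffer;
    for (int y = 0; y < height; ++y) {
        uint16_t* dst = raw16.ptr<uint16_t>(y);
        for (int x = 0; x < width; x += 4, src += 5) {
            // 4 pixels: one byte of 8 MSBs each, then one byte holding the 4 pairs of LSBs
            dst[x + 0] = (src[0] << 2) | ((src[4] >> 0) & 0x3);
            dst[x + 1] = (src[1] << 2) | ((src[4] >> 2) & 0x3);
            dst[x + 2] = (src[2] << 2) | ((src[4] >> 4) & 0x3);
            dst[x + 3] = (src[3] << 2) | ((src[4] >> 6) & 0x3);
        }
    }
    cv::Mat bgr;
    cv::demosaicing(raw16, bgr, cv::COLOR_BayerGR2BGR);  // Bayer order is a guess
    bgr.convertTo(bgr, CV_8UC3, 255.0 / 1023.0);         // map the 10-bit range to 8 bit
    return bgr;
}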


  • FFmpeg saturates memory + CPU usage drops to 0% during very basic conversion of PNG files to MP4 video

    7 August 2022, by mattze_frisch

    I have this Python function that runs ffmpeg with minimal options from the Windows command line:


def run_ffmpeg(frames_path, ffmpeg_path=notebook_directory):
    '''
    This function runs ffmpeg.exe to convert PNG image files into a MP4 video.

    Parameters
    ----------
    frames_path : string
        Absolute path to the PNG files
    ffmpeg_path : string
        Absolute path to the FFmpeg executable (ffmpeg.exe)
    '''

    from subprocess import check_call


    check_call(
        [
            os.path.join(ffmpeg_path, 'ffmpeg'),
            '-y',    # Overwrite output files without asking
            '-report',    # Write logfile to current working directory
            '-framerate', '60',    # Input frame rate
            '-i', os.path.join(frames_path, 'frame%05d.png'),    # Path to input frames
            os.path.join(frames_path, 'video.mp4')    # Path to store output video
        ]
    )


    When running it from a Jupyter notebook over 2500 PNG files (RGBA, ca. 600-700 kB each, 9000 x 13934 pixels), CPU usage briefly peaks to 100% before dropping to 0%, while memory usage quickly saturates to 100% and stays there, slowing the system down almost to a freeze, so I need to terminate ffmpeg from the task manager:


    Screenshot


    The generated video file has a size of only 48 bytes and contains just a black frame when viewed in the VLC player.


    This is the ffmpeg log output:


ffmpeg started on 2022-08-05 at 17:17:55
Report written to "ffmpeg-20220805-171755.log"
Log level: 48
Command line:
"C:\\Users\\Username\\Desktop\\folder\\ffmpeg" -y -report -framerate 60 -i "C:\\Users\\Username\\Desktop\\e\\frame%05d.png" "C:\\Users\\Username\\Desktop\\e\\video.mp4"
ffmpeg version 2022-07-14-git-882aac99d2-full_build-www.gyan.dev Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 12.1.0 (Rev2, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libshaderc --enable-vulkan --enable-libplacebo --ena
  libavutil      57. 29.100 / 57. 29.100
  libavcodec     59. 38.100 / 59. 38.100
  libavformat    59. 28.100 / 59. 28.100
  libavdevice    59.  8.100 / 59.  8.100
  libavfilter     8. 45.100 /  8. 45.100
  libswscale      6.  8.100 /  6.  8.100
  libswresample   4.  8.100 /  4.  8.100
  libpostproc    56.  7.100 / 56.  7.100
Splitting the commandline.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
Reading option '-framerate' ... matched as AVOption 'framerate' with argument '60'.
Reading option '-i' ... matched as input url with argument 'C:\Users\Username\Desktop\e\frame%05d.png'.
Reading option 'C:\Users\Username\Desktop\e\video.mp4' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option y (overwrite output files) with argument 1.
Applying option report (generate a report) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input url C:\Users\Username\Desktop\e\frame%05d.png.
Successfully parsed a group of options.
Opening an input file: C:\Users\Username\Desktop\e\frame%05d.png.
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00000.png' for reading
[file @ 0000000000425680] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042d800] Statistics: 668318 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00001.png' for reading
[file @ 000000000042dac0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042d6c0] Statistics: 668371 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00002.png' for reading
[file @ 000000000042d6c0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042dac0] Statistics: 669177 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00003.png' for reading
[file @ 000000000042dac0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 0000000000437a40] Statistics: 684594 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00004.png' for reading
[file @ 0000000000437a40] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 0000000000437c00] Statistics: 703014 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00005.png' for reading
[file @ 0000000000437c00] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 0000000000437d00] Statistics: 721604 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00006.png' for reading
[file @ 0000000000437cc0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 0000000000437f40] Statistics: 739761 bytes read, 0 seeks
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00007.png' for reading
[file @ 0000000000437f40] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 0000000000438040] Statistics: 757327 bytes read, 0 seeks
[image2 @ 000000000041ff80] Probe buffer size limit of 5000000 bytes reached
Input #0, image2, from 'C:\Users\Username\Desktop\e\frame%05d.png':
  Duration: 00:00:41.67, start: 0.000000, bitrate: N/A
  Stream #0:0, 8, 1/60: Video: png, rgba(pc), 9000x13934 [SAR 29528:29528 DAR 4500:6967], 60 fps, 60 tbr, 60 tbn
Successfully opened the file.
Parsing a group of options: output url C:\Users\Username\Desktop\e\video.mp4.
Successfully parsed a group of options.
Opening an output file: C:\Users\Username\Desktop\e\video.mp4.
[file @ 000000002081e3c0] Setting default whitelist 'file,crypto,data'
Successfully opened the file.
detected 12 logical cores
Stream mapping:
  Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00008.png' for reading
[file @ 00000000024ad980] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 00000000004379c0] Statistics: 767857 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00009.png' for reading
[file @ 000000000042d600] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 00000000004379c0] Statistics: 774848 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00010.png' for reading
[file @ 00000000004379c0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042da00] Statistics: 787178 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00011.png' for reading
[file @ 00000000004379c0] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042da00] Statistics: 797084 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00012.png' for reading
[file @ 0000000000437a80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000042da00] Statistics: 802870 bytes read, 0 seeks
[graph 0 input from stream 0:0 @ 00000000208bf800] Setting 'video_size' to value '9000x13934'
[graph 0 input from stream 0:0 @ 00000000208bf800] Setting 'pix_fmt' to value '26'
[graph 0 input from stream 0:0 @ 00000000208bf800] Setting 'time_base' to value '1/60'
[graph 0 input from stream 0:0 @ 00000000208bf800] Setting 'pixel_aspect' to value '29528/29528'
[graph 0 input from stream 0:0 @ 00000000208bf800] Setting 'frame_rate' to value '60/1'
[graph 0 input from stream 0:0 @ 00000000208bf800] w:9000 h:13934 pixfmt:rgba tb:1/60 fr:60/1 sar:29528/29528
[format @ 00000000025ef840] Setting 'pix_fmts' to value 'yuv420p|yuvj420p|yuv422p|yuvj422p|yuv444p|yuvj444p|nv12|nv16|nv21|yuv420p10le|yuv422p10le|yuv444p10le|nv20le|gray|gray10le'
[auto_scale_0 @ 00000000025efe40] w:iw h:ih flags:'' interl:0
[format @ 00000000025ef840] auto-inserting filter 'auto_scale_0' between the filter 'Parsed_null_0' and the filter 'format'
[AVFilterGraph @ 000000000042da00] query_formats: 4 queried, 3 merged, 1 already done, 0 delayed
[auto_scale_0 @ 00000000025efe40] picking yuv444p out of 13 ref:rgba alpha:1
[auto_scale_0 @ 00000000025efe40] w:9000 h:13934 fmt:rgba sar:29528/29528 -> w:9000 h:13934 fmt:yuv444p sar:1/1 flags:0x0
[auto_scale_0 @ 00000000025efe40] w:9000 h:13934 fmt:rgba sar:29528/29528 -> w:9000 h:13934 fmt:yuv444p sar:1/1 flags:0x0
[auto_scale_0 @ 00000000025efe40] w:9000 h:13934 fmt:rgba sar:29528/29528 -> w:9000 h:13934 fmt:yuv444p sar:1/1 flags:0x0
[auto_scale_0 @ 00000000025efe40] w:9000 h:13934 fmt:rgba sar:29528/29528 -> w:9000 h:13934 fmt:yuv444p sar:1/1 flags:0x0
[libx264 @ 000000002081d280] using mv_range_thread = 376
[libx264 @ 000000002081d280] using SAR=1/1
[libx264 @ 000000002081d280] frame MB size (563x871) > level limit (139264)
[libx264 @ 000000002081d280] DPB size (4 frames, 1961492 mbs) > level limit (1 frames, 696320 mbs)
[libx264 @ 000000002081d280] MB rate (29422380) > level limit (16711680)
[libx264 @ 000000002081d280] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 000000002081d280] profile High 4:4:4 Predictive, level 6.2, 4:4:4, 8-bit
[libx264 @ 000000002081d280] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=18 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'C:\Users\Username\Desktop\e\video.mp4':
  Metadata:
    encoder         : Lavf59.28.100
  Stream #0:0, 0, 1/15360: Video: h264 (avc1 / 0x31637661), yuv444p(tv, progressive), 9000x13934 [SAR 1:1 DAR 4500:6967], q=2-31, 60 fps, 15360 tbn
    Metadata:
      encoder         : Lavc59.38.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
Clipping frame in rate conversion by 0.000008
frame=    1 fps=0.8 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00013.png' for reading
[file @ 000000000a6a2180] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 810395 bytes read, 0 seeks
frame=    2 fps=0.8 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00014.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 818213 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00015.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 817936 bytes read, 0 seeks
frame=    4 fps=1.2 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00016.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 817014 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00017.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 828088 bytes read, 0 seeks
frame=    6 fps=1.5 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00018.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 831007 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00019.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 845203 bytes read, 0 seeks
frame=    8 fps=1.7 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00020.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 851548 bytes read, 0 seeks
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00021.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 847629 bytes read, 0 seeks
frame=   10 fps=1.8 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00022.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 860169 bytes read, 0 seeks
frame=   11 fps=1.4 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00023.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 857243 bytes read, 0 seeks
frame=   12 fps=1.2 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
cur_dts is invalid st:0 (0) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
[image2 @ 000000000041ff80] Opening 'C:\Users\Username\Desktop\e\frame00024.png' for reading
[file @ 000000001ec86c80] Setting default whitelist 'file,crypto,data'
[AVIOContext @ 000000000b38de80] Statistics: 835155 bytes read, 0 seeks


    What is the problem?


    By the way, the color model of the image files was confirmed by doing


from PIL import Image


img = Image.open('C:\\Users\\EPI-SMLM\\Desktop\\e\\frame00000.png')
img.mode
-------------------------------------------------------------------
C:\Program Files\Python38\lib\site-packages\PIL\Image.py:3035: DecompressionBombWarning: Image size (125406000 pixels) exceeds limit of 89478485 pixels, could be decompression bomb DOS attack.
  warnings.warn(

'RGBA'


    The "decompression bomb warning" appears to be a false alarm/bug.


    UPDATE: I can confirm that this happens even when there are only 50 image files, i.e. 50 x 700 kB = 35 MB in total size. ffmpeg still gobbles up all available memory (almost 60 GB of private bytes!!!).


    And it also happens if ffmpeg is run from the command line.


    This must be a bug!
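
    If it is not simply a bug, one possible mitigation suggested by the log above (libx264 auto-picks yuv444p at 9000x13934 with rc_lookahead=40 and 18 threads, so the encoder's working set alone is huge) would be to shrink the per-frame footprint, for example by downscaling, forcing yuv420p and reducing the lookahead; an untested variant of the command:

    ffmpeg -y -framerate 60 -i frame%05d.png -vf scale=iw/4:-2 -pix_fmt yuv420p -x264-params rc-lookahead=10 -threads 4 video.mp4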
