
Other articles (22)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.3. What is new?
    Software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customizing by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP site, or news about your projects, using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News creation form: for a document of type "news", the fields offered by default are: publication date (customize the publication date) (...)

On other sites (3214)

  • MP3 files created using FFmpeg do not start playback in the browser immediately. Is there any major difference between FFmpeg and avconv?

    23 January 2019, by AR5

    I am working on a website that streams music. We recently moved from a Debian server (with avconv) to a CentOS 7 server (with FFmpeg).
    The MP3 files created on the Debian server start playing in the browser (I have tested Chrome and Firefox) almost as soon as they begin loading (I used the Network tab in Developer Tools to monitor this).

    Now, after the switch to the CentOS/FFmpeg server, the files created on the new server behave strangely: they only start playing after about 1 MB has loaded in the browser.

    I have used identical settings to convert the original file to MP3 with both avconv and FFmpeg, but the files created with FFmpeg show this issue. Is there something that might be causing it? Are there differences in audio conversion between avconv and FFmpeg?

    What I have already tried

    I first found that the files created on the old server (Debian/avconv) were VBR (variable bitrate) and the ones created on the new server were CBR (constant bitrate), so I tried switching to VBR, but the issue persisted.
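
    For reference, a minimal sketch of the two encoding modes with libmp3lame (the file names here are placeholders, not the ones from the question):

    # CBR at 128 kb/s, as in the command further below
    ffmpeg -i input.mp3 -ac 2 -ar 44100 -acodec libmp3lame -b:a 128k output_cbr.mp3
    # VBR at LAME quality level 5 (roughly 120-150 kb/s on average)
    ffmpeg -i input.mp3 -ac 2 -ar 44100 -acodec libmp3lame -q:a 5 output_vbr.mp3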

    I checked the MP3 files with the MediaInfo app and there seems to be no difference between them.

    I also checked whether both files were being served as 206 Partial Content, and they both are.
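
    One quick way to double-check that behavior is a range request; the URL below is only a placeholder, and a 206 response with a Content-Range header is what a correctly configured server should return:

    curl -s -o /dev/null -D - -H "Range: bytes=0-1023" https://example.com/track.mp3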

    I am trying to create MP3 files with FFmpeg that behave exactly like the ones previously created with avconv.

    I am trying to make the streaming site work on the new server, but the MP3 files created with FFmpeg do not play back correctly compared to the ones created on the old server. I am trying to figure out what I might be doing wrong, or whether there is a difference between avconv and FFmpeg that causes this.

    I am really stuck on this issue; any help would be really appreciated.


    Edit

    I don't have access to the old server anymore, so I couldn't retrieve the avconv log output. The command I was using was as follows:

    avconv -y -i "/test/Track 01.mp3" -ac 2 -ar 44100 -acodec libmp3lame -b:a 128k "/test/Track 01 (converted).mp3"

    Here is the command and log output from the new server:

    ffmpeg -y -i "/test/Track 01.mp3" -ac 2 -ar 44100 -acodec libmp3lame -b:a 128k "/test/Track 01 (converted).mp3"
    ffmpeg version 2.8.15 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-28)
     configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --extra-ldflags='-Wl,-z,relro ' --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-amrwbenc --enable-version3 --enable-bzlib --disable-crystalhd --enable-gnutls --enable-ladspa --enable-libass --enable-libcdio --enable-libdc1394 --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libx264 --enable-libx265 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
     libavutil      54. 31.100 / 54. 31.100
     libavcodec     56. 60.100 / 56. 60.100
     libavformat    56. 40.101 / 56. 40.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 40.101 /  5. 40.101
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  2.101 /  1.  2.101
     libpostproc    53.  3.100 / 53.  3.100
    [mp3 @ 0xd60be0] Skipping 0 bytes of junk at 240044.
    Input #0, mp3, from '/test/Track 01.mp3':
     Metadata:
       album           : Future Hndrxx Presents: The WIZRD
       artist          : Future
       genre           : Hip-Hop
       title           : Never Stop
       track           : 1
       lyrics-eng      : rgf.is
       WEB SITE        : rgf.is
       TAGGINGTIME     : rgf.is
       WEB             : rgf.is
       date            : 2019
       encoder         : Lavf56.40.101
     Duration: 00:04:51.40, start: 0.025056, bitrate: 121 kb/s
       Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 114 kb/s
       Metadata:
         encoder         : Lavc56.60
       Stream #0:1: Video: png, rgb24(pc), 333x333 [SAR 1:1 DAR 1:1], 90k tbr, 90k tbn, 90k tbc
       Metadata:
         comment         : Cover (front)
    [mp3 @ 0xd66ec0] Frame rate very high for a muxer not efficiently supporting it.
    Please consider specifying a lower framerate, a different muxer or -vsync 2
    Output #0, mp3, to '/test/Track 01 (converted).mp3':
     Metadata:
       TALB            : Future Hndrxx Presents: The WIZRD
       TPE1            : Future
       TCON            : Hip-Hop
       TIT2            : Never Stop
       TRCK            : 1
       lyrics-eng      : rgf.is
       WEB SITE        : rgf.is
       TAGGINGTIME     : rgf.is
       WEB             : rgf.is
       TDRC            : 2019
       TSSE            : Lavf56.40.101
       Stream #0:0: Video: png, rgb24, 333x333 [SAR 1:1 DAR 1:1], q=2-31, 200 kb/s, 90k fps, 90k tbn, 90k tbc
       Metadata:
         comment         : Cover (front)
         encoder         : Lavc56.60.100 png
       Stream #0:1: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s16p, 128 kb/s
       Metadata:
         encoder         : Lavc56.60.100 libmp3lame
    Stream mapping:
     Stream #0:1 -> #0:0 (png (native) -> png (native))
     Stream #0:0 -> #0:1 (mp3 (native) -> mp3 (libmp3lame))
    Press [q] to stop, [?] for help
    [libmp3lame @ 0xd9b0c0] Trying to remove 1152 samples, but the queue is emptys/s
    frame=    1 fps=0.1 q=-0.0 Lsize=    4788kB time=00:04:51.39 bitrate= 134.6kbits/s
    video:234kB audio:4553kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.014809%
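
    One detail visible in the log above is that the embedded cover art (the PNG stream) ends up as the first output stream (Stream #0:0). Assuming, without confirmation here, that the picture data written near the head of the file is what delays playback, a variant worth trying keeps only the audio stream:

    ffmpeg -y -i "/test/Track 01.mp3" -map 0:a -ac 2 -ar 44100 -acodec libmp3lame -b:a 128k "/test/Track 01 (converted).mp3"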

    Samples of MP3 files

    I have uploaded samples of mp3 files created using both avconv and FFmpeg. Please find these here : https://drive.google.com/drive/folders/1gRTmMM2iSK0VWQ4Zaf_iBNQe5laFJl08?usp=sharing

  • Video concatenation puts sound out of sync

    9 August 2019, by mmorin

    (Cross-posted from Video Production, where the question received no answers; it may be more technical than usual for that site.)

    I have several MOV files from a DSLR camera. I concatenate them following the directions from this thread:

    ffmpeg -safe 0 -f concat -i files_to_combine -vcodec copy -acodec copy temp.MOV

    where files_to_combine is:

    file ./DSC_0013.MOV
    ...
    file ./DSC_0019.MOV

    In the result, image and sound are in sync for the first clip, out of sync by a fraction of a second in the second clip, and out of sync by around a second in the last clip. It is probably related to this error from the log:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] st: 0 edit list: 1 Missing key frame while searching for timestamp: 1000
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] st: 0 edit list 1 Cannot find an index entry before timestamp: 1000.
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] Auto-inserting h264_mp4toannexb bitstream filter

    How can I trim the frames to the available sound stream, then concatenate the videos?
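
    One possible approach, sketched here under the assumption that the drift comes from per-clip mismatches between audio and video stream lengths, is to keep copying the video but re-encode the audio with aresample compensating for gaps, so the concatenated audio gets continuous timestamps:

    ffmpeg -safe 0 -f concat -i files_to_combine -vcodec copy -af aresample=async=1:first_pts=0 -acodec pcm_s16le temp.MOV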

    The full log from the ffmpeg command is:

    ffmpeg version 4.1.3 Copyright (c) 2000-2019 the FFmpeg developers
     built with Apple LLVM version 10.0.1 (clang-1001.0.46.4)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1.3_1 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-11.0.2.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-11.0.2.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-videotoolbox --disable-libjack --disable-indev=jack --enable-libaom --enable-libsoxr
     libavutil      56. 22.100 / 56. 22.100
     libavcodec     58. 35.100 / 58. 35.100
     libavformat    58. 20.100 / 58. 20.100
     libavdevice    58.  5.100 / 58.  5.100
     libavfilter     7. 40.101 /  7. 40.101
     libavresample   4.  0.  0 /  4.  0.  0
     libswscale      5.  3.100 /  5.  3.100
     libswresample   3.  3.100 /  3.  3.100
     libpostproc    55.  3.100 / 55.  3.100
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dc00e000] Auto-inserting h264_mp4toannexb bitstream filter
    Input #0, concat, from 'files_to_combine':
     Duration: N/A, start: -0.592000, bitrate: 36888 kb/s
       Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc, smpte170m/bt709/bt470m), 1920x1080, 35352 kb/s, 50 fps, 50 tbr, 50k tbn, 100 tbc
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, stereo, s16, 1536 kb/s
       Metadata:
         handler_name    : SoundHandler
    Output #0, mov, to 'temp.MOV':
     Metadata:
       encoder         : Lavf58.20.100
       Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc, smpte170m/bt709/bt470m), 1920x1080, q=2-31, 35352 kb/s, 50 fps, 50 tbr, 50k tbn, 50k tbc
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, stereo, s16, 1536 kb/s
       Metadata:
         handler_name    : SoundHandler
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
     Stream #0:1 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] st: 0 edit list: 1 Missing key frame while searching for timestamp: 1000
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] st: 0 edit list 1 Cannot find an index entry before timestamp: 1000.
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f82dd802200] Auto-inserting h264_mp4toannexb bitstream filter
    frame=41886 fps=547 q=-1.0 Lsize= 3789826kB time=00:13:58.75 bitrate=37014.8kbits/s speed=10.9x    
    video:3631879kB audio:157123kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.021759%

    Update (1 July 2019)

    I thought that the files had a problem at the beginning or at the end, so I trimmed one second from each end, but the sound was still out of sync:

    FILES=files_to_combine
    OUTPUT=show2.MOV
    rm $FILES
    for i in 3 4 5 6 7 8 9; do
       rm ${i}.MOV
       duration=$(ffprobe -v 0 -show_entries format=duration -of compact=p=0:nk=1  DSC_001${i}.MOV)
       trimmed=$(echo $duration - 1 | bc)
       ffmpeg -ss 1 -t $trimmed -i DSC_001${i}.MOV -vcodec copy -acodec copy ${i}.MOV
       echo file ./${i}.MOV >> $FILES
    done

    rm $OUTPUT
    ffmpeg -safe 0 -f concat -i $FILES -vcodec copy -acodec copy $OUTPUT

    When I trim a single file near the end, the sound and video do not seem out of sync:

    ffmpeg -ss 00:09:20 -t 20 -i DSC_0014.MOV -vcodec copy -acodec copy end.MOV

    When I concatenate only 30 seconds from each video, the result seems OK:

    FILES=files_to_combine
    OUTPUT=show2.MOV
    rm $FILES
    for i in 3 4 5 6 7 8 9; do
       rm ${i}.MOV
       duration=$(ffprobe -v 0 -show_entries format=duration -of compact=p=0:nk=1  DSC_001${i}.MOV)
       start=$(echo $duration - 30 | bc)
       end=$(echo $duration - 1 | bc)
       ffmpeg -ss $start -t $end -i DSC_001${i}.MOV -vcodec copy -acodec copy ${i}.MOV
       echo file ./${i}.MOV >> $FILES
    done

    rm $OUTPUT
    ffmpeg -safe 0 -f concat -i $FILES -vcodec copy -acodec copy $OUTPUT

    This last concatenation gives this error multiple times:

    [mov @ 0x7fc3c7837400] Non-monotonous DTS in output stream 0:0; previous: 9080205, current: 9080200; changing to 9080206. This may result in incorrect timestamps in the output file.

    So I am guessing that the problem is small differences in timestamps that
    accumulate and become more noticeable with longer durations and the
    concatenation of multiple files.
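
    A way to check that guess (using one of the question's files as an example; the actual per-clip drift is unknown) is to compare the audio and video stream durations of each clip, since any per-clip difference accumulates at every concatenation point:

    ffprobe -v error -select_streams v:0 -show_entries stream=duration -of csv=p=0 DSC_0013.MOV
    ffprobe -v error -select_streams a:0 -show_entries stream=duration -of csv=p=0 DSC_0013.MOV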

    For reference, the DSLR that shot these clips is a Nikon D3300, and the result of ffprobe on one of the files is:

    $ ffprobe DSC_0017.MOV -hide_banner
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fab70003800] st: 0 edit list: 1 Missing key frame while searching for timestamp: 1000
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fab70003800] st: 0 edit list 1 Cannot find an index entry before timestamp: 1000.
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'DSC_0017.MOV':
     Metadata:
       major_brand     : qt  
       minor_version   : 537331968
       compatible_brands: qt  niko
       creation_time   : 2019-06-12T23:52:37.000000Z
     Duration: 00:09:53.58, start: 0.000000, bitrate: 36843 kb/s
       Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc, smpte170m/bt709/bt470m), 1920x1080, 35300 kb/s, 50 fps, 50 tbr, 50k tbn, 100 tbc (default)
       Metadata:
         creation_time   : 2019-06-12T23:52:37.000000Z
       Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, 2 channels, s16, 1536 kb/s (default)
       Metadata:
         creation_time   : 2019-06-12T23:52:37.000000Z

    Update (9 August 2019)

    I concatenated the files in iMovie and the sound and image are not as far out of sync as with FFmpeg. Maybe iMovie aligns the timestamps at the end of each clip instead of concatenating the audio and image streams separately.

    I ran the concatenation again with the latest ffmpeg 4.1.4_1 on these files and on others from the same camera. The audio and image are in sync in one case (the result lasts 46 minutes) and out of sync in another (the result lasts 48 minutes).

  • ffmpeg avcodec_send_packet/avcodec_receive_frame memory leak

    23 January 2019, by G Hamlin

    I'm attempting to decode frames, but memory usage grows with every frame (more specifically, with every call to avcodec_send_packet) until finally the code crashes with a bad_alloc. Here's the basic decode loop:

    int rfret = 0;
    while((rfret = av_read_frame(inctx.get(), &packet)) >= 0){
       if (packet.stream_index == vstrm_idx) {

           //std::cout << "Sending Packet" << std::endl;
           int ret = avcodec_send_packet(ctx.get(), &packet);
           if (ret < 0 || ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
               std::cout << "avcodec_send_packet: " << ret << std::endl;
               break;
           }

           while (ret  >= 0) {
               //std::cout << "Receiving Frame" << std::endl;
               ret = avcodec_receive_frame(ctx.get(), fr);
               if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                   //std::cout << "avcodec_receive_frame: " << ret << std::endl;
                   av_frame_unref(fr);
                   // av_frame_free(&fr);
                   break;
               }

               std::cout << "frame: " << ctx->frame_number << std::endl;

               // eventually do something with the frame here...

               av_frame_unref(fr);
               // av_frame_free(&fr);
           }
       }
       else {
           //std::cout << "Not Video" << std::endl;
       }
       av_packet_unref(&packet);
    }

    Memory usage/leakage seems to scale with the resolution of the video I'm decoding. For example, for a 3840x2160 video, the memory usage shown in the Windows Task Manager consistently jumps up by about 8 MB (roughly one byte per pixel?) for each received frame. Do I need to do something besides calling av_frame_unref to release the memory?
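
    For what it is worth, a minimal sketch of the frame lifecycle the avcodec API expects: av_frame_unref after every received frame (as the loop already does), a null packet to drain the decoder once av_read_frame reaches end of file, and a single av_frame_free at the very end; the codec and format contexts in the code below are already released by their unique_ptr deleters. The helper name is made up for illustration:

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>
    }

    // Hypothetical helper, called once after av_read_frame() returns EOF.
    static void drainDecoder(AVCodecContext* ctx, AVFrame** fr)
    {
        avcodec_send_packet(ctx, nullptr);           // a null packet puts the decoder in draining mode
        while (avcodec_receive_frame(ctx, *fr) >= 0) {
            // ... use the remaining buffered frames here ...
            av_frame_unref(*fr);                     // releases the frame's data buffers
        }
        av_frame_free(fr);                           // frees the AVFrame wrapper itself, once
    }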

    (more) complete code below


    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/pixdesc.h>
    #include <libavutil/mathematics.h>
    }
    #include <iostream>
    #include <memory>
    #include <string>

    void AVFormatContextDeleter(AVFormatContext* ptr)
    {
       if (ptr) {
           avformat_close_input(&ptr);
       }
    }

    void AVCodecContextDeleter(AVCodecContext* ptr)
    {
       if (ptr) {
           avcodec_free_context(&ptr);
       }
    }

    typedef std::unique_ptr<AVFormatContext, void (*)(AVFormatContext*)> AVFormatContextPtr;
    typedef std::unique_ptr<AVCodecContext, void (*)(AVCodecContext*)> AVCodecContextPtr;

    AVCodecContextPtr createAvCodecContext(AVCodec *vcodec)
    {
       AVCodecContextPtr ctx(avcodec_alloc_context3(vcodec), AVCodecContextDeleter);
       return ctx;
    }

    AVFormatContextPtr createFormatContext(const std::string& filename)
    {
       AVFormatContext* inctxPtr = nullptr;
       int ret = avformat_open_input(&inctxPtr, filename.c_str(), nullptr, nullptr);
       //    int ret = avformat_open_input(&inctx, "D:/Videos/test.mp4", nullptr, nullptr);
       if (ret != 0) {
           inctxPtr = nullptr;
       }

       return AVFormatContextPtr(inctxPtr, AVFormatContextDeleter);
    }

    int testDecode()
    {
       // open input file context
       AVFormatContextPtr inctx = createFormatContext("D:/Videos/Matt Chapman Hi Greg.MOV");

       if (!inctx) {
           // std::cerr << "fail to avforamt_open_input(\"" << infile << "\"): ret=" << ret;
           return 1;
       }

       // retrieve input stream information
       int ret = avformat_find_stream_info(inctx.get(), nullptr);
       if (ret < 0) {
           //std::cerr << "fail to avformat_find_stream_info: ret=" << ret;
           return 2;
       }

       // find primary video stream
       AVCodec* vcodec = nullptr;
       const int vstrm_idx = av_find_best_stream(inctx.get(), AVMEDIA_TYPE_VIDEO, -1, -1, &vcodec, 0);
       if (vstrm_idx < 0) {
           //std::cerr << "fail to av_find_best_stream: vstrm_idx=" << vstrm_idx;
           return 3;
       }

       AVCodecParameters* origin_par = inctx->streams[vstrm_idx]->codecpar;
       if (vcodec == nullptr) {  // is this even necessary?
           vcodec = avcodec_find_decoder(origin_par->codec_id);
           if (!vcodec) {
               // Can't find decoder
               return 4;
           }
       }

       AVCodecContextPtr ctx = createAvCodecContext(vcodec);
       if (!ctx) {
           return 5;
       }

       ret = avcodec_parameters_to_context(ctx.get(), origin_par);
       if (ret) {
           return 6;
       }

       ret = avcodec_open2(ctx.get(), vcodec, nullptr);
       if (ret < 0) {
           return 7;
       }

       //print input video stream informataion
       std::cout
               //<< "infile: " << infile << "\n"
               << "format: " << inctx->iformat->name << "\n"
               << "vcodec: " << vcodec->name << "\n"
               << "size:   " << origin_par->width << 'x' << origin_par->height << "\n"
               << "fps:    " << av_q2d(ctx->framerate) << " [fps]\n"
               << "length: " << av_rescale_q(inctx->duration, ctx->time_base, {1,1000}) / 1000. << " [sec]\n"
               << "pixfmt: " << av_get_pix_fmt_name(ctx->pix_fmt) << "\n"
               << "frame:  " << inctx->streams[vstrm_idx]->nb_frames << "\n"
               << std::flush;


       AVPacket packet;

       av_init_packet(&packet);
       packet.data = nullptr;
       packet.size = 0;

       AVFrame *fr = av_frame_alloc();
       if (!fr) {
           return 8;
       }

       int rfret = 0;
       while((rfret = av_read_frame(inctx.get(), &packet)) >= 0){
           if (packet.stream_index == vstrm_idx) {

               //std::cout << "Sending Packet" << std::endl;
               int ret = avcodec_send_packet(ctx.get(), &packet);
               if (ret < 0 || ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                   std::cout << "avcodec_send_packet: " << ret << std::endl;
                   break;
               }

               while (ret  >= 0) {
                   //std::cout << "Receiving Frame" << std::endl;
                   ret = avcodec_receive_frame(ctx.get(), fr);
                   if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
                       //std::cout << "avcodec_receive_frame: " << ret << std::endl;
                       av_frame_unref(fr);
                       // av_frame_free(&fr);
                       break;
                   }

                   std::cout << "frame: " << ctx->frame_number << std::endl;

                   // do something with the frame here...

                   av_frame_unref(fr);
                   // av_frame_free(&fr);
               }
           }
           else {
               //std::cout << "Not Video" << std::endl;
           }
           av_packet_unref(&packet);
       }

       std::cout << "RFRET = " << rfret << std::endl;

       return 0;
    }

    Update 1 (1/21/2019): Compiling on a different machine and running with different video files, I am not seeing memory usage grow without bound. I'll try to narrow down where the difference lies (the compiler? the FFmpeg version? the video encoding?).

    Update 2 (1/21/2019): OK, it looks like there is some interaction between FFmpeg and Qt's QCamera. In my application I use Qt to manage the webcam, but decided to use the FFmpeg libraries to handle decoding/encoding, since Qt does not have as comprehensive support for different codecs. If the camera is turned on (through Qt), FFmpeg decoding memory consumption grows without bound; if the camera is off, FFmpeg behaves fine. I've tried this both with a physical camera (a Logitech C920) and with a virtual camera using OBS-Virtualcam, with the same result. So far I'm baffled as to how the two systems are interacting...