
Other articles (103)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several plugins in addition to those of the channels in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • What is an editorial?

    21 June 2013

    Write your point of view in an article. It will be filed in a section set aside for this purpose.
    An editorial is a text-only article. Its purpose is to gather points of view in a dedicated section. Only one editorial is featured on the home page; to read the previous ones, browse the dedicated section.
    You can customise the form used to create an editorial.
    Editorial creation form: for a document of the editorial type, the (...)

  • Enabling visitor registration

    12 April 2011

    It is also possible to enable visitor registration, which lets anyone open an account on the channel in question by themselves, for example in the context of open projects.
    To do so, go to the site configuration area and choose the "Gestion des utilisateurs" (user management) sub-menu. The first form shown corresponds to this feature.
    By default, when it was initialised, MediaSPIP created a menu item in the top-of-page menu leading to (...)

On other sites (11604)

  • Capture and encode desktop with libav in real time not giving correct images

    3 September 2022, by thoxey

    As part of a larger project I want to be able to capture and encode the desktop frame by frame in real time. I have the following test code to reproduce the issue shown in the screenshot:

    


    #include 
    #include 
    #include <iostream>
    #include <fstream>
    #include <string>
    #include 
    #include 

    extern "C"
    {
    #include "libavdevice/avdevice.h"
    #include "libavutil/channel_layout.h"
    #include "libavutil/mathematics.h"
    #include "libavutil/opt.h"
    #include "libavformat/avformat.h"
    #include "libswscale/swscale.h"
    }


    /* 5 seconds stream duration */
    #define STREAM_DURATION   5.0
    #define STREAM_FRAME_RATE 25 /* 25 images/s */
    #define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
    #define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */

    int videoStreamIndx;
    int framerate = 30;

    int width = 1920;
    int height = 1080;

    int encPacketCounter;

    AVFormatContext* ifmtCtx;
    AVCodecContext* avcodecContx;
    AVFormatContext* ofmtCtx;
    AVStream* videoStream;
    AVCodecContext* avCntxOut;
    AVPacket* avPkt;
    AVFrame* avFrame;
    AVFrame* outFrame;
    SwsContext* swsCtx;

    std::ofstream fs;


    AVDictionary* ConfigureScreenCapture()
    {
        AVDictionary* options = NULL;
        //Try adding "-rtbufsize 100M" as in https://stackoverflow.com/questions/6766333/capture-windows-screen-with-ffmpeg
        av_dict_set(&options, "rtbufsize", "100M", 0);
        av_dict_set(&options, "framerate", std::to_string(framerate).c_str(), 0);
        char buffer[16];
        sprintf(buffer, "%dx%d", width, height);
        av_dict_set(&options, "video_size", buffer, 0);
        return options;
    }

    AVCodecParameters* ConfigureAvCodec()
    {
        AVCodecParameters* av_codec_par_out = avcodec_parameters_alloc();
        av_codec_par_out->width = width;
        av_codec_par_out->height = height;
        av_codec_par_out->bit_rate = 40000;
        av_codec_par_out->codec_id = AV_CODEC_ID_H264; //AV_CODEC_ID_MPEG4; //Try H.264 instead of MPEG4
        av_codec_par_out->codec_type = AVMEDIA_TYPE_VIDEO;
        av_codec_par_out->format = 0;
        return av_codec_par_out;
    }

    int GetVideoStreamIndex()
    {
        int VideoStreamIndx = -1;
        avformat_find_stream_info(ifmtCtx, NULL);
        /* find the first video stream index.
           Also there is an API available to do the below operations */
        for (int i = 0; i < (int)ifmtCtx->nb_streams; i++) // find video stream position/index.
        {
            if (ifmtCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
            {
                VideoStreamIndx = i;
                break;
            }
        }

        if (VideoStreamIndx == -1)
        {
        }

        return VideoStreamIndx;
    }

    void CreateFrames(AVCodecParameters* av_codec_par_in, AVCodecParameters* av_codec_par_out)
    {
        avFrame = av_frame_alloc();
        avFrame->width = avcodecContx->width;
        avFrame->height = avcodecContx->height;
        avFrame->format = av_codec_par_in->format;
        av_frame_get_buffer(avFrame, 0);

        outFrame = av_frame_alloc();
        outFrame->width = avCntxOut->width;
        outFrame->height = avCntxOut->height;
        outFrame->format = av_codec_par_out->format;
        av_frame_get_buffer(outFrame, 0);
    }

    bool Init()
    {
        AVCodecParameters* avCodecParOut = ConfigureAvCodec();

        AVDictionary* options = ConfigureScreenCapture();

        AVInputFormat* ifmt = av_find_input_format("gdigrab");
        auto ifmtCtxLocal = avformat_alloc_context();
        if (avformat_open_input(&ifmtCtxLocal, "desktop", ifmt, &options) < 0)
        {
            return false;
        }
        ifmtCtx = ifmtCtxLocal;

        videoStreamIndx = GetVideoStreamIndex();

        AVCodecParameters* avCodecParIn = avcodec_parameters_alloc();
        avCodecParIn = ifmtCtx->streams[videoStreamIndx]->codecpar;

        AVCodec* avCodec = avcodec_find_decoder(avCodecParIn->codec_id);
        if (avCodec == NULL)
        {
            return false;
        }

        avcodecContx = avcodec_alloc_context3(avCodec);
        if (avcodec_parameters_to_context(avcodecContx, avCodecParIn) < 0)
        {
            return false;
        }

        //av_dict_set
        int value = avcodec_open2(avcodecContx, avCodec, NULL); //Initialize the AVCodecContext to use the given AVCodec.
        if (value < 0)
        {
            return false;
        }

        AVOutputFormat* ofmt = av_guess_format("h264", NULL, NULL);

        if (ofmt == NULL)
        {
            return false;
        }

        auto ofmtCtxLocal = avformat_alloc_context();
        avformat_alloc_output_context2(&ofmtCtxLocal, ofmt, NULL, NULL);
        if (ofmtCtxLocal == NULL)
        {
            return false;
        }
        ofmtCtx = ofmtCtxLocal;

        AVCodec* avCodecOut = avcodec_find_encoder(avCodecParOut->codec_id);
        if (avCodecOut == NULL)
        {
            return false;
        }

        videoStream = avformat_new_stream(ofmtCtx, avCodecOut);
        if (videoStream == NULL)
        {
            return false;
        }

        avCntxOut = avcodec_alloc_context3(avCodecOut);
        if (avCntxOut == NULL)
        {
            return false;
        }

        if (avcodec_parameters_copy(videoStream->codecpar, avCodecParOut) < 0)
        {
            return false;
        }

        if (avcodec_parameters_to_context(avCntxOut, avCodecParOut) < 0)
        {
            return false;
        }

        avCntxOut->gop_size = 30; //3; //Use I-Frame frame every 30 frames.
        avCntxOut->max_b_frames = 0;
        avCntxOut->time_base.num = 1;
        avCntxOut->time_base.den = framerate;

        //avio_open(&ofmtCtx->pb, "", AVIO_FLAG_READ_WRITE);

        if (avformat_write_header(ofmtCtx, NULL) < 0)
        {
            return false;
        }

        value = avcodec_open2(avCntxOut, avCodecOut, NULL); //Initialize the AVCodecContext to use the given AVCodec.
        if (value < 0)
        {
            return false;
        }

        if (avcodecContx->codec_id == AV_CODEC_ID_H264)
        {
            av_opt_set(avCntxOut->priv_data, "preset", "ultrafast", 0);
            av_opt_set(avCntxOut->priv_data, "zerolatency", "1", 0);
            av_opt_set(avCntxOut->priv_data, "tune", "ull", 0);
        }

        if ((ofmtCtx->oformat->flags & AVFMT_GLOBALHEADER) != 0)
        {
            avCntxOut->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        }

        CreateFrames(avCodecParIn, avCodecParOut);

        swsCtx = sws_alloc_context();
        if (sws_init_context(swsCtx, NULL, NULL) < 0)
        {
            return false;
        }

        swsCtx = sws_getContext(avcodecContx->width, avcodecContx->height, avcodecContx->pix_fmt,
            avCntxOut->width, avCntxOut->height, avCntxOut->pix_fmt, SWS_FAST_BILINEAR,
            NULL, NULL, NULL);
        if (swsCtx == NULL)
        {
            return false;
        }

        return true;
    }

    void Encode(AVCodecContext* enc_ctx, AVFrame* frame, AVPacket* pkt)
    {
        int ret;

        /* send the frame to the encoder */
        ret = avcodec_send_frame(enc_ctx, frame);
        if (ret < 0)
        {
            return;
        }

        while (ret >= 0)
        {
            ret = avcodec_receive_packet(enc_ctx, pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return;
            if (ret < 0)
            {
                return;
            }

            fs.write((char*)pkt->data, pkt->size);
            av_packet_unref(pkt);
        }
    }

    void EncodeFrames(int noFrames)
    {
        int frameCount = 0;
        avPkt = av_packet_alloc();
        AVPacket* outPacket = av_packet_alloc();
        encPacketCounter = 0;

        while (av_read_frame(ifmtCtx, avPkt) >= 0)
        {
            if (frameCount++ == noFrames)
                break;
            if (avPkt->stream_index != videoStreamIndx) continue;

            avcodec_send_packet(avcodecContx, avPkt);

            if (avcodec_receive_frame(avcodecContx, avFrame) >= 0) // Frame successfully decoded :)
            {
                outPacket->data = NULL; // packet data will be allocated by the encoder
                outPacket->size = 0;

                outPacket->pts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
                if (outPacket->dts != AV_NOPTS_VALUE)
                    outPacket->dts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);

                outPacket->dts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
                outPacket->duration = av_rescale_q(1, avCntxOut->time_base, videoStream->time_base);

                outFrame->pts = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
                outFrame->pkt_duration = av_rescale_q(encPacketCounter, avCntxOut->time_base, videoStream->time_base);
                encPacketCounter++;

                int sts = sws_scale(swsCtx,
                    avFrame->data, avFrame->linesize, 0, avFrame->height,
                    outFrame->data, outFrame->linesize);

                /* make sure the frame data is writable */
                auto ret = av_frame_make_writable(outFrame);
                if (ret < 0)
                    break;
                Encode(avCntxOut, outFrame, outPacket);
            }
            av_frame_unref(avFrame);
            av_packet_unref(avPkt);
        }
    }

    void Dispose()
    {
        fs.close();

        auto ifmtCtxLocal = ifmtCtx;
        avformat_close_input(&ifmtCtx);
        avformat_free_context(ifmtCtx);
        avcodec_free_context(&avcodecContx);
    }

    int main(int argc, char** argv)
    {
        avdevice_register_all();

        fs.open("out.h264");

        if (Init())
        {
            EncodeFrames(300);
        }
        else
        {
            std::cout << "Failed to Init \n";
        }

        Dispose();

        return 0;
    }


    As far as I can tell, the setup of the encoding process is correct, since it is largely unchanged from the working example in the official documentation: https://libav.org/documentation/doxygen/master/encode__video_8c_source.html


    However, there is limited documentation online about desktop capture, so I am not sure whether I have set that part up correctly.
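
    One way to isolate the capture side from the encoding code (just a sanity check, not something from the original post; the frame rate, size, duration, and output name below are illustrative and mirror the values hard-coded above) is to run the same gdigrab parameters through the ffmpeg command line and check the result:

    ffmpeg -f gdigrab -framerate 30 -video_size 1920x1080 -i desktop \
        -t 5 -c:v libx264 -preset ultrafast -pix_fmt yuv420p gdigrab_check.h264

    If that clip plays back correctly, the device options are fine and the corruption is more likely to come from the scaling and pixel-format handling in the C++ code.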


    (Screenshot: the bad output image.)


  • Find a great Google Tag Manager alternative in Matomo Tag Manager

    29 April 2020, by Joselyn Khor — Analytics Tips, Development, Marketing, Plugins

    If you’re looking for a tag management system that rivals Google’s, then Matomo Tag Manager is a great Google Tag Manager alternative that takes your tracking to the next level.

    What’s a tag manager?

    If you’re not familiar with Google Tag Manager or Matomo Tag Manager, they’re both free tag management systems that let you manage all your website code snippets (tags) in one place.

    Tags are typically JavaScript code or HTML snippets that let you integrate various features into your site in just a few clicks. For example: analytics codes, conversion tracking codes, exit popups and surveys, remarketing codes, social widgets, affiliates, and ads. With a tag manager, you can easily review and manage these different tracking codes.

    Why use a tag manager?

    Tag management systems are game changers because they let you track important data more effectively by easily adding code snippets (tags) to your website. 

    By not needing to hard-code each individual snippet, you also save time. Rather than waiting for someone to make tag changes and redeploy your website, you can make the changes yourself without needing the technical expertise of a developer.

    Why is Matomo Tag Manager a great Google Tag Manager alternative?

    Matomo Tag Manager is a great Google Tag Manager alternative. Not only does it let you manage all your tracking and marketing tags in one place, but it also offers less complexity and more flexibility.

    By tagging your website and using Matomo Tag Manager alongside Matomo Analytics, you can collect much more data than you’d be able to otherwise. 

    A bonus of using Matomo is privacy and data ownership: you get the peace of mind that comes with 100% data ownership and privacy protection. You will never be left wondering what’s happening to your data, and you can rest assured you’re doing your best to protect user privacy while still getting useful insights to improve your website.

    And since Matomo Tag Manager is one of the best alternatives to Google Tag Manager, you’ll gain more than you lose, with full confidence that your data is yours to own.

    Three key benefits of using Matomo Tag Manager:

    • Empowers you to deploy and manage your own tags
      This takes the hassle out of needing a web developer to hard-code and edit every tag on your website. Now you can deploy tracking code on chosen pages and track various data yourself. 
    • Opens up endless possibilities for data tracking
      Dig a lot deeper into analytics, conversions, and more. Now you can implement advanced tracking solutions without needing to pay an external source. 
    • Saves time and lets you create your own impact
      With limited resources you certainly don’t want to waste time going back and forth with an external party over which tags to add or remove. Over-dependence on web developers or agencies to carry out tag management for you stalls growth and experimentation opportunities. With a tag management system you have the convenience of inserting your own tags and reaching the desired outcome faster. You won’t have to forgo tracking opportunities, because now it’s in your hands.
  • Sporadic "Error parsing Cues... Operation not permitted" errors when trying to generate a DASH manifest

    22 November 2023, by kshetline

    I have already-generated .webm audio and video files (1 audio stream and 3 video resolutions for each video I want to stream). The video has been generated not (directly) by ffmpeg but by HandBrakeCLI 1.7.0, with VP9 encoding. The audio (which has never caused an error) is generated by ffmpeg using libvorbis.
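
    (The exact audio command isn't shown here, but an ffmpeg/libvorbis encode of this kind is typically something along the following lines; the input name and bitrate are only illustrative, not taken from the original pipeline.)

    ffmpeg -i 'Sample Video.mkv' -vn -c:a libvorbis -b:a 128k 'Sample Video.audio.webm'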


    Most of the time ffmpeg (version 6.1) creates a manifest without any problem. Sporadically, however, "Error parsing Cues" comes up (frequently with the latest videos I've been trying to process) and I can't create a manifest. Since this happens inside an automated pipeline that processes many videos for streaming, the audio and video sources are created exactly the same way whether ffmpeg succeeds or fails in generating a manifest, which makes this all the more confusing.


    The video files ffmpeg chokes on play perfectly well using VLC, and mediainfo doesn't show any problems with these files.
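
    (Since the complaint is specifically about the Cues, the seek index of a Matroska/WebM file, one additional check is to dump the container structure and look for the Cues element, for instance with mkvinfo from MKVToolNix; the tool choice is a suggestion, not part of the original workflow.)

    mkvinfo '.\Sample Video.v480.webm'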


    Here’s the way I’ve been (sometimes successfully, sometimes not) generating a manifest, with extra logging added:


    ffmpeg -v 9 -loglevel 99 \
      -f webm_dash_manifest -i '.\Sample Video.v480.webm' \
      -f webm_dash_manifest -i '.\Sample Video.v720.webm' \
      -f webm_dash_manifest -i '.\Sample Video.v1080.webm' \
      -f webm_dash_manifest -i '.\Sample Video.audio.webm' \
      -c copy -map 0 -map 1 -map 2 -map 3 \
      -f webm_dash_manifest -adaptation_sets "id=0,streams=0,1,2 id=1,streams=3" \
      '.\Sample Video.mpd'


    Here’s the result when it fails:


    ffmpeg version 6.1-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
      built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
      configuration: --enable-gpl --enable-version3 --enable-static --pkg-config=pkgconf --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-dxva2 --enable-d3d11va --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
      libavutil      58. 29.100 / 58. 29.100
      libavcodec     60. 31.102 / 60. 31.102
      libavformat    60. 16.100 / 60. 16.100
      libavdevice    60.  3.100 / 60.  3.100
      libavfilter     9. 12.100 /  9. 12.100
      libswscale      7.  5.100 /  7.  5.100
      libswresample   4. 12.100 /  4. 12.100
      libpostproc    57.  3.100 / 57.  3.100
    Splitting the commandline.
    Reading option '-v' ... matched as option 'v' (set logging level) with argument '9'.
    Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument '99'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
    Reading option '-i' ... matched as output url with argument '.\Sample Video.v480.webm'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
    Reading option '-i' ... matched as output url with argument '.\Sample Video.v720.webm'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
    Reading option '-i' ... matched as output url with argument '.\Sample Video.v1080.webm'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
    Reading option '-i' ... matched as output url with argument '.\Sample Video.audio.webm'.
    Reading option '-c' ... matched as option 'c' (codec name) with argument 'copy'.
    Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '0'.
    Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '1'.
    Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '2'.
    Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '3'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
    Reading option '-adaptation_sets' ... matched as AVOption 'adaptation_sets' with argument 'id=0,streams=0,1,2 id=1,streams=3'.
    Reading option '.\Sample Video.mpd' ... matched as output url.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option v (set logging level) with argument 9.
    Successfully parsed a group of options.
    Parsing a group of options: input url .\Sample Video.v480.webm.
    Applying option f (force format) with argument webm_dash_manifest.
    Successfully parsed a group of options.
    Opening an input file: .\Sample Video.v480.webm.
    [webm_dash_manifest @ 000002bbcb41dc80] Opening '.\Sample Video.v480.webm' for reading
    [file @ 000002bbcb41e300] Setting default whitelist 'file,crypto,data'
    st:0 removing common factor 1000000 from timebase
    [webm_dash_manifest @ 000002bbcb41dc80] Error parsing Cues
    [AVIOContext @ 000002bbcb41e5c0] Statistics: 102283 bytes read, 4 seeks
    [in#0 @ 000002bbcb41dac0] Error opening input: Operation not permitted
    Error opening input file .\Sample Video.v480.webm.
    Error opening input files: Operation not permitted


    This is the mediainfo output for the offending input file, Sample Video.v480.webm:


    General
    Complete name                  : .\Sample Video.v480.webm
    Format                         : WebM
    Format version                 : Version 2
    File size                      : 628 MiB
    Duration                       : 1 h 34 min
    Overall bit rate               : 926 kb/s
    Frame rate                     : 23.976 FPS
    Encoded date                   : 2023-11-21 16:48:35 UTC
    Writing application            : HandBrake 1.7.0 2023111500
    Writing library                : Lavf60.16.100

    Video
    ID                             : 1
    Format                         : VP9
    Format profile                 : 0
    Codec ID                       : V_VP9
    Duration                       : 1 h 34 min
    Bit rate                       : 882 kb/s
    Width                          : 720 pixels
    Height                         : 480 pixels
    Display aspect ratio           : 16:9
    Frame rate mode                : Constant
    Frame rate                     : 23.976 (24000/1001) FPS
    Color space                    : YUV
    Chroma subsampling             : 4:2:0
    Bit depth                      : 8 bits
    Bits/(Pixel*Frame)             : 0.106
    Stream size                    : 598 MiB (95%)
    Default                        : Yes
    Forced                         : No
    Color range                    : Limited
    Color primaries                : BT.709
    Transfer characteristics       : BT.709
    Matrix coefficients            : BT.709


    I don't know whether I need different command-line options, or whether this might be an ffmpeg or HandBrake bug. It has taken many, many hours to generate these video files (VP9 is painfully slow to encode), so I hate to redo a lot of that work, especially by re-encoding the video with ffmpeg instead of HandBrake, as HandBrake is (oddly enough, considering it uses ffmpeg under the hood) noticeably faster.


    I have no idea what these "Cues" are that ffmpeg wants and can't parse, or how I would change them.
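
    (For reference, the Cues are the WebM/Matroska seek index, a list of timestamp-to-byte-offset entries that the webm_dash_manifest demuxer reads when it opens each input. A workaround sometimes suggested in this situation, which avoids any re-encoding, is to remux the offending file with stream copy so that ffmpeg rewrites the container, Cues included; the output name below is only an example.)

    ffmpeg -i '.\Sample Video.v480.webm' -c copy '.\Sample Video.v480.remuxed.webm'

    If the remuxed file then opens cleanly with webm_dash_manifest, the problem was the container's index rather than the VP9 stream itself.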
