
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011
Updated: July 2013
Language: English
Type: Text
Other articles (67)
-
Improvements to the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
All it takes is to enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
Custom menus
14 November 2010. MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This gives channel administrators the ability to fine-tune these menus.
Menus created when the site is initialised
By default, three menus are created automatically when the site is initialised: The main menu; Identifier: barrenav; This menu is usually inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based templates; (...)
The plugin: Gestion de la mutualisation (shared-hosting management)
2 March 2010. The Gestion de mutualisation plugin manages the various MediaSPIP channels from a master site. Its goal is to provide a pure SPIP solution to replace the old one.
Basic installation
Install the SPIP files on the server.
Then add the "mutualisation" plugin at the root of the site as described here.
Customise the central mes_options.php file as you wish. As an example, here is the one used on the mediaspip.net platform:
<?php (...)
On other sites (8431)
-
Why is there no AVFrame->data[2] data when decoding h264 with ffmpeg using "h264_cuvid"?
27 July 2017, by Wu NL. Environment: Ubuntu 16.04 64-bit; ffmpeg 3.3.2 built with cuda, cuvid, libnpp...
Using the ffmpeg command: ffmpeg -vsync 0 -c:v h264_cuvid -i test.264 -f rawvideo test.yuv
works fine; the generated YUV file is OK.
BUT when I decode this .264 file in my own code using the 'h264_cuvid' decoder, a problem occurs. This is my code:
#include <stdio.h>
#define __STDC_CONSTANT_MACROS
#ifdef _WIN32
//Windows
extern "C"
{
#include "libavcodec/avcodec.h"
};
#else
//Linux...
#ifdef __cplusplus
extern "C"
{
#endif
#include <libavcodec/avcodec.h>
#ifdef __cplusplus
};
#endif
#endif
//test different codec
#define TEST_H264 1
#define TEST_HEVC 0
int main(int argc, char* argv[])
{
AVCodec *pCodec;
AVCodecContext *pCodecCtx= NULL;
AVCodecParserContext *pCodecParserCtx=NULL;
FILE *fp_in;
FILE *fp_out;
AVFrame *pFrame;
const int in_buffer_size=4096;
unsigned char in_buffer[in_buffer_size + FF_INPUT_BUFFER_PADDING_SIZE]= {0};
unsigned char *cur_ptr;
int cur_size;
AVPacket packet;
int ret, got_picture;
#if TEST_HEVC
enum AVCodecID codec_id=AV_CODEC_ID_HEVC;
char filepath_in[]="bigbuckbunny_480x272.hevc";
#elif TEST_H264
AVCodecID codec_id=AV_CODEC_ID_H264;
char filepath_in[]="2_60_265to264.264";
#else
AVCodecID codec_id=AV_CODEC_ID_MPEG2VIDEO;
char filepath_in[]="bigbuckbunny_480x272.m2v";
#endif
char filepath_out[]="mainSend.yuv";
int first_time=1;
//av_log_set_level(AV_LOG_DEBUG);
avcodec_register_all();
// pCodec = avcodec_find_decoder(codec_id);
pCodec = avcodec_find_decoder_by_name("h264_cuvid");
if (!pCodec)
{
printf("Codec not found\n");
return -1;
}
pCodecCtx = avcodec_alloc_context3(pCodec);
if (!pCodecCtx)
{
printf("Could not allocate video codec context\n");
return -1;
}
pCodecParserCtx=av_parser_init(pCodec->id);
if (!pCodecParserCtx)
{
printf("Could not allocate video parser context\n");
return -1;
}
if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
{
printf("Could not open codec\n");
return -1;
}
//Input File
fp_in = fopen(filepath_in, "rb");
if (!fp_in)
{
printf("Could not open input stream\n");
return -1;
}
//Output File
fp_out = fopen(filepath_out, "wb");
if (!fp_out)
{
printf("Could not open output YUV file\n");
return -1;
}
pFrame = av_frame_alloc();
av_init_packet(&packet);
while (1)
{
cur_size = fread(in_buffer, 1, in_buffer_size, fp_in);
if (cur_size == 0)
break;
cur_ptr=in_buffer;
while (cur_size>0)
{
int len = av_parser_parse2(
pCodecParserCtx, pCodecCtx,
&packet.data, &packet.size,
cur_ptr, cur_size,
AV_NOPTS_VALUE, AV_NOPTS_VALUE, AV_NOPTS_VALUE);
cur_ptr += len;
cur_size -= len;
if(packet.size==0)
continue;
//Some Info from AVCodecParserContext
printf("[Packet]Size:%6d\t",packet.size);
switch(pCodecParserCtx->pict_type)
{
case AV_PICTURE_TYPE_I:
printf("Type:I\tNumber:%4d\n",pCodecParserCtx->output_picture_number);
break;
case AV_PICTURE_TYPE_P:
printf("Type:P\t");
break;
case AV_PICTURE_TYPE_B:
printf("Type:B\t");
break;
default:
printf("Type:Other\t");
break;
}
printf("Number:%4d\n",pCodecParserCtx->output_picture_number);
AVFrame* myFrame = av_frame_alloc();
ret = avcodec_decode_video2(pCodecCtx, myFrame, &got_picture, &packet);
if (ret < 0)
{
printf("Decode Error.\n");
return ret;
}
if (got_picture)
{
if(first_time)
{
printf("\nCodec Full Name:%s\n",pCodecCtx->codec->long_name);
printf("width:%d\nheight:%d\n\n",pCodecCtx->width,pCodecCtx->height);
first_time=0;
}
//Y, U, V
for(int i=0; i<myFrame->height; i++)
{
fwrite(myFrame->data[0]+myFrame->linesize[0]*i,1,myFrame->width,fp_out);
}
for(int i=0; i<myFrame->height/2; i++)
{
fwrite(myFrame->data[1]+myFrame->linesize[1]*i,1,myFrame->width/2,fp_out);
}
for(int i=0; i<myFrame->height/2; i++)
{
fwrite(myFrame->data[2]+myFrame->linesize[2]*i,1,myFrame->width/2,fp_out);
}
// printf("pframe's width height %d %d\t key frame %d\n",myFrame->width,myFrame->height,myFrame->key_frame);
printf("Succeed to decode 1 frame!\n");
av_frame_free(&myFrame);
}
}
}
fclose(fp_in);
fclose(fp_out);
av_parser_close(pCodecParserCtx);
av_frame_free(&pFrame);
avcodec_close(pCodecCtx);
av_free(pCodecCtx);
return 0;
}
In this demo code, I call h264_cuvid via
avcodec_find_decoder_by_name("h264_cuvid");
BUT the code crashes at
fwrite(myFrame->data[2]+myFrame->linesize[2]*i,1,myFrame->width/2,fp_out);
So after debugging with Code::Blocks, I found that there is no data in myFrame->data[2] (as seen in the Code::Blocks watch window). Any suggestions? Thanks!
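A likely explanation: hardware decoders such as h264_cuvid normally return frames as NV12 rather than planar YUV420P, so the luma stays in data[0], the chroma samples are interleaved in data[1], and data[2] is simply NULL. The sketch below shows how the write-out could branch on the pixel format; it reuses the variable names from the code above, and the NV12 assumption is mine rather than something confirmed for this exact build.

// Sketch: h264_cuvid typically delivers AV_PIX_FMT_NV12, where data[2] is unused.
if (myFrame->format == AV_PIX_FMT_NV12)
{
    // Y plane: 'width' bytes per row
    for (int i = 0; i < myFrame->height; i++)
        fwrite(myFrame->data[0] + myFrame->linesize[0]*i, 1, myFrame->width, fp_out);
    // Interleaved UV plane: height/2 rows of 'width' bytes (width/2 U/V pairs per row)
    for (int i = 0; i < myFrame->height/2; i++)
        fwrite(myFrame->data[1] + myFrame->linesize[1]*i, 1, myFrame->width, fp_out);
}
else
{
    // Planar YUV420P: write the three planes separately, as in the original loops
}

Note that the raw file written this way is NV12 rather than I420, so a player needs to be told so (for example ffplay -f rawvideo -pixel_format nv12 -video_size WxH test.yuv), or the frame can be converted to YUV420P with sws_scale() before writing.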
-
"Application provided invalid, non monotonically increasing dts to muxer in stream 0 : 47104 >= -4251" in C ffmpeg video & audio streams processing
30 December 2023, by M.Hakim. For an input.mp4 file containing a video stream and an audio stream, I intend to convert the video stream to the h264 codec and the audio stream to the aac codec, and to combine the two streams into an output.mp4 file using C and the ffmpeg libraries.
I am getting the error [mp4 @ 0x5583c88fd340] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 47104 >= -4251
How do I solve this error?


#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>

int encodeVideoAndAudio4(char *pInName, char *pOutName) {

 AVFormatContext *format_ctx = avformat_alloc_context();

 AVCodecContext *video_dec_ctx = NULL;
 AVCodecContext *video_enc_ctx = NULL;
 AVCodec *video_dec_codec = NULL;
 AVCodec *video_enc_codec = NULL;
 AVDictionary *video_enc_opts = NULL;

 AVCodecContext *audio_dec_ctx = NULL;
 AVCodecContext *audio_enc_ctx = NULL;
 AVCodec *audio_dec_codec = NULL;
 AVCodec *audio_enc_codec = NULL;


 if (avformat_open_input(&format_ctx, pInName, NULL, NULL) < 0) {
 fprintf(stderr, "Error: Could not open input file\n");
 return 1;
 }

 if (avformat_find_stream_info(format_ctx, NULL) < 0) {
 fprintf(stderr, "Error: Could not find stream information\n");
 return 1;
 }

 for (int i = 0; i < format_ctx->nb_streams; i++) {
 AVStream *stream = format_ctx->streams[i];
 const char *media_type_str = av_get_media_type_string(stream->codecpar->codec_type);
 AVRational time_base = stream->time_base;

 }

 int video_stream_index = -1;
 for (int i = 0; i < format_ctx->nb_streams; i++) {
 if (format_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
 video_stream_index = i;
 break;
 }
 }
 if (video_stream_index == -1) {
 fprintf(stderr, "Error: Could not find a video stream\n");
 return 1;
 }

 AVStream *videoStream = format_ctx->streams[video_stream_index];
 video_dec_ctx = avcodec_alloc_context3(NULL);
 avcodec_parameters_to_context(video_dec_ctx, videoStream->codecpar);

 video_dec_codec = avcodec_find_decoder(video_dec_ctx->codec_id);

 if (!video_dec_codec) {
 fprintf(stderr, "Unsupported video codec!\n");
 return 1;
 }

 if (avcodec_open2(video_dec_ctx, video_dec_codec, NULL) < 0) {
 fprintf(stderr, "Error: Could not open a video decoder codec\n");
 return 1;
 }

 video_enc_codec = avcodec_find_encoder(AV_CODEC_ID_H264);
 if (!video_enc_codec) {
 fprintf(stderr, "Error: Video Encoder codec not found\n");
 return 1;
 }

 video_enc_ctx = avcodec_alloc_context3(video_enc_codec);
 if (!video_enc_ctx) {
 fprintf(stderr, "Error: Could not allocate video encoder codec context\n");
 return 1;
 }

 videoStream->time_base = (AVRational){1, 25};

 video_enc_ctx->bit_rate = 1000; 
 video_enc_ctx->width = video_dec_ctx->width;
 video_enc_ctx->height = video_dec_ctx->height;
 video_enc_ctx->time_base = (AVRational){1, 25};
 video_enc_ctx->gop_size = 10;
 video_enc_ctx->max_b_frames = 1;
 video_enc_ctx->pix_fmt = AV_PIX_FMT_YUV420P;

 if (avcodec_open2(video_enc_ctx, video_enc_codec, NULL) < 0) {
 fprintf(stderr, "Error: Could not open encoder codec\n");
 return 1;
 }

 av_dict_set(&video_enc_opts, "preset", "medium", 0);
 av_opt_set_dict(video_enc_ctx->priv_data, &video_enc_opts);

 AVPacket video_pkt;
 av_init_packet(&video_pkt);
 video_pkt.data = NULL;
 video_pkt.size = 0;

 AVPacket pkt;
 av_init_packet(&pkt);
 pkt.data = NULL;
 pkt.size = 0;

 AVFrame *video_frame = av_frame_alloc();
 if (!video_frame) {
 fprintf(stderr, "Error: Could not allocate video frame\n");
 return 1;
 }

 video_frame->format = video_enc_ctx->pix_fmt;
 video_frame->width = video_enc_ctx->width;
 video_frame->height = video_enc_ctx->height;
 
 int audio_stream_index = -1;
 for (int i = 0; i < format_ctx->nb_streams; i++) {
 if (format_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
 audio_stream_index = i;
 break;
 }
 }

 if (audio_stream_index == -1) {
 fprintf(stderr, "Error: Could not find an audio stream\n");
 return 1;
 }
 
 AVStream *audioStream = format_ctx->streams[audio_stream_index];
 audio_dec_ctx = avcodec_alloc_context3(NULL);
 avcodec_parameters_to_context(audio_dec_ctx, audioStream->codecpar);
 
 audio_dec_codec = avcodec_find_decoder(audio_dec_ctx->codec_id);
 
 if (!audio_dec_codec) {
 fprintf(stderr, "Unsupported audio codec!\n");
 return 1;
 }
 
 if (avcodec_open2(audio_dec_ctx, audio_dec_codec, NULL) < 0) {
 fprintf(stderr, "Error: Could not open Audio decoder codec\n");
 return 1;
 }
 
 audio_enc_codec = avcodec_find_encoder(AV_CODEC_ID_AAC);
 if (!audio_enc_codec) {
 fprintf(stderr, "Error: Audio Encoder codec not found\n");
 return 1;
 }
 
 audio_enc_ctx = avcodec_alloc_context3(audio_enc_codec);
 if (!audio_enc_ctx) {
 fprintf(stderr, "Error: Could not allocate audio encoder codec context\n");
 return 1;
 }

 audioStream->time_base = (AVRational){1, audio_dec_ctx->sample_rate};
 
 audio_enc_ctx->bit_rate = 64000; 
 audio_enc_ctx->sample_rate = audio_dec_ctx->sample_rate;
 audio_enc_ctx->channels = audio_dec_ctx->channels;
 audio_enc_ctx->channel_layout = av_get_default_channel_layout(audio_enc_ctx->channels);
 audio_enc_ctx->sample_fmt = AV_SAMPLE_FMT_FLTP;
 audio_enc_ctx->profile = FF_PROFILE_AAC_LOW;
 
 if (avcodec_open2(audio_enc_ctx, audio_enc_codec, NULL) < 0) {
 fprintf(stderr, "Error: Could not open encoder codec\n");
 return 1;
 }
 
 AVPacket audio_pkt;
 av_init_packet(&audio_pkt);
 audio_pkt.data = NULL;
 audio_pkt.size = 0;
 
 AVFrame *audio_frame = av_frame_alloc();
 if (!audio_frame) {
 fprintf(stderr, "Error: Could not allocate audio frame\n");
 return 1;
 }

 audio_frame->format = audio_enc_ctx->sample_fmt;
 audio_frame->sample_rate = audio_enc_ctx->sample_rate;
 audio_frame->channels = audio_enc_ctx->channels;
 
 AVFormatContext *output_format_ctx = NULL;
 if (avformat_alloc_output_context2(&output_format_ctx, NULL, NULL, pOutName) < 0) {
 fprintf(stderr, "Error: Could not create output context\n");
 return 1;
 }
 
 if (avio_open(&output_format_ctx->pb, pOutName, AVIO_FLAG_WRITE) < 0) {
 fprintf(stderr, "Error: Could not open output file\n");
 return 1;
 }
 
 AVStream *video_stream = avformat_new_stream(output_format_ctx, video_enc_codec);
 if (!video_stream) {
 fprintf(stderr, "Error: Could not create video stream\n");
 return 1;
 }
 
 av_dict_set(&video_stream->metadata, "rotate", "90", 0);
 
 if (avcodec_parameters_from_context(video_stream->codecpar, video_enc_ctx) < 0) {
 fprintf(stderr, "Error: Could not copy video codec parameters\n");
 return 1;
 }
 
 AVStream *audio_stream = avformat_new_stream(output_format_ctx, audio_enc_codec);
 if (!audio_stream) {
 fprintf(stderr, "Error: Could not create audio stream\n");
 return 1;
 }
 
 if (avcodec_parameters_from_context(audio_stream->codecpar, audio_enc_ctx) < 0) {
 fprintf(stderr, "Error: Could not copy audio codec parameters\n");
 return 1;
 }
 
 if (avformat_write_header(output_format_ctx, NULL) < 0) {
 fprintf(stderr, "Error: Could not write header\n");
 return 1;
 }
 
 int video_frame_count = 0, audio_frame_count = 0;
 
 while (1) {

 if (av_read_frame(format_ctx, &pkt) < 0) {
 fprintf(stderr, "BREAK FROM MAIN WHILE LOOP\n");
 break;
 }

 if (pkt.stream_index == video_stream_index) {

 if (avcodec_send_packet(video_dec_ctx, &pkt) < 0) {
 fprintf(stderr, "Error: Could not send video packet for decoding\n");
 return 1;
 }

 while (avcodec_receive_frame(video_dec_ctx, video_frame) == 0) { 

 if (avcodec_send_frame(video_enc_ctx, video_frame) < 0) {
 fprintf(stderr, "Error: Could not send video frame for encoding\n");
 return 1;
 }

 while (avcodec_receive_packet(video_enc_ctx, &video_pkt) == 0) {
 
 if (av_write_frame(output_format_ctx, &video_pkt) < 0) {
 fprintf(stderr, "Error: Could not write video packet to output file.\n");
 return 1;
 }

 av_packet_unref(&video_pkt);
 }

 video_frame_count++;
 }
 } else if (pkt.stream_index == audio_stream_index) {

 if (avcodec_send_packet(audio_dec_ctx, &pkt) < 0) {
 fprintf(stderr, "Error: Could not send audio packet for decoding\n");
 return 1;
 }

 while (avcodec_receive_frame(audio_dec_ctx, audio_frame) == 0) { 
 
 if (avcodec_send_frame(audio_enc_ctx, audio_frame) < 0) {
 fprintf(stderr, "Error: Could not send audio frame for encoding\n");
 return 1;
 }

 while (avcodec_receive_packet(audio_enc_ctx, &audio_pkt) == 0) {
 if (av_write_frame(output_format_ctx, &audio_pkt) < 0) {
 fprintf(stderr, "Error: Could not write audio packet to output file\n");
 return 1;
 }

 av_packet_unref(&audio_pkt);
 }

 audio_frame_count++;
 }
 }

 av_packet_unref(&pkt);
 }

 if (av_write_trailer(output_format_ctx) < 0) {
 fprintf(stderr, "Error: Could not write trailer\n");
 return 1;
 } 
 
 avformat_close_input(&format_ctx);
 avio_close(output_format_ctx->pb);
 avformat_free_context(output_format_ctx);
 
 av_frame_free(&video_frame);
 avcodec_free_context(&video_dec_ctx);
 avcodec_free_context(&video_enc_ctx);
 av_dict_free(&video_enc_opts);
 
 av_frame_free(&audio_frame);
 avcodec_free_context(&audio_dec_ctx);
 avcodec_free_context(&audio_enc_ctx);

 printf("Conversion complete. %d video frames processed and %d audio frames processed.\n",video_frame_count, audio_frame_count);

 return 0;
}


int main(int argc, char *argv[]) {
 if (argc != 3) {
 printf("Usage: %s <input file> <output file>\n", argv[0]);
 return 1;
 }

 const char *input_filename = argv[1];
 const char *output_filename = argv[2];

 avcodec_register_all();
 av_register_all();

 int returnValue = encodeVideoAndAudio4(input_filename, output_filename);
 
 return 0;
}




When I comment out the blocks that process one of the two streams, the other stream is converted and written to output.mp4 successfully.
When each stream is processed in its own loop, only the first stream is processed and written to the output.mp4 file and the other stream is skipped.
When both streams are processed in a common loop, as in the code above, the error mentioned above appears.
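A common cause of this particular muxer error is that the encoded packets keep the encoder's time base ({1, 25} for the video here) and are never rescaled to the time base of the output stream, and that av_write_frame() is used even though two streams are being interleaved. The sketch below shows how the video write loop could look with those two changes, reusing the variable names from the code above; the audio loop would be adjusted the same way. This is a suggestion under those assumptions, not a verified fix for this exact program.

while (avcodec_receive_packet(video_enc_ctx, &video_pkt) == 0) {
    /* Convert timestamps from the encoder time base {1, 25} to the time base
       the muxer assigned to the output stream after avformat_write_header(). */
    av_packet_rescale_ts(&video_pkt, video_enc_ctx->time_base, video_stream->time_base);
    video_pkt.stream_index = video_stream->index;

    /* Interleaved writing lets the muxer buffer and reorder video and audio packets
       so their dts values increase monotonically; it also takes ownership of the packet. */
    if (av_interleaved_write_frame(output_format_ctx, &video_pkt) < 0) {
        fprintf(stderr, "Error: Could not write video packet to output file.\n");
        return 1;
    }
}

The decoded frame's pts also has to be carried into the encoder time base before avcodec_send_frame(), for example video_frame->pts = av_rescale_q(video_frame->pts, videoStream->time_base, video_enc_ctx->time_base); otherwise the encoder itself can emit non-monotonic timestamps.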


-
Sporadic "Error parsing Cues... Operation not permitted" errors when trying to generate a DASH manifest
22 November 2023, by kshetline. I have already-generated .webm audio and video files (1 audio, 3 video resolutions for each video I want to stream). The video has been generated not (directly) by ffmpeg, but by HandBrakeCLI 1.7.0, with VP9 encoding. The audio (which has never caused an error) is generated by ffmpeg using libvorbis.


Most of the time ffmpeg (version 6.1) creates a manifest without any problem. Sporadically, however, "Error parsing Cues" comes up (frequently with the latest videos I've been trying to process) and I can't create a manifest. Since this is happening during an automated process to process many videos for streaming, the audio and video sources are being created exactly the same way whether ffmpeg succeeds or fails in generating a manifest, making this all the more confusing.


The video files ffmpeg chokes on play perfectly well using VLC, and mediainfo doesn't show any problems with these files.


Here's the way I've been (sometimes successfully, sometimes not) generating a manifest, with extra logging added:


ffmpeg -v 9 -loglevel 99 \
 -f webm_dash_manifest -i '.\Sample Video.v480.webm' \
 -f webm_dash_manifest -i '.\Sample Video.v720.webm' \
 -f webm_dash_manifest -i '.\Sample Video.v1080.webm' \
 -f webm_dash_manifest -i '.\Sample Video.audio.webm' \
 -c copy -map 0 -map 1 -map 2 -map 3 \
 -f webm_dash_manifest -adaptation_sets "id=0,streams=0,1,2 id=1,streams=3" \
 '.\Sample Video.mpd'



Here's the result when it fails:


ffmpeg version 6.1-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --pkg-config=pkgconf --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-dxva2 --enable-d3d11va --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
 libavutil 58. 29.100 / 58. 29.100
 libavcodec 60. 31.102 / 60. 31.102
 libavformat 60. 16.100 / 60. 16.100
 libavdevice 60. 3.100 / 60. 3.100
 libavfilter 9. 12.100 / 9. 12.100
 libswscale 7. 5.100 / 7. 5.100
 libswresample 4. 12.100 / 4. 12.100
 libpostproc 57. 3.100 / 57. 3.100
Splitting the commandline.
Reading option '-v' ... matched as option 'v' (set logging level) with argument '9'.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument '99'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
Reading option '-i' ... matched as output url with argument '.\Sample Video.v480.webm'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
Reading option '-i' ... matched as output url with argument '.\Sample Video.v720.webm'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
Reading option '-i' ... matched as output url with argument '.\Sample Video.v1080.webm'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
Reading option '-i' ... matched as output url with argument '.\Sample Video.audio.webm'.
Reading option '-c' ... matched as option 'c' (codec name) with argument 'copy'.
Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '0'.
Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '1'.
Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '2'.
Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '3'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
Reading option '-adaptation_sets' ... matched as AVOption 'adaptation_sets' with argument 'id=0,streams=0,1,2 id=1,streams=3'.
Reading option '.\Sample Video.mpd' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option v (set logging level) with argument 9.
Successfully parsed a group of options.
Parsing a group of options: input url .\Sample Video.v480.webm.
Applying option f (force format) with argument webm_dash_manifest.
Successfully parsed a group of options.
Opening an input file: .\Sample Video.v480.webm.
[webm_dash_manifest @ 000002bbcb41dc80] Opening '.\Sample Video.v480.webm' for reading
[file @ 000002bbcb41e300] Setting default whitelist 'file,crypto,data'
st:0 removing common factor 1000000 from timebase
[webm_dash_manifest @ 000002bbcb41dc80] Error parsing Cues
[AVIOContext @ 000002bbcb41e5c0] Statistics: 102283 bytes read, 4 seeks
[in#0 @ 000002bbcb41dac0] Error opening input: Operation not permitted
Error opening input file .\Sample Video.v480.webm.
Error opening input files: Operation not permitted



This is mediainfo for the offending input file, Sample Video.v480.webm:

General
Complete name : .\Sample Video.v480.webm
Format : WebM
Format version : Version 2
File size : 628 MiB
Duration : 1 h 34 min
Overall bit rate : 926 kb/s
Frame rate : 23.976 FPS
Encoded date : 2023-11-21 16:48:35 UTC
Writing application : HandBrake 1.7.0 2023111500
Writing library : Lavf60.16.100

Video
ID : 1
Format : VP9
Format profile : 0
Codec ID : V_VP9
Duration : 1 h 34 min
Bit rate : 882 kb/s
Width : 720 pixels
Height : 480 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 23.976 (24000/1001) FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Bits/(Pixel*Frame) : 0.106
Stream size : 598 MiB (95%)
Default : Yes
Forced : No
Color range : Limited
Color primaries : BT.709
Transfer characteristics : BT.709
Matrix coefficients : BT.709



I don't know if I need different command line options, or whether this might be an ffmpeg or Handbrake bug. It has taken many, many hours to generate these video files (VP9 is painfully slow to encode), so I hate to do a lot of this over again, especially re-encoding the video with ffmpeg instead of Handbrake, as Handbrake is (oddly enough, considering it uses ffmpeg under the hood) noticeably faster.


I have no idea what these "Cues" are that ffmpeg wants and can't parse, or how I would change them.
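For reference, the Cues element is the WebM/Matroska seek index: a list of cue points that map timestamps to the byte offsets of clusters, and the webm_dash_manifest demuxer has to parse it to describe the segments in the manifest. One possible workaround, not verified against these exact files, is to re-mux the HandBrake output with ffmpeg's WebM muxer in DASH mode (stream copy only, so nothing is re-encoded), which rewrites the Cues in the layout the manifest step expects:

ffmpeg -i '.\Sample Video.v480.webm' -c copy -f webm -dash 1 '.\Sample Video.v480.remux.webm'

If the re-muxed file (the .remux.webm name is just an example) then passes through the webm_dash_manifest step without the parsing error, the problem lies in how HandBrake's muxer writes its Cues rather than in the encoded video itself.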