
Other articles (39)
-
MediaSPIP Init and Diogène: MediaSPIP publication types
11 November 2010
When a MediaSPIP site is installed, the MediaSPIP Init plugin performs a number of operations, the main one being to create four main sections in the site and five form templates for Diogène.
These four main sections (also called sectors) are: Medias; Sites; Editos; Actualités.
For each of these sections, a form template of the same name is created. For the "Medias" section, a second "category" template is also created, making it possible to add (...)
Improving the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
Simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen), enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise a description of the problem as possible; if possible, the steps that led to the problem; a link to the site/page in question.
If you think you have solved the bug, open a ticket and attach a corrective patch to it.
You may also (...)
On other sites (7135)
-
Discord FFMPEG audio won't play from yt-dlp
19 March 2023, by user21236822
My question is this: why isn't my bot playing audio?


I want the bot to join, play audio from queue, then disconnect without downloading an mp3 file.


I tried using youtube-dl, but I switched to the yt-dlp library after getting errors I couldn't fix.
I am running on Windows 10 locally. All my libraries are up to date.


Here are my ydl_opts and FFMPEG_OPTIONS:


ydl_opts = {
    'format': 'bestaudio/best',
    'postprocessors': [{
        'key': 'FFmpegExtractAudio',
        'preferredcodec': 'mp3',
        'preferredquality': '192',
    }],
}

FFMPEG_OPTIONS = {
    'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5',
    'options': '-vn'
}



Here is where I believe the problem is.


async def play():
    print("Play Called")
    musicPlay()
    # Get message object from initial request
    message = ytLinkQue.get()
    print(f"Message object recieved: {message}")
    voiceChannel = message.author.voice.channel
    vc = await voiceChannel.connect()
    songsPlayed = 0

    while not ytLinkQue.empty():
        # Get current song
        currentSong = ytLinkQue.get()[0]
        print(f"Current song: {currentSong}")

        # Get song from Youtube
        with yt_dlp.YoutubeDL(ydl_opts) as ydl:
            # song = ydl.download(currentSong)
            info = ydl.extract_info(currentSong, download=False)
            song = info['formats'][0]['url']

        # Play Song
        vc.play(discord.FFmpegPCMAudio(song, **FFMPEG_OPTIONS), after=lambda e: print('Song done'))

        # Wait until the song has finished playing
        while vc.is_playing():
            print("playing rn")
            await asyncio.sleep(1)

    await vc.disconnect()
    musicStop()
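
For reference, here is one way to pick a stream that actually carries audio, rather than taking info['formats'][0], which is simply the first entry yt-dlp lists (usually the lowest-quality format and not necessarily one with an audio track). This is only an illustrative sketch under that assumption, not code from the repository above, and the helper name pick_audio_url is invented for the example:

import yt_dlp

def pick_audio_url(video_url):
    """Illustrative sketch: return a direct stream URL for a format that contains audio."""
    with yt_dlp.YoutubeDL({'format': 'bestaudio/best'}) as ydl:
        info = ydl.extract_info(video_url, download=False)
    # yt-dlp lists formats roughly worst-first; keep only entries with an audio codec.
    audio_formats = [f for f in info['formats'] if f.get('acodec') not in (None, 'none')]
    if not audio_formats:
        raise RuntimeError('No audio-capable format found')
    return audio_formats[-1]['url']

Such a helper could then feed discord.FFmpegPCMAudio in place of the formats[0] lookup, with the rest of play() unchanged.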



When play() is called, here is the terminal output, with my annotations marked as **** text ****:


>python main.py
2023-02-17 15:21:09 INFO discord.client logging in using static token
2023-02-17 15:21:10 INFO discord.gateway Shard ID None has connected to Gateway (Session ID: 60b9fce14faa5daa4aed9eb6db01a74d).
Max que: 50
Text Channel: 828698708123451434
Testing Bot#4591 is ready.
Passing message object
**** play() function is called ****
Play Called
Message object recieved: <message channel="<TextChannel" position="7" nsfw="False" news="False"> type= author=<member discriminator="'0199'" bot="False" nick="'Fragnk7?'" guild="<Guild" chunked="True">> flags=<messageflags value="0">>
2023-02-17 15:21:16 INFO discord.voice_client Connecting to voice...
2023-02-17 15:21:16 INFO discord.voice_client Starting voice handshake... (connection attempt 1)
2023-02-17 15:21:17 INFO discord.voice_client Voice handshake complete. Endpoint found seattle2004.discord.media
Current song: https://www.youtube.com/watch?v=vcAp4nmTZCA
[youtube] Extracting URL: https://www.youtube.com/watch?v=vcAp4nmTZCA 
[youtube] vcAp4nmTZCA: Downloading webpage 
[youtube] vcAp4nmTZCA: Downloading android player API JSON 
**** Does not play any audio ****
Playing rn
Song done
2023-02-17 15:21:18 INFO discord.player ffmpeg process 20700 successfully terminated with return code of 1.
2023-02-17 15:21:19 INFO discord.voice_client The voice handshake is being terminated for Channel ID 400178308467392513 (Guild ID 261601676941721602)
2023-02-17 15:21:19 INFO discord.voice_client Disconnecting from voice normally, close code 1000.


On Discord's end, the bot successfully connects, then disconnects after about 2 seconds.


Note: I've only included the code I think is relevant. Please let me know if I should add anything else to the post; otherwise, here is the GitHub repository for the project (the code is in main.py):
https://github.com/LukeLeimbach/wallMomentMusic


Thank you in advance!


I've applied the advice from these posts, but it still will not play audio:


-
https://stackoverflow.com/questions/45770016/how-do-i-make-my-discord-bot-play-audio-from-youtube


-
https://stackoverflow.com/questions/66070749/how-to-fix-discord-music-bot-that-stops-playing-before-the-song-is-actually-over?newreg=c70dd786cf5844e490045494223c0381


-
https://stackoverflow.com/questions/57688808/playing-music-with-a-bot-from-youtube-without-downloading-the-file


-
FFmpeg error: "avcodec_send_frame" returns "Invalid argument"
17 October 2023, by Paulo Coutinho
I have a problem with the function avcodec_send_frame: it throws "Error sending frame for encoding: Invalid argument" (-22). I have already searched, checked and rechecked, and found nothing; the code closely follows the FFmpeg examples. Can anyone help me? Thanks.

This is my code:


static void callbackAddSubtitle(const Message &m, const Response r)
{
    try
    {
        av_log_set_level(AV_LOG_DEBUG);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Adding subtitle...");

        auto inputOpt = m.get("input");
        auto outputOpt = m.get("output");

        if (!inputOpt.has_value() || !outputOpt.has_value())
        {
            r(std::string{"INVALID-PARAMS"});
            return;
        }

        const std::string &input = inputOpt.value();
        const std::string &output = outputOpt.value();

        // initialize input
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input video...");

        AVFormatContext *inputFormatCtx = avformat_alloc_context();
        if (avformat_open_input(&inputFormatCtx, input.c_str(), nullptr, nullptr) != 0)
        {
            spdlog::error("Failed to open input");
            r(std::string{"ERROR-OPEN-INPUT"});
            return;
        }

        if (avformat_find_stream_info(inputFormatCtx, nullptr) < 0)
        {
            spdlog::error("Failed to find stream information");
            avformat_close_input(&inputFormatCtx);
            r(std::string{"ERROR-FIND-STREAM"});
            return;
        }

        int videoStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
        if (videoStreamIndex < 0)
        {
            spdlog::error("Could not find a video stream");
            r(std::string{"ERROR-FIND-VIDEO-STREAM"});
            return;
        }

        AVRational timeBase = inputFormatCtx->streams[videoStreamIndex]->time_base;

        AVCodecParameters *inputCodecPar = inputFormatCtx->streams[videoStreamIndex]->codecpar;
        const AVCodec *inputCodec = avcodec_find_decoder(inputCodecPar->codec_id);
        AVCodecContext *inputCodecCtx = avcodec_alloc_context3(inputCodec);

        avcodec_parameters_to_context(inputCodecCtx, inputCodecPar);
        avcodec_open2(inputCodecCtx, inputCodec, nullptr);

        // initialize input audio
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input audio...");

        int audioStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
        if (audioStreamIndex < 0)
        {
            spdlog::error("Could not find an audio stream");
            r(std::string{"ERROR-FIND-AUDIO-STREAM"});
            return;
        }

        AVCodecParameters *inputAudioCodecPar = inputFormatCtx->streams[audioStreamIndex]->codecpar;
        const AVCodec *inputAudioCodec = avcodec_find_decoder(inputAudioCodecPar->codec_id);
        AVCodecContext *inputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);

        avcodec_parameters_to_context(inputAudioCodecCtx, inputAudioCodecPar);
        avcodec_open2(inputAudioCodecCtx, inputAudioCodec, nullptr);

        // initialize output video
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output video...");

        AVFormatContext *outputFormatCtx = nullptr;
        avformat_alloc_output_context2(&outputFormatCtx, nullptr, nullptr, output.c_str());
        AVStream *outputStream = avformat_new_stream(outputFormatCtx, nullptr);

        AVCodecContext *outputCodecCtx = avcodec_alloc_context3(inputCodec);
        avcodec_parameters_to_context(outputCodecCtx, inputCodecPar);
        int retOutVideo = avcodec_open2(outputCodecCtx, inputCodec, nullptr);

        if (retOutVideo < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutVideo);
            spdlog::error("Failed to initialize output video: {}", err);
            r(std::string{"ERROR-INIT-OUTPUT-VIDEO"});
            return;
        }

        outputStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        outputStream->codecpar->codec_id = inputCodec->id;
        avcodec_parameters_from_context(outputStream->codecpar, outputCodecCtx);

        if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
        {
            avio_open(&outputFormatCtx->pb, output.c_str(), AVIO_FLAG_WRITE);
        }

        const char *pixelFormatName = getPixelFormatName(outputCodecCtx->pix_fmt);
        spdlog::debug("Pixel Format: {}", pixelFormatName);

        // initialize output audio
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output audio...");

        AVStream *outputAudioStream = avformat_new_stream(outputFormatCtx, nullptr);
        AVCodecContext *outputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);
        avcodec_parameters_to_context(outputAudioCodecCtx, inputAudioCodecPar);
        int retOutAudio = avcodec_open2(outputAudioCodecCtx, inputAudioCodec, nullptr);

        if (retOutAudio < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutAudio);
            spdlog::error("Failed to initialize output audio: {}", err);
            r(std::string{"ERROR-INIT-OUTPUT-AUDIO"});
            return;
        }

        outputAudioStream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
        outputAudioStream->codecpar->codec_id = inputAudioCodec->id;
        avcodec_parameters_from_context(outputAudioStream->codecpar, outputAudioCodecCtx);

        // initialize filters
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing filters...");

        AVFilterGraph *filterGraph = avfilter_graph_alloc();
        if (!filterGraph)
        {
            spdlog::error("Failed to allocate filter graph");
            r(std::string{"ERROR-FILTER-GRAPH"});
            return;
        }

        AVFilterContext *bufferSinkCtx;
        AVFilterContext *bufferSrcCtx;

        const AVFilter *bufferSink = avfilter_get_by_name("buffersink");
        const AVFilter *bufferSrc = avfilter_get_by_name("buffer");

        // input filter
        char filterInArgs[512];
        snprintf(filterInArgs, sizeof(filterInArgs), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d", inputCodecPar->width, inputCodecPar->height, inputCodecCtx->pix_fmt, timeBase.num, timeBase.den, inputCodecCtx->sample_aspect_ratio.num, inputCodecCtx->sample_aspect_ratio.den);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Buffer src args: {}", filterInArgs);

        int retFilterIn = avfilter_graph_create_filter(&bufferSrcCtx, bufferSrc, "in", filterInArgs, nullptr, filterGraph);
        if (retFilterIn < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterIn);
            spdlog::error("Failed to create bufferSrcCtx: {}", err);
            r(std::string{"ERROR-CREATE-FILTER-SRC"});
            return;
        }

        // output filter
        int retFilterOut = avfilter_graph_create_filter(&bufferSinkCtx, bufferSink, "out", nullptr, nullptr, filterGraph);

        if (retFilterOut < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterOut);
            spdlog::error("Failed to create bufferSinkCtx: {}", err);
            r(std::string{"ERROR-CREATE-FILTER-SINK"});
            return;
        }

        enum AVPixelFormat pix_fmts[] = {AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE};
        av_opt_set_int_list(bufferSinkCtx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);

        // add filters to graph and link them
        const char *filterSpec = "drawtext=text='Legenda Adicionada Automaticamente Via FFMPEG e C++': fontcolor=yellow: bordercolor=black: fontfile='/Users/paulo/Downloads/roboto/Roboto-Black.ttf'";
        const AVFilter *filter = avfilter_get_by_name("drawtext");

        AVFilterInOut *outputs = avfilter_inout_alloc();
        AVFilterInOut *inputs = avfilter_inout_alloc();

        outputs->name = av_strdup("in");
        outputs->filter_ctx = bufferSrcCtx;
        outputs->pad_idx = 0;
        outputs->next = nullptr;
        inputs->name = av_strdup("out");
        inputs->filter_ctx = bufferSinkCtx;
        inputs->pad_idx = 0;
        inputs->next = nullptr;

        if (avfilter_graph_parse_ptr(filterGraph, filterSpec, &inputs, &outputs, nullptr) < 0)
        {
            spdlog::error("Failed to parse filter graph");
            r(std::string{"ERROR-PARSE-FILTER"});
            return;
        }

        if (avfilter_graph_config(filterGraph, nullptr) < 0)
        {
            spdlog::error("Failed to configure filter graph");
            r(std::string{"ERROR-CONFIG-FILTER"});
            return;
        }

        // header
        spdlog::debug("[Mapping :: callbackAddSubtitle] Writing header...");

        if (avformat_write_header(outputFormatCtx, nullptr) < 0)
        {
            spdlog::error("Error writing header");
            r(std::string{"ERROR-WRITE-HEADER"});
            return;
        }

        // read frames and write to output
        AVPacket *packet = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        frame->format = inputCodecCtx->pix_fmt;
        frame->width = inputCodecCtx->width;
        frame->height = inputCodecCtx->height;

        AVFrame *filt_frame = av_frame_alloc();

        filt_frame->format = inputCodecCtx->pix_fmt;
        filt_frame->width = inputCodecCtx->width;
        filt_frame->height = inputCodecCtx->height;

        while (av_read_frame(inputFormatCtx, packet) >= 0)
        {
            if (packet->stream_index == videoStreamIndex)
            {
                if (avcodec_send_packet(inputCodecCtx, packet) < 0)
                {
                    spdlog::error("Error sending packet for decoding");
                    r(std::string{"ERROR-SEND-PACKET-DECODE"});
                    return;
                }

                while (avcodec_receive_frame(inputCodecCtx, frame) == 0)
                {
                    // Send the decoded frame to the filter graph
                    if (av_buffersrc_add_frame_flags(bufferSrcCtx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
                    {
                        spdlog::error("Error while feeding the filtergraph");
                        r(std::string{"ERROR-FEED-FILTERGRAPH"});
                        return;
                    }

                    // Receive a frame from the filter graph
                    if (av_buffersink_get_frame(bufferSinkCtx, filt_frame) < 0)
                    {
                        spdlog::error("Error while receiving the filtered frame");
                        r(std::string{"ERROR-RECEIVE-FILTERED-FRAME"});
                        return;
                    }

                    // Send the filtered frame for re-encoding
                    int retSendFrame = avcodec_send_frame(outputCodecCtx, filt_frame);
                    if (retSendFrame < 0)
                    {
                        char err[AV_ERROR_MAX_STRING_SIZE];
                        av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retSendFrame);
                        spdlog::error("Error sending frame for encoding: {}", err);
                        r(std::string{"ERROR-SEND-FRAME-ENCODE"});
                        return;
                    }

                    AVPacket *output_packet = av_packet_alloc();
                    output_packet->data = nullptr;
                    output_packet->size = 0;

                    // Re-encode filt_frame into a packet
                    if (avcodec_receive_packet(outputCodecCtx, output_packet) == 0)
                    {
                        // Write the packet to the output stream
                        av_write_frame(outputFormatCtx, output_packet);
                        av_packet_unref(output_packet);
                    }

                    av_frame_unref(filt_frame);
                }

                // time
                packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base);
                packet->stream_index = videoStreamIndex;

                // write packet to output video stream
                av_interleaved_write_frame(outputFormatCtx, packet);
            }
            else if (packet->stream_index == audioStreamIndex)
            {
                // rescale timestamps
                packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base);
                packet->stream_index = audioStreamIndex;

                // write packet to output audio stream
                av_interleaved_write_frame(outputFormatCtx, packet);
            }

            av_packet_unref(packet);
        }

        av_packet_free(&packet);
        av_frame_free(&frame);
        av_frame_free(&filt_frame);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Writing trailer...");

        if (av_write_trailer(outputFormatCtx) < 0)
        {
            spdlog::error("Error writing trailer");
            r(std::string{"ERROR-WRITE-TRAILER"});
            return;
        }

        // cleanup
        spdlog::debug("[Mapping :: callbackAddSubtitle] Cleaning...");

        if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
        {
            avio_closep(&outputFormatCtx->pb);
        }

        avcodec_free_context(&inputCodecCtx);
        avcodec_free_context(&inputAudioCodecCtx);
        avcodec_free_context(&outputCodecCtx);
        avcodec_free_context(&outputAudioCodecCtx);

        avformat_free_context(inputFormatCtx);
        avformat_free_context(outputFormatCtx);

        r(std::string{"OK"});
    }
    catch (const std::exception &e)
    {
        spdlog::error("Error: {}", e.what());
        r(std::string{"ERROR"});
    }
}



The error is:


[2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Adding subtitle...
[2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Initializing input video...
[NULL @ 0x153604a60] Opening '/Users/paulo/Downloads/movie.mp4' for reading
[file @ 0x6000001fd170] Setting default whitelist 'file,crypto,data'
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] ISO: File Type Major Brand: isom
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 0, edit list 0 - media time: 0, duration: 2669670
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 1, edit list 0 - media time: 1024, duration: 4272096
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] drop a frame at curr_cts: 0 @ 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Before avformat_find_stream_info() pos: 113542488 bytes read:110788 seeks:1 nb_streams:2
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] demuxer injecting skip 1024 / discard 0
[aac @ 0x1536056f0] skip 1024 / discard 0 samples due to side data
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x153604cd0] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x153604cd0] Format yuv420p chosen by get_format().
[h264 @ 0x153604cd0] Reinit context to 1088x1920, pix_fmt: yuv420p
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] All info found
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] After avformat_find_stream_info() pos: 195211 bytes read:305951 seeks:2 frames:2
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing input audio...
[h264 @ 0x143604330] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143604330] Decoding VUI
[h264 @ 0x143604330] nal_unit_type: 8(PPS), nal_ref_idc: 3
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing output video...
[h264 @ 0x143611ec0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143611ec0] Decoding VUI
[h264 @ 0x143611ec0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[file @ 0x6000001f4000] Setting default whitelist 'file,crypto,data'
[2023-10-17 06:30:18.167] [debug] Pixel Format: YUV420P
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing output audio...
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing filters...
[2023-10-17 06:30:18.168] [debug] [Mapping :: callbackAddSubtitle] Buffer src args: video_size=1080x1920:pix_fmt=0:time_base=1/30000:pixel_aspect=1/1
detected 10 logical cores
[in @ 0x6000004ec0b0] Setting 'video_size' to value '1080x1920'
[in @ 0x6000004ec0b0] Setting 'pix_fmt' to value '0'
[in @ 0x6000004ec0b0] Setting 'time_base' to value '1/30000'
[in @ 0x6000004ec0b0] Setting 'pixel_aspect' to value '1/1'
[in @ 0x6000004ec0b0] w:1080 h:1920 pixfmt:yuv420p tb:1/30000 fr:0/1 sar:1/1
[AVFilterGraph @ 0x6000017e8000] Setting 'text' to value 'Legenda Adicionada Automaticamente Via FFMPEG e C++'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontcolor' to value 'yellow'
[AVFilterGraph @ 0x6000017e8000] Setting 'bordercolor' to value 'black'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontfile' to value '/Users/paulo/Downloads/roboto/Roboto-Black.ttf'
[AVFilterGraph @ 0x6000017e8000] query_formats: 3 queried, 2 merged, 0 already done, 0 delayed
[2023-10-17 06:30:18.172] [debug] [Mapping :: callbackAddSubtitle] Writing header...
[h264 @ 0x143604330] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x143604330] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x143604330] Format yuv420p chosen by get_format().
[h264 @ 0x143604330] Reinit context to 1088x1920, pix_fmt: yuv420p
[Parsed_drawtext_0 @ 0x6000004f4160] Copying data in avfilter.
[Parsed_drawtext_0 @ 0x6000004f4160] n:0 t:0.000000 text_w:424 text_h:16 x:0 y:0
[2023-10-17 06:30:18.182] [error] Error sending frame for encoding: Invalid argument
Returned Value: ERROR-SEND-FRAME-ENCODE
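
For context on this return value: avcodec_send_frame() reports AVERROR(EINVAL) (-22) when, among other reasons, the codec context it is given is not an open encoder, or the frame parameters do not match what the context was opened with. Note that the code above builds outputCodecCtx from the codec returned by avcodec_find_decoder(). The snippet below is only an illustrative sketch of a typical encoder-context setup with the same API, not a verified fix for the program above; the function name openVideoEncoder is invented for the example.

extern "C"
{
#include <libavcodec/avcodec.h>
}

// Illustrative sketch only: minimal encoder-context setup before calling avcodec_send_frame().
static AVCodecContext *openVideoEncoder(const AVCodecContext *decCtx, AVRational timeBase)
{
    // An encoding context must be created from avcodec_find_encoder();
    // a context opened with a decoder makes avcodec_send_frame() fail with AVERROR(EINVAL).
    const AVCodec *enc = avcodec_find_encoder(decCtx->codec_id);
    if (!enc)
        return nullptr;

    AVCodecContext *encCtx = avcodec_alloc_context3(enc);
    if (!encCtx)
        return nullptr;

    // Minimum fields an encoder needs before avcodec_open2().
    encCtx->width = decCtx->width;
    encCtx->height = decCtx->height;
    encCtx->pix_fmt = decCtx->pix_fmt;
    encCtx->time_base = timeBase;

    if (avcodec_open2(encCtx, enc, nullptr) < 0)
    {
        avcodec_free_context(&encCtx);
        return nullptr;
    }

    return encCtx;
}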



-
ffmpeg giving Error while decoding stream #0:1: Invalid data found when processing input
12 October 2023, by Aby
I am trying to merge two video files into one using ffmpeg on Windows. The process has proven to work over and over (with over 100 files merged together at some points), but I have come across an input file that causes the process to fail with the errors:


[aac @ 00000142532f74c0] channel element 1.0 is not allocated
Error while decoding stream #0:1: Invalid data found when processing input
[aac @ 00000142532f74c0] channel element 1.0 is not allocated
Error while decoding stream #0:1: Invalid data found when processing input
[aac @ 00000142532f74c0] channel element 1.0 is not allocated
.
.
.



There are three command-line steps to get here, using a concat-inputs.dat file containing:


file E:/..../snippet A.mp4
file E:/..../snippet B.mp4



(Copies of these files can be found at https://filebin.net/77wbowvh7vbklkey/snippet_A.mp4 and https://filebin.net/77wbowvh7vbklkey/snippet_B.mp4)


Step 1


> ffmpeg-6.0-full_build/bin/ffmpeg -y -progress ".Default.mp4.progressinfo.dat" -vsync 0 -f concat -safe 0 -i "E:/...../concat-inputs.dat" -c:v copy -c:a copy -crf 0 -b:v 10M "E:/...../video.Default.mp4"



with the output....


built with gcc 12.2.0 (Rev10, Built by MSYS2 project)

 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint

 libavutil 58. 2.100 / 58. 2.100
 libavcodec 60. 3.100 / 60. 3.100
 libavformat 60. 3.100 / 60. 3.100
 libavdevice 60. 1.100 / 60. 1.100
 libavfilter 9. 3.100 / 9. 3.100
 libswscale 7. 1.100 / 7. 1.100
 libswresample 4. 10.100 / 4. 10.100
 libpostproc 57. 1.100 / 57. 1.100

-vsync is deprecated. Use -fps_mode

Passing a number to -vsync is deprecated, use a string argument as described in the manual.
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001bf88ffe240] Auto-inserting h264_mp4toannexb bitstream filter
Input #0, concat, from 'E:/...../concat-inputs.dat':

 Duration: N/A, start: -0.010667, bitrate: 20382 kb/s

 Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 20043 kb/s, 50 fps, 50 tbr, 12800 tbn

 Metadata:

 handler_name : VideoHandler
 vendor_id : [0][0][0][0]
 encoder : Lavc60.3.100 libx264

 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 339 kb/s

 Metadata:
 handler_name : SoundHandler
 vendor_id : [0][0][0][0]

Output #0, mp4, to 'E:/...../video.Default.mp4':

 Metadata:
 encoder : Lavf60.3.100

 Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 10000 kb/s, 50 fps, 50 tbr, 12800 tbn

 Metadata:

 handler_name : VideoHandler
 vendor_id : [0][0][0][0]

 encoder : Lavc60.3.100 libx264

 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 339 kb/s

 Metadata:

 handler_name : SoundHandler
 vendor_id : [0][0][0][0]

Stream mapping:

 Stream #0:0 -> #0:0 (copy)
 Stream #0:1 -> #0:1 (copy)

Press [q] to stop, [?] for help

frame= 0 fps=0.0 q=-1.0 size= 0kB time=00:00:00.00 bitrate=N/A speed=N/A 
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001bf890653c0] Auto-inserting h264_mp4toannexb bitstream filter

[mp4 @ 000001bf89000580] Non-monotonous DTS in output stream 0:1; previous: 180224, current: 180192; changing to 180225. This may result in incorrect timestamps in the output file.

frame= 210 fps=0.0 q=-1.0 Lsize= 11537kB time=00:00:04.21 bitrate=22433.7kbits/s speed=41.9x

video:11417kB audio:114kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.053312%



Step 2


> ffmpeg-6.0-full_build/bin/ffmpeg -y -progress ".Default.mp4.progressinfo.dat" -vsync 0 -f concat -safe 0 -i "E:/...../concat-inputs.dat" -c:v copy -c:a copy -crf 0 -b:v 10M "E:/...../audio.Default.wav"



which outputs...


ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers

 built with gcc 12.2.0 (Rev10, Built by MSYS2 project)

 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint

 libavutil 58. 2.100 / 58. 2.100
 libavcodec 60. 3.100 / 60. 3.100
 libavformat 60. 3.100 / 60. 3.100
 libavdevice 60. 1.100 / 60. 1.100
 libavfilter 9. 3.100 / 9. 3.100
 libswscale 7. 1.100 / 7. 1.100
 libswresample 4. 10.100 / 4. 10.100
 libpostproc 57. 1.100 / 57. 1.100

-vsync is deprecated. Use -fps_mode

Passing a number to -vsync is deprecated, use a string argument as described in the manual.

[mov,mp4,m4a,3gp,3g2,mj2 @ 00000246d314e240] Auto-inserting h264_mp4toannexb bitstream filter

Input #0, concat, from 'E:/...../concat-inputs.dat':

 Duration: N/A, start: -0.010667, bitrate: 20382 kb/s

 Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 20043 kb/s, 50 fps, 50 tbr, 12800 tbn

 Metadata:

 handler_name : VideoHandler
 vendor_id : [0][0][0][0]
 encoder : Lavc60.3.100 libx264

 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 339 kb/s

 Metadata:

 handler_name : SoundHandler
 vendor_id : [0][0][0][0]

[out#0/wav @ 00000246d31bd240] Codec AVOption b (set bitrate (in bits/s)) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.

Output #0, wav, to 'E:/...../audio.Default.wav':

 Metadata:

 ISFT : Lavf60.3.100

 Stream #0:0(und): Audio: aac (LC) ([255][0][0][0] / 0x00FF), 96000 Hz, 5.1, fltp, 339 kb/s

 Metadata:

 handler_name : SoundHandler
 vendor_id : [0][0][0][0]

Stream mapping:

 Stream #0:1 -> #0:0 (copy)

Press [q] to stop, [?] for help

size= 0kB time=00:00:00.00 bitrate=N/A speed=N/A 
[mov,mp4,m4a,3gp,3g2,mj2 @ 00000246d3b009c0] Auto-inserting h264_mp4toannexb bitstream filter

[wav @ 00000246d3150580] Non-monotonous DTS in output stream 0:0; previous: 180224, current: 180192; changing to 180224. This may result in incorrect timestamps in the output file.

size= 114kB time=00:00:04.21 bitrate= 222.4kbits/s speed= 128x

video:0kB audio:114kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.102561%



Step 3


> ffmpeg-6.0-full_build/bin/ffmpeg -y -progress ".Default.mp4.progressinfo.dat" -i "E:/...../video.Default.mp4" -i "E:/...../audio.Default.wav" -crf 0 -c:v copy -c:a aac "E:/...../Default.mp4"



... which then gives the errors....


ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers

 built with gcc 12.2.0 (Rev10, Built by MSYS2 project)

 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint

 libavutil 58. 2.100 / 58. 2.100
 libavcodec 60. 3.100 / 60. 3.100 
 libavformat 60. 3.100 / 60. 3.100 
 libavdevice 60. 1.100 / 60. 1.100 
 libavfilter 9. 3.100 / 9. 3.100 
 libswscale 7. 1.100 / 7. 1.100 
 libswresample 4. 10.100 / 4. 10.100 
 libpostproc 57. 1.100 / 57. 1.100

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'E:/...../video.Default.mp4':

 Metadata:

 major_brand : isom 
 minor_version : 512 
 compatible_brands: isomiso2avc1mp41 
 encoder : Lavf60.3.100

 Duration: 00:00:04.23, start: 0.000000, bitrate: 22359 kb/s

 Stream #0:0[0x1](und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 22178 kb/s, 49.80 fps, 50 tbr, 12800 tbn (default)

 Metadata:

 handler_name : VideoHandler 
 vendor_id : [0][0][0][0] 
 encoder : Lavc60.3.100 libx264

 Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 221 kb/s (default)

 Metadata:

 handler_name : SoundHandler 
 vendor_id : [0][0][0][0]

[aac @ 000001425315e580] Multiple frames in a packet.

Input #1, wav, from 'E:/...../audio.Default.wav':

 Metadata:

 encoder : Lavf60.3.100

 Duration: 00:00:04.22, bitrate: 221 kb/s

 Stream #1:0: Audio: aac (LC) ([255][0][0][0] / 0x00FF), 96000 Hz, 5.1, fltp, 339 kb/s

Stream mapping:

 Stream #0:0 -> #0:0 (copy) 
 Stream #0:1 -> #0:1 (aac (native) -> aac (native))

Press [q] to stop, [?] for help

Output #0, mp4, to 'E:/...../Default.mp4':

 Metadata:

 major_brand : isom 
 minor_version : 512 
 compatible_brands: isomiso2avc1mp41

 encoder : Lavf60.3.100

 Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 22178 kb/s, 49.80 fps, 50 tbr, 12800 tbn (default)

 Metadata:

 handler_name : VideoHandler 
 vendor_id : [0][0][0][0] 
 encoder : Lavc60.3.100 libx264

 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 341 kb/s (default)

 Metadata:

 handler_name : SoundHandler 
 vendor_id : [0][0][0][0] 
 encoder : Lavc60.3.100 aac

frame= 0 fps=0.0 q=-1.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s speed=N/A 
[aac @ 00000142532f74c0] channel element 1.0 is not allocated

Error while decoding stream #0:1: Invalid data found when processing input 
[aac @ 00000142532f74c0] channel element 1.0 is not allocated
.
.
.



If I were to merge snippet B with snippet B, it would work; it's something about snippet A that is causing the problem.

Is there any way to get around this? What is it about snippet A that is causing the problem, and is there a way to "normalize" it so that it can be merged as part of the "set"?

Note: I just upgraded to ffmpeg 6 after a previous version was giving the same problems, so I will also work on the deprecation messages when I can.