
Media (1)
-
Bee video in portrait format
14 May 2011
Updated: February 2012
Language: French
Type: Video
Other articles (96)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
After it has been activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is operational right away. It is therefore not necessary to go through a configuration step for this.
-
Libraries and binaries specific to video and audio processing
31 January 2010
The following software packages and libraries are used by SPIPmotion in one way or another.
Required binaries: FFmpeg: the main encoder; it transcodes almost all types of video and audio files into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting Ogg files; MediaInfo: retrieves information from most video and audio formats;
Complementary, optional binaries: flvtool2: (...)
On other sites (6610)
-
Invalid frame dimension, invalid data found when processing input, muxing overhead: unknown when concatenating old VOB files | FFMPEG
9 March 2023, by Victor Hartman
I am dealing with multiple errors that I am not able to resolve. I am trying to concatenate four old VOB files into an MP4, using the command below (I already solved a 'pts has no value' error):


.\ffmpeg.exe -fflags +genpts -f concat -i .\files.txt -c copy output.mp4



files.txt looks like this:


file 'VTS_01_1.VOB'
file 'VTS_01_2.VOB'
file 'VTS_01_3.VOB'
file 'VTS_01_4.VOB'



Output looks like this:


ffmpeg version 2023-03-05-git-912ac82a3c-essentials_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
 libavutil 58. 3.100 / 58. 3.100
 libavcodec 60. 6.100 / 60. 6.100
 libavformat 60. 4.100 / 60. 4.100
 libavdevice 60. 2.100 / 60. 2.100
 libavfilter 9. 4.100 / 9. 4.100
 libswscale 7. 2.100 / 7. 2.100
 libswresample 4. 11.100 / 4. 11.100
 libpostproc 57. 2.100 / 57. 2.100
Input #0, concat, from '.\files.txt':
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0: Data: dvd_nav_packet
 Stream #0:1: Video: mpeg2video (Main), yuv420p(tv, top first), 720x480 [SAR 8:9 DAR 4:3], 29.97 fps, 29.97 tbr, 90k tbn
 Side data:
 cpb: bitrate max/min/avg: 7000000/0/0 buffer size: 1835008 vbv_delay: N/A
 Stream #0:2: Audio: ac3, 48000 Hz, stereo, fltp, 448 kb/s
File 'output.mp4' already exists. Overwrite? [y/N] y
[mp4 @ 00000211b44d0900] track 1: codec frame size is not set
Output #0, mp4, to 'output.mp4':
 Metadata:
 encoder : Lavf60.4.100
 Stream #0:0: Video: mpeg2video (Main) (mp4v / 0x7634706D), yuv420p(tv, top first), 720x480 [SAR 8:9 DAR 4:3], q=2-31, 29.97 fps, 29.97 tbr, 90k tbn
 Side data:
 cpb: bitrate max/min/avg: 7000000/0/0 buffer size: 1835008 vbv_delay: N/A
 Stream #0:1: Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, stereo, fltp, 448 kb/s
Stream mapping:
 Stream #0:1 -> #0:0 (copy)
 Stream #0:2 -> #0:1 (copy)
Press [q] to stop, [?] for help
[mpeg2video @ 00000211b44a5f00] Invalid frame dimensions 0x0. bitrate=5521.6kbits/s speed= 643x
 Last message repeated 3 times
[concat @ 00000211b449b500] DTS 137507370 < 137525388 out of order
[mp4 @ 00000211b44d0900] Non-monotonous DTS in output stream 0:0; previous: 137525388, current: 137507370; changing to 137525389. This may result in incorrect timestamps in the output file.
[mp4 @ 00000211b44d0900] Non-monotonous DTS in output stream 0:0; previous: 137525389, current: 137510250; changing to 137525390. This may result in incorrect timestamps in the output file.
[mp4 @ 00000211b44d0900] Non-monotonous DTS in output stream 0:0; previous: 137525390, current: 137510250; changing to 137525391. This may result in incorrect timestamps in the output file.
[mp4 @ 00000211b44d0900] Non-monotonous DTS in output stream 0:0; previous: 137525391, current: 137513130; changing to 137525392. This may result in incorrect timestamps in the output file.
[mp4 @ 00000211b44d0900] Non-monotonous DTS in output stream 0:0; previous: 137525392, current: 137516010; changing to 137525393. This may result in incorrect timestamps in the output file.
[mp4 @ 00000211b44d0900] Non-monotonous DTS in output stream 0:0; previous: 137525393, current: 137518890; changing to 137525394. This may result in incorrect timestamps in the output file.
[mp4 @ 00000211b44d0900] Non-monotonous DTS in output stream 0:0; previous: 137525394, current: 137521770; changing to 137525395. This may result in incorrect timestamps in the output file.
[mp4 @ 00000211b44d0900] Non-monotonous DTS in output stream 0:0; previous: 137525395, current: 137524650; changing to 137525396. This may result in incorrect timestamps in the output file.
av_interleaved_write_frame(): Invalid data found when processing input
[out#0/mp4 @ 00000211b44f96c0] Error muxing a packet
[out#0/mp4 @ 00000211b44f96c0] Error writing trailer: Invalid data found when processing input
frame=45815 fps=19808 q=-1.0 Lsize= 1027328kB time=00:25:29.51 bitrate=5502.3kbits/s speed= 661x
video:943791kB audio:83556kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!
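
For comparison, here is a sketch of the concat protocol approach that is sometimes used for DVD VOB (MPEG-PS) sources, since it joins the files at the byte level before remuxing. This is an alternative, not the command used above, and whether it avoids the invalid-frame-dimension and non-monotonous DTS messages seen here is an assumption:


.\ffmpeg.exe -fflags +genpts -i "concat:VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB|VTS_01_4.VOB" -c copy output.mp4


Another commonly tried variant re-encodes instead of stream copying (for example -c:v libx264 -c:a aac) so that the MP4 muxer receives freshly generated timestamps.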



-
Error running a PowerShell script to concatenate AVI files, when a very similar script works fine
1 July 2023, by hw22s
When I run the following PowerShell script to concatenate the .avi files in a folder, I get an error.


Script:


# Set the folder path where the AVI files are located
$folderPath = "C:\Users\HomePC\hswlab\Desktop\videos\eyeblink\022223\2023_04_24\16_15_23\My_WebCam"

# Get all AVI files in the folder
$aviFiles = Get-ChildItem -Path $folderPath -Filter "*.avi" | Sort-Object Name

# Check if there are at least 2 AVI files for concatenation
if ($aviFiles.Count -ge 2) {
 $videoFiles = $aviFiles.FullName

 Write-Host "Input Files for Concatenation:"
 foreach ($file in $aviFiles) {
 Write-Host $file.Name
 }

 $outputFile = Join-Path -Path $folderPath -ChildPath "concatenated.avi"

 # Create the FFmpeg command for concatenation
 $concatArguments = "-f", "concat", "-i", "`"concat:$videoFiles`"", "-c", "copy", "`"$outputFile`""
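 # Note: $videoFiles is an array, so "$videoFiles" joins the paths with spaces inside one string;
 # also, -f concat (the concat demuxer) expects a list file, while the "concat:" prefix belongs to
 # the concat protocol, which is a different mechanism.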

 $command = "ffmpeg $concatArguments"
 Write-Host "FFmpeg Command: $command"

 try {
 $process = Start-Process -FilePath "ffmpeg" -ArgumentList $concatArguments -NoNewWindow -PassThru -Wait -ErrorAction Stop
 Write-Host "Concatenation complete. Output file: $outputFile"
 } catch {
 Write-Host "Error occurred while executing FFmpeg command:"
 Write-Host "Error message: $($_.Exception.Message)"
 }
} else {
 Write-Host "Not enough AVI files found in the folder for concatenation."
}



Output with the error (needless to say, no concatenated file is actually produced):


Input Files for Concatenation:
webcam0.avi
webcam1.avi
FFmpeg Command: ffmpeg -f concat -i "concat:C:\Users\HomePC\hswlab\Desktop\videos\eyeblink\022223\2023_04_24\16_15_23\My_WebCam\webcam0.avi C:\Users\HomePC\hswlab\Desktop\videos\eyeblink\022223\2023_04_24\16_15_23\My_WebCam\webcam1.avi" -c copy "C:\Users\HomePC\hswlab\Desktop\videos\eyeblink\022223\2023_04_24\16_15_23\My_WebCam\concatenated.avi"
ffmpeg version 2023-06-27-git-9b6d191a66-essentials_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-sdl2 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libgme --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libtheora --enable-libvo-amrwbenc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-librubberband
 libavutil 58. 13.101 / 58. 13.101
 libavcodec 60. 21.100 / 60. 21.100
 libavformat 60. 9.100 / 60. 9.100
 libavdevice 60. 2.100 / 60. 2.100
 libavfilter 9. 8.102 / 9. 8.102
 libswscale 7. 3.100 / 7. 3.100
 libswresample 4. 11.100 / 4. 11.100
 libpostproc 57. 2.100 / 57. 2.100
[in#0 @ 000001d2f05f1b00] Error opening input: Invalid argument
Concatenation complete. Output file: C:\Users\HomePC\hswlab\Desktop\videos\eyeblink\022223\2023_04_24\16_15_23\My_WebCam\concatenated.avi



But this script, which ostensibly does the same thing to the same files, works fine:


$file1 = "C:\Users\HomePC\hswlab\Desktop\videos\eyeblink\022223\2023_04_24\16_15_23\My_WebCam\webcam0.avi" # Path to the first video file
$file2 = "C:\Users\HomePC\hswlab\Desktop\videos\eyeblink\022223\2023_04_24\16_15_23\My_WebCam\webcam1.avi" # Path to the second video file
$outputFile = "C:\Users\HomePC\hswlab\Desktop\videos\eyeblink\022223\2023_04_24\16_15_23\My_WebCam\concat.avi" # Path to the output concatenated video file

# Create a temporary batch script
$batchScriptPath = [System.IO.Path]::GetTempFileName() + ".bat"
@'
@echo off
ffmpeg -i "%~1" -i "%~2" -filter_complex "[0:v][1:v]concat=n=2:v=1[outv]" -map "[outv]" "%~3"
'@ | Set-Content -Path $batchScriptPath

# Execute the temporary batch script
try {
 & $batchScriptPath $file1 $file2 $outputFile
 Write-Host "Concatenation complete. Output file: $outputFile"
} catch {
 Write-Host "Error occurred while executing FFmpeg command:"
 Write-Host $_.Exception.Message
}

# Remove the temporary batch script
Remove-Item -Path $batchScriptPath -Force



I can't figure out what is wrong with the first script. I have tried many variations of it but cannot get it to work.


Any help?
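
For reference, below is a minimal sketch of driving FFmpeg's concat demuxer from PowerShell by writing a list file first. Only the folder path is taken from the script above; the rest is an assumption about the intended approach:


# Sketch: build a concat-demuxer list file and hand it to ffmpeg (assumes ffmpeg is on PATH)
$folderPath = "C:\Users\HomePC\hswlab\Desktop\videos\eyeblink\022223\2023_04_24\16_15_23\My_WebCam"
$aviFiles = Get-ChildItem -Path $folderPath -Filter "*.avi" | Sort-Object Name
$listFile = Join-Path -Path $folderPath -ChildPath "files.txt"

# The concat demuxer expects one line per input in the form: file 'full\path\to\input.avi'
$aviFiles | ForEach-Object { "file '$($_.FullName)'" } | Set-Content -Path $listFile

$outputFile = Join-Path -Path $folderPath -ChildPath "concatenated.avi"

# -safe 0 is required because the list contains absolute paths
ffmpeg -f concat -safe 0 -i $listFile -c copy $outputFile


Note also that the working second script uses the concat filter, which decodes and re-encodes the video (and maps only the video stream), while the demuxer approach with -c copy only remuxes, so the two are not interchangeable in general.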


-
FFmpeg error: avcodec_send_frame returns "Invalid argument"
17 October 2023, by Paulo Coutinho
I have a problem with the function avcodec_send_frame, which throws the error "Error sending frame for encoding: Invalid argument" (-22). I have already searched, checked, and rechecked, and found nothing. The code closely follows the FFmpeg examples. Can anyone help me? Thanks.

This is my code:


static void callbackAddSubtitle(const Message &m, const Response r)
{
 try
 {
 av_log_set_level(AV_LOG_DEBUG);

 spdlog::debug("[Mapping :: callbackAddSubtitle] Adding subtitle...");

 auto inputOpt = m.get("input");
 auto outputOpt = m.get("output");

 if (!inputOpt.has_value() || !outputOpt.has_value())
 {
 r(std::string{"INVALID-PARAMS"});
 return;
 }

 const std::string &input = inputOpt.value();
 const std::string &output = outputOpt.value();

 // initialize input
 spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input video...");

 AVFormatContext *inputFormatCtx = avformat_alloc_context();
 if (avformat_open_input(&inputFormatCtx, input.c_str(), nullptr, nullptr) != 0)
 {
 spdlog::error("Failed to open input");
 r(std::string{"ERROR-OPEN-INPUT"});
 return;
 }

 if (avformat_find_stream_info(inputFormatCtx, nullptr) < 0)
 {
 spdlog::error("Failed to find stream information");
 avformat_close_input(&inputFormatCtx);
 r(std::string{"ERROR-FIND-STREAM"});
 return;
 }

 int videoStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
 if (videoStreamIndex < 0)
 {
 spdlog::error("Could not find a video stream");
 r(std::string{"ERROR-FIND-VIDEO-STREAM"});
 return;
 }

 AVRational timeBase = inputFormatCtx->streams[videoStreamIndex]->time_base;

 AVCodecParameters *inputCodecPar = inputFormatCtx->streams[videoStreamIndex]->codecpar;
 const AVCodec *inputCodec = avcodec_find_decoder(inputCodecPar->codec_id);
 AVCodecContext *inputCodecCtx = avcodec_alloc_context3(inputCodec);

 avcodec_parameters_to_context(inputCodecCtx, inputCodecPar);
 avcodec_open2(inputCodecCtx, inputCodec, nullptr);

 // initialize input audio
 spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input audio...");

 int audioStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
 if (audioStreamIndex < 0)
 {
 spdlog::error("Could not find an audio stream");
 r(std::string{"ERROR-FIND-AUDIO-STREAM"});
 return;
 }

 AVCodecParameters *inputAudioCodecPar = inputFormatCtx->streams[audioStreamIndex]->codecpar;
 const AVCodec *inputAudioCodec = avcodec_find_decoder(inputAudioCodecPar->codec_id);
 AVCodecContext *inputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);

 avcodec_parameters_to_context(inputAudioCodecCtx, inputAudioCodecPar);
 avcodec_open2(inputAudioCodecCtx, inputAudioCodec, nullptr);

 // initialize output video
 spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output video...");

 AVFormatContext *outputFormatCtx = nullptr;
 avformat_alloc_output_context2(&outputFormatCtx, nullptr, nullptr, output.c_str());
 AVStream *outputStream = avformat_new_stream(outputFormatCtx, nullptr);

 AVCodecContext *outputCodecCtx = avcodec_alloc_context3(inputCodec);
 avcodec_parameters_to_context(outputCodecCtx, inputCodecPar);
 int retOutVideo = avcodec_open2(outputCodecCtx, inputCodec, nullptr);
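 // Note: inputCodec was obtained with avcodec_find_decoder(), so the avcodec_open2() call above
 // opens outputCodecCtx as a decoder context rather than an encoder context.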

 if (retOutVideo < 0)
 {
 char err[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutVideo);
 spdlog::error("Failed to initialize output video: {}", err);
 r(std::string{"ERROR-INIT-OUTPUT-VIDEO"});
 return;
 }

 outputStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 outputStream->codecpar->codec_id = inputCodec->id;
 avcodec_parameters_from_context(outputStream->codecpar, outputCodecCtx);

 if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
 {
 avio_open(&outputFormatCtx->pb, output.c_str(), AVIO_FLAG_WRITE);
 }

 const char *pixelFormatName = getPixelFormatName(outputCodecCtx->pix_fmt);
 spdlog::debug("Pixel Format: {}", pixelFormatName);

 // initialize output audio
 spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output audio...");

 AVStream *outputAudioStream = avformat_new_stream(outputFormatCtx, nullptr);
 AVCodecContext *outputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);
 avcodec_parameters_to_context(outputAudioCodecCtx, inputAudioCodecPar);
 int retOutAudio = avcodec_open2(outputAudioCodecCtx, inputAudioCodec, nullptr);

 if (retOutAudio < 0)
 {
 char err[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutAudio);
 spdlog::error("Failed to initialize output audio: {}", err);
 r(std::string{"ERROR-INIT-OUTPUT-AUDIO"});
 return;
 }

 outputAudioStream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
 outputAudioStream->codecpar->codec_id = inputAudioCodec->id;
 avcodec_parameters_from_context(outputAudioStream->codecpar, outputAudioCodecCtx);

 // initialize filters
 spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing filters...");

 AVFilterGraph *filterGraph = avfilter_graph_alloc();
 if (!filterGraph)
 {
 spdlog::error("Failed to allocate filter graph");
 r(std::string{"ERROR-FILTER-GRAPH"});
 return;
 }

 AVFilterContext *bufferSinkCtx;
 AVFilterContext *bufferSrcCtx;

 const AVFilter *bufferSink = avfilter_get_by_name("buffersink");
 const AVFilter *bufferSrc = avfilter_get_by_name("buffer");

 // input filter
 char filterInArgs[512];
 snprintf(filterInArgs, sizeof(filterInArgs), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d", inputCodecPar->width, inputCodecPar->height, inputCodecCtx->pix_fmt, timeBase.num, timeBase.den, inputCodecCtx->sample_aspect_ratio.num, inputCodecCtx->sample_aspect_ratio.den);

 spdlog::debug("[Mapping :: callbackAddSubtitle] Buffer src args: {}", filterInArgs);

 int retFilterIn = avfilter_graph_create_filter(&bufferSrcCtx, bufferSrc, "in", filterInArgs, nullptr, filterGraph);
 if (retFilterIn < 0)
 {
 char err[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterIn);
 spdlog::error("Failed to create bufferSrcCtx: {}", err);
 r(std::string{"ERROR-CREATE-FILTER-SRC"});
 return;
 }

 // output filter
 int retFilterOut = avfilter_graph_create_filter(&bufferSinkCtx, bufferSink, "out", nullptr, nullptr, filterGraph);

 if (retFilterOut < 0)
 {
 char err[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterOut);
 spdlog::error("Failed to create bufferSinkCtx: {}", err);
 r(std::string{"ERROR-CREATE-FILTER-SINK"});
 return;
 }

 enum AVPixelFormat pix_fmts[] = {AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE};
 av_opt_set_int_list(bufferSinkCtx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);

 // add filters to graph and link them
 const char *filterSpec = "drawtext=text='Legenda Adicionada Automaticamente Via FFMPEG e C++': fontcolor=yellow: bordercolor=black: fontfile='/Users/paulo/Downloads/roboto/Roboto-Black.ttf'";
 const AVFilter *filter = avfilter_get_by_name("drawtext");

 AVFilterInOut *outputs = avfilter_inout_alloc();
 AVFilterInOut *inputs = avfilter_inout_alloc();

 outputs->name = av_strdup("in");
 outputs->filter_ctx = bufferSrcCtx;
 outputs->pad_idx = 0;
 outputs->next = nullptr;
 inputs->name = av_strdup("out");
 inputs->filter_ctx = bufferSinkCtx;
 inputs->pad_idx = 0;
 inputs->next = nullptr;

 if (avfilter_graph_parse_ptr(filterGraph, filterSpec, &inputs, &outputs, nullptr) < 0)
 {
 spdlog::error("Failed to parse filter graph");
 r(std::string{"ERROR-PARSE-FILTER"});
 return;
 }

 if (avfilter_graph_config(filterGraph, nullptr) < 0)
 {
 spdlog::error("Failed to configure filter graph");
 r(std::string{"ERROR-CONFIG-FILTER"});
 return;
 }

 // header
 spdlog::debug("[Mapping :: callbackAddSubtitle] Writing header...");

 if (avformat_write_header(outputFormatCtx, nullptr) < 0)
 {
 spdlog::error("Error writing header");
 r(std::string{"ERROR-WRITE-HEADER"});
 return;
 }

 // read frames and write to output
 AVPacket *packet = av_packet_alloc();
 AVFrame *frame = av_frame_alloc();

 frame->format = inputCodecCtx->pix_fmt;
 frame->width = inputCodecCtx->width;
 frame->height = inputCodecCtx->height;

 AVFrame *filt_frame = av_frame_alloc();

 filt_frame->format = inputCodecCtx->pix_fmt;
 filt_frame->width = inputCodecCtx->width;
 filt_frame->height = inputCodecCtx->height;

 while (av_read_frame(inputFormatCtx, packet) >= 0)
 {
 if (packet->stream_index == videoStreamIndex)
 {
 if (avcodec_send_packet(inputCodecCtx, packet) < 0)
 {
 spdlog::error("Error sending packet for decoding");
 r(std::string{"ERROR-SEND-PACKET-DECODE"});
 return;
 }

 while (avcodec_receive_frame(inputCodecCtx, frame) == 0)
 {
 // Send the decoded frame to the filter graph
 if (av_buffersrc_add_frame_flags(bufferSrcCtx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
 {
 spdlog::error("Error while feeding the filtergraph");
 r(std::string{"ERROR-FEED-FILTERGRAPH"});
 return;
 }

 // Receive a frame from the filter graph
 if (av_buffersink_get_frame(bufferSinkCtx, filt_frame) < 0)
 {
 spdlog::error("Error while receiving the filtered frame");
 r(std::string{"ERROR-RECEIVE-FILTERED-FRAME"});
 return;
 }

 // Send the filtered frame for re-encoding
 int retSendFrame = avcodec_send_frame(outputCodecCtx, filt_frame);
 if (retSendFrame < 0)
 {
 char err[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retSendFrame);
 spdlog::error("Error sending frame for encoding: {}", err);
 r(std::string{"ERROR-SEND-FRAME-ENCODE"});
 return;
 }

 AVPacket *output_packet = av_packet_alloc();
 output_packet->data = nullptr;
 output_packet->size = 0;

 // Re-encode filt_frame into a packet
 if (avcodec_receive_packet(outputCodecCtx, output_packet) == 0)
 {
 // Write the packet to the output stream
 av_write_frame(outputFormatCtx, output_packet);
 av_packet_unref(output_packet);
 }

 av_frame_unref(filt_frame);
 }

 // time
 packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base);
 packet->stream_index = videoStreamIndex;

 // write packet to output video stream
 av_interleaved_write_frame(outputFormatCtx, packet);
 }
 else if (packet->stream_index == audioStreamIndex)
 {
 // rescale timestamps
 packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base);
 packet->stream_index = audioStreamIndex;

 // write packet to output audio stream
 av_interleaved_write_frame(outputFormatCtx, packet);
 }

 av_packet_unref(packet);
 }

 av_packet_free(&packet);
 av_frame_free(&frame);
 av_frame_free(&filt_frame);

 spdlog::debug("[Mapping :: callbackAddSubtitle] Writing trailer...");

 if (av_write_trailer(outputFormatCtx) < 0)
 {
 spdlog::error("Error writing trailer");
 r(std::string{"ERROR-WRITE-TRAILER"});
 return;
 }

 // cleanup
 spdlog::debug("[Mapping :: callbackAddSubtitle] Cleaning...");

 if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
 {
 avio_closep(&outputFormatCtx->pb);
 }

 avcodec_free_context(&inputCodecCtx);
 avcodec_free_context(&inputAudioCodecCtx);
 avcodec_free_context(&outputCodecCtx);
 avcodec_free_context(&outputAudioCodecCtx);

 avformat_free_context(inputFormatCtx);
 avformat_free_context(outputFormatCtx);

 r(std::string{"OK"});
 }
 catch (const std::exception &e)
 {
 spdlog::error("Error: {}", e.what());
 r(std::string{"ERROR"});
 }
}



The error is:


[2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Adding subtitle...
[2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Initializing input video...
[NULL @ 0x153604a60] Opening '/Users/paulo/Downloads/movie.mp4' for reading
[file @ 0x6000001fd170] Setting default whitelist 'file,crypto,data'
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] ISO: File Type Major Brand: isom
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 0, edit list 0 - media time: 0, duration: 2669670
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 1, edit list 0 - media time: 1024, duration: 4272096
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] drop a frame at curr_cts: 0 @ 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Before avformat_find_stream_info() pos: 113542488 bytes read:110788 seeks:1 nb_streams:2
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] demuxer injecting skip 1024 / discard 0
[aac @ 0x1536056f0] skip 1024 / discard 0 samples due to side data
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x153604cd0] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x153604cd0] Format yuv420p chosen by get_format().
[h264 @ 0x153604cd0] Reinit context to 1088x1920, pix_fmt: yuv420p
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] All info found
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] After avformat_find_stream_info() pos: 195211 bytes read:305951 seeks:2 frames:2
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing input audio...
[h264 @ 0x143604330] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143604330] Decoding VUI
[h264 @ 0x143604330] nal_unit_type: 8(PPS), nal_ref_idc: 3
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing output video...
[h264 @ 0x143611ec0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143611ec0] Decoding VUI
[h264 @ 0x143611ec0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[file @ 0x6000001f4000] Setting default whitelist 'file,crypto,data'
[2023-10-17 06:30:18.167] [debug] Pixel Format: YUV420P
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing output audio...
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing filters...
[2023-10-17 06:30:18.168] [debug] [Mapping :: callbackAddSubtitle] Buffer src args: video_size=1080x1920:pix_fmt=0:time_base=1/30000:pixel_aspect=1/1
detected 10 logical cores
[in @ 0x6000004ec0b0] Setting 'video_size' to value '1080x1920'
[in @ 0x6000004ec0b0] Setting 'pix_fmt' to value '0'
[in @ 0x6000004ec0b0] Setting 'time_base' to value '1/30000'
[in @ 0x6000004ec0b0] Setting 'pixel_aspect' to value '1/1'
[in @ 0x6000004ec0b0] w:1080 h:1920 pixfmt:yuv420p tb:1/30000 fr:0/1 sar:1/1
[AVFilterGraph @ 0x6000017e8000] Setting 'text' to value 'Legenda Adicionada Automaticamente Via FFMPEG e C++'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontcolor' to value 'yellow'
[AVFilterGraph @ 0x6000017e8000] Setting 'bordercolor' to value 'black'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontfile' to value '/Users/paulo/Downloads/roboto/Roboto-Black.ttf'
[AVFilterGraph @ 0x6000017e8000] query_formats: 3 queried, 2 merged, 0 already done, 0 delayed
[2023-10-17 06:30:18.172] [debug] [Mapping :: callbackAddSubtitle] Writing header...
[h264 @ 0x143604330] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x143604330] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x143604330] Format yuv420p chosen by get_format().
[h264 @ 0x143604330] Reinit context to 1088x1920, pix_fmt: yuv420p
[Parsed_drawtext_0 @ 0x6000004f4160] Copying data in avfilter.
[Parsed_drawtext_0 @ 0x6000004f4160] n:0 t:0.000000 text_w:424 text_h:16 x:0 y:0
[2023-10-17 06:30:18.182] [error] Error sending frame for encoding: Invalid argument
Returned Value: ERROR-SEND-FRAME-ENCODE
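
A note on the failing call, based on the FFmpeg documentation rather than on anything stated in the post: avcodec_send_frame() feeds raw frames to an encoder, and it returns AVERROR(EINVAL) when the context it is given is not opened, is a decoder, or requires a flush. In the code above, outputCodecCtx is allocated and opened with inputCodec, which came from avcodec_find_decoder(), so it is a decoder context. Below is a minimal sketch of an encoder-side setup; the helper name and parameter choices are placeholders, not something taken from the post:


// Minimal sketch: open a genuine encoder context for the output video stream.
// Assumes inputCodecCtx is an opened decoder context for the source video and
// timeBase is the input stream's time base (both exist in the code above).
extern "C" {
#include <libavcodec/avcodec.h>
}

static AVCodecContext *openVideoEncoder(const AVCodecContext *inputCodecCtx, AVRational timeBase)
{
    // Look up an encoder for the same codec id as the input (e.g. H.264).
    const AVCodec *encoder = avcodec_find_encoder(inputCodecCtx->codec_id);
    if (!encoder)
        return nullptr;

    AVCodecContext *encCtx = avcodec_alloc_context3(encoder);
    if (!encCtx)
        return nullptr;

    // Copy the basic frame parameters from the decoder side.
    encCtx->width = inputCodecCtx->width;
    encCtx->height = inputCodecCtx->height;
    encCtx->pix_fmt = inputCodecCtx->pix_fmt;   // must be a format this encoder supports
    encCtx->time_base = timeBase;
    encCtx->sample_aspect_ratio = inputCodecCtx->sample_aspect_ratio;
    // Muxers with AVFMT_GLOBALHEADER (such as MP4) also expect
    // encCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER.

    if (avcodec_open2(encCtx, encoder, nullptr) < 0) {
        avcodec_free_context(&encCtx);
        return nullptr;
    }

    // Frames passed to avcodec_send_frame(encCtx, frame) can now be encoded.
    return encCtx;
}


With an encoder context like this, outputStream->codecpar would be filled from it via avcodec_parameters_from_context() before avformat_write_header(), and packets would then be drained with avcodec_receive_packet() on the same context.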