
Other articles (88)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the fixes needed to add it (...)

  • Taking part in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up on the translators' mailing list to ask for more information.
    At the moment, MediaSPIP is only available in French and (...)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each one remains independently configurable, so the minimum user status required can be changed, notably for: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;

On other sites (12525)

  • Convert WebRTC track stream to URL (RTSP/UDP/RTP/HTTP) in video tag

    19 July 2020, by Zeeshan Younis

    I am new to WebRTC. I have set up a client/server connection: on the client I choose a webcam and post its stream to the server as a track, and on the server side I receive that track and assign its stream to a video source. Everything works so far; the problem is that I am now adding AI (artificial intelligence) and I want to convert my track stream into a URL, maybe UDP/RTSP/RTP etc., so the AI can use that URL for object detection. I don't know how to convert a track stream into a URL.
    There are a couple of packages such as https://ffmpeg.org/ and RTP-to-WebRTC etc. I am using Node.js, Socket.io and WebRTC; below you can check my client- and server-side code for getting and posting the stream. I am following this GitHub code: https://github.com/Basscord/webrtc-video-broadcast.
    My main concern is to expose the track as a URL for the video tag. Is this possible or not? Any suggestion would be appreciated (see the sketch after the code below).
    Server.js

    This is the Node.js server code:
    const express = require("express");
const app = express();

let broadcaster;
const port = 4000;

const http = require("http");
const server = http.createServer(app);

const io = require("socket.io")(server);
app.use(express.static(__dirname + "/public"));

io.sockets.on("error", e => console.log(e));
io.sockets.on("connection", socket => {
  socket.on("broadcaster", () => {
    broadcaster = socket.id;
    socket.broadcast.emit("broadcaster");
  });
  socket.on("watcher", () => {
    socket.to(broadcaster).emit("watcher", socket.id);
  });
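  // Relay SDP offers/answers and ICE candidates between specific peers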
  socket.on("offer", (id, message) => {
    socket.to(id).emit("offer", socket.id, message);
  });
  socket.on("answer", (id, message) => {
    socket.to(id).emit("answer", socket.id, message);
  });
  socket.on("candidate", (id, message) => {
    socket.to(id).emit("candidate", socket.id, message);
  });
  socket.on("disconnect", () => {
    socket.to(broadcaster).emit("disconnectPeer", socket.id);
  });
});
server.listen(port, () => console.log(`Server is running on port ${port}`));

    Broadcast.js

    This is the code that emits the stream (track):
    const peerConnections = {};
const config = {
  iceServers: [
    {
      urls: ["stun:stun.l.google.com:19302"]
    }
  ]
};

const socket = io.connect(window.location.origin);

socket.on("answer", (id, description) => {
  peerConnections[id].setRemoteDescription(description);
});

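// A new watcher arrived: create a dedicated peer connection and add our tracks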
socket.on("watcher", id => {
  const peerConnection = new RTCPeerConnection(config);
  peerConnections[id] = peerConnection;

  let stream = videoElement.srcObject;
  stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));

  peerConnection.onicecandidate = event => {
    if (event.candidate) {
      socket.emit("candidate", id, event.candidate);
    }
  };

  peerConnection
    .createOffer()
    .then(sdp => peerConnection.setLocalDescription(sdp))
    .then(() => {
      socket.emit("offer", id, peerConnection.localDescription);
    });
});

socket.on("candidate", (id, candidate) => {
  peerConnections[id].addIceCandidate(new RTCIceCandidate(candidate));
});

socket.on("disconnectPeer", id => {
  peerConnections[id].close();
  delete peerConnections[id];
});

window.onunload = window.onbeforeunload = () => {
  socket.close();
};

// Get camera and microphone
const videoElement = document.querySelector("video");
const audioSelect = document.querySelector("select#audioSource");
const videoSelect = document.querySelector("select#videoSource");

audioSelect.onchange = getStream;
videoSelect.onchange = getStream;

getStream()
  .then(getDevices)
  .then(gotDevices);

function getDevices() {
  return navigator.mediaDevices.enumerateDevices();
}

function gotDevices(deviceInfos) {
  window.deviceInfos = deviceInfos;
  for (const deviceInfo of deviceInfos) {
    const option = document.createElement("option");
    option.value = deviceInfo.deviceId;
    if (deviceInfo.kind === "audioinput") {
      option.text = deviceInfo.label || `Microphone ${audioSelect.length + 1}`;
      audioSelect.appendChild(option);
    } else if (deviceInfo.kind === "videoinput") {
      option.text = deviceInfo.label || `Camera ${videoSelect.length + 1}`;
      videoSelect.appendChild(option);
    }
  }
}

function getStream() {
  if (window.stream) {
    window.stream.getTracks().forEach(track => {
      track.stop();
    });
  }
  const audioSource = audioSelect.value;
  const videoSource = videoSelect.value;
  const constraints = {
    audio: { deviceId: audioSource ? { exact: audioSource } : undefined },
    video: { deviceId: videoSource ? { exact: videoSource } : undefined }
  };
  return navigator.mediaDevices
    .getUserMedia(constraints)
    .then(gotStream)
    .catch(handleError);
}

function gotStream(stream) {
  window.stream = stream;
  audioSelect.selectedIndex = [...audioSelect.options].findIndex(
    option => option.text === stream.getAudioTracks()[0].label
  );
  videoSelect.selectedIndex = [...videoSelect.options].findIndex(
    option => option.text === stream.getVideoTracks()[0].label
  );
  videoElement.srcObject = stream;
  socket.emit("broadcaster");
}

function handleError(error) {
  console.error("Error: ", error);
}

    RemoteServer.js

    This code receives the track and assigns it to the video tag:
    let peerConnection;
const config = {
  iceServers: [
    {
      urls: ["stun:stun.l.google.com:19302"]
    }
  ]
};

const socket = io.connect(window.location.origin);
const video = document.querySelector("video");

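// The broadcaster sent an offer: answer it and attach the incoming track to the video element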
socket.on("offer", (id, description) => {
  peerConnection = new RTCPeerConnection(config);
  peerConnection
    .setRemoteDescription(description)
    .then(() => peerConnection.createAnswer())
    .then(sdp => peerConnection.setLocalDescription(sdp))
    .then(() => {
      socket.emit("answer", id, peerConnection.localDescription);
    });
  peerConnection.ontrack = event => {
    video.srcObject = event.streams[0];
  };
  peerConnection.onicecandidate = event => {
    if (event.candidate) {
      socket.emit("candidate", id, event.candidate);
    }
  };
});

socket.on("candidate", (id, candidate) => {
  peerConnection
    .addIceCandidate(new RTCIceCandidate(candidate))
    .catch(e => console.error(e));
});

socket.on("connect", () => {
  socket.emit("watcher");
});

socket.on("broadcaster", () => {
  socket.emit("watcher");
});

socket.on("disconnectPeer", () => {
  peerConnection.close();
});

window.onunload = window.onbeforeunload = () => {
  socket.close();
};
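
    The pieces above only move the track between browsers; a MediaStreamTrack cannot be exposed as an RTSP/RTP/UDP URL directly. The usual pattern is a server-side WebRTC peer that forwards the track's RTP packets to a local port, plus ffmpeg to republish them under a URL. Below is a minimal, hypothetical Node.js sketch of the ffmpeg half only: it assumes RTP already arrives on 127.0.0.1:5004 as described by a hand-written stream.sdp, and that an RTSP server (MediaMTX, for example) is listening on port 8554; none of these names come from the code above.

    const { spawn } = require("child_process");

// Read the incoming RTP stream (described by stream.sdp) and republish it
// over RTSP, where the AI service can open it like any other video URL.
const ffmpeg = spawn("ffmpeg", [
  "-protocol_whitelist", "file,udp,rtp",
  "-i", "stream.sdp",            // SDP file describing the RTP on 127.0.0.1:5004
  "-c:v", "libx264",             // re-encode to H.264
  "-preset", "ultrafast",
  "-tune", "zerolatency",        // keep latency low for live detection
  "-f", "rtsp",
  "rtsp://127.0.0.1:8554/live"   // assumed MediaMTX ingest URL
]);

ffmpeg.stderr.on("data", chunk => process.stderr.write(chunk));
ffmpeg.on("close", code => console.log(`ffmpeg exited with code ${code}`));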

  • Multiple format changes in FFmpeg for thermal camera

    6 February 2023, by Greynol4

    I'm having trouble building a command to process the raw output of a UVC thermal camera so that it can be colorized and then written to a virtual device, with the intention of streaming it over RTSP. This is on a Raspberry Pi 3B+ running 32-bit Bullseye.

    The original command, which works perfectly for previewing, is:
    ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo - | ffplay -pixel_format gray16le -video_size 256x192 -f rawvideo -i - -vf 'normalize=smoothing=10, format=pix_fmts=rgb48, pseudocolor=p=inferno'

    Essentially, this takes the raw data, crops out the useful portion, and pipes it to ffplay, where it is read as 16-bit grayscale (gray16le in this case), normalized, converted to 48-bit RGB, and finally run through a pseudocolor filter.

    I haven't been able to translate this into an ffmpeg-only command: it throws codec or format errors, or converts the 16-bit data to 10-bit even though I need 16-bit. I have tried using v4l2loopback with two instances of ffmpeg in separate windows to figure out where the error is actually occurring, but I suspect that introduces further format issues which distract from the original problem. The closest I have been able to get is:

    ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo /dev/video3


    Followed by:
    ffmpeg -video_size 256x192 -i /dev/video3  -f rawvideo -pix_fmt gray16le -vf 'normalize=smoothing=10,format=pix_fmts=rgb48, pseudocolor=p=inferno' -f rawvideo -f v4l2 /dev/video4

    This results in a non-colorized but somewhat usable image, with certain temperatures showing up as missing pixels, whereas the ffplay command shows a properly colorized stream without missing pixels.
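
    A plausible reading (my suggestion, not from the original post) is that the v4l2loopback detour loses the trick that makes the preview work: the pipe lets the second process reinterpret the raw yuyv422 bytes as gray16le, whereas a loopback device re-tags the frames with its own pixel format. A hedged sketch that keeps the pipe and only swaps ffplay for a second ffmpeg writing to v4l2 (device paths as in the question; the final -pix_fmt yuv420p is an assumption, since v4l2 sinks rarely accept rgb48):

    ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 \
  -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo - \
| ffmpeg -f rawvideo -pixel_format gray16le -video_size 256x192 -i - \
  -vf 'normalize=smoothing=10, format=pix_fmts=rgb48, pseudocolor=p=inferno' \
  -pix_fmt yuv420p -f v4l2 /dev/video4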

    I'll include my configuration and the log from the preview command, but the log doesn't show errors unless I modify parameters and presumably mess up the syntax.

     ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo - | ffplay -pixel_format gray16le -video_size 256x192 -f rawvideo -i - -vf 'normalize=smoothing=10, format=pix_fmts=rgb48, pseudocolor=p=inferno'
ffplay version N-109758-gbdc76f467f Copyright (c) 2003-2023 the FFmpeg developers
  built with gcc 10 (Raspbian 10.2.1-6+rpi1)
  configuration: --prefix=/usr/local --enable-nonfree --enable-gpl --enable-hardcoded-tables --disable-ffprobe --disable-ffplay --enable-libx264 --enable-libx265 --enable-sdl --enable-sdl2 --enable-ffplay
  libavutil      57. 44.100 / 57. 44.100
  libavcodec     59. 63.100 / 59. 63.100
  libavformat    59. 38.100 / 59. 38.100
  libavdevice    59.  8.101 / 59.  8.101
  libavfilter     8. 56.100 /  8. 56.100
  libswscale      6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc    56.  7.100 / 56.  7.100
ffmpeg version N-109758-gbdc76f467f Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 10 (Raspbian 10.2.1-6+rpi1)
  configuration: --prefix=/usr/local --enable-nonfree --enable-gpl --enable-hardcoded-tables --disable-ffprobe --disable-ffplay --enable-libx264 --enable-libx265 --enable-sdl --enable-sdl2 --enable-ffplay
  libavutil      57. 44.100 / 57. 44.100
  libavcodec     59. 63.100 / 59. 63.100
  libavformat    59. 38.100 / 59. 38.100
  libavdevice    59.  8.101 / 59.  8.101
  libavfilter     8. 56.100 /  8. 56.100
  libswscale      6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc    56.  7.100 / 56.  7.100
Input #0, video4linux2,v4l2, from '/dev/video0':
  Duration: N/A, start: 242.040935, bitrate: 39321 kb/s
  Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 256x384, 39321 kb/s, 25 fps, 25 tbr, 1000k tbn
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> rawvideo (native))
Press [q] to stop, [?] for help
Output #0, rawvideo, to 'pipe:':
  Metadata:
    encoder         : Lavf59.38.100
  Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422(tv, progressive), 256x192, q=2-31, 19660 kb/s, 25 fps, 25 tbn
    Metadata:
      encoder         : Lavc59.63.100 rawvideo
Input #0, rawvideo, from 'fd:':
  Duration: N/A, start: 0.000000, bitrate: 19660 kb/s
  Stream #0:0: Video: rawvideo (Y1[0][16] / 0x10003159), gray16le, 256x192, 19660 kb/s, 25 tbr, 25 tbn
frame=   13 fps=0.0 q=-0.0 size=    1152kB time=00:00:00.52 bitrate=18148.4kbits
frame=   25 fps= 24 q=-0.0 size=    2304kB time=00:00:01.00 bitrate=18874.4kbits
frame=   39 fps= 25 q=-0.0 size=    3648kB time=00:00:01.56 bitrate=19156.7kbits
frame=   51 fps= 24 q=-0.0 size=    4800kB time=00:00:02.04 bitrate=19275.3kbits
frame=   64 fps= 24 q=-0.0 size=    6048kB time=00:00:02.56 bitrate=19353.6kbits
frame=   78 fps= 25 q=-0.0 size=    7392kB time=00:00:03.12 bitrate=19408.7kbits

    I'd also like to use the correct option so the log isn't scrolling through every frame (a hedged example follows), as well as links to resources on adapting a command into a script for beginners; that's outside the purview of this question, but any direction on those would be much appreciated.
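
    On the scrolling log: the repeated frame=/fps= lines are ffmpeg's periodic progress stats, which are separate from the log level. A hedged example of a quieter first-stage invocation (only the three leading flags are new; -nostats disables the per-frame progress line):

    ffmpeg -hide_banner -loglevel warning -nostats -input_format yuyv422 -video_size 256x384 \
  -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo -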

  • FFmpeg error: "avcodec_send_frame" returns "Invalid argument"

    17 October 2023, by Paulo Coutinho

    I have a problem with the function avcodec_send_frame, which fails with "Error sending frame for encoding: Invalid argument" (-22). I have searched, checked and rechecked, and found nothing; the code stays close to the FFmpeg examples. Can anyone help me? Thanks.

    This is my code:

    static void callbackAddSubtitle(const Message &m, const Response r)
{
    try
    {
        av_log_set_level(AV_LOG_DEBUG);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Adding subtitle...");

        auto inputOpt = m.get("input");
        auto outputOpt = m.get("output");

        if (!inputOpt.has_value() || !outputOpt.has_value())
        {
            r(std::string{"INVALID-PARAMS"});
            return;
        }

        const std::string &input = inputOpt.value();
        const std::string &output = outputOpt.value();

        // initialize input
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input video...");

        AVFormatContext *inputFormatCtx = avformat_alloc_context();
        if (avformat_open_input(&inputFormatCtx, input.c_str(), nullptr, nullptr) != 0)
        {
            spdlog::error("Failed to open input");
            r(std::string{"ERROR-OPEN-INPUT"});
            return;
        }

        if (avformat_find_stream_info(inputFormatCtx, nullptr) < 0)
        {
            spdlog::error("Failed to find stream information");
            avformat_close_input(&inputFormatCtx);
            r(std::string{"ERROR-FIND-STREAM"});
            return;
        }

        int videoStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
        if (videoStreamIndex < 0)
        {
            spdlog::error("Could not find a video stream");
            r(std::string{"ERROR-FIND-VIDEO-STREAM"});
            return;
        }

        AVRational timeBase = inputFormatCtx->streams[videoStreamIndex]->time_base;

        AVCodecParameters *inputCodecPar = inputFormatCtx->streams[videoStreamIndex]->codecpar;
        const AVCodec *inputCodec = avcodec_find_decoder(inputCodecPar->codec_id);
        AVCodecContext *inputCodecCtx = avcodec_alloc_context3(inputCodec);

        avcodec_parameters_to_context(inputCodecCtx, inputCodecPar);
        avcodec_open2(inputCodecCtx, inputCodec, nullptr);

        // initialize input audio
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input audio...");

        int audioStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
        if (audioStreamIndex < 0)
        {
            spdlog::error("Could not find an audio stream");
            r(std::string{"ERROR-FIND-AUDIO-STREAM"});
            return;
        }

        AVCodecParameters *inputAudioCodecPar = inputFormatCtx->streams[audioStreamIndex]->codecpar;
        const AVCodec *inputAudioCodec = avcodec_find_decoder(inputAudioCodecPar->codec_id);
        AVCodecContext *inputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);

        avcodec_parameters_to_context(inputAudioCodecCtx, inputAudioCodecPar);
        avcodec_open2(inputAudioCodecCtx, inputAudioCodec, nullptr);

        // initialize output video
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output video...");

        AVFormatContext *outputFormatCtx = nullptr;
        avformat_alloc_output_context2(&outputFormatCtx, nullptr, nullptr, output.c_str());
        AVStream *outputStream = avformat_new_stream(outputFormatCtx, nullptr);

        AVCodecContext *outputCodecCtx = avcodec_alloc_context3(inputCodec);
        avcodec_parameters_to_context(outputCodecCtx, inputCodecPar);
        int retOutVideo = avcodec_open2(outputCodecCtx, inputCodec, nullptr);

        if (retOutVideo < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutVideo);
            spdlog::error("Failed to initialize output video: {}", err);
            r(std::string{"ERROR-INIT-OUTPUT-VIDEO"});
            return;
        }

        outputStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        outputStream->codecpar->codec_id = inputCodec->id;
        avcodec_parameters_from_context(outputStream->codecpar, outputCodecCtx);

        if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
        {
            avio_open(&outputFormatCtx->pb, output.c_str(), AVIO_FLAG_WRITE);
        }

        const char *pixelFormatName = getPixelFormatName(outputCodecCtx->pix_fmt);
        spdlog::debug("Pixel Format: {}", pixelFormatName);

        // initialize output audio
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output audio...");

        AVStream *outputAudioStream = avformat_new_stream(outputFormatCtx, nullptr);
        AVCodecContext *outputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);
        avcodec_parameters_to_context(outputAudioCodecCtx, inputAudioCodecPar);
        int retOutAudio = avcodec_open2(outputAudioCodecCtx, inputAudioCodec, nullptr);

        if (retOutAudio < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutAudio);
            spdlog::error("Failed to initialize output audio: {}", err);
            r(std::string{"ERROR-INIT-OUTPUT-AUDIO"});
            return;
        }

        outputAudioStream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
        outputAudioStream->codecpar->codec_id = inputAudioCodec->id;
        avcodec_parameters_from_context(outputAudioStream->codecpar, outputAudioCodecCtx);

        // initialize filters
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing filters...");

        AVFilterGraph *filterGraph = avfilter_graph_alloc();
        if (!filterGraph)
        {
            spdlog::error("Failed to allocate filter graph");
            r(std::string{"ERROR-FILTER-GRAPH"});
            return;
        }

        AVFilterContext *bufferSinkCtx;
        AVFilterContext *bufferSrcCtx;

        const AVFilter *bufferSink = avfilter_get_by_name("buffersink");
        const AVFilter *bufferSrc = avfilter_get_by_name("buffer");

        // input filter
        char filterInArgs[512];
        snprintf(filterInArgs, sizeof(filterInArgs), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d", inputCodecPar->width, inputCodecPar->height, inputCodecCtx->pix_fmt, timeBase.num, timeBase.den, inputCodecCtx->sample_aspect_ratio.num, inputCodecCtx->sample_aspect_ratio.den);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Buffer src args: {}", filterInArgs);

        int retFilterIn = avfilter_graph_create_filter(&bufferSrcCtx, bufferSrc, "in", filterInArgs, nullptr, filterGraph);
        if (retFilterIn < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterIn);
            spdlog::error("Failed to create bufferSrcCtx: {}", err);
            r(std::string{"ERROR-CREATE-FILTER-SRC"});
            return;
        }

        // output filter
        int retFilterOut = avfilter_graph_create_filter(&bufferSinkCtx, bufferSink, "out", nullptr, nullptr, filterGraph);

        if (retFilterOut < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterOut);
            spdlog::error("Failed to create bufferSinkCtx: {}", err);
            r(std::string{"ERROR-CREATE-FILTER-SINK"});
            return;
        }

        enum AVPixelFormat pix_fmts[] = {AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE};
        av_opt_set_int_list(bufferSinkCtx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);

        // add filters to graph and link them
        const char *filterSpec = "drawtext=text='Legenda Adicionada Automaticamente Via FFMPEG e C++': fontcolor=yellow: bordercolor=black: fontfile='/Users/paulo/Downloads/roboto/Roboto-Black.ttf'";
        const AVFilter *filter = avfilter_get_by_name("drawtext");

        AVFilterInOut *outputs = avfilter_inout_alloc();
        AVFilterInOut *inputs = avfilter_inout_alloc();

        outputs->name = av_strdup("in");
        outputs->filter_ctx = bufferSrcCtx;
        outputs->pad_idx = 0;
        outputs->next = nullptr;
        inputs->name = av_strdup("out");
        inputs->filter_ctx = bufferSinkCtx;
        inputs->pad_idx = 0;
        inputs->next = nullptr;

        if (avfilter_graph_parse_ptr(filterGraph, filterSpec, &inputs, &outputs, nullptr) < 0)
        {
            spdlog::error("Failed to parse filter graph");
            r(std::string{"ERROR-PARSE-FILTER"});
            return;
        }

        if (avfilter_graph_config(filterGraph, nullptr) < 0)
        {
            spdlog::error("Failed to configure filter graph");
            r(std::string{"ERROR-CONFIG-FILTER"});
            return;
        }

        // header
        spdlog::debug("[Mapping :: callbackAddSubtitle] Writing header...");

        if (avformat_write_header(outputFormatCtx, nullptr) < 0)
        {
            spdlog::error("Error writing header");
            r(std::string{"ERROR-WRITE-HEADER"});
            return;
        }

        // read frames and write to output
        AVPacket *packet = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        frame->format = inputCodecCtx->pix_fmt;
        frame->width = inputCodecCtx->width;
        frame->height = inputCodecCtx->height;

        AVFrame *filt_frame = av_frame_alloc();

        filt_frame->format = inputCodecCtx->pix_fmt;
        filt_frame->width = inputCodecCtx->width;
        filt_frame->height = inputCodecCtx->height;

        while (av_read_frame(inputFormatCtx, packet) >= 0)
        {
            if (packet->stream_index == videoStreamIndex)
            {
                if (avcodec_send_packet(inputCodecCtx, packet) < 0)
                {
                    spdlog::error("Error sending packet for decoding");
                    r(std::string{"ERROR-SEND-PACKET-DECODE"});
                    return;
                }

                while (avcodec_receive_frame(inputCodecCtx, frame) == 0)
                {
                    // Send the decoded frame to the filter graph
                    if (av_buffersrc_add_frame_flags(bufferSrcCtx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
                    {
                        spdlog::error("Error while feeding the filtergraph");
                        r(std::string{"ERROR-FEED-FILTERGRAPH"});
                        return;
                    }

                    // Receive a filtered frame from the filter graph
                    if (av_buffersink_get_frame(bufferSinkCtx, filt_frame) < 0)
                    {
                        spdlog::error("Error while receiving the filtered frame");
                        r(std::string{"ERROR-RECEIVE-FILTERED-FRAME"});
                        return;
                    }

                    // Send the decoded frame for re-encoding
                    int retSendFrame = avcodec_send_frame(outputCodecCtx, filt_frame);
                    if (retSendFrame < 0)
                    {
                        char err[AV_ERROR_MAX_STRING_SIZE];
                        av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retSendFrame);
                        spdlog::error("Error sending frame for encoding: {}", err);
                        r(std::string{"ERROR-SEND-FRAME-ENCODE"});
                        return;
                    }

                    AVPacket *output_packet = av_packet_alloc();
                    output_packet->data = nullptr;
                    output_packet->size = 0;

                    // Re-encode filt_frame into a packet
                    if (avcodec_receive_packet(outputCodecCtx, output_packet) == 0)
                    {
                        // Write the packet to the output stream
                        av_write_frame(outputFormatCtx, output_packet);
                        av_packet_unref(output_packet);
                    }

                    av_frame_unref(filt_frame);
                }

                // time
                packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base);
                packet->stream_index = videoStreamIndex;

                // write packet to output video stream
                av_interleaved_write_frame(outputFormatCtx, packet);
            }
            else if (packet->stream_index == audioStreamIndex)
            {
                // rescale timestamps
                packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base);
                packet->stream_index = audioStreamIndex;

                // write packet to output audio stream
                av_interleaved_write_frame(outputFormatCtx, packet);
            }

            av_packet_unref(packet);
        }

        av_packet_free(&packet);
        av_frame_free(&frame);
        av_frame_free(&filt_frame);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Writing trailer...");

        if (av_write_trailer(outputFormatCtx) < 0)
        {
            spdlog::error("Error writing trailer");
            r(std::string{"ERROR-WRITE-TRAILER"});
            return;
        }

        // cleanup
        spdlog::debug("[Mapping :: callbackAddSubtitle] Cleaning...");

        if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
        {
            avio_closep(&outputFormatCtx->pb);
        }

        avcodec_free_context(&inputCodecCtx);
        avcodec_free_context(&inputAudioCodecCtx);
        avcodec_free_context(&outputCodecCtx);
        avcodec_free_context(&outputAudioCodecCtx);

        avformat_free_context(inputFormatCtx);
        avformat_free_context(outputFormatCtx);

        r(std::string{"OK"});
    }
    catch (const std::exception &e)
    {
        spdlog::error("Error: {}", e.what());
        r(std::string{"ERROR"});
    }
}

    The error is:
    [2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Adding subtitle...
[2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Initializing input video...
[NULL @ 0x153604a60] Opening '/Users/paulo/Downloads/movie.mp4' for reading
[file @ 0x6000001fd170] Setting default whitelist 'file,crypto,data'
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] ISO: File Type Major Brand: isom
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 0, edit list 0 - media time: 0, duration: 2669670
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 1, edit list 0 - media time: 1024, duration: 4272096
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] drop a frame at curr_cts: 0 @ 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Before avformat_find_stream_info() pos: 113542488 bytes read:110788 seeks:1 nb_streams:2
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] demuxer injecting skip 1024 / discard 0
[aac @ 0x1536056f0] skip 1024 / discard 0 samples due to side data
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x153604cd0] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x153604cd0] Format yuv420p chosen by get_format().
[h264 @ 0x153604cd0] Reinit context to 1088x1920, pix_fmt: yuv420p
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] All info found
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] After avformat_find_stream_info() pos: 195211 bytes read:305951 seeks:2 frames:2
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing input audio...
[h264 @ 0x143604330] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143604330] Decoding VUI
[h264 @ 0x143604330] nal_unit_type: 8(PPS), nal_ref_idc: 3
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing output video...
[h264 @ 0x143611ec0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143611ec0] Decoding VUI
[h264 @ 0x143611ec0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[file @ 0x6000001f4000] Setting default whitelist 'file,crypto,data'
[2023-10-17 06:30:18.167] [debug] Pixel Format: YUV420P
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing output audio...
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing filters...
[2023-10-17 06:30:18.168] [debug] [Mapping :: callbackAddSubtitle] Buffer src args: video_size=1080x1920:pix_fmt=0:time_base=1/30000:pixel_aspect=1/1
detected 10 logical cores
[in @ 0x6000004ec0b0] Setting 'video_size' to value '1080x1920'
[in @ 0x6000004ec0b0] Setting 'pix_fmt' to value '0'
[in @ 0x6000004ec0b0] Setting 'time_base' to value '1/30000'
[in @ 0x6000004ec0b0] Setting 'pixel_aspect' to value '1/1'
[in @ 0x6000004ec0b0] w:1080 h:1920 pixfmt:yuv420p tb:1/30000 fr:0/1 sar:1/1
[AVFilterGraph @ 0x6000017e8000] Setting 'text' to value 'Legenda Adicionada Automaticamente Via FFMPEG e C++'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontcolor' to value 'yellow'
[AVFilterGraph @ 0x6000017e8000] Setting 'bordercolor' to value 'black'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontfile' to value '/Users/paulo/Downloads/roboto/Roboto-Black.ttf'
[AVFilterGraph @ 0x6000017e8000] query_formats: 3 queried, 2 merged, 0 already done, 0 delayed
[2023-10-17 06:30:18.172] [debug] [Mapping :: callbackAddSubtitle] Writing header...
[h264 @ 0x143604330] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x143604330] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x143604330] Format yuv420p chosen by get_format().
[h264 @ 0x143604330] Reinit context to 1088x1920, pix_fmt: yuv420p
[Parsed_drawtext_0 @ 0x6000004f4160] Copying data in avfilter.
[Parsed_drawtext_0 @ 0x6000004f4160] n:0 t:0.000000 text_w:424 text_h:16 x:0 y:0
[2023-10-17 06:30:18.182] [error] Error sending frame for encoding: Invalid argument
Returned Value: ERROR-SEND-FRAME-ENCODE
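
    An editorial note on the likely cause (my reading, not part of the original post): avcodec_send_frame() must be called on an encoder context, but the code builds outputCodecCtx from inputCodec, which was obtained with avcodec_find_decoder(), so AVERROR(EINVAL) (-22) is the expected result. A minimal sketch of the corresponding encoder setup, assuming the intent is to re-encode with the same codec id; the field values shown are illustrative:

    // Use an encoder, not the decoder, for the output context.
const AVCodec *outputCodec = avcodec_find_encoder(inputCodecPar->codec_id);
AVCodecContext *outputCodecCtx = avcodec_alloc_context3(outputCodec);

// An encoder context must be configured before avcodec_open2().
outputCodecCtx->width = inputCodecCtx->width;
outputCodecCtx->height = inputCodecCtx->height;
outputCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P; // matches the buffersink pix_fmts list
outputCodecCtx->time_base = timeBase;         // time base taken from the input stream

int retOutVideo = avcodec_open2(outputCodecCtx, outputCodec, nullptr);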