Advanced search

Media (2)


Other articles (16)

  • Accepted formats

    28 January 2010, by

    The following commands list the formats and codecs supported by the local installation of ffmpeg:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to adjust the rights to create, modify and delete notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that was uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original downloadable in case it cannot be read in a web browser; and extracting the original document's metadata to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

On other sites (4942)

  • Encoding .png images with h264 to a file on disk

    19 February 2021, by xyfix

    Can somebody help me find out why I end up with a file on disk that is only 24 kB and not readable by VLC, even though I send valid YUV images to the codec? I have added the .h and .cpp files. Up to "avcodec_receive_packet" everything seems to be OK. The call to "avcodec_send_frame" returns 0, so that must be fine, but "avcodec_receive_packet" returns -11. If I flush the encoder (currently commented out), then "avcodec_receive_packet" returns 0 and I can see encoded data if I store it on disk. The input image to the encoder is also correct (currently commented out) and checked. I'm aiming for intra-frame encoding, so I should get the encoded frame data back, but I don't get anything back even if I send 24 images.

    


    .h file

    


    #ifndef MOVIECODEC_H
#define MOVIECODEC_H

#include <opencv2/core/core.hpp> // for cv::Mat -- the include path was stripped from the original post

extern "C"
{
    #include "Codec/include/libavcodec/avcodec.h"
    #include "Codec/include/libavdevice/avdevice.h"
    #include "Codec/include/libavformat/avformat.h"
    #include "Codec/include/libavutil/avutil.h"
    #include "Codec/include/libavutil/imgutils.h"
    #include "Codec/include/libswscale/swscale.h"
}


class MovieCodec
{
public:

    MovieCodec(const char *filename);

    ~MovieCodec();

    void encodeImage( const cv::Mat& image );

    void close();
    
private :

    void add_stream();

    void openVideoCodec();

    void write_video_frame(const cv::Mat& image);

    void createFrame( const cv::Mat& image );

private:

    static int s_frameCount;

    int m_timeVideo = 0;

    std::string m_filename;

    FILE* m_file;

    AVCodec* m_encoder = NULL;

    AVOutputFormat* m_outputFormat = NULL;

    AVFormatContext* m_formatCtx = NULL;

    AVCodecContext* m_codecCtx = NULL;

    AVStream* m_streamOut = NULL;

    AVFrame* m_frame = NULL;

    AVPacket* m_packet = NULL;

};


    


    .cpp file

    


    #ifndef MOVIECODEC_CPP
#define MOVIECODEC_CPP

#include "moviecodec.h"


#define STREAM_DURATION   5.0
#define STREAM_FRAME_RATE 24
#define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
#define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */


static int sws_flags = SWS_BICUBIC;
int MovieCodec::s_frameCount = 0;

MovieCodec::MovieCodec( const char* filename ) :
    m_filename( filename ),
    m_encoder( avcodec_find_encoder( AV_CODEC_ID_H264 ))
{
    av_log_set_level(AV_LOG_VERBOSE);

    int ret(0);

    m_file = fopen( m_filename.c_str(), "wb");

    // allocate the output media context
    ret = avformat_alloc_output_context2( &m_formatCtx, m_outputFormat, NULL, m_filename.c_str());

    if (!m_formatCtx)
        return;

    m_outputFormat = m_formatCtx->oformat;

    // Add the video stream using H264 codec
    add_stream();

    // Open video codec and allocate the necessary encode buffers
    if (m_streamOut)
        openVideoCodec();

    av_dump_format( m_formatCtx, 0, m_filename.c_str(), 1);

    // Open the output media file, if needed
    if (!( m_outputFormat->flags & AVFMT_NOFILE))
    {
        ret = avio_open( &m_formatCtx->pb, m_filename.c_str(), AVIO_FLAG_WRITE);

        if (ret < 0)
        {
            char error[255];
            ret = av_strerror( ret, error, 255);
            fprintf(stderr, "Could not open '%s': %s\n", m_filename.c_str(), error);
            return ;
        }
    }
    else
    {
        return;
    }

    m_formatCtx->flush_packets = 1;

    ret = avformat_write_header( m_formatCtx, NULL );

    if (ret < 0)
    {
        char error[255];
        av_strerror(ret, error, 255);
        fprintf(stderr, "Error occurred when opening output file: %s\n", error);
        return;
    }


    if ( m_frame )
           m_frame->pts = 0;
}



MovieCodec::~MovieCodec()
{
    close();
}



void MovieCodec::encodeImage(const cv::Mat &image)
{
    // Compute video time from the last added video frame
    // (m_frame is NULL until the first frame has been created)
    if (m_frame)
        m_timeVideo = (double)m_frame->pts * av_q2d(m_streamOut->time_base);

    // Stop media if enough time
    if (!m_streamOut /*|| m_timeVideo >= STREAM_DURATION*/)
       return;

    // Add a video frame
    write_video_frame( image );

    // Increase frame pts according to time base
    m_frame->pts += av_rescale_q(1, m_codecCtx->time_base, m_streamOut->time_base);
}


void MovieCodec::close()
{
    int ret( 0 );

    // Write media trailer
//    if( m_formatCtx )
//        ret = av_write_trailer( m_formatCtx );

    /* flush the encoder */
    ret = avcodec_send_frame(m_codecCtx, NULL);

    /* Close each codec. */
    if ( m_streamOut )
    {
        av_free( m_frame->data[0]);
        av_free( m_frame );
    }

    if (!( m_outputFormat->flags & AVFMT_NOFILE))
        /* Close the output file. */
        ret = avio_close( m_formatCtx->pb);


    /* free the stream */
    avformat_free_context( m_formatCtx );

    fflush( m_file );
}


void MovieCodec::createFrame( const cv::Mat& image )
{
    /**
     * \note allocate frame
     */
    m_frame = av_frame_alloc();
    m_frame->format = STREAM_PIX_FMT;
    m_frame->width = image.cols;
    m_frame->height = image.rows;
    m_frame->pict_type = AV_PICTURE_TYPE_I;
    int ret = av_image_alloc(m_frame->data, m_frame->linesize, m_frame->width,  m_frame->height, STREAM_PIX_FMT, 1);

    if (ret < 0)
    {
        return;
    }

    struct SwsContext* sws_ctx = sws_getContext(image.cols, image.rows, AV_PIX_FMT_RGB24,
                                                image.cols, image.rows, STREAM_PIX_FMT, 0, NULL, NULL, NULL);

    const uint8_t* rgbData[1] = { (const uint8_t*)image.data };
    int rgbLineSize[1] = { 3 * image.cols };

    sws_scale(sws_ctx, rgbData, rgbLineSize, 0, image.rows, m_frame->data, m_frame->linesize);
    sws_freeContext(sws_ctx); // release the scaler; it was leaked on every frame before
}


/* Add an output stream. */
void MovieCodec::add_stream()
{
    AVCodecID codecId = AV_CODEC_ID_H264;

    if (!( m_encoder ))
    {
        fprintf(stderr, "Could not find encoder for '%s'\n",
            avcodec_get_name(codecId));
        return;
    }

    // Get the stream for codec
    m_streamOut = avformat_new_stream(m_formatCtx, m_encoder);

    if (!m_streamOut) {
        fprintf(stderr, "Could not allocate stream\n");
        return;
    }

    m_streamOut->id = m_formatCtx->nb_streams - 1;

    m_codecCtx = avcodec_alloc_context3( m_encoder);

    m_streamOut->codecpar->codec_id = codecId;
    m_streamOut->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    m_streamOut->codecpar->bit_rate = 400000;
    m_streamOut->codecpar->width = 800;
    m_streamOut->codecpar->height = 640;
    m_streamOut->codecpar->format = STREAM_PIX_FMT;
    m_streamOut->codecpar->codec_tag = 0x31637661;
    m_streamOut->codecpar->video_delay = 0;
    m_streamOut->time_base = { 1, STREAM_FRAME_RATE };


    avcodec_parameters_to_context( m_codecCtx, m_streamOut->codecpar);
    
    m_codecCtx->gop_size = 0; /* all-intra: every frame is emitted as an I-frame */
    m_codecCtx->max_b_frames = 0;
    m_codecCtx->time_base = { 1, STREAM_FRAME_RATE };
    m_codecCtx->framerate = { STREAM_FRAME_RATE, 1 };
    m_codecCtx->pix_fmt = STREAM_PIX_FMT;



    if (m_streamOut->codecpar->codec_id == AV_CODEC_ID_H264)
    {
      av_opt_set( m_codecCtx, "preset", "ultrafast", 0 );
      av_opt_set( m_codecCtx, "vprofile", "baseline", 0 );
      av_opt_set( m_codecCtx, "tune", "zerolatency", 0 );
    }

//    /* Some formats want stream headers to be separate. */
//    if (m_formatCtx->oformat->flags & AVFMT_GLOBALHEADER)
//            m_codecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;


}


void MovieCodec::openVideoCodec()
{
    int ret;

    /* open the codec */
    ret = avcodec_open2(m_codecCtx, m_encoder, NULL);

    if (ret < 0)
    {
        char error[255];
        av_strerror(ret, error, 255);
        fprintf(stderr, "Could not open video codec: %s\n", error);
    }
}



void MovieCodec::write_video_frame( const cv::Mat& image )
{
    int ret;

    createFrame( image );


    if (m_formatCtx->oformat->flags & 0x0020 /* AVFMT_RAWPICTURE, removed from newer FFmpeg headers */ )
    {
        /* Raw video case - directly store the picture in the packet */
        AVPacket pkt;
        av_init_packet(&pkt);

        pkt.flags |= AV_PKT_FLAG_KEY;
        pkt.stream_index = m_streamOut->index;
        pkt.data = m_frame->data[0];
        pkt.size = sizeof(AVPicture);

//        ret = av_interleaved_write_frame(m_formatCtx, &pkt);
        ret = av_write_frame( m_formatCtx, &pkt );
    }
    else
    {
        AVPacket pkt;
        av_init_packet(&pkt);

        /* encode the image */

//cv::Mat yuv420p( m_frame->height + m_frame->height/2, m_frame->width, CV_8UC1, m_frame->data[0]);
//cv::Mat cvmIm;
//cv::cvtColor(yuv420p,cvmIm,CV_YUV420p2BGR);
//cv::imwrite("c:\\tmp\\YUVoriginal.png", cvmIm);

        ret = avcodec_send_frame(m_codecCtx, m_frame);

        if (ret < 0)
        {
            char error[255];
            av_strerror(ret, error, 255);
            fprintf(stderr, "Error encoding video frame: %s\n", error);
            return;
        }

        /* If size is zero, it means the image was buffered. */
//        ret = avcodec_receive_packet(m_codecCtx, &pkt);

        do
        {
            ret = avcodec_receive_packet(m_codecCtx, &pkt);

            if (ret == 0)
            {
                ret = av_write_frame( m_formatCtx, &pkt );
                av_packet_unref(&pkt);

                break;
            }
//            else if ((ret < 0) && (ret != AVERROR(EAGAIN)))
//            {
//                return;
//            }
//            else if (ret == AVERROR(EAGAIN))
//            {
//                /* flush the encoder */
//                ret = avcodec_send_frame(m_codecCtx, NULL);
//
//                if (0 > ret)
//                    return;
//            }
        } while (ret == 0);

        if( !ret && pkt.size)
        {
            pkt.stream_index = m_streamOut->index;

            /* Write the compressed frame to the media file. */
//            ret = av_interleaved_write_frame(m_formatCtx, &pkt);
            ret = av_write_frame( m_formatCtx, &pkt );
        }
        else
        {
            ret = 0;
        }
    }

    if (ret != 0)
    {
        char error[255];
        av_strerror(ret, error, 255);
        fprintf(stderr, "Error while writing video frame: %s\n", error);
        return;
    }

    s_frameCount++;
}


    


  • FFmpeg stream stops after a certain time

    7 April 2021, by Anarno

    We have a little Node.js app that starts a stream process with child_process.spawn. On the client side we have an HTML5 canvas element whose video we record with new MediaRecorder(canvas.captureStream(30), config); the client then sends its data to our Node.js server over a WebSocket connection. We use FFmpeg for video encoding and decoding, and then send the data to a third-party service (MUX), which accepts the stream and broadcasts it. Sadly, the process continuously loses fps and, after roughly one minute, stops with an interesting error code. (When we save the video result locally instead of streaming via rtmps, it works perfectly.)

    


    *The whole system runs in Docker.

    


    The error:

    


     stderr: [tls @ 0x7f998e7bca40] Error in the pull function.
     av_interleaved_write_frame(): I/O error
     [flv @ 0x7f998eeb1680] Failed to update header with correct duration.
     [flv @ 0x7f998eeb1680] Failed to update header with correct filesize.
     Error writing trailer of rtmps://global-live.mux.com/app/94e85197-78a3-f092-3437-03d93aba74e0: I/O error
     stderr: frame= 1478 fps= 25 q=23.0 Lsize=     402kB time=00:01:02.89 bitrate=  52.4kbits/s speed=1.05x
     video:369kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 9.034639%
     stderr: The specified session has been invalidated for some reason.
         Last message repeated 1 times
     stderr: [libx264 @ 0x7f998e790080] frame I:13    Avg QP: 5.39  size:  2478
     stderr: [libx264 @ 0x7f998e790080] frame P:1465  Avg QP:13.51  size:   235
     stderr: [libx264 @ 0x7f998e790080] mb I  I16..4: 99.2%  0.1%  0.7%
     stderr: [libx264 @ 0x7f998e790080] mb P  I16..4:  0.3%  0.0%  0.0%  P16..4:  0.1%  0.0%  0.0%  0.0%  0.0%    skip:99.6%
     stderr: [libx264 @ 0x7f998e790080] 8x8 transform intra:0.3% inter:17.3%
     stderr: [libx264 @ 0x7f998e790080] coded y,uvDC,uvAC intra: 1.4% 6.9% 4.7% inter: 0.0% 0.0% 0.0%
     stderr: [libx264 @ 0x7f998e790080] i16 v,h,dc,p: 90%  5%  5%  0%
     stderr: [libx264 @ 0x7f998e790080] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 18% 51%  6%  0%  0%  0%  0%  3%
     stderr: [libx264 @ 0x7f998e790080] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 33% 25% 40%  0%  0%  0%  0%  0%  0%
     stderr: [libx264 @ 0x7f998e790080] i8c dc,h,v,p: 86%  7%  6%  0%
     stderr: [libx264 @ 0x7f998e790080] Weighted P-Frames: Y:0.1% UV:0.1%
     stderr: [libx264 @ 0x7f998e790080] kb/s:2041.23
     stderr: Conversion failed!
     close, code: 1, signal: null
     from react application: 14203
     Status ok...
     Data ok...
     FFmpeg ok...
     Writeable ok...
     stderr: ffmpeg version 4.2.4 Copyright (c) 2000-2020 the FFmpeg developers
       built with gcc 9.2.0 (Alpine 9.2.0)
       configuration: --prefix=/usr --enable-avresample --enable-avfilter --enable-gnutls --enable-gpl --enable-libass --enable-libmp3lame --enable-libvorbis --enable-libvpx --enable-libxvid --enable-libx264 --enable-libx265 --enable-libtheora --enable-libv4l2 --enable-postproc --enable-pic --enable-pthreads --enable-shared --enable-libxcb --disable-stripping --disable-static --disable-librtmp --enable-vaapi --enable-vdpau --enable-libopus --disable-debug
     stderr:   libavutil      56. 31.100 / 56. 31.100
       libavcodec     58. 54.100 / 58. 54.100
       libavformat    58. 29.100 / 58. 29.100
       libavdevice    58.  8.100 / 58.  8.100
       libavfilter     7. 57.100 /  7. 57.100
       libavresample   4.  0.  0 /  4.  0.  0
       libswscale      5.  5.100 /  5.  5.100
       libswresample   3.  5.100 /  3.  5.100
       libpostproc    55.  5.100 / 55.  5.100
     stderr: [h264 @ 0x7f299f44f600] non-existing PPS 14 referenced
     [h264 @ 0x7f299f44f600] decode_slice_header error
     [h264 @ 0x7f299f44f600] no frame!
     (the "non-existing PPS 14" / "decode_slice_header error" / "no frame!" messages repeat many times)


    The FFmpeg config:


    const FFMPEG_CONFIG = [
      '-i',
      '-',

      // video codec config: low latency, adaptive bitrate
      //   '-vcodec',
      //   'copy',
      '-c:v',
      'libx264',
      '-preset',
      'veryfast',
      '-tune',
      'zerolatency',

      // audio codec config: sampling frequency (11025, 22050, 44100), bitrate 64 kbits
      '-c:a',
      'aac',
      '-ar',
      '44100',
      '-b:a',
      '64k',

      // force overwrite
      '-y',

      // used for audio sync
      '-use_wallclock_as_timestamps',
      '1',
      '-async',
      '1',

      // '-filter_complex', 'aresample=44100', // resample audio to 44100Hz, needed if input is not 44100
      // '-strict', 'experimental',
      '-bufsize',
      '1000',
      '-f',
      'flv',
    ];


    The process:


    const process = child_process.spawn('ffmpeg', [
      ...FFMPEG_CONFIG,
      //   'local.bin',
      url,
    ]);

    process.stderr.on('data', data => {
      console.log(`stderr: ${data}`, data);
    });

    process.stdin.on('error', e => {
      console.log('FFmpeg STDIN Error', e);
    });

    process.on('error', err => console.log(err));

    process.on('close', (code, signal) => {
      console.log(`close, code: ${code}, signal: ${signal}`);
    });


    The writing:


    if (!Buffer.isBuffer(data)) return;
    if (!process.stdin.writable) return;
    process.stdin.write(data);


  • FFMPEG C++ Non monotonically increasing dts to muxer

    14 February 2021, by blacksmith

    I have two inputs:


    • Stream with audio and video

    • Stream with only video

    I want to generate an output stream mixing audio from the first stream and video from the second stream.


    But when I generate the output, the muxer complains that I am passing it a non monotonically increasing dts. I think I'm messing up the dts. Please help!


    Here is my code:


    #include <iostream>

//Linux...
#ifdef __cplusplus
extern "C"
{
#endif
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
#include <libavutil/time.h>
#ifdef __cplusplus
}
#endif

using namespace std;
int main(int argc, char* argv[])
{
    AVOutputFormat* ofmt = NULL;
    //Input AVFormatContext and Output AVFormatContext
    AVFormatContext *audio_video_ifmt_ctx = NULL,
                    *only_video_ifmt_ctx = NULL,
                    *ofmt_ctx = NULL;

    AVPacket audio_video_pkt, only_video_pkt;
    const char *audio_video_in_filename, *only_video_in_filename, *out_filename;
    int ret, i;
    int audio_video_audioindex = -1;
    int audio_video_videoindex = -1;
    int only_video_videoindex = -1;
    int audio_video_dataindex = -1;
    int only_video_dataindex = -1;

    int audio_video_frame_index = 0, only_video_frame_index = 0;
    audio_video_in_filename = "rtmp://localhost/live/STREAM_NAME.flv";
    only_video_in_filename = "rtmp://localhost/live/STREAM_NAME2.flv";
    out_filename = "rtmp://localhost/live/STREAM_NAME3.flv";
    //Network
    avformat_network_init();

    //Input ONLY VIDEO
    if ((ret = avformat_open_input(&only_video_ifmt_ctx, only_video_in_filename, 0, 0)) < 0) {
        printf("Could not open only video input file.");
        return 0;
    }
    if ((ret = avformat_find_stream_info(only_video_ifmt_ctx, 0)) < 0) {
        printf("Failed to retrieve only video input stream information");
        return 0;
    }
    //Input AUDIO AND VIDEO
    if ((ret = avformat_open_input(&audio_video_ifmt_ctx, audio_video_in_filename, 0, 0)) < 0) {
        printf("Could not open audio and video input file.");
        return 0;
    }
    if ((ret = avformat_find_stream_info(audio_video_ifmt_ctx, 0)) < 0) {
        printf("Failed to retrieve audio and video input stream information");
        return 0;
    }
    //Search audio_video stream index
    for (i = 0; i < audio_video_ifmt_ctx->nb_streams; i++) {
        if (audio_video_ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            audio_video_videoindex = i;
        }
        if (audio_video_ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            audio_video_audioindex = i;
        }
        if (audio_video_ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_DATA) {
            audio_video_dataindex = i;
        }
    }

    //Search only_video stream index
    for (i = 0; i < only_video_ifmt_ctx->nb_streams; i++) {
        if (only_video_ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
            only_video_videoindex = i;
        }
        else if (only_video_ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_DATA) {
            only_video_dataindex = i;
        }
    }

    av_dump_format(audio_video_ifmt_ctx, 0, audio_video_in_filename, 0);
    av_dump_format(only_video_ifmt_ctx, 0, only_video_in_filename, 0);

    //Allocate output stream context
    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename); //RTMP
    if (!ofmt_ctx) {
        printf("Could not create output context\n");
        ret = AVERROR_UNKNOWN;
        return 0;
    }
    ofmt = ofmt_ctx->oformat;

    //Create Output Data and Audio Stream
    for (i = 0; i < audio_video_ifmt_ctx->nb_streams; i++) {
        AVStream* video_in_stream;
        //Create output AVStream according to input AVStream
        if (i == audio_video_dataindex) {
            video_in_stream = audio_video_ifmt_ctx->streams[audio_video_dataindex];
        }
        else if (i == audio_video_audioindex) {
            video_in_stream = audio_video_ifmt_ctx->streams[audio_video_audioindex];
        }
        else {
            video_in_stream = NULL;
            continue;
        }

        //AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
        AVCodec* codec = avcodec_find_decoder(video_in_stream->codecpar->codec_id);
        AVStream* out_stream = avformat_new_stream(ofmt_ctx, codec);

        if (!out_stream) {
            printf("Failed allocating output stream\n");
            ret = AVERROR_UNKNOWN;
            return 0;
        }

        AVCodecContext* p_codec_ctx = avcodec_alloc_context3(codec);
        ret = avcodec_parameters_to_context(p_codec_ctx, video_in_stream->codecpar);

        //Copy the settings of AVCodecContext
        if (ret < 0) {
            printf("Failed to copy context from input to output stream codec context\n");
            return 0;
        }
        p_codec_ctx->codec_tag = 0;
        if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
            p_codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        ret = avcodec_parameters_from_context(out_stream->codecpar, p_codec_ctx);
        if (ret < 0) {
            av_log(NULL, AV_LOG_ERROR, "eno:[%d] failed to copy codec parameters\n", ret);
        }
    }

    //Create Output Video Stream
    for (i = 0; i < only_video_ifmt_ctx->nb_streams; i++) {
        AVStream* video_in_stream;
        //Create output AVStream according to input AVStream
        if (i == only_video_dataindex) {
            continue;
            video_in_stream = only_video_ifmt_ctx->streams[only_video_dataindex];
        }
        else if (i == only_video_videoindex) {
            video_in_stream = only_video_ifmt_ctx->streams[only_video_videoindex];
        }
        else {
            video_in_stream = NULL;
            continue;
        }

        //AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
        AVCodec* codec = avcodec_find_decoder(video_in_stream->codecpar->codec_id);
        AVStream* out_stream = avformat_new_stream(ofmt_ctx, codec);

        if (!out_stream) {
            printf("Failed allocating output stream\n");
            ret = AVERROR_UNKNOWN;
            return 0;
        }

        AVCodecContext* p_codec_ctx = avcodec_alloc_context3(codec);
        ret = avcodec_parameters_to_context(p_codec_ctx, video_in_stream->codecpar);

        //Copy the settings of AVCodecContext
        if (ret < 0) {
            printf("Failed to copy context from input to output stream codec context\n");
            return 0;
        }
        p_codec_ctx->codec_tag = 0;
        if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
            p_codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        ret = avcodec_parameters_from_context(out_stream->codecpar, p_codec_ctx);
        if (ret < 0) {
            av_log(NULL, AV_LOG_ERROR, "eno:[%d] failed to copy codec parameters\n", ret);
        }
    }

    //Dump Format------------------
    av_dump_format(ofmt_ctx, 0, out_filename, 1);
    //Open output URL
    if (!(ofmt->flags & AVFMT_NOFILE)) {
        ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
        if (ret < 0) {
            printf("Could not open output URL '%s'", out_filename);
            goto end;
        }
    }
    //Write output file header
    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret
&lt; 0) {&#xA;        printf("Error occurred when opening output URL\n");&#xA;        goto end;&#xA;    }&#xA;    //Pull stream&#xA;    while (1)&#xA;    {&#xA;&#xA;        AVStream  *audio_video_in_stream, * only_video_in_stream, * out_stream;&#xA;        ret = av_read_frame(audio_video_ifmt_ctx, &amp;audio_video_pkt);&#xA;        ret = av_read_frame(only_video_ifmt_ctx, &amp;only_video_pkt);&#xA;        if (ret &lt; 0)&#xA;            break;&#xA;&#xA;        audio_video_in_stream = audio_video_ifmt_ctx->streams[audio_video_pkt.stream_index];&#xA;        only_video_in_stream = only_video_ifmt_ctx->streams[only_video_pkt.stream_index];&#xA;       &#xA;        out_stream = NULL;&#xA;&#xA;        ///// TREAT AUDIO PACKETS /////&#xA;&#xA;        if (audio_video_pkt.stream_index == audio_video_dataindex) {&#xA;            out_stream = ofmt_ctx->streams[0];&#xA;        }&#xA;        if (audio_video_pkt.stream_index == audio_video_audioindex) {&#xA;            out_stream = ofmt_ctx->streams[1];&#xA;        }&#xA;       &#xA;       &#xA;        //Convert PTS/DTS&#xA;        if (out_stream != NULL) {&#xA;            audio_video_pkt.pts = av_rescale_q_rnd(audio_video_pkt.pts, audio_video_in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));&#xA;            audio_video_pkt.dts = av_rescale_q_rnd(audio_video_pkt.dts, audio_video_in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));&#xA;            audio_video_pkt.duration = av_rescale_q(audio_video_pkt.duration, audio_video_in_stream->time_base, out_stream->time_base);&#xA;            audio_video_pkt.pos = -1;&#xA;&#xA;           &#xA;        }&#xA;        //Writing audio stream&#xA;        if (out_stream != NULL) {&#xA;            ret = av_interleaved_write_frame(ofmt_ctx, &amp;audio_video_pkt);&#xA;        }&#xA;  &#xA;        ///// TREAT VIDEO PACKETS /////&#xA;&#xA;        if (only_video_pkt.stream_index == 
only_video_videoindex) {&#xA;            out_stream = ofmt_ctx->streams[2];&#xA;        }&#xA;&#xA;        //Convert PTS/DTS&#xA;        if (out_stream != NULL) {&#xA;            only_video_pkt.pts = av_rescale_q_rnd(only_video_pkt.pts, only_video_in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));&#xA;            only_video_pkt.dts = av_rescale_q_rnd(only_video_pkt.dts, only_video_in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));&#xA;            only_video_pkt.duration = av_rescale_q(only_video_pkt.duration, only_video_in_stream->time_base, out_stream->time_base);&#xA;            only_video_pkt.pos = -1;&#xA;        }&#xA;        //Writing video&#xA;        if (out_stream != NULL) {&#xA;            ret = av_interleaved_write_frame(ofmt_ctx, &amp;only_video_pkt);&#xA;        }&#xA;        //Desreferenciamos el paquete&#xA;        av_packet_unref(&amp;audio_video_pkt);&#xA;        av_packet_unref(&amp;only_video_pkt);&#xA;&#xA;        if (ret &lt; 0) {&#xA;            printf("Error muxing packet\n");&#xA;            break;&#xA;        }&#xA;    }&#xA;&#xA;    //Write file trailer&#xA;    av_write_trailer(ofmt_ctx);&#xA;end:&#xA;    avformat_close_input(&amp;audio_video_ifmt_ctx);&#xA;    avformat_close_input(&amp;only_video_ifmt_ctx);&#xA;&#xA;    /* close output */&#xA;    if (ofmt_ctx &amp;&amp; !(ofmt->flags &amp; AVFMT_NOFILE))&#xA;        avio_close(ofmt_ctx->pb);&#xA;    avformat_free_context(ofmt_ctx);&#xA;    if (ret &lt; 0 &amp;&amp; ret != AVERROR_EOF) {&#xA;        printf("Error occurred.\n");&#xA;        return -1;&#xA;    }&#xA;    return 0;&#xA;}&#xA;</iostream>


Here is the output:


Input #0, flv, from 'rtmp://localhost/live/STREAM_NAME.flv':
  Metadata:
    fileSize        : 0
    audiochannels   : 2
    2.1             : false
    3.1             : false
    4.0             : false
    4.1             : false
    5.1             : false
    7.1             : false
    encoder         : obs-output module (libobs version 25.0.4)
  Duration: 00:00:00.00, start: 2910.332000, bitrate: N/A
    Stream #0:0: Data: none
    Stream #0:1: Audio: aac (LC), 44100 Hz, stereo, fltp, 163 kb/s
    Stream #0:2: Video: h264 (High), yuv420p(progressive), 1920x1080, 2560 kb/s, 23.98 fps, 23.98 tbr, 1k tbn, 47.95 tbc
Input #0, flv, from 'rtmp://localhost/live/STREAM_NAME2.flv':
  Metadata:
    encoder         : Lavf58.65.101
  Duration: 00:00:00.00, start: 2726.365000, bitrate: N/A
    Stream #0:0: Data: none
    Stream #0:1: Video: h264 (High), yuv420p(progressive), 1920x1080, 52 fps, 52 tbr, 1k tbn, 104 tbc
Output #0, flv, to 'rtmp://localhost/live/STREAM_NAME3.flv':
    Stream #0:0: Data: none
    Stream #0:1: Audio: aac (LC), 44100 Hz, stereo, fltp, 163 kb/s
    Stream #0:2: Video: h264 (High), yuv420p(progressive), 1920x1080, q=2-31
[flv @ 0000014F2CDB9500] Application provided invalid, non monotonically increasing dts to muxer in stream 1: 2910332 >= 2726346
Error muxing packet
[flv @ 0000014F2CDB9500] Failed to update header with correct duration.
[flv @ 0000014F2CDB9500] Failed to update header with correct filesize.
Error occurred.
