Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (77)

  • Updating from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.2. What's new?
    Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customise by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and it is announced here.
    The zip file provided here contains only the sources of MediaSPIP in standalone mode.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

On other sites (6393)

  • Convert videos from .264 to .265 (HEVC) with ffmpeg [closed]

    11 August 2024, by John Terragnoli

    I see that there are a few questions on this subject but I am still getting errors. All I want to do is convert videos in my library to HEVC so they take up less space.
    
    I've tried this:

    ffmpeg -i input.mp4 -c:v libx265 output.mp4

    ffmpeg seems to take a long time and the output seems to be about the right size. The video will play with VLC, but the icon is weird, and when I try to open it with QuickTime I get the error: 'The document “output.mov” could not be opened. The file isn’t compatible with QuickTime Player.'

    I don't want to change any of the fancy settings. I just want the files to take up less space, with minimal or no quality loss.

    Thanks!
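
    A commonly suggested variation, offered here as an untested sketch rather than as part of the original question, is to keep the same command but tag the HEVC stream as hvc1 (the identifier QuickTime expects in MP4/MOV), set an explicit CRF, and copy the audio untouched:

    ffmpeg -i input.mp4 -c:v libx265 -crf 28 -tag:v hvc1 -c:a copy output.mp4

    28 is the libx265 default; lower CRF values trade larger files for higher quality.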

    EDIT:
    I'm having trouble keeping the timestamps that I put into the videos.
    Originally I was using exiftool in the terminal. But sometimes that doesn't work with videos, so I would AirDrop them to my iPhone, use an app called Metapho to change the dates, and then AirDrop them back. Exiftool was great, but sometimes it just wouldn't work; it would change the date to something like 1109212 Aug 2nd. Weird. Bottom line is that when I do these conversions, I really don't want to lose the timestamps in them.
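
    One hedged way to address this, assuming the dates live in the container's global metadata (creation_time and similar keys) rather than in proprietary maker tags, is to ask ffmpeg to carry the input's global metadata into the output, for example:

    ffmpeg -i input.mov -c:v libx265 -crf 28 -tag:v hvc1 -c:a copy -map_metadata 0 -movflags use_metadata_tags output.mov

    If some tags are still lost, exiftool can copy them back afterwards with something like exiftool -TagsFromFile input.mov output.mov.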

    ORIGINAL FILE THAT I TIMESTAMPED, IN .264

    ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.8)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.1_2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include/darwin -fno-stack-check' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_original.mov':
  Metadata:
    major_brand     : qt  
    minor_version   : 0
    compatible_brands: qt  
    creation_time   : 2019-10-22T18:48:43.000000Z
    encoder         : HandBrake 0.10.2 2015060900
    com.apple.quicktime.creationdate: 1994-12-25T18:00:00Z
  Duration: 00:01:21.27, start: 0.000000, bitrate: 800 kb/s
    Chapter #0:0: start 0.000000, end 81.265000
    Metadata:
      title           : Chapter 12
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, smpte170m/smpte170m/bt709, progressive), 710x482 [SAR 58409:65535 DAR 1043348:794715], 634 kb/s, SAR 9172:10291 DAR 404229:307900, 29.95 fps, 29.97 tbr, 90k tbn, 180k tbc (default)
    Metadata:
      creation_time   : 2019-10-22T18:48:43.000000Z
      handler_name    : Core Media Video
      encoder         : 'avc1'
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)
    Metadata:
      creation_time   : 2019-10-22T18:48:43.000000Z
      handler_name    : Core Media Audio
    Stream #0:2(und): Data: bin_data (text / 0x74786574), 0 kb/s
    Metadata:
      creation_time   : 2019-10-22T18:48:43.000000Z
      handler_name    : Core Media Text
At least one output file must be specified

    FILE CONVERTED TO HEVC, WITHOUT -COPYTS TAG

    ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.8)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.1_2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include/darwin -fno-stack-check' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_original_HEVC.mov':
  Metadata:
    major_brand     : qt  
    minor_version   : 512
    compatible_brands: qt  
    encoder         : Lavf58.29.100
  Duration: 00:01:21.30, start: 0.000000, bitrate: 494 kb/s
    Chapter #0:0: start 0.000000, end 81.265000
    Metadata:
      title           : Chapter 12
    Stream #0:0: Video: hevc (Main) (hvc1 / 0x31637668), yuv420p(tv, progressive), 710x482 [SAR 9172:10291 DAR 404229:307900], 356 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 29.97 tbc (default)
    Metadata:
      handler_name    : Core Media Video
      encoder         : Lavc58.54.100 libx265
    Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      handler_name    : Core Media Audio
    Stream #0:2(eng): Data: bin_data (text / 0x74786574), 0 kb/s
    Metadata:
      handler_name    : SubtitleHandler
At least one output file must be specified

    FILE CONVERTED TO HEVC, WITH -COPYTS TAG

    ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.8)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.1_2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include/darwin -fno-stack-check' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_original_HEVC_keepts.mov':
  Metadata:
    major_brand     : qt  
    minor_version   : 512
    compatible_brands: qt  
    encoder         : Lavf58.29.100
  Duration: 00:01:21.30, start: 0.000000, bitrate: 494 kb/s
    Chapter #0:0: start 0.000000, end 81.265000
    Metadata:
      title           : Chapter 12
    Stream #0:0: Video: hevc (Main) (hvc1 / 0x31637668), yuv420p(tv, progressive), 710x482 [SAR 9172:10291 DAR 404229:307900], 356 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 29.97 tbc (default)
    Metadata:
      handler_name    : Core Media Video
      encoder         : Lavc58.54.100 libx265
    Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      handler_name    : Core Media Audio
    Stream #0:2(eng): Data: bin_data (text / 0x74786574), 0 kb/s
    Metadata:
      handler_name    : SubtitleHandler
At least one output file must be specified

  • FFMPEG. Read frame, process it, put it to output video. Copy sound stream unchanged

    9 December 2016, by Andrey Smorodov

    I want to apply processing to a video clip with a sound track: extract and process it frame by frame, and write the result to an output file. The number of frames, frame size and speed remain unchanged in the output clip. I also want to keep the same audio track as in the source.

    I can read the clip, decode frames and process them using OpenCV. Audio packets are also written fine. I'm stuck on forming the output video stream.

    The minimal runnable code I have for now (sorry it's not so short, but I can't make it shorter):

    extern "C" {
    #include <libavutil></libavutil>timestamp.h>
    #include <libavformat></libavformat>avformat.h>
    #include "libavcodec/avcodec.h"
    #include <libavutil></libavutil>opt.h>
    #include <libavdevice></libavdevice>avdevice.h>
    #include <libswscale></libswscale>swscale.h>
    }
    #include "opencv2/opencv.hpp"

    #if LIBAVCODEC_VERSION_INT &lt; AV_VERSION_INT(55,28,1)
    #define av_frame_alloc  avcodec_alloc_frame
    #endif

    using namespace std;
    using namespace cv;

    static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
    {
       AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;

       char buf1[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_string(buf1, pkt->pts);
       char buf2[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_string(buf2, pkt->dts);
       char buf3[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_string(buf3, pkt->duration);

       char buf4[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_time_string(buf4, pkt->pts, time_base);
       char buf5[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_time_string(buf5, pkt->dts, time_base);
       char buf6[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_time_string(buf6, pkt->duration, time_base);

       printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
           buf1, buf4,
           buf2, buf5,
           buf3, buf6,
           pkt->stream_index);

    }


    int main(int argc, char **argv)
    {
       AVOutputFormat *ofmt = NULL;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVPacket pkt;
       AVFrame *pFrame = NULL;
       AVFrame *pFrameRGB = NULL;
       int frameFinished = 0;
       pFrame = av_frame_alloc();
       pFrameRGB = av_frame_alloc();

       const char *in_filename, *out_filename;
       int ret, i;
       in_filename = "../../TestClips/Audio Video Sync Test.mp4";
       out_filename = "out.mp4";

       // Initialize FFMPEG
       av_register_all();
       // Get input file format context
       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0)
       {
           fprintf(stderr, "Could not open input file '%s'", in_filename);
           goto end;
       }
       // Extract streams description
       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0)
       {
           fprintf(stderr, "Failed to retrieve input stream information");
           goto end;
       }
       // Print detailed information about the input or output format,
       // such as duration, bitrate, streams, container, programs, metadata, side data, codec and time base.
       av_dump_format(ifmt_ctx, 0, in_filename, 0);

       // Allocate an AVFormatContext for an output format.
       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx)
       {
           fprintf(stderr, "Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       // The output container format.
       ofmt = ofmt_ctx->oformat;

       // Allocating output streams
       for (i = 0; i < ifmt_ctx->nb_streams; i++)
       {
           AVStream *in_stream = ifmt_ctx->streams[i];
           AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
           if (!out_stream)
           {
               fprintf(stderr, "Failed allocating output stream\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }
           ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
           if (ret < 0)
           {
               fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
               goto end;
           }
           out_stream->codec->codec_tag = 0;
           if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
           {
               out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
           }
       }

       // Show output format info
       av_dump_format(ofmt_ctx, 0, out_filename, 1);

       // Open output file
       if (!(ofmt->flags & AVFMT_NOFILE))
       {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0)
           {
               fprintf(stderr, "Could not open output file '%s'", out_filename);
               goto end;
           }
       }
       // Write output file header
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0)
       {
           fprintf(stderr, "Error occurred when opening output file\n");
           goto end;
       }

       // Search for input video codec info
       AVCodec *in_codec = nullptr;
       AVCodecContext* avctx = nullptr;

       int video_stream_index = -1;
       for (int i = 0; i < ifmt_ctx->nb_streams; i++)
       {
           if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
           {
               video_stream_index = i;
               avctx = ifmt_ctx->streams[i]->codec;
               in_codec = avcodec_find_decoder(avctx->codec_id);
               if (!in_codec)
               {
                   fprintf(stderr, "in codec not found\n");
                   exit(1);
               }
               break;
           }
       }

       // Search for output video codec info
       AVCodec *out_codec = nullptr;
       AVCodecContext* o_avctx = nullptr;

       int o_video_stream_index = -1;
       for (int i = 0; i < ofmt_ctx->nb_streams; i++)
       {
           if (ofmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
           {
               o_video_stream_index = i;
               o_avctx = ofmt_ctx->streams[i]->codec;
               out_codec = avcodec_find_encoder(o_avctx->codec_id);
               if (!out_codec)
               {
                   fprintf(stderr, "out codec not found\n");
                   exit(1);
               }
               break;
           }
       }

       // openCV pixel format
       AVPixelFormat pFormat = AV_PIX_FMT_RGB24;
       // Data size
       int numBytes = avpicture_get_size(pFormat, avctx->width, avctx->height);
       // allocate buffer
       uint8_t *buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
       // fill frame structure
       avpicture_fill((AVPicture *)pFrameRGB, buffer, pFormat, avctx->width, avctx->height);
       // frame area
       int y_size = avctx->width * avctx->height;
       // Open input codec
       avcodec_open2(avctx, in_codec, NULL);
       // Main loop
       while (1)
       {
           AVStream *in_stream, *out_stream;
           ret = av_read_frame(ifmt_ctx, &pkt);
           if (ret < 0)
           {
               break;
           }
           in_stream = ifmt_ctx->streams[pkt.stream_index];
           out_stream = ofmt_ctx->streams[pkt.stream_index];
           log_packet(ifmt_ctx, &pkt, "in");
           // copy packet
           pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
           pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
           pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
           pkt.pos = -1;

           log_packet(ofmt_ctx, &pkt, "out");
           if (pkt.stream_index == video_stream_index)
           {
               avcodec_decode_video2(avctx, pFrame, &frameFinished, &pkt);
               if (frameFinished)
               {
                   struct SwsContext *img_convert_ctx;
                   img_convert_ctx = sws_getCachedContext(NULL,
                       avctx->width,
                       avctx->height,
                       avctx->pix_fmt,
                       avctx->width,
                       avctx->height,
                       AV_PIX_FMT_BGR24,
                       SWS_BICUBIC,
                       NULL,
                       NULL,
                       NULL);
                   sws_scale(img_convert_ctx,
                       ((AVPicture*)pFrame)->data,
                       ((AVPicture*)pFrame)->linesize,
                       0,
                       avctx->height,
                       ((AVPicture *)pFrameRGB)->data,
                       ((AVPicture *)pFrameRGB)->linesize);

                   sws_freeContext(img_convert_ctx);

                   // Do some image processing
                   cv::Mat img(pFrame->height, pFrame->width, CV_8UC3, pFrameRGB->data[0],false);
                   cv::GaussianBlur(img,img,Size(5,5),3);
                   cv::imshow("Display", img);
                   cv::waitKey(5);
                   // --------------------------------
                   // Transform back to initial format
                   // --------------------------------
                   img_convert_ctx = sws_getCachedContext(NULL,
                       avctx->width,
                       avctx->height,
                       AV_PIX_FMT_BGR24,
                       avctx->width,
                       avctx->height,
                       avctx->pix_fmt,
                       SWS_BICUBIC,
                       NULL,
                       NULL,
                       NULL);
                   sws_scale(img_convert_ctx,
                       ((AVPicture*)pFrameRGB)->data,
                       ((AVPicture*)pFrameRGB)->linesize,
                       0,
                       avctx->height,
                       ((AVPicture *)pFrame)->data,
                       ((AVPicture *)pFrame)->linesize);
                        // --------------------------------------------
                        // Something must be here
                        // --------------------------------------------
                        //
                        // Write video frame (how to write the frame to the output stream? see the sketch after this listing)
                        //
                        // --------------------------------------------
                        sws_freeContext(img_convert_ctx);
               }

           }
           else // write sound frame
           {
               ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
           }
           if (ret < 0)
           {
               fprintf(stderr, "Error muxing packet\n");
               break;
           }
           // Decrease packet ref counter
           av_packet_unref(&pkt);
       }
       av_write_trailer(ofmt_ctx);
    end:
       avformat_close_input(&ifmt_ctx);
       // close output
       if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
       {
           avio_closep(&ofmt_ctx->pb);
       }
       avformat_free_context(ofmt_ctx);
       if (ret < 0 && ret != AVERROR_EOF)
       {
           char buf_err[AV_ERROR_MAX_STRING_SIZE] = { 0 };
           av_make_error_string(buf_err, AV_ERROR_MAX_STRING_SIZE, ret);
           fprintf(stderr, "Error occurred: %s\n", buf_err);
           return 1;
       }

       avcodec_close(avctx);
       av_free(pFrame);
       av_free(pFrameRGB);

       return 0;
    }
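
    A hedged sketch of the missing write step, using the same deprecated API family as the listing above (avcodec_encode_video2) and the headers it already includes. It is not taken from the original question; it assumes the output encoder context o_avctx has been opened with avcodec_open2(o_avctx, out_codec, NULL) after copying width, height, pix_fmt and time_base from the input context:

    // Hypothetical helper (not from the original post): encode one processed frame
    // with the old avcodec_encode_video2() API and mux it into the output file.
    static int write_video_frame(AVFormatContext *ofmt_ctx, AVCodecContext *o_avctx,
                                 AVStream *in_stream, AVStream *out_stream,
                                 AVFrame *frame, int64_t src_pts)
    {
        AVPacket enc_pkt;
        av_init_packet(&enc_pkt);
        enc_pkt.data = NULL;  // let the encoder allocate the output buffer
        enc_pkt.size = 0;

        // Assume src_pts is the decoded packet's pts in the input stream's time base.
        frame->pts = src_pts;

        int got_packet = 0;
        int ret = avcodec_encode_video2(o_avctx, &enc_pkt, frame, &got_packet);
        if (ret < 0 || !got_packet)
            return ret;

        // Convert timestamps to the output stream's time base and send to the muxer.
        av_packet_rescale_ts(&enc_pkt, in_stream->time_base, out_stream->time_base);
        enc_pkt.stream_index = out_stream->index;

        ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
        av_packet_unref(&enc_pkt);
        return ret;
    }

    Called in place of the "Something must be here" comment with the processed pFrame and the decoded packet's original pts, and followed by a flush loop that passes NULL as the frame before av_write_trailer, this should get encoded video packets into the muxer; it is a sketch under those assumptions, not a verified drop-in fix.
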
  • FFmpeg RTSP drop rate increases when frame rate is reduced

    13 April 2024, by Avishka Perera

    I need to read an RTSP stream, process the images individually in Python, and then write the images back to an RTSP stream. As the RTSP server, I am using Mediamtx [1]. For streaming, I am using FFmpeg [2].


    I have the following code that works perfectly fine. For simplification purposes, I am streaming three generated images.


    import time
    import numpy as np
    import subprocess

    width, height = 640, 480
    fps = 25
    rtsp_server_address = f"rtsp://localhost:8554/mystream"

    ffmpeg_cmd = [
        "ffmpeg",
        "-re",
        "-f",
        "rawvideo",
        "-pix_fmt",
        "rgb24",
        "-s",
        f"{width}x{height}",
        "-i",
        "-",
        "-r",
        str(fps),
        "-avoid_negative_ts",
        "make_zero",
        "-vcodec",
        "libx264",
        "-threads",
        "4",
        "-f",
        "rtsp",
        rtsp_server_address,
    ]
    colors = np.array(
        [
            [255, 0, 0],
            [0, 255, 0],
            [0, 0, 255],
        ]
    ).reshape(3, 1, 1, 3)
    images = (np.ones((3, width, height, 3)) * colors).astype(np.uint8)

    if __name__ == "__main__":

        process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)
        start = time.time()
        exported = 0
        while True:
            exported += 1
            next_time = start + exported / fps
            now = time.time()
            if next_time > now:
                sleep_dur = next_time - now
                time.sleep(sleep_dur)

            image = images[exported % 3]
            image_bytes = image.tobytes()

            process.stdin.write(image_bytes)
            process.stdin.flush()

        process.stdin.close()
        process.wait()


    The issue is that I need to run this at 10 fps, because the processing step is heavy and can only manage 10 fps. As I reduce the frame rate from 25 to 10, the drop rate increases from 0% to 100%, and after a few iterations I get a BrokenPipeError: [Errno 32] Broken pipe. Refer to the appendix for the complete log.
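
    A hedged guess, not verified against this exact setup: the rawvideo demuxer assumes 25 fps for piped input unless told otherwise (which matches the "25 tbr, 25 tbn" reported for the input in the appendix), so declaring the real input rate before -i, for example by inserting "-framerate", "10" (or "-r", "10") ahead of "-i", "-" in ffmpeg_cmd, may remove the 25-versus-10 mismatch that forces frames to be dropped. The equivalent command line would be:

    ffmpeg -re -f rawvideo -pix_fmt rgb24 -s 640x480 -framerate 10 -i - -avoid_negative_ts make_zero -vcodec libx264 -threads 4 -f rtsp rtsp://localhost:8554/mystream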


    As an alternative, I could use OpenCV compiled from source with GStreamer [3], but I prefer using FFmpeg to keep shipping simple, since compiling OpenCV from source can be tedious and system-dependent.


    References


    [1] Mediamtx (formerly rtsp-simple-server): https://github.com/bluenviron/mediamtx

    [2] FFmpeg: https://github.com/FFmpeg/FFmpeg

    [3] Compile OpenCV with GStreamer: https://github.com/bluenviron/mediamtx?tab=readme-ov-file#opencv


    Appendix


    Creating the source stream


    To instantiate the unprocessed stream, I use the following command. This streams the content of my webcam as an RTSP stream.


    ffmpeg -video_size 1280x720 -i /dev/video0  -avoid_negative_ts make_zero -vcodec libx264 -r 10 -f rtsp rtsp://localhost:8554/webcam


    Error log


ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12.3.0 (conda-forge gcc 12.3.0-5)
  configuration: --prefix=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --disable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --enable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libopus --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/pkg-config
  libavutil      58. 29.100 / 58. 29.100
  libavcodec     60. 31.102 / 60. 31.102
  libavformat    60. 16.100 / 60. 16.100
  libavdevice    60.  3.100 / 60.  3.100
  libavfilter     9. 12.100 /  9. 12.100
  libswscale      7.  5.100 /  7.  5.100
  libswresample   4. 12.100 /  4. 12.100
  libpostproc    57.  3.100 / 57.  3.100
Input #0, rawvideo, from 'fd:':
  Duration: N/A, start: 0.000000, bitrate: 184320 kb/s
  Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 640x480, 184320 kb/s, 25 tbr, 25 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
[libx264 @ 0x5e2ef8b01340] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x5e2ef8b01340] profile High 4:4:4 Predictive, level 2.2, 4:4:4, 8-bit
[libx264 @ 0x5e2ef8b01340] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, rtsp, to 'rtsp://localhost:8554/mystream':
  Metadata:
    encoder         : Lavf60.16.100
  Stream #0:0: Video: h264, yuv444p(tv, progressive), 640x480, q=2-31, 10 fps, 90k tbn
    Metadata:
      encoder         : Lavc60.31.102 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[vost#0:0/libx264 @ 0x5e2ef8b01080] Error submitting a packet to the muxer: Broken pipe
[out#0/rtsp @ 0x5e2ef8afd780] Error muxing a packet
[out#0/rtsp @ 0x5e2ef8afd780] video:1kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
frame=    1 fps=0.1 q=-1.0 Lsize=N/A time=00:00:04.70 bitrate=N/A dup=0 drop=70 speed=0.389x
[libx264 @ 0x5e2ef8b01340] frame I:16    Avg QP: 6.00  size:   147
[libx264 @ 0x5e2ef8b01340] frame P:17    Avg QP: 9.94  size:   101
[libx264 @ 0x5e2ef8b01340] frame B:17    Avg QP: 9.94  size:    64
[libx264 @ 0x5e2ef8b01340] consecutive B-frames: 50.0%  0.0% 42.0%  8.0%
[libx264 @ 0x5e2ef8b01340] mb I  I16..4: 81.3% 18.7%  0.0%
[libx264 @ 0x5e2ef8b01340] mb P  I16..4: 52.9%  0.0%  0.0%  P16..4:  0.0%  0.0%  0.0%  0.0%  0.0%    skip:47.1%
[libx264 @ 0x5e2ef8b01340] mb B  I16..4:  0.0%  5.9%  0.0%  B16..8:  0.1%  0.0%  0.0%  direct: 0.0%  skip:94.0%  L0:56.2% L1:43.8% BI: 0.0%
[libx264 @ 0x5e2ef8b01340] 8x8 transform intra:15.4% inter:100.0%
[libx264 @ 0x5e2ef8b01340] coded y,u,v intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x5e2ef8b01340] i16 v,h,dc,p: 97%  0%  3%  0%
[libx264 @ 0x5e2ef8b01340] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  0%  0% 100%  0%  0%  0%  0%  0%  0%
[libx264 @ 0x5e2ef8b01340] Weighted P-Frames: Y:52.9% UV:52.9%
[libx264 @ 0x5e2ef8b01340] ref P L0: 88.9%  0.0%  0.0% 11.1%
[libx264 @ 0x5e2ef8b01340] kb/s:8.27
Conversion failed!
Traceback (most recent call last):
  File "/home/avishka/projects/read-process-stream/minimal-ffmpeg-error.py", line 58, in <module>
    process.stdin.write(image_bytes)
BrokenPipeError: [Errno 32] Broken pipe
