
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (35)
-
Libraries and binaries specific to video and audio processing
31 January 2010, by
The following software packages and libraries are used by SPIPmotion in one way or another.
Required binaries: FFMpeg: the main encoder, which can transcode almost every type of video and audio file into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: inspection tools for ogg files; Mediainfo: retrieval of information from most video and audio formats;
Complementary, optional binaries: flvtool2: (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First of all, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behavior: retrieval of the technical information of the file's audio and video streams, and generation of a thumbnail by extracting a (...)
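
As a rough illustration of the two steps just described (retrieving stream information and extracting a thumbnail), here is a small sketch using ffprobe and ffmpeg from Python; it is not SPIPmotion's actual code, and the file names are invented:

import json
import subprocess

SOURCE = "source_video.mp4"  # hypothetical attached document

# Step 1: retrieve technical information about the file's audio and video streams.
probe = subprocess.run(
    ["ffprobe", "-v", "error", "-show_format", "-show_streams", "-of", "json", SOURCE],
    capture_output=True, text=True, check=True,
)
stream_info = json.loads(probe.stdout)

# Step 2: generate a thumbnail by extracting a single frame.
subprocess.run(
    ["ffmpeg", "-y", "-ss", "1", "-i", SOURCE, "-vframes", "1", "thumbnail.jpg"],
    check=True,
)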
On other sites (3393)
-
ffmpeg piped output producing incorrect metadata frame count
8 December 2024, by Xorgon
The short version: using piped output from ffmpeg produces a file with incorrect metadata.


Use

ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi

to make an AVI file using the pipe output, then run

ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi

The output will show that the metadata does not match the actual frames contained in the video.


Details below.



Using Python, I am attempting to use ffmpeg to compress videos and put them in a PowerPoint. This works great; however, the video files themselves have incorrect frame counts, which can cause issues when I read from those videos in other code.


Edit for clarification: by "frame count" I mean the metadata frame count. The actual number of frames contained in the video is correct, but querying the metadata gives an incorrect frame count.
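
To make that distinction concrete, the metadata count and the decoded count can be compared directly with ffprobe; this is only a sketch, and the file name piped_video.avi is the one written by the pipe example further down:

import subprocess

# Compare the container's metadata frame count (nb_frames) with the number of
# frames ffprobe actually decodes (nb_read_frames, requires -count_frames).
subprocess.run(
    ["ffprobe", "-v", "error", "-count_frames",
     "-show_entries", "stream=nb_frames,nb_read_frames",
     "piped_video.avi"],
    check=True,
)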


Having eliminated the PowerPoint aspect of the code, I've narrowed this down to the following minimal reproducing example of saving an output from an ffmpeg pipe:


from subprocess import Popen, PIPE

video_path = 'test_mp4.mp4'

ffmpeg_pipe = Popen(['ffmpeg',
 '-y', # Overwrite files
 '-i', f'{video_path}', # Input from file
 '-f', 'avi', # Output format
 '-c:v', 'libx264', # Codec
 '-'], # Output to pipe
 stdout=PIPE)

new_path = "piped_video.avi"
vid_file = open(new_path, "wb")
vid_file.write(ffmpeg_pipe.stdout.read())
vid_file.close()



I've tested several different videos. One small example video that I've tested can be found here.


I've tried a few different codecs with the avi format, and tried libvpx with the webm format. For the avi outputs, the frame count usually reads as 1073741824 (2^30). Weirdly, for the webm format, the frame count read as -276701161105643264.
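
For what it's worth, remuxing the piped file to a seekable output with stream copy appears to be one way to get a sane header again. This is only a sketch, based on the assumption that a muxer writing to a regular file can seek back and finalize the AVI header; it is not something verified in the original post, and the file names are examples:

import subprocess

# Remux the piped output to a regular file with stream copy so the AVI muxer
# can seek back and write a correct header and index.
subprocess.run(
    ["ffmpeg", "-y", "-i", "piped_video.avi", "-c", "copy", "remuxed.avi"],
    check=True,
)
# Re-running the ffprobe command from above on remuxed.avi should then report
# a frame count consistent with the frames actually counted.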

Edit: this issue can also be reproduced with just ffmpeg in the command prompt, using the following command:

ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi


This is a snippet I used to read the frame count, but one could also see the error by opening the video details in Windows Explorer and seeing the total time as something like 9942 hours, 3 minutes, and 14 seconds.


import cv2

video_path = 'test_mp4.mp4'
new_path = "piped_video.webm"

cap = cv2.VideoCapture(video_path)
print(f"Original video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
cap.release()

cap = cv2.VideoCapture(new_path)
print(f"Piped video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
cap.release()



The error can also be observed using ffprobe with the following command:

ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi

Note that the frame rate and number of frames counted by ffprobe do not match the duration from the metadata.

For completeness, here is the ffmpeg output:


ffmpeg version 2023-06-11-git-09621fd7d9-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
 libavutil 58. 13.100 / 58. 13.100
 libavcodec 60. 17.100 / 60. 17.100
 libavformat 60. 6.100 / 60. 6.100
 libavdevice 60. 2.100 / 60. 2.100
 libavfilter 9. 8.101 / 9. 8.101
 libswscale 7. 3.100 / 7. 3.100
 libswresample 4. 11.100 / 4. 11.100
 libpostproc 57. 2.100 / 57. 2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_mp4.mp4':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: isommp42
 creation_time : 2022-08-10T12:54:09.000000Z
 Duration: 00:00:06.67, start: 0.000000, bitrate: 567 kb/s
 Stream #0:0[0x1](eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], 563 kb/s, 30 fps, 30 tbr, 30k tbn (default)
 Metadata:
 creation_time : 2022-08-10T12:54:09.000000Z
 handler_name : Mainconcept MP4 Video Media Handler
 vendor_id : [0][0][0][0]
 encoder : AVC Coding
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0000018c68c8b9c0] using SAR=1/1
[libx264 @ 0000018c68c8b9c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000018c68c8b9c0] profile High, level 2.1, 4:2:0, 8-bit
Output #0, avi, to 'pipe:':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: isommp42
 ISFT : Lavf60.6.100
 Stream #0:0(eng): Video: h264 (H264 / 0x34363248), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], q=2-31, 30 fps, 30 tbn (default)
 Metadata:
 creation_time : 2022-08-10T12:54:09.000000Z
 handler_name : Mainconcept MP4 Video Media Handler
 vendor_id : [0][0][0][0]
 encoder : Lavc60.17.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[out#0/avi @ 0000018c687f47c0] video:82kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.631060%
frame= 200 fps=0.0 q=-1.0 Lsize= 85kB time=00:00:06.56 bitrate= 106.5kbits/s speed=76.2x 
[libx264 @ 0000018c68c8b9c0] frame I:1 Avg QP:16.12 size: 3659
[libx264 @ 0000018c68c8b9c0] frame P:80 Avg QP:21.31 size: 647
[libx264 @ 0000018c68c8b9c0] frame B:119 Avg QP:26.74 size: 243
[libx264 @ 0000018c68c8b9c0] consecutive B-frames: 3.0% 53.0% 0.0% 44.0%
[libx264 @ 0000018c68c8b9c0] mb I I16..4: 17.6% 70.6% 11.8%
[libx264 @ 0000018c68c8b9c0] mb P I16..4: 0.8% 1.7% 0.6% P16..4: 17.6% 4.6% 3.3% 0.0% 0.0% skip:71.4%
[libx264 @ 0000018c68c8b9c0] mb B I16..4: 0.1% 0.3% 0.2% B16..8: 11.7% 1.4% 0.4% direct: 0.6% skip:85.4% L0:32.0% L1:59.7% BI: 8.3%
[libx264 @ 0000018c68c8b9c0] 8x8 transform intra:59.6% inter:62.4%
[libx264 @ 0000018c68c8b9c0] coded y,uvDC,uvAC intra: 48.5% 0.0% 0.0% inter: 3.5% 0.0% 0.0%
[libx264 @ 0000018c68c8b9c0] i16 v,h,dc,p: 19% 39% 25% 17%
[libx264 @ 0000018c68c8b9c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 25% 30% 3% 3% 4% 4% 4% 5%
[libx264 @ 0000018c68c8b9c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 20% 16% 6% 8% 8% 8% 5% 6%
[libx264 @ 0000018c68c8b9c0] i8c dc,h,v,p: 100% 0% 0% 0%
[libx264 @ 0000018c68c8b9c0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0000018c68c8b9c0] ref P L0: 76.2% 7.9% 11.2% 4.7%
[libx264 @ 0000018c68c8b9c0] ref B L0: 85.6% 12.9% 1.5%
[libx264 @ 0000018c68c8b9c0] ref B L1: 97.7% 2.3%
[libx264 @ 0000018c68c8b9c0] kb/s:101.19



So the question is: why does this happen, and how can one avoid it?


-
Cannot add tmcd stream using libavcodec to replicate behavior of ffmpeg -timecode option
2 August, by Sailor Jerry
I'm trying to replicate the ffmpeg -timecode command-line option in my C/C++ code. For some reason the tmcd stream is not written to the output file. However, av_dump_format shows it at run time.


Here is my minimal test


#include <iostream>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libswscale/swscale.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <libavutil/samplefmt.h>
}
bool checkProResAvailability() {
 const AVCodec* codec = avcodec_find_encoder_by_name("prores_ks");
 if (!codec) {
 std::cerr << "ProRes codec not available. Please install FFmpeg with ProRes support." << std::endl;
 return false;
 }
 return true;
}

int main(){
 av_log_set_level(AV_LOG_INFO);

 const char* outputFileName = "test_tmcd.mov";
 AVFormatContext* formatContext = nullptr;
 AVCodecContext* videoCodecContext = nullptr;

 if (!checkProResAvailability()) {
 return -1;
 }

 std::cout << "Creating test file with tmcd stream: " << outputFileName << std::endl;

 // Allocate the output format context
 if (avformat_alloc_output_context2(&formatContext, nullptr, "mov", outputFileName) < 0) {
 std::cerr << "Failed to allocate output context!" << std::endl;
 return -1;
 }

 if (avio_open(&formatContext->pb, outputFileName, AVIO_FLAG_WRITE) < 0) {
 std::cerr << "Failed to open output file!" << std::endl;
 avformat_free_context(formatContext);
 return -1;
 }

 // Find ProRes encoder
 const AVCodec* videoCodec = avcodec_find_encoder_by_name("prores_ks");
 if (!videoCodec) {
 std::cerr << "Failed to find the ProRes encoder!" << std::endl;
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Video stream setup
 AVStream* videoStream = avformat_new_stream(formatContext, nullptr);
 if (!videoStream) {
 std::cerr << "Failed to create video stream!" << std::endl;
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 videoCodecContext = avcodec_alloc_context3(videoCodec);
 if (!videoCodecContext) {
 std::cerr << "Failed to allocate video codec context!" << std::endl;
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 videoCodecContext->width = 1920;
 videoCodecContext->height = 1080;
 videoCodecContext->pix_fmt = AV_PIX_FMT_YUV422P10;
 videoCodecContext->time_base = (AVRational){1, 30}; // Set FPS: 30
 videoCodecContext->bit_rate = 2000000;

 if (avcodec_open2(videoCodecContext, videoCodec, nullptr) < 0) {
 std::cerr << "Failed to open ProRes codec!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 if (avcodec_parameters_from_context(videoStream->codecpar, videoCodecContext) < 0) {
 std::cerr << "Failed to copy codec parameters to video stream!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 videoStream->time_base = videoCodecContext->time_base;

 // Timecode stream setup
 AVStream* timecodeStream = avformat_new_stream(formatContext, nullptr);
 if (!timecodeStream) {
 std::cerr << "Failed to create timecode stream!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 timecodeStream->codecpar->codec_type = AVMEDIA_TYPE_DATA;
 timecodeStream->codecpar->codec_id = AV_CODEC_ID_TIMED_ID3;
 timecodeStream->codecpar->codec_tag = MKTAG('t', 'm', 'c', 'd'); // Timecode tag
 timecodeStream->time_base = (AVRational){1, 30}; // FPS: 30

 if (av_dict_set(&timecodeStream->metadata, "timecode", "00:00:30:00", 0) < 0) {
 std::cerr << "Failed to set timecode metadata!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Write container header
 if (avformat_write_header(formatContext, nullptr) < 0) {
 std::cerr << "Failed to write file header!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Encode a dummy video frame
 AVFrame* frame = av_frame_alloc();
 if (!frame) {
 std::cerr << "Failed to allocate video frame!" << std::endl;
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 frame->format = videoCodecContext->pix_fmt;
 frame->width = videoCodecContext->width;
 frame->height = videoCodecContext->height;

 if (av_image_alloc(frame->data, frame->linesize, frame->width, frame->height, videoCodecContext->pix_fmt, 32) < 0) {
 std::cerr << "Failed to allocate frame buffer!" << std::endl;
 av_frame_free(&frame);
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);
 return -1;
 }

 // Fill frame with black
 memset(frame->data[0], 0, frame->linesize[0] * frame->height); // Y plane
 memset(frame->data[1], 128, frame->linesize[1] * frame->height / 2); // U plane
 memset(frame->data[2], 128, frame->linesize[2] * frame->height / 2); // V plane

 // Encode the frame
 AVPacket packet;
 av_init_packet(&packet);
 packet.data = nullptr;
 packet.size = 0;

 if (avcodec_send_frame(videoCodecContext, frame) == 0) {
 if (avcodec_receive_packet(videoCodecContext, &packet) == 0) {
 packet.stream_index = videoStream->index;
 av_interleaved_write_frame(formatContext, &packet);
 av_packet_unref(&packet);
 }
 }

 av_frame_free(&frame);

 // Write a dummy packet for the timecode stream
 AVPacket tmcdPacket;
 av_init_packet(&tmcdPacket);
 tmcdPacket.stream_index = timecodeStream->index;
 tmcdPacket.flags |= AV_PKT_FLAG_KEY;
 tmcdPacket.data = nullptr; // Empty packet for timecode
 tmcdPacket.size = 0;
 tmcdPacket.pts = 0; // Set necessary PTS
 tmcdPacket.dts = 0;
 av_interleaved_write_frame(formatContext, &tmcdPacket);

 // Write trailer
 if (av_write_trailer(formatContext) < 0) {
 std::cerr << "Failed to write file trailer!" << std::endl;
 }

 av_dump_format(formatContext, 0, "test.mov", 1);

 // Cleanup
 avcodec_free_context(&videoCodecContext);
 avio_close(formatContext->pb);
 avformat_free_context(formatContext);

 std::cout << "Test file with timecode created successfully: " << outputFileName << std::endl;

 return 0;
}


The code output is:


Creating test file with tmcd stream: test_tmcd.mov
[prores_ks @ 0x11ce05790] Autoselected HQ profile to keep best quality. It can be overridden through -profile option.
[mov @ 0x11ce04f20] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mov @ 0x11ce04f20] Encoder did not produce proper pts, making some up.
Output #0, mov, to 'test.mov':
 Metadata:
 encoder : Lavf61.7.100
 Stream #0:0: Video: prores (HQ) (apch / 0x68637061), yuv422p10le, 1920x1080, q=2-31, 2000 kb/s, 15360 tbn
 Stream #0:1: Data: timed_id3 (tmcd / 0x64636D74)
 Metadata:
 timecode : 00:00:30:00
Test file with timecode created successfully: test_tmcd.mov



The ffprobe output is:


$ ffprobe test_tmcd.mov
ffprobe version 7.1.1 Copyright (c) 2007-2025 the FFmpeg developers
 built with Apple clang version 16.0.0 (clang-1600.0.26.6)
 configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/7.1.1_3 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags='-Wl,-ld_classic' --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon
 libavutil 59. 39.100 / 59. 39.100
 libavcodec 61. 19.101 / 61. 19.101
 libavformat 61. 7.100 / 61. 7.100
 libavdevice 61. 3.100 / 61. 3.100
 libavfilter 10. 4.100 / 10. 4.100
 libswscale 8. 3.100 / 8. 3.100
 libswresample 5. 3.100 / 5. 3.100
 libpostproc 58. 3.100 / 58. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_tmcd.mov':
 Metadata:
 major_brand : qt 
 minor_version : 512
 compatible_brands: qt 
 encoder : Lavf61.7.100
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0[0x1]: Video: prores (HQ) (apch / 0x68637061), yuv422p10le, 1920x1080, 15360 tbn (default)
 Metadata:
 handler_name : VideoHandler
 vendor_id : FFMP
$ 




Spent hours with all the AI models, no help. Appealing to human intelligence now.
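
For comparison, the command-line behavior being replicated can be exercised from Python and the result probed for a tmcd stream. This is only a sketch: the input clip name is an assumption, and -timecode is simply the ffmpeg option named in the title:

import subprocess

SRC = "input.mov"  # hypothetical source clip containing a video stream

# Let the ffmpeg CLI attach the timecode, then list stream types and codec tags
# to check whether a tmcd stream was written.
subprocess.run(
    ["ffmpeg", "-y", "-i", SRC, "-c", "copy", "-timecode", "00:00:30:00", "cli_tmcd.mov"],
    check=True,
)
subprocess.run(
    ["ffprobe", "-v", "error",
     "-show_entries", "stream=index,codec_type,codec_tag_string",
     "cli_tmcd.mov"],
    check=True,
)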


-
FFmpeg RTSP drop rate increases when frame rate is reduced
13 April 2024, by Avishka Perera
I need to read an RTSP stream, process the images individually in Python, and then write the images back to an RTSP stream. As the RTSP server, I am using Mediamtx [1]. For streaming, I am using FFmpeg [2].


I have the following code that works perfectly fine. For simplification purposes, I am streaming three generated images.


import time
import numpy as np
import subprocess

width, height = 640, 480
fps = 25
rtsp_server_address = f"rtsp://localhost:8554/mystream"

ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f",
    "rawvideo",
    "-pix_fmt",
    "rgb24",
    "-s",
    f"{width}x{height}",
    "-i",
    "-",
    "-r",
    str(fps),
    "-avoid_negative_ts",
    "make_zero",
    "-vcodec",
    "libx264",
    "-threads",
    "4",
    "-f",
    "rtsp",
    rtsp_server_address,
]
colors = np.array(
    [
        [255, 0, 0],
        [0, 255, 0],
        [0, 0, 255],
    ]
).reshape(3, 1, 1, 3)
images = (np.ones((3, width, height, 3)) * colors).astype(np.uint8)

if __name__ == "__main__":

    process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)
    start = time.time()
    exported = 0
    while True:
        exported += 1
        next_time = start + exported / fps
        now = time.time()
        if next_time > now:
            sleep_dur = next_time - now
            time.sleep(sleep_dur)

        image = images[exported % 3]
        image_bytes = image.tobytes()

        process.stdin.write(image_bytes)
        process.stdin.flush()

    process.stdin.close()
    process.wait()



The issue is that I need to run this at 10 fps, because the processing step is heavy and can only afford 10 fps. Hence, as I reduce the frame rate from 25 to 10, the drop rate increases from 0% to 100%, and after a few iterations I get a BrokenPipeError: [Errno 32] Broken pipe. Refer to the appendix for the complete log.
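
One detail that may matter here (an assumption on my part, not something stated above): the rawvideo demuxer assumes 25 fps unless the input rate is declared, so when frames are only supplied at 10 fps the input and output rates disagree. A sketch of passing the rate as an input option before -i, with the other values copied from the snippet above:

import subprocess

width, height = 640, 480
fps = 10
rtsp_server_address = "rtsp://localhost:8554/mystream"

# Declare the real input rate to the rawvideo demuxer with "-framerate" placed
# before "-i -"; without it, the demuxer falls back to its 25 fps default.
ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f", "rawvideo",
    "-pix_fmt", "rgb24",
    "-s", f"{width}x{height}",
    "-framerate", str(fps),  # input option: rate at which raw frames arrive
    "-i", "-",
    "-r", str(fps),          # output rate
    "-avoid_negative_ts", "make_zero",
    "-vcodec", "libx264",
    "-threads", "4",
    "-f", "rtsp",
    rtsp_server_address,
]

process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)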

As an alternative, I could use OpenCV compiled from source with GStreamer [3], but I prefer using FFmpeg to keep the shipping process simple, since compiling OpenCV from source can be tedious and system-dependent.


References


[1] Mediamtx (formerly rtsp-simple-server) : https://github.com/bluenviron/mediamtx


[2] FFmpeg : https://github.com/FFmpeg/FFmpeg


[3] Compile OpenCV with GStreamer : https://github.com/bluenviron/mediamtx?tab=readme-ov-file#opencv


Appendix


Creating the source stream


To instantiate the unprocessed stream, I use the following command. This streams the content of my webcam as an RTSP stream.


ffmpeg -video_size 1280x720 -i /dev/video0 -avoid_negative_ts make_zero -vcodec libx264 -r 10 -f rtsp rtsp://localhost:8554/webcam



Error log


ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12.3.0 (conda-forge gcc 12.3.0-5)
 configuration: --prefix=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --disable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --enable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libopus --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/pkg-config
 libavutil 58. 29.100 / 58. 29.100
 libavcodec 60. 31.102 / 60. 31.102
 libavformat 60. 16.100 / 60. 16.100
 libavdevice 60. 3.100 / 60. 3.100
 libavfilter 9. 12.100 / 9. 12.100
 libswscale 7. 5.100 / 7. 5.100
 libswresample 4. 12.100 / 4. 12.100
 libpostproc 57. 3.100 / 57. 3.100
Input #0, rawvideo, from 'fd:':
 Duration: N/A, start: 0.000000, bitrate: 184320 kb/s
 Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 640x480, 184320 kb/s, 25 tbr, 25 tbn
Stream mapping:
 Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
[libx264 @ 0x5e2ef8b01340] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x5e2ef8b01340] profile High 4:4:4 Predictive, level 2.2, 4:4:4, 8-bit
[libx264 @ 0x5e2ef8b01340] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, rtsp, to 'rtsp://localhost:8554/mystream':
 Metadata:
 encoder : Lavf60.16.100
 Stream #0:0: Video: h264, yuv444p(tv, progressive), 640x480, q=2-31, 10 fps, 90k tbn
 Metadata:
 encoder : Lavc60.31.102 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[vost#0:0/libx264 @ 0x5e2ef8b01080] Error submitting a packet to the muxer: Broken pipe 
[out#0/rtsp @ 0x5e2ef8afd780] Error muxing a packet
[out#0/rtsp @ 0x5e2ef8afd780] video:1kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
frame= 1 fps=0.1 q=-1.0 Lsize=N/A time=00:00:04.70 bitrate=N/A dup=0 drop=70 speed=0.389x 
[libx264 @ 0x5e2ef8b01340] frame I:16 Avg QP: 6.00 size: 147
[libx264 @ 0x5e2ef8b01340] frame P:17 Avg QP: 9.94 size: 101
[libx264 @ 0x5e2ef8b01340] frame B:17 Avg QP: 9.94 size: 64
[libx264 @ 0x5e2ef8b01340] consecutive B-frames: 50.0% 0.0% 42.0% 8.0%
[libx264 @ 0x5e2ef8b01340] mb I I16..4: 81.3% 18.7% 0.0%
[libx264 @ 0x5e2ef8b01340] mb P I16..4: 52.9% 0.0% 0.0% P16..4: 0.0% 0.0% 0.0% 0.0% 0.0% skip:47.1%
[libx264 @ 0x5e2ef8b01340] mb B I16..4: 0.0% 5.9% 0.0% B16..8: 0.1% 0.0% 0.0% direct: 0.0% skip:94.0% L0:56.2% L1:43.8% BI: 0.0%
[libx264 @ 0x5e2ef8b01340] 8x8 transform intra:15.4% inter:100.0%
[libx264 @ 0x5e2ef8b01340] coded y,u,v intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x5e2ef8b01340] i16 v,h,dc,p: 97% 0% 3% 0%
[libx264 @ 0x5e2ef8b01340] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 0% 0% 100% 0% 0% 0% 0% 0% 0%
[libx264 @ 0x5e2ef8b01340] Weighted P-Frames: Y:52.9% UV:52.9%
[libx264 @ 0x5e2ef8b01340] ref P L0: 88.9% 0.0% 0.0% 11.1%
[libx264 @ 0x5e2ef8b01340] kb/s:8.27
Conversion failed!
Traceback (most recent call last):
 File "/home/avishka/projects/read-process-stream/minimal-ffmpeg-error.py", line 58, in <module>
 process.stdin.write(image_bytes)
BrokenPipeError: [Errno 32] Broken pipe