12:33
We have a piece of software where we capture the stream from a camera connected to the laptop or device using ffmpeg-python:
process = (
    ffmpeg
    .input(video, s='640x480', **self.args)  # tried with rtbufsize=1000M (enough, I suppose); sometimes the error does not occur even with the default rtbufsize, which is around 3 MB
    .output('pipe:', format='rawvideo', pix_fmt='rgb24')
    .overwrite_output()
    .run_async(pipe_stdout=True)
)
The majority of the time, when I start the software (while it is still initializing), we receive the following error: (...)
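For context, the process returned by run_async is typically drained frame by frame from its stdout pipe. A minimal sketch of that read loop, assuming the s='640x480', rgb24 settings above (the function name is my own):

```python
def read_frames(pipe, width=640, height=480):
    """Yield raw RGB24 frames from ffmpeg's stdout pipe.

    Each frame is exactly width * height * 3 bytes (one byte per
    channel); a short read means the stream has ended.
    """
    frame_size = width * height * 3
    while True:
        data = pipe.read(frame_size)
        if len(data) < frame_size:
            return  # EOF or truncated final frame
        yield data
```

One would call this as read_frames(process.stdout), where process is the object returned by run_async.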
13:28
I have a video with the following ffprobe output:
Input #0, matroska,webm, from 'video.mkv':
Metadata:
title : Video - 01
creation_time : 2021-07-14T02:49:59.000000Z
ENCODER : Lavf58.29.100
Duration: 00:22:57.28, start: 0.000000, bitrate: 392 kb/s
Chapters:
Chapter #0:0: start 0.000000, end 86.169000
Metadata:
title : Opening
Chapter #0:1: start 86.169000, end 641.266000
Metadata:
title : Part A
Chapter #0:2: start 641.266000, end 651.359000
Metadata:
title : Eyecatch
Chapter #0:3: start (...)
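If the goal is to work with that chapter list programmatically, ffprobe can emit it as JSON (ffprobe -print_format json -show_chapters video.mkv), which is easy to parse. A sketch, with the function name my own:

```python
import json

def parse_chapters(ffprobe_json):
    """Turn `ffprobe -print_format json -show_chapters` output into
    (title, start_seconds, end_seconds) tuples."""
    chapters = json.loads(ffprobe_json)["chapters"]
    return [
        (c.get("tags", {}).get("title", ""),
         float(c["start_time"]),
         float(c["end_time"]))
        for c in chapters
    ]
```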
09:56
I'd like to use a CUDA GPU as shown below, but ffmpeg keeps saying:
Impossible to convert between the formats supported by the filter 'Parsed_hwupload_cuda_0' and the filter 'auto_scale_2'
[fc#0 ⓐ 0000018ff3ff4e80] Error reinitializing filters!
[fc#0 ⓐ 0000018ff3ff4e80] Task finished with error code: -40 (Function not implemented)
[fc#0 ⓐ 0000018ff3ff4e80] Terminating thread with return code -40 (Function not implemented)
ffmpeg -y -loglevel debug -nostats -hide_banner -init_hw_device cuda=gpu:0 -hwaccel cuda -hwaccel_device gpu (...)
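A common cause of this error (not confirmable from the truncated command) is a software-format frame feeding hwupload_cuda while a later filter expects something else, so ffmpeg tries to auto-insert a software scaler (the auto_scale_2 in the message) where no conversion path exists. Pinning the pixel format before the upload and scaling on the GPU usually avoids that. A sketch of an adjusted command built as an argument list; the input/output names are hypothetical:

```python
# Key change: format=nv12 before hwupload_cuda, then scale_cuda on the
# GPU, so ffmpeg never needs to auto-insert a software scaler after the
# frames have been uploaded.
cmd = [
    "ffmpeg", "-y",
    "-init_hw_device", "cuda=gpu:0",
    "-filter_hw_device", "gpu",
    "-i", "input.mp4",
    "-vf", "format=nv12,hwupload_cuda,scale_cuda=1280:720",
    "-c:v", "h264_nvenc",
    "output.mp4",
]
```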
09:43
I have an ffmpeg-python program that puts subtitles into a video; pseudocode below:
for i in range(10000):
    video = video.filter(
        'drawtext', fontfile=FONT_FILE, text=cur_string, x='(w-text_w)/2', y='(h-text_h)/2',
        fontsize=FONT_SIZE, fontcolor=FONT_COLOR, borderw=2, bordercolor=FONT_OUTLINE_COLOR,
        enable=f'between(t,{i},{i+1})')
video.output(PATH).run()
The code above gives the following error:
FileNotFoundError: [WinError 206] The filename or extension is too long
My questions are:
(1) Is (...)
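The WinError 206 here typically comes from the 10,000 chained drawtext filters blowing past the Windows command-line length limit, since each filter is serialized into the generated ffmpeg command. A common workaround (a sketch under my own naming, not from the question) is to write the per-second strings to one SRT file and burn it in with a single filter, e.g. video.filter('subtitles', 'subs.srt'):

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def build_srt(lines):
    """One cue per second: lines[i] is shown from t=i to t=i+1."""
    cues = []
    for i, text in enumerate(lines):
        cues.append(f"{i + 1}\n{srt_timestamp(i)} --> {srt_timestamp(i + 1)}\n{text}\n")
    return "\n".join(cues)
```

Writing build_srt(strings) to a file and applying the subtitles filter keeps the command line short no matter how many cues there are.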
11:20
I am working on a project to stream an H.264 video file using RabbitMQ (AMQP protocol) and display it in a web application. The setup involves capturing video frames, encoding them, sending them to RabbitMQ, and then consuming and decoding them on the web application side using Flask and Flask-SocketIO.
However, I am encountering performance issues with the publishing and subscribing rates in RabbitMQ. I cannot seem to achieve more than 10 messages per second. This is not sufficient for smooth video streaming.
I need help to diagnose and resolve these performance (...)
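At 10 messages per second the bottleneck is usually per-message overhead (for example, waiting for a publisher confirm on every publish) rather than raw bandwidth. One common mitigation is to pack several encoded frames into each AMQP message. A length-prefixed packing sketch, with all names my own:

```python
import struct

def pack_batch(frames):
    """Length-prefix each frame (4-byte big-endian) so the consumer
    can split the combined payload back into individual frames."""
    return b"".join(struct.pack(">I", len(f)) + f for f in frames)

def unpack_batch(payload):
    """Inverse of pack_batch: recover the list of frames."""
    frames, off = [], 0
    while off < len(payload):
        (n,) = struct.unpack_from(">I", payload, off)
        off += 4
        frames.append(payload[off:off + n])
        off += n
    return frames
```

Publishing one packed payload per N frames cuts the message rate by a factor of N while keeping frame boundaries intact on the consumer side.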
09:18
I use ffmpeg's av_hwframe_transfer_data to send decoded frames to the GPU, but I cannot get them back in a usable format. I have tried changing my shaders and using av_hwframe_transfer_get_formats, but it is not working!
My code:
private static bool _readingComplete = false;
private static bool _decodingComplete = false;
private static readonly object _lock = new object();
private static Queue packets = new Queue();
private static readonly object (...)
06:38
I have a video in WebM format (like video.webm; the duration is 60 seconds).
I want to get a specified segment of the video (i.e. split the video) with an HTTP Range header (e.g. Range: bytes=100-200).
In other words:
I want to get a section of the video (e.g. from second 4 to second 12), but I don't want to use any converter like ffmpeg. I want to send an HTTP request to the server and get the specified range of the WebM file.
Can I use this method (the HTTP Range header)?
(...)
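One point worth noting: an HTTP Range header addresses bytes, not seconds, so seconds 4-12 must be mapped to byte offsets first. Doing that exactly requires parsing the WebM Cues (the seek index); without a parser you can only approximate from the average bitrate, and the slice will not necessarily start on a decodable cluster boundary. A rough sketch of the approximation, with all numbers illustrative:

```python
def approx_byte_range(start_s, end_s, file_size, duration_s):
    """Approximate byte offsets for [start_s, end_s] from average bitrate.

    This is only a rough cut: without the WebM Cues index the range will
    not land exactly on a cluster boundary, so the fetched bytes may not
    decode cleanly on their own.
    """
    bytes_per_second = file_size / duration_s
    return int(start_s * bytes_per_second), int(end_s * bytes_per_second)

# e.g. a 6 MB, 60-second file: seconds 4-12 map to roughly these bytes
start, end = approx_byte_range(4, 12, 6_000_000, 60)
range_header = {"Range": f"bytes={start}-{end}"}
```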
06:24
I came across a problem when running ffprobe and decoding a video stream.
Here is the log:
ffprobe version 6.1.1 Copyright (c) 2007-2023 the FFmpeg developers
built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.2)
configuration: --enable-gpl --enable-version3 --enable-shared --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-libsnappy --enable-zlib --enable-libsrt --enable-libssh --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libdavs2 --enable-libzvbi --enable-libwebp (...)
03:55
I'm using a NodeJS server to display an HTML page which has a webcam option. Once a user visits my NodeJS server, it serves the HTML page. The user can allow webcam access and see the webcam view on the page. In the backend, I send the webcam stream (a byte array) using socket.io, and I receive the byte array successfully with the help of socket.io. BUT MY PROBLEM IS, I can't pipe this byte array to the spawned ffmpeg process. I don't know how to properly pipe this data to ffmpeg. Once that is done, all my problems will be solved. On the other side, I have node-media-server as the RTMP server (...)
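The piping pattern itself is small; here is a sketch of it in Python's subprocess (the Node equivalent uses child_process.spawn, writes each chunk with proc.stdin.write, and calls proc.stdin.end() at the end). The ffmpeg arguments shown in the comment are illustrative, not taken from the question:

```python
import subprocess

def pipe_chunks(cmd, chunks):
    """Spawn cmd, write each byte chunk to its stdin, close stdin so the
    child sees EOF, and return whatever it wrote to stdout."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    for chunk in chunks:
        proc.stdin.write(chunk)
    proc.stdin.close()
    out = proc.stdout.read()
    proc.wait()
    return out

# Illustrative ffmpeg invocation: read input from stdin, push to RTMP.
# ffmpeg_cmd = ["ffmpeg", "-i", "pipe:0", "-c:v", "libx264", "-f", "flv",
#               "rtmp://localhost/live/stream"]
# pipe_chunks(ffmpeg_cmd, webcam_chunks)
```

For long-running streams with large child output, the write-everything-then-read pattern can deadlock; interleaving reads (e.g. a reader thread) is the usual refinement.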