09:56
I'd like to use a CUDA GPU as below, but ffmpeg keeps saying:
Impossible to convert between the formats supported by the filter 'Parsed_hwupload_cuda_0' and the filter 'auto_scale_2'
[fc#0 ⓐ 0000018ff3ff4e80] Error reinitializing filters!
[fc#0 ⓐ 0000018ff3ff4e80] Task finished with error code: -40 (Function not implemented)
[fc#0 ⓐ 0000018ff3ff4e80] Terminating thread with return code -40 (Function not implemented)
ffmpeg -y -loglevel debug -nostats -hide_banner -init_hw_device cuda=gpu:0 -hwaccel cuda -hwaccel_device gpu (...)
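That error usually means ffmpeg auto-inserted a software scaler (the auto_scale_2 filter) next to hwupload_cuda, and software filters cannot touch CUDA frames. One common fix is to pin the software pixel format before the upload and keep every later step on the GPU. A minimal sketch of such a command (the file names, the 720p size, and the nvenc encoder are assumptions, not taken from the truncated command above):

```python
# Build an ffmpeg command where format conversion happens in software
# (format=nv12) *before* hwupload_cuda, and scaling happens on the GPU
# (scale_cuda), so no software filter ever receives a CUDA frame.
def build_cuda_cmd(src, dst):
    vf = "format=nv12,hwupload_cuda,scale_cuda=1280:720"
    return [
        "ffmpeg", "-y",
        "-init_hw_device", "cuda=gpu:0",
        "-filter_hw_device", "gpu",
        "-i", src,
        "-vf", vf,
        "-c:v", "h264_nvenc",   # hw encoder, so frames can stay on the GPU
        dst,
    ]

cmd = build_cuda_cmd("in.mp4", "out.mp4")
```

Note that when decoding already happens with -hwaccel cuda, the frames may arrive as CUDA frames and hwupload_cuda becomes unnecessary; the point is simply not to mix software-only filters into a hardware-frame chain.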
06:24
I came across a problem when running ffprobe and decoding a video stream.
Here is the log:
ffprobe version 6.1.1 Copyright (c) 2007-2023 the FFmpeg developers
built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.2)
configuration: --enable-gpl --enable-version3 --enable-shared --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-libsnappy --enable-zlib --enable-libsrt --enable-libssh --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libdavs2 --enable-libzvbi --enable-libwebp (...)
19:45
I need some help :).
I need to trim a batch of MP4 files to exactly 299 seconds, or 4 minutes and 59 seconds, and from what I've seen, ffmpeg can do this for me.
Later on I also need to merge, again in batch, the files that were split above.
I think two scripts can do this, it shouldn't be too complicated, but reading the ffmpeg documentation, I found it a bit tricky for me haha.
The split can have segments like: Original_File_Name_001.MP4, Original_File_Name_002.MP4, and so on.
I saw on github that there's a program, LOSSLESSCUT, which is great (...)
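Both steps can be sketched as plain command builders (the segment names follow the pattern above; -c copy trims without re-encoding, which is fast but snaps cuts to keyframes, so the result may be slightly off the exact 299 seconds):

```python
# Step 1: trim one file to at most `seconds` of output without re-encoding.
def trim_cmd(src, dst, seconds=299):
    # -t limits the output duration; -c copy avoids re-encoding
    return ["ffmpeg", "-y", "-i", src, "-t", str(seconds), "-c", "copy", dst]

# Step 2a: the concat demuxer wants a list file with one "file '...'" line
# per segment, in playback order.
def concat_list(parts):
    return "\n".join(f"file '{p}'" for p in parts) + "\n"

# Step 2b: merge everything named in the list file, again stream-copying.
def merge_cmd(list_file, dst):
    return ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", dst]

parts = [f"Original_File_Name_{i:03d}.MP4" for i in range(1, 4)]
listing = concat_list(parts)
trim = trim_cmd("Original_File_Name.MP4", "Original_File_Name_001.MP4")
```

A wrapper script would write `listing` to, say, mylist.txt and then run merge_cmd("mylist.txt", "merged.MP4"). If frame-exact 299-second cuts are required, re-encoding (dropping -c copy) is the usual trade-off.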
16:36
#!/bin/bash
# Set paths to your audio files
input_file="File (File Path)"
output_file="$(dirname "Repeat Item (File Path)")/Repeat Item-watermark.mp3"
# Get the duration of the input file in seconds
input_duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$input_file")
# Run ffmpeg with the updated command
ffmpeg -y -i "$input_file" -filter_complex "
sine=frequency=438:duration=0.1,volume=12dB[beep];
aevalsrc=0:d=0.9[silence];
[beep][silence]concat=n=2:v=0:a=1[beep_pulse];
[beep_pulse]aloo
14:12
After reviewing the implementation of av_freep(void* arg) in the FFmpeg documentation, I have some questions. Specifically, can its code be simplified to the following form?
FFmpeg implementation:
void av_freep(void *arg)
{
    void *val;
    memcpy(&val, arg, sizeof(val));
    memcpy(arg, &(void *){ NULL }, sizeof(val));
    av_free(val);
}
My version:
void av_freep(void *arg)
{
    free(*arg);
    *arg = (...)
20:18
I am trying to install ffmpeg in order to use it with OpenAI to record videos. I have installed it using brew install ffmpeg, but somehow when I run my code I still get the same error; it is as if the package is not recognized by the virtualenv where I am working.
Error on Python console:
raise error.DependencyNotInstalled("""Found neither the ffmpeg nor avconv executables. On OS X, you can install ffmpeg via `brew install ffmpeg`. On most Ubuntu variants, `sudo apt-get install ffmpeg` should do it. On Ubuntu 14.04, however, you'll need to install avconv with `sudo (...)
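A quick way to check whether the Python process itself can see the binary (the assumption here being that the virtualenv is launched with a PATH that does not include Homebrew's bin directory):

```python
import shutil

def find_ffmpeg():
    # shutil.which searches the PATH as seen by *this* process, which is
    # exactly what the DependencyNotInstalled check cares about.
    return shutil.which("ffmpeg") or shutil.which("avconv")

path = find_ffmpeg()
```

If this prints None from inside the failing environment while `which ffmpeg` works in the shell, the fix is a PATH problem (e.g. adding /opt/homebrew/bin or /usr/local/bin to the environment that launches Python), not a reinstall.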
18:29
I would like to play encrypted movies in vlc by decrypting them by ffmpeg like below on macOS.
Each movie is around 200-1000 MB (I have hundreds of files).
My problem is that decryption is too slow; it takes 5 minutes for a 300 MB movie before it starts to play.
My guess is that ffmpeg first decrypts the whole content of 300MB and then VLC plays it.
My question: Is it possible to play the movie "on the fly" while decrypting it?
(play the decrypted chunk once it is decrypted, Not wait until the whole movie is decrypted before it plays, so that it starts to play in, say, (...)
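In principle yes: instead of decrypting to a file first, ffmpeg can write the decrypted stream to stdout and VLC can play from stdin, so playback starts as soon as the first chunks arrive. A sketch of that pipeline (the decryption option below is a placeholder for whatever decryption options are already in use; -c copy avoids re-encoding):

```python
import subprocess

def make_pipeline(src):
    # "<decrypt-options>" is a placeholder: substitute the real decryption
    # flags already used today. pipe:1 sends the remuxed stream to stdout.
    decrypt = ["ffmpeg", "<decrypt-options>",
               "-i", src, "-c", "copy", "-f", "matroska", "pipe:1"]
    player = ["vlc", "-"]   # "-" tells VLC to read the stream from stdin
    return decrypt, player

def play_streaming(src):
    # Not executed here; requires ffmpeg and VLC on PATH.
    dec_cmd, play_cmd = make_pipeline(src)
    p1 = subprocess.Popen(dec_cmd, stdout=subprocess.PIPE)
    subprocess.run(play_cmd, stdin=p1.stdout)
    p1.stdout.close()

dec, play = make_pipeline("movie.mp4")
```

The shell equivalent is simply `ffmpeg ... -f matroska pipe:1 | vlc -`. Startup latency then depends only on how fast the first few seconds decrypt, not on the whole file.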
16:12
I am new to Selenium. I want to add screen recording to my test cases, and I am using ffmpeg. The issue I am facing is that the recording starts successfully, but it does not stop after the test case has been executed.
Below is how my code is structured:
package utilities;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
public class ScreenRecording {
    private Process ffmpeg;
    private String outputFilePath;
    private boolean isRecording;
    public ScreenRecording(String outputFilePath) (...)
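The usual cause of this symptom is that ffmpeg is never told to finish: ffmpeg exits cleanly when it reads "q" on its stdin, whereas killing the process can leave a truncated file. A minimal sketch of the idea (in Python for brevity; in Java the same would be writing "q" to ffmpeg.getOutputStream() and flushing it, keeping stdin redirected rather than discarded):

```python
import io

class Recorder:
    def __init__(self, stdin):
        # `stdin` stands in for the ffmpeg process's stdin pipe
        self.stdin = stdin
        self.is_recording = True

    def stop(self):
        # Writing "q" asks ffmpeg to finalize the output file and exit;
        # only kill the process as a fallback if this times out.
        self.stdin.write(b"q\n")
        self.stdin.flush()
        self.is_recording = False

fake_stdin = io.BytesIO()   # stands in for Popen(...).stdin in this sketch
rec = Recorder(fake_stdin)
rec.stop()
```

After writing "q", waiting on the process (waitFor in Java) confirms the file was closed properly before the test run moves on.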
14:23
I have a file which is built from messages coming from a stream; all of the "messages" end with b'h264\\x00'
I need to:
load the data into ffmpeg
perform some processing of the data
re-attach the data to the same "messages"
The data is loaded with ffmpeg and saved with ffmpeg; however, ffmpeg removes part of the data.
I have simplified the process and currently I am only loading and saving the data, without any processing, but still part of the data is being removed.
I have used several commands, but always part of my data (...)
12:33
We have software where we capture the stream from the camera connected to the laptop or device using ffmpeg-python:
ffmpeg
.input(video, s='640x480', **self.args)  # tried with rtbufsize=1000M (enough, I suppose; sometimes the error does not occur even with the default rtbufsize, which is around 3 MB)
.output('pipe:', format='rawvideo', pix_fmt='rgb24')
.overwrite_output()
.run_async(pipe_stdout=True)
Most of the time, when I start the software, while it is still initializing, we receive the following error: (...)
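Whatever the buffering error turns out to be, the consuming side of this pipe has to drain it in exact frame-sized chunks, or rtbufsize fills up while the process initializes. A sketch of the reader, assuming the 640x480 rgb24 output configured above (each frame is exactly 640*480*3 = 921600 bytes):

```python
import io

FRAME_SIZE = 640 * 480 * 3   # width * height * 3 bytes per rgb24 pixel

def read_frames(pipe, frame_size=FRAME_SIZE):
    # Read the raw-video pipe one whole frame at a time; a short read
    # means the stream ended (or ffmpeg exited).
    while True:
        buf = pipe.read(frame_size)
        if len(buf) < frame_size:
            break
        yield buf

# Stand-in for proc.stdout: two full frames plus a truncated tail.
fake = io.BytesIO(b"\x00" * (FRAME_SIZE * 2 + 10))
frames = list(read_frames(fake))
```

Starting this read loop immediately after run_async (rather than after other initialization work) keeps the real-time buffer from overflowing during startup.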
13:28
I have a video with the following ffprobe output:
Input #0, matroska,webm, from 'video.mkv':
Metadata:
title : Video - 01
creation_time : 2021-07-14T02:49:59.000000Z
ENCODER : Lavf58.29.100
Duration: 00:22:57.28, start: 0.000000, bitrate: 392 kb/s
Chapters:
Chapter #0:0: start 0.000000, end 86.169000
Metadata:
title : Opening
Chapter #0:1: start 86.169000, end 641.266000
Metadata:
title : Part A
Chapter #0:2: start 641.266000, end 651.359000
Metadata:
title : Eyecatch
Chapter #0:3: start (...)
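If the goal is to split the file at those chapter marks, the chapter table can be fetched as JSON (ffprobe -v quiet -print_format json -show_chapters video.mkv) and turned into one stream-copy command per chapter. A sketch (the JSON stub mirrors the chapters printed above; the output naming scheme is an assumption):

```python
import json

def chapter_cmds(probe_json, src):
    # One "ffmpeg -ss <start> -to <end> -c copy" command per chapter,
    # named "<index> - <title>.mkv".
    chapters = json.loads(probe_json)["chapters"]
    cmds = []
    for i, ch in enumerate(chapters):
        title = ch.get("tags", {}).get("title", f"chapter{i}")
        cmds.append(["ffmpeg", "-y", "-i", src,
                     "-ss", ch["start_time"], "-to", ch["end_time"],
                     "-c", "copy", f"{i:02d} - {title}.mkv"])
    return cmds

# Stub standing in for real `ffprobe -show_chapters` output.
stub = json.dumps({"chapters": [
    {"start_time": "0.000000", "end_time": "86.169000",
     "tags": {"title": "Opening"}},
    {"start_time": "86.169000", "end_time": "641.266000",
     "tags": {"title": "Part A"}},
]})
cmds = chapter_cmds(stub, "video.mkv")
```

As with any -c copy cut, the split points snap to keyframes, so chapter boundaries may shift by a fraction of a second.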
09:43
I have a FFmpeg-Python program that puts subtitles into a video, pseudocode below:
for i in range(10000):
    video = video.filter(
        'drawtext', fontfile=FONT_FILE, text=cur_string, x='(w-text_w)/2', y='(h-text_h)/2',
        fontsize=FONT_SIZE, fontcolor=FONT_COLOR, borderw=2, bordercolor=FONT_OUTLINE_COLOR,
        enable=f'between(t,{i},{i+1})')
video.output(PATH).run()
The code above gives the following error:
FileNotFoundError: [WinError 206] The filename or extension is too long
My questions are:
(1) Is (...)
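The WinError 206 comes from the filter graph itself: 10000 chained drawtext filters become one enormous command line. A common workaround (a sketch, assuming each string is shown for one second as in the loop above) is to collapse them into a single subtitles file, so the command line stays tiny no matter how many strings there are:

```python
def srt_timestamp(sec):
    # SRT timestamps look like HH:MM:SS,mmm
    h, rem = divmod(sec, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d},000"

def build_srt(strings):
    # One numbered cue per string, each shown for one second.
    blocks = []
    for i, text in enumerate(strings):
        blocks.append(
            f"{i + 1}\n{srt_timestamp(i)} --> {srt_timestamp(i + 1)}\n{text}\n")
    return "\n".join(blocks)

srt = build_srt(["line one", "line two"])
```

After writing the result to, say, subs.srt, a single filter replaces the 10000 drawtext calls, e.g. video.filter('subtitles', 'subs.srt'), assuming ffmpeg was built with libass; styling (font, size, outline) then moves into the subtitle file's own style settings.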
11:20
I am working on a project to stream an H.264 video file using RabbitMQ (AMQP protocol) and display it in a web application. The setup involves capturing video frames, encoding them, sending them to RabbitMQ, and then consuming and decoding them on the web application side using Flask and Flask-SocketIO.
However, I am encountering performance issues with the publishing and subscribing rates in RabbitMQ. I cannot seem to achieve more than 10 messages per second. This is not sufficient for smooth video streaming.
I need help to diagnose and resolve these performance (...)
09:18
I use ffmpeg.av_hwframe_transfer_data to send decoded frames to the GPU, but I cannot get them back in another usable format. I have tried changing my shaders and using av_hwframe_transfer_get_formats, but it is not working!
My code:
private static bool _readingComplete = false;
private static bool _decodingComplete = false;
private static readonly object _lock = new object();
private static Queue packets = new Queue();
private static readonly object (...)
06:38
I have a video in WebM format (e.g. video.webm, 60 seconds long).
I want to get a specified segment of the video (i.e. split the video) using the HTTP Range header (Range: 100-200).
In other words:
I want to get a section of the video (e.g. from second 4 to 12), but I don't want to use any converter like ffmpeg. I want to send an HTTP request to the server and get the specified range of the WebM file.
Can I use this method (the HTTP Range header)?
(...)
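One thing worth keeping in mind: Range: bytes=100-200 selects bytes, not seconds, and an arbitrary byte slice of a WebM file is generally not playable on its own, because it lacks the EBML header and need not start on a cluster boundary. Players that seek over HTTP parse the Cues (index) element to map times to byte offsets. Under a constant-bitrate assumption, a rough byte window can still be estimated; a sketch:

```python
def approx_byte_range(file_size, duration, start_s, end_s):
    # Crude seconds-to-bytes mapping assuming uniform bitrate; a real
    # player would read the WebM Cues element to get exact offsets.
    bytes_per_sec = file_size / duration
    return int(start_s * bytes_per_sec), int(end_s * bytes_per_sec)

def range_header(start, end):
    return {"Range": f"bytes={start}-{end}"}

lo, hi = approx_byte_range(file_size=60_000_000, duration=60,
                           start_s=4, end_s=12)
hdr = range_header(lo, hi)
```

So the server-side mechanism (HTTP 206 partial content) works fine, but to get a *playable* 4-12 s clip the client must either prepend the file's initialization segment or do a proper time-to-offset lookup via the cues.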
03:55
I'm using a NodeJS server to serve an HTML page with a webcam option. Once a user visits my NodeJS server, it serves the HTML page. The user can allow the webcam and see the webcam view on the page. In the backend, I send the webcam stream (a byte array) using socket.io, and I receive the byte array successfully with its help. BUT MY PROBLEM IS, I can't pipe this byte array to the ffmpeg spawn process. I don't know how to properly pipe this data to ffmpeg. Once that's done, all my problems will be solved. On the other side, I have node-media-server as an RTMP server (...)
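The general pattern, sketched in Python for illustration (in Node the equivalent calls are child_process.spawn for starting ffmpeg and ffmpeg.stdin.write(chunk) inside the socket.io handler): start ffmpeg reading from pipe:0 and forward every received chunk to its stdin unchanged. The input format and RTMP URL below are assumptions:

```python
import io

# ffmpeg reads the browser's stream from stdin (pipe:0) and pushes FLV to
# the RTMP server; "-f webm" assumes the browser sends WebM-wrapped data.
FFMPEG_ARGS = ["ffmpeg", "-f", "webm", "-i", "pipe:0",
               "-c:v", "libx264", "-f", "flv",
               "rtmp://localhost/live/stream"]

def forward(chunks, stdin):
    # `stdin` stands in for Popen(FFMPEG_ARGS, stdin=PIPE).stdin;
    # each socket.io payload is written through without modification.
    total = 0
    for chunk in chunks:
        stdin.write(chunk)
        total += len(chunk)
    stdin.flush()
    return total

sink = io.BytesIO()
n = forward([b"\x1aE\xdf\xa3", b"rest-of-stream"], sink)
```

The key detail on the Node side is to keep ffmpeg's stdin open for the lifetime of the session (call ffmpeg.stdin.end() only when the user stops the webcam), and to make sure the chunks arrive in order, since ffmpeg expects one continuous bitstream.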