23:25
I'd like to embed bounding boxes, labels, etc. into a live video stream so they can optionally be drawn later, client-side, in a web browser, and I'm looking for a way to do this.
Could you please give me a hint how to insert the metadata on the fly, and which player might be used for it?
17:29
I'm trying to let the user export the scene as an mp4 (video format). The scene consists of a QGraphicsVideoItem and multiple QGraphicsTextItems; exporting the scene lets the user save the video together with the text items. I've found one way to do this, but it takes hours for a simple 5-second video, because it saves one image per millisecond and converts each image to bytes to build the video. If I change from milliseconds to seconds it could speed up, but the video will not look as smooth. Is there a more efficient way of doing (...)
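One common way to speed this up (a sketch, not the asker's code: the frame size, frame rate, and the `render_frame` helper are assumptions) is to render the scene at the target frame rate and pipe raw frames straight into ffmpeg's stdin, instead of writing one image per millisecond:

```python
import subprocess

def encoder_cmd(width, height, fps, out_path):
    # ffmpeg reads raw RGBA frames from stdin at a fixed frame rate,
    # so only fps frames per second are encoded -- not one per millisecond.
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "rgba",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",                       # raw frames arrive on stdin
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        out_path,
    ]

# proc = subprocess.Popen(encoder_cmd(1280, 720, 30, "scene.mp4"),
#                         stdin=subprocess.PIPE)
# for t in timeline_at_30fps:            # hypothetical: step the scene clock
#     proc.stdin.write(render_frame(t))  # raw RGBA bytes of the rendered scene
# proc.stdin.close()
# proc.wait()
```

At 30 fps a 5-second video is only 150 rendered frames, and no intermediate image files are written to disk.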
17:17
I am running ffmpeg in Python as a subprocess. I want to burn the subtitles into a video. Using just ffmpeg (command line, without Python, on Windows), the following works:
"ffmpeg.exe" -y -i "input_file" -vf "subtitles= \\'input_file_path\\' :si=0" -acodec copy "output_file_path"
The \\ escape characters are required by ffmpeg for special characters within a filter. However, replicating this in a Python subprocess has proved problematic; here is one of many failed attempts:
command = (...)
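For what it's worth, passing the arguments as a list (without `shell=True`) removes the shell-quoting layer entirely, leaving only ffmpeg's own filter escaping to deal with. A sketch of that approach, with placeholder paths:

```python
import subprocess

def burn_subs_cmd(input_path, subs_path, output_path):
    # With an argument list there is no shell, so no shell quoting is needed;
    # only ffmpeg's filter escaping applies (backslashes and ':' inside a filter).
    escaped = subs_path.replace("\\", "\\\\").replace(":", "\\:")
    return [
        "ffmpeg", "-y",
        "-i", input_path,
        "-vf", f"subtitles='{escaped}':si=0",
        "-acodec", "copy",
        output_path,
    ]

# subprocess.run(burn_subs_cmd("in.mp4", r"C:\subs\movie.mkv", "out.mp4"),
#                check=True)
```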
16:30
I am recording desktop video and audio with ffmpeg, controlling the process from C#. Simplified, my ffmpeg arguments are "ffmpeg -f gdigrab -framerate 30 -i desktop -preset ultrafast -pix_fmt yuv420p output.mp4"
I have some functionality to pause and resume the recording, essentially just suspending and resuming the process. This works; however, it leaves the paused section in the video as a single frozen frame for the entire pause duration. To fix this, I added "-vf setpts=N/FR/TB" to the arguments. This resolved that issue, but while recording my framerate was 1.3 fps rather than 30, and the (...)
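An alternative that sidesteps timestamp rewriting altogether (a sketch with assumed file names, not the asker's setup): stop ffmpeg on pause, start a new segment file on resume, and join the finished segments with ffmpeg's concat demuxer:

```python
def concat_cmd(list_file, output_path):
    # list_file is a plain text file with one line per segment, e.g.:
    #   file 'seg000.mp4'
    #   file 'seg001.mp4'
    return [
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0",
        "-i", list_file,
        "-c", "copy",        # stream copy: no re-encode, no quality loss
        output_path,
    ]
```

Because each segment only contains footage that was actually recorded, the pause gap never enters the output and no setpts filter is needed.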
15:07
I use ffmpeg to format videos according to TikTok standards, but the videos are not cropped and they lose quality, and I don't know why.
const transcode = async () => {
  const ffmpeg = ffmpegRef.current;
  await ffmpeg.writeFile('input.mp4', await fetchFile(file));
  await ffmpeg.exec([
    '-i', 'input.mp4',
    '-vf', 'scale=1080:1920,crop=ih*(9/16):ih',
    'output.mp4'
  ]);
  const data = await ffmpeg.readFile('output.mp4');
  videoRef.current.src = URL.createObjectURL(new Blob([data.buffer], (...)
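The same argument vector works in ffmpeg.exec and native ffmpeg, so here is a sketch of one likely fix, expressed as a Python list (the intent is an assumption: crop to 9:16 first, then scale; with crop placed after scale=1080:1920, the crop window ih*(9/16)=1080 by ih=1920 is the whole frame, so nothing is removed):

```python
def tiktok_args():
    # Crop the source to a 9:16 window of its own height, *then* scale the
    # result to 1080x1920. Reversing the order (as in the original command)
    # makes the crop a no-op because it selects the entire scaled frame.
    return [
        "-i", "input.mp4",
        "-vf", "crop=ih*9/16:ih,scale=1080:1920",
        "-c:a", "copy",
        "output.mp4",
    ]
```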
13:13
I have an OpenGL buffer that I need to forward directly to ffmpeg for nvenc-based h264 encoding.
My current approach is to call glReadPixels to get the pixels out of the framebuffer and then pass that pointer into ffmpeg so it can encode the frame into H264 packets for RTSP. However, this is wasteful because I have to copy bytes out of GPU RAM into CPU RAM, only to copy them back to the GPU for encoding.
12:20
I tried it and, sure enough, I'm not typing the correct command in exec. Please let me know if you have a video-resizing command for exec.
const compress = async () => {
  const videoURL = "../public/videos/20240508_095425.mp4";
  const ffmpeg = ffmpegRef.current;
  await ffmpeg.writeFile("input.mp4", await fetchFile(videoURL));
  await ffmpeg.exec([
    "-i", "input.mp4",
    "-c:v", "copy",
    "-c:a", "aac",
    "-b:a", "128k",
    "output.mp4",
  ]);
  const fileData = await ffmpeg.readFile("output.mp4");
  const data = new (...)
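A resizing variant of that argument list, sketched in Python (the 1280-pixel width is an arbitrary example): `-c:v copy` passes the video stream through untouched, so changing the size requires a scale filter plus a re-encode.

```python
def resize_args(width=1280):
    # scale=WIDTH:-2 keeps the aspect ratio and rounds the height to an
    # even number, which H.264 requires.
    return [
        "-i", "input.mp4",
        "-vf", f"scale={width}:-2",
        "-c:v", "libx264",
        "-c:a", "aac",
        "-b:a", "128k",
        "output.mp4",
    ]
```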
11:00
I'm using ffmpeg to extract one frame (as a jpeg) every five minutes from videos, and redirecting the console output to a text file in order to get the exact timestamps of the extracted frames.
The command I'm using is:
ffmpeg -i input.avi -ss 00:10:00 -vframes 10 -vf showinfo,fps=fps=1/300 %03d.jpg &> output.txt
Where -ss 00:10:00 lets me skip ahead 10 mins in the video before starting, and -vframes 10 lets me capture only the first 10 frames (1 frame per 5 mins).
This almost works fine except that the command outputs information for all frames, (...)
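If the goal is showinfo lines only for the extracted frames, one likely fix (a sketch, untested against the asker's files) is to put fps before showinfo: filters run in chain order, so showinfo then only sees the frames fps has kept.

```python
def extract_cmd():
    # fps=1/300 drops to one frame per five minutes *before* showinfo logs,
    # so the redirected output only describes frames actually written.
    return [
        "ffmpeg", "-i", "input.avi",
        "-ss", "00:10:00",
        "-vframes", "10",
        "-vf", "fps=1/300,showinfo",
        "%03d.jpg",
    ]
```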
09:09
I have a whiteboard/hand sketching app written with D3.js svg path elements. These elements are grouped into scenes with their own mp3 audio.
Multiple scenes with their animated svg paths are played according to a timeline, with the audio running perfectly.
So the end result is like an animated whiteboard movie. This works well within the app. How can I record the whole animation as an mp4 file?
I would like to run this entirely on the client side. I have looked at ffmpeg but would prefer not to require a desktop install for this web app.
I looked into using (...)
05:27
I am trying to convert an audio file generated by the Flutter "text to speech" package to an mp3 file, but the conversion fails every time.
I have written the code below for the conversion, using the ffmpeg_kit_flutter package, which I have imported. It doesn't even show why the conversion is failing.
I have looked on Stack Overflow and other sites but could not find any relevant solutions. I am using VS Code as my editor. I have attached the flutter doctor output below as well. Could anyone please guide me? Let me know if you need more information.
List command = [
(...)
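One thing worth checking (an assumption about the cause, since no log is shown): mp3 encoding needs ffmpeg's libmp3lame encoder, and the minimal ffmpeg_kit_flutter package variants ship without external encoders. The equivalent command, sketched as a Python argument list with placeholder paths:

```python
def to_mp3_args(input_path, output_path):
    # Explicitly request the libmp3lame encoder; if the bundled ffmpeg
    # build lacks it, the failure reason becomes obvious in the session log.
    return ["-y", "-i", input_path, "-c:a", "libmp3lame", output_path]
```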
03:58
Is it possible to check whether a video file has subtitles using bash, and get a simple "yes" or "no" answer? I don't need any details about the subtitles.
Maybe using ffmpeg?
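ffprobe (shipped alongside ffmpeg) can answer this directly; sketched here via Python's subprocess, though the same flags work in plain bash. `-select_streams s` restricts the output to subtitle streams, so any output at all means "yes":

```python
import subprocess

def probe_cmd(path):
    # Print one index line per subtitle stream, and nothing else.
    return [
        "ffprobe", "-v", "error",
        "-select_streams", "s",
        "-show_entries", "stream=index",
        "-of", "csv=p=0",
        path,
    ]

def has_subtitles(path):
    out = subprocess.run(probe_cmd(path), capture_output=True, text=True).stdout
    return "yes" if out.strip() else "no"
```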
03:23
I'm using ffmpeg to take input from a live stream and write an mkv file. The stream runs a few hours. I want to play the mkv file while this is in progress. I find that the players basically do "seeks" by playing through the file, which is ok for very short "seeks" but not for very long ones.
When ffmpeg ends it finalizes the mkv file, and then both long and short seeks work fine.
What can I do to be able to have a more usable mkv while ffmpeg is in progress? I suppose I could force ffmpeg to create a new mkv file every hour, with a name including an index. (Is there any (...)
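The hourly-file idea can be done without restarting ffmpeg, using the segment muxer (a sketch; the hour length and indexed naming follow the asker's own suggestion). Each finished segment is finalized and fully seekable while the next one is still being written:

```python
def segment_cmd(stream_url):
    return [
        "ffmpeg", "-i", stream_url,
        "-c", "copy",
        "-f", "segment",
        "-segment_time", "3600",     # start a new file every hour
        "-reset_timestamps", "1",    # each segment starts at t=0
        "out_%03d.mkv",
    ]
```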