21:50
I have code that automatically makes a couple of videos using ffmpeg. The result is 2 or 3 .mp4 files, each consisting of an audio track and a still image. I want to concatenate these files automatically using ffmpeg's concat demuxer, but while the videos do concatenate, the images end up superimposed on top of one another. So instead of 12 seconds of a cat image followed by 10 seconds of a skyscraper, I get 22 seconds of the cat image; when I skip forward in the video, the image changes to the skyscraper and doesn't change back.
Every single tutorial I see has (...)
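A common cause of this symptom is that the concat demuxer with stream copy requires every part to share identical codec parameters (codec, resolution, pixel format, frame rate, audio sample rate); when the parts differ, players keep displaying against the first part's parameters and the picture appears frozen. A minimal sketch of one fix, where the part names (part1.mp4, part2.mp4) and the normalization targets (1280x720, 25 fps, yuv420p, 44.1 kHz) are assumptions, not values from the question:

```shell
# Hypothetical inputs part1.mp4 / part2.mp4; the ffmpeg steps are guarded
# so the sketch is a no-op when those files are absent.

# 1) Re-encode each part so codec, resolution, pixel format, frame rate
#    and audio parameters match exactly:
for f in part1 part2; do
  if [ -f "$f.mp4" ]; then
    ffmpeg -y -i "$f.mp4" -vf "scale=1280:720,fps=25,format=yuv420p" \
           -c:v libx264 -c:a aac -ar 44100 "norm_$f.mp4"
  fi
done

# 2) Write the list file the concat demuxer reads:
cat > list.txt <<'EOF'
file 'norm_part1.mp4'
file 'norm_part2.mp4'
EOF

# 3) Concatenate without re-encoding, now that the parameters match:
if [ -f norm_part1.mp4 ] && [ -f norm_part2.mp4 ]; then
  ffmpeg -y -f concat -safe 0 -i list.txt -c copy out.mp4
fi
```

If re-encoding every part is unacceptable, the concat *filter* (which decodes and re-encodes once at the end) is the usual alternative, at the cost of one full encode.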
17:29
Here is my code, where I pass a list of video paths to concatenate. I am facing an issue with front-camera videos: after concatenation completes, some videos are rotated 90 degrees.
Future<void> mergeVideos(List<String> videoPaths) async {
VideoHelper.showInSnackBar('Videos merged Start', context);
String outputPath = await VideoHelper.generateOutputPath();
FlutterFFmpeg flutterFFmpeg = FlutterFFmpeg();
// Create a text file containing the paths of the videos to concatenate
String fileListPath =
'${(await (...)
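Not part of the code above, but one likely culprit worth checking before blaming the concat step: phone cameras (especially the front one) often record rotation as container metadata (a display matrix) rather than rotating the pixels, and stream-copy concatenation does not reconcile differing rotation metadata between parts. A hedged sketch of two common fixes, using a hypothetical input name front.mp4, written out as a script rather than run here:

```shell
# Hypothetical input front.mp4; both commands are standard ffmpeg CLI.
cat > fix_rotation.sh <<'EOF'
#!/bin/sh
# Option A: re-encode, letting ffmpeg's autorotation bake the rotation
# into the pixels so all parts agree regardless of their metadata:
ffmpeg -y -i front.mp4 -c:v libx264 -c:a copy front_upright.mp4
# Option B: stream-copy but zero the legacy rotation tag
# (newer ffmpeg builds also offer -display_rotation on the input side):
ffmpeg -y -i front.mp4 -c copy -metadata:s:v:0 rotate=0 front_flat.mp4
EOF
chmod +x fix_rotation.sh
```

Normalizing every part this way before building the concat list usually makes the 90-degree surprises disappear.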
15:01
avformat_open_input() deletes the AVFormatContext* and returns -6 when the source order changes.
I am trying to open multiple media sources dynamically, with different (mixed) formats and codecs, in a single context (AVFormatContext).
My media sources are a BlackMagic DeckLink Duo SDI input as the first source, and an MP4 file or RTSP stream as the second.
When I open (avformat_open_input()) source 2 (the RTSP stream or MP4 file) first and then open the BlackMagic DeckLink Duo, everything proceeds as expected.
But when I reverse the order, first opening the DeckLink and then trying to open (...)
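Not enough of the code is shown to pin this down, but two details are worth checking. First, avformat_open_input() is documented to free a user-supplied AVFormatContext and set the pointer to NULL on failure, which matches the "deletes the AVFormatContext*" observation; the context must be re-allocated before retrying. Second, a DeckLink device cannot be probed like a file, so its demuxer has to be selected explicitly (av_find_input_format("decklink") passed as the fmt argument). A sketch of the CLI analogue of that explicit selection, with a hypothetical device name:

```shell
# The device name "DeckLink Duo (1)" is a placeholder; the real names come
# from the device listing. Written as a script rather than executed, since
# the hardware is not present here.
cat > open_decklink.sh <<'EOF'
#!/bin/sh
# Enumerate the capture devices the decklink demuxer can see:
ffmpeg -f decklink -list_devices 1 -i dummy
# Open a device by name; "-f decklink" is the CLI analogue of passing
# av_find_input_format("decklink") to avformat_open_input():
ffmpeg -f decklink -i "DeckLink Duo (1)" -c:v libx264 capture.mp4
EOF
chmod +x open_decklink.sh
```

If the API code relies on format probing for the DeckLink source, the open order changing the outcome would be consistent with the probe succeeding only when another source has already registered the needed state.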
08:59
We want to build a video editor that offers options to add watermarks, change the bitrate, resize the video, add frames inside the video, etc. We are using NodeJS for the backend, and I have tried to achieve this with the ffmpeg package in NodeJS, but it takes too long to produce the edited video. Is there a better approach I can follow to get this done as quickly as possible? The video size can be up to 1 GB.
Server Configuration
4 CPU
8 GB (...)
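One common speedup, independent of Node: perform every edit in a single ffmpeg pass so the 1 GB file is decoded and encoded only once, rather than chaining separate watermark, resize, and bitrate jobs that each re-encode the whole file. A sketch with hypothetical file names (input.mp4, watermark.png, output.mp4) and assumed target values, written out as a script:

```shell
# All file names and encoding values here are assumptions for illustration.
cat > edit_once.sh <<'EOF'
#!/bin/sh
# One decode/encode pass: resize, then overlay the watermark bottom-right,
# with the target bitrate applied in the same encode.
ffmpeg -y -i input.mp4 -i watermark.png \
  -filter_complex "[0:v]scale=1280:-2[base];[base][1:v]overlay=W-w-10:H-h-10" \
  -c:v libx264 -preset veryfast -b:v 2M -c:a copy output.mp4
EOF
chmod +x edit_once.sh
```

On a 4-CPU box the encode itself is the bottleneck; a faster x264 preset, or a hardware encoder such as h264_nvenc where a GPU is available, typically buys far more than any Node-side change. Offloading jobs to a queue so the HTTP request is not blocked is the usual architectural complement.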
07:26
I am attempting to stream my screen to an RTMP URL using FFmpeg, with X11 for screen capture and NVIDIA hardware acceleration for performance. Despite the NVIDIA acceleration, the stream still lags and the output quality is low. I've noticed that FFmpeg is using only about 100 MB of GPU memory, which seems low. Here's the command I'm currently using:
ffmpeg -hwaccel cuvid -f x11grab -s 1920x1080 -i :1 -f pulse -i VirtualSink.monitor -c:v h264_nvenc -preset:v p1 -b:v 2500k -maxrate 2500k -bufsize 5000k -vf (...)
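Two hedged observations: `-hwaccel cuvid` only affects *decoding*, and x11grab produces raw frames on the CPU anyway, so it can be dropped; and NVENC's ~100 MB of GPU memory is normal, because encoding runs on a dedicated hardware block rather than the CUDA cores, so low memory use is not the cause of the lag. A variant command to experiment with, where every changed value (30 fps capture, preset p4, low-latency tune, 6 Mb/s) is an assumption and the RTMP URL is a placeholder:

```shell
# Written as a script; the target URL and all tuning values are placeholders.
cat > stream_x11.sh <<'EOF'
#!/bin/sh
ffmpeg -f x11grab -framerate 30 -video_size 1920x1080 -i :1 \
  -f pulse -i VirtualSink.monitor \
  -c:v h264_nvenc -preset p4 -tune ll \
  -b:v 6000k -maxrate 6000k -bufsize 12000k -g 60 \
  -c:a aac -b:a 160k \
  -f flv rtmp://example.invalid/live/streamkey
EOF
chmod +x stream_x11.sh
```

p1 is NVENC's fastest-but-lowest-quality preset, and 2500k is thin for 1080p screen content; moving toward p4 and a higher bitrate usually addresses the quality side, while capture frame rate and X11 grab overhead are the first suspects for the lag.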
06:55
I'm trying to do tonemapping (and resizing) of a UHD HDR video stream with ffmpeg. The following command:
ffmpeg -vsync 0 -hwaccel cuda -init_hw_device opencl=ocl -filter_hw_device ocl
-threads 1 -extra_hw_frames 3 -c:v hevc_cuvid -resize 1920x1080 -i "INPUT.hevc"
-vf "hwupload,
tonemap_opencl=tonemap=mobius:param=0.01:desat=0:r=tv:p=bt709:t=bt709:m=bt709:format=nv12,
hwdownload,format=nv12,hwupload_cuda"
-c:v hevc_nvenc -b:v 8M "OUTPUT.hevc"
seems to work (around 200 FPS on an RTX 3080). However, I notice that it still uses one CPU core and (...)
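The CPU core is likely busy because `hwupload ... hwdownload,format=nv12,hwupload_cuda` round-trips every frame through system memory between the CUDA decoder, the OpenCL filter, and the CUDA encoder. A sketch of a zero-copy variant: derive the OpenCL device from the CUDA one and map frames between the two APIs with hwmap instead of copying. Whether the reverse mapping works depends on the ffmpeg build and driver, so this is something to try, not a guaranteed fix:

```shell
# Same filter parameters as the question; only the device setup and the
# upload/download steps change. INPUT.hevc / OUTPUT.hevc as in the question.
cat > tonemap_gpu.sh <<'EOF'
#!/bin/sh
ffmpeg -vsync 0 \
  -init_hw_device cuda=cu -init_hw_device opencl=ocl@cu -filter_hw_device ocl \
  -hwaccel cuda -hwaccel_device cu -hwaccel_output_format cuda \
  -extra_hw_frames 3 -c:v hevc_cuvid -resize 1920x1080 -i "INPUT.hevc" \
  -vf "hwmap,tonemap_opencl=tonemap=mobius:param=0.01:desat=0:r=tv:p=bt709:t=bt709:m=bt709:format=nv12,hwmap=derive_device=cuda:reverse=1" \
  -c:v hevc_nvenc -b:v 8M "OUTPUT.hevc"
EOF
chmod +x tonemap_gpu.sh
```

`opencl=ocl@cu` derives the OpenCL context from the CUDA device, which is what allows hwmap to pass frames across without a system-memory copy.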
05:47
I'm trying to capture a simulator's screen using idb & ffmpeg with the command
idb video-stream --fps 15 --format h264 --compression-quality 1.0 --udid | ffmpeg -i pipe:0 -vcodec libx264 -threads 1 -crf 40 -preset ultrafast -f hls -g 30 -hls_list_size 0 -hls_time 5 -r 15 ./index.m3u8
This takes around 15 seconds to create the index.m3u8 file, resulting in the loss of the first 15 seconds of video.
I've tried adding
-tune zerolatency
but this has no effect either.
idb starts video streaming right away on its own. I need help triaging why ffmpeg is (...)
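One likely contributor: before producing any output, ffmpeg buffers and probes the piped input to detect streams, and the defaults can swallow many seconds of a live pipe. A variant to try that shrinks the probing window and shortens the segments; all changed values are assumptions to experiment with, and the device UDID is passed as the script's first argument here since it was not shown:

```shell
# Sketch assuming idb is on PATH and "$1" carries the simulator UDID.
cat > capture_hls.sh <<'EOF'
#!/bin/sh
idb video-stream --fps 15 --format h264 --compression-quality 1.0 --udid "$1" | \
ffmpeg -probesize 32 -analyzeduration 0 -fflags nobuffer -flags low_delay \
  -i pipe:0 -vcodec libx264 -threads 1 -crf 40 -preset ultrafast \
  -tune zerolatency -f hls -g 15 -hls_time 2 -hls_list_size 0 -r 15 \
  ./index.m3u8
EOF
chmod +x capture_hls.sh
```

Note also that the playlist cannot appear before the first segment is complete, so with `-hls_time 5` and keyframes every 30 frames at 15 fps there is an unavoidable multi-second floor; shorter segments and a smaller GOP lower it.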