13:26
I am using the following ffmpeg command to trim and select certain portions of the video.
ffmpeg -v verbose -i test.mp4 -filter_complex \
"[0:v]trim=start=5:end=40,setpts=PTS-STARTPTS[v_trimmed]; \
[v_trimmed]select='between(t,10,20)',setpts=N/FRAME_RATE/TB[v]; \
[0:a]atrim=start=5:end=40,asetpts=PTS-STARTPTS[a_trimmed]; \
[a_trimmed]aselect='between(t,10,20)',asetpts=N/SR/TB[a]" \
-map "[v]" -map "[a]" -y -r 30 output_video.mp4
The above command errors out with the following trace:
[h264 @ 0x12570a0e0] Reinit context to (...)
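A note on how the two timestamp rebases in that filtergraph interact (my reading of the graph, not ffmpeg output): `trim=start=5` plus `setpts=PTS-STARTPTS` restarts the trimmed stream's clock at zero, so the later `select='between(t,10,20)'` operates on rebased time and actually keeps seconds 15 to 25 of the original file. A small Python sketch of that mapping:

```python
# Illustration (not ffmpeg itself): how the chained trim + select
# filters in the question map back to original-file timestamps.

TRIM_START, TRIM_END = 5.0, 40.0   # trim=start=5:end=40
SEL_START, SEL_END = 10.0, 20.0    # select='between(t,10,20)'

def kept_in_output(t_original):
    """Return True if a frame at t_original (seconds in test.mp4)
    survives both the trim and the select filter.

    setpts=PTS-STARTPTS rebases the trimmed stream to start at 0,
    so the select's t is measured from the trim point, not from
    the start of the original file."""
    if not (TRIM_START <= t_original < TRIM_END):
        return False               # dropped by trim
    t_rebased = t_original - TRIM_START
    return SEL_START <= t_rebased <= SEL_END

print(kept_in_output(12.0))  # False: before the rebased select window
print(kept_in_output(18.0))  # True: 18 - 5 = 13 falls inside (10, 20)
```

So if the intent was to keep original seconds 10 to 20, the select window needs to be shifted by the trim start.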
07:52
I'm recording my screen on Windows using the command: ./ffmpeg -f gdigrab -framerate 30 -i desktop -c:v libx264 -crf 20 -preset ultrafast -max_muxing_queue_size 1024 -hls_time 5 -hls_list_size 10 -hls_flags delete_segments -hls_segment_filename "d:\1\file%03d.ts" "d:\1\playlist.m3u8"
Then I use the function avformat_open_input to open this m3u8 file, but after calling it, the m3u8 file stops updating. When I use the command ./ffmpeg -i D:\1\video.m3u8 -c copy -f mpegts "srt://xxxxx?streamid=#!::h=live/qyt,m=publish", the result is the same. However, when I (...)
13:26
I'm trying to create a video from a single image with a very specific duration of 0.09375 seconds using FFmpeg. I've tried various commands, but I can't seem to get the exact duration I need; the closest I've gotten is 0.080000 seconds. The duration doesn't have to be exactly 0.09375; I just wanted a concrete example to work with.
I've also tried trimming a video, but from what I've read so far, the encoding of the video can be a problem. Even after trying different FFmpeg commands or using MoviePy directly, I've never arrived at the desired (...)
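For the exact-duration question, it may help that 0.09375 s is the exact rational 3/32, so any frame rate for which duration × fps is a whole number can represent it as an exact number of frames. A quick Python check (the choice of 32 fps below is my own illustration, not from the post):

```python
from fractions import Fraction

target = Fraction("0.09375")  # the exact duration from the question
print(target)                 # 3/32: an exact binary fraction

fps = 32                      # illustrative frame rate; any rate making
frames = target * fps         # target*fps an integer would work
print(frames)                 # 3 whole frames at 32 fps
```

In ffmpeg terms that suggests a direction like `-framerate 32 -loop 1 -i image.png -frames:v 3`, though whether the container stores the duration exactly depends on its timebase, so treat this as something to explore rather than a guaranteed fix.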
15:10
Hello, I am trying to get ASS (Advanced SubStation Alpha) code to animate (highlight in green) each word as it is spoken and then have it revert back to white afterwards. What happens with my code is very odd: at first I cannot see any text at all. It then appears with multiple lines overlapped, partially hanging off the screen. From that point it "kind of" works: I can see the desired highlighting happening, but because the text is all jumbled and overlapping it can hardly be called a success.
In case it is relevant, I am using ffmpeg to burn in the subtitles. I am using the following (...)
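On the overlapping-lines symptom: stacked, jumbled text in ASS usually means several Dialogue events are on screen at once, which the renderer then collision-shifts. One way to avoid that (a sketch of the general technique, not a fix for the unseen code) is to keep the whole sentence in a single Dialogue event and flip each word's colour with instantaneous \t transforms. The helper below generates such an override string; times are milliseconds relative to the event start, and ASS colours are in &HBBGGRR& order:

```python
def highlight_words(words):
    """Build ASS override text that turns each word green while it is
    spoken and back to white afterwards, inside ONE Dialogue event.

    words: list of (text, start_ms, end_ms), with times relative to
    the event's start.  \t(t,t,...) acts as an instant style change.
    """
    GREEN = r"\c&H00FF00&"   # ASS colour order is &HBBGGRR&
    WHITE = r"\c&HFFFFFF&"
    parts = []
    for text, t0, t1 in words:
        parts.append("{%s\\t(%d,%d,%s)\\t(%d,%d,%s)}%s"
                     % (WHITE, t0, t0, GREEN, t1, t1, WHITE, text))
    return " ".join(parts)

# Hypothetical word timings for a two-word line:
line = highlight_words([("Hello", 0, 400), ("world", 400, 900)])
print(line)
```

Put the result into the Text field of one Dialogue event and burn it in as usual (e.g. `ffmpeg -i in.mp4 -vf ass=subs.ass out.mp4`); with a single event there is nothing for the renderer to stack.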
13:47
Is there any way (besides the documented slow method) to get access to the texture pixels from an SDL2 texture? SDL_RenderReadPixels is documented with the warning that it is a very slow method and should not be used frequently.
I want to point an FFmpeg AVFrame->data pointer at the texture pixels and have it encode what's in the texture; basically, screen-grab a texture after rendering and blending several textures together.
SDL_LockTexture() gives write-only access to the pixels. The docs also say not to expect pixel data to be present in the pointer returned to you.
Is it possible with (...)
07:46
I'm trying to write software to edit audio files. One aspect of this software is a normalization function.
Until now, I have been normalizing audio with Audacity. Now, I want to normalize them using FFmpeg from a C# application. I looked into the documentation of FFmpeg and saw that there are different types of normalization. Additionally, FFmpeg has a lot of settings, which I find overwhelming.
For Python, I found a nice library that makes it easy to normalize audio:
from pydub import AudioSegment
from pydub.effects import normalize
def (...)
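For reference, the effect pydub's normalize() achieves is plain peak normalization: measure the loudest sample, then apply a single gain so it reaches full scale minus some headroom. A minimal pure-Python sketch of that idea (not pydub's actual code; the sample values below are made up):

```python
def normalize_peak(samples, headroom=0.0):
    """Scale float samples (-1.0..1.0) so the peak hits 1.0 - headroom.

    This is peak normalization: one uniform gain for the whole clip,
    chosen from the loudest sample."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)          # silence: nothing to scale
    gain = (1.0 - headroom) / peak
    return [s * gain for s in samples]

quiet = [0.1, -0.25, 0.2]             # illustrative samples
loud = normalize_peak(quiet)
print(max(abs(s) for s in loud))      # 1.0: peak now at full scale
```

On the FFmpeg side, the "different types of normalization" in the docs are mostly the `loudnorm` (EBU R128 loudness) and `dynaudnorm` filters, which target perceived loudness rather than peak level, so it is worth deciding which behaviour you actually want before picking one.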
07:33
I am working on a project to stream an H.264 video file using RabbitMQ (AMQP protocol) and display it in a web application. The setup involves capturing video frames, encoding them, sending them to RabbitMQ, and then consuming and decoding them on the web application side using Flask and Flask-SocketIO.
However, I am encountering performance issues with the publishing and subscribing rates in RabbitMQ. I cannot seem to achieve more than 10 messages per second. This is not sufficient for smooth video streaming.
I need help to diagnose and resolve these performance (...)
00:49
I have the following command:
ffmpeg -i video.mp4 -i input.png -filter_complex "[1:v]scale=iw*(iw/920):-1[scaled];[0:v][scaled]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -c:v libx264 -c:a aac -strict experimental -pix_fmt yuv420p -y output-scaling.mp4
In the scale section it takes the width of input.png, but I want the width of video.mp4. Is this possible?
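A plain `scale` filter only sees its own input, so `iw` there is always the PNG's width. One way to evaluate the sizing expression against the video instead (a sketch, assuming your ffmpeg build still ships `scale2ref`; newer releases deprecate it in favour of giving `scale` a reference input) is to feed both streams to `scale2ref`, where `iw`/`ih` refer to the reference (second) input. The snippet below just assembles the reworked command string, keeping the question's filenames and its `/920` ratio:

```python
# Hedged sketch: rebuild the question's command with scale2ref so the
# PNG is sized from video.mp4's width instead of its own.  In
# scale2ref expressions, iw/ih are the reference input's dimensions;
# h=ow/mdar preserves the PNG's own aspect ratio.
import shlex

filtergraph = (
    "[1:v][0:v]scale2ref=w=iw*(iw/920):h=ow/mdar[scaled][base];"
    "[base][scaled]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2"
)
cmd = ("ffmpeg -i video.mp4 -i input.png -filter_complex "
       + shlex.quote(filtergraph)
       + " -c:v libx264 -c:a aac -pix_fmt yuv420p -y output-scaling.mp4")
print(cmd)
```

If your build has dropped `scale2ref`, the same idea applies with the `scale` filter's newer reference-input form; the essential point is that the expression has to be evaluated against the video stream, not the PNG.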