22:41
I've performed some overlays of video on video using ffmpeg. However, in some cases I get a single black line at the bottom of the overlaid video. How do I avoid this?
P.
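A frequent cause of that single black line, as far as I can tell, is an overlaid input with an odd width or height: with yuv420p output the dimensions are rounded to even values and the missing row shows up as black. A minimal sketch of the usual workaround (file names are placeholders; the `even` helper just mirrors what the crop expression does inside ffmpeg):

```shell
# floor a pixel dimension to the nearest even value
even() { echo $(( $1 / 2 * 2 )); }
even 479    # 478

# crop the overlaid input to even dimensions before the overlay filter:
# ffmpeg -i base.mp4 -i top.mp4 -filter_complex \
#   "[1:v]crop=trunc(iw/2)*2:trunc(ih/2)*2[ov];[0:v][ov]overlay" out.mp4
```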
22:05
When encoding H.264 using ffmpeg I get the following type of warnings en masse:
Past duration 0.603386 too large
Past duration 0.614372 too large
Past duration 0.606377 too large
What do they mean? I have not found anything clear online or in the ffmpeg documentation.
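For context (hedged, since the exact cause depends on the command): these messages appear when input frames last longer than the constant output frame rate allows, so ffmpeg drops frames and reports the leftover duration; they are warnings, not errors. Two common ways to make them go away, sketched with placeholder file names:

```shell
# keep variable frame timing instead of forcing a constant output rate
ffmpeg -i input.mp4 -vsync vfr -c:v libx264 out.mp4

# or resample to an explicit constant frame rate up front
ffmpeg -i input.mp4 -vf fps=30 -c:v libx264 out.mp4
```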
21:56
I have ffmpeg receiving an RTSP stream and outputting HLS files, and I want those files continuously stored on S3. It's not clear how to do this.
I've seen other similar posts, but the solutions are always a single output file piped to the AWS CLI. In this case I have an indefinite incoming stream and multiple files to output.
This is what I currently have.
ffmpeg \
    -f segment \
    -segment_list_flags live \
    -segment_time 1 \
    -segment_list_size 5 \
    -segment_format mpegts \
    -segment_list public/st/streaming.m3u8 \
    -segment_list_type (...)
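One pattern that fits the "many files, indefinitely" shape (a sketch, not a tested solution: the bucket name, the key layout, and the use of inotify-tools plus the AWS CLI are all assumptions) is to let ffmpeg keep writing locally and have a watcher upload each file as soon as it is closed:

```shell
# map a local segment/playlist path to its S3 key (hypothetical layout)
s3key() { echo "st/$(basename "$1")"; }

# upload each file when ffmpeg finishes writing it (requires inotify-tools):
# inotifywait -m -e close_write --format '%w%f' public/st |
# while IFS= read -r path; do
#     aws s3 cp "$path" "s3://my-bucket/$(s3key "$path")"
# done
```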
19:34
I am using ffmpeg-python (the Python wrapper around ffmpeg).
Below is the sample code:
inp.filter_multi_output('split')[k]
    .filter_('rotate', a=rotation, fillcolor='#00FF0000',
             ow=f"rotw({rotation})", oh=f"roth({rotation})")
    .filter_('scale',
             width=(i['width'] * Factors().factors['w_factor']),
             height=(i['height'] * Factors().factors['h_factor']))
    .filter_('setsar', '1/1')
(...)
18:06
I am currently trying to create a batch file which will do the following:
Open a cmd prompt
Execute an ffmpeg | sox pipeline (audio shenanigans for my project)
Open another cmd prompt
Execute the same ffmpeg | sox pipeline with different parameters
...
My current script is as follows:
start /min cmd /k "ffmpeg -i http://localhost:8000/xxx -f flac - | sox - -t flac E:\Extracts\xxx.flac silence 1 0.50 0.1% 1 2.0 0.1% : newfile : restart"
But when executing this, only the ffmpeg part of the pipeline actually runs in the new prompt. The command is working (...)
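A hedged guess at the cause, since I cannot test a Windows setup here: `start` re-parses its command line, and an unescaped `|` can be claimed by the parent interpreter instead of the new `cmd`, which would match only the ffmpeg half running. Escaping the pipe (`^|`, sometimes `^^^|` depending on how many parsers see the line) is the usual workaround:

```
start /min cmd /k "ffmpeg -i http://localhost:8000/xxx -f flac - ^| sox - -t flac E:\Extracts\xxx.flac silence 1 0.50 0.1% 1 2.0 0.1% : newfile : restart"
```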
16:37
I capture audio and video.
Video is captured using the Desktop Duplication API, and as a result I get 2D textures.
These 2D textures are char arrays.
m_immediateContext->CopyResource(currTexture, m_acquiredDesktopImage.Get());
// map the copied texture so the CPU can read the pixel data
D3D11_MAPPED_SUBRESOURCE resource;
UINT subresource = D3D11CalcSubresource(0, 0, 0);
m_immediateContext->Map(currTexture, subresource, D3D11_MAP_READ_WRITE, 0, &resource);
uchar *buffer = new uchar[m_desc.Height * m_desc.Width * 4];
const uchar *mappedData = (...)
15:00
I'm using this script to keep the video the same as the source and only convert the audio track from
EAC3 to AC3:
ffmpeg -i input.mkv -map 0 -map -0:s? -c:v copy -c:a ac3 -b:a 640k "D:/Movie/output.mkv"
This command works fine when the input and output are both MKV files,
but when the input is MP4 and the output is MKV I get this error:
no such file or directory
How can I change only the container from MP4 to MKV without re-encoding the video, while also converting the audio track to AC3?
My command is not working when doing it from mp4 to (...)
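For what it's worth (a hedged sketch, paths are placeholders): "no such file or directory" from ffmpeg is usually about the input or output path, e.g. the output directory not existing or an unquoted name, rather than the container change itself; MP4 inputs can also carry data tracks that stream copying into MKV trips over, which `-map -0:d?` drops:

```shell
mkdir -p "D:/Movie"   # make sure the output directory actually exists
ffmpeg -i "input.mp4" -map 0 -map -0:s? -map -0:d? \
    -c:v copy -c:a ac3 -b:a 640k "D:/Movie/output.mkv"
```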
13:10
Based on existing questions and guides, it is obvious how to add a text overlay to a video using FFmpeg.
The question is: is it possible to define the time period in which the text is displayed, after which it is removed?
Suppose we have a 33-minute movie and we want to display text from 5:01 until 7:00.
Is there any way?
Thank you
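Yes, drawtext takes an `enable` expression, and `between(t,start,end)` with times in seconds limits when the text is drawn. A sketch (the text and file names are placeholders; the `to_secs` helper is just for the arithmetic: 5:01 is 301 s and 7:00 is 420 s):

```shell
# convert an m:ss timestamp to seconds (10# avoids octal parsing of "01")
to_secs() { IFS=: read -r m s <<< "$1"; echo $(( 10#$m * 60 + 10#$s )); }
to_secs 5:01   # 301
to_secs 7:00   # 420

# show the text only between 5:01 and 7:00 (placeholder names):
# ffmpeg -i movie.mp4 -vf \
#   "drawtext=text='Hello':enable='between(t,301,420)':x=10:y=10" out.mp4
```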
11:10
I'm using Linux. I want to add a transparent watermark to the center of some videos with ffmpeg. I wanted to write a bash script but couldn't; I also tried it in Python.
My folder tree;
$ tree
.
├── 1._Intro
│ ├── 1._Welcome.mp4
│ ├── 2._Thisisantest.mp4
| .
| .
| .
│ ├── 9._Hello.mp4
│ ├── 10._Is_problem_there.jpg
│ └── pdffile1.pdf
├── 3._HTML
│ ├── 1._HTML.mp4
│ └── 4._Form.mp4
.
.
.
How can I add a transparent watermark to those mp4 files with (...)
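One way to script it over the whole tree (a sketch; `watermark.png`, the `_wm` output naming, and the 50% opacity are my assumptions, not from the question):

```shell
# output name next to the input: 1._Welcome.mp4 -> 1._Welcome_wm.mp4
outname() { echo "${1%.mp4}_wm.mp4"; }

# center a semi-transparent watermark on every mp4 under the current tree:
# find . -type f -name '*.mp4' -print0 |
# while IFS= read -r -d '' f; do
#     ffmpeg -i "$f" -i watermark.png -filter_complex \
#         "[1]format=rgba,colorchannelmixer=aa=0.5[wm];[0][wm]overlay=(W-w)/2:(H-h)/2" \
#         -c:a copy "$(outname "$f")"
# done
```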
10:21
I was trying to fit a generator into a model and I got this error:
AssertionError: Cannot find installation of real FFmpeg (which comes with ffprobe).
I have looked over many of the solutions on GitHub and other questions on Stack Overflow but none of them worked for me.
Here is one of the commands I ran:
sudo add-apt-repository ppa:mc3man/trusty-media
sudo apt-get update
sudo apt-get install ffmpeg
sudo apt-get install frei0r-plugins
pip list also indicates the presence of ffmpeg-1.4
In addition, I tried force (...)
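One thing worth checking (an assumption about the cause, not a certain diagnosis): the `ffmpeg-1.4` that pip lists is an unrelated PyPI package, not the FFmpeg program this assertion looks for; the check wants real `ffmpeg`/`ffprobe` executables on PATH:

```shell
command -v ffmpeg  || echo "ffmpeg binary not on PATH"
command -v ffprobe || echo "ffprobe binary not on PATH"
```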
10:10
I am compiling and installing FFmpeg on a Raspberry Pi 4.
I have installed all the other dependencies, and then I enter the following:
cd ~/ffmpeg_sources && \
wget -O ffmpeg-snapshot.tar.bz2 https://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2 && \
tar xjvf ffmpeg-snapshot.tar.bz2 && \
cd ffmpeg && \
PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
    --prefix="$HOME/ffmpeg_build" \
    --pkg-config-flags="--static" \
    --extra-cflags="-I$HOME/ffmpeg_build/include" \
    --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
    --extra-libs="-lpthread -lm" (...)
10:08
I have many surveillance cameras, and I made a function to transcode their RTSP streams to HLS. When I test one or two streams, they play fine, but once I feed in all the camera URLs (rtsp://[username]:[password]@192.168.1.67:554), the connections drop easily and my program stops. :(
I used multithreading to process the video links.
So, can ffmpeg handle many streams simultaneously, or is my code wrong?
How do I handle many RTSP streams with ffmpeg?
xxx.start(de, CommandBuidlerFactory.createBuidler()
(...)
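ffmpeg itself can run many inputs at once, since each transcode is an independent process, so frequent disconnects usually point at the transport or at CPU/network limits rather than a hard ffmpeg cap. A shell sketch of the one-process-per-camera pattern (the `cameras.txt` list, the HLS output layout, and forcing TCP as a fix for drops are assumptions):

```shell
# RTSP over UDP drops packets easily; interleaving over TCP often stabilizes it
i=0
while IFS= read -r url; do
    ffmpeg -rtsp_transport tcp -i "$url" -c copy -f hls "stream$i/index.m3u8" &
    i=$((i + 1))
done < cameras.txt
wait   # keep the parent alive while the per-camera processes run
```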
07:56
I have two folders with the exact same number of images: "folder1" and "folder2".
I want to take these images, convert them into two videos and have them rendered in the same video file. Think of it like a comparison video that's rendered side-by-side in the same file.
I can use ffmpeg to turn the still images into videos, but that produces one file per video:
ffmpeg -start_number 0 -i "folder1/image-%06d.png" -c:v libx264 -vf "format=yuv420p" video1.mp4
ffmpeg -start_number 0 -i "folder2/image-%06d.png" -c:v libx264 -vf "format=yuv420p" (...)
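Since both folders contain the same number of frames, a hedged alternative is to skip the intermediate videos entirely and stack the two image sequences in a single command with the `hstack` filter (the output name is a placeholder):

```shell
ffmpeg -start_number 0 -i "folder1/image-%06d.png" \
       -start_number 0 -i "folder2/image-%06d.png" \
       -filter_complex "[0:v][1:v]hstack=inputs=2,format=yuv420p" \
       -c:v libx264 side_by_side.mp4
```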
08:54
I developed an app that pushes a live stream with ffmpeg. When I checked the app with leaks --atExit -- (I'm on a Mac), I found a memory leak involving AVFormatContext.
The minimized code is below:
#include <iostream>
extern "C" {
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
}
void foo() {
    avdevice_register_all();
    AVFormatContext *avInputFormatContext = avformat_alloc_context();
    AVInputFormat *avInputFormat = av_find_input_format("avfoundation");
    std::cout << (...)
The output is
Process: ffmpegtest [87726]
Path: (...)
01:45
I am looking to generate thumbnail VTT files with coordinates using Node.js.
I used this to generate the thumbnails:
ffmpeg -i 1.mp4 -filter_complex "select='not(mod(n,190))',scale=160:90,tile=5x5" -vsync vfr -qscale:v 5 -an thumbs-%02d.jpg
This generates multiple 5x5 sprites, and each thumbnail covers 8 seconds, so I am looking for something like this:
WEBVTT
00:00.000 --> 00:08.000
/assets/thumbnails-01.jpg#xywh=0,0,160,90
00:08.000 --> 00:16.000
/assets/thumbnails-01.jpg#xywh=160,0,160,90
...
(...)
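The cue arithmetic is simple enough to script; here is a sketch in shell (the question asks for Node.js, but the same arithmetic ports directly). The 25 tiles per sheet, 160x90 tile size, 8 s per thumbnail, and `/assets/` path are taken from the question above; note that `#xywh=` takes x,y,width,height, so every fragment ends in `160,90`:

```shell
# emit a WEBVTT index for 5x5 sprite sheets of 160x90 thumbnails, 8 s each
gen_vtt() {
    local count=$1 step=8 cols=5 rows=5 w=160 h=90 i
    echo "WEBVTT"
    echo
    for (( i = 0; i < count; i++ )); do
        local sheet=$(( i / (cols * rows) + 1 ))
        local x=$(( (i % cols) * w ))
        local y=$(( (i / cols) % rows * h ))
        local t0=$(( i * step )) t1=$(( (i + 1) * step ))
        printf '%02d:%02d.000 --> %02d:%02d.000\n' \
            $(( t0 / 60 )) $(( t0 % 60 )) $(( t1 / 60 )) $(( t1 % 60 ))
        printf '/assets/thumbnails-%02d.jpg#xywh=%d,%d,%d,%d\n\n' \
            "$sheet" "$x" "$y" "$w" "$h"
    done
}

gen_vtt 3
```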
01:06
To extract a list of specific frames to files I can use:
ffmpeg -i in.mp4 -vf select='eq(n\,100)+eq(n\,184)+eq(n\,213)' -vsync 0 frames%d.jpg
And to extract frames sequentially to numpy:
command = [ 'ffmpeg',
'-i', 'input.mp4',
'-f', 'image2pipe',
'-pix_fmt', 'rgb24',
'-vcodec', 'rawvideo', '-']
pipe = subprocess.Popen(command, stdout = subprocess.PIPE, bufsize=10**8)
raw_image = pipe.stdout.read(420*360*3) (...)