Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
ffmpeg: How to limit bit rate of video output?
27 March 2017, by Mat
I want to add an alpha channel to a 2GB raw video file shot in front of a greenscreen.
My problem is: the output files appear much too large to me (about 20 times the size of the input), and OpenShot, the video editor, seems unable to handle such huge files. Comparing the file properties of input and output shows bitrates of about 15k in the input files and 150k-250k in the result files, which seems to pinpoint the problem.
(By the way, I also want to discard the first part of the clip as well as its audio channel and reduce the frame rate from 30 to 24 fps in the process, which all works fine.)
This is my command:
ffmpeg.exe -i -vf chromakey=0x007d06:0.15:0.0 -ss 40.420 -r 24 -b:v 15840k -maxrate 15840k -bufsize 15840k -an -c:v
The parameters as I understand them:
-i ... infile
-vf chromakey=0x007d06:0.15:0.0 ... chroma, tolerance & blend of the greenscreen
-ss 40.420 ... skip before second 40.420 (this works)
-r 24 ... frame rate (this works, too)
-b:v 15840k ... bitrate (seemingly no effect)
-maxrate 15840k ... maximum bitrate (seemingly no effect)
-bufsize 15840k ... buffer size (can't tell)
-an ... no audio channel (works fine)
-c:v ... video encoder (ffvhuff, png, qtrle and tiff were tried, all with the same result)
... outfile
b, maxrate and bufsize are described as global options. No matter what setting, bitrates of up to some 3,000,000 kbit/s (tiff) are reported during the process.
(I also added -pixel_format argb, but this, too, seemed to be ignored; ffmpeg always reports writing yuva420p.)
What am I missing?
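One likely explanation, sketched below as an assumption: the encoders tried (ffvhuff, png, qtrle, tiff) are all lossless, and lossless encoders ignore -b:v, -maxrate and -bufsize entirely, so rate control never happens. A codec that supports both an alpha channel and rate control, such as libvpx (WebM), would honor those options; the file names here are placeholders.

```shell
# Untested sketch, assuming placeholder names input.mov / output.webm:
# libvpx supports alpha (yuva420p) and respects -b:v, unlike the
# lossless codecs (ffvhuff, png, qtrle, tiff), which ignore it.
ffmpeg -ss 40.420 -i input.mov \
  -vf chromakey=0x007d06:0.15:0.0 \
  -r 24 -an \
  -c:v libvpx -pix_fmt yuva420p \
  -b:v 15840k -maxrate 15840k -bufsize 15840k \
  output.webm
```

Note that -ss is placed before -i here; either placement works, but input-side seeking is usually faster.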
-
How to get video rotation angle when metadata rotation is not right?
27 March 2017, by sunjinbo
I have a video file, and I get its resolution (3840x2160) and rotation (0) from the code below:
MediaMetadataRetriever retr = new MediaMetadataRetriever();
retr.setDataSource(mVideoFile);
String height = retr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT);
String width = retr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH);
String rotation = retr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_ROTATION);
But actually, the reported rotation angle is not right: it should be 90 degrees (i.e. a 2160x3840 display resolution), so the video always fails to display correctly in my Android application.
Interestingly, some third-party video players (such as VLC) can detect the actual rotation of this video file and display it correctly. How do they do that?
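Players like VLC typically read the rotation from the stream's display-matrix side data rather than from a metadata tag, which is why they can disagree with MediaMetadataRetriever. As a cross-check outside the app, ffprobe can print both places the rotation may be stored; input.mp4 is a placeholder name.

```shell
# Sketch: print the rotation stored as display-matrix side data and,
# separately, any "rotate" tag on the first video stream. Files that
# confuse MediaMetadataRetriever often carry only one of the two.
ffprobe -v error -select_streams v:0 \
  -show_entries stream_side_data=rotation:stream_tags=rotate \
  -of default=noprint_wrappers=1 input.mp4
```

If the side data reports 90 but the tag is absent (or vice versa), reading the other source in the app should recover the correct angle.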
-
Use ffmpeg to batch-convert mp4 to m3u8 and .ts files [on hold]
27 March 2017, by user6006103
ffmpeg -i input.mp4 -c:v libx264 -c:a aac -strict -2 -f hls -hls_time 20 -hls_list_size 0 -hls_wrap 0 output.m3u8
I am looking for a script that uses ffmpeg to batch-convert mp4 files into m3u8.
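A minimal batch wrapper around the command above could look like the sketch below; it assumes the mp4 files sit in the current directory and names each playlist after its input, with the .ts segments written alongside it.

```shell
# Untested sketch: run the question's HLS command once per mp4 file,
# deriving the playlist name from the input file name.
for f in *.mp4; do
  name="${f%.mp4}"               # strip the .mp4 extension
  ffmpeg -i "$f" -c:v libx264 -c:a aac -strict -2 \
    -f hls -hls_time 20 -hls_list_size 0 -hls_wrap 0 \
    "$name.m3u8"
done
```

Quoting "$f" and "$name.m3u8" keeps the loop safe for file names containing spaces.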
-
ffmpeg convert images with name pattern into video [duplicate]
27 mars 2017, par passionThis question already has an answer here:
- Input parameters to FFMPEG 2 answers
I am converting png images into a video using ffmpeg.
My png file names are like this:
image1_xy.png
image2_xy.png
image3_xy.png
image4_xy.png
...
image10_xy.png
image11_xy.png
...
image100_xy.png
image101_xy.png
...
My command is the following:
ffmpeg -r 1 -f image2 -s 1920x1080 -i image%03d.png -vcodec libx264 -crf 25 -pix_fmt yuv420p test.mp4
which works only when the file names are like image001.png, image002.png, ... How should I modify this to take my file name pattern into account?
Update
I don't mind if the answer suggests a bash script that renames the files, in the order they appear, to the right pattern.
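A rename script along the lines the update asks for could look like this sketch: it zero-pads each frame number so that the existing image%03d.png pattern matches. It assumes the frame numbers stay below 1000, and is demonstrated here on dummy files created in a temporary directory.

```shell
# Sketch: rename image<N>_xy.png to zero-padded image<NNN>.png so
# ffmpeg's image%03d.png sequence pattern can pick the frames up.
dir="$(mktemp -d)"
cd "$dir"
touch image1_xy.png image10_xy.png image100_xy.png   # stand-in frames
for f in image*_xy.png; do
  n="${f#image}"          # strip the "image" prefix
  n="${n%_xy.png}"        # strip the "_xy.png" suffix
  mv -- "$f" "$(printf 'image%03d.png' "$n")"
done
ls "$dir"
```

Alternatively, ffmpeg's image2 demuxer may accept an unpadded pattern directly (-i image%d_xy.png), which would avoid renaming altogether.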
-
Overlay the n-th previous frame as red and the n-th future frame as green, using ffmpeg
27 March 2017, by 12.yakir
How can I use ffmpeg to overlay the current frame with the difference between the current frame and the frame from (say) 10 frames ago, encoded as (say) red, and overlay that with the difference between the current frame and the frame (say) 10 frames ahead, encoded as (say) green?
It should output a single video, with the history, present, and future overlaid on top of each other.
After reading about ffmpeg filters, I suspect I'll need some magical combination of split, itsoffset, blend, difference, filter_complex, overlay, and maybe tblend?
It would be awesome if I could also scale down the video first (before any eventual splitting) to make it go faster (the spatial resolution of the videos I need to work with is overly high).
Impossible?
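Not impossible, though the filtergraph gets hairy. The untested sketch below shows one possible shape of the combination the question guesses at: scale down first, split into copies, delay one copy by 10 frames (history) and advance one by 10 frames (future), take per-pixel differences against the current frame, tint them red and green, and screen-blend everything back together. The specific filters, offsets and file names are assumptions, not a verified recipe.

```shell
# Untested sketch. tpad clones the first frame 10 times (delaying the
# stream), trim+setpts drops the first 10 frames (advancing it), blend
# computes per-pixel differences, and lutrgb zeroes the unwanted color
# channels to tint the differences red or green.
ffmpeg -i in.mp4 -filter_complex "\
[0:v]scale=iw/2:-2,split=3[cur][h][f]; \
[h]tpad=start=10:start_mode=clone[hist]; \
[f]trim=start_frame=10,setpts=PTS-STARTPTS[futr]; \
[cur]split=3[c0][c1][c2]; \
[c1][hist]blend=all_mode=difference,lutrgb=g=0:b=0[red]; \
[c2][futr]blend=all_mode=difference,lutrgb=r=0:b=0[green]; \
[c0][red]blend=all_mode=screen[tmp]; \
[tmp][green]blend=all_mode=screen[out]" \
-map "[out]" out.mp4
```

The scale step comes before the splits, as the question hopes, so all the difference math runs at the reduced resolution. Stream lengths will differ by 10 frames at each end, so some trimming of the result may be needed.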