Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Using ffmpeg to overlay black line or add border to two side by side videos
14 February 2020, by user3135427. I am using the following command to generate a side-by-side video.
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS, pad=iw*2:ih[bg]; [1:v]setpts=PTS-STARTPTS[fg]; [bg][fg]overlay=w" -y final.mp4
It looks like this.
http://www.mo-de.net/d/partnerAcrobatics.mp4
I would like to place a vertical black line on top of the video, right in the middle, or alternatively add a black border to the left video. If I add a border to the left video, I would like to keep the output at the same total width as the two original videos, which would require subtracting the border width from the left video's width. Either solution would work for me.
Thanks
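One possible approach (a sketch based on the command above, not a tested answer; the 10-pixel line width is an assumption) is to keep the existing layout and draw a vertical black bar centred on the seam with the drawbox filter:
# the 10-pixel bar width (w=10, x=iw/2-5) is an assumption; adjust to taste
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS,pad=iw*2:ih[bg];[1:v]setpts=PTS-STARTPTS[fg];[bg][fg]overlay=w,drawbox=x=iw/2-5:y=0:w=10:h=ih:color=black:t=fill" -y final.mp4
To instead add a border while keeping the original total width, the left input could be cropped or scaled narrower by the border width before the pad and overlay steps.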
-
Array of Bitmaps into Video with ffmpeg
14 February 2020, by Steve Jobs Kappa. I'm facing the following problem: I have an array of 100 Bitmaps. These are screenshots I took from a view. I used JCodec to turn them into a video, but it is way too slow. I'm hoping to get better results with FFmpeg.
Now I want to use the FFmpeg library. Similar questions have been asked, but I have no idea how to use ffmpeg, or how to use it in my specific case. All I see are complex commands like this:
File dir = ...; // directory where the images are stored
String filePrefix = "picture"; // image name prefix
String fileExtn = ".jpg"; // image extension
String filePath = dir.getAbsolutePath();
File src = new File(dir, filePrefix + "%03d" + fileExtn); // image names should be picture001, picture002, picture003 and so on, so ffmpeg accepts them as input
complexCommand = new String[]{"-i", src + "", "-c:v", "libx264", "-c:a", "aac", "-vf", "setpts=2*PTS", "-pix_fmt", "yuv420p", "-crf", "10", "-r", "15", "-shortest", "-y", "/storage/emulated/0/" + app_name + "/Video/" + app_name + "_Video" + number + ".mp4"};
The problem is that in this case they are reading images from a path on disk. I need the input to come from an array, and I have no idea what to do with that command string (complexCommand). My Bitmaps look like this:
Bitmap[] bitmaps = new Bitmap[100];
This array is filled later on.
If anyone is looking for how to do it with JCodec, here is what I used:
try {
    out = NIOUtils.writableFileChannel(
            Environment.getExternalStorageDirectory().getAbsolutePath()
                    + "/***yourpath***/output" + System.currentTimeMillis() + ".mp4");
    // for Android use: AndroidSequenceEncoder
    AndroidSequenceEncoder encoder = new AndroidSequenceEncoder(out, Rational.R(25, 1));
    for (int i = 0; i < 100; i++) {
        // Generate the image, for Android use Bitmap
        // Encode the image
        System.out.println("LOO2P" + i);
        encoder.encodeImage(bitmaps[i]);
    }
    // Finalize the encoding, i.e. clear the buffers, write the header, etc.
    encoder.finish();
} catch (FileNotFoundException e) {
    System.out.println("fNF");
    e.printStackTrace();
} catch (IOException e) {
    System.out.println("IOE");
    e.printStackTrace();
} finally {
    System.out.println("IOSSE");
    NIOUtils.closeQuietly(out);
}
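One way to avoid writing image files at all (a sketch, assuming you can launch an ffmpeg binary as a process and write to its standard input; the frame size, pixel format, frame rate and output name below are placeholders for your own values) is to feed the raw pixel data of each Bitmap to ffmpeg over a pipe:
# -s, -r, -pix_fmt and the output name are assumptions; ARGB_8888 bitmaps are typically RGBA in memory
ffmpeg -f rawvideo -pix_fmt rgba -s 1080x1920 -r 15 -i - -c:v libx264 -pix_fmt yuv420p -y out.mp4
Each frame is then width * height * 4 bytes written to the pipe in display order (for example via Bitmap.copyPixelsToBuffer), so no intermediate JPEG files are needed.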
-
Read a Bytes image from Amazon Kinesis output in python
14 February 2020, by Varun_Rathinam. I used
imageio.get_reader(BytesIO(a), 'ffmpeg')
to load a bytes image and save it as a normal image, but the error below is thrown when I read the image using
imageio.get_reader(BytesIO(a), 'ffmpeg')
Traceback (most recent call last):
  File "", line 1, in
  File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio/core/functions.py", line 186, in get_reader
    return format.get_reader(request)
  File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio/core/format.py", line 164, in get_reader
    return self.Reader(self, request)
  File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio/core/format.py", line 214, in __init__
    self._open(**self.request.kwargs.copy())
  File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py", line 323, in _open
    self._initialize()
  File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py", line 466, in _initialize
    self._meta.update(self._read_gen.__next__())
  File "/home/tango/anaconda3/lib/python3.6/site-packages/imageio_ffmpeg/_io.py", line 150, in read_frames
    raise IOError(fmt.format(err2))
OSError: Could not load meta information
=== stderr ===
ffmpeg version 4.2 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0)
  configuration: --prefix=/home/tango/anaconda3 --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1566210161358/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --disable-openssl --enable-avresample --enable-gnutls --enable-gpl --enable-hardcoded-tables --enable-libfreetype --enable-libopenh264 --enable-libx264 --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
[matroska,webm @ 0x5619b9da3cc0] File ended prematurely
[matroska,webm @ 0x5619b9da3cc0] Could not find codec parameters for stream 0 (Video: h264, none, 1280x720): unspecified pixel format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, matroska,webm, from '/tmp/imageio_zm6hhpgr':
  Metadata:
    title           : Kinesis Video SDK
    encoder         : Kinesis Video SDK 1.0.0
    AWS_KINESISVIDEO_FRAGMENT_NUMBER: 91343852333183888465720004820715065721442989478
    AWS_KINESISVIDEO_SERVER_TIMESTAMP: 1580791384.096
    AWS_KINESISVIDEO_PRODUCER_TIMESTAMP: 1580791377.843
  Duration: N/A, bitrate: N/A
    Stream #0:0(eng): Video: h264, none, 1280x720, SAR 1:1 DAR 16:9, 1k tbr, 1k tbn, 2k tbc (default)
    Metadata:
      title           : kinesis_video
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Press [q] to stop, [?] for help
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Conversion failed!
The above approach to reading an MKV bytes file was based on this thread.
Or is there any other approach to parse and read the MKV bytes file?
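The ffmpeg log itself suggests raising analyzeduration and probesize so the demuxer can recover the missing codec parameters from the fragment. As a first check (a sketch; the limit values and the temporary file name are assumptions), you can dump the bytes to a file and verify that ffmpeg can decode it with larger probe limits:
# analyzeduration is in microseconds, probesize in bytes; 100000000 is an arbitrary, generous value; fragment.mkv is a placeholder
ffmpeg -analyzeduration 100000000 -probesize 100000000 -i fragment.mkv -f null -
If that works, the same flags can be forwarded from imageio, for example through the reader's input_params argument if your imageio version supports it.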
-
How to ignore the other output streams if one fails in ffmpeg?
14 February 2020, by geo-freak. I have an input video. In a single command I encode the video and all of the audio channels in it. The live input carries 5.1 audio, and sometimes (randomly, about once a day) stereo audio. I use the following command to encode all the streams.
ffmpeg -i input.ts -c:v libx264 -c:a libfdk_aac -y output.m3u8 \
 -vn -map_channel 0.1.0 -c:a libfdk_aac -ac 1 -y channel1.m3u8 \
 -vn -map_channel 0.1.1 -c:a libfdk_aac -ac 1 -y channel2.m3u8 \
 -vn -map 0:2 -c:a libfdk_aac -ac 1 -y channel3.m3u8 \
 -vn -map_channel 0.2.0 -c:a libfdk_aac -ac 1 -y channel4.m3u8 \
 -vn -map_channel 0.2.1 -c:a libfdk_aac -ac 1 -y channel5.m3u8 \
 -vn -map 0:3 -c:a libfdk_aac -ac 1 -y channel6.m3u8 \
 -vn -map_channel 0.3.0 -c:a libfdk_aac -ac 1 -y channel7.m3u8 \
 -vn -map_channel 0.3.1 -c:a libfdk_aac -ac 1 -y channel8.m3u8 -v verbose
Sometimes my whole encoding command fails because the audio layout suddenly changes (5.1 to stereo) for a particular hour. I don't want the encoding command to stop; I need the main video output, output.m3u8, whatever the audio layout is. Is there an option that lets me ignore the other encodings if they fail? I have seen the option
onfail
in the ffmpeg documentation. I tried using it in my command, but it did not work. Below is the command I used:
ffmpeg -i input.ts -c:v libx264 -c:a libfdk_aac -y output.m3u8 -vn -map_channel 0.1.0 -c:a libfdk_aac -ac 1 -y channel1.m3u8 -vn -map_channel 0.1.1 -c:a libfdk_aac -ac 1 -y channel2.m3u8 -f "[f=mpegts:onfail=ignore] -vn -map 0:2 -c:a libfdk_aac -ac 1 -y channel3.m3u8" -f "[f=mpegts:onfail=ignore] -vn -map_channel 0.2.0 -c:a libfdk_aac -ac 1 -y channel4.m3u8" -f "[f=mpegts:onfail=ignore] -vn -map_channel 0.2.1 -c:a libfdk_aac -ac 1 -y channel5.m3u8" -f "[f=mpegts:onfail=ignore] -vn -map 0:3 -c:a libfdk_aac -ac 1 -y channel6.m3u8" -f "[f=mpegts:onfail=ignore] -vn -map_channel 0.3.0 -c:a libfdk_aac -ac 1 -y channel7.m3u8" -f "[f=mpegts:onfail=ignore] -vn -map_channel 0.3.1 -c:a libfdk_aac -ac 1 -y channel8.m3u8" -f "[f=mpegts:onfail=ignore] -vn -map 0:3.1 -c:a libfdk_aac -ac 1 -y channel6.m3u8" -v verbose
When I give it an input video with 5.1 audio, the above command works, but only for the first 3 outputs. It completely ignores all the remaining outputs, even when they would succeed, regardless of their result.
My question is: how do I use the onfail option so that a failing output is ignored while the others continue?
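For reference, onfail is an option of the tee muxer's slave outputs, not something that can be attached to a plain -f argument as in the command above. A minimal sketch of that syntax, closely following the documented tee examples (the output names and codecs here are illustrative, and this does not reproduce the per-channel -map_channel outputs):
# -map is required with the tee muxer; out1.ts and out2.ts are illustrative names
ffmpeg -i input.ts -map 0:v -map 0:a:0 -c:v libx264 -c:a libfdk_aac -f tee "[f=mpegts:onfail=ignore]out1.ts|[f=mpegts:onfail=ignore]out2.ts"
Because tee duplicates the same encoded streams to every slave, outputs that need different -map_channel or -ac settings still have to be separate outputs of the command (or separate ffmpeg processes), and onfail does not apply to those.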
-
FFprobe timestamps
14 February 2020, by user12129980. I got these results:
media_type=video|stream_index=0|key_frame=1|pkt_pts=4004|pkt_pts_time=0.066733|pkt_dts=4004|pkt_dts_time=0.066733|best_effort_timestamp=4004|best_effort_timestamp_time=0.066733|pkt_duration=N/A|pkt_duration_time=N/A|pkt_pos=247781|pkt_size=3110400|width=1920|height=1080|pix_fmt=yuv420p|sample_aspect_ratio=1:1|pict_type=I|coded_picture_number=0|display_picture_number=0|interlaced_frame=0|top_field_first=0|repeat_pict=0|color_range=unknown|color_space=unknown|color_primaries=unknown|color_transfer=unknown|chroma_location=unspecified|tag:lavfi.scene_score=0.441274
media_type=video|stream_index=0|key_frame=1|pkt_pts=9009|pkt_pts_time=0.150150|pkt_dts=9009|pkt_dts_time=0.150150|best_effort_timestamp=9009|best_effort_timestamp_time=0.150150|pkt_duration=N/A|pkt_duration_time=N/A|pkt_pos=278961|pkt_size=3110400|width=1920|height=1080|pix_fmt=yuv420p|sample_aspect_ratio=1:1|pict_type=I|coded_picture_number=0|display_picture_number=0|interlaced_frame=0|top_field_first=0|repeat_pict=0|color_range=unknown|color_space=unknown|color_primaries=unknown|color_transfer=unknown|chroma_location=unspecified|tag:lavfi.scene_score=0.639635
media_type=video|stream_index=0|key_frame=1|pkt_pts=63063|pkt_pts_time=1.051050|pkt_dts=63063|pkt_dts_time=1.051050|best_effort_timestamp=63063|best_effort_timestamp_time=1.051050|pkt_duration=N/A|pkt_duration_time=N/A|pkt_pos=1741936|pkt_size=3110400|width=1920|height=1080|pix_fmt=yuv420p|sample_aspect_ratio=1:1|pict_type=I|coded_picture_number=0|display_picture_number=0|interlaced_frame=0|top_field_first=0|repeat_pict=0|color_range=unknown|color_space=unknown|color_primaries=unknown|color_transfer=unknown|chroma_location=unspecified|tag:lavfi.scene_score=0.477124
media_type=video|stream_index=0|key_frame=1|pkt_pts=142142|pkt_pts_time=2.369033|pkt_dts=142142|pkt_dts_time=2.369033|best_effort_timestamp=142142|best_effort_timestamp_time=2.369033|pkt_duration=N/A|pkt_duration_time=N/A|pkt_pos=5345789|pkt_size=3110400|width=1920|height=1080|pix_fmt=yuv420p|sample_aspect_ratio=1:1|pict_type=I|coded_picture_number=0|display_picture_number=0|interlaced_frame=0|top_field_first=0|repeat_pict=0|color_range=unknown|color_space=unknown|color_primaries=unknown|color_transfer=unknown|chroma_location=unspecified|tag:lavfi.scene_score=0.386543
I am trying to understand this. To my understanding,
best_effort_timestamp=4004
represents the boundary of the first shot, but what does 4004 represent? Milliseconds? Microseconds?
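Neither: pkt_pts and best_effort_timestamp are expressed in units of the stream's time_base, while the matching *_time fields are already in seconds. From the output above, 0.066733 / 4004 gives a time_base of 1/60000, so 4004 * 1/60000 ≈ 0.0667 s is the presentation time of that keyframe. The stream's time_base can be confirmed directly (a sketch; input.mp4 is a placeholder for the actual file):
# prints the video stream's time_base and average frame rate
ffprobe -v error -select_streams v:0 -show_entries stream=time_base,avg_frame_rate input.mp4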