Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
After ffmpeg upgrade, code no longer builds the clip
11 February, by Tchoune

I have a problem after upgrading ffmpeg from 4.2.2 to 5.2.2: my code no longer works. When I upload a video from my React Native application, I get a file-corruption error in my Python FFmpeg agent. The flow is: the app sends the video to Laravel, which stores it on the MinIO storage (the video is available there); the agent then makes an HTTP request with the MinIO key to download the MP4 locally, and the video is corrupted, on MinIO too. I have the impression that an error while downloading the video locally is what corrupts it, but I have no idea how to debug this. If I upload the video directly from my web interface I don't have this problem. The only difference is processClipSynchronously, which is set to True on mobile and False on web.
The Laravel agent sends to the Python microservice:
// Store uploaded video file
$videoFilePath = $this->storeVideoFile($learningGoal, $videoFile);

// Add video to storyboard
$agentResponse = Http::post($this->agentUrl . 'learning-goals/' . $learningGoal->id
    . '/storyboards/' . $storyboardId
    . '/chapters/' . $chapterId . '/videos', [
        'clip' => $videoFilePath,
        'processClipSynchronously' => $processClipSynchronously
]);
The Python video agent:
@app.route('/learning-goals/<learning_goal_id>/storyboards/<storyboard_id>/chapters/<chapter_id>/videos', methods=['post'])
def post_storyboard_videos(learning_goal_id, storyboard_id, chapter_id):
    storyboard = get_storyboard(learning_goal_id, storyboard_id)
    chapter, position = get_chapter(storyboard, chapter_id)
    if 'clip' in request.get_json():
        chapter['clip'] = request.get_json()['clip']
        if 'duration' in storyboard:
            del chapter['duration']
        if 'thumbnail' in storyboard:
            del chapter['thumbnail']
        if 'ncAudioPreviewPath' in chapter:
            del chapter['ncAudioPreviewPath']
        if 'trim_start' in chapter:
            del chapter['trim_start']
        if 'trim_end' in chapter:
            del chapter['trim_end']
        if 'perform_nc' in chapter:
            del chapter['perform_nc']
    else:
        abort(400)
    new_storyboard = create_new_version_storyboard(storyboard)
    if 'processClipSynchronously' in request.get_json() and request.get_json()['processClipSynchronously']:
        treat_clip(new_storyboard, chapter)  # Mobile triggers here
    else:
        thread = StoppableThread(target=treat_clip, args=(new_storyboard, chapter))
        thread.daemon = True
        thread.start()
    chapter, position = get_chapter(new_storyboard, chapter_id)
    return json.loads(dumps(chapter))

def treat_clip(storyboard, chapter):
    logging.info('start treating clip (' + chapter['clip'] + ') for learning goal : '
                 + str(storyboard['learningGoalId']))
    file = app.config['VOLUME_PATH'] + chapter['clip']
    os.makedirs(dirname(file), exist_ok=True)
    temp_files_to_remove = []
    if not os.path.exists(file):
        # Download file from S3 storage.
        s3.download_file(chapter['clip'], file)
        # Clean the file at the end (it's already in S3).
        temp_files_to_remove.append(file)
    else:
        logging.warn(f'Not downloading {chapter["clip"]} from S3 as it already exists on the filesystem')
    resolution_width, resolution_height = get_resolution(file)
    is_rotated_video = is_rotated(file)
    sample_aspect_ratio = get_sample_aspect_ratio(file)
    frame_rate = get_frame_rate(file)
    if (not file.endswith('.mp4') or resolution_width != 1920 or resolution_height != 1080
            or is_rotated_video or sample_aspect_ratio != '1:1' or frame_rate > 60):
        chapter['clip'] = format_video(chapter['clip'], resolution_width, resolution_height,
                                       frame_rate, is_rotated_video,
                                       str(storyboard['learningGoalId']), 1920, 1080)
        file = app.config['VOLUME_PATH'] + chapter['clip']
        # Upload file to S3 storage.
        s3.upload_file(file, chapter['clip'])
        # Clean the new file at the end.
        temp_files_to_remove.append(file)
    clip = VideoFileClip(file)
    chapter['duration'] = float(clip.duration)
    thumbnail_relative_path = create_video_thumbnail(storyboard, clip, 0)
    ....
It's VideoFileClip from moviepy that raises the error: "moov atom not found". I think S3 doesn't have time to finish downloading the file, which corrupts it, but I don't know how to test or fix that.

Thanks in advance.
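One way to test the truncation theory is to check the downloaded file for a moov box before handing it to moviepy: a valid MP4 must contain a moov atom among its top-level boxes, and a download that stopped early usually ends before it. This is a sketch (not part of the question's code) using only the Python standard library; the retry/abort policy around it is up to the caller.

```python
import struct

def has_moov(path):
    """Walk top-level MP4 (ISO-BMFF) boxes and report whether a 'moov' box
    is present. A download that stopped early typically ends before moov,
    which is what produces moviepy's "moov atom not found" error."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                return False  # reached EOF without seeing moov
            size, box_type = struct.unpack(">I4s", header)
            if box_type == b"moov":
                return True
            if size == 1:  # 64-bit box: real size follows the header
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)
            elif size == 0:  # box extends to the end of the file
                return False
            elif size < 8:
                return False  # malformed header, treat as corrupt
            else:
                f.seek(size - 8, 1)
```

Calling this on `file` right after `s3.download_file(...)` would distinguish a truncated download from a file that was already corrupt on MinIO; comparing `os.path.getsize(file)` with the object size reported by the S3 API is another cheap cross-check.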
-
Split a video in two and burn subtitles into each output video
11 February, by Kairei

I want to split a single input video "input.mp4" into two separate videos "out1.mp4" and "out2.mp4", and burn hard subtitles into each of the output files. The subtitles come from two pre-existing subtitle files, "subtitles1.ass" and "subtitles2.ass". I tried just adding -vf "ass=subtitles1.ass" and -vf "ass=subtitles2.ass" before each of the output files: subtitles from subtitles1.ass were added to out1.mp4, but out2.mp4 had no subtitles. I spent hours reading docs and trying things, realized I probably need a complex filter and mapping, and came up with this:
ffmpeg.exe -i "input.mp4" -filter_complex "[0:v]split=2[in1][in2];[in1]ass=subtitles1.ass[out1];[in2]ass=subtitles2.ass[out2]" -map "[out1]" -map 0:a -ss 0:00:00.00 -to 0:01:00.00 "C:\out1.mp4" -map "[out2]" -map 0:a -ss 0:01:00.00 -to 0:02:00.00 "C:\out2.mp4"
... which I think means: "Take the input file and split it into two input pads; send input pad 1 through the subtitle filter with parameter subtitles1.ass and input pad 2 through the subtitle filter with parameter subtitles2.ass. The two then come out on output pads out1 and out2. I then map out1 (the video with burned-in subtitles), also map the audio from the input file, and send the first minute of the video to out1.mp4. I do the same thing for output pad out2 to get the second minute of video with subtitles from subtitles2.ass."
I do get out1.mp4 with the first minute of video and audio and properly burned-in subtitles. Unfortunately, out2.mp4 has the correct second minute of video and audio but no subtitles. Am I missing something to get subtitles2.ass burned into out2.mp4?
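One common explanation: the ass filter renders events against the timestamps it sees, and with -ss as an output option the filter runs over the whole timeline first. If subtitles2.ass is timed from 0:00 relative to its own segment, its events land on the frames from 0:00-1:00, which are then discarded for out2. A hedged sketch of a workaround, assuming each .ass file is timed from the start of its segment: seek on the input instead, in two invocations, so timestamps are reset to zero before the filter runs. File names follow the question; the lists are what one would hand to subprocess.run.

```python
# Sketch: build two ffmpeg invocations with input-side seeking, so the ass
# filter sees timestamps starting at zero for each segment.

def segment_with_subs(src, start, end, ass_file, out):
    """Return an ffmpeg argument list that cuts [start, end) from src and
    burns ass_file into the result. Putting -ss/-to before -i makes ffmpeg
    seek on the input, resetting timestamps before the filter runs."""
    return [
        "ffmpeg",
        "-ss", start, "-to", end,  # input-side seek: timestamps restart at 0
        "-i", src,
        "-vf", f"ass={ass_file}",  # burn subtitles after the seek
        "-c:a", "copy",            # keep the audio as-is
        out,
    ]

cmd1 = segment_with_subs("input.mp4", "0:00:00", "0:01:00", "subtitles1.ass", "out1.mp4")
cmd2 = segment_with_subs("input.mp4", "0:01:00", "0:02:00", "subtitles2.ass", "out2.mp4")
```

If the .ass files are instead timed against the original full timeline, the single-command filter_complex approach should work as written, and the problem would lie elsewhere, so checking the event times inside subtitles2.ass is the first thing to verify.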
-
FFmpeg: webm to mp4 quality loss
11 February, by turboLoop

When converting a .webm video (a two-colored animation) to .mp4 using ffmpeg (3.4.2 on Mac), the result is somewhat blurry. I researched this topic and tried different approaches to solve it. Here is the most promising command:
ffmpeg -i video.webm -qscale 1 video.mp4
However, the quality loss is still tremendous; see the difference below.

(webm screenshot)

(mp4 screenshot)
The resolution of the two videos is the same, but the size dropped from 24.3 MB (.webm) to 1.5 MB (.mp4) after conversion.
Update
Here is the log of the conversion.
ffmpeg version 3.4.2 Copyright (c) 2000-2018 the FFmpeg developers
  built with Apple LLVM version 9.0.0 (clang-900.0.39.2)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/3.4.2 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --disable-jack --enable-gpl --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --enable-videotoolbox --disable-lzma
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Input #0, matroska,webm, from 'video.webm':
  Metadata:
    encoder         : whammy
  Duration: 00:00:05.02, start: 0.000000, bitrate: 38755 kb/s
    Stream #0:0: Video: vp8, yuv420p(progressive), 1920x1080, SAR 1:1 DAR 16:9, 60 fps, 60 tbr, 1k tbn, 1k tbc (default)
Please use -q:a or -q:v, -qscale is ambiguous
Stream mapping:
  Stream #0:0 -> #0:0 (vp8 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x7f8625800c00] -qscale is ignored, -crf is recommended.
[libx264 @ 0x7f8625800c00] using SAR=1/1
[libx264 @ 0x7f8625800c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7f8625800c00] profile High, level 4.2
[libx264 @ 0x7f8625800c00] 264 - core 152 r2854 e9a5903 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'video.mp4':
  Metadata:
    encoder         : Lavf57.83.100
    Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 60 fps, 15360 tbn, 60 tbc (default)
    Metadata:
      encoder         : Lavc57.107.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
frame=  301 fps= 45 q=-1.0 Lsize=    1417kB time=00:00:04.96 bitrate=2336.4kbits/s speed=0.735x
video:1412kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.309675%
[libx264 @ 0x7f8625800c00] frame I:2     Avg QP:13.08  size:  8842
[libx264 @ 0x7f8625800c00] frame P:75    Avg QP:24.29  size:  6785
[libx264 @ 0x7f8625800c00] frame B:224   Avg QP:26.38  size:  4102
[libx264 @ 0x7f8625800c00] consecutive B-frames:  0.7%  0.0%  1.0% 98.3%
[libx264 @ 0x7f8625800c00] mb I  I16..4: 68.1% 28.7%  3.2%
[libx264 @ 0x7f8625800c00] mb P  I16..4:  0.1%  2.2%  0.4%  P16..4:  6.5%  4.0%  1.4%  0.0%  0.0%    skip:85.4%
[libx264 @ 0x7f8625800c00] mb B  I16..4:  0.0%  0.2%  0.0%  B16..8:  8.8%  3.0%  0.3%  direct: 0.3%  skip:87.3%  L0:52.1% L1:47.5% BI: 0.4%
[libx264 @ 0x7f8625800c00] 8x8 transform intra:57.7% inter:67.8%
[libx264 @ 0x7f8625800c00] coded y,uvDC,uvAC intra: 25.7% 8.7% 0.9% inter: 3.9% 0.4% 0.0%
[libx264 @ 0x7f8625800c00] i16 v,h,dc,p: 95%  2%  3%  0%
[libx264 @ 0x7f8625800c00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 17%  5% 48%  5%  7%  6%  5%  4%  3%
[libx264 @ 0x7f8625800c00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 14% 31%  6%  7%  7%  6%  5%  4%
[libx264 @ 0x7f8625800c00] i8c dc,h,v,p: 88%  6%  6%  0%
[libx264 @ 0x7f8625800c00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x7f8625800c00] ref P L0: 55.3%  5.5% 24.8% 14.5%
[libx264 @ 0x7f8625800c00] ref B L0: 75.6% 16.7%  7.7%
[libx264 @ 0x7f8625800c00] ref B L1: 93.9%  6.1%
[libx264 @ 0x7f8625800c00] kb/s:2304.86
Any idea on how to overcome this quality loss?
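The log already points at the fix: "-qscale is ignored, -crf is recommended", so the encode silently fell back to libx264's default crf=23. A hedged sketch of a command that sets the quality explicitly (the CRF value is illustrative; lower means higher quality and a larger file):

```python
# Sketch: libx264 ignores -qscale (the log says so), so control quality
# with -crf instead. Expressed as an argument list for subprocess.run.
cmd = [
    "ffmpeg", "-i", "video.webm",
    "-c:v", "libx264",
    "-crf", "18",           # near-transparent; try 15-20 for flat-color animation
    "-preset", "slow",      # better compression at the same quality
    "-pix_fmt", "yuv420p",  # broad player compatibility
    "video.mp4",
]
```

For flat, two-colored content, -tune animation may also help, and if the hard color edges still blur, keep lowering -crf; the size gap to the source will shrink accordingly.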
-
Output image with correct aspect with ffmpeg
11 February, by koichirose

I have an mkv video with the following properties (obtained with mediainfo):
Width                         : 718 pixels
Height                        : 432 pixels
Display aspect ratio          : 2.35:1
Original display aspect ratio : 2.35:1
I'd like to take screenshots of it at certain times:
ffmpeg -ss 4212 -i filename.mkv -frames:v 1 -q:v 2 out.jpg
This produces a 718x432 jpg image, but the aspect ratio is wrong (the image is "squeezed" horizontally). AFAIK, the output image should be 1015x432 (with width = height * DAR). Is this calculation correct?
Is there a way to have ffmpeg output images with the correct size/AR for all videos (i.e. no "hardcoded" values)? I tried playing with the setdar/setsar filters without success.
Also, out of curiosity, trying to obtain SAR and DAR with ffmpeg produces:
Stream #0:0(eng): Video: h264 (High), yuv420p(tv, smpte170m/smpte170m/bt709, progressive), 718x432 [SAR 64:45 DAR 2872:1215], SAR 155:109 DAR 55645:23544, 24.99 fps, 24.99 tbr, 1k tbn, 49.98 tbc (default)
2872/1215 is about 2.363, a slightly different value than what mediainfo reported. Does anyone know why?
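A generic approach (a sketch, not tested against this exact file) is to let the scale filter read the sample aspect ratio itself: scale=iw*sar:ih works for any input without hardcoding dimensions. Note that the stream's SAR 64:45 gives a display width of about 1021 rather than 1015; ffmpeg prints the exact stored fractions, while mediainfo likely rounds the display aspect ratio to a common value such as 2.35:1, which would also explain the 2.363 vs 2.35 discrepancy.

```python
# Sketch: compute the display width from the stream's sample aspect ratio,
# then let ffmpeg do the same scaling generically with scale=iw*sar:ih.
# The numbers come from the question's ffmpeg stream line.
width, height = 718, 432
sar_num, sar_den = 64, 45  # from "[SAR 64:45 ...]" in ffmpeg's output
display_width = round(width * sar_num / sar_den)  # 1021

cmd = [
    "ffmpeg", "-ss", "4212", "-i", "filename.mkv",
    "-frames:v", "1", "-q:v", "2",
    "-vf", "scale=iw*sar:ih",  # stretch width by SAR; no hardcoded sizes
    "out.jpg",
]
```

Appending setsar=1 after the scale (i.e. "scale=iw*sar:ih,setsar=1") is a common extra step so the output is flagged with square pixels.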
-
FFmpeg - convert 2 audio tracks from video to 5.1 audio, (play video with different languages to different devices) [closed]
10 February, by Sandre

How can I watch a movie with one language playing through the speakers and another through headphones?
Disclaimer: I know nothing about audio conversion and don't want to study ffmpeg. I spent a few hours searching for how to do this, actually much more than I wanted. I found a bunch of questions from different people and not a single working solution, so I made a clunky but working one. If someone helps me make it more elegant, I'll be happy. If my question just gets downvoted like most ffmpeg newbie questions, it probably deserves it. And I hope my question can help people who want to enjoy a video in two different languages.
A clumsy but working solution:

1. Set up an Aggregate Audio Device to play 2 channels of the 5.1 through the speakers and 2 through Bluetooth headphones (the screenshot shows Audio MIDI Setup on macOS).
2. Use ffmpeg to convert the 2 audio tracks into one 5.1 track.
3. Play the video with the new external audio track.
# print list of channels
ffprobe INPUT.mkv 2>&1 >/dev/null | grep Stream

--- sample output ---
Stream #0:0(eng): Video: h264 (High), yuv420p(progressive), 1280x544, SAR 1:1 DAR 40:17, 23.98 fps, 23.98 tbr, 1k tbn (default)
Stream #0:1(rus): Audio: ac3, 48000 Hz, 5.1(side), fltp, 448 kb/s (default)
Stream #0:2(eng): Audio: ac3, 48000 Hz, 5.1(side), fltp, 640 kb/s

# extract audio
ffmpeg -i INPUT.mkv -map 0:1 -acodec copy ru.ac3
ffmpeg -i INPUT.mkv -map 0:2 -acodec copy en.ac3

# extract only the front_center channel (speech) from 5.1
ffmpeg -i en.ac3 -filter_complex "channelsplit=channel_layout=5.1:channels=FC[center]" -map "[center]" en_front_center.wav
ffmpeg -i ru.ac3 -filter_complex "channelsplit=channel_layout=5.1:channels=FC[center]" -map "[center]" ru_front_center.wav

# join to 5.1
ffmpeg -i en_front_center.wav -i ru_front_center.wav -filter_complex "[0:a][0:a][0:a][0:a][1:a][1:a]join=inputs=6:channel_layout=5.1[a]" -map "[a]" output.wav
Is it possible to avoid re-encoding the audio and copying the same channel many times to reduce the file size?
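Re-encoding cannot be avoided entirely, since any remixing of channels requires decoding, but the intermediate files can: the whole pipeline fits in one ffmpeg pass, and encoding the result as lossy AC-3 instead of uncompressed WAV is what actually shrinks the file (the duplicated center channel also compresses well). A hedged sketch, with the stream indices (0:1 Russian, 0:2 English) taken from the question's ffprobe output and shown as an argument list for subprocess.run:

```python
# Sketch: single-pass remix. channelsplit pulls out the front-center
# (speech) channel of each track; asplit duplicates it the required number
# of times; join assembles a 5.1 layout with English on the first four
# channels and Russian on the last two.
graph = (
    "[0:2]channelsplit=channel_layout=5.1:channels=FC,asplit=4[e1][e2][e3][e4];"
    "[0:1]channelsplit=channel_layout=5.1:channels=FC,asplit=2[r1][r2];"
    "[e1][e2][e3][e4][r1][r2]join=inputs=6:channel_layout=5.1[a]"
)
cmd = [
    "ffmpeg", "-i", "INPUT.mkv",
    "-filter_complex", graph,
    "-map", "[a]",
    "-c:a", "ac3", "-b:a", "448k",  # lossy 5.1 instead of uncompressed WAV
    "output.ac3",
]
```

This skips the four intermediate .ac3/.wav files of the original recipe and only decodes each source track once.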