Newest 'ffmpeg' Questions - Stack Overflow
-
FFmpeg crossfading with audio keeps throwing [AVFilterGraph @ 0x...] No such filter: '', but works video only
5 December, by J. Cravens
I've got this script that crossfades all the mp4s in the directory. If I try to add the audio with this, it doesn't work:
VIDEO="fade=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+(${fade_start}/TB)[v${i}];" AUDIO="[0:a]afade=d=1[a0];"
This works fine...
#!/bin/bash CMD="ffmpeg" FILES=(*.mp4) SIZE=${#FILES[@]} VIDEO="" OUT="" i="0" total_duration="0" for file in "${FILES[@]}"; do echo "$file" CMD="$CMD -i '$file'" duration=$(ffprobe -v error -select_streams v:0 -show_entries stream=duration -of csv=p=0 "$file" | cut -d'.' -f1) if [[ "$i" == "0" ]] then VIDEO="[0:v]setpts=PTS-STARTPTS[v0];" else fade_start=$((total_duration)) VIDEO="${VIDEO}[${i}:v]format=yuva420p,fade=in:st=0:d=1:alpha=1,setpts=PTS-STARTPTS+(${fade_start}/TB)[v${i}];" if (( i < SIZE-1 )) then if (( i == 1 )) then OUT="${OUT}[v0][v1]overlay[outv1];" else OUT="${OUT}[outv$((i-1))][v${i}]overlay[outv${i}];" fi else if (( SIZE == 2 )) then OUT="${OUT}[v0][v1]overlay,format=yuv420p[outv]" else OUT="${OUT}[outv$((i-1))][v${i}]overlay,format=yuv420p[outv]" fi fi fi total_duration=$((total_duration+duration)) i=$((i+1)) done CMD="$CMD -filter_complex \"${VIDEO}${OUT}\" -c:v libx264 -preset ultrafast -map [outv] crossfade.mp4" echo "$CMD" bash -c "$CMD"
...but when I try to add the audio, it's not happy. I'm missing something.
Anyone know what is going on? I read of cases where an extra ; at the end causes "No such filter", but I tried echo ${VIDEO%?} before the command and it didn't seem to make a difference. I read that acrossfade requires 32-bit little-endian input, so I switched to afade, but that didn't change anything. Ideas?

Update:
#!/bin/bash
duration_a=$(ffprobe -v error -select_streams a:0 -show_entries stream=duration -of csv=p=0 -i audio_02.m4a)
ffmpeg -i audio_01.m4a -i audio_02.m4a -filter_complex \
    "[0:a] \
     [1:a]acrossfade=d=1.0:c1=tri:c2=tri,asetpts=PTS-STARTPTS+'$duration_a'[outa]" -vn -map "[outa]" 01_02.m4a
duration_b=$(ffprobe -v error -select_streams a:0 -show_entries stream=duration -of csv=p=0 -i audio_03.m4a)
ffmpeg -i 01_02.m4a -i audio_03.m4a -filter_complex \
    "[0:a] \
     [1:a]acrossfade=d=1.0:c1=tri:c2=tri,asetpts=PTS-STARTPTS+'$duration_b'[outa]" -vn -map "[outa]" 01_02_03.m4a
...and so on. This still doesn't properly sync the audio and video. I noticed the first two always stayed in sync, so I tried doing it one file at a time, appending the next. It still drifts out of sync, regardless of using asetpts=PTS-STARTPTS.

Update 2: I've got this working via fade in/out.
#!/bin/bash
if [ -e *.mkv ]; then
    file_type=".mkv"
else
    file_type=".mp4"
fi
mkdir ./temp
for file in *$file_type; do mv "$file" "in_$file"; done

# Function to fade in video
function fade_in() {
    file_list1=(in_*)
    echo "Executing fade_in"
    for file in "${file_list1[@]}"; do
        ffmpeg -i "$file" -y -vf fade=in:0:30 -hide_banner -preset ultrafast "out_$file"
    done
    mv in_* ./temp
}
export -f fade_in

# Function to fade out video
function fade_out() {
    file_list2=(out_*)
    echo "Executing fade_out"
    for file in "${file_list2[@]}"; do
        frame_count=$(ffmpeg -i $file -map 0:v:0 -c copy -f null -y /dev/null 2>&1 | grep -Eo 'frame= *[0-9]+ *' | grep -Eo '[0-9]+' | tail -1)
        frame_start=$((frame_count - 30))
        ffmpeg -i "$file" -y -vf fade=out:"$frame_start":30 -hide_banner -preset ultrafast "to_mux_$file"
    done
    mv out_* ./temp
}
export -f fade_out

bash -c "fade_in"
bash -c "fade_out"

ls --quoting-style=shell-always -1v *$file_type > tmp.txt
sed 's/^/file /' tmp.txt > list.txt && rm tmp.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy -shortest -movflags +faststart fade_muxed$file_type
mv to_mux_* ./temp
rm list.txt
#rm -rf ./temp
exit 0
Number the filenames sequentially (e.g. 01_file.mkv, 02_file.mkv, etc.) to ensure order; otherwise it is left to the directory sort.
It's not a crossfade, but it's pretty decent and has audio.
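For the crossfade-with-audio goal itself, the xfade/acrossfade pair (available in ffmpeg 4.3 and newer) crossfades video and audio in one pass, provided the clips share resolution and frame rate. A minimal two-clip sketch, where clip1.mp4, clip2.mp4 and the 9-second offset (first clip's duration minus the 1-second fade) are placeholder values:

# Crossfade both video and audio between two clips
ffmpeg -i clip1.mp4 -i clip2.mp4 -filter_complex \
    "[0:v][1:v]xfade=transition=fade:duration=1:offset=9[outv]; \
     [0:a][1:a]acrossfade=d=1[outa]" \
    -map "[outv]" -map "[outa]" -c:v libx264 -preset ultrafast crossfade.mp4

More clips chain the same way: feed the [outv]/[outa] of one stage into the next xfade/acrossfade and bump the offset by each additional clip's duration minus the fade length.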
Still working on the crossfade + audio. -
OpenCV cannot read AVI/MP4 file. [mpeg4 @ 0x55935a6280c0] get_buffer() failed
5 December, by Alex Murphy
I am trying to read single frames from an mp4 file using OpenCV in Python:
import cv2

video_path = 'test.mp4'
cam = cv2.VideoCapture(video_path, cv2.CAP_FFMPEG)
success, image = cam.read()
count = 0
while success:
    success, image = cam.read()
    print('Read a new frame: ', success)
    count += 1
The success boolean instantly evaluates to false when trying to read the first frame. These are the errors I'm getting:
[mpeg4 @ 0x55935a6280c0] get_buffer() failed
[mpeg4 @ 0x55935a6280c0] thread_get_buffer() failed
[mpeg4 @ 0x55935a6280c0] get_buffer() failed (-12 (nil))
[mpeg4 @ 0x55935a52bd80] Too many errors when draining, this is a bug. Stop draining and force EOF.
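The -12 in the third line is AVERROR(ENOMEM), i.e. an allocation failure inside the mpeg4 decoder, so it is worth seeing how large a frame the decoder is being asked to allocate. A quick inspection sketch, assuming the same test.mp4 as above:

# Show the codec, frame size and pixel format of the video stream
ffprobe -v error -select_streams v:0 \
    -show_entries stream=codec_name,width,height,pix_fmt,nb_frames \
    -of default=noprint_wrappers=1 test.mp4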
I can convert files from mp4 to avi and avi to mp4 using ffmpeg in my terminal, and ffmpeg shows no sign that the files are corrupted, but the code still fails when reading an mp4. I have 20GB of RAM, so I don't think it is a RAM problem.
Python 3.6.13, opencv-python 3.4.18.65, running on Linux.
Edit: Restarting the machine and going from 20GB of RAM to 40GB sometimes fixes it. Considering it's a 17-second file only a few MB big, this doesn't seem to make sense.
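Since the failing decoder is the FFmpeg build bundled with opencv-python, one low-risk workaround is to re-encode the clip to H.264/yuv420p with the system ffmpeg and read that copy instead; a sketch, with test_h264.mp4 as a placeholder output name:

# Re-encode to H.264 with a widely supported pixel format; -an drops the audio,
# which cv2.VideoCapture ignores anyway
ffmpeg -i test.mp4 -c:v libx264 -pix_fmt yuv420p -an test_h264.mp4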
-
fluent-ffmpeg failed with zero output or errors
5 December, by dcprime
I'm attempting to convert a .webm audio file to a .wav audio file using fluent-ffmpeg. I've looked through all kinds of example code, but whenever I run my code I get no output or errors whatsoever:

import ffmpeg from 'fluent-ffmpeg';

function verifyAudio({ testFile }) {
    // parse file name for wav file from testFile
    const wavFile = testFile.split('.')[0] + '.wav';
    console.log(`wavFile: ${wavFile}`);

    // convert to wav file
    ffmpeg(testFile)
        .inputFormat('webm')
        .outputFormat('wav')
        .on('start', function(commandLine) {
            console.log('Spawned Ffmpeg with command: ' + commandLine);
        })
        .on('progress', (progress) => {
            console.log('Processing: ' + progress.targetSize + ' KB converted');
        })
        .on('end', function(err) {
            console.log('done!', err);
        })
        .on('error', function(err) {
            console.log('an error: ' + err);
        })
        .saveToFile(wavFile);

    console.log('finished conversion');
}

export { verifyAudio };
This function takes the name of the input .webm file as an argument and attempts to convert it to a .wav file named wavFile. When it runs, I see the output of console.log(`wavFile: ${wavFile}`); and of console.log('finished conversion');, but none of the .on handlers in the ffmpeg call fire. No errors are raised at all, not even when I pass a non-existent filename as testFile, and no output file is generated (not even an empty one). I have no idea what's going on here, since I have no output to debug with. What is happening?
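One thing worth ruling out from the shell that launches Node: fluent-ffmpeg documents that it runs the binary named by the FFMPEG_PATH environment variable if set, and otherwise expects ffmpeg on PATH, and the conversion requested above is roughly equivalent to the command below (test.webm and test.wav stand in for testFile and wavFile):

# Confirm which ffmpeg binary the Node process would see
echo "FFMPEG_PATH=${FFMPEG_PATH:-<unset>}"
command -v ffmpeg && ffmpeg -version | head -n 1
# Roughly what the library should spawn for inputFormat('webm') / outputFormat('wav')
ffmpeg -f webm -i test.webm -f wav test.wav

It is also worth making sure the Node process stays alive until the conversion finishes, since verifyAudio returns immediately and the spawned ffmpeg runs asynchronously.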
I've tried many combinations of defining the inputs and outputs, including the various formats. I've also tried doing the conversion on the command line using ffmpeg, and it works fine. I'm expecting to see the output file: a .wav file with the filename wavFile. I see nothing. -
How do I get videoshow (or any other js package) to merge image and sound files to the length I specify rather than a constant length of 5 seconds?
5 December, by Bragon
I'm trying to take an image file and a sound file and merge them together into an mp4 file. To this end, I use videoshow.js, which is basically a wrapper for fluent-ffmpeg.js. For some reason, videoshow always sets the duration of the output file to 5 seconds, regardless of what I set the loop parameter to. And to top it all off, it fades out the sound towards the end of the clip.
I'm happy with any solution to this, even one that doesn't use videoshow or fluent-ffmpeg.
const url = require('url');
const { smartLog } = require('../services/smart-log');
const { getFile, getDuration } = require('../services/file-service');
const videoshow = require('videoshow');
const path = require('path');
const FFmpeg = require('fluent-ffmpeg');
const fs = require('fs');

const imgToMP4 = (caption, sound, image, duration, output) => {
    smartLog('info', `Converting ${image}`);
    const images = [image];
    const videoOptions = {
        fps: 10,
        loop: duration,
        transition: false,
        videoBitrate: 1024,
        videoCodec: 'libx264',
        size: '640x?',
        audioBitrate: '128k',
        audioChannels: 2,
        format: 'mp4',
        pixelFormat: 'yuv420p',
    };
    videoshow([
        {
            path: image,
        },
    ])
        .audio(sound)
        .save(output)
        .on('start', function (command) {
            smartLog('info', `ffmpeg process started: ${image}`);
        })
        .on('error', function (err) {
            smartLog('error', err);
        })
        .on('end', function (output) {
            smartLog('info', `Video created: ${output}`);
        });
};
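Two things stand out. First, videoOptions (including loop: duration) is built but never handed to videoshow, whose usual call shape is videoshow(images, videoOptions), and videoshow's documented default per-image duration is 5 seconds, which would explain the fixed length. Second, since any non-videoshow solution is acceptable, the whole job can be done with one ffmpeg invocation; a sketch with placeholder file names (image.png, sound.mp3, output.mp4):

# Loop a still image for the length of the audio; -shortest stops the video
# when the audio ends, and -tune stillimage helps x264 with static input
ffmpeg -loop 1 -i image.png -i sound.mp3 \
    -c:v libx264 -tune stillimage -pix_fmt yuv420p \
    -c:a aac -b:a 128k -shortest output.mp4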
-
Ffmpeg merge audio, video and watermark not working properly
5 December, by ltvie
ffmpeg -y -i inputVideo.mp4 -itsoffset 50 -i inputAudio.mp3 -i watermark.png -filter_complex \
    "[0][1]amix=inputs=2:weights='1 0.5'[a];[0][2]overlay=(W/2-w/2):(H-h-21)[ovr0]" \
    -map [ovr0]:v -map [a] -preset faster -shortest ouputVideo.mp4
I used the above command to merge audio and video and add a watermark. It runs and everything merges successfully, but the quality of the output video is very poor and the '-shortest' flag didn't work. How can I resolve this? I want the output to have the same quality as the original video and '-shortest' to work properly.
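On the quality side, the filtered video is re-encoded with libx264's defaults (CRF 23), so an explicit quality target such as -crf 18 usually gets much closer to the source; on the length side, -shortest is often unreliable when the mapped streams come out of a filter graph, and bounding the audio inside the graph with amix's duration=first avoids relying on it. A hedged rework along those lines, assuming inputVideo.mp4 carries its own audio track as the original amix implies:

ffmpeg -y -i inputVideo.mp4 -itsoffset 50 -i inputAudio.mp3 -i watermark.png -filter_complex \
    "[0:a][1:a]amix=inputs=2:weights='1 0.5':duration=first[a]; \
     [0:v][2:v]overlay=(W/2-w/2):(H-h-21)[ovr0]" \
    -map "[ovr0]" -map "[a]" -c:v libx264 -crf 18 -preset faster -c:a aac outputVideo.mp4

Here duration=first trims the mixed track to the length of the video's own audio, and the overlay already ends with the main video since the watermark is a single still frame, so -shortest is no longer needed.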