Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
flutter_ffmpeg is discontinued. Is there still a way to use it in a Flutter project?
13 April, by Saad Mushtaq
I have been using the flutter_ffmpeg package in a Flutter project, but now, when I run the project, I get a build failure exception. Is there still a way to use it in a Flutter project?
FAILURE: Build failed with an exception.
- What went wrong: Execution failed for task ':app:checkDebugAarMetadata'.
Could not resolve all files for configuration ':app:debugRuntimeClasspath'.
Could not find com.arthenica:mobile-ffmpeg-https:4.4. Searched in the following locations:
- https://dl.google.com/dl/android/maven2/com/arthenica/mobile-ffmpeg-https/4.4/mobile-ffmpeg-https-4.4.pom
- https://repo.maven.apache.org/maven2/com/arthenica/mobile-ffmpeg-https/4.4/mobile-ffmpeg-https-4.4.pom
- https://storage.googleapis.com/download.flutter.io/com/arthenica/mobile-ffmpeg-https/4.4/mobile-ffmpeg-https-4.4.pom
- https://jcenter.bintray.com/com/arthenica/mobile-ffmpeg-https/4.4/mobile-ffmpeg-https-4.4.pom
Required by: project :app > project :flutter_ffmpeg
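The com.arthenica:mobile-ffmpeg-https artifact that flutter_ffmpeg depends on was distributed through JCenter, which has been shut down, so Gradle can no longer resolve it from any of the repositories listed above, and the plugin itself is retired. The usual way forward is to migrate to its successor, ffmpeg_kit_flutter. A minimal migration sketch, assuming the ffmpeg_kit_flutter 6.x API as published on pub.dev (package name, version and method names should be checked against the version you actually install):
# pubspec.yaml
dependencies:
  ffmpeg_kit_flutter: ^6.0.3
// Dart: roughly equivalent to flutter_ffmpeg's execute()
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';

Future<bool> transcode(String input, String output) async {
  // Run a full ffmpeg command line inside the app and report success/failure
  final session = await FFmpegKit.execute('-i "$input" -c:v libx264 "$output"');
  final rc = await session.getReturnCode();
  return ReturnCode.isSuccess(rc);
}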
-
FFmpeg pan filter error when routing stereo audio to rear channels of 5.1 output
13 April, by MilkyTech
I'm trying to mix a stereo commentary track into the rear surround channels of a 5.1 audio stream using FFmpeg on Windows 10. My goal is to lower the volume of the original 5.1 movie audio, then add the stereo commentary so it plays from the rear left and right speakers (SL and SR).
I've already converted the commentary to EAC3 to match the main track's codec:
ffmpeg -i "CastCommentary.m4a" -c:a eac3 -b:a 640k CastCommentary.eac3
Then I tried mixing them like this (from within Command Prompt, not PowerShell or a batch file):
ffmpeg -i "Tropic.Thunder.2008.UNRATED.mkv" -i "CastCommentary.eac3" -filter_complex "[0:a:0]volume=0.4[aud1]; [1:a:0]pan=5.1:FL=0:FR=0:FC=0:LFE=0:SL=c0:SR=c1[cm_rear]; [aud1][cm_rear]amix=inputs=2[aout]" -map 0:v -map "[aout]" -map 0:s? -t 600 -c:v copy -c:s copy -c:a eac3 -b:a 640k "Tropic.Thunder.5.1.commentary.test.mkv"
But I keep getting errors like:
[fc#0 @ ...] Error applying option 'SL' to filter 'pan': Option not found Error : Option not found
Or:
[Parsed_pan_1 @ ...] Expected in channel name, got ""
Or even:
Output channel layout 5.1 does not match the number of channels mapped 2.
I’ve tried variations of the pan syntax:
- pan=5.1:FL=0:FR=0:FC=0:LFE=0:SL=c0:SR=c1
- pan=5.1|FL=0|FR=0|FC=0|LFE=0|SL=c0|SR=c1
- Wrapping in single/double quotes
- Escaping for CMD (no caret issues in current runs)
Nothing seems to work.
🎯 Goal:
- Keep 5.1 audio from the original movie (volume lowered)
- Add stereo commentary to SL and SR
- Output a proper 5.1 EAC3 mix
🔧 System:
- Windows 10
- FFmpeg version: [latest static build from ffmpeg.org]
- Running in true Command Prompt (not PowerShell)
- Source audio: 5.1 EAC3 from a .mkv, stereo .eac3 from .m4a
What’s the correct filter_complex syntax to route a stereo track to the rear channels of a 5.1 layout using FFmpeg on Windows? Am I missing something about pan, amix, or Windows quirks?
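Not an authoritative answer, but a sketch of syntax that should at least parse: inside -filter_complex, pan's channel definitions must be separated with '|', not ':'. With ':' the option parser treats each FL=…/SL=… pair as an option of the pan filter itself, which is what the "Error applying option 'SL' to filter 'pan': Option not found" message is complaining about. Writing the silent front channels as explicit 0*c0 gains (rather than a bare 0) also avoids the "Expected in channel name" parse error. Using the asker's own file names, and amix's normalize option (present in recent FFmpeg builds; drop it on older ones):
ffmpeg -i "Tropic.Thunder.2008.UNRATED.mkv" -i "CastCommentary.eac3" -filter_complex "[0:a:0]volume=0.4[aud1];[1:a:0]pan=5.1|FL=0*c0|FR=0*c0|FC=0*c0|LFE=0*c0|SL=c0|SR=c1[cm_rear];[aud1][cm_rear]amix=inputs=2:normalize=0[aout]" -map 0:v -map "[aout]" -map 0:s? -t 600 -c:v copy -c:s copy -c:a eac3 -b:a 640k "Tropic.Thunder.5.1.commentary.test.mkv"
Because [cm_rear] is itself a 5.1 stream, amix mixes two 5.1 inputs and keeps the 5.1 layout, which should avoid the "does not match the number of channels mapped 2" error; the output layout can be checked afterwards with ffprobe -select_streams a:0 -show_entries stream=channel_layout on the result.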
-
Convert webm to mp4 using FFMPEG via RecordRTC
12 April, by niightiik
I have an Angular application where I am using RecordRTC to record audio + video. The video is getting stored in webm format, but I want to convert it to mp4, and for that I am using the FFmpeg library. I am running into an issue; I have installed @ffmpeg/ffmpeg [^0.12.7] and @ffmpeg/core [0.12.4].
Now I am trying to call it in my .ts file
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';
But I am getting these errors:
TS2305: Module '"../../../../../node_modules/@ffmpeg/ffmpeg/dist/esm"' has no exported member 'createFFmpeg'.
TS2305: Module '"../../../../../node_modules/@ffmpeg/ffmpeg/dist/esm"' has no exported member 'fetchFile'.
Any help will be appreciated.
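In @ffmpeg/ffmpeg 0.12.x the old 0.11 API was removed: createFFmpeg and fetchFile no longer exist in that package, which is exactly what TS2305 is reporting. The package now exports an FFmpeg class, and fetchFile moved to the separate @ffmpeg/util package. A minimal sketch of a webm-to-mp4 conversion against the 0.12 API (load() options and bundler setup vary by project, so treat this as a starting point rather than a drop-in fix; @ffmpeg/util has to be installed alongside @ffmpeg/ffmpeg):
import { FFmpeg } from '@ffmpeg/ffmpeg';
import { fetchFile } from '@ffmpeg/util';

const ffmpeg = new FFmpeg();

async function webmToMp4(recordedBlob: Blob): Promise<Blob> {
  // Load the wasm core once; coreURL/wasmURL options may be needed depending on the bundler
  if (!ffmpeg.loaded) {
    await ffmpeg.load();
  }
  // Write the RecordRTC output into ffmpeg's virtual file system
  await ffmpeg.writeFile('input.webm', await fetchFile(recordedBlob));
  // Transcode to H.264/AAC so the mp4 plays in most browsers
  await ffmpeg.exec(['-i', 'input.webm', '-c:v', 'libx264', '-c:a', 'aac', 'output.mp4']);
  // Read the result back and wrap it in a Blob
  const data = await ffmpeg.readFile('output.mp4');
  return new Blob([data], { type: 'video/mp4' });
}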
-
FFMPEG vsync drop and regeneration [closed]
11 April, by Lhh92
According to the ffmpeg documentation:
-vsync parameter
Video sync method. For compatibility reasons old values can be specified as numbers. Newly added values will have to be specified as strings always.
drop
As passthrough but destroys all timestamps, making the muxer generate fresh timestamps based on frame-rate.
It appears that the mpegts mux does not regenerate the timestamps correctly (PTS/DTS); however, piping the output after vsync drop to a second process as raw h264 does force mpegts to regenerate the PTS.
Generate test stream
ffmpeg -f lavfi -i testsrc=duration=20:size=1280x720:rate=50 -pix_fmt yuv420p -c:v libx264 -b:v 4000000 -x264-params ref=1:bframes=0:vbv-maxrate=4500:vbv-bufsize=4000:nal-hrd=cbr:aud=1:bframes=0:intra-refresh=1:keyint=30:min-keyint=30:scenecut=0 -f mpegts -muxrate 5985920 -pcr_period 20 video.ts -y
Generate output ts that has correctly spaced PTS values
ffmpeg -i video.ts -vsync drop -c:v copy -bsf:v h264_mp4toannexb -f h264 - | ffmpeg -fflags +igndts -fflags +nofillin -fflags +genpts -r 50 -i - -c:v copy -f mpegts -muxrate 5985920 video_all_pts_ok.ts -y
Generate output ts where all PTS are zero
ffmpeg -i video.ts -vsync drop -c:v copy -bsf:v h264_mp4toannexb -f mpegts - | ffmpeg -fflags +igndts -fflags +nofillin -fflags +genpts -r 50 -i - -c:v copy -f mpegts -muxrate 5985920 video_all_pts_zero.ts -y
It appears that vsync drop does destroy the timestamps, but the mpegts muxer doesn't regenerate them. Any ideas on what needs adding to get this to work as a single ffmpeg command?
Tested on both Linux and Windows with the same result
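One way to see whether the second process really regenerated the timestamps is a quick packet-level check with ffprobe (PTS/DTS of the video stream printed as CSV), run against both outputs and compared:
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time,dts_time -of csv=p=0 video_all_pts_zero.ts
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time,dts_time -of csv=p=0 video_all_pts_ok.ts
If the first listing is all zeros while the second steps by 0.02 s (1/50), that confirms the behaviour described above: -vsync drop wipes the timestamps and the mpegts muxer in the same process does not synthesize new ones, whereas the raw h264 pipe forces the second ffmpeg to generate them from -r 50.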
-
Speed Compare audio file library [closed]
11 April, by DrGMark7
I am trying to benchmark some popular Python audio libraries for audio work in Python. I intend to test how fast the read/write operations of these libraries are. I tested with wav files of lengths 1, 2, 5, 10, 30, 60, 300, 600 and 1800 seconds with the following libraries: PyTorch Audio, librosa, soundfile, and pydub. The speed is shown in the graph below. Can anyone explain why the reads are so fast, and why pydub is the fastest? I am also wondering how big software companies like Meta or YouTube handle such small audio files in large quantities.
I'm trying to understand this, setting aside the fact that Python itself is a slow language.
P.S. I understand that PyTorch is slow because of the overhead of converting to a Tensor.
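For what it's worth, a minimal timing harness along these lines makes the comparison reproducible (the file name and repeat count are arbitrary placeholders; the calls used are the standard read entry points of each library):
import time
import soundfile as sf
import librosa
import torchaudio
from pydub import AudioSegment

def bench(label, fn, path, repeats=10):
    fn(path)  # warm-up so OS file caching affects every library equally
    start = time.perf_counter()
    for _ in range(repeats):
        fn(path)
    elapsed_ms = (time.perf_counter() - start) / repeats * 1000
    print(f"{label}: {elapsed_ms:.2f} ms per read")

path = "test_60s.wav"  # hypothetical test file
bench("soundfile", lambda p: sf.read(p), path)
bench("librosa", lambda p: librosa.load(p, sr=None), path)
bench("pydub", lambda p: AudioSegment.from_file(p), path)
bench("torchaudio", lambda p: torchaudio.load(p), path)
A plausible explanation for pydub's result, to be verified: AudioSegment.from_file keeps the PCM data as raw bytes without converting it to a float array, while soundfile, librosa and torchaudio all allocate and fill a float or tensor buffer, so pydub's "read" simply does the least work per file.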