Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Streaming different MP3 files using Ezstream and Icecast
3 May, by hh083
I am trying to stream two MP3 files to Icecast using Ezstream, and the stream should play in web browsers. The files I am testing with were downloaded as WebM and converted to MP3 using ffmpeg. They have the same channel count, bitrate, and sample rate, but different durations.
My setup: the Ezstream XML configuration file is set to stream MP3, a program playlist is used to identify the next file to be streamed, and no encoders or decoders are used. When I start streaming I save the process ID of the Ezstream process (using the -p argument), and then I run the command
kill -10 $(cat currentpid)
with currentpid being the file containing the process ID, so that Ezstream executes the playlist program to get the next file name and skips the current file to play the next one. Basically I am just switching between 1.mp3 and 2.mp3.
The problem is that in the Chrome web browser, when I switch between the two files, the player (the default HTML5 player) suddenly stops (sometimes I can switch multiple times before it happens, sometimes it happens quickly), and accessing player.error in JavaScript shows the error PIPELINE_ERROR_DECODE. Firefox handles the change and continues the stream normally, but I am convinced that Firefox is the exception here, that this is not a Chrome bug (in my case), and that something in my setup needs to be fixed to support other browsers.
Doing the same with the mpv player, I get the following errors, but the audio keeps streaming normally (sometimes it takes multiple switches before it happens, just like in Chrome):
[ffmpeg/audio] mp3float: big_values too big
[ffmpeg/audio] mp3float: Error while decoding MPEG audio frame.
Error decoding audio.
I tried using the MP3 encoder and decoder from the Ezstream example files (lame and madplay), but the problem persisted. I am not sure whether the problem is something basic that I cannot see or something more complicated. I also would not mind using a format other than MP3 to fix the issue, as long as that format is supported by Ezstream and Icecast.
Thanks.
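A mid-stream decode error like this often points to the two files not being byte-compatible at the point of the switch (different encoder settings, ID3 tags, or a VBR header confusing the client decoder). A minimal sketch, assuming the files should share 44100 Hz / 128 kb/s / stereo (those numbers are assumptions, and normalize_cmd is a hypothetical helper name), that builds an ffmpeg command to re-encode every playlist entry to identical CBR MP3 with tags stripped:

```python
# Sketch: normalize every playlist entry to identical MP3 parameters so the
# browser's decoder never sees a format change at a track switch.
def normalize_cmd(src, dst, rate=44100, bitrate="128k", channels=2):
    """Build an ffmpeg argv that re-encodes src to plain CBR MP3."""
    return [
        "ffmpeg", "-y", "-i", src,
        "-ar", str(rate),        # fixed sample rate
        "-ac", str(channels),    # fixed channel count
        "-b:a", bitrate,         # constant bitrate
        "-map_metadata", "-1",   # drop ID3/metadata
        dst,
    ]
```

It could then be run per file with, e.g., subprocess.run(normalize_cmd("1.src.mp3", "1.mp3"), check=True) before handing the files to Ezstream.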
-
ffmpeg not honoring sample rate in opus output
3 May, by Adam
I am capturing a live audio stream to Opus, and no matter what I choose for the audio sample rate, I get 48 kHz output.
This is my command line
./ffmpeg -f alsa -ar 16000 -i sysdefault:CARD=CODEC -f alsa -ar 16000 -i sysdefault:CARD=CODEC_1 -filter_complex join=inputs=2:channel_layout=stereo:map=0.1-FR\|1.0-FL,asetpts=expr=N/SR/TB -ar 16000 -ab 64k -c:a opus -vbr off -compression_level 5 output.ogg
And this is what ffmpeg responds with:
Output #0, ogg, to 'output.ogg':
  Metadata:
    encoder         : Lavf57.48.100
  Stream #0:0: Audio: opus (libopus), 16000 Hz, stereo, s16, delay 104, padding 0, 64 kb/s (default)
    Metadata:
      encoder         : Lavc57.54.100 libopus
However, it appears that ffmpeg has lied, because when analysing the file again, I get:
Input #0, ogg, from 'output.ogg':
  Duration: 00:00:03.21, start: 0.000000, bitrate: 89 kb/s
  Stream #0:0: Audio: opus, 48000 Hz, stereo, s16, delay 156, padding 0
    Metadata:
      ENCODER         : Lavc57.54.100 libopus
I have tried many permutations of sample rate, simplifying down to a single audio input, etc., always with the same result.
Any ideas?
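Worth noting: Opus internally always operates at 48 kHz, and decoders output 48 kHz regardless of the rate the encoder was fed, so ffprobe reporting 48000 Hz on the output file is expected rather than a bug; the 16000 Hz only constrains the encoder's input side. libopus also accepts only 8, 12, 16, 24, or 48 kHz input. As a small sketch (nearest_opus_rate is a hypothetical helper, not an FFmpeg API), snapping an arbitrary requested rate to one libopus will accept:

```python
# Rates libopus accepts on input; decode output is always 48 kHz regardless.
OPUS_RATES = (8000, 12000, 16000, 24000, 48000)

def nearest_opus_rate(requested):
    """Return the libopus-supported input rate closest to the requested one."""
    return min(OPUS_RATES, key=lambda r: abs(r - requested))
```

So resampling before the encoder (e.g. with -ar 16000, as in the command above) is fine, but the container will still, correctly, say 48000 Hz.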
-
opencv read error :[h264 @ 0x8f915e0] error while decoding MB 53 20, bytestream -7
2 May, by Alex Luya
My configuration:
ubuntu 16.04
opencv 3.3.1
gcc version 5.4.0 20160609
ffmpeg version 3.4.2-1~16.04.york0
and I built opencv with:
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D PYTHON_EXECUTABLE=$(which python) -D OPENCV_EXTRA_MODULES_PATH=/home/xxx/opencv_contrib/modules -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_IPP=ON -D WITH_OPENNI2=ON -D WITH_V4L=ON -D WITH_FFMPEG=ON -D WITH_GSTREAMER=OFF -D WITH_OPENMP=ON -D WITH_VTK=ON -D BUILD_opencv_java=OFF -D BUILD_opencv_python3=OFF -D WITH_CUDA=ON -D ENABLE_FAST_MATH=1 -D WITH_NVCUVID=ON -D CUDA_FAST_MATH=ON -D BUILD_opencv_cnn_3dobj=OFF -D FORCE_VTK=ON -D WITH_CUBLAS=ON -D CUDA_NVCC_FLAGS="-D_FORCE_INLINES" -D WITH_GDAL=ON -D WITH_XINE=ON -D BUILD_EXAMPLES=OFF -D BUILD_DOCS=ON -D BUILD_PERF_TESTS=OFF -D BUILD_TESTS=OFF -D BUILD_opencv_dnn=OFF -D BUILD_PROTOBUF=OFF -D opencv_dnn_BUILD_TORCH_IMPORTER=OFF -D opencv_dnn_PERF_CAFFE=OFF -D opencv_dnn_PERF_CLCAFFE=OFF -DBUILD_opencv_dnn_modern=OFF -D CUDA_ARCH_BIN=6.1 ..
and I use this Python code to read and show frames:
import cv2
from com.xxx.cv.core.Image import Image

capture = cv2.VideoCapture("rtsp://192.168.10.184:554/mpeg4?username=xxx&password=yyy")
while True:
    grabbed, content = capture.read()
    if grabbed:
        Image(content).show()
        doSomething()
    else:
        print "nothing grabbed.."
Every time, after reading about 50 frames, it gives an error like:
[h264 @ 0x8f915e0] error while decoding MB 53 20, bytestream -7
and then nothing more can be grabbed. The strange thing is that in either of these cases:
1. commenting out doSomething(), or
2. keeping doSomething() but recording the stream from the same IP camera, then running the code against the recorded video,
the code works fine. Can anyone tell me how to solve this problem? Thanks in advance!
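That both workarounds remove the real-time pressure suggests the RTSP buffer is overflowing while doSomething() runs, so the decoder starts seeing truncated packets. A common pattern is to drain the capture in a separate thread and keep only the newest frame, so slow processing never blocks the reader. A sketch under that assumption (LatestFrameReader is a hypothetical helper; the capture object only needs a read() -> (ok, frame) method, which cv2.VideoCapture provides):

```python
import threading

class LatestFrameReader:
    """Drain a capture as fast as frames arrive; expose only the newest frame."""
    def __init__(self, capture):
        self._capture = capture
        self._lock = threading.Lock()
        self._frame = None
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while self._running:
            ok, frame = self._capture.read()
            if not ok:
                break
            with self._lock:
                self._frame = frame   # overwrite: keep only the newest frame

    def read(self):
        with self._lock:
            return self._frame

    def stop(self):
        self._running = False
        self._thread.join()
```

Usage would be reader = LatestFrameReader(cv2.VideoCapture(url)), then content = reader.read() inside the loop, with doSomething() free to take as long as it likes.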
-
ffmpeg.wasm in Angular 19
2 May, by Yashar Tabrizi
I am developing an Angular app that records videos. Since the videos that come out usually have variable and "wrong" framerates, I want to re-encode them using FFmpeg, particularly ffmpeg.wasm.
I have installed the packages @ffmpeg/ffmpeg, @ffmpeg/core and @ffmpeg/util, and I have written the following worker ffmpeg.worker.ts to do the initialization and to execute the FFmpeg processing:
import { FFmpeg } from '@ffmpeg/ffmpeg';
import { toBlobURL } from '@ffmpeg/util';

const baseURL = 'https://unpkg.com/@ffmpeg/core@0.12.10/dist/esm';
const ffmpeg = new FFmpeg();
let isLoaded = false;

(async () => {
  await ffmpeg.load({
    coreURL: await toBlobURL(`${baseURL}/ffmpeg-core.js`, "text/javascript"),
    wasmURL: await toBlobURL(`${baseURL}/ffmpeg-core.wasm`, "application/wasm"),
  });
  isLoaded = true;
  self.postMessage({ type: 'ready' });
})();

self.onmessage = async (e: MessageEvent) => {
  if (!isLoaded) {
    self.postMessage({ type: 'error', error: 'FFmpeg not loaded yet!' });
    return;
  }
  if (e.data.byteLength === 0) return;
  try {
    await ffmpeg.writeFile('input', new Uint8Array(e.data));
    await ffmpeg.exec([
      '-i', 'input',
      '-r', '30',
      '-c:v', 'libx264',
      '-preset', 'ultrafast',
      '-pix_fmt', 'yuv420p',
      '-movflags', 'faststart',
      'out.mp4',
    ]);
    const data = await ffmpeg.readFile('out.mp4');
    if (data instanceof Uint8Array) {
      self.postMessage({ type: 'done', file: data.buffer }, [data.buffer]);
    } else {
      self.postMessage({ type: 'error', error: 'Unexpected output from ffmpeg.readFile.' });
    }
  } catch (err) {
    self.postMessage({ type: 'error', error: (err as Error).message });
  } finally {
    await ffmpeg.deleteFile('input');
    await ffmpeg.deleteFile('out.mp4');
  }
};

I have a service called cameraService where I do the recording and where I want to do the re-encoding after the recording has stopped, so I have this method that initializes the FFmpeg worker:

private encoder: Worker | null = null;

private initEncoder() {
  if (this.encoder) return;
  this.encoder = new Worker(
    new URL('../workers/ffmpeg.worker', import.meta.url), // Location of my worker
    { type: 'module' }
  );
  this.encoder.onmessage = (e: MessageEvent) => {
    switch (e.data.type) {
      case 'ready':
        console.log('FFmpeg worker ready.');
        break;
      case 'done':
        this.reEncodedVideo = new Blob([e.data.file], { type: 'video/mp4' });
        this.videoUrlSubject.next(URL.createObjectURL(this.reEncodedVideo));
        console.log('FFmpeg encoding completed.');
        break;
      case 'error':
        console.error('FFmpeg encoding error:', e.data.error);
        break;
    }
  };
}
However, loading FFmpeg will not work, no matter what I do. Hosting the ffmpeg-core.js and ffmpeg-core.wasm files myself doesn't help either. I keep getting this message whenever ffmpeg.load() is called:
The file does not exist at ".../.angular/cache/19.2.0/mover/vite/deps/worker.js?worker_file&type=module" which is in the optimize deps directory. The dependency might be incompatible with the dep optimizer. Try adding it to 'optimizeDeps.exclude'.
I know this has something to do with Web Workers and their integration with Vite, but has anybody been able to implement ffmpeg.wasm in Angular 19, or is there even any way to achieve this? If not FFmpeg, are there alternatives for re-encoding after recording a video in Angular 19?
-
FFMPEG repeated non-monotonic DTS error despite re-encoding and multiple fixes [closed]
1 May, by World of Depth
I have four MP4 files I'm trying to concatenate. After following the advice in many posts, and many, many tries, I've gotten to the point where the concatenated file plays back with video and audio, but I still get the following error when processing the 4th file, and I suspect that if I add a 5th it won't work again.
[aost#0:1/copy @ 0x135714fd0] Non-monotonic DTS; previous: 579108, current: 577078; changing to 579109. This may result in incorrect timestamps in the output file.
[aost#0:1/copy @ 0x135714fd0] Non-monotonic DTS; previous: 579109, current: 578102; changing to 579110. This may result in incorrect timestamps in the output file.
These are the commands I'm using to generate/prepare the 4 input files:
ffmpeg -fflags +igndts -i original1.mp4 -i original1sound.wav -vf scale=1080:1544,setsar=1,unsharp=5:5:0.5 -r 30 -b:v 25M -pix_fmt yuv420p -c:v libx264 -c:a aac_at -ac 2 -video_track_timescale 15360 -max_muxing_queue_size 9999 -y input1.mp4
ffmpeg -fflags +igndts -i original2.mp4 -i original2sound.wav -vf hflip,scale=1080:1544,setsar=1,unsharp=5:5:0.5 -r 30 -b:v 25M -pix_fmt yuv420p -c:v libx264 -c:a aac_at -ac 2 -video_track_timescale 15360 -max_muxing_queue_size 9999 -y input2.mp4
ffmpeg -fflags +igndts -ss 0.5 -to 3.5 -i original3.mp4 -vf hflip,pad=1080:1544:0:16 -b:v 25M -pix_fmt yuv420p -c:v libx264 -c:a aac_at -max_muxing_queue_size 9999 -y input3.mp4
ffmpeg -fflags +igndts -ss 0.5 -to 3.5 -i original4.mp4 -vf pad=1080:1544:0:16 -b:v 25M -pix_fmt yuv420p -c:v libx264 -c:a aac_at -b:a 128k -max_muxing_queue_size 9999 -y input4.mp4
ffprobe returns this information about the 4 prepared input files, in order from 1 to 4:
Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1080x1544 [SAR 1:1 DAR 135:193], 24643 kb/s, 30 fps, 30 tbr, 15360 tbn (default)
Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)

Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1080x1544 [SAR 1:1 DAR 135:193], 25187 kb/s, 30 fps, 30 tbr, 15360 tbn (default)
Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)

Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1080x1544, 21640 kb/s, 30 fps, 30 tbr, 15360 tbn (default)
Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)

Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1080x1544, 21802 kb/s, 30 fps, 30 tbr, 15360 tbn (default)
Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 129 kb/s (default)
I notice some of the final video bitrates differ very slightly, despite specifying 25M; also, input4 has 129 kb/s audio despite specifying 128k. Any idea why I'm still getting the DTS error?
Related bonus question: in the course of troubleshooting, this looks like my final version of a command for preparing original files to be concatenated (and preventing DTS errors). Note this assumes the highest tbn value of the original files is 15360. Can anything here be omitted, or should anything be added?
ffmpeg -fflags +igndts -i original.mp4 -r 30 -b:v 25M -pix_fmt yuv420p -c:v libx264 -c:a aac_at -ac 2 -video_track_timescale 15360 -max_muxing_queue_size 9999 -y prepared.mp4
Thank you for any help/advice!
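One detail worth checking: the warnings come from the audio stream in copy mode (aost#0:1/copy), and AAC inputs carry per-file priming/padding samples that stream copy cannot reconcile, which can produce exactly this non-monotonic DTS at each join. A sketch of the concat-demuxer approach that copies video but re-encodes audio so the muxer regenerates clean audio timestamps (file names are the ones from the question; concat_list and concat_cmd are hypothetical helpers):

```python
# Build the list file the concat demuxer reads, plus an ffmpeg argv that
# copies the already-matching video streams and re-encodes only the audio.
def concat_list(files):
    """Render the concat-demuxer list file: one `file '...'` line per input."""
    return "".join(f"file '{name}'\n" for name in files)

def concat_cmd(list_file, out):
    return [
        "ffmpeg", "-f", "concat", "-safe", "0", "-i", list_file,
        "-c:v", "copy",                  # video params already match across inputs
        "-c:a", "aac", "-b:a", "128k",   # re-encode audio -> fresh, monotonic DTS
        "-y", out,
    ]
```

The list text would be written to, say, mylist.txt, and then concat_cmd("mylist.txt", "final.mp4") run via subprocess; re-encoding just the audio is cheap next to the video and sidesteps the copy-mode DTS patching.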