Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
How to implement spring animation (mass, tension, friction) in the FFmpeg zoompan filter instead of linear interpolation?
29 May, by Mykyta Manuilenko
I'm trying to create a zoom-in and zoom-out animation using FFmpeg's zoompan filter, but I want to replace the linear interpolation with a spring animation that uses physics parameters (mass, tension, friction).
My input parameters:
"zoompan": { "focusRect": { "x": 1086.36, "y": 641.87, "width": 613, "height": 345 }, "easing": { "mass": 1, "tension": 120, "friction": 20 } }
Current working linear animation:
ffmpeg -framerate 25 -loop 1 -i input.png \
  -filter_complex "\
    [0:v]scale=6010:3380,setsar=1,split=3[zoomin_input][hold_input][zoomout_input]; \
    [zoomin_input]zoompan= \
      z='iw/(iw/zoom + (ow - iw)/duration)': \
      x='x + (3400 - 0)/duration': \
      y='y + (2009 - 0)/duration': \
      d=25:fps=25:s=1920x1080, \
    trim=duration=1,setpts=PTS-STARTPTS[zoomin]; \
    [hold_input]crop=1920:1080:3400:2009,trim=duration=4,setpts=PTS-STARTPTS[hold]; \
    [zoomout_input]zoompan= \
      zoom='if(eq(on,0),iw/ow,iw/(iw/zoom + (iw-ow)/duration))': \
      x='if(eq(on,0),3400,x + (0-3400)/duration)': \
      y='if(eq(on,0),2009,y + (0-2009)/duration)': \
      d=25:fps=25:s=1920x1080, \
    trim=duration=1,setpts=PTS-STARTPTS[zoomout]; \
    [zoomin][hold][zoomout]concat=n=3:v=1:a=0[outv]" \
  -map "[outv]" \
  -crf 23 \
  -preset medium \
  -c:v libx264 \
  -pix_fmt yuv420p \
  output.mp4
Notes:
It creates a perfectly straight zoom path to a specific point on the screen (similar to pinch-zooming on a smartphone: zooming straight toward the center of the focus rectangle)
To improve the quality of the output, I upscale the input beforehand
What I want to achieve:
Instead of linear interpolation, I want to implement a spring function with these physics parameters:
- mass: 1
- tension: 120
- friction: 20
Note that these params can be changed.
Also, I want to preserve a perfectly straight zoom path to the specific point on the screen (similar to pinch-zooming on a smartphone).
Question:
How can I properly implement a spring animation function in FFmpeg's zoompan filter?
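For what it's worth, zoompan's expression language has no built-in spring easing, but the per-frame progress of a damped spring (mass, tension, friction) can be precomputed offline and then baked into the filter. A minimal sketch of that precomputation, assuming a spring pulled from 0 toward 1; the function and variable names here are illustrative, not FFmpeg's:

```python
def spring_progress(frames: int, fps: int = 25,
                    mass: float = 1.0, tension: float = 120.0,
                    friction: float = 20.0) -> list[float]:
    """Per-frame progress (0 -> 1) of a damped spring, via semi-implicit Euler."""
    x, v = 0.0, 0.0          # position (animation progress) and velocity
    sub = 10                 # sub-steps per frame for numerical stability
    h = 1.0 / (fps * sub)
    out = []
    for _ in range(frames):
        for _ in range(sub):
            a = (-tension * (x - 1.0) - friction * v) / mass  # F = -k*x - c*v
            v += a * h
            x += v * h
        out.append(x)
    return out

# Interpolate zoom (and likewise x/y) between start and target along the curve.
# 3.13 is an assumed target zoom (roughly 1920 / focusRect width 613).
progress = spring_progress(25)
zooms = [1.0 + (3.13 - 1.0) * p for p in progress]
```

Since zoompan cannot evaluate this recurrence itself, the precomputed values would have to be baked in, e.g. as a nested if(eq(on,N),...) chain over the frame counter, or by rendering each frame with a scripted crop. The path stays perfectly straight because the same progress value drives zoom, x, and y simultaneously.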
-
Merge video with ffmpeg
29 May, by Björn
I have tried this command:
ffmpeg -i 'concat:10.mov|11.mov' -codec copy out.mov
The output file out.mov only shows what's in the first movie (10.mov). I've been googling for several hours and tried lots of things, but nothing works. I want this done without re-encoding the files; just merge them, keeping the same codec.
ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
  built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/3.2.4 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
  libavutil      55. 34.101 / 55. 34.101
  libavcodec     57. 64.101 / 57. 64.101
  libavformat    57. 56.101 / 57. 56.101
  libavdevice    57.  1.100 / 57.  1.100
  libavfilter     6. 65.100 /  6. 65.100
  libavresample   3.  1.  0 /  3.  1.  0
  libswscale      4.  2.100 /  4.  2.100
  libswresample   2.  3.100 /  2.  3.100
  libpostproc    54.  1.100 / 54.  1.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7ff678802600] Found duplicated MOOV Atom. Skipped it
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'concat:10.mov|11.mov':
  Metadata:
    creation_time   : 2017-03-17T12:15:22.000000Z
    major_brand     : qt
    minor_version   : 537134592
    compatible_brands: qt
  Duration: 00:00:29.96, start: 0.000000, bitrate: 140810 kb/s
    Stream #0:0: Video: prores (apcn / 0x6E637061), yuv422p10le, 1280x720, 116735 kb/s, SAR 1:1 DAR 16:9, 50 fps, 50 tbr, 5k tbn, 5k tbc (default)
    Metadata:
      handler_name    : Telestream Inc. Telestream Media Framework - Local 99.99.999999
      encoder         : Apple ProRes 422
Output #0, mov, to 'out.mov':
  Metadata:
    compatible_brands: qt
    major_brand     : qt
    minor_version   : 537134592
    encoder         : Lavf57.56.101
    Stream #0:0: Video: prores (apcn / 0x6E637061), yuv422p10le, 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 116735 kb/s, 50 fps, 50 tbr, 10k tbn, 5k tbc (default)
    Metadata:
      handler_name    : Telestream Inc. Telestream Media Framework - Local 99.99.999999
      encoder         : Apple ProRes 422
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
frame= 1498 fps=0.0 q=-1.0 Lsize=  426938kB time=00:00:29.94 bitrate=116815.8kbits/s speed=50.8x
video:426930kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.001997%
Any ideas? Would make my life very much easier if I got this to work :)
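One likely explanation, for reference: the concat protocol ('concat:10.mov|11.mov') joins files at the byte level, which only works for bit-stream-concatenable formats like MPEG-TS; for MOV/MP4 containers the concat demuxer is the usual copy-only route. A small sketch of that approach, assuming 10.mov and 11.mov sit in the working directory:

```python
from pathlib import Path

# The concat demuxer reads a text file listing the inputs, then the
# streams can be stream-copied (no re-encoding) into the output.
inputs = ["10.mov", "11.mov"]
Path("list.txt").write_text("".join(f"file '{name}'\n" for name in inputs))

cmd = ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "list.txt",
       "-c", "copy", "out.mov"]
print(" ".join(cmd))
```

This requires the inputs to share the same codecs and parameters (which they do here: both ProRes 422, 1280x720, 50 fps).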
-
Reuse PlainTransport for new FFmpeg stream without full reinit [closed]
29 May, by Sang Vo
I'm building a backend-driven streaming application using Mediasoup + FFmpeg + PlainTransport.
My goal is to switch between different media sources (e.g. welcome.mp4, waiting.mp4, streaming.ts) from the backend, while keeping the same PlainTransport alive to avoid the overhead of tearing down and rebuilding the pipeline.
Current setup
Backend: NestJS server with Mediasoup
Media ingest: FFmpeg sends RTP stream to PlainTransport
Producer: created after FFmpeg starts
Frontend: React client that consumes via Consumer after signaling
What I want
When the backend starts streaming a file (e.g., welcome.mp4), I want to:
- Start FFmpeg again with welcome.mp4 using the same RTP ports (same PlainTransport)
- After welcome.mp4 finishes, play waiting.mp4; when a streaming event starts, play streaming.ts, and when the event ends, go back to waiting.mp4
- Create a new Producer with the new stream
- Notify the frontend to create a new Consumer
All this without having to destroy and recreate the PlainTransport for every event.
Questions
Is this a valid and recommended workflow in Mediasoup?
If so, does Mediasoup allow reusing the same PlainTransport across multiple Producer instances (one at a time)?
Will the RTP stream re-sync correctly if FFmpeg restarts and sends new RTP packets?
Is it necessary to explicitly configure SSRC and payloadType to match, or will Mediasoup auto-detect again per new producer?
Any insights or recommendations on best practices for this dynamic switching scenario would be very helpful!
Thanks in advance 🙏
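The switching flow described above can be sketched as a tiny state machine; the state, event, and function names below are hypothetical, and in a real setup each transition would restart FFmpeg toward the same PlainTransport ports:

```python
# Hypothetical sketch of the source-switching logic described in the question.
# The state is the file currently being streamed; events come from the backend.
def next_source(current: str, event: str) -> str:
    if event == "welcome_finished" and current == "welcome.mp4":
        return "waiting.mp4"
    if event == "stream_started":
        return "streaming.ts"
    if event == "stream_ended":
        return "waiting.mp4"
    return current  # ignore events that don't apply to the current state

print(next_source("welcome.mp4", "welcome_finished"))
```

On the RTP side: if the PlainTransport was created with comedia: true, Mediasoup latches onto the source address of the first packet it receives, but the new Producer's rtpParameters still need to describe what FFmpeg actually sends, so pinning the SSRC and payload type in FFmpeg (rather than letting them vary per restart) tends to make restarts more predictable.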
-
How to capture a web stream's picture in swift without playing it
28 May, by Sergio
I am developing an application for iOS in which I need to capture one frame from the stream every minute to show in a picture view.
I am thinking of using ffmpeg. I tried the ffmpeg command in my Mac terminal and it works:
ffmpeg -probesize 4096 -analyzeduration 50000 -threads 1 -i -vf fps=fps=1 -frames 1 -threads 1 -y -s 320x240 -f mjpeg -pix_fmt yuvj444p .jpg
But I don't know how to call my compiled ffmpeg for iOS inside my app (I have already compiled it, with the '.a' libs).
I also read about using something like (not ffmpeg):
func thumbnail(sourceURL sourceURL: NSURL) -> UIImage? {
    let asset = AVAsset(URL: sourceURL)
    let imageGenerator = AVAssetImageGenerator(asset: asset)
    imageGenerator.appliesPreferredTrackTransform = true
    var time = asset.duration
    time.value = min(time.value, 2)
    do {
        let imageRef = try imageGenerator.copyCGImageAtTime(time, actualTime: nil)
        return UIImage(CGImage: imageRef)
    } catch let error as NSError {
        print("Image generation failed with error \(error)")
        return nil
    }
}
but it doesn't work. It returns this error:
Image generation failed with error Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x15734af0 {Error Domain=NSOSStatusErrorDomain Code=-12782 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-12782)}
fatal error: unexpectedly found nil while unwrapping an Optional value
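For reference, the one-frame grab the terminal command performs can be written as a small command builder; the stream URL and output path below are placeholders, not values from the question:

```python
import shlex

def capture_cmd(stream_url: str, out_path: str, size: str = "320x240") -> str:
    # Build a single-frame grab, mirroring the terminal command above.
    args = ["ffmpeg", "-probesize", "4096", "-analyzeduration", "50000",
            "-i", stream_url, "-frames:v", "1", "-s", size,
            "-f", "mjpeg", "-pix_fmt", "yuvj444p", "-y", out_path]
    return " ".join(shlex.quote(a) for a in args)

print(capture_cmd("http://example.com/stream.m3u8", "frame.jpg"))
```

Running one short grab per minute (rather than keeping a player open) matches the stated goal of never playing the stream in the app.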