Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Unable to import audio in Python using ffmpeg [closed]
14 August, by Calebe Piacentini
I'm running my code on a VM, so I can't add ffmpeg to my PATH. I want to do:
audio = AudioSegment.from_mp3(path_mp3)
audio.export(path_wav, format="wav")
I keep getting:
FileNotFoundError: [WinError 2] The system cannot find the file specified
I tried GPT (and Claude, and DeepSeek, etc.) and looked at other questions posted here, without success. I know I must have ffmpeg. I have downloaded it and placed the .exe in my script folder (as suggested elsewhere), and I also tried setting the paths directly on AudioSegment, as in
AudioSegment.converter = ffmpeg_path
AudioSegment.ffmpeg = ffmpeg_path
AudioSegment.ffprobe = ffprobe_path
ffmpeg is working. I tried using it directly on the command prompt and it went just fine. I really do not know what is going on here. And, yes, I'm sure that the audio files do exist. This is not the problem. I'll be happy to provide any additional information and/or context.
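For reference, a minimal end-to-end sketch of the approach described above, assuming ffmpeg.exe and ffprobe.exe sit next to the script (the paths and the path_mp3/path_wav names are illustrative). One thing worth checking: pydub also shells out to ffprobe when reading files, so a missing ffprobe.exe can raise the same WinError 2 even when ffmpeg itself is found.

import os
from pydub import AudioSegment

# Illustrative paths: point pydub at the manually downloaded binaries
base_dir = os.path.dirname(os.path.abspath(__file__))
ffmpeg_path = os.path.join(base_dir, "ffmpeg.exe")
ffprobe_path = os.path.join(base_dir, "ffprobe.exe")

# pydub invokes these executables as subprocesses
AudioSegment.converter = ffmpeg_path
AudioSegment.ffmpeg = ffmpeg_path
AudioSegment.ffprobe = ffprobe_path

path_mp3 = "input.mp3"   # illustrative file names
path_wav = "output.wav"
audio = AudioSegment.from_mp3(path_mp3)
audio.export(path_wav, format="wav")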
-
very low latency streaming with ffmpeg using a webcam [closed]
14 August, by userDtrm
I'm trying to configure ffmpeg to do real-time video streaming from a webcam. The ffmpeg encoder command I use is as follows.
ffmpeg -f v4l2 -input_format yuyv422 -s 640x480 -i /dev/video0 -c:v libx264 -profile:v baseline -trellis 0 -subq 1 -level 32 -preset superfast -tune zerolatency -me_method epzs -crf 30 -threads 0 -bufsize 1 -refs 4 -coder 0 -b_strategy 0 -bf 0 -sc_threshold 0 -x264-params vbv-maxrate=2000:slice-max-size=1500:keyint=30:min-keyint=10: -pix_fmt yuv420p -an -f mpegts udp://192.168.1.8:5001
The ffplay command used to display the video feed is:
ffplay -analyzeduration 1 -fflags -nobuffer -i udp://192.168.1.8:5001
However, I'm experiencing 0.5 - 1.0 s of latency in the video stream. Is there a way to reduce this to under 100 ms? Also, when I replace the v4l2 camera capture with a screen capture using x11grab, the stream is almost real-time and I experience no noticeable delays. Moreover, changing the encoder from x264 to mpeg2 had no effect on the latency. In addition, the statistics from ffmpeg show that the encoder is performing at a 30 fps rate, which I believe indicates that the encoding is real-time. This leaves me with only one reason for the experienced delay.
- Is there a significant delay in buffers when using v4l2 during video capturing in a webcam?
- I don't think transmission delay is a factor in this case, as I see no latency when screen capture is used under the same conditions.
- Can this latency be further reduced? Can someone think of a different encoder configuration to use instead of the one I've been using?
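Not a definitive answer, but one player-side detail and a sketch that may help: in the ffplay command above, -fflags -nobuffer (a value with a leading minus) removes the nobuffer flag rather than setting it, while -fflags nobuffer enables it. A lower-latency ffplay invocation to experiment with (these values are starting points, not guarantees):

ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 -framedrop -sync ext -i udp://192.168.1.8:5001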
-
How to write a video stream to a server?
14 August, by The Mask
I've recently been playing with FFmpeg and its powerful abilities. I'm working on a cool project where I'm trying to create a live video stream using FFmpeg. The client (ReactJS) and server (NodeJS) are connected via WebSocket. The client sends the byte packets to the server, and the server then spawns an FFmpeg process and serves the stream to an nginx server.
Client(live-stream.js):
const stream = await navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true,
});
videoRef.current.srcObject = stream;

const ingestUrl = `ws://localhost:8081/ws`;
const socket = new WebSocket(ingestUrl);
socket.binaryType = "arraybuffer";

socket.onopen = () => {
  console.log("✅ WebSocket connection established");
  socket.send(JSON.stringify({ type: "start", stream_key: streamKey }));
  mediaRecorderRef.current.start(500);
};

socketRef.current = socket;

socket.onerror = (error) => {
  console.error("❌ WebSocket error:", error);
};

mediaRecorderRef.current = new MediaRecorder(stream, {
  mimeType: "video/webm;codecs=vp8,opus",
  videoBitsPerSecond: 1000000,
  audioBitsPerSecond: 128000
});

mediaRecorderRef.current.ondataavailable = (event) => {
  if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
    event.data.arrayBuffer().then((buffer) => socket.send(buffer));
  }
};
Server(index.js):
const http = require('http');
const WebSocket = require('ws');
const { spawn } = require('child_process');
const fs = require('fs');

const server = new WebSocket.Server({ server: wss, path: '/ws' });

const startFFmpeg = (stream_key) => {
  return ffmpeg = spawn("ffmpeg", [
    "-re",
    "-f", "matroska",
    "-i", "pipe:0",
    "-map", "0:v:0",
    "-map", "0:a:0",
    "-c:v", "libx264",
    "-c:a", "aac ",
    "-b:v", "6000k",
    "-maxrate", "6000k ",
    "-bufsize", "6000k ",
    "-pix_fmt", "yuv420p ",
    "-f", "flv",
    `rtmp://localhost/live/${stream_key}`,
  ]);
}

server.on('connection', (ws) => {
  console.log('📡 New WebSocket connection');
  let ffmpeg = null;
  let buffer = Buffer.alloc(0);
  let streamStarted = false;

  ws.on('message', (msg, isBinary) => {
    if (!isBinary) {
      const parsed = JSON.parse(msg);
      if (parsed.type === "start") {
        const { stream_key } = parsed;
        console.log(`🔑 Stream key: ${stream_key}`);
        console.log(`🎥 Starting ingest for stream key: ${stream_key}`);
        ffmpeg = startFFmpeg(stream_key)
        ffmpeg.stdin.on("error", (e) => {
          console.error("FFmpeg stdin error:", e.message);
        });
        ffmpeg.stderr.on("data", (data) => {
          console.log(`FFmpeg Data: ${data}`);
        });
        ffmpeg.on("close", (code) => {
          console.log(`FFmpeg exited with code ${code}`);
        });
        ffmpeg.on("exit", (code, signal) => {
          console.log(`FFmpeg exited with code: ${code}, signal: ${signal}`);
          if (signal === 'SIGSEGV') {
            console.log('🔄 FFmpeg segfaulted, attempting restart...');
            setTimeout(() => {
              if (ws.readyState === WebSocket.OPEN) {
                startFFmpeg(stream_key);
              }
            }, 1000);
          }
        });
        streamStarted = true;
      }
    } else if (isBinary && ffmpeg && ffmpeg.stdin.writable) {
      try {
        // Convert to Buffer if it's an ArrayBuffer
        let data;
        if (msg instanceof ArrayBuffer) {
          data = Buffer.from(msg);
        } else {
          data = Buffer.from(msg);
        }
        // Buffer the data
        buffer = Buffer.concat([buffer, data]);
        // Write in larger chunks to reduce overhead
        if (buffer.length >= 8192) { // 8KB threshold
          console.log(`📥 Writing ${buffer.length} bytes to FFmpeg`);
          if (ffmpeg.stdin.write(buffer)) {
            buffer = Buffer.alloc(0);
          } else {
            // Handle backpressure
            ffmpeg.stdin.once('drain', () => {
              buffer = Buffer.alloc(0);
              ffmpeg.stdin.setMaxListeners(20); // or a safe upper bound
            });
          }
        }
      } catch (e) {
        console.error("FFmpeg write error:", e);
      }
    }
  });

  ws.on('close', () => {
    console.log('❌ WebSocket closed');
    streamStarted = false;
    if (ffmpeg) {
      // Write any remaining buffer
      if (buffer.length > 0 && ffmpeg.stdin.writable) {
        console.log(`📥 Writing final ${buffer.length} bytes to FFmpeg`);
        ffmpeg.stdin.write(buffer);
      }
      // Gracefully close FFmpeg
      if (ffmpeg.stdin.writable) {
        ffmpeg.stdin.end();
      }
      setTimeout(() => {
        if (ffmpeg && !ffmpeg.killed) {
          ffmpeg.kill('SIGTERM');
          setTimeout(() => {
            if (ffmpeg && !ffmpeg.killed) {
              ffmpeg.kill('SIGKILL');
            }
          }, 5000);
        }
      }, 1000);
    }
  });
});

wss.listen(8081, "localhost", () => {
  console.log("🛰️ Server listening on http://localhost:8081/ws");
});
The problem statement: I've been facing errors like pixel drops in the video and bad quality. FFmpeg is crashing with this error:
FFmpeg Data: Input #0, matroska,webm, from 'pipe:0':
FFmpeg Data: Metadata: encoder : Chrome Duration: N/A, start: 0.000000, bitrate: N/A Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
FFmpeg Data: Stream #0:1(eng): Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR 4:3,
FFmpeg Data: 1k tbr, 1k tbn (default) Metadata: alpha_mode : 1
FFmpeg Data: Unknown pixel format requested: yuv420p .
FFmpeg stdin error: write EPIPE
FFmpeg exited with code: 1, signal: null
FFmpeg exited with code 1
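One detail visible in the log: the rejected value is printed as "yuv420p " with a trailing space, and several of the spawn() arguments above do contain stray trailing spaces ("aac ", "6000k ", "yuv420p "), which may be what FFmpeg is choking on. For comparison, the intended command with trimmed option values (STREAM_KEY stands in for the key sent in the start message):

ffmpeg -re -f matroska -i pipe:0 -map 0:v:0 -map 0:a:0 -c:v libx264 -c:a aac -b:v 6000k -maxrate 6000k -bufsize 6000k -pix_fmt yuv420p -f flv rtmp://localhost/live/STREAM_KEY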
-
How To Convert MP4 Video File into FLV Format Using FFMPEG [closed]
14 August, by nipul_tech
I have to convert an MP4 video file, which I received from different mobile devices, into FLV format using FFmpeg. Most of what I've found covers converting FLV video into MP4, not the other way around.
Can anybody help me convert MP4 into FLV using FFmpeg? I am using a Windows 7 64-bit machine.
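A minimal sketch of one way to do this (file names are placeholders; the FLV container commonly carries H.264 video and AAC audio):

ffmpeg -i input.mp4 -c:v libx264 -c:a aac output.flv

If the MP4 already holds H.264/AAC streams, remuxing without re-encoding may also be enough:

ffmpeg -i input.mp4 -c copy output.flv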
-
Bash script with custom functions not running under a systemd service
13 August, by nightcrawler
I have this script to get images from a webcam and process them via the RKNN NPU:
#!/bin/bash

# Define the temporary directory for images
TEMP_DIR="/media/32GB/pics"

# Define the resize/letterbox option
RESIZE_OPTION="letterbox" # or "letterbox" depending on your requirement

# Define the output image path pattern
OUTPUT_IMAGE_PATH="/media/32GB/processed_pics/%Y-%m-%d_%H-%M-%S_processed.jpg"

# Define the path to the rknn_yolov5_demo_Linux binary
BINARY_PATH="$HOME/ezrknn-toolkit2/rknpu2/examples/rknn_yolov5_demo/install/rknn_yolov5_demo_Linux"

# Define ntfy variables
NTFY_URL="https://ntfy.org/ho"
NTFY_USER="xxx"
NTFY_PASS="xxxx"

# Empty existing content
rm "$TEMP_DIR"/*

# Function to run ffmpeg and write images to temporary files
run_ffmpeg() {
    v380 -u xxxx -p xxxx -addr 192.168.1.xxx | ffmpeg -i - -f image2 -vf fps=3 -strftime 1 "$TEMP_DIR/%Y-%m-%d_%H-%M-%S_cap.jpg" -y
}

# Function to run rknn_yolov5_demo_Linux and process images from temporary files
run_rknn_yolov5_demo() {
    while true; do
        # Find the most recent image file in the temporary directory
        IMAGE_PATH=$(ls -t "$TEMP_DIR"/*.jpg | head -n 1)

        # Check if the image path is not empty
        if [ -n "$IMAGE_PATH" ]; then
            # Define the output image path
            OUTPUT_IMAGE=$(date +"$OUTPUT_IMAGE_PATH")

            # Change to the binary directory and set LD_LIBRARY_PATH
            DETECTION_OUTPUT=$(cd "$BINARY_PATH" && LD_LIBRARY_PATH=./lib ./rknn_yolov5_demo ./model/RK3566_RK3568/yolov5s-640-640.rknn "$IMAGE_PATH" "$RESIZE_OPTION" "$OUTPUT_IMAGE")

            # Check if the detection output contains the word "person"
            if echo "$DETECTION_OUTPUT" | grep -q "person"; then
                echo "Human detected. Saving processed image to $OUTPUT_IMAGE"
                rm "$IMAGE_PATH"

                # Upload the image using the imgur binary and capture the link
                UPLOAD_OUTPUT=$(imgur "$OUTPUT_IMAGE")
                UPLOAD_LINK=$(echo "$UPLOAD_OUTPUT" | grep -m 1 '^http')
                if [ -n "$UPLOAD_LINK" ]; then
                    echo "Image uploaded successfully. Link: $UPLOAD_LINK"
                    # Send ntfy notification with the image link
                    curl -u $NTFY_USER:$NTFY_PASS -H "tags:rotating_light" -H "Attach:$UPLOAD_LINK" -d "Human detected" $NTFY_URL
                else
                    echo "Failed to upload image."
                fi
            else
                rm "$OUTPUT_IMAGE"
                rm "$IMAGE_PATH"
            fi
        fi

        # Sleep for a short period to avoid high CPU usage
        sleep 1
    done
}

# Run ffmpeg and rknn_yolov5_demo_Linux in the background
run_ffmpeg &
run_rknn_yolov5_demo &
and the corresponding .service file:
[Unit]
Description=Process Images with rknn_yolov5_demo
After=network.target
#StartLimitIntervalSec=60
#StartLimitBurst=5

[Service]
Type=simple
ExecStartPre=/bin/sleep 30
ExecStart=/home/xxx/process_images_rknn.sh
Restart=always
RestartSec=3
TimeoutStartSec=60

[Install]
WantedBy=default.target
Now the last two lines of the script are causing problems.
Case 1] If I keep it as shown above (both functions backgrounded with &), htop shows no sign of the ffmpeg or rknn binaries starting.
Case 2] I removed the & from both lines; then only ffmpeg runs, but rknn is nowhere in htop.
Case 3] Only this case works:
run_ffmpeg & run_rknn_yolov5_demo
I am reloading the systemd daemon and restarting the service after each script modification in each case.
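One thing that may explain Case 1 (a sketch, not a verified fix): with Type=simple, systemd treats the script itself as the main process, so when both functions are backgrounded the script exits immediately, the unit is considered finished, and any remaining child processes are cleaned up under the default KillMode before the unit is restarted. Keeping the script alive with wait would avoid that:

# end of process_images_rknn.sh
run_ffmpeg &
run_rknn_yolov5_demo &

# Keep the main process alive so systemd (Type=simple) does not consider the unit exited
wait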