
Media (91)
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#1 The Wires
11 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
ED-ME-5 1-DVD
11 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (66)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customising by adding your logo, banner or background image
5 September 2013, by
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image;
-
Permissions overridden by plugins
27 April 2010, by
MediaSPIP core
autoriser_auteur_modifier() so that visitors can edit their information on the authors page
On other sites (7420)
-
Problem with FFmpeg breaking when streaming the entire screen and switching tabs
2 October 2024, by Ibad Ahmad
I'm working on a screen recording and streaming setup where the user records their entire screen and streams it to Twitch. The setup works fine initially, but when I switch tabs during the recording, the stream breaks on the backend and I get the following FFmpeg errors:


FFmpeg STDERR: [matroska,webm @ 0x7f9dcb904580] EBML header parsing failed
[in#0 @ 0x7f9dcb904380] Error opening input: Invalid data found when processing input
Error opening input file -.
Error opening input files: Invalid data found when processing input



My frontend code captures the screen and microphone and streams it via a WebSocket to the backend, where FFmpeg processes the stream. Below is my relevant frontend code:


const startRecording = async () => {
  try {
    const screenStream = await navigator.mediaDevices.getDisplayMedia({
      preferCurrentTab: true,
      systemAudio: 'include',
      surfaceSwitching: 'include',
      monitorTypeSurfaces: 'include',
      video: {
        displaySurface: 'browser',
        height: 720,
        width: 1280,
        frameRate: { ideal: 24, max: 30 },
      },
    });

    screenStream.getVideoTracks()[0].onended = () => {
      console.log('Screen sharing ended. Stopping the recorder.');
      stopRecording();
    };

    const micStream = await navigator.mediaDevices.getUserMedia({
      audio: true,
    });

    const combinedStream = new MediaStream([
      ...screenStream.getVideoTracks(),
      ...micStream.getAudioTracks(),
    ]);

    const recorder = new MediaRecorder(combinedStream, {
      mimeType: 'video/webm; codecs=vp8,opus',
      videoBitsPerSecond: 3 * 1024 * 1024,
    });

    const timeslice = 1000;

    recorder.ondataavailable = async (event) => {
      if (socket?.current?.connected && event.data.size > 0) {
        console.log('Sending chunk data:', socket.current.id);
        socket?.current.send(event.data);
        recordedChunks.current.push(event.data);
      } else if (!socket?.current?.connected) {
        handleSocketDisconnection();
      }
    };

    mediaRecorder.current = recorder;
    recorder.start(timeslice);
    setIsRecording(true);
  } catch (error) {
    console.log('Error starting screen recording:', error);
    toast.error('Failed to start screen recording: ' + error);
  }
};

const stopRecording = () => {
  if (socket?.current && mediaRecorder) {
    mediaRecorder?.current?.stop();
    socket.current.close();
    setIsRecording(false);
    downloadRecordedVideo();
  }
};
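
The downloadRecordedVideo() helper called in stopRecording is not included in the post; a typical implementation over the recordedChunks buffer might look like this (hypothetical, not the author's code):


// Hypothetical helper: assemble the buffered chunks into one WebM file and download it.
const downloadRecordedVideo = () => {
  const blob = new Blob(recordedChunks.current, { type: 'video/webm' });
  const url = URL.createObjectURL(blob);

  // Trigger a download via a temporary anchor element.
  const a = document.createElement('a');
  a.href = url;
  a.download = 'recording.webm';
  a.click();
  URL.revokeObjectURL(url);
};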




And here's my backend code with the FFmpeg settings for Twitch streaming:


const inputSettings = [
  '-f', 'webm', '-i', '-', '-v', 'error', '-analyzeduration', '1000000', '-probesize', '5000000',
];

const twitchSettings = (twitch) => {
  return [
    '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
    '-g', '60', '-b:v', '2500k', '-maxrate', '3000k', '-bufsize', '8000k',
    '-r', '30', '-vf', 'tpad=stop_mode=clone:stop_duration=2',
    '-c:a', 'aac', '-ar', '44100', '-b:a', '96k',
    '-use_wallclock_as_timestamps', '1', '-async', '1',
    '-err_detect', 'ignore_err', '-reconnect', '1',
    '-reconnect_streamed', '1', '-reconnect_delay_max', '5',
    '-y', '-f', 'flv', twitch,
  ];
};
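
The code that actually spawns FFmpeg and feeds it the WebSocket data is not shown in the question; a minimal sketch of how these two argument arrays are typically wired together, assuming a ws-style socket named ws and a twitchUrl variable (both placeholders, not from the original code):


const { spawn } = require('child_process');

// One long-lived FFmpeg process per streaming session, reading WebM from stdin.
const ffmpegProcess = spawn('ffmpeg', [...inputSettings, ...twitchSettings(twitchUrl)]);

// FFmpeg's own diagnostics (this is where "EBML header parsing failed" shows up).
ffmpegProcess.stderr.on('data', (data) => console.log('FFmpeg STDERR:', data.toString()));

// Every incoming WebM chunk is appended, in order, to the same stdin pipe;
// only the very first MediaRecorder chunk contains the EBML/WebM header.
ws.on('message', (chunk) => ffmpegProcess.stdin.write(chunk));
ws.on('close', () => ffmpegProcess.stdin.end());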




Problem: When switching tabs during screen sharing, it seems like the frame rate drops or the stream gets interrupted, leading to FFmpeg errors like "EBML header parsing failed" and "Invalid data found when processing input". I suspect this happens because the browser deprioritizes resources when the tab is not active, which might lead to corrupt chunks being sent to FFmpeg.
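
One way to check that suspicion on the client side is to log page-visibility changes and compare their timestamps with the chunks the backend receives; a small diagnostic sketch (not part of the original code):


// Log when the recording tab is hidden or shown again, so backend-side
// FFmpeg errors can be correlated with tab switches.
document.addEventListener('visibilitychange', () => {
  console.log(
    'Tab visibility changed:',
    document.visibilityState,
    'at', new Date().toISOString()
  );
});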

Questions:

- Could switching tabs during screen capture be causing the issue by disrupting the frame rate or dropping frames?
- Is there a way to ensure FFmpeg doesn't break due to these interruptions?
- Any suggestions on handling the stream more reliably when switching tabs, or on optimizing the FFmpeg setup for this scenario?








I tried adjusting the bitrate, frame rate, and buffer size but still experienced the same issue. I'm trying to figure out whether the problem is related to how browsers handle screen capture when switching tabs, or to something specific about how FFmpeg handles the video stream.


Any insights would be greatly appreciated.
Thanks in advance!


-
FFMPEG ERROR on streaming video generated from MediaRecorder API on RTMP url
20 February 2024, by Prince Mishra
Ref: https://www.mux.com/blog/the-state-of-going-live-from-a-browser


The above blog describes my problem in detail and also presents a solution.
I am trying to implement that solution, which uses Socket.IO.


Here is a description of the problem:


I want to capture the video and audio from the browser using


navigator.mediaDevices
  .getUserMedia({ video: true, audio: true })



and I am using the


const options = {
  mimeType: "video/webm;codecs=vp8",
};
const mediaRecorder = new MediaRecorder(stream, options);



to record the video chunk by chunk from the stream given by getUserMedia, and then Socket.IO to send the video to the backend, where FFmpeg streams the chunks to an RTMP URL.


I am using the following FFmpeg command:


const { spawn } = require('child_process');

const ffmpegProcess = spawn('ffmpeg', [
  '-i', 'pipe:0',
  '-c:v', 'libx264',
  '-preset', 'veryfast',
  '-tune', 'zerolatency',
  '-c:a', 'aac',
  '-ar', '44100',
  '-f', 'flv',
  rtmpurl
]);



And I am getting the following errors:


Can anyone help me figure out how to fix this? I am new to FFmpeg.


Here is the complete frontend and backend code:


Frontend (App.jsx):



import { useEffect } from "react";
import "./App.css";
import io from "socket.io-client";

function App() {
  let video;

  useEffect(() => {
    video = document.getElementById("video");
  }, []);

  const socket = io("http://localhost:3050");
  socket.on("connect", () => {
    console.log("Connected to server");
  });

  let stream;
  navigator.mediaDevices
    .getUserMedia({ video: true, audio: true })
    .then((strea) => {
      video.srcObject = strea;
      stream = strea;
      const options = {
        mimeType: "video/webm;codecs=vp8",
      };
      const mediaRecorder = new MediaRecorder(stream, options);
      console.log(mediaRecorder);
      let chunks = [];

      mediaRecorder.ondataavailable = function (e) {
        chunks.push(e.data);
        console.log(e.data);
      };
      mediaRecorder.onstop = function (e) {
        const blob = new Blob(chunks, { type: "video/webm;codecs=vp8" });
        console.log("emitted");
        socket.emit("videoChunk", blob);
        chunks = [];
        // const videoURL = URL.createObjectURL(blob);
        // const a = document.createElement('a');
        // a.href = videoURL;
        // a.download = 'video.mp4';
        // a.click();
        // window.URL.revokeObjectURL(videoURL); // videoURL only exists in the commented-out block above
      };
      mediaRecorder.start();
      setInterval(() => {
        mediaRecorder.stop();
        mediaRecorder.start();
      }, 2000);
    })
    .catch((error) => {
      console.error("Error accessing camera:", error);
    });

  // Capture video after 10 seconds

  return (
    <>
      <video id="video" width="640" height="480" autoPlay></video>
      <button>Capture</button>
    </>
  );
}

export default App;



Backend Code:


const express = require('express');
const http = require('http');
const socketIo = require('socket.io');
const { spawn } = require('child_process');

const app = express();

const server = http.createServer(app);
const io = socketIo(server, {
  cors: {
    origin: "*",
    methods: ["GET", "POST"]
  },
  maxHttpBufferSize: 1e8
});

const rtmpurl = 'rtmp://localhost/live/test';

io.on('connection', (socket) => {
  console.log('A user connected');

  const ffmpegProcess = spawn('ffmpeg', [
    '-i', 'pipe:0',
    '-c:v', 'libx264',
    '-preset', 'veryfast',
    '-tune', 'zerolatency',
    '-c:a', 'aac',
    '-ar', '44100',
    '-f', 'flv',
    rtmpurl
  ]);

  ffmpegProcess.stdin.on('error', (e) => {
    console.log(e);
  });

  ffmpegProcess.stderr.on('data', (data) => {
    console.log(data.toString());
  });

  ffmpegProcess.on('close', (code) => {
    console.log(`child process exited with code ${code}`);
  });

  socket.on('videoChunk', (chunk) => {
    console.log(chunk);
    ffmpegProcess.stdin.write(chunk);
  });

  socket.on('disconnect', () => {
    console.log('User disconnected');
    ffmpegProcess.stdin.end();
  });
});

const PORT = process.env.PORT || 3050;

app.get('/test', (req, res) => {
  res.send('Hello from /test route!');
});

server.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
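
One detail worth noting when piping Socket.IO chunks into FFmpeg like this is stdin backpressure: write() returns false when FFmpeg is not keeping up and its buffer is full. A small variant of the videoChunk handler above that at least surfaces that condition (a sketch, not from the original post):


socket.on('videoChunk', (chunk) => {
  console.log(chunk.length, 'bytes received');
  // stdin.write() returns false when the internal buffer is full;
  // the 'drain' event fires once it empties again.
  const ok = ffmpegProcess.stdin.write(chunk);
  if (!ok) {
    console.warn('FFmpeg stdin is backed up, waiting for drain...');
    ffmpegProcess.stdin.once('drain', () => console.log('FFmpeg stdin drained'));
  }
});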



-
No audio in the final video when converting webm blobs to mp4 using ffmpeg
28 September 2024, by alpecca
I am trying to record the user's camera and microphone, using MediaRecorder to convert the stream to blobs and sending the blobs every 2 seconds to the backend over a WebSocket. Everything is working fine, but when I check the final mp4 video on the backend, it doesn't have any audio. I tried specifying the audio codec, but it still doesn't help.


My frontend code:


const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });

const recorder = new MediaRecorder(stream, {
  mimeType: 'video/webm;codecs=H264',
  videoBitsPerSecond: 8000000,
  audioBitsPerSecond: 8000000
});

recorder.ondataavailable = (e: BlobEvent) => {
  websocket.send(e.data);
};
recorder.start(2000);
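
The stream passed to MediaRecorder above is not shown being constructed; as a quick check (a diagnostic sketch, not the poster's code), one can verify that it actually carries the microphone track before recording starts:


// If this logs 0, the recording can only ever produce a silent file,
// no matter what the backend FFmpeg command does.
console.log('audio tracks:', stream.getAudioTracks().length);
console.log('video tracks:', stream.getVideoTracks().length);

// The micStream audio track has to end up in the stream MediaRecorder records,
// for example by combining tracks into one MediaStream (hypothetical combination):
const combined = new MediaStream([
  ...stream.getVideoTracks(),
  ...micStream.getAudioTracks(),
]);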



And here is the backend code:


@router.websocket("/streamaudio")
async def websocket_endpoint(websocket: WebSocket):
    await manager.connect(websocket)

    recordingFile = os.path.join(os.getcwd(), f"recording_.mp4")

    command = [
        'ffmpeg',
        '-y',
        '-i',
        '-',
        '-codec:v',
        'copy',
        '-c:a', 'aac',
        '-y',
        '-f', 'mp4',
        recordingFile,
        # "-"
        # f'output{queueNumber}.mp4',
    ]

    # Note: the imports (os, WebSocket, WebSocketDisconnect) and the `router`,
    # `manager`, and `process` objects (the FFmpeg subprocess running `command`)
    # are defined elsewhere in the application and are not shown in this snippet.
    try:
        while True:
            try:
                data = await websocket.receive_bytes()
                process.stdin.send(data)
            except RuntimeError:
                break
    except WebSocketDisconnect:
        print(f"Client disconnected: {websocket.client.host}")
    finally:
        manager.disconnect(websocket)
        await process.stdin.aclose()
        await process.wait()