
Media (1)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013
Updated: October 2013
Language: French
Type: video
Other articles (22)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their own information on the authors page
-
Support for all media types
10 April 2011
Unlike many programs and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (5520)
-
Merging multiple audios to a video with ffmpeg causes the volume to be reduced. How to avoid that?
25 January 2024, by Terry Windwalker

const command = ffmpeg();

const mp4Path = path.join(__dirname, '..', '..', 'temp', `q-video-${new Date().getTime()}.mp4`);

fs.writeFileSync(mp4Path, videoBuff);
console.log('mp4 file created at: ', mp4Path);

// Set the video stream as the input for ffmpeg
command.input(mp4Path);

const mp3Paths = [];

for (let i = 0; i < audios.length; i++) {
  const audio = audios[i];
  const mp3Path = path.join(__dirname, '..', '..', 'temp', `q-audio-${new Date().getTime()}-${i}.mp3`);
  mp3Paths.push(mp3Path);

  fs.writeFileSync(mp3Path, audio.questionBuf);
  console.log('mp3 file created at: ', mp3Path);
  // Set each audio stream as an input for ffmpeg
  command.input(mp3Path);
}

// ChatGPT take 1
const audioTags = [];
const audioFilters = audios.map((audio, index) => {
  const startTime = audio.start_at; // Replace with your logic to calculate start time
  const endTime = audio.end_at;     // Replace with your logic to calculate end time
  audioTags.push(`[delayed${index}]`);
  // Working:
  // return `[${index + 1}:a]atrim=start=0:end=${(endTime - startTime) / 1000},adelay=${startTime}[delayed${index}]`;
  return `[${index + 1}:a]dynaudnorm=p=0.9:m=100:s=5,atrim=start=0:end=${(endTime - startTime) / 1000},adelay=${startTime}[delayed${index}]`;
});

// Concatenate the delayed audio streams
const concatFilter = audioFilters.join(';');

// Mix the concatenated audio streams
const mixFilter = `${concatFilter};[0:a]${audioTags.join('')}amix=inputs=${audios.length + 1}:duration=first:dropout_transition=2[out]`;

// Set the complex filter for ffmpeg
command.complexFilter([mixFilter]);

// Set the output size
if (!isScreen) {
  command.videoFilter('scale=720:-1');
} else {
  command.videoFilter('scale=1920:-1');
}

// Set input options
command.inputOptions([
  '-analyzeduration 20M',
  '-probesize 100M'
]);

// Set output options
command.outputOptions([
  '-c:v libx264', // Specify a video codec
  '-c:a aac',
  '-map 0:v',     // Map the video stream from the first input
  '-map [out]'    // Map the audio stream from the complex filter
]);

// Set the output format
command.toFormat('mp4');

// Set the output file path
command.output(outputFilePath);

// Event handling
command
  .on('start', commandLine => {
    console.log('Spawned ffmpeg with command: ' + commandLine);
  })
  .on('codecData', data => {
    console.log('Input is ' + data.audio + ' audio with ' + data.video + ' video');
  })
  .on('progress', progress => {
    console.log(`Processing: ${progress.percent ? progress.percent.toFixed(2) : '0.00'}% done`);
  })
  .on('stderr', stderrLine => {
    console.log('Stderr output: ' + stderrLine);
  })
  .on('error', (err, stdout, stderr) => {
    console.error('Error merging streams:', err);
    console.error('ffmpeg stdout:', stdout);
    console.error('ffmpeg stderr:', stderr);
    reject(err);
  })
  .on('end', () => {
    console.log('Merging finished successfully.');
    const file = fs.readFileSync(outputFilePath);
    console.log('File read successfully.');
    setTimeout(() => {
      fs.unlinkSync(outputFilePath);
      console.log('Output file deleted successfully.');
      fs.unlinkSync(mp4Path);
      console.log('MP4 file deleted successfully.');
      console.log('mp3Paths: ', mp3Paths);
      for (let mp3Path of mp3Paths) {
        fs.unlinkSync(mp3Path);
      }
      console.log('MP3 files deleted successfully.');
      if (isScreen) {
        for (let path of pathsScreen) {
          fs.unlinkSync(path);
        }
      } else {
        for (let path of pathsCamera) {
          fs.unlinkSync(path);
        }
      }
      console.log('All temp files deleted successfully.');
    }, 3000);
    resolve(file);
  });

// Run the command
command.run();



This is how I am merging my files (an array of WebM recordings) into one video right now. This command seems to make the volume of the video increase gradually from beginning to end (the earlier part of the video is much quieter than the later part). How should I fix this?


Things tried and investigated so far:


- I have checked the original video; it does not have the volume issue, so the issue is caused by this piece of code without a doubt.
- I have tried dynaudnorm, though without fully understanding how it works. Adding it to each of the audio inputs does not fix the issue, and adding it as a separate filter at the end of the combined filter string breaks the session.
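For what it's worth, here is a sketch of one likely culprit and fix (my assumption, not verified against this project): `amix` by default scales each input down by the number of inputs and re-normalizes as delayed inputs drop out, and `dynaudnorm` adapts its gain over time, either of which can sound like the volume slowly ramping up. Recent FFmpeg builds let you disable the scaling with `normalize=0`; note also that `adelay` with a single value only delays the first channel unless you pass `all=1`. The filtergraph builder might look like this (dynaudnorm dropped):

```javascript
// Hypothetical sketch: build the same filtergraph as above, but with
// amix's per-input normalization disabled (normalize=0) and adelay
// applied to all channels (all=1). Assumes the same audios array shape
// with start_at/end_at in milliseconds.
function buildMixFilter(audios) {
  const tags = [];
  const filters = audios.map((audio, index) => {
    const dur = (audio.end_at - audio.start_at) / 1000;
    tags.push(`[delayed${index}]`);
    // Trim each clip to its duration, then delay it to its start time.
    return `[${index + 1}:a]atrim=start=0:end=${dur},adelay=${audio.start_at}:all=1[delayed${index}]`;
  });
  // Mix the main track [0:a] with every delayed clip, without the
  // default 1/n input scaling.
  return `${filters.join(';')};[0:a]${tags.join('')}` +
    `amix=inputs=${audios.length + 1}:duration=first:normalize=0[out]`;
}
```

If your ffmpeg build is too old for `normalize=0`, an alternative with the same intent is appending a `volume=N` filter after `amix` to compensate for the attenuation.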






-
Streaming Webcam Over LAN: HTML5 Video Element Not Loading
27 November 2023, by Franck Freiburger

I am currently working on a project where I aim to stream my webcam over my LAN and read the stream in an HTML5 video element with minimal setup. My setup involves a server (192.168.0.1, /dev/video0 -> ffmpeg) and a client (192.168.0.2, HTML5 browser). I am using ffmpeg with the codec set to H.264.


Here is the ffmpeg command I am using:


ffmpeg -f video4linux2 -i /dev/video0
 -an -c:v libx264 -b:v 1024k -video_size 800x600 -pix_fmt yuv420p -preset ultrafast
 -tune zerolatency -g 16 -keyint_min 16 -f mpegts pipe:1



This command is spawned by a simple Node.js server that shares this stream without any transformation (it just pipes the ffmpeg stream to each incoming connection).


When I use VLC or ffplay with the following command, it works very well:


ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 0 -framedrop http://192.168.0.1:3000/stream



I can even run multiple instances of ffplay and the video plays properly. However, when I try to use the HTML5 element like this:


<video src="http://127.0.0.1:3000/stream" type="video/mp4"></video>



The video seems to "load forever" without any error, and nothing suspect shows up in chrome://media-internals. I can see in the network tab that the stream is being read, but the video does not play (I got the same result using hls.js and video.js).
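One likely explanation (an assumption on my part, not something confirmed by the question): browsers will not decode a raw MPEG-TS stream in a plain video element, while ffplay and VLC happily will, which would produce exactly this "loads forever, no error" behavior. A common workaround is to remux to fragmented MP4 so the browser can start decoding mid-stream. A sketch of the adjusted argument list, keeping the encoder settings from the command above:

```javascript
// Hypothetical helper returning ffmpeg args for a browser-friendly
// fragmented-MP4 stream (the -movflags values come from ffmpeg's
// mov/mp4 muxer options).
function browserStreamArgs(device) {
  return [
    '-f', 'video4linux2', '-i', device,
    '-an', '-c:v', 'libx264', '-b:v', '1024k',
    '-pix_fmt', 'yuv420p', '-preset', 'ultrafast',
    '-tune', 'zerolatency', '-g', '16', '-keyint_min', '16',
    // Fragmented MP4: no seekable moov atom is required up front,
    // so a <video> element can decode the stream progressively.
    '-movflags', 'frag_keyframe+empty_moov+default_base_moof',
    '-f', 'mp4', 'pipe:1',
  ];
}
```

The Node server would then serve the pipe with a `Content-Type: video/mp4` header; the video element itself needs no `type` attribute in that case.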



I am looking for help to understand:


- What is wrong with the <video></video> element in this context?
- Is there a better approach to achieve this?






Any help or guidance would be greatly appreciated.


-
TypeError: _ffmpeg_ffmpeg__WEBPACK_IMPORTED_MODULE_1__ is not a constructor
10 November 2023, by Shubham

import { useState, useRef } from "react";
import * as FFmpeg from "@ffmpeg/ffmpeg";

const AudioRecorders = ({ onAudioRecorded }) => {
  const [permission, setPermission] = useState(false);
  const [stream, setStream] = useState(null);
  const mimeType = "video/webm";
  const mediaRecorder = useRef(null);
  const [recordingStatus, setRecordingStatus] = useState("inactive");
  const [audioChunks, setAudioChunks] = useState([]);
  const [audio, setAudio] = useState(null);

  const ffmpeg = useRef(null);

  const createFFmpeg = async ({ log = false }) => {
    // here I am facing the error
    const ffmpegInstance = new FFmpeg({ log });
    await ffmpegInstance.load();
    return ffmpegInstance;
  };

  const convertWebmToWav = async (webmBlob) => {
    if (!ffmpeg.current) {
      ffmpeg.current = await createFFmpeg({ log: false });
    }

    const inputName = "input.webm";
    const outputName = "output.wav";

    ffmpeg.current.FS("writeFile", inputName, await webmBlob.arrayBuffer());
    await ffmpeg.current.run("-i", inputName, outputName);

    const outputData = ffmpeg.current.FS("readFile", outputName);
    const outputBlob = new Blob([outputData.buffer], { type: "audio/wav" });

    return outputBlob;
  };

  const getMicrophonePermission = async () => {
    if ("MediaRecorder" in window) {
      try {
        const streamData = await navigator.mediaDevices.getUserMedia({
          audio: true,
          video: false,
        });
        setPermission(true);
        setStream(streamData);
      } catch (err) {
        alert(err.message);
      }
    } else {
      alert("The MediaRecorder API is not supported in your browser.");
    }
  };

  const startRecording = async () => {
    setRecordingStatus("recording");
    // create a new MediaRecorder instance using the stream
    const media = new MediaRecorder(stream, { type: mimeType });
    // set the MediaRecorder instance to the mediaRecorder ref
    mediaRecorder.current = media;
    // invoke the start method to start the recording process
    mediaRecorder.current.start();
    let localAudioChunks = [];
    mediaRecorder.current.ondataavailable = (event) => {
      if (typeof event.data === "undefined") return;
      if (event.data.size === 0) return;
      localAudioChunks.push(event.data);
    };
    setAudioChunks(localAudioChunks);
  };

  const stopRecording = () => {
    setRecordingStatus("inactive");
    // stop the recording instance
    mediaRecorder.current.stop();
    mediaRecorder.current.onstop = async () => {
      // create a blob file from the audio chunks data
      const audioBlob = new Blob(audioChunks, { type: mimeType });
      // create a playable URL from the blob file
      const audioUrl = URL.createObjectURL(audioBlob);
      // convert the WebM blob to a WAV blob
      const newBlob = await convertWebmToWav(audioBlob);
      await onAudioRecorded(newBlob);
      setAudio(audioUrl);
      setAudioChunks([]);
    };
  };

  return (
    <div>
      <h2>Audio Recorder</h2>
      <div className="audio-controls">
        {!permission ? (
          <button type="button" onClick={getMicrophonePermission}>
            Get Microphone
          </button>
        ) : null}
        {permission && recordingStatus === "inactive" ? (
          <button type="button" onClick={startRecording}>
            Start Recording
          </button>
        ) : null}
        {recordingStatus === "recording" ? (
          <button type="button" onClick={stopRecording}>
            Stop Recording
          </button>
        ) : null}
        {audio ? (
          <div className="audio-container">
            <audio src={audio} controls />
            <a download href={audio}>
              Download Recording
            </a>
          </div>
        ) : null}
      </div>
    </div>
  );
};

export default AudioRecorders;


ERROR
_ffmpeg_ffmpeg__WEBPACK_IMPORTED_MODULE_1__ is not a constructor
TypeError: _ffmpeg_ffmpeg__WEBPACK_IMPORTED_MODULE_1__ is not a constructor
    at createFFmpeg (http://localhost:3000/main.48220156e0c620f1acd0.hot-update.js:41:28)
    at convertWebmToWav (http://localhost:3000/main.48220156e0c620f1acd0.hot-update.js:49:30)
    at mediaRecorder.current.onstop (http://localhost:3000/main.48220156e0c620f1acd0.hot-update.js:109:29)


I am trying to record the voice in audio/wav format, but it is recorded as video/webm, and not just because of `const mimeType = "video/webm"`: whatever mimeType I give, "https://www.checkfiletype.com/" still reports the file type as video/webm. I am recording for speech_recognition in the Flask backend, which accepts only audio/wav. So in the frontend I have written a function, convertWebmToWav, which is giving me the error:
Uncaught runtime errors:

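For what it's worth, this error usually means the installed @ffmpeg/ffmpeg version does not match the API being called: v0.11.x exports a `createFFmpeg` factory (with `FS`/`run` methods), while v0.12+ exports an `FFmpeg` class (with `writeFile`/`exec`/`readFile`), and `import * as FFmpeg` yields the module namespace object, which is never itself a constructor. A small sketch to tell the two generations apart at runtime, with the v0.12-style usage shown in comments (method names taken from the package's documented exports; treat this as a sketch, not your exact fix):

```javascript
// Decide which API shape the installed @ffmpeg/ffmpeg module exposes.
function resolveFFmpegApi(mod) {
  if (typeof mod.FFmpeg === "function") return "class";         // v0.12+: new mod.FFmpeg()
  if (typeof mod.createFFmpeg === "function") return "factory"; // v0.11.x: mod.createFFmpeg({ log })
  return "unknown";
}

// v0.12+ usage sketch (browser):
//   import { FFmpeg } from "@ffmpeg/ffmpeg";
//   import { fetchFile } from "@ffmpeg/util";
//   const ffmpeg = new FFmpeg();
//   await ffmpeg.load();
//   await ffmpeg.writeFile("input.webm", await fetchFile(webmBlob));
//   await ffmpeg.exec(["-i", "input.webm", "output.wav"]);
//   const data = await ffmpeg.readFile("output.wav");
```

In the code above, either pin @ffmpeg/ffmpeg to 0.11.x and call `FFmpeg.createFFmpeg({ log })`, or upgrade the calls to the class-based v0.12 API; mixing the two produces exactly this "not a constructor" TypeError.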