
Other articles (44)
-
Use, discuss, criticize
13 April 2011. Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
MediaSPIP Player: potential problems
22 February 2011. The player does not work on Internet Explorer
On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
If the configuration of this Apache module contains a line that looks like the following, try removing it or commenting it out to see whether the player then works correctly: /** * GeSHi (C) 2004 - 2007 Nigel McNie, (...) -
MediaSPIP Player: controls
26 May 2010. Mouse controls for the player
In addition to clicking the visible buttons of the player interface, other actions can be performed with the mouse: Click: clicking the video, or the logo shown for audio, switches it between play and pause depending on its current state; Scroll wheel: when the mouse hovers over the area used by the media, the wheel no longer scrolls the page but instead decreases or (...)
On other sites (3544)
-
TypeError: _ffmpeg_ffmpeg__WEBPACK_IMPORTED_MODULE_1__ is not a constructor
10 November 2023, by Shubham

import { useState, useRef } from "react";

import * as FFmpeg from "@ffmpeg/ffmpeg";

const AudioRecorders = ({ onAudioRecorded }) => {
  const [permission, setPermission] = useState(false);
  const [stream, setStream] = useState(null);
  const mimeType = "video/webm";
  const mediaRecorder = useRef(null);
  const [recordingStatus, setRecordingStatus] = useState("inactive");
  const [audioChunks, setAudioChunks] = useState([]);
  const [audio, setAudio] = useState(null);

  const ffmpeg = useRef(null);

  const createFFmpeg = async ({ log = false }) => {
    // here I am facing the error
    const ffmpegInstance = new FFmpeg({ log });
    await ffmpegInstance.load();
    return ffmpegInstance;
  };

  const convertWebmToWav = async (webmBlob) => {
    if (!ffmpeg.current) {
      ffmpeg.current = await createFFmpeg({ log: false });
    }

    const inputName = "input.webm";
    const outputName = "output.wav";

    ffmpeg.current.FS("writeFile", inputName, await webmBlob.arrayBuffer());
    await ffmpeg.current.run("-i", inputName, outputName);

    const outputData = ffmpeg.current.FS("readFile", outputName);
    const outputBlob = new Blob([outputData.buffer], { type: "audio/wav" });

    return outputBlob;
  };

  const getMicrophonePermission = async () => {
    if ("MediaRecorder" in window) {
      try {
        const streamData = await navigator.mediaDevices.getUserMedia({
          audio: true,
          video: false,
        });
        setPermission(true);
        setStream(streamData);
      } catch (err) {
        alert(err.message);
      }
    } else {
      alert("The MediaRecorder API is not supported in your browser.");
    }
  };

  const startRecording = async () => {
    setRecordingStatus("recording");
    // create a new MediaRecorder instance using the stream
    const media = new MediaRecorder(stream, { type: mimeType });
    // set the MediaRecorder instance to the mediaRecorder ref
    mediaRecorder.current = media;
    // invoke the start method to start the recording process
    mediaRecorder.current.start();
    let localAudioChunks = [];
    mediaRecorder.current.ondataavailable = (event) => {
      if (typeof event.data === "undefined") return;
      if (event.data.size === 0) return;
      localAudioChunks.push(event.data);
    };
    setAudioChunks(localAudioChunks);
  };

  const stopRecording = () => {
    setRecordingStatus("inactive");
    // stop the recording instance
    mediaRecorder.current.stop();
    mediaRecorder.current.onstop = async () => {
      // create a blob from the audio chunks data
      const audioBlob = new Blob(audioChunks, { type: mimeType });
      // create a playable URL from the blob
      const audioUrl = URL.createObjectURL(audioBlob);
      // convert the WebM blob to a WAV blob
      const newBlob = await convertWebmToWav(audioBlob);
      await onAudioRecorded(newBlob);
      setAudio(audioUrl);
      setAudioChunks([]);
    };
  };

  return (
    <div>
      <h2>Audio Recorder</h2>
      <main>
        <div className="audio-controls">
          {!permission ? (
            <button onClick={getMicrophonePermission} type="button">
              Get Microphone
            </button>
          ) : null}
          {permission && recordingStatus === "inactive" ? (
            <button onClick={startRecording} type="button">
              Start Recording
            </button>
          ) : null}
          {recordingStatus === "recording" ? (
            <button onClick={stopRecording} type="button">
              Stop Recording
            </button>
          ) : null}
          {audio ? (
            <div className="audio-container">
              <audio src={audio} controls></audio>
              <a download href={audio}>
                Download Recording
              </a>
            </div>
          ) : null}
        </div>
      </main>
    </div>
  );
};
export default AudioRecorders;



ERROR
_ffmpeg_ffmpeg__WEBPACK_IMPORTED_MODULE_1__ is not a constructor
TypeError: _ffmpeg_ffmpeg__WEBPACK_IMPORTED_MODULE_1__ is not a constructor
    at createFFmpeg (http://localhost:3000/main.48220156e0c620f1acd0.hot-update.js:41:28)
    at convertWebmToWav (http://localhost:3000/main.48220156e0c620f1acd0.hot-update.js:49:30)
    at mediaRecorder.current.onstop (http://localhost:3000/main.48220156e0c620f1acd0.hot-update.js:109:29)


I am trying to record the voice as audio/wav, but it keeps being recorded as video/webm, and not simply because of const mimeType = "video/webm": whatever mimeType I set, https://www.checkfiletype.com/ still reports the file type as video/webm. I am recording it for speech_recognition in a Flask backend, which accepts only audio/wav.
So on the frontend I have written the function "convertWebmToWav", which gives me the uncaught runtime error shown above.
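For reference, the FS()/run() calls in the component match the pre-0.12 @ffmpeg/ffmpeg API, which exports a createFFmpeg() factory function rather than an FFmpeg class, so calling new on the namespace import fails with exactly this "is not a constructor" TypeError. A minimal sketch of the conversion under that assumption (@ffmpeg/ffmpeg 0.11.x; this is not taken from the question itself):

// Minimal sketch, assuming @ffmpeg/ffmpeg 0.11.x (the API generation that provides FS() and run()).
import { createFFmpeg, fetchFile } from "@ffmpeg/ffmpeg";

const ffmpeg = createFFmpeg({ log: false }); // factory function, not a constructor

export async function convertWebmToWav(webmBlob) {
  if (!ffmpeg.isLoaded()) {
    await ffmpeg.load(); // fetches and instantiates the wasm core once
  }
  // fetchFile turns a Blob/File/URL into the Uint8Array the in-memory FS expects
  ffmpeg.FS("writeFile", "input.webm", await fetchFile(webmBlob));
  await ffmpeg.run("-i", "input.webm", "output.wav");
  const data = ffmpeg.FS("readFile", "output.wav");
  return new Blob([data.buffer], { type: "audio/wav" });
}

In @ffmpeg/ffmpeg 0.12 and later the package instead exports an FFmpeg class (used as new FFmpeg()), and FS()/run() are replaced by writeFile()/exec()/readFile(), with fetchFile moved to @ffmpeg/util, so the two styles cannot be mixed in one component.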


-
How to Use FFmpeg to Fetch Audio From the Local Network and Decode It to PCM?
26 May 2020, by Yousef Alaqra

Currently, I have a Node.js server connected to a specific IP address on the local network (the source of the audio) in order to receive the audio using the VBAN protocol. VBAN basically uses UDP to send audio over the local network.



Node.js implementation:



http.listen(3000, () => {
 console.log("Server running on port 3000");
});

let PORT = 6980;
let HOST = "192.168.1.244";

io.on("connection", (socket) => {
 console.log("a user connected");
 socket.on("disconnect", () => {
 console.log("user disconnected");
 });
});

io.on("connection", () => {

 let dgram = require("dgram");
 let server = dgram.createSocket("udp4");

 server.on("listening", () => {
 let address = server.address();
 console.log("server host", address.address);
 console.log("server port", address.port);
 });

 server.on("message", function (message, remote) {
 let audioData = vban.ProcessPacket(message);
 io.emit("audio", audioData); // // console.log(`Received packet: ${remote.address}:${remote.port}`)
 });
 server.bind({
 address: "192.168.1.230",
 port: PORT,
 exclusive: false,
 });
});




Once the server receives a packet from the local network, it processes it and then, using socket.io, emits the processed data to the client.



An example of the processed audio data that is emitted from the socket and received on the Angular side:



audio {
 format: {
 channels: 2,
 sampleRate: 44100,
 interleaved: true,
 float: false,
 signed: true,
 bitDepth: 16,
 byteOrder: 'LE'
 },
 sampleRate: 44100,
 buffer: <Buffer 2e 00 ce ff 3d bd 44 b6 48 c3 32 d3 31 d4 30 dd 38 34 e5 1d c6 25 ... 974 more bytes>,
 channels: 2,
}



On the client side (Angular), after receiving a packet via socket.io-client, an AudioContext is used to decode the audio and play it:



playAudio(audioData) {
 let audioCtx = new AudioContext();
 let count = 0;
 let offset = 0;
 let msInput = 1000;
 let msToBuffer = Math.max(50, msInput);
 let bufferX = 0;
 let audioBuffer;
 let prevFormat = {};
 let source;

 if (!audioBuffer || Object.keys(audioData.format).some((key) => prevFormat[key] !== audioData.format[key])) {
 prevFormat = audioData.format;
 bufferX = Math.ceil(((msToBuffer / 1000) * audioData.sampleRate) / audioData.samples);
 if (bufferX < 3) {
 bufferX = 3;
 }
 audioBuffer = audioCtx.createBuffer(audioData.channels, audioData.samples * bufferX, audioData.sampleRate);
 if (source) {
 source.disconnect();
 }
 source = audioCtx.createBufferSource();
 console.log("source", source);
 source.connect(audioCtx.destination);
 source.loop = true;
 source.buffer = audioBuffer;
 source.start();
 }
 }




Regardless, the audio isn't playing on the client side; something is wrong, and this isn't the correct implementation.
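For comparison, the snippet above never copies the received samples into the AudioBuffer, so no sound can come out even when the rest works. A minimal sketch of that missing step, assuming the payload arrives as an ArrayBuffer of interleaved signed 16-bit little-endian samples as the format object describes (the function name and scheduling here are illustrative, not part of the original code):

// Sketch: decode one interleaved s16le PCM packet into an AudioBuffer and play it.
function playPcmChunk(audioCtx, audioData) {
  const interleaved = new Int16Array(audioData.buffer);
  const frames = interleaved.length / audioData.channels;
  const audioBuffer = audioCtx.createBuffer(audioData.channels, frames, audioData.sampleRate);

  // De-interleave and scale 16-bit integers into the [-1, 1] float range.
  for (let ch = 0; ch < audioData.channels; ch++) {
    const channel = audioBuffer.getChannelData(ch);
    for (let i = 0; i < frames; i++) {
      channel[i] = interleaved[i * audioData.channels + ch] / 32768;
    }
  }

  const source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioCtx.destination);
  source.start();
}

Consecutive chunks would still need to be scheduled back to back (source.start(when)) to avoid gaps, which is part of the complexity the FFmpeg suggestion below is meant to avoid.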



Brad mentioned in the comments below that I could implement this correctly and with less complexity using an FFmpeg child process, and I'm very interested to know how to fetch the audio locally using FFmpeg.
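One possible shape of that suggestion, sketched here under the assumption that vban.ProcessPacket() returns the raw interleaved 16-bit little-endian stereo PCM payload at 44100 Hz (as the format object above indicates); the flags and the pass-through output format are illustrative rather than a confirmed recipe:

// Sketch: feed the VBAN payload into an ffmpeg child process over stdin
// and read decoded PCM back from stdout.
const { spawn } = require("child_process");
const dgram = require("dgram");

const ffmpeg = spawn("ffmpeg", [
  "-f", "s16le", "-ar", "44100", "-ac", "2", // describe the raw PCM arriving on stdin
  "-i", "pipe:0",
  "-f", "s16le", "-ar", "44100", "-ac", "2", // emit raw PCM again (resample or convert here if needed)
  "pipe:1",
]);

const udp = dgram.createSocket("udp4");
udp.on("message", (message) => {
  // Assumption: ProcessPacket() strips the VBAN header and exposes the PCM bytes as .buffer
  const pcm = vban.ProcessPacket(message).buffer;
  ffmpeg.stdin.write(pcm);
});
udp.bind(6980, "192.168.1.230");

ffmpeg.stdout.on("data", (chunk) => {
  io.emit("audio", chunk); // forward decoded PCM to the socket.io clients
});

FFmpeg can also read a UDP stream directly through its udp:// input protocol, but the VBAN header would still have to be stripped first, which is why the header handling stays on the Node side in this sketch.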

