
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
Other articles (105)
-
Improving the base version
13 September 2013
A nicer multiple select
The Chosen plugin improves the ergonomics of multiple-select fields. See the following two images for a comparison.
To use it, activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...) -
Custom menus
14 November 2010, by
MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
This lets channel administrators configure these menus in detail.
Menus created at site initialisation
By default, three menus are created automatically when the site is initialised: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...) -
Farm management
2 March 2010, by
The farm as a whole is managed by "super admins".
Certain settings can be adjusted to regulate the needs of the different channels.
To begin with, it uses the "Gestion de mutualisation" plugin
On other sites (10366)
-
Using ffmpeg to merge video segments created by the MediaRecorder API
10 April 2023, by Dario Cimmino
I am recording a live video from a webcam using the MediaRecorder API in chunks of 3 seconds:


startButton.addEventListener('click', () => {
  navigator.mediaDevices.getUserMedia({
    video: {
      width: 1280,
      height: 720,
      frameRate: { ideal: 30, max: 30 }
    }
  })
    .then(stream => {
      video.srcObject = stream;
      mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
      mediaRecorder.ondataavailable = async (event) => {
        const blob = new Blob([event.data], { type: 'video/mp4' });
        const formData = new FormData();
        formData.append('segment', blob, `segment${segmentNumber}.mp4`);

        // When a new video segment is ready
        fetch('http://localhost:3000/upload', {
          method: 'POST',
          body: formData
        })
          .then((response) => response.text())
          .then((result) => {
            console.log('Upload result:', result);
          })
          .catch((error) => {
            console.error('Error uploading video segment:', error);
          });

        // Upload data to MySQL
        fetch('upload.php', {
          method: 'POST',
          body: formData
        })
          .then(response => response.text())
          .then(result => {
            console.log('Upload result to MYSQL:', result);
          })
          .catch(error => {
            console.error('Error uploading video segment to MYSQL:', error);
          });

        segmentNumber++;
      };

      mediaRecorder.start(3000);
    })
    .catch(error => {
      console.error('Error accessing camera:', error);
    });
});


I am left with only the first segment playable, as is expected.


However, when the recording stops, I'd like to merge all those recorded segments using ffmpeg (or any other tool) with the help of my Node.js server.


I am having difficulty understanding the parsing of MP4 files.


If I try the command:


ffmpeg -i segment1.mp4 -i segment2.mp4 -i segment3.mp4 out.mp4



I get the following error:


ffmpeg version N-110223-gb18a9c2971-20230410 Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12.2.0 (crosstool-NG 1.25.0.152_89671bf)
 configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-config=pkg-config --cross-prefix=x86_64-w64-mingw32- --arch=x86_64 --target-os=mingw32 --enable-gpl --enable-version3 --disable-debug --disable-w32threads --enable-pthreads --enable-iconv --enable-libxml2 --enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp --enable-lzma --enable-fontconfig --enable-libvorbis --enable-opencl --disable-libpulse --enable-libvmaf --disable-libxcb --disable-xlib --enable-amf --enable-libaom --enable-libaribb24 --enable-avisynth --enable-chromaprint --enable-libdav1d --enable-libdavs2 --disable-libfdk-aac --enable-ffnvcodec --enable-cuda-llvm --enable-frei0r --enable-libgme --enable-libkvazaar --enable-libass --enable-libbluray --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librist --enable-libssh --enable-libtheora --enable-libvpx --enable-libwebp --enable-lv2 --disable-libmfx --enable-libvpl --enable-openal --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-librav1e --enable-librubberband --enable-schannel --enable-sdl2 --enable-libsoxr --enable-libsrt --enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --disable-libdrm --disable-vaapi --enable-libvidstab --enable-vulkan --enable-libshaderc --enable-libplacebo --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libzimg --enable-libzvbi --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags= --extra-ldflags=-pthread --extra-ldexeflags= --extra-libs=-lgomp --extra-version=20230410
 libavutil 58. 6.100 / 58. 6.100
 libavcodec 60. 9.100 / 60. 9.100
 libavformat 60. 4.101 / 60. 4.101
 libavdevice 60. 2.100 / 60. 2.100
 libavfilter 9. 5.100 / 9. 5.100
 libswscale 7. 2.100 / 7. 2.100
 libswresample 4. 11.100 / 4. 11.100
 libpostproc 57. 2.100 / 57. 2.100
Input #0, matroska,webm, from 'segment1.mp4':
 Metadata:
 encoder : Chrome
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0(eng): Video: h264 (Constrained Baseline), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 30.30 fps, 30 tbr, 1k tbn (default)
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d93cf25fc0] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001d93cf25fc0] moov atom not found
segment2.mp4: Invalid data found when processing input



Any help or input is appreciated. Thanks!


-
How to create a video from a webcam stream and a canvas?
1 May 2024, by Stefdelec
I am trying to generate a video in the browser from different cuts:
Slide: stream from canvas
Video: stream from webcam


I just want to allow the user to download the edited video as
slide1 + video1 + slide2 + video2 + slide3 + video3.


Here is my code :


const canvas = document.getElementById('myCanvas');
const ctx = canvas.getContext('2d');
const webcam = document.getElementById('webcam');
const videoPlayer = document.createElement('video');
videoPlayer.controls = true;
document.body.appendChild(videoPlayer);
const videoWidth = 640;
const videoHeight = 480;
let keepAnimating = true;
const frameRate = 30;

// Attempt to get webcam access
function setupWebcam() {
  const constraints = {
    video: {
      frameRate: frameRate,
      width: videoWidth,
      height: videoHeight
    }
  };
  navigator.mediaDevices.getUserMedia(constraints)
    .then(stream => {
      webcam.srcObject = stream;
      webcam.addEventListener('loadedmetadata', () => {
        recordSegments();
        console.log('Webcam feed is now displayed');
      });
    })
    .catch(err => {
      console.error("Error accessing webcam:", err);
      alert('Could not access the webcam. Please ensure permissions are granted and try again.');
    });
}

// Function to continuously draw on the canvas
function animateCanvas(content) {
  if (!keepAnimating) {
    console.log("keepAnimating", keepAnimating);
    return; // Stop the animation when keepAnimating is false
  }

  ctx.clearRect(0, 0, canvas.width, canvas.height); // Clear previous drawings
  ctx.fillStyle = `rgba(${Math.floor(Math.random() * 255)}, ${Math.floor(Math.random() * 255)}, ${Math.floor(Math.random() * 255)}, 0.5)`;
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = '#000';
  ctx.font = '48px serif';
  ctx.fillText(content + ' ' + new Date().toLocaleTimeString(), 50, 100);

  // Request the next frame
  requestAnimationFrame(() => animateCanvas(content));
}

// Initialize recording segments array
const recordedSegments = [];

// Modified startRecording to manage animation
function startRecording(stream, duration = 5000, content) {
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const data = [];

  recorder.ondataavailable = e => data.push(e.data);

  // Start animating the canvas
  keepAnimating = true;
  animateCanvas(content);
  recorder.start();
  return new Promise((resolve) => {
    // Automatically stop recording after 'duration' milliseconds
    setTimeout(() => {
      recorder.stop();
      // Stop the animation when recording stops
      keepAnimating = false;
    }, duration);

    recorder.onstop = () => {
      const blob = new Blob(data, { type: 'video/webm' });
      recordedSegments.push(blob);
      keepAnimating = true;
      resolve(blob);
    };
  });
}

// Sequence to record segments
async function recordSegments() {
  // Record canvas with dynamic content
  await startRecording(canvas.captureStream(frameRate), 2000, 'Canvas Draw 1').then(() => console.log('Canvas 1 recorded'));

  await startRecording(webcam.srcObject, 3000).then(() => console.log('Webcam 1 recorded'));

  await startRecording(webcam.srcObject).then(() => console.log('Webcam 2 recorded'));
  mergeAndDownloadVideo();
}

function downLoadVideo(blob) {
  const url = URL.createObjectURL(blob);

  // Create an anchor element and trigger a download
  const a = document.createElement('a');
  a.style.display = 'none';
  a.href = url;
  a.download = 'merged-video.webm';
  document.body.appendChild(a);
  a.click();

  // Clean up by revoking the Blob URL and removing the anchor element after the download
  setTimeout(() => {
    document.body.removeChild(a);
    window.URL.revokeObjectURL(url);
  }, 100);
}

function mergeAndDownloadVideo() {
  console.log("recordedSegments length", recordedSegments.length);
  // Create a new Blob from all recorded video segments
  const superBlob = new Blob(recordedSegments, { type: 'video/webm' });

  downLoadVideo(superBlob);
}

// Start the process by setting up the webcam first
setupWebcam();



You can find it here: https://jsfiddle.net/Sulot/nmqf6wdj/25/


I am unable to get one "slide" + webcam video + "slide" + webcam video.


It merges only the first two segments, not the others. I tried with ffmpeg on the browser side.
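A likely explanation, offered as an assumption: each call to startRecording above creates a fresh MediaRecorder, so every blob in recordedSegments is a complete standalone WebM file with its own header, and gluing them into one Blob plays only up to the first header in most players. Once the segments are saved as files, one hedged alternative is ffmpeg's concat demuxer (the file names below are assumptions, and stream copy only works if all segments share the same codecs and parameters):

```shell
# List the segments in playback order, using the concat demuxer's syntax.
cat > list.txt <<'EOF'
file 'slide1.webm'
file 'video1.webm'
file 'slide2.webm'
EOF

# Concatenate without re-encoding; stream copy requires identical codec
# parameters across segments. (Guarded so the sketch is a no-op when
# ffmpeg is not installed.)
if command -v ffmpeg >/dev/null; then
  ffmpeg -f concat -safe 0 -i list.txt -c copy merged.webm
fi
```

If the canvas and webcam segments do not share identical parameters, re-encoding (dropping `-c copy`) would be needed instead.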


-
TypeError: _ffmpeg_ffmpeg__WEBPACK_IMPORTED_MODULE_1__ is not a constructor
10 November 2023, by Shubham

import { useState, useRef } from "react";
import * as FFmpeg from "@ffmpeg/ffmpeg";

const AudioRecorders = ({ onAudioRecorded }) => {
  const [permission, setPermission] = useState(false);
  const [stream, setStream] = useState(null);
  const mimeType = "video/webm";
  const mediaRecorder = useRef(null);
  const [recordingStatus, setRecordingStatus] = useState("inactive");
  const [audioChunks, setAudioChunks] = useState([]);
  const [audio, setAudio] = useState(null);

  const ffmpeg = useRef(null);

  const createFFmpeg = async ({ log = false }) => {
    // here I am facing the error
    const ffmpegInstance = new FFmpeg({ log });
    await ffmpegInstance.load();
    return ffmpegInstance;
  };

  const convertWebmToWav = async (webmBlob) => {
    if (!ffmpeg.current) {
      ffmpeg.current = await createFFmpeg({ log: false });
    }

    const inputName = "input.webm";
    const outputName = "output.wav";

    ffmpeg.current.FS("writeFile", inputName, await webmBlob.arrayBuffer());
    await ffmpeg.current.run("-i", inputName, outputName);

    const outputData = ffmpeg.current.FS("readFile", outputName);
    const outputBlob = new Blob([outputData.buffer], { type: "audio/wav" });

    return outputBlob;
  };

  const getMicrophonePermission = async () => {
    if ("MediaRecorder" in window) {
      try {
        const streamData = await navigator.mediaDevices.getUserMedia({
          audio: true,
          video: false,
        });
        setPermission(true);
        setStream(streamData);
      } catch (err) {
        alert(err.message);
      }
    } else {
      alert("The MediaRecorder API is not supported in your browser.");
    }
  };

  const startRecording = async () => {
    setRecordingStatus("recording");
    // create new MediaRecorder instance using the stream
    const media = new MediaRecorder(stream, { type: mimeType });
    // set the MediaRecorder instance to the mediaRecorder ref
    mediaRecorder.current = media;
    // invokes the start method to start the recording process
    mediaRecorder.current.start();
    let localAudioChunks = [];
    mediaRecorder.current.ondataavailable = (event) => {
      if (typeof event.data === "undefined") return;
      if (event.data.size === 0) return;
      localAudioChunks.push(event.data);
    };
    setAudioChunks(localAudioChunks);
  };

  const stopRecording = () => {
    setRecordingStatus("inactive");
    // stops the recording instance
    mediaRecorder.current.stop();
    mediaRecorder.current.onstop = async () => {
      // creates a blob file from the audiochunks data
      const audioBlob = new Blob(audioChunks, { type: mimeType });
      // creates a playable URL from the blob file.
      const audioUrl = URL.createObjectURL(audioBlob);
      // converts the WebM blob to a WAV blob.
      const newBlob = await convertWebmToWav(audioBlob);
      await onAudioRecorded(newBlob);
      setAudio(audioUrl);
      setAudioChunks([]);
    };
  };

  return (
    <div>
      <h2>Audio Recorder</h2>
      <div className="audio-controls">
        {!permission ? (
          <button type="button">Get Microphone</button>
        ) : null}
        {permission && recordingStatus === "inactive" ? (
          <button type="button">Start Recording</button>
        ) : null}
        {recordingStatus === "recording" ? (
          <button type="button">Stop Recording</button>
        ) : null}
        {audio ? (
          <div className="audio-container">
            <audio src={audio}></audio>
            <a>Download Recording</a>
          </div>
        ) : null}
      </div>
    </div>
  );
};

export default AudioRecorders;


ERROR
ffmpeg_ffmpeg__WEBPACK_IMPORTED_MODULE_1_ is not a constructor
TypeError: ffmpeg_ffmpeg__WEBPACK_IMPORTED_MODULE_1_ is not a constructor
at createFFmpeg (http://localhost:3000/main.48220156e0c620f1acd0.hot-update.js:41:28)
at convertWebmToWav (http://localhost:3000/main.48220156e0c620f1acd0.hot-update.js:49:30)
at mediaRecorder.current.onstop (http://localhost:3000/main.48220156e0c620f1acd0.hot-update.js:109:29)


I am trying to record the voice in audio/wav format, but it is being recorded as video/webm, and not just because of const mimeType = "video/webm": whatever mimeType I give, https://www.checkfiletype.com/ still reports the file type as video/webm. I am recording it for speech_recognition, used in the Flask backend, which accepts only audio/wav.
So on the frontend I have written a function convertWebmToWav, which gives me the error:
Uncaught runtime errors:

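As an aside, this error usually means the package exports no class named FFmpeg at all. Assuming @ffmpeg/ffmpeg 0.11.x, which is the API the FS/run calls in the question match, the module exposes a createFFmpeg factory function rather than a constructor, so one hedged sketch of a fix is to replace the namespace import and the new call:

```javascript
// Sketch, assuming @ffmpeg/ffmpeg 0.11.x: import the factory, not a class.
import { createFFmpeg } from "@ffmpeg/ffmpeg";

const createFFmpegInstance = async ({ log = false }) => {
  // createFFmpeg is a factory function, so no `new` is needed
  const ffmpegInstance = createFFmpeg({ log });
  await ffmpegInstance.load();
  return ffmpegInstance;
};
```

In the 0.12.x rewrite of the package the API changed again (a class imported as { FFmpeg }, with writeFile/exec methods), so which fix applies depends on the installed version.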