
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (40)
-
Authorizations overridden by plugins
27 April 2010, by MediaSPIP core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Support for all types of media
10 April 2011
Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether image (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (HTML, CSS), LaTeX, Google Earth) (...)
On other sites (7068)
-
Why is every audio part louder in FFmpeg when I join them into one audio?
13 May 2024, by Volodymyr Bilovus
I am trying to create dubbing for an audio track. I have the original audio track and I want to put the translated audio parts on top of the original.


translated audio 100% vol : --p1--- ---p2--- -----p3--- --p4--

original audio 5% vol : ---------------------------------------


Here is my FFmpeg command with filter_complex


ffmpeg -i video_wpmXlZF4XiE.opus -i 989-audio.mp3 -i 989-audio.mp3 -i 989-audio.mp3 -i 989-audio.mp3 \
-filter_complex "\
[0:a]loudnorm=I=-14:TP=-2:LRA=7, volume=0.05[original]; \
[1:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=5000|5000, volume=1.0[sent1]; \
[2:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=10000|10000, volume=1.0[sent2]; \
[3:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=20000|20000, volume=1.0[sent3]; \
[4:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=30000|30000, volume=1.0[sent4]; \
[original][sent1][sent2][sent3][sent4]amix=inputs=5:duration=longest[out]" \
-map "[out]" output.mp3



The audio tracks I put on top of the original are all the same file:

-i 989-audio.mp3

I did this on purpose to show the problem. And here are the audio levels on the final generated track.


As you can see, the first and second parts are only slightly different, but the third and fourth have a totally different (higher) volume level (notice that the audio is the same). Why does this happen, and how can I work around this odd behaviour?
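My current suspicion is that amix rescales each input by the number of currently active inputs, so the mix level shifts as the short delayed parts start and end at different times. A variant I am considering (only a sketch, and it assumes my FFmpeg build is recent enough to have amix's normalize option) turns that rescaling off:

ffmpeg -i video_wpmXlZF4XiE.opus -i 989-audio.mp3 -i 989-audio.mp3 -i 989-audio.mp3 -i 989-audio.mp3 \
-filter_complex "\
[0:a]loudnorm=I=-14:TP=-2:LRA=7, volume=0.05[original]; \
[1:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=5000|5000, volume=1.0[sent1]; \
[2:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=10000|10000, volume=1.0[sent2]; \
[3:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=20000|20000, volume=1.0[sent3]; \
[4:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=30000|30000, volume=1.0[sent4]; \
[original][sent1][sent2][sent3][sent4]amix=inputs=5:duration=longest:normalize=0[out]" \
-map "[out]" output.mp3

With normalization disabled amix just sums the inputs, so I may need to lower the per-input volume values to avoid clipping.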


-
lavc/qsvenc: add support for oneVPL string API
29 February 2024, by Mandava, Mounika
A new option -qsv_params <str> is added, where <str> is a :-separated list of key=value parameters. Example:

$ ffmpeg -y -f lavfi -i testsrc -vf "format=nv12" -c:v h264_qsv -qsv_params "TargetUsage=1:GopPicSize=30:GopRefDist=2:TargetKbps=5000" -f null -

Signed-off-by: Mounika Mandava <mounika.mandava@intel.com>
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>
-
How to create a video from a webcam stream and a canvas?
1 May 2024, by Stefdelec
I am trying to generate a video in the browser from different cuts:
Slide: stream from canvas
Video: stream from webcam


I just want to allow the user to download the edited video made of slide1 + video1 + slide2 + video2 + slide3 + video3.


Here is my code:


const canvas = document.getElementById('myCanvas');
const ctx = canvas.getContext('2d');
const webcam = document.getElementById('webcam');
const videoPlayer = document.createElement('video');
videoPlayer.controls = true;
document.body.appendChild(videoPlayer);
const videoWidth = 640;
const videoHeight = 480;
let keepAnimating = true;
const frameRate = 30;

// Attempt to get webcam access
function setupWebcam() {
  const constraints = {
    video: {
      frameRate: frameRate,
      width: videoWidth,
      height: videoHeight
    }
  };
  navigator.mediaDevices.getUserMedia(constraints)
    .then(stream => {
      webcam.srcObject = stream;
      webcam.addEventListener('loadedmetadata', () => {
        recordSegments();
        console.log('Webcam feed is now displayed');
      });
    })
    .catch(err => {
      console.error("Error accessing webcam:", err);
      alert('Could not access the webcam. Please ensure permissions are granted and try again.');
    });
}


// Function to continuously draw on the canvas
function animateCanvas(content) {
  if (!keepAnimating) {
    console.log("keepAnimating", keepAnimating);
    return; // Stop the animation when keepAnimating is false
  }

  ctx.clearRect(0, 0, canvas.width, canvas.height); // Clear previous drawings
  ctx.fillStyle = `rgba(${Math.floor(Math.random() * 255)}, ${Math.floor(Math.random() * 255)}, ${Math.floor(Math.random() * 255)}, 0.5)`;
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = '#000';
  ctx.font = '48px serif';
  ctx.fillText(content + ' ' + new Date().toLocaleTimeString(), 50, 100);

  // Request the next frame
  requestAnimationFrame(() => animateCanvas(content));
}


// Initialize recording segments array
const recordedSegments = [];

// Modified startRecording to manage animation
function startRecording(stream, duration = 5000, content) {
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const data = [];

  recorder.ondataavailable = e => data.push(e.data);

  // Start animating the canvas
  keepAnimating = true;
  animateCanvas(content);
  recorder.start();

  return new Promise((resolve) => {
    // Automatically stop recording after 'duration' milliseconds
    setTimeout(() => {
      recorder.stop();
      // Stop the animation when recording stops
      keepAnimating = false;
    }, duration);

    recorder.onstop = () => {
      const blob = new Blob(data, { type: 'video/webm' });
      recordedSegments.push(blob);
      keepAnimating = true;
      resolve(blob);
    };
  });
}

// Sequence to record segments
async function recordSegments() {
  // Record canvas with dynamic content
  await startRecording(canvas.captureStream(frameRate), 2000, 'Canvas Draw 1').then(() => console.log('Canvas 1 recorded'));

  await startRecording(webcam.srcObject, 3000).then(() => console.log('Webcam 1 recorded'));

  await startRecording(webcam.srcObject).then(() => console.log('Webcam 2 recorded'));
  mergeAndDownloadVideo();
}

function downLoadVideo(blob) {
  const url = URL.createObjectURL(blob);

  // Create an anchor element and trigger a download
  const a = document.createElement('a');
  a.style.display = 'none';
  a.href = url;
  a.download = 'merged-video.webm';
  document.body.appendChild(a);
  a.click();

  // Clean up by revoking the Blob URL and removing the anchor element after the download
  setTimeout(() => {
    document.body.removeChild(a);
    window.URL.revokeObjectURL(url);
  }, 100);
}

function mergeAndDownloadVideo() {
  console.log("recordedSegments length", recordedSegments.length);
  // Create a new Blob from all recorded video segments
  const superBlob = new Blob(recordedSegments, { type: 'video/webm' });

  downLoadVideo(superBlob);
}

// Start the process by setting up the webcam first
setupWebcam();



You can find it here: https://jsfiddle.net/Sulot/nmqf6wdj/25/


I am unable to get "slide" + webcam video + "slide" + webcam video.


It merges only the first two segments, but not the others. I also tried ffmpeg on the browser side.
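From what I have read, this is probably because each MediaRecorder session produces a complete standalone WebM file with its own header, and naively concatenating the blobs leaves a file that players stop reading after the first segment. The workaround I am exploring (just a sketch: the composite canvas, the drawLoop/recordComposite names and the hard-coded timings are mine, and it assumes the webcam video element is already playing) is to keep a single MediaRecorder running on one canvas and switch what gets drawn onto it:

// Record ONE continuous stream instead of concatenating separate blobs.
const composite = document.createElement('canvas');
composite.width = 640;
composite.height = 480;
const cctx = composite.getContext('2d');

let mode = 'slide';              // 'slide' draws text, 'webcam' copies webcam frames
let slideText = 'Canvas Draw 1';

function drawLoop() {
  if (mode === 'slide') {
    cctx.fillStyle = '#fff';
    cctx.fillRect(0, 0, composite.width, composite.height);
    cctx.fillStyle = '#000';
    cctx.font = '48px serif';
    cctx.fillText(slideText, 50, 100);
  } else {
    // Copy the current webcam frame onto the same canvas
    cctx.drawImage(webcam, 0, 0, composite.width, composite.height);
  }
  requestAnimationFrame(drawLoop);
}

function recordComposite() {
  const chunks = [];
  const recorder = new MediaRecorder(composite.captureStream(30), { mimeType: 'video/webm' });
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.onstop = () => downLoadVideo(new Blob(chunks, { type: 'video/webm' }));

  drawLoop();
  recorder.start();

  // Same sequence as recordSegments(), but as mode switches on one stream
  setTimeout(() => { mode = 'webcam'; }, 2000);                             // webcam 1
  setTimeout(() => { mode = 'slide'; slideText = 'Canvas Draw 2'; }, 5000); // slide 2
  setTimeout(() => { mode = 'webcam'; }, 7000);                             // webcam 2
  setTimeout(() => recorder.stop(), 12000);
}

This records video only; the webcam's audio track would still have to be added to the canvas stream separately.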