Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (66)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you need to install all of the software dependencies on the server manually.
    If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below to compare.
    To use it, activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
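
    As a rough, hypothetical illustration (not part of the original article): Chosen is a jQuery plugin, and enhancing multi-select fields with it typically comes down to a call like the one below. The selector mirrors the select[multiple] example above; the option values are placeholders only.

    // Hypothetical sketch, not the plugin's SPIP configuration:
    // apply Chosen to every multiple-selection field once the page is ready.
    // Assumes jQuery and the Chosen plugin script are already loaded.
    jQuery(function ($) {
      $('select[multiple]').chosen({
        width: '100%',                  // let the widget fill its container
        no_results_text: 'No match for' // message shown when a search finds nothing
      });
    });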

On other sites (8356)

  • How to create video from a stream webcam and canvas ?

    1 May 2024, by Stefdelec

    I am trying to generate a video in the browser from different cuts:
Slide: stream from the canvas
Video: stream from the webcam

    


    I just want to let the user download the edited video composed of
slide1 + video1 + slide2 + video2 + slide3 + video3.

    


    Here is my code:

    


    const canvas = document.getElementById('myCanvas');
const ctx = canvas.getContext('2d');
const webcam = document.getElementById('webcam');
const videoPlayer = document.createElement('video');
videoPlayer.controls = true;
document.body.appendChild(videoPlayer);
const videoWidth = 640;
const videoHeight = 480;
let keepAnimating = true;
const frameRate = 30;
// Attempt to get webcam access
function setupWebcam() {
  const constraints = {
    video: {
      frameRate: frameRate,
      width: videoWidth,
      height: videoHeight
    }
  };
  navigator.mediaDevices.getUserMedia(constraints)
    .then(stream => {
      webcam.srcObject = stream;
      webcam.addEventListener('loadedmetadata', () => {
        recordSegments();
        console.log('Webcam feed is now displayed');
      });
    })
    .catch(err => {
      console.error("Error accessing webcam:", err);
      alert('Could not access the webcam. Please ensure permissions are granted and try again.');
    });
}


// Function to continuously draw on the canvas
function animateCanvas(content) {
  if (!keepAnimating) {
    console.log("keepAnimating", keepAnimating);
    return; // Stop the animation when keepAnimating is false
  }

  ctx.clearRect(0, 0, canvas.width, canvas.height); // Clear previous drawings
  ctx.fillStyle = `rgba(${Math.floor(Math.random() * 255)}, ${Math.floor(Math.random() * 255)}, ${Math.floor(Math.random() * 255)}, 0.5)`;
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = '#000';
  ctx.font = '48px serif';
  ctx.fillText(content + ' ' + new Date().toLocaleTimeString(), 50, 100);

  // Request the next frame
  requestAnimationFrame(() => animateCanvas(content));
}


// Initialize recording segments array
const recordedSegments = [];
// Modified startRecording to manage animation
function startRecording(stream, duration = 5000, content) {
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const data = [];

  recorder.ondataavailable = e => data.push(e.data);


  // Start animating the canvas
  keepAnimating = true;
  animateCanvas(content);
  recorder.start();
  return new Promise((resolve) => {
    // Automatically stop recording after 'duration' milliseconds
    setTimeout(() => {
      recorder.stop();
      // Stop the animation when recording stops
      keepAnimating = false;
    }, duration);

    recorder.onstop = () => {
      const blob = new Blob(data, { type: 'video/webm' });
      recordedSegments.push(blob);
      keepAnimating = true;
      resolve(blob);
    };
  });
}

// Sequence to record segments
async function recordSegments() {
  // Record canvas with dynamic content
  await startRecording(canvas.captureStream(frameRate), 2000, 'Canvas Draw 1').then(() => console.log('Canvas 1 recorded'));

  await startRecording(webcam.srcObject, 3000).then(() => console.log('Webcam 1 recorded'));

  await startRecording(webcam.srcObject).then(() => console.log('Webcam 2 recorded'));
  mergeAndDownloadVideo();
}

function downLoadVideo(blob){
 const url = URL.createObjectURL(blob);

  // Create an anchor element and trigger a download
  const a = document.createElement('a');
  a.style.display = 'none';
  a.href = url;
  a.download = 'merged-video.webm';
  document.body.appendChild(a);
  a.click();

  // Clean up by revoking the Blob URL and removing the anchor element after the download
  setTimeout(() => {
    document.body.removeChild(a);
    window.URL.revokeObjectURL(url);
  }, 100);
}
function mergeAndDownloadVideo() {
  console.log("recordedSegments length", recordedSegments.length);
  // Create a new Blob from all recorded video segments
  const superBlob = new Blob(recordedSegments, { type: 'video/webm' });
  
  downLoadVideo(superBlob);
}

// Start the process by setting up the webcam first
setupWebcam();


    


    You can find it here: https://jsfiddle.net/Sulot/nmqf6wdj/25/

    


    I am unable to get the "slide" + webcam video + "slide" + webcam video sequence.

    


    It merges only the first two segments, but not the others. I also tried ffmpeg on the browser side.
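
    A commonly suggested direction (not part of the original question, and only a sketch): WebM blobs recorded by separate MediaRecorder sessions each carry their own header and timestamps, so concatenating them with new Blob(...) generally yields a file that players stop reading after the first segment. One workaround is to keep a single recorder running on one canvas stream and switch what is drawn on that canvas, for example:

    // Hypothetical sketch: record ONE continuous canvas stream and alternate what is
    // drawn on it (slide content vs. webcam frames) instead of concatenating
    // independently recorded WebM blobs. Element ids match the question's markup.
    const canvas = document.getElementById('myCanvas');
    const ctx = canvas.getContext('2d');
    const webcam = document.getElementById('webcam');
    let drawWebcam = false; // toggled to switch between "slide" and webcam

    function drawLoop() {
      if (drawWebcam) {
        // Copy the current webcam frame onto the canvas
        ctx.drawImage(webcam, 0, 0, canvas.width, canvas.height);
      } else {
        // Draw a simple "slide"
        ctx.fillStyle = '#ccc';
        ctx.fillRect(0, 0, canvas.width, canvas.height);
        ctx.fillStyle = '#000';
        ctx.font = '48px serif';
        ctx.fillText('Slide ' + new Date().toLocaleTimeString(), 50, 100);
      }
      requestAnimationFrame(drawLoop);
    }

    async function recordSequence() {
      const stream = canvas.captureStream(30);
      const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
      const chunks = [];
      recorder.ondataavailable = e => chunks.push(e.data);
      const stopped = new Promise(resolve => (recorder.onstop = resolve));
      const wait = ms => new Promise(r => setTimeout(r, ms));

      drawLoop();
      recorder.start();
      // slide1 + video1 + slide2 + video2, all inside one recording
      drawWebcam = false; await wait(2000);
      drawWebcam = true;  await wait(3000);
      drawWebcam = false; await wait(2000);
      drawWebcam = true;  await wait(3000);
      recorder.stop();
      await stopped;

      return new Blob(chunks, { type: 'video/webm' }); // single, playable WebM
    }

    Note that this sketch records video only; webcam audio would still need to be captured and mixed in separately.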

    


  • lavc/qsvenc: add support for oneVPL string API

    29 February 2024, by Mandava, Mounika
    lavc/qsvenc: add support for oneVPL string API
    

    A new option -qsv_params <str> is added, where <str> is a :-separated
    list of key=value parameters.

    Example:
    $ ffmpeg -y -f lavfi -i testsrc -vf "format=nv12" -c:v h264_qsv -qsv_params
    "TargetUsage=1:GopPicSize=30:GopRefDist=2:TargetKbps=5000" -f null -

    Signed-off-by: Mounika Mandava <mounika.mandava@intel.com>
    Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>

    • [DH] Changelog
    • [DH] configure
    • [DH] doc/encoders.texi
    • [DH] libavcodec/qsvenc.c
    • [DH] libavcodec/qsvenc.h
  • Why every audio part louder in FFmpeg while I join them in one audio ?

    13 May 2024, by Volodymyr Bilovus

    I am trying to dub an audio track: I have the original audio track and I want to put translated audio parts on top of the original.


    translated audio 100% vol : —p1--- ---p2— -----p3--- —p4—


    original audio 5% vol : -----------------------------------------


    Here is my FFmpeg command with filter_complex


    ffmpeg -i video_wpmXlZF4XiE.opus -i 989-audio.mp3 -i 989-audio.mp3 -i 989-audio.mp3 -i 989-audio.mp3 \
-filter_complex "\
[0:a]loudnorm=I=-14:TP=-2:LRA=7, volume=0.05[original]; \
[1:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=5000|5000, volume=1.0[sent1]; \
[2:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=10000|10000, volume=1.0[sent2]; \
[3:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=20000|20000, volume=1.0[sent3]; \
[4:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=30000|30000, volume=1.0[sent4]; \
[original][sent1][sent2][sent3][sent4]amix=inputs=5:duration=longest[out]" \
-map "[out]" output.mp3


    The audio parts I put on top of the original track are all the same file, -i 989-audio.mp3; I did this on purpose to show the problem. Here are the audio levels on the final generated track:
    [screenshot: audio levels of the generated track]


    As you can see, the first and second parts differ only slightly, but the third and fourth have a noticeably higher volume (note that the audio is identical). Why does this happen, and how can I work around this odd behaviour?

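    A plausible explanation (not from the original post): by default, amix rescales each input according to how many inputs are currently active. Once the earlier, shorter delayed segments have finished, the remaining ones are attenuated less, so later parts sound louder even though the source file is identical. On reasonably recent FFmpeg builds, one commonly suggested workaround is to disable that rescaling with amix's normalize option and manage levels yourself, for example:

    ffmpeg -i video_wpmXlZF4XiE.opus -i 989-audio.mp3 -i 989-audio.mp3 -i 989-audio.mp3 -i 989-audio.mp3 \
-filter_complex "\
[0:a]loudnorm=I=-14:TP=-2:LRA=7, volume=0.05[original]; \
[1:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=5000|5000[sent1]; \
[2:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=10000|10000[sent2]; \
[3:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=20000|20000[sent3]; \
[4:a]loudnorm=I=-14:TP=-2:LRA=7, adelay=30000|30000[sent4]; \
[original][sent1][sent2][sent3][sent4]amix=inputs=5:duration=longest:normalize=0[out]" \
-map "[out]" output.mp3

    If that option is not available in your FFmpeg version, an alternative is to compensate with per-input volume filters, or to mix the delayed segments together first and then overlay the result on the original track in a second amix with only two inputs.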