Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (46)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Permissions overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their information on the authors page

On other sites (7344)

  • FFMPEG fast quality video encoding without quality loss & less storage occupancy (maybe using GPU)

    27 March 2024, by Diwash Mainali

    I have written some Go code, but it is slow and the video compression rate is not that impressive either. I am new to FFmpeg and my entire project depends on it. I have tried different video codecs like VP9, H.264, H.265, NVENC, AV1, etc. All of them were too slow (maybe I am not good enough to optimize them). My project is based on Go and the current codec I am using is libx264. Can anyone help me optimize the video encoding part of my project?

    


    libx264:

    


// encodeVideo builds the ffmpeg command used to re-encode an uploaded video with libx264.
func encodeVideo(fileName, bitrate, crf, preset, resolution string) *exec.Cmd {
    return exec.Command("C:\\ffmpeg-6.1-full_build\\bin\\ffmpeg",
        "-i", "./userUploadDatas/videos/"+fileName,
        "-c:v", "libx264",
        "-b:v", bitrate,
        "-crf", crf,
        "-preset", preset,
        "-vf", "scale="+resolution,
        "./userUploadDatas/videos/"+fileName+"_encoded"+".mp4")
}


    


    Please provide static values for each parameter. Any codec will work for me as long as it is fast, occupies less space, and doesn't lose quality.

    


    The problems I have faced with different codecs are:

    


      

    1. NVENC: fast, but the video size doubles and video quality is lost.
    2. libx264: the best I can find currently, but it is slow.
    3. h264, h265: occupy more space.
    4. AV1 & VP9: too slow; couldn't even encode a 30-second video in 1 hour.

    The hardware I am using is a Ryzen 7 5000-series CPU and an NVIDIA RTX 3050 Ti Laptop GPU.
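
    This is not part of the original question, but as a hedged sketch of the kind of adjustment usually tried: for libx264, -crf normally takes precedence over -b:v, so the command above is effectively CRF-encoding anyway and the bitrate flag can be dropped; for the RTX 3050 Ti, NVENC can be run in constant-quality VBR mode instead of a fixed bitrate so that file size follows content. The flag names below are standard ffmpeg options, but the values, resolutions and paths are placeholders, not tested recommendations.

# Sketch only -- placeholder values and paths.
# CPU: CRF-only libx264 (lower CRF = higher quality, slower preset = smaller file).
ffmpeg -i input.mp4 -c:v libx264 -preset medium -crf 23 -vf scale=1280:720 -c:a copy out_x264.mp4

# GPU: HEVC NVENC in constant-quality VBR mode (needs an ffmpeg build with NVENC support).
ffmpeg -i input.mp4 -c:v hevc_nvenc -preset p5 -rc vbr -cq 28 -b:v 0 -vf scale=1280:720 -c:a copy out_nvenc.mp4

    In the Go function above, these flags would simply replace the corresponding exec.Command arguments.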

    


  • Streams stop when creating a quad split Multi-viewer using FFMPEG [closed]

    6 April 2024, by Chris Parker

    I created an FFmpeg script that takes in 4 SRT live streams and displays them in a quad-split multi-viewer. However, when I launch this script, all 4 SRT streams start playing, but then 3 pause and only 1 keeps playing.

    


    ffmpeg \
-re -hwaccel auto -hwaccel_output_csp videocore \
-stream_loop -1 \
-i "srt://address:5000?streamid=stream-id" \
-stream_loop -1 \
-i "srt://address:5000?streamid=stream-id" \
-stream_loop -1 \
-i "srt://address:5000?streamid=stream-id" \
-stream_loop -1 \
-i "srt://address:5000?streamid=stream-id" \
-filter_complex "\
[0]scale=1920x1080,setdar=16/9[a];\
[1]scale=1920x1080,setdar=16/9[b];\
[2]scale=1920x1080,setdar=16/9[c];\
[3]scale=1920x1080,setdar=16/9[d];\
[a][b][c][d]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[v]" \
-map "[v]" \
-c:v libx264 -preset veryfast -crf 28 \
-pix_fmt yuv420p -g 48 -bf 2 -refs 4 \
-f mpegts -b:v 10M -maxrate:v 15M -bufsize:v 30M \
-fflags +discardcorrupt -reset_timestamps 1 \
"srt://address:5000?streamid=stream-id"


    


    I am getting a lot of the error below:

    


    16:48:49.628704/SRT:RcvQ:w2!W:SRT.qr: @214003038:No room to store incoming packet seqno 649744568, insert offset 7190. Space avail 0/8192 pkts. (TSBPD ready in -12342ms, timespan 6093 ms). GETTIME_MONOTONIC drift 0 ms.


    


    I've tried a smaller resolution and adjusting buffer sizes, as well as the other flags, but I'm not sure what else I can do.
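
    This is not from the original post, but the log suggests the SRT receive buffer (8192 packets) is full because ffmpeg is not reading the input fast enough, rather than a network fault. Below is a hedged sketch of the adjustments most commonly tried: enlarging the per-input demuxer queue with -thread_queue_size and not throttling live inputs with -re. The values are placeholders and this is not a confirmed fix for this particular stream.

# Sketch only -- placeholder values.
# -re, -stream_loop and the hwaccel flags are dropped on the assumption that
# live SRT sources are already real-time; the filter graph and output options
# are the same as in the command above.
ffmpeg \
  -thread_queue_size 4096 -i "srt://address:5000?streamid=stream-id" \
  -thread_queue_size 4096 -i "srt://address:5000?streamid=stream-id" \
  -thread_queue_size 4096 -i "srt://address:5000?streamid=stream-id" \
  -thread_queue_size 4096 -i "srt://address:5000?streamid=stream-id" \
  -filter_complex "[0]scale=1920x1080,setdar=16/9[a];[1]scale=1920x1080,setdar=16/9[b];[2]scale=1920x1080,setdar=16/9[c];[3]scale=1920x1080,setdar=16/9[d];[a][b][c][d]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[v]" \
  -map "[v]" -c:v libx264 -preset veryfast -crf 28 -pix_fmt yuv420p -g 48 -bf 2 -refs 4 \
  -f mpegts -b:v 10M -maxrate:v 15M -bufsize:v 30M \
  -fflags +discardcorrupt \
  "srt://address:5000?streamid=stream-id"

    If that is not enough, the SRT URL also accepts libsrt options such as latency and rcvbuf (see ffmpeg -h protocol=srt); their units and defaults depend on the build, so they are left out of the sketch.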

    


  • How to create a video from a webcam stream and a canvas?

    1 May 2024, by Stefdelec

    I am trying to generate a video in the browser from different cuts:
Slide: stream from a canvas
Video: stream from the webcam

    


    I just want to allow the user to download the video edited as
slide1 + video1 + slide2 + video2 + slide3 + video3.

    


    Here is my code:

    


    const canvas = document.getElementById('myCanvas');
const ctx = canvas.getContext('2d');
const webcam = document.getElementById('webcam');
const videoPlayer = document.createElement('video');
videoPlayer.controls = true;
document.body.appendChild(videoPlayer);
const videoWidth = 640;
const videoHeight = 480;
let keepAnimating = true;
const frameRate=30;
// Attempt to get webcam access
function setupWebcam() {
  const constraints = {
    video: {
      frameRate: frameRate,
      width: videoWidth,
      height: videoHeight
    }
  };
  navigator.mediaDevices.getUserMedia(constraints)
    .then(stream => {
      webcam.srcObject = stream;
      webcam.addEventListener('loadedmetadata', () => {
        recordSegments();
        console.log('Webcam feed is now displayed');
      });
    })
    .catch(err => {
      console.error("Error accessing webcam:", err);
      alert('Could not access the webcam. Please ensure permissions are granted and try again.');
    });
}


// Function to continuously draw on the canvas
function animateCanvas(content) {
  if (!keepAnimating) {
    console.log("keepAnimating", keepAnimating);
    return;
  }; // Stop the animation when keepAnimating is false

  ctx.clearRect(0, 0, canvas.width, canvas.height); // Clear previous drawings
  ctx.fillStyle = `rgba(${Math.floor(Math.random() * 255)}, ${Math.floor(Math.random() * 255)}, ${Math.floor(Math.random() * 255)}, 0.5)`;
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = '#000';
  ctx.font = '48px serif';
  ctx.fillText(content + ' ' + new Date().toLocaleTimeString(), 50, 100);

  // Request the next frame
  requestAnimationFrame(() => animateCanvas(content));
}


// Initialize recording segments array
const recordedSegments = [];
// Modified startRecording to manage animation
function startRecording(stream, duration = 5000, content) {
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const data = [];

  recorder.ondataavailable = e => data.push(e.data);


  // Start animating the canvas
  keepAnimating = true;
  animateCanvas(content);
  recorder.start();
  return new Promise((resolve) => {
    // Automatically stop recording after 'duration' milliseconds
    setTimeout(() => {
      recorder.stop();
      // Stop the animation when recording stops
      keepAnimating = false;
    }, duration);

    recorder.onstop = () => {
      const blob = new Blob(data, { type: 'video/webm' });
      recordedSegments.push(blob);
       keepAnimating = true;
      resolve(blob);
    };
  });
}

// Sequence to record segments
async function recordSegments() {
  // Record canvas with dynamic content
  await startRecording(canvas.captureStream(frameRate), 2000, 'Canvas Draw 1').then(() => console.log('Canvas 1 recorded'));

  await startRecording(webcam.srcObject, 3000).then(() => console.log('Webcam 1 recorded'));

  await startRecording(webcam.srcObject).then(() => console.log('Webcam 1 recorded'));
  mergeAndDownloadVideo();
}

function downLoadVideo(blob){
 const url = URL.createObjectURL(blob);

  // Create an anchor element and trigger a download
  const a = document.createElement('a');
  a.style.display = 'none';
  a.href = url;
  a.download = 'merged-video.webm';
  document.body.appendChild(a);
  a.click();

  // Clean up by revoking the Blob URL and removing the anchor element after the download
  setTimeout(() => {
    document.body.removeChild(a);
    window.URL.revokeObjectURL(url);
  }, 100);
}
function mergeAndDownloadVideo() {
  console.log("recordedSegments length", recordedSegments.length);
  // Create a new Blob from all recorded video segments
  const superBlob = new Blob(recordedSegments, { type: 'video/webm' });
  
  downLoadVideo(superBlob)

  // Create a URL for the superBlob
 
}

// Start the process by setting up the webcam first
setupWebcam();


    


    You can find it here: https://jsfiddle.net/Sulot/nmqf6wdj/25/

    


    I am unable to get the sequence "slide" + webcam video + "slide" + webcam video.

    


    It merges only the first 2 segments, but not the others. I tried with ffmpeg on the browser side.
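
    This is outside the original post, but a likely explanation: each time MediaRecorder stops, it produces a complete, self-contained WebM file, so concatenating the Blobs with new Blob(recordedSegments) does not yield one valid WebM, and players typically stop at the first segment boundary. The segments generally need to be remuxed or re-encoded into a single file, for example with ffmpeg (or ffmpeg.wasm in the browser). Below is a hedged sketch using the ffmpeg CLI, assuming the segments were downloaded as seg1.webm, seg2.webm and seg3.webm (hypothetical names), normalising everything to the 640x480 webcam size and ignoring audio for simplicity.

# Sketch only -- hypothetical file names, placeholder resolution.
# The concat filter re-encodes, which tolerates the canvas and webcam segments
# having different resolutions; a=0 because the canvas segments carry no audio.
ffmpeg -i seg1.webm -i seg2.webm -i seg3.webm \
  -filter_complex "[0:v]scale=640:480,setsar=1[v0];[1:v]scale=640:480,setsar=1[v1];[2:v]scale=640:480,setsar=1[v2];[v0][v1][v2]concat=n=3:v=1:a=0[v]" \
  -map "[v]" -c:v libvpx-vp9 -crf 32 -b:v 0 -an merged.webm

    The same filter graph could in principle be run with ffmpeg.wasm if the merge has to stay in the browser.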