Advanced search

Media (1)

Keyword: - Tags - / graphisme

Other articles (44)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

  • Possible deployments

    31 January 2010

    Two kinds of deployment are possible, depending on two factors: the installation method chosen (standalone or as a farm), and the expected number of daily encodings and the expected traffic.
    Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), and all of this has to be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (10991)

  • create a timelapse video using MediaRecorder API (and ffmpeg?)

    24 August 2022, by The Blind Hawk

    Summary

    I have a version of my code already working on Chrome and Edge (on Mac, Windows, and Android), but I need some fixes for it to work on iOS (Safari/Chrome).
    My objective is to record around 25 minutes and download a timelapse version of the recording.

    Final product requirements:

    speed: 3 fps
    length: ~25 s

    (I need to record one frame every 20 seconds for 25 minutes: 75 frames played back at 3 fps gives ~25 seconds.)

    this.secondStream settings:

    this.secondStream = await navigator.mediaDevices.getUserMedia({
        audio: false,
        video: { width: 430, height: 430, facingMode: "user" }
    });

    My code for iOS so far:

    startIOSVideoRecording: function() {
        console.log("setting up recorder");
        var self = this;
        this.data = [];

        var options;
        if (MediaRecorder.isTypeSupported('video/mp4')) {
            // iOS does not support webm, so I will be using mp4
            options = { mimeType: 'video/mp4', videoBitsPerSecond: 1000000 };
        } else {
            console.log("ERROR: mp4 is not supported, trying to default to webm");
            options = { mimeType: 'video/webm' };
        }
        console.log("options settings:");
        console.log(options);

        this.recorder = new MediaRecorder(this.secondStream, options);

        this.recorder.ondataavailable = function(evt) {
            if (evt.data && evt.data.size > 0) {
                self.data.push(evt.data);
                console.log('chunk size: ' + evt.data.size);
            }
        };

        this.recorder.onstop = function(evt) {
            console.log('recorder stopping');
            var blob = new Blob(self.data, { type: "video/mp4" });
            self.download(blob, "mp4");
            self.sendMail(blob); // was sendMail(videoBlob); videoBlob is undefined here
        };

        console.log("finished setup, starting");
        this.recorder.start(1200); // emit a data chunk every 1200 ms

        function sleep(ms) { return new Promise(resolve => setTimeout(resolve, ms)); }

        async function looper() {
            // record ~0.5 s, then pause for 18 s: one short burst roughly every 20 s
            await sleep(500);
            self.recorder.pause();
            await sleep(18000);
            self.recorder.resume();
            looper();
        }
        looper();
    },

    Issues

    Only one call to getUserMedia()

    I am already using this.secondStream elsewhere, and I need its settings to stay as they are for the other functionality.
    On Chrome and Edge I could just call getUserMedia() again with different settings and the issue would be solved, but on iOS, calling getUserMedia() a second time kills the first stream.
    The settings that I was planning to use (works on Chrome and Edge):

    navigator.mediaDevices.getUserMedia({
        audio: false,
        video: {
            width: 360, height: 240, facingMode: "user",
            frameRate: { min: 0, ideal: 0.05, max: 0.1 }
        },
    });

    The timelapse library I am using does not support mp4 (ffmpeg as an alternative?)

    I am apparently forced to use mp4 on iOS, but that means I cannot use the library I was relying on, so I need an alternative.
    I am thinking of using ffmpeg, but I cannot find any documentation on making it interact with the blob before the download.
    I do not want to edit the video after downloading it; I want to download the already edited version, so no terminal commands. A sketch of the kind of thing I mean is below.
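
    Here is a minimal sketch of that idea, assuming the @ffmpeg/ffmpeg 0.12.x API (FFmpeg class with writeFile/exec/readFile) can be loaded in the page; the file names, the 60x setpts speed-up (25 min -> 25 s) and the 3 fps output are illustrative values, not a tested iOS solution:

    import { FFmpeg } from '@ffmpeg/ffmpeg';
    import { fetchFile } from '@ffmpeg/util';

    // Hypothetical helper: turn the recorded blob into a timelapse entirely
    // in the browser, and hand back a downloadable blob.
    async function timelapseBlob(recordedBlob) {
        const ffmpeg = new FFmpeg();
        await ffmpeg.load();

        // Put the MediaRecorder output into ffmpeg's in-memory file system
        await ffmpeg.writeFile('in.mp4', await fetchFile(recordedBlob));

        // Speed the video up 60x and resample to 3 fps; '-an' drops audio.
        // Filter values are assumptions chosen to match the stated requirements.
        await ffmpeg.exec(['-i', 'in.mp4', '-vf', 'setpts=PTS/60', '-r', '3', '-an', 'out.mp4']);

        const data = await ffmpeg.readFile('out.mp4');
        return new Blob([data.buffer], { type: 'video/mp4' });
    }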

    


    MediaRecorder pause and resume are not ideal

    On Chrome and Edge I would keep one frame every 20 seconds by setting the frameRate to 0.05, but this does not seem to work on iOS, for two reasons.
    The first is the issue above: I cannot change the getUserMedia() settings without destroying the initial stream.
    And even after changing the settings, it seems that setting the frame rate below 1 is not supported on iOS. Maybe I got something else wrong, but I was not able to open the downloaded file.
    I therefore tried relying on pausing and resuming the MediaRecorder, but this brings two further issues:
    I am currently saving 1 second of video every 20 seconds, not 1 frame every 20 seconds, and I cannot find a workaround.
    Pause and resume also take a little time, which makes the code unreliable: I sometimes capture 2 seconds out of 20 instead of 1, and I have no guarantee that the loop actually runs every 20 seconds (it might be 18, it might be 25).
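
    One direction that might sidestep both problems, assuming canvas.captureStream() and CanvasCaptureMediaStreamTrack.requestFrame() actually work on iOS Safari (support has historically been patchy, so verify first): draw the live stream onto a canvas exactly once every 20 seconds and record the canvas instead of the camera, scheduling each grab against the start time so timer drift does not accumulate. A sketch:

    // Sketch only: record one frame every 20 s from an existing stream,
    // without touching the original getUserMedia() settings.
    async function recordOneFramePer20s(stream, totalFrames) {
        const video = document.createElement('video');
        video.muted = true;
        video.playsInline = true; // required for inline playback on iOS
        video.srcObject = stream;
        await video.play();

        const canvas = document.createElement('canvas');
        canvas.width = 430;
        canvas.height = 430;
        const ctx = canvas.getContext('2d');

        // frameRate 0: frames are only emitted when requestFrame() is called
        const canvasStream = canvas.captureStream(0);
        const track = canvasStream.getVideoTracks()[0];
        const recorder = new MediaRecorder(canvasStream, { mimeType: 'video/mp4' });
        const chunks = [];
        recorder.ondataavailable = (evt) => { if (evt.data.size > 0) chunks.push(evt.data); };
        recorder.start();

        const t0 = performance.now();
        let n = 0;
        (function grab() {
            ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
            track.requestFrame(); // push exactly one frame into the recording
            n += 1;
            if (n < totalFrames) {
                // schedule against t0 so errors do not accumulate across iterations
                setTimeout(grab, Math.max(0, t0 + n * 20000 - performance.now()));
            } else {
                recorder.stop();
            }
        })();

        return new Promise((resolve) => {
            recorder.onstop = () => resolve(new Blob(chunks, { type: 'video/mp4' }));
        });
    }

    Called as recordOneFramePer20s(this.secondStream, 75), this would produce 75 frames over 25 minutes.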

    


    My working code for other platforms

    This is my code for the other platforms, hope it helps!
    Quick note: you will need to give it a bit of time between setup and start.
    The timelapse library is here

    setupVideoRecording: function() {
        let video = {
            width: 360, height: 240, facingMode: "user",
            frameRate: { min: 0, ideal: 0.05, max: 0.1 }
        };
        navigator.mediaDevices.getUserMedia({
            audio: false,
            video: video,
        }).then((stream) => {
            // this is a video element
            const recVideo = document.getElementById('self-recorder');
            recVideo.muted = true;
            recVideo.autoplay = true;
            recVideo.srcObject = stream;
            recVideo.play();
        });
    },

    startVideoRecording: function() {
        console.log("setting up recorder");
        var self = this;
        this.data = [];

        var video = document.getElementById('self-recorder');

        var options;
        if (MediaRecorder.isTypeSupported('video/webm; codecs=vp9')) {
            options = { mimeType: 'video/webm; codecs=vp9' };
        } else if (MediaRecorder.isTypeSupported('video/webm')) {
            options = { mimeType: 'video/webm' };
        }
        console.log("options settings:");
        console.log(options);

        this.recorder = new MediaRecorder(video.captureStream(), options);

        this.recorder.ondataavailable = function(evt) {
            self.data.push(evt.data);
            console.log('chunk size: ' + evt.data.size);
        };

        this.recorder.onstop = function(evt) {
            console.log('recorder stopping');
            // timelapse() comes from the library linked above; it builds a
            // 3 fps timelapse blob from the recorded chunks
            timelapse(self.data, 3, function(blob) {
                self.download(blob, "webm");
            });
        };

        console.log("finished setup, starting");
        this.recorder.start(40000); // emit a data chunk every 40 s
    }

  • FFmpeg Wasm, error while creating video from canvas

    12 October 2023, by NineCattoRules

    I'm using ffmpeg.wasm in my Next.js app.

    Here are my specs:

    "@ffmpeg/ffmpeg": "^0.12.5",
    "@ffmpeg/util": "^0.12.0",
    "next": "^13.0.6",
    "react": "^18.2.0",

    I want to simply record a 5 s video from a canvas, so I tried:

    'use client'

    import React, { useEffect, useRef, useState } from 'react';
    import { FFmpeg } from '@ffmpeg/ffmpeg';
    import { fetchFile } from '@ffmpeg/util';

    const CanvasVideoRecorder = () => {
        const canvasRef = useRef(null);
        const videoChunksRef = useRef([]);
        const ffmpegRef = useRef(new FFmpeg({ log: true }));
        const [loaded, setLoaded] = useState(false);
        const [videoUrl, setVideoUrl] = useState(null);

        const load = async () => {
            await ffmpegRef.current.load({
                coreURL: '/js/ffmpeg-core.js',
                wasmURL: '/js/ffmpeg-core.wasm',
            });
            setLoaded(true);
        };

        useEffect(() => {
            const ctx = canvasRef.current.getContext('2d');
            function drawFrame(timestamp) {
                ctx.fillStyle = `rgb(${(Math.sin(timestamp / 500) * 128) + 128}, 0, 0)`;
                ctx.fillRect(0, 0, canvasRef.current.width, canvasRef.current.height);
                requestAnimationFrame(drawFrame);
            }
            requestAnimationFrame(drawFrame);
        }, []);

        const startRecording = async () => {
            const videoStream = canvasRef.current.captureStream(30);
            const videoRecorder = new MediaRecorder(videoStream, { mimeType: 'video/webm' });

            videoRecorder.ondataavailable = (event) => {
                if (event.data.size > 0) {
                    videoChunksRef.current.push(event.data);
                }
            };

            videoRecorder.start();
            setTimeout(() => videoRecorder.stop(), 5000);

            videoRecorder.onstop = async () => {
                try {
                    await ffmpegRef.current.writeFile('recorded.webm', await fetchFile(new Blob(videoChunksRef.current, { type: 'video/webm' })));

                    await ffmpegRef.current.exec('-y', '-i', 'recorded.webm', '-an', '-c:v', 'copy', 'output_copy.webm');

                    const data = await ffmpegRef.current.readFile('output_copy.webm');
                    const url = URL.createObjectURL(new Blob([data.buffer], { type: 'video/webm' }));

                    setVideoUrl(url);
                } catch (error) {
                    console.error("Error during processing:", error);
                }
            };
        };

        return (
            <div>
                <canvas ref={canvasRef} width="640" height="480"></canvas>

                {loaded ? (
                    <>
                        <button onClick={startRecording}>Start Recording</button>
                        {videoUrl && <video controls src={videoUrl}></video>}
                    </>
                ) : (
                    <button onClick={load}>Load FFmpeg</button>
                )}
            </div>
        );
    };

    export default CanvasVideoRecorder;

    I don't know why, but it catches an error:

    ErrnoError: FS error

    This error occurs when I do this:

    await ffmpegRef.current.exec('-y', '-i', 'recorded.webm', '-an', '-c:v', 'copy', 'output_copy.webm');
    const data = await ffmpegRef.current.readFile('output_copy.webm');

    The recorded.webm file is written correctly and I can read it, and ffmpegRef.current is well defined, so what's wrong with my logic? Why doesn't the exec command work?
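
    One thing worth checking, assuming the @ffmpeg/ffmpeg 0.12.x API (which the listed "^0.12.5" suggests): exec() takes the ffmpeg arguments as a single string array, not as separate parameters. Called variadically, the arguments are not passed the way exec() expects, so output_copy.webm is never created and the subsequent readFile() fails with exactly this kind of FS error. A plausible fix:

    // exec() in 0.12.x expects string[]; the variadic call above never creates output_copy.webm
    await ffmpegRef.current.exec(['-y', '-i', 'recorded.webm', '-an', '-c:v', 'copy', 'output_copy.webm']);
    const data = await ffmpegRef.current.readFile('output_copy.webm');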


  • How can I have ffmpeg receive both video and audio over RTP?

    23 May 2018, by KallDrexx

    I am trying to instruct FFmpeg to receive H.264 video and AAC audio via RTP, using out-of-band session initialization.

    To do that, I have the following local SDP:

    v=0
    o=sb
    s=-
    t=0 0
    c=IN IP4 127.0.0.1
    m=video 12100 RTP/AVP 96
    a=rtpmap:96 H264/90000
    m=audio 12101 RTP/AVP 97
    a=rtpmap:97 MPEG4-GENERIC/44100/2

    When I run ffmpeg with:

    ffmpeg -loglevel debug -protocol_whitelist "file,rtp,udp" -i .\test.sdp -strict -2 test.flv

    I get the following error:

    [udp @ 0000022d7fdafe80] bind failed: Error number -10048 occurred
    [AVIOContext @ 0000022d7fd84900] Statistics: 154 bytes read, 0 seeks
    .\test.sdp: Invalid data found when processing input

    Confused by that error code, I loaded it up on a Linux VM, and the bind error there was Address already in use.

    I tried changing both of those port numbers all around and kept getting that error. Finally, I removed one of the media streams from the SDP so that it had ONLY video or ONLY audio, and no binding error occurred.

    How can I get ffmpeg to bind to multiple RTP ports for RTP ingestion?
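
    A plausible explanation, based on standard RTP port allocation: each m= line claims two consecutive UDP ports, the even one for RTP and the odd one above it for RTCP. Video on 12100 therefore also binds 12101, which then collides with the audio stream's RTP port, hence "Address already in use". Spacing the media ports two apart (keeping them even) should avoid the clash; a sketch of the adjusted SDP:

    v=0
    o=sb
    s=-
    t=0 0
    c=IN IP4 127.0.0.1
    m=video 12100 RTP/AVP 96
    a=rtpmap:96 H264/90000
    m=audio 12102 RTP/AVP 97
    a=rtpmap:97 MPEG4-GENERIC/44100/2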