Other articles (60)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

  • (De)Activating features (plugins)

    18 February 2011

    To manage the addition and removal of extra features (plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To reach it, go to the configuration area and then to the "Plugin management" page.
    MediaSPIP ships by default with all the plugins considered "compatible"; they have been tested and integrated so that they work perfectly with each (...)

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their information on the authors page

On other sites (9326)

  • create a timelapse video using MediaRecorder API (and ffmpeg?)

    24 August 2022, by The Blind Hawk

    Summary

    


    I have a version of my code already working on Chrome and Edge (on Mac, Windows and Android), but I need some fixes for it to work on iOS (Safari/Chrome).

    My objective is to record around 25 minutes and download a timelapse version of the recording.

    Final product requirements:

    


    speed: 3 fps
    length: ~25 s

    (I need to record one frame every 20 seconds for 25 minutes, i.e. about 75 frames, which play back in roughly 25 seconds at 3 fps)


    


    this.secondStream settings :

    


    this.secondStream = await navigator.mediaDevices.getUserMedia({
        audio: false,
        video: { width: 430, height: 430, facingMode: "user" }
    });


    


    My code for iOS so far:

    


    startIOSVideoRecording: function() {
        console.log("setting up recorder");
        var self = this;
        this.data = [];

        if (MediaRecorder.isTypeSupported('video/mp4')) {
            // iOS does not support webm, so I will be using mp4
            var options = {mimeType: 'video/mp4', videoBitsPerSecond: 1000000};
        } else {
            console.log("ERROR: mp4 is not supported, trying to default to webm");
            var options = {mimeType: 'video/webm'};
        }
        console.log("options settings:");
        console.log(options);

        this.recorder = new MediaRecorder(this.secondStream, options);

        this.recorder.ondataavailable = function(evt) {
            if (evt.data && evt.data.size > 0) {
                self.data.push(evt.data);
                console.log('chunk size: ' + evt.data.size);
            }
        };

        this.recorder.onstop = function(evt) {
            console.log('recorder stopping');
            var blob = new Blob(self.data, {type: "video/mp4"});
            self.download(blob, "mp4");
            self.sendMail(blob); // was `videoBlob`, which is undefined here
        };

        console.log("finished setup, starting");
        this.recorder.start(1200);

        function sleep(ms) { return new Promise(resolve => setTimeout(resolve, ms)); }

        async function looper() {
            // I am trying to pick one second every 20, more or less
            await sleep(500);
            self.recorder.pause();
            await sleep(18000);
            self.recorder.resume();
            looper();
        }
        looper();
    },


    


    Issues

    


    Only one call to getUserMedia()

    


    I am already using this.secondStream elsewhere, and I need its settings to stay as they are for the other functionality.

    On Chrome and Edge, I could just call getUserMedia() again with different settings and the issue would be solved, but on iOS calling getUserMedia() a second time kills the first stream.

    The settings that I was planning to use (these work on Chrome and Edge) are below; a possible workaround for the iOS limitation is sketched after the snippet:

    


    navigator.mediaDevices.getUserMedia({
        audio: false,
        video: {
            width: 360, height: 240, facingMode: "user",
            frameRate: { min: 0, ideal: 0.05, max: 0.1 }
        }
    });


    


    The timelapse library I am using does not support mp4 (ffmpeg as an alternative?)

    


    I am apparently forced to use mp4 on iOS, but this prevents me from using the library I was relying on, so I need an alternative.

    I am thinking of using ffmpeg, but cannot find any documentation on making it interact with the blob before the download.

    I do not want to edit the video after downloading it; I want to download the already-edited version, so no terminal commands.
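
    For what it's worth, ffmpeg can also run inside the browser via ffmpeg.wasm, which reads the recorded Blob from an in-memory filesystem before anything is downloaded. The sketch below assumes the 0.11-style @ffmpeg/ffmpeg API (createFFmpeg, fetchFile, FS, run); the setpts/-r values are illustrative and would need tuning to hit the 3 fps / ~25 s target.

    import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

    // Assumes ffmpeg.wasm's 0.11-style API; newer releases renamed these calls.
    const ffmpeg = createFFmpeg({ log: true });

    async function timelapseFromBlob(recordedBlob) {
        if (!ffmpeg.isLoaded()) {
            await ffmpeg.load();
        }
        // Put the MediaRecorder output into ffmpeg.wasm's in-memory filesystem.
        ffmpeg.FS('writeFile', 'input.mp4', await fetchFile(recordedBlob));
        // Speed the clip up and force a 3 fps output (illustrative values).
        await ffmpeg.run(
            '-i', 'input.mp4',
            '-vf', 'setpts=PTS/20',
            '-r', '3',
            '-an',
            'output.mp4'
        );
        const data = ffmpeg.FS('readFile', 'output.mp4');
        // The result is still a Blob, so the existing download helper can be reused.
        return new Blob([data.buffer], { type: 'video/mp4' });
    }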

    


    MediaRecorder pause and resume are not ideal

    


    On Chrome and Edge I would keep one frame every 20 seconds by setting the frameRate to 0.05, but this does not seem to work on iOS for two reasons.

    The first is the issue above: I cannot change the getUserMedia() settings without destroying the initial stream.

    And even after changing the settings, it seems that a frame rate below 1 is not supported on iOS. Maybe I got something else wrong, but I was not able to open the downloaded file.

    Therefore I tried relying on pausing and resuming the MediaRecorder, but this brings two further issues:

    I am currently saving 1 second every 20 seconds and not 1 frame every 20 seconds, and I cannot find any workaround.

    Pause and resume take a little time, which makes the code unreliable: I sometimes capture 2 seconds out of 20 instead of 1, and I have no guarantee that the loop actually runs every 20 seconds (it might be 18, it might be 25).
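
    The recursive sleep() chain in the iOS code above accumulates drift, because each iteration waits relative to whenever the previous pause()/resume() happened to finish. A small, hedged sketch of one way to keep the 20-second grid: schedule each capture window against a fixed start time (the 500 ms window and 20 s period are the values from the question, and startAlignedLooper is an illustrative name).

    // Sketch only: keeps each capture window aligned to a fixed 20 s grid,
    // so pause()/resume() latency no longer shifts the following iterations.
    function startAlignedLooper(recorder, periodMs = 20000, captureMs = 500) {
        const t0 = Date.now();
        let tick = 0;

        function scheduleNext() {
            tick += 1;
            const target = t0 + tick * periodMs;       // absolute time of the next window
            setTimeout(async () => {
                recorder.resume();                      // open the capture window
                await new Promise(r => setTimeout(r, captureMs));
                recorder.pause();                       // close it again
                scheduleNext();                         // the next window stays on the grid
            }, Math.max(0, target - Date.now()));
        }

        recorder.pause();   // start paused; windows are opened only on the grid
        scheduleNext();
    }

    This still records short clips rather than single frames, so it only addresses the timing reliability, not the 1-second-vs-1-frame problem.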

    


    My working code for other platforms

    


    This is my code for the other platforms; hope it helps!

    Quick note: you will need to give it a bit of time between setup and start.
    
The timelapse library is here

    


    
        setupVideoRecording: function() {
            let video  = { 
                width: 360, height: 240, facingMode: "user", 
                frameRate: { min:0, ideal: 0.05, max:0.1 } 
            };
            navigator.mediaDevices.getUserMedia({
                audio: false,
                video: video,
            }).then((stream) => {
                // this is a video element
                const recVideo = document.getElementById('self-recorder');
                recVideo.muted = true;
                recVideo.autoplay = true;
                recVideo.srcObject = stream;
                recVideo.play();
            });
        },

        startVideoRecording: function() {
            console.log("setting up recorder");
            var self = this;
            this.data = [];

            var video = document.getElementById('self-recorder');

            if (MediaRecorder.isTypeSupported('video/webm; codecs=vp9')) {
                var options = {mimeType: 'video/webm; codecs=vp9'};
            } else  if (MediaRecorder.isTypeSupported('video/webm')) {
                var options = {mimeType: 'video/webm'};
            }
            console.log("options settings:");
            console.log(options);

            this.recorder = new MediaRecorder(video.captureStream(), options);

            this.recorder.ondataavailable = function(evt) {
                self.data.push(evt.data);
                console.log('chunk size: ' + evt.data.size);
            }

            this.recorder.onstop = function(evt) {
                console.log('recorder stopping');
                timelapse(self.data, 3, function(blob) {
                    self.download(blob, "webm");
                });
            }

            console.log("finished setup, starting");
            this.recorder.start(40000);
        }


    


  • Video streaming to YouTube using JavaScript and Java

    28 September 2020, by user1597121

    I'm trying to stream live video from a user's browser to YouTube Live. I already have the following working:

    


      

    1. Capture video from the webcam using navigator.mediaDevices.getUserMedia.

    2. Send video data to the server via WebSocket by periodically invoking this function:


    


    function getFrame(video)
{
    var canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext('2d').drawImage(video, 0, 0);

    return canvas.toDataURL('image/png', 1);
}


    


      

    3. Creating a live broadcast and stream on YouTube via their API, and receiving the RTMP info where they expect the video stream to be sent.


    


    This is where I seem to be stuck. I'm not sure how to send the video data from my Java server to YouTube's RTMP endpoint. I've looked into using Red5 and ffmpeg, but haven't been able to find an example where the data is continually arriving over a WebSocket; there is always some "stream" being redirected to YouTube, coming in on a dedicated port, or read from a pre-recorded video file.
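
    One pattern that comes up for this kind of setup is to let ffmpeg handle the RTMP side: spawn it once with the ingest URL returned by the YouTube API and write every incoming frame to its stdin. The sketch below is in Node.js purely for brevity (the question's server is Java, where ProcessBuilder would play the same role); the flags, the stream-key placeholder and the helper name pushFrame are assumptions, not the asker's code.

    const { spawn } = require('child_process');

    // ffmpeg reads PNG frames from stdin and pushes an FLV stream to the RTMP ingest
    // URL returned by the YouTube Live API (STREAM_KEY is a placeholder).
    const ffmpeg = spawn('ffmpeg', [
        '-f', 'image2pipe',
        '-vcodec', 'png',
        '-framerate', '30',
        '-i', '-',                 // frames arrive on stdin
        '-c:v', 'libx264',
        '-pix_fmt', 'yuv420p',
        '-f', 'flv',
        'rtmp://a.rtmp.youtube.com/live2/STREAM_KEY'
    ]);

    ffmpeg.stderr.on('data', d => console.log(d.toString()));

    // Called for every frame received over the WebSocket; `dataUrl` is the string
    // produced by getFrame() above ("data:image/png;base64,...").
    function pushFrame(dataUrl) {
        const png = Buffer.from(dataUrl.split(',')[1], 'base64');
        ffmpeg.stdin.write(png);
    }

    Note that a real ingest usually also needs an audio track and a steady frame rate, so treat this only as a starting point.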

    


    I have very limited knowledge of how video streaming works, so that's presumably making things more difficult than they should be. I'd really appreciate some help with getting this figured out. Thank you!

    


  • Unrecognized option 'crf'

    6 September 2022, by Arjit Kaushal

    I am trying to compress a video using ffmpeg, but I am facing errors in the command, although it runs perfectly fine in my Linux terminal (ffmpeg -i input.avi -vcodec libx264 -crf 24 output.avi).

    


    My code:

    


    void _compress() {
      if (_videoModel == null) return;
      String inputPath = _videoModel!.originalCachePath;
      String outputPath = _videoModel!.editCachePath;

      FFmpegKit.execute("-i $inputPath -vcodec libx264 -crf 24 -y $outputPath")
          .then((session) async {
        final returnCode = await session.getReturnCode();
        if (ReturnCode.isSuccess(returnCode)) {
          Navigator.pushNamed(context, PreviewPage.routeName,
              arguments: _videoModel);
        } else if (ReturnCode.isCancel(returnCode)) {
          print("compress cancel");
        } else {
          print("compress error: $returnCode");
          FFmpegKitConfig.enableLogCallback((log) {
            final message = log.getMessage();
            print(message);
          });
        }
      });
    }


    


    I am facing the following errors:

    Unrecognized option 'crf'
    I/flutter (31056): Error splitting the argument list
    Option not found