Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (28)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
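The thumbnail step described above can be sketched as a single ffmpeg call. This is a minimal illustration, not SPIPMotion's actual code; the file names and the one-second seek point are assumptions.

```python
# Sketch: extract a single-frame thumbnail from a source video with ffmpeg,
# in the spirit of the thumbnail-generation step described above.
# "source.mp4", "vignette.png" and the 1 s seek point are illustrative.
def thumbnail_command(source, thumbnail, seek_seconds=1):
    return [
        "ffmpeg", "-y",
        "-ss", str(seek_seconds),  # seek before the input for a fast keyframe seek
        "-i", source,
        "-vframes", "1",           # grab exactly one frame
        thumbnail,
    ]

cmd = thumbnail_command("source.mp4", "vignette.png")
```

The resulting list can be handed to `subprocess.run` (or an equivalent process spawner) unchanged.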

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash fallback is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (2713)

  • Merging multiple videos in a template/layout with Python FFMPEG ?

    14 January 2021, by J. M. Arnold

    I'm currently trying to edit videos with the Python library for FFMPEG. I'm working with multiple file formats, namely .mp4, .png and text inputs (.txt). The goal is to embed the different video files within a "layout"; for demonstration purposes I designed an example picture:


    [Image: Example]

    The output is supposed to be a 1920x1080 .mp4 file with the following elements:


    • Element 3 is the video itself (since it is a mobile phone screen recording, it is about the size displayed there)
    • Elements 1 and 2 are the "borders", i.e. static pictures
    • Element 4 represents a regularly changing text, input through the Python script (probably read from a .txt file)
    • Element 5 portrays a .png, .svg or the like; in general, a "picture" in the broad sense.

    What I'm trying to achieve is a sort of template file into which I "just" need to input the different .mp4 and .png files as well as the text; in the end I receive a .mp4 file, with my Python script acting as the navigator that sends the data packages to FFMPEG to process the video itself.

    I dug into the FFMPEG library as well as the python-specific repository and wasn't able to find such an option. There were lots of articles explaining the usage of "channel layouts" (though these don't seem to fit my need).
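Lacking a ready-made template option, one way to approximate the layout is a single ffmpeg invocation with a filter_complex graph, built here as a plain argument list rather than through the pip ffmpeg package. This is a sketch under assumptions: the file names, overlay positions, sizes and font size are placeholders, not values from the question.

```python
# Sketch: compose the five-element layout in one ffmpeg filter_complex call.
# All file names, coordinates and sizes below are illustrative assumptions.
def layout_command(background, screen_rec, picture, caption, output):
    filter_graph = (
        # place the screen recording (element 3) on the background (elements 1/2)
        "[0:v][1:v]overlay=x=600:y=0[base];"
        # place the picture (element 5) on top
        "[base][2:v]overlay=x=50:y=800[pic];"
        # draw the caption text (element 4)
        f"[pic]drawtext=text='{caption}':x=50:y=100:fontsize=48[out]"
    )
    return [
        "ffmpeg", "-y",
        "-loop", "1", "-i", background,   # static background picture
        "-i", screen_rec,                 # the screen recording
        "-loop", "1", "-i", picture,      # the overlaid picture
        "-filter_complex", filter_graph,
        "-map", "[out]", "-map", "1:a?",  # keep the recording's audio if present
        "-s", "1920x1080", "-shortest",
        output,
    ]

cmd = layout_command("background.png", "recording.mp4", "logo.png",
                     "Hello", "final.mp4")
```

For text that changes regularly, the script would regenerate `filter_graph` (or use drawtext's `textfile` option) before each run.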

    In case anyone wants to try with the same versions:


    • python --version: Python 3.7.3
    • pip show ffmpeg: Version: 1.4 (it's the most recent; on an off-topic note: it's not obligatory to use FFMPEG. I'd prefer using this library, but if it doesn't offer the functionality I'm looking for, I'd highly appreciate it if someone suggested something else)

  • Converting a binary stream to an mpegts stream

    22 December 2018, by John Kim

    I'm trying to create a livestream web app using NodeJS. The code I currently have emits a raw binary stream from the webcam on the client using Socket.IO, and the Node server receives this raw data. Using fluent-ffmpeg, I want to encode this binary stream into mpegts and send it to an RTMP server in real time, without creating any intermediary files. Could I somehow convert the binary stream into a webm stream and pipe that stream into an mpegts encoder in one ffmpeg command?

    My relevant frontend client code :

    navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
       socket.emit('config_rtmpDestination',url);
       socket.emit('start','start');
       mediaRecorder = new MediaRecorder(stream);
       mediaRecorder.start(2000);

       mediaRecorder.onstop = function(e) {
           stream.stop();
       }

       mediaRecorder.ondataavailable = function(e) {
         socket.emit("binarystream",e.data);
       }
    }).catch(function(err) {
       console.log('The following error occurred: ' + err);
       show_output('Local getUserMedia ERROR:'+err);
    });

    Relevant NodeJS server code :

    socket.on('binarystream',function(m){
       feedStream(m);
    });

    socket.on('start',function(m){
       ...
       var ops=[
           '-vcodec', socket._vcodec,'-i','-',
           '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
           '-an', '-bufsize', '1000',
           '-f', 'mpegts', socket._rtmpDestination
       ];
       ffmpeg_process=spawn('ffmpeg', ops);
       feedStream=function(data){
           ffmpeg_process.stdin.write(data);
       }
       ...
    }

    The above code of course doesn't work; I get these errors from ffmpeg:

    Error while decoding stream #0:1: Invalid data found when processing input
    [NULL @ 000001b15e67bd80] Invalid sync code 61f192.
    [libvpx @ 000001b15e6c5000] Failed to decode frame: Bitstream not supported by this decoder

    because I’m trying to convert raw binary data into mpegts.
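For reference, the argument list below sketches one commonly suggested adjustment: declaring the piped input's container explicitly so ffmpeg does not have to guess the format of the MediaRecorder chunks (which are WebM by default). It is written as a Python list for illustration; the Node code would pass an equivalent array to spawn. The RTMP URL is a placeholder, and the sketch mirrors the question's choice of -f mpegts rather than asserting it is the right muxer for RTMP.

```python
# Sketch of ffmpeg arguments for piping MediaRecorder WebM chunks from
# stdin to an MPEG-TS output. The destination URL is hypothetical.
rtmp_destination = "rtmp://example.com/live/stream"
ops = [
    "-f", "webm",   # declare the piped input's container instead of guessing
    "-i", "-",      # read the binary stream from stdin
    "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
    "-an",
    "-f", "mpegts", rtmp_destination,
]
```

Note that input options such as the container format must come before `-i -`; a codec option placed there, as in the question's code, is applied to the input rather than the output.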

  • WebRTC: audio out of sync after processing with ffmpeg (audio length is less than that of the video)

    22 November 2013, by QuickSilver

    I am recording a video using RecordRTC: WebRTC. After receiving the webm video and wav audio at the server, I'm encoding them to an mp4 file using ffmpeg (executing a shell command via PHP). But after the encoding process, the audio is out of sync with the video (the audio ends before the video). How can I fix this?

    I have noticed that the recorded audio is 1 second shorter than the video.

    The JS code is here:

    record.onclick = function() {
       record.disabled = true;
       var video_constraints = {
           mandatory: {
               "minWidth": "320",
               "minHeight": "240",
               "minFrameRate": "24",
               "maxWidth": "320",
               "maxHeight": "240",
               "maxFrameRate": "24"
           },
           optional: []
       };
       navigator.getUserMedia({
           audio: true,
           video: video_constraints
       }, function(stream) {
           preview.src = window.URL.createObjectURL(stream);
           preview.play();

           // var legalBufferValues = [256, 512, 1024, 2048, 4096, 8192, 16384];
           // sample-rates in at least the range 22050 to 96000.
           recordAudio = RecordRTC(stream, {
               /* extra important, we need to set a big buffer when capturing audio and video at the same time*/
               bufferSize: 16384
               //sampleRate: 45000
           });

           recordVideo = RecordRTC(stream, {
               type: 'video'
           });

           recordVideo.startRecording();
           recordAudio.startRecording();

           stop.disabled = false;
           recording_flag = true;
           $("#divcounter").show();
           $("#second-step-title").text('Record your video');
           initCountdown();
           uploadStatus.video = false;
           uploadStatus.audio = false;
       });
    };

    The ffmpeg command used is:

    ffmpeg -y -i 166890589.wav -i 166890589.webm -vcodec libx264 166890589.mp4

    Currently I'm adding an offset of -1 to ffmpeg, but I don't think it's right.

    ffmpeg -y -itsoffset -1 -i 166890589.wav -i 166890589.webm -vcodec libx264 166890589.mp4
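An alternative to a hand-tuned offset is to pad the audio track with silence up to the video's length; the sketch below builds such a command. The apad/-shortest combination is a general ffmpeg technique, not a fix verified against these particular files.

```python
# Sketch: pad the shorter audio with silence instead of shifting it by a
# guessed offset. File names are taken from the question; the apad approach
# is an assumption, not the asker's verified fix.
def mux_command(wav, webm, out):
    return [
        "ffmpeg", "-y",
        "-i", wav, "-i", webm,
        "-c:v", "libx264",
        "-af", "apad",   # append silence to the end of the audio stream
        "-shortest",     # stop when the shorter stream (the video) ends
        out,
    ]

cmd = mux_command("166890589.wav", "166890589.webm", "166890589.mp4")
```

Because apad makes the audio effectively unbounded, `-shortest` is what trims the output to the video's duration.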