Other articles (88)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
    Distribution name | Version name         | Version number
    Debian            | Squeeze              | 6.x.x
    Debian            | Wheezy               | 7.x.x
    Debian            | Jessie               | 8.x.x
    Ubuntu            | The Precise Pangolin | 12.04 LTS
    Ubuntu            | The Trusty Tahr      | 14.04
    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Participating in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language to make it available to new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    Currently MediaSPIP is only available in French and (...)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, in particular: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;

On other sites (12525)

  • Encountered an "ffmpeg.wasm can only run one command at a time" exception

    2 March 2023, by Itay113

    I want to make a video chat using ffmpeg.wasm (I know the standard is WebRTC, but my assignment is to do this with ffmpeg.wasm and a server connecting the 2 clients). With the following code I get an "ffmpeg.wasm can only run one command at a time" exception on the ffmpegWorker.run line:

    


    function App() {
      const ffmpegWorker = createFFmpeg({
        log: true
      })

      async function initFFmpeg() {
        await ffmpegWorker.load();
      }

      async function transcode(webcamData) {
        const name = 'record.webm';
        await ffmpegWorker.FS('writeFile', name, await fetchFile(webcamData));
        ffmpegWorker.run('-i', name, '-preset', 'ultrafast', '-c:v', 'h264', '-crf', '28', '-b:v', '0', '-row-mt', '1', '-f', 'mp4', 'output.mp4')
        .then(() => {
          const data = ffmpegWorker.FS('readFile', 'output.mp4');

          const video = document.getElementById('output-video');
          video.src = URL.createObjectURL(new Blob([data.buffer], { type: 'video/mp4' }));
          ffmpegWorker.FS('unlink', 'output.mp4');
        })
      }

      function requestMedia() {
        const webcam = document.getElementById('webcam');
        const chunks = []
        navigator.mediaDevices.getUserMedia({ video: true, audio: true })
        .then(async (stream) => {
          webcam.srcObject = stream;
          await webcam.play();
          const mediaRecorder = new MediaRecorder(stream);
          mediaRecorder.start(0);
          mediaRecorder.onstop = function(e) {
            stream.stop();
          }
          mediaRecorder.ondataavailable = async function(e) {
            chunks.push(e.data);
            await transcode(new Uint8Array(await (new Blob(chunks)).arrayBuffer()));
          }
        })
      }

      useEffect(() => {
        requestMedia();
      }, [])

      return (
        <div className="App">
          <div>
            <video width="320px" height="180px"></video>
            <video width="320px" height="180px"></video>
          </div>
        </div>
      );
    }


    I have tried messing around with the timeslice argument of the MediaRecorder start() method, but it didn't help.
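
    A possible direction (my own sketch, not part of the original question): since ffmpeg.wasm refuses to start a second command while one is still running, the transcode calls fired by each ondataavailable event can be serialized through a small promise queue, so run() only starts once the previous command has finished. Names such as runExclusive are illustrative.

    // Hedged sketch: serialize ffmpeg.wasm commands so run() is never called
    // while a previous command is still in flight.
    let ffmpegQueue = Promise.resolve();

    function runExclusive(task) {
      // Chain the task after whatever is already queued; keep the chain alive
      // even if a task rejects, so later tasks can still run.
      const result = ffmpegQueue.then(task, task);
      ffmpegQueue = result.catch(() => {});
      return result;
    }

    async function transcode(webcamData) {
      const name = 'record.webm';
      ffmpegWorker.FS('writeFile', name, await fetchFile(webcamData));
      await runExclusive(async () => {
        await ffmpegWorker.run('-i', name, '-preset', 'ultrafast', '-c:v', 'h264', '-f', 'mp4', 'output.mp4');
        const data = ffmpegWorker.FS('readFile', 'output.mp4');
        // Same hand-off to the <video> element as in the original code.
        const video = document.getElementById('output-video');
        video.src = URL.createObjectURL(new Blob([data.buffer], { type: 'video/mp4' }));
        ffmpegWorker.FS('unlink', 'output.mp4');
      });
    }

    An alternative with the same effect would be to simply skip a transcode while a previous run() is still pending.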


  • FFmpeg: Bitstream not supported by this decoder

    23 December 2018, by John Kim

    I’m trying to create a livestream web app using NodeJS. The code I currently have emits a raw (WebM format) binary stream from the webcam on the client using Socket.IO, and the Node server receives this raw data. Using fluent-ffmpeg, I want to encode this binary stream into MPEG-TS and send it to an RTMP server in real time, without creating any intermediary files. However, when trying to convert the blobs in ffmpeg I get the following error:

    Error while decoding stream #0:1: Invalid data found when processing input
    [NULL @ 000001b15e67bd80] Invalid sync code 61f192.
    [libvpx @ 000001b15e6c5000] Failed to decode frame: Bitstream not supported by this decoder

    My relevant frontend client code:

    navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
       video_show(stream);//only show locally, not remotely

       socket.emit('config_rtmpDestination',url);
       socket.emit('start','start');
       var options = {mimeType: 'video/webm;codecs=h264'};

       mediaRecorder = new MediaRecorder(stream, options);
       mediaRecorder.start(2000);

       mediaRecorder.onstop = function(e) {
           stream.stop();
       }

       mediaRecorder.ondataavailable = function(e) {
           //var arraybuffer = Uint8Array.from(e.data).buffer;
           socket.emit("binarystream", blob);
           //chunks.push(e.data);
       }
    }).catch(function(err) {
       console.log('The following error occured: ' + err);
       show_output('Local getUserMedia ERROR:'+err);
    });

    Relevant NodeJS server code:

    socket.on('binarystream',function(m){
       feedStream(m);
    });

    socket.on('start',function(m){
       ...
       var ops=[
           '-vcodec', socket._vcodec,'-i','-',
           '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
           '-an', '-bufsize', '1000',
           '-f', 'mpegts', socket._rtmpDestination
       ];
       ffmpeg_process=spawn('ffmpeg', ops);
       feedStream=function(data){
           ffmpeg_process.stdin.write(data);
       }
       ...
    }
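
    One thing I would check (an assumption on my part, not from the original post): MediaRecorder delivers a WebM container, so the input codec can be left for ffmpeg to probe from the stream rather than being forced with -vcodec, which avoids handing an H.264 (or otherwise mismatched) bitstream to the libvpx decoder. A minimal variant of the spawn arguments might look like this:

    // Hedged sketch: let ffmpeg probe the WebM container read from stdin
    // instead of forcing an input codec with '-vcodec'. Everything else
    // mirrors the original spawn() setup.
    const { spawn } = require('child_process');

    var ops = [
        '-i', '-',                       // WebM from stdin; codec auto-detected
        '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
        '-an',
        '-f', 'mpegts', socket._rtmpDestination
    ];
    ffmpeg_process = spawn('ffmpeg', ops);
    ffmpeg_process.stderr.on('data', d => console.log(d.toString())); // surface decode errors

    Whether the browser actually honours the 'video/webm;codecs=h264' mimeType is another variable worth checking (MediaRecorder.isTypeSupported can confirm it before recording).
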
  • Is ffmpeg able to read ArrayBuffer input from stream

    7 July 2017, by jAndy

    I want to accomplish the following tasks:

    • Record Video+Audio from any HTML5 (MediaStream) capable browser
    • Send that data via WebSocket as Blob / ArrayBuffer chunks to a server
    • Broadcast that input stream-data to multiple clients

    As it turns out, this brought me into a world of pain. The first task is fairly simple using the HTML5 MediaStream objects alongside WebSockets.

    // ... for simplicity...
    navigator.mediaDevices.getUserMedia({ audio: true, video: true }).then(stream => {
       let mediaRecorder = new MediaRecorder( stream );
       // ...
       mediaRecorder.ondataavailable = e => {
           webSocket.send( 'newVideoData', e.data ); // configured for binary data
       };
    });

    Now, I want to receive those data fragments and stream them via the nginx vod module, because I guess I want the output stream in HLS or DASH.
    I could write a little Node.js script as a backend which just receives the binary chunks, writes them to a file or stream, and references that so the nginx vod module could possibly read it and create the m3u8 manifest on the fly?

    I am wondering now,

    • if ffmpeg is able to read that binary data directly (it should be WebM format), without a man-in-the-middle script, "somehow"?
    • if not, do I have to write the data to a file and pass that as input to ffmpeg, or can I (should I) pipe the data to a self-spawned ffmpeg instance? (If so, how? See the sketch at the end of this post.)
    • do I actually need the nginx server (probably alongside the rtmp module) to deliver the output stream as HLS, or could I just use ffmpeg to also create a dynamic manifest?
    • is the nginx vod module capable of creating a dynamic HLS/DASH manifest, or must the input data be complete beforehand?
    • ultimately, am I on the totally wrong track here? :P

    Actually, I just want to create a little video-live-chat demo, without any plugins or 3rd-party encoding software, purely in the browser.
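
    On the piping question, here is a hedged sketch of the "self-spawned ffmpeg instance" idea (my assumption, not from the original post): spawn ffmpeg once per incoming stream, write every WebM chunk received over the WebSocket into its stdin, and let ffmpeg's hls muxer produce the m3u8 manifest and segments itself. The 'ws' package, the port, and the output paths are illustrative.

    // Hedged sketch: feed MediaRecorder chunks from a WebSocket into ffmpeg's
    // stdin and have ffmpeg write an HLS playlist on the fly.
    const { spawn } = require('child_process');
    const WebSocket = require('ws');

    function startTranscoder(outputDir) {
      // '-i -' reads the WebM stream from stdin; the hls muxer keeps a rolling
      // playlist of 2-second segments in outputDir.
      const ffmpeg = spawn('ffmpeg', [
        '-i', '-',
        '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
        '-c:a', 'aac',
        '-f', 'hls', '-hls_time', '2', '-hls_list_size', '5',
        `${outputDir}/stream.m3u8`
      ]);
      ffmpeg.stderr.on('data', d => console.log(d.toString()));
      return ffmpeg;
    }

    const wss = new WebSocket.Server({ port: 8080 });
    wss.on('connection', socket => {
      const ffmpeg = startTranscoder('./hls');                  // directory must already exist
      socket.on('message', chunk => ffmpeg.stdin.write(chunk)); // raw MediaRecorder chunks
      socket.on('close', () => ffmpeg.stdin.end());
    });

    Any static file server (nginx or otherwise) can then serve the resulting playlist and segments, which may sidestep the vod module for the live case.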