
Other articles (109)

  • Farm management

    2 March 2010

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to suit the needs of the different channels.
    To begin with, it uses the "Gestion de mutualisation" plugin

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Submit enhancements and plugins

    13 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the core MediaSPIP functionality will be considered.
    You can use the development discussion list to ask for help with creating a plugin. As MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.

On other sites (8421)

  • Create a 44-byte header with ffmpeg

    13 July 2015, by Joe Allen

    I made a program using the ffmpeg libraries that converts an audio file to a WAV file. The only problem is that it doesn’t create a 44-byte header. When I input the file into Kaldi Speech Recognition, it produces the error:

    ERROR (online2-wav-nnet2-latgen-faster:Read4ByteTag():wave-reader.cc:74) WaveData: expected 4-byte chunk-name, got read errror

    I ran the file through shntool and it reports a 78-byte header. Is there any way I can get the standard 44-byte header using the ffmpeg libraries?
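
    A hedged aside on two plausible fixes rather than a confirmed answer: the extra 34 bytes are usually a LIST/INFO metadata chunk that ffmpeg's WAV muxer writes after the "fmt " chunk; setting the AVFMT_FLAG_BITEXACT flag on the AVFormatContext (or passing -fflags +bitexact on the command line) typically suppresses it. Alternatively, the canonical 44-byte PCM header can be written by hand. A minimal Node.js sketch — the function name wavHeader44 and the default parameters (16 kHz, mono, 16-bit) are illustrative assumptions, not taken from the question:

// Build the canonical 44-byte PCM WAV header: the RIFF/WAVE
// preamble, a 16-byte "fmt " chunk, then the "data" chunk header.
function wavHeader44(dataSize, channels = 1, sampleRate = 16000, bitsPerSample = 16) {
    const blockAlign = channels * bitsPerSample / 8;
    const byteRate = sampleRate * blockAlign;
    const h = Buffer.alloc(44);
    h.write('RIFF', 0);                  // ChunkID
    h.writeUInt32LE(36 + dataSize, 4);   // ChunkSize = 36 + data bytes
    h.write('WAVE', 8);                  // Format
    h.write('fmt ', 12);                 // Subchunk1ID
    h.writeUInt32LE(16, 16);             // Subchunk1Size (16 for PCM)
    h.writeUInt16LE(1, 20);              // AudioFormat 1 = linear PCM
    h.writeUInt16LE(channels, 22);       // NumChannels
    h.writeUInt32LE(sampleRate, 24);     // SampleRate
    h.writeUInt32LE(byteRate, 28);       // ByteRate = SampleRate * BlockAlign
    h.writeUInt16LE(blockAlign, 32);     // BlockAlign = NumChannels * BitsPerSample/8
    h.writeUInt16LE(bitsPerSample, 34);  // BitsPerSample
    h.write('data', 36);                 // Subchunk2ID
    h.writeUInt32LE(dataSize, 40);       // Subchunk2Size = data bytes
    return h;
}

    Prepending this header to the raw little-endian PCM samples gives exactly the plain 44-byte layout the question asks for.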

  • JavaScript MediaSource && ffmpeg chunks

    17 May 2023, by OmriHalifa

    I have written the following code for a player that receives chunks sent by ffmpeg through stdout and displays them using MediaSource:

    index.js (the server handling this request):

const express = require('express')
const app = express()
const port = 4545
const cp = require('child_process')
const cors = require('cors')
const { Readable } = require('stream')

app.use(cors())

app.get('/startRecording', (req, res) => {
    // Capture the webcam with ffmpeg, encode to H.264/AAC and mux to MPEG-TS on stdout
    const ffmpeg = cp.spawn('ffmpeg', ['-f', 'dshow', '-i', 'video=HP Wide Vision HD Camera', '-profile:v', 'high', '-pix_fmt', 'yuvj420p', '-level:v', '4.1', '-preset', 'ultrafast', '-tune', 'zerolatency', '-vcodec', 'libx264', '-r', '10', '-b:v', '512k', '-s', '640x360', '-acodec', 'aac', '-ac', '2', '-ab', '32k', '-ar', '44100', '-f', 'mpegts', '-flush_packets', '0', '-' /* 'udp://235.235.235.235:12345?pkt_size=1316' */ ]);

    // Forward each encoded chunk to the HTTP response as it arrives
    ffmpeg.stdout.on('data', (data) => {
        res.write(data)
    });

    // ffmpeg writes its log to stderr; decode the bytes and print them as text
    ffmpeg.stderr.on('data', (data) => {
        const byteData = Buffer.from(data, 'utf8');
        const byteStream = new Readable();
        byteStream.push(byteData);
        byteStream.push(null);

        let text = '';
        byteStream.on('data', (chunk) => {
            text += chunk.toString('utf8');
        });
        byteStream.on('end', () => {
            console.log(text);  // Output the converted text
        });
    });

    ffmpeg.on('close', (code) => {
        console.log(`child process exited with code ${code}`);
    });
})

app.listen(port, () => {
    console.log(`Video's Server listening on port ${port}`);
});

    App.js (the player side, in React):

import { useEffect } from 'react';

function App() {
    async function transcode() {
        const mediaSource = new MediaSource();
        const videoElement = document.getElementById('videoElement');
        videoElement.src = URL.createObjectURL(mediaSource);

        mediaSource.addEventListener('sourceopen', async () => {
            console.log('MediaSource open');
            const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42c01e"');
            try {
                const response = await fetch('http://localhost:4545/startRecording');
                const reader = response.body.getReader();

                reader.read().then(async function processText({ done, value }) {
                    if (done) {
                        console.log('Stream complete');
                        return;
                    }

                    console.log("B4 append", videoElement)
                    await sourceBuffer.appendBuffer(value);
                    console.log("after append", value);
                    // Display the contents of the sourceBuffer
                    sourceBuffer.addEventListener('updateend', function (e) {
                        if (!sourceBuffer.updating && mediaSource.readyState === 'open') {
                            mediaSource.endOfStream();
                        }
                    });

                    // Call next read and repeat the process
                    return reader.read().then(processText);
                });
            } catch (error) {
                console.error(error);
            }
        });

        console.log("B4 play")
        await videoElement.play();
        console.log("after play")
    }

    useEffect(() => {}, []);

    return (
        <div className="App">
            <div>
                <video id="videoElement"></video>
            </div>
            <button onClick={transcode}>start streaming</button>
        </div>
    );
}

export default App;

    This is what I get: [screenshot: "what i get"]

    The chunks are being received and passed to the Uint8Array correctly, but the video is not displayed. What could be causing this, and how can I correct it?

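    A hedged note on the likely failure, reading the two snippets together: the server muxes to MPEG-TS (-f mpegts), while the player creates its SourceBuffer as 'video/mp4', and Media Source Extensions generally require fragmented MP4 for that MIME type, so appendBuffer receives data it cannot parse. Below is a minimal sketch, not a verified fix, of the spawn arguments re-muxed to fragmented MP4; the capture settings are carried over from the question's own command, with the profile/level pinned so the stream matches the 'avc1.42c01e' codecs string:

// Emit fragmented MP4 on stdout instead of MPEG-TS so that a
// 'video/mp4' SourceBuffer can parse the appended chunks.
const ffmpeg = cp.spawn('ffmpeg', [
    '-f', 'dshow', '-i', 'video=HP Wide Vision HD Camera',
    '-vcodec', 'libx264', '-preset', 'ultrafast', '-tune', 'zerolatency',
    '-profile:v', 'baseline', '-level:v', '3.0',   // matches avc1.42c01e
    '-r', '10', '-b:v', '512k', '-s', '640x360',
    '-acodec', 'aac', '-ac', '2', '-ab', '32k', '-ar', '44100',
    '-movflags', 'frag_keyframe+empty_moov+default_base_moof',
    '-f', 'mp4', '-',
]);

    If the audio track is kept, the SourceBuffer MIME type also needs the audio codec, e.g. 'video/mp4; codecs="avc1.42c01e, mp4a.40.2"'.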

  • python imageIO() ffmpeg output 3D ndarray

    27 July 2017, by Daimoj

    I’m trying to encode and then decode a collection of images using imageio in Python with the ffmpeg plugin and the HEVC codec.

    The stream I’m using is an ndarray of shape (1024, 512). When I call writer.append_data() on each image, the shape is as above, (1024, 512). After writer.close() is called, I create another reader on the video just made. When I inspect a single image of the video, its shape is (1024, 512, 3). This is all grayscale, so I only expected an array of uint8s of shape (1024, 512). Why did imageio add an extra channel dimension to my video? I only want one channel.