Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (47)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (10523)

  • Convert mediarecorder blobs to a type that google speech to text can transcribe

    5 January 2021, by Manesha Ramesh

    I am making an app where the user's browser records them speaking and sends the audio to a server, which then passes it on to the Google Speech-to-Text API. I am using MediaRecorder to get 1-second blobs, which are sent to the server. On the server side, I send these blobs to the Speech-to-Text API, but I am getting empty transcriptions.

    I know what the issue is. MediaRecorder's default MIME type is audio/webm;codecs=opus, which is not accepted by Google's Speech-to-Text API. After doing some research, I realized I need to use ffmpeg to convert the blobs to LINEAR16. However, ffmpeg only accepts audio files, and I want to be able to convert blobs, so I can send the resulting converted blobs over to the API.
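
    For what it's worth (an editor's note, not part of the original question): ffmpeg is not limited to files; it can read from stdin and write to stdout via pipe:0 and pipe:1. A minimal Node.js sketch of converting a WebM/Opus buffer to LINEAR16 this way, assuming ffmpeg is installed on the server's PATH:

    const { spawn } = require('child_process');

    // Convert a WebM/Opus buffer to raw LINEAR16 (signed 16-bit PCM,
    // 16 kHz, mono) entirely through pipes, with no temporary files.
    function webmToLinear16(webmBuffer) {
        return new Promise((resolve, reject) => {
            const ffmpeg = spawn('ffmpeg', [
                '-i', 'pipe:0',          // read the blob from stdin
                '-f', 's16le',           // raw signed 16-bit little-endian = LINEAR16
                '-acodec', 'pcm_s16le',
                '-ac', '1',              // mono
                '-ar', '16000',          // 16 kHz, matching sampleRateHertz below
                'pipe:1',                // write the result to stdout
            ]);
            const chunks = [];
            ffmpeg.stdout.on('data', chunk => chunks.push(chunk));
            ffmpeg.on('error', reject);
            ffmpeg.on('close', code => {
                if (code === 0) resolve(Buffer.concat(chunks));
                else reject(new Error(`ffmpeg exited with code ${code}`));
            });
            ffmpeg.stdin.end(webmBuffer);
        });
    }

    One caveat: with MediaRecorder, only the first emitted blob contains the WebM container header, so later 1-second chunks will generally not decode on their own and may need to be accumulated (or have the header prepended) before conversion.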

    server.js

    // Assumes the 'ws' package; the original snippet did not show the server setup.
    const WebSocket = require('ws');
    const wsserver = new WebSocket.Server({ port: 3002 });

    const livetranscriber = require('./livetranscriber'); // path assumed

    wsserver.on('connection', socket => {
        console.log('Client connected on port 3002');
        socket.on('message', function(message) {
            // Base64-encode the raw audio chunk for the Speech-to-Text request.
            const audio = { content: message.toString('base64') };
            console.log(audio.content);
            livetranscriber.createRequest(audio).then(request => {
                livetranscriber.recognizeStream(request);
            });
        });
    });
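
    If the conversion helper sketched earlier were used, the message handler could convert each chunk before building the request (a hypothetical rewrite, not from the question):

    socket.on('message', async (message) => {
        const linear16 = await webmToLinear16(message); // sketch from above
        const audio = { content: linear16.toString('base64') };
        const request = await livetranscriber.createRequest(audio);
        await livetranscriber.recognizeStream(request);
    });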


    livetranscriber

    // Assumes the official @google-cloud/speech client; the original snippet
    // referenced 'client' without defining it.
    const speech = require('@google-cloud/speech');
    const client = new speech.SpeechClient();

    module.exports = {
        createRequest: function(audio) {
            const encoding = 'LINEAR16';
            const sampleRateHertz = 16000;
            const languageCode = 'en-US';
            // A Promise executor only receives (resolve, reject); the original
            // (resolve, reject, err) signature left err undefined, so the
            // request can simply be resolved directly.
            return Promise.resolve({
                audio: audio,
                config: {
                    encoding: encoding,
                    sampleRateHertz: sampleRateHertz,
                    languageCode: languageCode,
                },
                interimResults: false, // if you want interim results, set this to true
            });
        },
        recognizeStream: async function(request) {
            const [response] = await client.recognize(request);
            const transcription = response.results
                .map(result => result.alternatives[0].transcript)
                .join('\n');
            console.log(`Transcription: ${transcription}`);
        },
    };


    client

    recorder.ondataavailable = function(e) {
        console.log('Data', e.data);

        var ws = new WebSocket('ws://localhost:3002/websocket');
        ws.onopen = function() {
            console.log('opening connection');
            // e.data is already a Blob; the original wrapped it incorrectly
            // (new Blob(e, ...)) and then sent blob.data, which does not exist.
            ws.send(e.data);
            console.log('Sent the message');
        };
    };
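
    Opening a fresh WebSocket for every chunk adds handshake latency and can drop chunks that fire before onopen. A common alternative (an editor's sketch, not from the question) is one persistent connection opened before recording starts:

    const ws = new WebSocket('ws://localhost:3002/websocket');
    ws.onopen = () => recorder.start(1000); // emit a chunk roughly every second

    recorder.ondataavailable = (e) => {
        if (ws.readyState === WebSocket.OPEN) {
            ws.send(e.data); // ws.send accepts a Blob directly
        }
    };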


  • avformat_open_input fails intermittently with avfoundation due to "audio format is not supported"

    27 September 2019, by NaderNader

    My application uses the ffmpeg APIs (avformat, avdevice, etc.) to open a selected audio input for encoding. For some inputs I can reliably open them the first time, but when I close and reopen that input later, the avformat_open_input() call fails with "audio format is not supported". My testing shows that it never fails the first time after starting my application, and that reopening succeeds only about 50% of the time.

    The failure only occurs with my "Built-in Microphone" audio input. I have a USB audio card that reliably opens and closes repeatedly. I have read the documentation and see that the proper way to free the resources after opening is to call avformat_close_input. The only way I have found to guarantee success is to only open the input once.

    I have written a test program to recreate these failures.

    extern "C" {
    #include <libavdevice/avdevice.h>
    #include <libavformat/avformat.h>
    }
    #include <iostream>
    #include <unistd.h>

    using namespace std;

    int main() {

        avdevice_register_all();

        cout << "Running open audio test" << endl;

        for (int i = 0; i < 10; i++) {

            AVDictionary* options = NULL;
            AVInputFormat* inputFormat = av_find_input_format("avfoundation");
            if (!inputFormat) {
                cout << "avfoundation inputFormat=null" << endl;
            }

            AVFormatContext* formatContext = avformat_alloc_context();
            // ":1" selects audio device 1 (Built-in Microphone, per the device list below)
            int result = avformat_open_input(&formatContext, ":1", inputFormat, &options);
            if (result < 0) {
                char error[256];
                av_strerror(result, error, sizeof(error));
                cout << "result=" << result << " " << error << endl;
            } else {
                cout << "input opened successfully" << endl;
            }

            sleep(1);

            avformat_close_input(&formatContext);

            sleep(1);

        }

        return 0;
    }

    I would expect the main loop to succeed each time, but a typical run shows a high failure rate:

    Running open audio test

    input opened successfully
    [avfoundation @ 0x7fdeb281de00] audio format is not supported
    result=-5 Input/output error
    [avfoundation @ 0x7fdeb2001400] audio format is not supported
    result=-5 Input/output error
    [avfoundation @ 0x7fdeb2001400] audio format is not supported
    result=-5 Input/output error
    input opened successfully
    input opened successfully
    input opened successfully
    [avfoundation @ 0x7fdeb2068800] audio format is not supported
    result=-5 Input/output error
    input opened successfully
    input opened successfully

    I have tried increasing the sleep time between close and open to 5 seconds, but saw no difference in behavior.

    The source of the failure is https://github.com/FFmpeg/FFmpeg/blob/master/libavdevice/avfoundation.m#L672

    It appears that avfoundation.m internally opens an input stream and grabs an audio frame to determine the format, but the value returned is sometimes invalid when the process has previously opened and closed that input.

    Am I not closing the resources properly? Or do I have a hardware issue specific to my MacBook?

    Additional details:

    Tested on a MacBook Pro with macOS Mojave 10.14.6
    Tested with FFmpeg 3.4.1, 4.0, and 4.1

    list_devices output:

    [AVFoundation input device @ 0x7f80fff066c0] AVFoundation video devices:
    [AVFoundation input device @ 0x7f80fff066c0] [0] FaceTime HD Camera
    [AVFoundation input device @ 0x7f80fff066c0] [1] Capture screen 0
    [AVFoundation input device @ 0x7f80fff066c0] AVFoundation audio devices:
    [AVFoundation input device @ 0x7f80fff066c0] [0] Behringer Duplex
    [AVFoundation input device @ 0x7f80fff066c0] [1] Built-in Microphone
    [AVFoundation input device @ 0x7f80fff066c0] [2] USB Audio CODEC

  • MJPEG Stream works in Firefox but not in Chrome

    11 December 2019, by Maoration

    We have a system that contains cameras, and we want to stream them to multiple clients.
    Behind the scenes on the server, we receive connections from cameras and keep everything related to each camera in a CameraContainer, which also includes an mpegtsToMjpegStream that extends Duplex (both a Readable and a Writable stream).

    After a camera connects, we spawn an ffmpeg process to work on the incoming MPEG-TS stream and output an MJPEG stream:

         '-map', '0:v', '-c:v', 'mjpeg','-f', 'mjpeg', `-`,

    (We are doing this because we are also mapping to other outputs, such as writing files.)
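
    For context, here is a hypothetical sketch of how such a process might be spawned from Node; cameraSocket stands in for the question's actual camera connection, and only the MJPEG output is shown:

    const { spawn } = require('child_process');

    const ffmpeg = spawn('ffmpeg', [
        '-i', 'pipe:0',                                      // MPEG-TS in on stdin
        '-map', '0:v', '-c:v', 'mjpeg', '-f', 'mjpeg', '-',  // MJPEG out on stdout
        // ...additional -map outputs (e.g. file writing) would go here
    ]);

    cameraSocket.pipe(ffmpeg.stdin);         // hypothetical incoming camera socket
    ffmpeg.stdout.pipe(mpegtsToMjpegStream); // the Duplex from the question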


    On the 'serving' side, we are currently testing a simple API to GET an MJPEG stream by camera id:

    async getCameraStream(@Param('cameraId') cameraId: string, @Res() res): Promise<any> {
        const cameraContainer = this.cameraBridgeService.getCameraContainer(cameraId);
        if (!cameraContainer) {
            throw new Error(`Camera with id: ${cameraId} was not found`);
        }

        if (!cameraContainer.mpegtsToMjpegStream) {
            throw new Error('ERROR: mpegtsToMjpegStream stream does not exist on this camera container');
        }

        // Wrap each incoming chunk as one multipart part.
        const writable = new Writable({
            write: (chunk, encoding, callback) => {
                res.write(`Content-Type: image/jpeg\r\nContent-Length: ${chunk.length}\r\n\r\n`);
                res.write(chunk);
                res.write('\r\n--ffmpeg_streamer\r\n');
                callback();
            },
        });

        res.set('Content-Type', 'multipart/x-mixed-replace;boundary=ffmpeg_streamer');
        res.write('--ffmpeg_streamer\r\n');

        cameraContainer.mpegtsToMjpegStream.pipe(writable, { end: false });

        res.once('close', () => {
            if (cameraContainer.mpegtsToMjpegStream && writable) {
                cameraContainer.mpegtsToMjpegStream.unpipe(writable);
            }
            res.destroy();
        });
    }

    The problem: this code works very nicely when accessing the stream with Firefox; after 1-2 seconds we get a stable, high-quality, low-latency stream.
    With Chrome, however, the same code misbehaves: the video output is corrupted, keeps disappearing into a black screen, and we have to refresh the page constantly just to view a few seconds of the stream before it disappears again.

    Any suggestions as to why this happens and how I can fix it?
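
    One avenue worth checking (an editor's note, not part of the original question): the Duplex emits arbitrary chunks, so each multipart part may contain a partial JPEG while its Content-Length describes only that chunk; Firefox tolerates such framing, but Chrome is stricter. A sketch of a write callback that buffers until a complete JPEG (from the 0xFFD8 start-of-image marker to the 0xFFD9 end-of-image marker) before emitting a part, reusing res and Writable from the handler above:

    let buffer = Buffer.alloc(0);

    const writable = new Writable({
        write: (chunk, encoding, callback) => {
            buffer = Buffer.concat([buffer, chunk]);
            // Emit every complete JPEG currently sitting in the buffer.
            let start = buffer.indexOf(Buffer.from([0xff, 0xd8]));
            let end = start === -1 ? -1 : buffer.indexOf(Buffer.from([0xff, 0xd9]), start + 2);
            while (start !== -1 && end !== -1) {
                const frame = buffer.slice(start, end + 2);
                res.write(`Content-Type: image/jpeg\r\nContent-Length: ${frame.length}\r\n\r\n`);
                res.write(frame);
                res.write('\r\n--ffmpeg_streamer\r\n');
                buffer = buffer.slice(end + 2);
                start = buffer.indexOf(Buffer.from([0xff, 0xd8]));
                end = start === -1 ? -1 : buffer.indexOf(Buffer.from([0xff, 0xd9]), start + 2);
            }
            callback();
        },
    });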