
Other articles (103)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is enabled, MediaSPIP init automatically sets up a preconfiguration so that the new feature works out of the box. A manual configuration step is therefore not required.

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution name   Version name           Version number
        Debian              Squeeze                6.x.x
        Debian              Wheezy                 7.x.x
        Debian              Jessie                 8.x.x
        Ubuntu              The Precise Pangolin   12.04 LTS
        Ubuntu              The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not listed above, or send the necessary fixes to add (...)

On other sites (11071)

  • Audio/video out of sync after Google Cloud Transcoding

    18 May 2021, by Renov Jung

    I have been using Google Cloud Transcoding to convert videos into HLS.
    After transcoding, only a few of the videos have the audio out of sync: the audio lags behind by 1–2 seconds.
    The jobConfig looks fine to me, so I have no idea how to fix it, or even which jobConfig setting is causing the trouble.

    jobConfig:

    job: {
      inputUri: 'gs://' + BUCKET_NAME + '/' + inputPath,
      outputUri: 'gs://' + BUCKET_NAME + '/videos/' + outputName + '/',
      templateId: 'preset/web-hd',
      config: {
        elementaryStreams: [
          {
            key: 'video-stream0',
            videoStream: {
              codec: 'h264',
              widthPixels: 360,
              bitrateBps: 1000000,
              frameRate: 60,
            },
          },
          {
            key: 'video-stream1',
            videoStream: {
              codec: 'h264',
              widthPixels: 720,
              bitrateBps: 2000000,
              frameRate: 60,
            },
          },
          {
            key: 'audio-stream0',
            audioStream: {
              codec: 'aac',
              bitrateBps: 64000,
            },
          },
        ],
        muxStreams: [
          {
            key: 'hls_540p',
            container: 'ts',
            segmentSettings: {
              individualSegments: true,
            },
            elementaryStreams: ['video-stream0', 'audio-stream0'],
          },
          {
            key: 'hls2_720p',
            container: 'ts',
            segmentSettings: {
              individualSegments: true,
            },
            elementaryStreams: ['video-stream1', 'audio-stream0'],
          },
        ],
        manifests: [
          {
            type: 'HLS',
            muxStreams: ['hls_540p', 'hls2_720p'],
          },
        ],
      },
    },

    Before transcoding video: [screenshot]

    After transcoding video: [screenshot]


  • Convert mediarecorder blobs to a type that google speech to text can transcribe

    5 January 2021, by Manesha Ramesh

    I am making an app where the user's browser records the user speaking and sends the audio to the server, which then passes it on to the Google Speech-to-Text API. I am using MediaRecorder to get 1-second blobs, which are sent to a server. On the server side, I send these blobs to the Google Speech-to-Text API. However, I am getting empty transcriptions.

    
    I know what the issue is. MediaRecorder's default MIME type is audio/webm;codecs=opus, which is not accepted by Google's Speech-to-Text API. After doing some research, I realized I need to use ffmpeg to convert the blobs to LINEAR16. However, ffmpeg only accepts audio FILES, and I want to be able to convert BLOBS. Then I can send the resulting converted blobs over to the API.

    server.js:

    wsserver.on('connection', socket => {
      console.log('Listening on port 3002');
      const audio = {
        content: null,
      };
      socket.on('message', function (message) {
        audio.content = message.toString('base64');
        console.log(audio.content);
        livetranscriber.createRequest(audio).then(request => {
          livetranscriber.recognizeStream(request);
        });
      });
    });


    livetranscriber:

    module.exports = {
      createRequest: function (audio) {
        const encoding = 'LINEAR16';
        const sampleRateHertz = 16000;
        const languageCode = 'en-US';
        // A Promise executor only receives (resolve, reject); there is no
        // third `err` argument, so the request is built and resolved directly.
        return new Promise((resolve, reject) => {
          if (!audio || !audio.content) {
            reject(new Error('No audio content'));
          } else {
            const request = {
              audio: audio,
              config: {
                encoding: encoding,
                sampleRateHertz: sampleRateHertz,
                languageCode: languageCode,
              },
              interimResults: false, // If you want interim results, set this to true
            };
            resolve(request);
          }
        });
      },
      recognizeStream: async function (request) {
        const [response] = await client.recognize(request);
        const transcription = response.results
          .map(result => result.alternatives[0].transcript)
          .join('\n');
        console.log(`Transcription: ${transcription}`);
      },
    };
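Worth noting (an assumption about current Speech-to-Text API versions, not something the poster tried): the RecognitionConfig encoding enum includes WEBM_OPUS, which would accept MediaRecorder's output directly and avoid the ffmpeg conversion. A sketch of such a request; buildWebmOpusRequest is a hypothetical name:

```javascript
// Sketch: a recognize() request for WebM/Opus input, assuming a
// @google-cloud/speech client version that supports the WEBM_OPUS
// encoding. MediaRecorder's default Opus output is typically 48 kHz.
function buildWebmOpusRequest(base64Content) {
  return {
    audio: { content: base64Content },
    config: {
      encoding: 'WEBM_OPUS',
      sampleRateHertz: 48000,
      languageCode: 'en-US',
    },
  };
}
```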

    client:

    recorder.ondataavailable = function (e) {
      console.log('Data', e.data);

      var ws = new WebSocket('ws://localhost:3002/websocket');
      ws.onopen = function () {
        console.log('opening connection');
        // e.data is already a Blob; the Blob constructor takes an array of parts.
        var blob = new Blob([e.data], { type: 'audio/webm' });
        ws.send(blob); // send the Blob itself; Blob has no .data property
        console.log('Sent the message');
      };
    };
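A common cleanup on the client side (a sketch, not from the original post) is to open one WebSocket before recording starts and send every chunk over it, rather than opening a new connection per 1-second blob; makeChunkSender is a hypothetical helper:

```javascript
// Hypothetical helper: given an object with a send() method (e.g. an
// already-open WebSocket), returns an ondataavailable handler that
// forwards each recorded chunk as-is. MediaRecorder chunks are already
// Blobs, so no re-wrapping is needed.
function makeChunkSender(ws) {
  return function (e) {
    if (e.data) {
      ws.send(e.data);
    }
  };
}

// Intended usage (browser):
//   const ws = new WebSocket('ws://localhost:3002/websocket');
//   ws.onopen = () => recorder.start(1000); // 1-second chunks
//   recorder.ondataavailable = makeChunkSender(ws);
```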

  • Revision b50e518ab6 : Require webm when explicitly requested https://code.google.com/p/webm/issues/de

    31 January 2015, by Johann

    Changed Paths:
     Modify /vpxenc.c



    Require webm when explicitly requested

    https://code.google.com/p/webm/issues/detail?id=906

    Change-Id: I72841078ff81152d21d84ccf4d5548e757685a6d