
Other articles (100)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects/individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (15118)
-
Revision b50e518ab6: Require webm when explicitly requested https://code.google.com/p/webm/issues/detail?id=906
31 January 2015, by Johann
Changed Paths:
Modify /vpxenc.c
Require webm when explicitly requested
https://code.google.com/p/webm/issues/detail?id=906
Change-Id: I72841078ff81152d21d84ccf4d5548e757685a6d
-
Convert MediaRecorder blobs to a type that Google Speech-to-Text can transcribe
5 January 2021, by Manesha Ramesh
I am making an app where the user's browser records the user speaking and sends the audio to the server, which then passes it on to the Google Speech-to-Text API. I am using MediaRecorder to get 1-second blobs, which are sent to the server. On the server side, I send these blobs over to the Google Speech-to-Text API. However, I am getting empty transcriptions.



I know what the issue is. MediaRecorder's default MIME type is audio/webm;codecs=opus, which is not accepted by Google's Speech-to-Text API. After doing some research, I realized I need to use ffmpeg to convert the blobs to LINEAR16. However, ffmpeg only accepts audio FILES, and I want to be able to convert BLOBS. Then I can send the resulting converted blobs over to the API.
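
For what it's worth, ffmpeg is not actually limited to files on disk: it can read input from stdin (pipe:0) and write output to stdout (pipe:1), so a blob that arrives server-side as a Buffer never needs to touch the filesystem. A minimal Node sketch along those lines (webmToLinear16 is a hypothetical helper name; it assumes ffmpeg is installed on the server):

// Sketch: convert a webm/opus Buffer to raw LINEAR16 PCM entirely via pipes.
const { spawn } = require('child_process');

function webmToLinear16(webmBuffer) {
  return new Promise((resolve, reject) => {
    const ffmpeg = spawn('ffmpeg', [
      '-i', 'pipe:0',         // read the webm/opus data from stdin
      '-f', 's16le',          // raw signed 16-bit little-endian PCM (LINEAR16)
      '-acodec', 'pcm_s16le',
      '-ar', '16000',         // 16 kHz, matching the Speech-to-Text config below
      '-ac', '1',             // mono
      'pipe:1',               // write the converted audio to stdout
    ]);
    const chunks = [];
    ffmpeg.stdout.on('data', chunk => chunks.push(chunk));
    ffmpeg.on('error', reject);
    ffmpeg.on('close', code => {
      if (code === 0) resolve(Buffer.concat(chunks));
      else reject(new Error(`ffmpeg exited with code ${code}`));
    });
    ffmpeg.stdin.end(webmBuffer); // feed the blob and close stdin
  });
}

One caveat: with MediaRecorder, only the first ondataavailable chunk contains the webm container header, so later 1-second blobs are generally not decodable on their own; the chunks usually need to be accumulated from the start of the recording before conversion.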



server.js



// Assumed setup (not shown in the question): the 'ws' package and the local module.
const WebSocket = require('ws');
const livetranscriber = require('./livetranscriber');

const wsserver = new WebSocket.Server({ port: 3002 });
console.log('Listening on port 3002');

wsserver.on('connection', socket => {
  socket.on('message', function (message) {
    // Each message is one 1-second webm/opus chunk from MediaRecorder,
    // received here as a Buffer and forwarded base64-encoded.
    const audio = { content: message.toString('base64') };
    console.log(audio.content);
    livetranscriber.createRequest(audio).then(request => {
      livetranscriber.recognizeStream(request);
    });
  });
});
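
If the conversion sketch above were wired in, the message handler could convert each chunk before building the request (a hypothetical integration, reusing the webmToLinear16 helper):

socket.on('message', async message => {
  const pcm = await webmToLinear16(message);   // webm/opus -> raw LINEAR16
  const request = await livetranscriber.createRequest({
    content: pcm.toString('base64'),           // Speech-to-Text expects base64 content
  });
  livetranscriber.recognizeStream(request);
});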




livetranscriber



// Assumed setup (not shown in the question): the official Google Cloud Speech client.
const speech = require('@google-cloud/speech');
const client = new speech.SpeechClient();

module.exports = {
  createRequest: function (audio) {
    const encoding = 'LINEAR16';
    const sampleRateHertz = 16000;
    const languageCode = 'en-US';
    // Note: a Promise executor only receives (resolve, reject); the original
    // (resolve, reject, err) signature left `err` permanently undefined.
    return new Promise(resolve => {
      resolve({
        audio: audio,
        config: {
          encoding: encoding,
          sampleRateHertz: sampleRateHertz,
          languageCode: languageCode,
        },
        interimResults: false, // If you want interim results, set this to true
      });
    });
  },

  recognizeStream: async function (request) {
    const [response] = await client.recognize(request);
    const transcription = response.results
      .map(result => result.alternatives[0].transcript)
      .join('\n');
    console.log(`Transcription: ${transcription}`);
  },
};
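
Since the client pushes a continuous stream of 1-second chunks, the streaming endpoint of the same library may fit better than one recognize() call per chunk. A rough sketch with the same config (streamingRecognize is part of @google-cloud/speech):

// Sketch: one long-lived streaming request instead of a recognize() per chunk.
const recognizeStream = client
  .streamingRecognize({
    config: {
      encoding: 'LINEAR16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    },
    interimResults: false,
  })
  .on('error', console.error)
  .on('data', data => {
    console.log(`Transcription: ${data.results[0].alternatives[0].transcript}`);
  });

// Each converted LINEAR16 buffer is then written to the stream:
// recognizeStream.write(pcmBuffer);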




client



recorder.ondataavailable = function (e) {
  console.log('Data', e.data);

  var ws = new WebSocket('ws://localhost:3002/websocket');
  ws.onopen = function () {
    console.log('opening connection');
    // e.data is already a Blob; the original wrapped it in new Blob(e, ...)
    // (invalid: the first argument must be an array) and then read a
    // nonexistent blob.data property, so nothing usable was ever sent.
    ws.send(e.data);
    console.log('Sent the message');
  };
};
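
As an aside, this handler opens a brand-new WebSocket for every 1-second chunk. A leaner sketch (same assumed endpoint) opens one socket up front and sends each chunk directly:

var ws = new WebSocket('ws://localhost:3002/websocket');

recorder.ondataavailable = function (e) {
  if (ws.readyState === WebSocket.OPEN) {
    ws.send(e.data); // arrives server-side as a Buffer
  }
};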



-
Audio/video out of sync after Google Cloud Transcoding
18 May 2021, by Renov Jung
I have been using Google Cloud Transcoding to convert video into HLS.
Only a few videos have the audio out of sync after transcoding.
The audio lags behind by 1 to 2 seconds.
The jobConfig looks fine to me, so I have no idea how to fix this, or even which jobConfig setting is causing the trouble.


jobConfig:


job: {
  inputUri: 'gs://' + BUCKET_NAME + '/' + inputPath,
  outputUri: 'gs://' + BUCKET_NAME + '/videos/' + outputName + '/',
  templateId: 'preset/web-hd',
  config: {
    elementaryStreams: [
      {
        key: 'video-stream0',
        videoStream: {
          codec: 'h264',
          widthPixels: 360,
          bitrateBps: 1000000,
          frameRate: 60,
        },
      },
      {
        key: 'video-stream1',
        videoStream: {
          codec: 'h264',
          widthPixels: 720,
          bitrateBps: 2000000,
          frameRate: 60,
        },
      },
      {
        key: 'audio-stream0',
        audioStream: {
          codec: 'aac',
          bitrateBps: 64000,
        },
      },
    ],
    muxStreams: [
      {
        key: 'hls_540p',
        container: 'ts',
        segmentSettings: {
          individualSegments: true,
        },
        elementaryStreams: ['video-stream0', 'audio-stream0'],
      },
      {
        key: 'hls2_720p',
        container: 'ts',
        segmentSettings: {
          individualSegments: true,
        },
        elementaryStreams: ['video-stream1', 'audio-stream0'],
      },
    ],
    manifests: [
      {
        type: 'HLS',
        muxStreams: ['hls_540p', 'hls2_720p'],
      },
    ],
  },
},
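
Nothing in the config obviously explains the offset, but one way to narrow it down is to compare the audio and video start timestamps of the source file and of a transcoded segment; a constant gap there matches the symptom. A small diagnostic sketch (assumes ffprobe is installed locally, and input.mp4 is a hypothetical local copy of the source or a downloaded .ts segment):

// Sketch: read a stream's start_time with ffprobe to locate the A/V offset.
const { execFileSync } = require('child_process');

function streamStart(file, stream) {
  const out = execFileSync('ffprobe', [
    '-v', 'error',
    '-select_streams', stream,            // 'v:0' = first video, 'a:0' = first audio
    '-show_entries', 'stream=start_time',
    '-of', 'csv=p=0',
    file,
  ]).toString().trim();
  return parseFloat(out);
}

const file = 'input.mp4';
console.log('video start_time:', streamStart(file, 'v:0'));
console.log('audio start_time:', streamStart(file, 'a:0'));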



Before transcoding video:


- https://storage.googleapis.com/greyd/video/1621244451071_7aa8b5358afe8c5690e3bf8e67c69a52.mp4




After transcoding video: