
Other articles (59)
-
Submit enhancements and plugins
13 April 2011. If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the core MediaSPIP functionality will be considered.
You can use the development discussion list to ask for help with creating a plugin. As MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone. -
Use, discuss, criticize
13 April 2011. Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
MediaSPIP Player: potential problems
22 February 2011. The player does not work on Internet Explorer
On Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may lie in the configuration of Apache’s mod_deflate module.
If the configuration of this Apache module contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly: (...)
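The directive itself was truncated in the original article, but a mod_deflate rule of the following shape is the usual suspect; this is an assumed example of the kind of line to look for, not the article’s exact one:

 # Hypothetical mod_deflate directive that can interfere with Flash players
 AddOutputFilterByType DEFLATE text/html text/plain text/xml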
On other sites (10106)
-
How to create an animation video from images, like Google Photos?
21 March 2017, by Ahmed. I am working on a mobile application to create a video/animation from multiple images.
I have used server-side tools like ffmpeg and Whammy; what other server-side or mobile open-source tools can I use?
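For the ffmpeg route, a minimal sketch of producing a slideshow by spawning ffmpeg from Node.js; the image naming and frame rate here are assumptions:

 const { spawn } = require('child_process');

 // Build a slideshow from img001.jpg, img002.jpg, ... (assumed naming).
 const ffmpeg = spawn('ffmpeg', [
   '-framerate', '1',     // show each image for one second
   '-i', 'img%03d.jpg',   // numbered input images
   '-c:v', 'libx264',
   '-pix_fmt', 'yuv420p', // widest player compatibility
   'slideshow.mp4',
 ]);

 ffmpeg.on('close', code => console.log('ffmpeg exited with code ' + code));
-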
Audio/video out of sync after Google Cloud Transcoding
18 May 2021, by Renov Jung. I have been using Google Cloud Transcoding to convert videos into HLS.
Only a few videos end up with the audio out of sync after transcoding.
The audio is behind by 1 to 2 seconds.
The jobConfig looks fine to me, so I have no idea how to solve it, or even which jobConfig setting is causing the trouble. One way to check where the offset is introduced is sketched after the links below.


jobConfig:


job: {
  inputUri: 'gs://' + BUCKET_NAME + '/' + inputPath,
  outputUri: 'gs://' + BUCKET_NAME + '/videos/' + outputName + '/',
  templateId: 'preset/web-hd',
  config: {
    elementaryStreams: [
      {
        key: 'video-stream0',
        videoStream: {
          codec: 'h264',
          widthPixels: 360,
          bitrateBps: 1000000,
          frameRate: 60,
        },
      },
      {
        key: 'video-stream1',
        videoStream: {
          codec: 'h264',
          widthPixels: 720,
          bitrateBps: 2000000,
          frameRate: 60,
        },
      },
      {
        key: 'audio-stream0',
        audioStream: {
          codec: 'aac',
          bitrateBps: 64000,
        },
      },
    ],
    muxStreams: [
      {
        key: 'hls_540p',
        container: 'ts',
        segmentSettings: {
          individualSegments: true,
        },
        elementaryStreams: ['video-stream0', 'audio-stream0'],
      },
      {
        key: 'hls2_720p',
        container: 'ts',
        segmentSettings: {
          individualSegments: true,
        },
        elementaryStreams: ['video-stream1', 'audio-stream0'],
      },
    ],
    manifests: [
      {
        type: 'HLS',
        muxStreams: ['hls_540p', 'hls2_720p'],
      },
    ],
  },
},



Before transcoding video:

- https://storage.googleapis.com/greyd/video/1621244451071_7aa8b5358afe8c5690e3bf8e67c69a52.mp4

After transcoding video:
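A quick way to see where the offset is introduced is to compare the audio and video stream start times of the original file and of the transcoded output; a minimal sketch from Node.js, assuming ffprobe is installed:

 const { execFile } = require('child_process');

 // Print each stream's type and start time; a large audio/video gap in the
 // transcoded output points at the transcoding step rather than the player.
 execFile('ffprobe', [
   '-v', 'error',
   '-show_entries', 'stream=codec_type,start_time',
   '-of', 'json',
   'input.mp4', // replace with the file or segment to inspect
 ], (err, stdout) => {
   if (err) throw err;
   console.log(JSON.parse(stdout).streams);
 });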
-
Convert MediaRecorder blobs to a type that Google Speech-to-Text can transcribe
5 January 2021, by Manesha Ramesh. I am making an app where the user’s browser records the user speaking and sends the audio to a server, which then passes it on to the Google Speech-to-Text interface. I am using MediaRecorder to get 1-second blobs, which are sent to the server. On the server side, I send these blobs to the Google Speech-to-Text interface. However, I am getting empty transcriptions.



I know what the issue is. MediaRecorder’s default MIME type is audio/webm;codecs=opus, which is not accepted by Google’s Speech-to-Text API. After doing some research, I realized I need to use ffmpeg to convert the blobs to LINEAR16. However, ffmpeg only accepts audio FILES, and I want to be able to convert BLOBS. Then I can send the resulting converted blobs over to the API interface.
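ffmpeg can in fact read from stdin and write to stdout, so no intermediate files are needed; a minimal sketch, where the function name and buffer handling are illustrative:

 const { spawn } = require('child_process');

 // Convert a WebM/Opus buffer to raw LINEAR16 PCM by piping through ffmpeg.
 function webmToLinear16(webmBuffer) {
   return new Promise((resolve, reject) => {
     const ffmpeg = spawn('ffmpeg', [
       '-i', 'pipe:0',        // read the blob bytes from stdin
       '-f', 's16le',         // raw 16-bit little-endian PCM (LINEAR16)
       '-acodec', 'pcm_s16le',
       '-ac', '1',            // mono, as Speech-to-Text expects
       '-ar', '16000',        // must match sampleRateHertz in the request
       'pipe:1',              // write the result to stdout
     ]);
     const chunks = [];
     ffmpeg.stdout.on('data', chunk => chunks.push(chunk));
     ffmpeg.on('error', reject);
     ffmpeg.on('close', code => code === 0
       ? resolve(Buffer.concat(chunks))
       : reject(new Error('ffmpeg exited with code ' + code)));
     ffmpeg.stdin.write(webmBuffer);
     ffmpeg.stdin.end();
   });
 }

One caveat: only the first MediaRecorder chunk carries the WebM container header, so 1-second chunks after the first will not decode on their own; feeding one continuous stream into a single long-lived ffmpeg process avoids this.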



server.js



// Assumed setup (not shown in the original): a ws server on port 3002,
// matching the URL the client connects to.
const WebSocket = require('ws');
const livetranscriber = require('./livetranscriber');
const wsserver = new WebSocket.Server({ port: 3002 });

wsserver.on('connection', socket => {
  console.log('Listening on port 3002');
  const audio = {
    content: null,
  };
  socket.on('message', message => {
    // Base64-encode the received audio bytes for the Speech-to-Text request.
    audio.content = message.toString('base64');
    console.log(audio.content);
    livetranscriber.createRequest(audio).then(request => {
      livetranscriber.recognizeStream(request);
    });
  });
});




livetranscriber



// Assumed setup (not shown in the original): the Google Cloud Speech client.
const speech = require('@google-cloud/speech');
const client = new speech.SpeechClient();

module.exports = {
  createRequest: function (audio) {
    const encoding = 'LINEAR16';
    const sampleRateHertz = 16000;
    const languageCode = 'en-US';
    // A Promise executor only receives resolve and reject; the err parameter
    // in the original was always undefined, so its reject branch never ran.
    return new Promise(resolve => {
      resolve({
        audio: audio,
        config: {
          encoding: encoding,
          sampleRateHertz: sampleRateHertz,
          languageCode: languageCode,
        },
        interimResults: false, // If you want interim results, set this to true
      });
    });
  },
  recognizeStream: async function (request) {
    const [response] = await client.recognize(request);
    const transcription = response.results
      .map(result => result.alternatives[0].transcript)
      .join('\n');
    console.log(`Transcription: ${transcription}`);
  },
};




client



recorder.ondataavailable = function (e) {
  console.log('Data', e.data);

  // Note: this opens a new WebSocket for every 1-second chunk; a pattern
  // that reuses a single connection is sketched below.
  var ws = new WebSocket('ws://localhost:3002/websocket');
  ws.onopen = function () {
    console.log('opening connection');
    // e.data is already a Blob, so it can be sent as-is; the original
    // wrapped it in another Blob and sent the non-existent blob.data.
    ws.send(e.data);
    console.log('Sent the message');
  };
};
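Since recorder.ondataavailable fires once per timeslice, the usual pattern is to open the WebSocket once and reuse it; a minimal sketch, with the URL and the 1-second timeslice taken from the question:

 const ws = new WebSocket('ws://localhost:3002/websocket');

 ws.onopen = () => {
   // Start emitting one blob per second only once the socket is ready.
   recorder.start(1000);
 };

 recorder.ondataavailable = e => {
   // Each chunk is a Blob; the browser WebSocket can send it directly.
   if (ws.readyState === WebSocket.OPEN) {
     ws.send(e.data);
   }
 };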