
Other articles (24)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
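For illustration, the table might look roughly like this in SQL (a sketch only: the column types and sizes are assumptions following common SPIP conventions; only the listed column names come from the article):

CREATE TABLE spip_spipmotion_attentes (
  -- unique numeric identifier of the task to process
  id_spipmotion_attente BIGINT NOT NULL AUTO_INCREMENT,
  -- numeric identifier of the original document to encode
  id_document BIGINT NOT NULL,
  -- unique identifier of the object the encoded document attaches to
  id_objet BIGINT NOT NULL,
  -- type of that object (e.g. 'article'; length assumed)
  objet VARCHAR(25) NOT NULL,
  PRIMARY KEY (id_spipmotion_attente)
);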
-
Automatic backup of SPIP channels
1 April 2010
As part of setting up an open platform, it is important for hosts to have fairly regular backups available to guard against any problem that may arise.
To carry out this task, we rely on two SPIP plugins: Saveauto, which performs a regular backup of the database in the form of a mysql dump (usable in phpmyadmin); mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)
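As an illustration, the two plugins automate roughly the following manual steps (a sketch only: the database name, credentials, and paths are hypothetical; IMG/ is where SPIP keeps uploaded documents):

# what Saveauto automates: a mysql dump of the site database
mysqldump -u spip_user -p spip_db > backups/spip_db.sql

# what mes_fichiers_2 automates: a zip archive of the important site data
zip -r backups/site_data.zip IMG/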
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.
On other sites (3793)
-
avformat/apetag: account for header size if present when returning the start position
10 February 2017, by James Almer
The size field in the header/footer accounts for the entire APE tag structure except the 32 bytes of the header, for compatibility with APEv1.
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
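For illustration, the offset arithmetic the fix describes amounts to the following (a sketch in JavaScript, not FFmpeg's actual C code; the names are hypothetical):

// The tag's size field covers the footer and all items, but NOT the optional
// 32-byte header, so when a header is present the tag starts 32 bytes earlier.
const APE_TAG_HEADER_SIZE = 32;

function apeTagStartPosition(fileSize, tagSizeField, hasHeader) {
  return fileSize - tagSizeField - (hasHeader ? APE_TAG_HEADER_SIZE : 0);
}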
-
FFMPEG - Overlay WebM over PNG Start Position
26 April 2022, by joshuaalvarez
I am trying to place WebM videos over PNGs layered on top of each other. I have successfully done so; however, the WebM videos are not present in the first frame of the video and are added one frame at a time. For example, if I place 2 WebM videos over a PNG, it layers in one WebM per frame.


It may be important to note that I am using PNGs and WebMs as the inputs (WebM in order to use the alpha channel) and exporting as MP4.


I am using complex filters and applying the overlay in this way:
overlay=format=auto
per image/video.
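For reference, the full command is shaped roughly like this (a sketch with hypothetical filenames; forcing the libvpx-vp9 decoder on each WebM input is one known way to preserve its alpha channel):

ffmpeg -i base.png \
  -c:v libvpx-vp9 -i first.webm \
  -c:v libvpx-vp9 -i second.webm \
  -filter_complex "[0][1]overlay=format=auto[bg];[bg][2]overlay=format=auto[out]" \
  -map "[out]" -pix_fmt yuv420p output.mp4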

Here is a link to an MP4 that shows my issue. See how there is only the PNG portion in the video at 0:00, and the WebM portions come in shortly after.


-
Detect volume via mic, start recording, end on silence, transcribe and send to endpoint
15 June 2023, by alphadmon
I have been attempting to get this to work in many ways, but I can't seem to get it right. Most of the time I get part of it to work, and then when I try to make the other parts work, I generally break other things.


I am intercepting the volume coming from the mic, and if it is louder than 50, I start a recording. I then keep recording until there is silence; once the silence has lasted 5 seconds, I stop the recording.
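In outline, the start/stop logic I am after looks like this (a sketch of the intended behaviour, not the actual code below):

// Sketch: start on the first loud chunk, stop only after the volume has
// stayed below the threshold for five full seconds.
const VOLUME_THRESHOLD = 50;
const SILENCE_MS = 5000;

let recordingActive = false;
let silenceSince = null; // timestamp of the first quiet chunk, or null

function onVolumeSample(volume, now = Date.now()) {
  if (volume > VOLUME_THRESHOLD) {
    silenceSince = null;      // talking resets the silence timer
    recordingActive = true;   // start (or keep) recording
  } else if (recordingActive) {
    if (silenceSince === null) silenceSince = now;
    if (now - silenceSince >= SILENCE_MS) {
      recordingActive = false; // five seconds of silence: stop
      silenceSince = null;
    }
  }
  return recordingActive;
}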


I then send the recording to be transcribed by whisper using the OpenAI API.

Once that is returned, I then want to send it to the OpenAI chat endpoint and get the response.


After that, I would like to start listening again.


Here is what I have sort of working so far, but the recording is always an empty file:


// DETECT SPEECH
const recorder = require('node-record-lpcm16');

// TRANSCRIBE
const fs = require("fs");
const ffmpeg = require("fluent-ffmpeg");
const mic = require("mic");
const { Readable } = require("stream");
const ffmpegPath = require("@ffmpeg-installer/ffmpeg").path;
require('dotenv').config();

// CHAT
const { Configuration, OpenAIApi } = require("openai");

// OPEN AI
const configuration = new Configuration({
  organization: process.env.OPENAI_ORG,
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

// SETUP
ffmpeg.setFfmpegPath(ffmpegPath);

// VARS
let isRecording = false;
const audioFilename = 'recorded_audio.wav';
const micInstance = mic({
  rate: '16000',
  channels: '1',
  fileType: 'wav',
});

// DETECT SPEECH
const file = fs.createWriteStream('determine_speech.wav', { encoding: 'binary' });
const recording = recorder.record();
recording.stream().pipe(file);


recording.stream().on('data', async (data) => {
  // integer RMS volume of this audio chunk
  let volume = parseInt(calculateVolume(data));
  if (volume > 50 && !isRecording) {
    console.log('You are talking.');
    await recordAudio(audioFilename);
  } else {
    // after 5 seconds, stop the mic, transcribe, then hand off to the chat call
    setTimeout(async () => {
      console.log('You are quiet.');
      micInstance.stop();
      console.log('Finished recording');
      const transcription = await transcribeAudio(audioFilename);
      console.log('Transcription:', transcription);
      setTimeout(async () => {
        await askAI(transcription);
      }, 5000);
    }, 5000);
  }
});

function calculateVolume(data) {
  let sum = 0;

  // treat the buffer as 16-bit little-endian PCM samples
  for (let i = 0; i < data.length; i += 2) {
    const sample = data.readInt16LE(i);
    sum += sample * sample;
  }

  // root mean square: sqrt of the average squared sample value
  const rms = Math.sqrt(sum / (data.length / 2));

  return rms;
}

// TRANSCRIBE
function recordAudio(filename) {
  const micInputStream = micInstance.getAudioStream();
  const output = fs.createWriteStream(filename);
  const writable = new Readable().wrap(micInputStream);

  console.log('Listening...');

  writable.pipe(output);

  micInstance.start();

  micInputStream.on('error', (err) => {
    console.error(err);
  });
}

// Transcribe audio
async function transcribeAudio(filename) {
  const transcript = await openai.createTranscription(
    fs.createReadStream(filename),
    "whisper-1",
  );
  return transcript.data.text;
}

// CHAT
async function askAI(text) {
  let completion = await openai.createChatCompletion({
    model: "gpt-4",
    temperature: 0.2,
    stream: false,
    messages: [
      { role: "user", content: text },
      { role: "system", content: "Act like you are a rude person." }
    ],
  });

  completion = JSON.stringify(completion.data, null, 2);
  console.log(completion);
}