
Other articles (68)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

Distribution name    Version name            Version number
Debian               Squeeze                 6.x.x
Debian               Wheezy                  7.x.x
Debian               Jessie                  8.x.x
Ubuntu               The Precise Pangolin    12.04 LTS
Ubuntu               The Trusty Tahr         14.04

If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send the necessary fixes to add it (...) -
The farm's recurring Cron tasks
1 December 2010
Managing the farm relies on several repetitive tasks, called Cron tasks, executed at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance of the shared hosting on a regular basis. Coupled with a system Cron on the central site of the shared hosting, this makes it possible to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...) -
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.
On other sites (8479)
-
Merge videos with different start times and show them on a grid
1 August 2017, by shamaleyte
I have multiple video files of a conference call. However, each participant joined the call at a different time, so each video file has a different start-time offset.

Video     Start time
------------------
Video1    00:00
Video2    00:10
Video3    01:40

My purpose is to play back this conference. However, I did not record the conference as one video; it was recorded as multiple video files instead.
How do I stitch these videos? There is also a paid solution to merge video fragments into a single clip, which would make the client side much simpler. But can I do it for free?
The expected outcome is to have one video showing the three videos on a grid.
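A hedged sketch of one way to do this with ffmpeg, under assumptions not stated in the question (filenames are mine, offsets are taken from the table above, and all inputs are scaled to 640x360): delay each input with tpad and tile them with xstack. Both filters need a reasonably recent ffmpeg (4.2 or later).

```shell
# Assumed input names; video2 starts 10 s in, video3 starts 100 s in.
ffmpeg -i video1.mp4 -i video2.mp4 -i video3.mp4 -filter_complex \
  "[0:v]scale=640:360[v0]; \
   [1:v]scale=640:360,tpad=start_duration=10[v1]; \
   [2:v]scale=640:360,tpad=start_duration=100[v2]; \
   [v0][v1][v2]xstack=inputs=3:layout=0_0|w0_0|0_h0[grid]" \
  -map "[grid]" grid.mp4
```

tpad pads the start of a stream with black frames, so each participant appears on the grid only once their recording actually begins; audio would need a matching adelay/amix chain, which is omitted here.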
When ffmpeg stitches the videos, it should take their start-time offsets into account so that the videos play back in sync.
-
Discord Bot Not Playing Audio When Using YouTube-dl and FFmpeg
5 November 2020, by John Henry 5
I'm trying to get a bot to join a voice chat and then play the audio from a YouTube URL. This is the code I have:


@client.command()
async def play(ctx):
    channel = ctx.message.author.voice.channel
    voice_client = await channel.connect()

    opts = {'format': 'bestaudio'}
    FFMPEG_OPTIONS = {'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5', 'options': '-vn'}
    with youtube_dl.YoutubeDL(opts) as ydl:
        song_info = ydl.extract_info('video', download=False)
        URL = song_info['formats'][0]['url']
        voice_client.play(FFmpegPCMAudio(URL, **FFMPEG_OPTIONS))



No errors are thrown, but it just logs [youtube] video: Downloading webpage
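That log line is the clue: extract_info() is being given the literal string 'video' rather than the URL the user supplied, so youtube-dl goes off to resolve a page called "video". A minimal sketch of the intended lookup (the helper name is mine, and youtube_dl is imported inside the function so the sketch can be defined without the package installed):

```python
def resolve_audio_url(page_url):
    """Resolve a YouTube page URL to a direct bestaudio stream URL."""
    import youtube_dl  # deferred import; yt-dlp works as a drop-in replacement

    opts = {'format': 'bestaudio'}
    with youtube_dl.YoutubeDL(opts) as ydl:
        # Pass the caller's URL here -- the question's code passed the
        # literal string 'video' instead, hence the misleading log line.
        info = ydl.extract_info(page_url, download=False)
    return info['formats'][0]['url']
```

In the command itself, the URL would then come in as a parameter, e.g. `async def play(ctx, url):` followed by `voice_client.play(FFmpegPCMAudio(resolve_audio_url(url), **FFMPEG_OPTIONS))`.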


-
Detect volume via mic, start recording, end on silence, transcribe and send to endpoint
15 June 2023, by alphadmon
I have been attempting to get this to work in many ways, but I can't seem to get it right. Most of the time I get part of it working, and then when I try to make the other parts work, I generally break something else.


I am intercepting the volume coming from the mic, and if it is louder than 50 I start a recording. I then keep recording until there is silence; once the silence reaches 5 seconds, I stop the recording.


I then send the recording to be transcribed by whisper using the OpenAI API.

Once that is returned, I then want to send it to the OpenAI chat endpoint and get the response.


After that, I would like to start listening again.


Here is what I have that sort of works so far, but the recording is always an empty file:


// DETECT SPEECH
const recorder = require('node-record-lpcm16');

// TRANSCRIBE
const fs = require("fs");
const ffmpeg = require("fluent-ffmpeg");
const mic = require("mic");
const { Readable } = require("stream");
const ffmpegPath = require("@ffmpeg-installer/ffmpeg").path;
require('dotenv').config();

// CHAT
const { Configuration, OpenAIApi } = require("openai");

// OPEN AI
const configuration = new Configuration({
  organization: process.env.OPENAI_ORG,
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

// SETUP
ffmpeg.setFfmpegPath(ffmpegPath);

// VARS
let isRecording = false;
const audioFilename = 'recorded_audio.wav';
const micInstance = mic({
  rate: '16000',
  channels: '1',
  fileType: 'wav',
});

// DETECT SPEECH
const file = fs.createWriteStream('determine_speech.wav', { encoding: 'binary' });
const recording = recorder.record();
recording.stream().pipe(file);


recording.stream().on('data', async (data) => {
  let volume = parseInt(calculateVolume(data));
  if (volume > 50 && !isRecording) {
    console.log('You are talking.');
    await recordAudio(audioFilename);
  } else {
    setTimeout(async () => {
      console.log('You are quiet.');
      micInstance.stop();
      console.log('Finished recording');
      const transcription = await transcribeAudio(audioFilename);
      console.log('Transcription:', transcription);
      setTimeout(async () => {
        await askAI(transcription);
      }, 5000);
    }, 5000);
  }
});

function calculateVolume(data) {
  let sum = 0;

  for (let i = 0; i < data.length; i += 2) {
    const sample = data.readInt16LE(i);
    sum += sample * sample;
  }

  const rms = Math.sqrt(sum / (data.length / 2));

  return rms;
}

// TRANSCRIBE
function recordAudio(filename) {
  const micInputStream = micInstance.getAudioStream();
  const output = fs.createWriteStream(filename);
  const writable = new Readable().wrap(micInputStream);

  console.log('Listening...');

  writable.pipe(output);

  micInstance.start();

  micInputStream.on('error', (err) => {
    console.error(err);
  });
}

// Transcribe audio
async function transcribeAudio(filename) {
  const transcript = await openai.createTranscription(
    fs.createReadStream(filename),
    "whisper-1",
  );
  return transcript.data.text;
}

// CHAT
async function askAI(text) {
  let completion = await openai.createChatCompletion({
    model: "gpt-4",
    temperature: 0.2,
    stream: false,
    messages: [
      { role: "user", content: text },
      { role: "system", content: "Act like you are a rude person." }
    ],
  });

  completion = JSON.stringify(completion.data, null, 2);
  console.log(completion);
}
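One way to narrow down the empty-file problem is to check each piece in isolation. The RMS volume helper, for instance, can be sanity-checked on a synthetic buffer (the sample values below are made up); if it behaves correctly, the fault more likely lies in node-record-lpcm16 and mic both trying to hold the same input device:

```javascript
// Standalone check of the RMS volume helper from the snippet above:
// 16-bit little-endian PCM samples, root mean square over the buffer.
function calculateVolume(data) {
  let sum = 0;
  for (let i = 0; i < data.length; i += 2) {
    const sample = data.readInt16LE(i);
    sum += sample * sample;
  }
  return Math.sqrt(sum / (data.length / 2));
}

// A constant signal of amplitude 1000 has an RMS of exactly 1000.
const buf = Buffer.alloc(8);
for (let i = 0; i < buf.length; i += 2) buf.writeInt16LE(1000, i);
console.log(calculateVolume(buf)); // 1000
```

Running this under plain Node prints 1000, confirming the math; the recording path can then be tested separately by writing the raw mic stream straight to a file before any volume logic is involved.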