
Media (91)

Other articles (95)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is immediately operational; no configuration step is therefore required.

  • APPENDIX: Plugins used specifically for the farm

    5 March 2010

    To work properly, the central/master site of the farm needs several plugins in addition to those used by the channels: the Gestion de la mutualisation plugin; the inscription3 plugin, which manages registrations and requests to create a shared instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • Enhancing it visually

    10 April 2011

    MediaSPIP is based on a system of themes and templates (squelettes). The templates define where information is placed on the page, thereby defining a specific use of the platform, while the themes define the overall graphic design.
    Anyone can propose a new graphic theme or template and make it available to the community.

On other sites (6487)

  • Detect volume via mic, start recording, end on silence, transcribe and send to endpoint

    15 June 2023, by alphadmon

    I have been attempting to get this to work in many ways, but I can't seem to get it right. Most of the time I get one part of it working, and then when I try to make the other parts work, I generally break something else.

    


    I intercept the volume coming from the mic, and if it is louder than 50 I start a recording. I then keep recording until there is silence; once the silence lasts 5 seconds, I stop the recording.
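    One way to express that logic (just a sketch, separate from the full code below; the threshold of 50 and the 5-second window are the values described above, and startCapture/stopCapture are placeholder functions) is a timer that restarts on every loud chunk, so it only fires after 5 seconds of uninterrupted quiet:

// Sketch only: the 5-second timer restarts on every loud chunk, so it
// fires only after 5 s of continuous silence. startCapture/stopCapture
// are placeholders for the actual recording logic.
let silenceTimer = null;
let capturing = false;

function onAudioChunk(volume) {
    if (volume > 50) {
        if (!capturing) {
            capturing = true;
            startCapture(); // placeholder: begin writing audio to a file
        }
        clearTimeout(silenceTimer); // speech detected: cancel any pending stop
        silenceTimer = setTimeout(() => {
            capturing = false;
            stopCapture(); // placeholder: runs only after 5 s of quiet
        }, 5000);
    }
}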

    


    I then send the recording to be transcribed by Whisper using the OpenAI API.

    


    Once that is returned, I then want to send it to the OpenAI chat endpoint and get the response.

    


    After that, I would like to start listening again.

    


    Here is what I have that is sort of working so far, but the recording is always an empty file:

    


    // DETECT SPEECH
const recorder = require('node-record-lpcm16');

// TRANSCRIBE
const fs = require("fs");
const ffmpeg = require("fluent-ffmpeg");
const mic = require("mic");
const { Readable } = require("stream");
const ffmpegPath = require("@ffmpeg-installer/ffmpeg").path;
require('dotenv').config();

// CHAT
const { Configuration, OpenAIApi } = require("openai");

// OPEN AI
const configuration = new Configuration({
    organization: process.env.OPENAI_ORG,
    apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

// SETUP
ffmpeg.setFfmpegPath(ffmpegPath);

// VARS
let isRecording = false;
const audioFilename = 'recorded_audio.wav';
const micInstance = mic({
    rate: '16000',
    channels: '1',
    fileType: 'wav',
});

// DETECT SPEECH
const file = fs.createWriteStream('determine_speech.wav', { encoding: 'binary' });
const recording = recorder.record();
recording.stream().pipe(file);


recording.stream().on('data', async (data) => {
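    // For each chunk: above the volume threshold, start a recording; otherwise,
    // after a 5-second delay, stop the mic, transcribe the file and send the text to the chat endpoint.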
    let volume = parseInt(calculateVolume(data));
    if (volume > 50 && !isRecording) {
        console.log('You are talking.');
        await recordAudio(audioFilename);
    } else {
        setTimeout(async () => {
            console.log('You are quiet.');
            micInstance.stop();
            console.log('Finished recording');
            const transcription = await transcribeAudio(audioFilename);
            console.log('Transcription:', transcription);
            setTimeout(async () => {
                await askAI(transcription);
            }, 5000);
        }, 5000);
    }
});

function calculateVolume(data) {
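    // Rough loudness estimate: root mean square of the 16-bit little-endian PCM samples in this chunk.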
    let sum = 0;

    for (let i = 0; i < data.length; i += 2) {
        const sample = data.readInt16LE(i);
        sum += sample * sample;
    }

    const rms = Math.sqrt(sum / (data.length / 2));

    return rms;
}

// TRANSCRIBE
function recordAudio(filename) {
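    // Pipe the shared mic stream into the target WAV file and start the mic.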
    const micInputStream = micInstance.getAudioStream();
    const output = fs.createWriteStream(filename);
    const writable = new Readable().wrap(micInputStream);

    console.log('Listening...');

    writable.pipe(output);

    micInstance.start();

    micInputStream.on('error', (err) => {
        console.error(err);
    });
}

// Transcribe audio
async function transcribeAudio(filename) {
    const transcript = await openai.createTranscription(
        fs.createReadStream(filename),
        "whisper-1",
    );
    return transcript.data.text;
}

// CHAT
async function askAI(text) {
    let completion = await openai.createChatCompletion({
        model: "gpt-4",
        temperature: 0.2,
        stream: false,
        messages: [
            { role: "user", content: text },
            { role: "system", content: "Act like you are a rude person." }
        ],
    });

    completion = JSON.stringify(completion.data, null, 2);
    console.log(completion);
}


    


  • FFmpeg for AWS S3 bucket signed URL not working in Node.js

    21 August 2018, by ahmed sharief

    I am trying to create thumbnails from an Amazon S3 bucket signed URL. I am able to generate thumbnails when I run the command in a terminal:

    ffmpeg -ss 00:00:02 -i "https://test-s3-bucket.s3.ap-south-1.amazonaws.com/user_gallery_assets/5b6936069ac2bf0602085367/gallery/images/5b7be08527641dee8c1f8134.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=xxxxxxxxx%2F20180821%2Fap-south-1%2Fs3%2Faws4_request&X-Amz-Date=20180821T095101Z&X-Amz-Expires=900&X-Amz-Signature=d7f81f4eed3d6c87c04dc1b0ad06beeb946afa33d417585f57fad72aeadb3ac0&X-Amz-SignedHeaders=host" -vframes 1 -q:v 2 -f image2 output.jpg

    I am running the above command in a terminal and it works fine, but when I try to implement the same thing in Node.js it shows a "no such file or directory" error. Even though I wrap the URL in double quotes, it still shows the same error. Here is my Node.js code (see also the sketch after the error output below):

    function uploadThumbNailForVideo(obj,url, thumb_url){
       return new Promise((resolve, reject) => resolve(url))
       .then((url) => awsHelper.getImage(url))
       .then((result) => {

           var resUrl = "\""+result+"\"";

           var args = [
               '-i', resUrl,
               '-ss', '00:00:02',
               '-vframes', '1',
               '-f','image2',
               'output.jpg'
           ]
           //console.log(args)
           var ffmpeg = require('child_process').spawn('ffmpeg', args);

           ffmpeg.on('error', function (err) {
               console.log(err);
           });

           ffmpeg.on('close', function (code) {

           });

           ffmpeg.stderr.on('data', function (data) {
               var tData = data.toString('utf8');
               var a = tData.split('\n');
               console.log("A",a);
           });

           ffmpeg.stdout.on('data', function (data) {
               //
           });


       });

    }

    and I am getting the following error:

    A [ 'ffmpeg version 4.0.2 Copyright (c) 2000-2018 the FFmpeg
    developers' ]
    A [ '',
     '  built with Apple LLVM version 9.1.0 (clang-902.0.39.2)',
     '  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.0.2 --enable-
    shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --
    enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-
    gpl --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-
    opencl --enable-videotoolbox --disable-lzma',
      '  libavutil      56. 14.100 / 56. 14.100',
      '  libavcodec     58. 18.100 / 58. 18.100',
     '  libavformat    58. 12.100 / 58. 12.100',
     '  libavdevice    58.  3.100 / 58.  3.100',
     '  libavfilter     7. 16.100 /  7. 16.100',
     '  libavresample   4.  0.  0 /  4.  0.  0',
     '  libswscale      5.  1.100 /  5.  1.100',
     '  libswresample   3.  1.100 /  3.  1.100',
     '  libpostproc    55.  1.100 / 55.  1.100',
     '"https://test-s3-bucket.s3.ap-south-1.amazonaws.com/user_gallery_assets/5b6936069ac2bf0602085367/gallery/images/5b7be08527641dee8c1f8134.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=xxxxxxxxx%2F20180821%2Fap-south-1%2Fs3%2Faws4_request&X-Amz-Date=20180821T095101Z&X-Amz-Expires=900&X-Amz-Signature=d7f81f4eed3d6c87c04dc1b0ad06beeb946afa33d417585f57fad72aeadb3ac0&X-Amz-SignedHeaders=host": No such file or directory',
     '' ]

    ffmpeg exited with code 1
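    Regarding the quoting above: child_process.spawn passes each argument directly to the program, without going through a shell, so the URL should be passed as a plain string; literal double quotes become part of the path ffmpeg tries to open, which is what the error above shows. A minimal sketch (the URL and output name here are placeholders):

    const { spawn } = require('child_process');

    // Placeholder signed URL; pass it as-is, with no added quotes.
    const signedUrl = 'https://test-s3-bucket.s3.ap-south-1.amazonaws.com/video.mp4?X-Amz-Signature=...';

    const args = ['-ss', '00:00:02', '-i', signedUrl, '-vframes', '1', '-q:v', '2', '-f', 'image2', 'output.jpg'];
    const ffmpeg = spawn('ffmpeg', args);

    ffmpeg.stderr.on('data', (data) => process.stderr.write(data)); // ffmpeg writes its log to stderr
    ffmpeg.on('close', (code) => console.log('ffmpeg exited with code', code));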
  • How to generate video as fast as possible with subtitles and audio on Node.js + ffmpeg?

    12 September 2018, by DSeregin

    Intro:

    We receive some pieces of text from the site; the pieces arrive at a Node.js server.

    The output needs to be a video merged from all the pieces of text, voiced by a machine voice, with subtitles and a background audio track added, so that the user can share this video on social networks. The MKV format is not supported by VK.com.

    The options we have tried:
    1. Get all the text at once, generate the entire speech, create a subtitle file and burn the subtitles into the .mp4 video (vk.com does not support the .mkv container). This took 12 seconds of processing for a 45-second video on the local computer.
    2. Generate audio and video files for each piece of text (with subtitles added), which takes about one second per piece. On the final request we merge all the pieces together; that last (merging) request took 2-3 seconds, which is already bearable.

    The second variant looks acceptable in terms of speed, but when 50 clients run at the same time, the computer (tested on a MacBook Pro 2013, 2.4 GHz i7, 8 GB 1600 MHz DDR3, 256 GB SSD) processes only 1 piece from 1 client in 60 seconds (60 times slower), and then it hangs completely.

    The commands we used:

    • Burn the subtitles into the video and trim it to a nominal 6 seconds (in the code we pass a unix timestamp)

    ffmpeg -i import/back.mov -i export_0/tmp.srt -scodec mov_text -t 6 export_0/output.mov

    • Merging all audio

    ffmpeg -i audio1.mp3 .... -i audio15.mp3 merged.mp3

    • Overlay the background audio track (substrate) on the speech

    ffmpeg -i merged.mp3 -i back.mp3 -filter_complex amerge -ac 2 -c:a libmp3lame -q:a 4 -shortest audio.mp3

    • Merging all videos (the video.txt list format is sketched after these commands)

    ffmpeg -i video.txt -f concat -c copy video.mp4

    • Overlay audio on video

    ffmpeg -i audio.mp3 -i video.mp4 -i test.mp4 -i export/output.mp3 -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 -shortest output.mp4
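    For reference, the video.txt read by the concat command above is a plain text list of the clips to join; a minimal sketch is shown below (the filenames are placeholders). Note that -f concat is an input option, so it is normally written before -i video.txt:

    # video.txt: one 'file' line per clip; all clips must share identical codecs and parameters
    file 'export_0/output.mov'
    file 'export_1/output.mov'
    file 'export_2/output.mov'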

    The questions that torment us:

    1. Can this be done faster?

    2. Can we use other codecs or concatenation methods that avoid re-encoding?

    3. Should we call ffmpeg directly, without a wrapper? (In practice this saves 50-100 ms.)

    4. Should we avoid saving to disk, write the data to streams instead, and glue them together at the end?