Other articles (61)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields.
    To use it, simply enable the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
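
    On the client side, that configuration boils down to the standard Chosen initialization. A minimal sketch, assuming jQuery and the Chosen library are already loaded on the public site (the options shown are illustrative, not from the article):

    // Minimal sketch: enhance every multiple-select list with Chosen,
    // assuming jQuery and chosen.jquery.js are loaded on the page.
    jQuery(function ($) {
        $('select[multiple]').chosen({
            width: '100%'   // illustrative option, not part of the plugin config above
        });
    });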

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)
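
    A minimal sketch of that HTML5-first, Flash-fallback pattern, assuming a video element with id "player", a clip at "clip.mp4", and the classic Flowplayer 3 Flash API (flowplayer.js plus flowplayer.swf); the element id and file names are illustrative, not MediaSPIP's actual markup:

    // Feature-detect HTML5 video; fall back to the Flash player otherwise.
    var probe = document.createElement('video');
    var hasHtml5 = !!(probe.canPlayType && probe.canPlayType('video/mp4'));

    if (!hasHtml5) {
        // Flowplayer 3 Flash API: flowplayer(elementId, swfUrl, config).
        // Assumes flowplayer.js has been loaded by the page.
        flowplayer('player', 'flowplayer.swf', {
            clip: { url: 'clip.mp4' }
        });
    }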

On other sites (8252)

  • lavc/libopenh264enc: add bit rate control select support

    29 April 2020, by Linjie Fu
    lavc/libopenh264enc: add bit rate control select support
    

    RC_BITRATE_MODE:
    set BITS_EXCEEDED to iCurrentBitsLevel and allow QP adjustment
    in RcCalculatePictureQp().

    RC_BUFFERBASED_MODE:
    use buffer status to adjust the video quality.

    RC_TIMESTAMP_MODE:
    bit rate control based on timestamp, introduced in release 1.4.

    Defaults to RC_QUALITY_MODE.

    Signed-off-by: Linjie Fu <linjie.fu@intel.com>
    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DH] libavcodec/libopenh264enc.c
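
    Assuming the commit exposes these modes as a libopenh264 private option (the option and value names below are an assumption, not confirmed by the commit message; check `ffmpeg -h encoder=libopenh264`), a hypothetical invocation from Node.js could look like:

    // Hypothetical: select the bitrate rate-control mode when encoding with
    // libopenh264. "-rc_mode" and its values (quality, bitrate, buffer,
    // timestamp) are assumptions mapping to the RC_*_MODE constants above.
    const { spawn } = require('child_process');

    const ff = spawn('ffmpeg', [
        '-i', 'input.mkv',
        '-c:v', 'libopenh264',
        '-rc_mode', 'bitrate',   // assumed option exposing RC_BITRATE_MODE
        '-b:v', '2M',
        'output.mp4',
    ]);
    ff.stderr.pipe(process.stderr);   // ffmpeg writes its log to stderr
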
  • What determines the number of CPU cores used by the filter select=gt(scene,0.1)?

    15 December 2022, by kocoten1992

    I've notice that when using filter gt(scene,0.1), for example :

    &#xA;

    ffmpeg -i big_buck_bunny.mkv -filter:v "select=&#x27;gt(scene,0.1)&#x27;,showinfo" -f null -

    &#xA;

    Depending on the video, the number of CPU cores used varies extremely (sometimes 3 cores are used, other times 12 cores for a different video).

    I'd like to ask what determines that logic.

    I tried to read the ffmpeg source code but I'm not familiar with it; a general explanation would be enough, but I'd much appreciate it if you could point out the line/directory that determines this logic in https://github.com/FFmpeg/FFmpeg.

    (Also, I'm not asking how to reduce CPU usage; I'm interested in the logic that determines it.)
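
    One way to probe this empirically (a sketch, assuming the variation comes from libavcodec's threaded decoding rather than from the select filter itself, which evaluates the scene score on a single thread) is to pin the decoder thread count and compare runs:

    // Run the same filter chain with the decoder pinned to 1 thread and
    // with automatic threading, and compare wall-clock times.
    // "big_buck_bunny.mkv" is the sample file from the question.
    const { spawnSync } = require('child_process');

    for (const threads of ['1', '0']) {            // '0' = let ffmpeg decide
        const t0 = process.hrtime.bigint();
        spawnSync('ffmpeg', [
            '-threads', threads,                   // decoder thread count
            '-i', 'big_buck_bunny.mkv',
            '-filter:v', "select='gt(scene,0.1)',showinfo",
            '-f', 'null', '-',
        ], { stdio: 'ignore' });
        const secs = Number(process.hrtime.bigint() - t0) / 1e9;
        console.log(`threads=${threads}: ${secs.toFixed(1)}s`);
    }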

  • Detect volume via mic, start recording, end on silence, transcribe and send to endpoint

    15 June 2023, by alphadmon

    I have been attempting to get this to work in many ways, but I can't seem to get it right. Most of the time I get part of it working, and then when I try to make the other parts work, I generally break something else.

    I am intercepting the volume coming from the mic, and if it is louder than 50, I start a recording. I then keep recording until there is silence; once the silence has lasted 5 seconds, I stop the recording.

    I then send the recording to be transcribed by Whisper using the OpenAI API.

    Once that is returned, I want to send it to the OpenAI chat endpoint and get the response.

    After that, I would like to start listening again.

    Here is what I have sort of working so far, but the recording is always an empty file:

    // DETECT SPEECH
    const recorder = require('node-record-lpcm16');

    // TRANSCRIBE
    const fs = require("fs");
    const ffmpeg = require("fluent-ffmpeg");
    const mic = require("mic");
    const { Readable } = require("stream");
    const ffmpegPath = require("@ffmpeg-installer/ffmpeg").path;
    require('dotenv').config();

    // CHAT
    const { Configuration, OpenAIApi } = require("openai");

    // OPEN AI
    const configuration = new Configuration({
        organization: process.env.OPENAI_ORG,
        apiKey: process.env.OPENAI_API_KEY,
    });
    const openai = new OpenAIApi(configuration);

    // SETUP
    ffmpeg.setFfmpegPath(ffmpegPath);

    // VARS
    let isRecording = false;
    const audioFilename = 'recorded_audio.wav';
    const micInstance = mic({
        rate: '16000',
        channels: '1',
        fileType: 'wav',
    });

    // DETECT SPEECH
    const file = fs.createWriteStream('determine_speech.wav', { encoding: 'binary' });
    const recording = recorder.record();
    recording.stream().pipe(file);

    recording.stream().on('data', async (data) => {
        let volume = parseInt(calculateVolume(data));
        if (volume > 50 && !isRecording) {
            console.log('You are talking.');
            await recordAudio(audioFilename);
        } else {
            setTimeout(async () => {
                console.log('You are quiet.');
                micInstance.stop();
                console.log('Finished recording');
                const transcription = await transcribeAudio(audioFilename);
                console.log('Transcription:', transcription);
                setTimeout(async () => {
                    await askAI(transcription);
                }, 5000);
            }, 5000);
        }
    });

    function calculateVolume(data) {
        let sum = 0;

        for (let i = 0; i < data.length; i += 2) {
            const sample = data.readInt16LE(i);
            sum += sample * sample;
        }

        const rms = Math.sqrt(sum / (data.length / 2));

        return rms;
    }

    // TRANSCRIBE
    function recordAudio(filename) {
        const micInputStream = micInstance.getAudioStream();
        const output = fs.createWriteStream(filename);
        const writable = new Readable().wrap(micInputStream);

        console.log('Listening...');

        writable.pipe(output);

        micInstance.start();

        micInputStream.on('error', (err) => {
            console.error(err);
        });
    }

    // Transcribe audio
    async function transcribeAudio(filename) {
        const transcript = await openai.createTranscription(
            fs.createReadStream(filename),
            "whisper-1",
        );
        return transcript.data.text;
    }

    // CHAT
    async function askAI(text) {
        let completion = await openai.createChatCompletion({
            model: "gpt-4",
            temperature: 0.2,
            stream: false,
            messages: [
                { role: "user", content: text },
                { role: "system", content: "Act like you are a rude person." }
            ],
        });

        completion = JSON.stringify(completion.data, null, 2);
        console.log(completion);
    }

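    For comparison, a minimal sketch of the silence-gating flow the question describes: start once the volume exceeds 50, and stop only after 5 continuous seconds of quiet, by resetting a single timer on every loud chunk instead of scheduling a stop on every quiet chunk. It reuses calculateVolume, recordAudio, transcribeAudio and askAI from the code above; whether the mic module tolerates stop()/start() cycles on one instance is an assumption to verify:

    // One timer tracks the silence deadline; every loud chunk cancels it.
    let silenceTimer = null;

    recording.stream().on('data', (data) => {
        const volume = calculateVolume(data);
        if (volume > 50) {
            if (!isRecording) {
                isRecording = true;
                console.log('You are talking.');
                recordAudio(audioFilename);   // starts micInstance
            }
            clearTimeout(silenceTimer);       // still talking: push deadline back
            silenceTimer = null;
        } else if (isRecording && !silenceTimer) {
            silenceTimer = setTimeout(async () => {
                isRecording = false;
                silenceTimer = null;
                micInstance.stop();           // 5 s of silence: close the file
                console.log('Finished recording');
                const transcription = await transcribeAudio(audioFilename);
                console.log('Transcription:', transcription);
                await askAI(transcription);   // the handler then keeps listening
            }, 5000);
        }
    });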