Advanced search

Media (1)

Word: - Tags -/stallman

Other articles (68)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Making files available

    14 April 2011, by

    By default, on initialisation, MediaSPIP does not allow visitors to download files, whether originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is possible and easy to let visitors access these documents, in several different forms.
    All of this is handled in the skeleton configuration page. You need to go to the channel's administration area and choose in the navigation (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    For a working installation, all software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

On other sites (9675)

  • rtmp : Rename packet types to closer match the spec

    31 January 2017, by Martin Storsjö
    Also rename comments and log messages accordingly,
    and add clarifying comments for some hardcoded values.

    The previous names were taken from older, reverse engineered
    references.

    These names match the official public rtmp specification, and
    match the names used by wirecast in annotating captured
    streams. These names also avoid hardcoding the roles of server
    and client, since the handling of them is the same regardless of
    whether we act as server or client.

    The RTMP_PT_PING type maps to RTMP_PT_USER_CONTROL.

    The SERVER_BW and CLIENT_BW types are a bit more intertwined;
    RTMP_PT_SERVER_BW maps to RTMP_PT_WINDOW_ACK_SIZE and
    RTMP_PT_CLIENT_BW maps to RTMP_PT_SET_PEER_BW.

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DBH] libavformat/rtmppkt.c
    • [DBH] libavformat/rtmppkt.h
    • [DBH] libavformat/rtmpproto.c
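The rename described in the commit can be summarised in a small sketch. The numeric message-type IDs below come from the public RTMP specification; the new constant names are the ones the commit introduces, with the old reverse-engineered names noted in comments:

```javascript
// Spec-aligned RTMP packet type names mapped to the message type IDs
// from the public RTMP specification. The old ffmpeg names are noted
// in the comments.
const RTMP_PACKET_TYPES = {
    RTMP_PT_USER_CONTROL: 4,    // previously RTMP_PT_PING
    RTMP_PT_WINDOW_ACK_SIZE: 5, // previously RTMP_PT_SERVER_BW
    RTMP_PT_SET_PEER_BW: 6,     // previously RTMP_PT_CLIENT_BW
};

console.log(RTMP_PACKET_TYPES.RTMP_PT_WINDOW_ACK_SIZE); // 5
```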
  • Detect volume via mic, start recording, end on silence, transcribe and send to endpoint

    15 June 2023, by alphadmon

    I have been attempting to get this to work in many ways, but I can't seem to get it right. Most of the time I get part of it working, and then when I try to make the other parts work, I generally break something else.


    I am intercepting the volume coming from the mic and if it is louder than 50, I start a recording. I then keep recording until there is a silence, if the silence is equal to 5 seconds I then stop the recording.
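The loudness check described here is typically a root-mean-square (RMS) over the PCM samples in each chunk from the mic. A minimal sketch, assuming mono 16-bit little-endian samples (which is what the mic is configured for below):

```javascript
// RMS loudness of a buffer of 16-bit little-endian PCM samples.
// (Assumption: mono 16-bit LE audio, matching the mic configuration.)
function rms(buf) {
    let sum = 0;
    for (let i = 0; i + 1 < buf.length; i += 2) {
        const sample = buf.readInt16LE(i);
        sum += sample * sample;
    }
    return Math.sqrt(sum / (buf.length / 2));
}

// Four samples of amplitude 100 give an RMS of exactly 100.
const buf = Buffer.alloc(8);
[100, -100, 100, -100].forEach((v, i) => buf.writeInt16LE(v, i * 2));
console.log(rms(buf)); // 100
```

The threshold of 50 is then just a comparison against this RMS value; silence is an RMS that stays below the threshold for the chosen timeout.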


    I then send the recording to be transcribed by whisper using OpenAI API.


    Once that is returned, I then want to send it to the open ai chat end point and get the response.


    After that, I would like to start listening again.


    Here is what I have that is sort of working so far, but the recording is always an empty file:


    // DETECT SPEECH
    const recorder = require('node-record-lpcm16');

    // TRANSCRIBE
    const fs = require("fs");
    const ffmpeg = require("fluent-ffmpeg");
    const mic = require("mic");
    const { Readable } = require("stream");
    const ffmpegPath = require("@ffmpeg-installer/ffmpeg").path;
    require('dotenv').config();

    // CHAT
    const { Configuration, OpenAIApi } = require("openai");

    // OPEN AI
    const configuration = new Configuration({
        organization: process.env.OPENAI_ORG,
        apiKey: process.env.OPENAI_API_KEY,
    });
    const openai = new OpenAIApi(configuration);

    // SETUP
    ffmpeg.setFfmpegPath(ffmpegPath);

    // VARS
    let isRecording = false;
    const audioFilename = 'recorded_audio.wav';
    const micInstance = mic({
        rate: '16000',
        channels: '1',
        fileType: 'wav',
    });

    // DETECT SPEECH
    const file = fs.createWriteStream('determine_speech.wav', { encoding: 'binary' });
    const recording = recorder.record();
    recording.stream().pipe(file);

    recording.stream().on('data', async (data) => {
        let volume = parseInt(calculateVolume(data));
        if (volume > 50 && !isRecording) {
            console.log('You are talking.');
            await recordAudio(audioFilename);
        } else {
            setTimeout(async () => {
                console.log('You are quiet.');
                micInstance.stop();
                console.log('Finished recording');
                const transcription = await transcribeAudio(audioFilename);
                console.log('Transcription:', transcription);
                setTimeout(async () => {
                    await askAI(transcription);
                }, 5000);
            }, 5000);
        }
    });

    function calculateVolume(data) {
        let sum = 0;

        for (let i = 0; i < data.length; i += 2) {
            const sample = data.readInt16LE(i);
            sum += sample * sample;
        }

        const rms = Math.sqrt(sum / (data.length / 2));

        return rms;
    }

    // TRANSCRIBE
    function recordAudio(filename) {
        const micInputStream = micInstance.getAudioStream();
        const output = fs.createWriteStream(filename);
        const writable = new Readable().wrap(micInputStream);

        console.log('Listening...');

        writable.pipe(output);

        micInstance.start();

        micInputStream.on('error', (err) => {
            console.error(err);
        });
    }

    // Transcribe audio
    async function transcribeAudio(filename) {
        const transcript = await openai.createTranscription(
            fs.createReadStream(filename),
            "whisper-1",
        );
        return transcript.data.text;
    }

    // CHAT
    async function askAI(text) {
        let completion = await openai.createChatCompletion({
            model: "gpt-4",
            temperature: 0.2,
            stream: false,
            messages: [
                { role: "user", content: text },
                { role: "system", content: "Act like you are a rude person." }
            ],
        });

        completion = JSON.stringify(completion.data, null, 2);
        console.log(completion);
    }


  • mp4 Vj Animation video lagging hi res video

    21 February 2020, by Ryan Stone

    I am trying to get a video to play inside a video tag in the top left-hand corner of my page. It loads OK, the resolution is good and it seems to be looping, but it is lagging badly, definitely not achieving 60fps. It is in mp4 format and the original mp4's resolution is 1920x1080. It is a hi-res free VJ loop called GlassVein; you can find it if you search on YouTube. Right-clicking Properties shows the following information:

    Bitrate: 127kbps
    Data rate: 11270kbps
    Total bitrate: 11398kbps
    Audio sample rate: 44kHz
    Filetype: VLC media file (.mp4)
    (but I do not want or need the audio)

    It also says 30fps, but I'm not sure I believe this, as it runs smooth as butter in VLC media player: no lagging, just a smooth looping animation.

    I have searched https://trac.ffmpeg.org/wiki/Encode/AAC for encoding information, but it is complete gobbledygook to me; I don't understand a word it's saying.

    My code so far is as follows:

       <video src="GlassVeinColorful.mp4" autoplay="1" preload="auto" class="Vid" width="640" height="360" loop="1" viewport="" faststart="faststart" mpeg4="mpeg4" 320x240="320x240" 1080="1080" 128k="128k">  
       </video>

    Does anyone know why this is lagging so much, or what I could do about it?
    It is a quality animation and I don't really want to lose any of its resolution or crispness. The -s section was originally set to 1920x1080, as this is what the original file is, but I have changed it to try and render it quicker...

    Any helpful sites, articles or answers would be great..

    2020 Update

    The solution to this problem was to convert the video to WebM, then use JavaScript and an HTML5 canvas element to render the video to the page instead of using the video tag to embed it.

    Html

    <section>
           <video src="Imgs/Vid/PurpGlassVein.webm" type="video/webm" width="684" height="auto" muted="muted" loop="loop" autoplay="autoplay">
           </video>
           <canvas style="filter:opacity(0);"></canvas>
    </section>

    Css

    video{
      display:none !important;
      visibility:hidden;
    }

    Javascript

       const Canv = document.querySelector("canvas");
       const Video = document.querySelector("video");
       const Ctx = Canv.getContext("2d");

       Video.addEventListener('play',()=>{
         function step() {
           Ctx.drawImage(Video, 0, 0, Canv.width, Canv.height)
           requestAnimationFrame(step)
         }
         requestAnimationFrame(step);
       })

       Canv.animate({
           filter: ['opacity(0) blur(5.28px)','opacity(1) blur(8.20px)']
       },{
           duration: 7288,
           fill: 'forwards',
           easing: 'ease-in',
           iterations: 1,
           delay: 728
       })

    I've also used the vanilla JavaScript .animate() API to fade the element into the page when the page loads. One caveat is that both the canvas and the off-screen video tag must match the original video's resolution, otherwise it starts to lag again; however, you can use CSS to scale it down via transform: scale(0.5), which doesn't seem to affect performance at all.

    It runs smooth as butter and doesn't lose any of the high-resolution image.
    I also added a slight 0.34px blur to smooth it even more.

    I possibly could still have used ffmpeg to get a better [smaller file size] WebM output file, but that's something I'll have to look into at a later date.