Advanced search

Media (17)

Word: - Tags -/wired

Other articles (78)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images below for a comparison.
    To use it, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)
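
    Under the hood, Chosen is a jQuery plugin, so the configuration above ultimately amounts to a call like the following sketch (assuming jQuery and the Chosen script are already loaded on the public site; the options shown come from the Chosen documentation):

    // Minimal sketch of what enabling Chosen on the public site amounts to,
    // assuming jQuery and chosen.jquery.js are already loaded by the plugin.
    declare const jQuery: any;

    jQuery(() => {
      // The selector mirrors the plugin configuration, e.g. "select[multiple]".
      jQuery('select[multiple]').chosen({
        width: '100%',                          // standard Chosen option
        no_results_text: 'No results matched',  // standard Chosen option
      });
    });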

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites for publishing documents of all types.
    It creates "media": a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;
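
    A rough sketch of the data model this rule implies, in TypeScript (hypothetical names; the actual plugin is SPIP/PHP):

    // Hypothetical model of an Emballe médias "media": a SPIP article created
    // automatically at upload time, linked to exactly one document.
    type MediaKind = 'audio' | 'video' | 'image' | 'text';

    interface MediaDocument {
      id: number;
      kind: MediaKind;
      filename: string;
    }

    interface MediaArticle {
      id: number;
      title: string;
      // Exactly one document per "media" article, per the plugin's rule.
      document: MediaDocument;
    }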

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

On other sites (6388)

  • Real Time indoor streaming and music mixing

    9 November 2015, by Saneet

    I am working on a project where we are doing a live performance with about 6 musicians placed away from each other in a big space. The audience will wear headphones, and as they move around we want them to hear different kinds of effects in different areas of the venue. We are using Bluetooth beacons to calculate each user's position. We expect around 100 users, and we can't have a latency of more than 2 seconds.

    Is such a setup possible?

    The current way we're thinking of implementing this is to divide the venue into about 30 sections.
    On the server we'll take the input from all the musicians, mix a different stream for every section, and stream each one over the local WLAN using the RTP protocol.
    We'll have Android and iOS apps that locate users via the Bluetooth beacons and switch between the live streams accordingly.
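
    A rough sketch of the client-side switching logic, in TypeScript (the beacon IDs, addresses, and the player callback are assumptions, not a real SDK):

    // Hypothetical sketch: map the nearest beacon to a section, and the section
    // to its RTP stream; restart playback only when the section actually changes.
    const beaconToSection: Record<string, number> = { 'beacon-01': 0, 'beacon-02': 1 /* ... */ };
    const sectionToStream = (section: number) => `rtp://192.168.1.10:${5000 + 2 * section}`;

    let currentSection = -1;

    function onNearestBeaconChanged(beaconId: string, play: (url: string) => void) {
      const section = beaconToSection[beaconId];
      if (section === undefined || section === currentSection) return;
      currentSection = section;
      play(sectionToStream(section)); // tear down the old stream, start the new one
    }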

    Presonus Studio One music mixer - provides multiple channels that can each be output to a device: 30 channels.
    Virtual Audio Cable - creates virtual devices that receive the output of those channels: 30 devices.
    FFmpeg - creates an RTP stream from each virtual device: 30 streams.
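
    For the FFmpeg leg, something along these lines could drive the 30 streams from one script. This is a sketch in TypeScript for Node on Windows (Virtual Audio Cable is a Windows tool, so the capture side uses DirectShow); the device names, addresses, and ports are assumptions, not tested values:

    import { spawn } from 'node:child_process';

    // Hypothetical sketch: one ffmpeg RTP stream per Virtual Audio Cable device.
    // "-f dshow" captures a DirectShow audio device on Windows; RTP ports are even.
    for (let i = 1; i <= 30; i++) {
      const args = [
        '-f', 'dshow',
        '-i', `audio=Line 1 (Virtual Audio Cable ${i})`, // device name is an assumption
        '-acodec', 'libmp3lame', '-b:a', '128k',
        '-f', 'rtp', `rtp://192.168.1.10:${5000 + 2 * i}`,
      ];
      const proc = spawn('ffmpeg', args, { stdio: 'inherit' });
      proc.on('exit', (code) => console.log(`stream ${i} exited with ${code}`));
    }

    With a 2-second total budget, encoder latency and client-side buffering are likely to dominate, so whatever codec is chosen, the buffers need to be kept small.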

    Is this a good idea? Are there other ways of doing this?
    Any help will be appreciated.

  • getting an `Invalid URL` error when I send a voice message

    9 September 2023, by Ammad

    When I try to send voice messages I always get an invalid URL error. I am using Whisper to convert the audio to text, but for some reason I cannot seem to pass the file to Whisper. It worked when I used this in JavaScript, but not in TypeScript.

    


    async function createFile(path: string): Promise<File> {
      const response = await fetch(path);
      const data = await response.blob();

      // Extract file name from the path
      const fileName = path.split('/').pop() || 'unknown';

      // Extract file extension and determine MIME type
      const fileExtension = fileName.split('.').pop()?.toLowerCase() || '';
      const mimeTypes: Record<string, string> = {
        'mp3': 'audio/mpeg',
        // Add more mappings as needed
      };
      const fileType = mimeTypes[fileExtension] || 'application/octet-stream';

      const metadata = {
        type: fileType
      };

      return new File([data], fileName, metadata);
    }

    async function sendAudioForTranscription(file_path: string) {
      try {
        // const audioData = fs.createReadStream(file_path);
        const audioFile = await createFile(file_path);

        const response = await openai.createTranscription(audioFile, "whisper-1");
        const transcribed = response.data.text;

        return transcribed;
      } catch (error) {
        console.error("Error transcribing the audio:", error);
        return null;
      }
    }


    I am new to this, so any help would be appreciated. This is the error:


    Error transcribing the audio: TypeError: Failed to parse URL from src\audio_files\false_xxxxxxxxx8@c.us_B161BC6FA04DB01B8B31F5E0F83EDAD5.mp3
        at Object.fetch (node:internal/deps/undici/undici:11576:11)
        at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
      [cause]: TypeError [ERR_INVALID_URL]: Invalid URL
          at new NodeError (node:internal/errors:405:5)
          at new URL (node:internal/url:778:13)
          at new Request (node:internal/deps/undici/undici:7132:25)
          at fetch2 (node:internal/deps/undici/undici:10715:25)
          at Object.fetch (node:internal/deps/undici/undici:11574:18)
          at fetch (node:internal/process/pre_execution:270:25)
          at C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:28:32
          at C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:8:71
          at new Promise (<anonymous>)
          at __awaiter (C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:4:12)
          at createFile (C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:27:12)
          at Object.<anonymous> (C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:49:37)
          at Generator.next (<anonymous>)
          at C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:8:71
          at new Promise (<anonymous>) {
        input: 'src\\audio_files\\false_xxxxxxxxx8@c.us_B161BC6FA04DB01B8B31F5E0F83EDAD5.mp3',
        code: 'ERR_INVALID_URL'
      }
    }


    I want to get a response back as a voice message.
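
    For what it's worth, the trace shows fetch failing because src\audio_files\... is a Windows filesystem path, not a URL. One possible fix, hinted at by the commented-out line in the question, is to read local files from disk instead of fetching them; a sketch assuming the openai v3 Node SDK that createTranscription comes from, with `openai` being the configured client from the question:

    import fs from 'node:fs';

    // Sketch of a fix: fetch() only understands URLs, so a local path such as
    // "src\audio_files\...mp3" must be read from disk instead of fetched.
    async function sendAudioForTranscription(filePath: string) {
      try {
        const audioStream = fs.createReadStream(filePath);
        // The v3 SDK's typings expect a File, hence the cast; a ReadStream
        // works at runtime for the multipart upload.
        const response = await openai.createTranscription(audioStream as any, 'whisper-1');
        return response.data.text;
      } catch (error) {
        console.error('Error transcribing the audio:', error);
        return null;
      }
    }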


  • Yes or no, will the ffmpeg API do hardware decoding on iOS?

    15 January 2019, by Fattie

    There seems to be conflicting information on this.

    https://trac.ffmpeg.org/wiki/HWAccelIntro

    Notice the first diagram: it firmly marks iOS as "Y" for VideoToolbox.


    However, in the comments at the bottom it says:

    VideoToolbox, only supported on macOS. H.264 decoding is available in FFmpeg/libavcodec.

    And in the confusing second diagram it says "Standalone" is not done for VideoToolbox.

    We have found that ffmpeg compiled into iOS seems not to use hardware decoding, which is really a pain.

    1. With avcodec_get_hw_config() we get AV_PIX_FMT_VIDEOTOOLBOX and AV_HWDEVICE_TYPE_VIDEOTOOLBOX, which seems correct.

    2. But CPU usage and framerates clearly show that everything is being done in software. The code is in ff_hevc_hls_residual_coding all the time. (That's FFmpeg's software HEVC decoder.)

    3. This very long diff URL at git.videolan.org seems to suggest, again, that it should all be working.

    4. We have tried every iPhone, etc., of course.
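
    For what it's worth, avcodec_get_hw_config() only reports that the build supports VideoToolbox; with the libavcodec API, the decisive step is creating a hardware device context via av_hwdevice_ctx_create() with AV_HWDEVICE_TYPE_VIDEOTOOLBOX and attaching it to the codec context before opening it, otherwise decoding silently falls back to software. One way to sanity-check whether a given ffmpeg build ever engages VideoToolbox, at least on macOS, is to time the same decode with and without the hwaccel flag. A sketch driving the CLI from TypeScript; the input path is a placeholder:

    import { execFileSync } from 'node:child_process';

    // Hypothetical sanity check: decode the same file in software and via
    // VideoToolbox and compare wall-clock time. "-f null -" discards the output.
    function timeDecode(hwaccel: boolean): number {
      const args = hwaccel
        ? ['-hwaccel', 'videotoolbox', '-i', 'input.mp4', '-f', 'null', '-']
        : ['-i', 'input.mp4', '-f', 'null', '-'];
      const start = Date.now();
      execFileSync('ffmpeg', args, { stdio: 'ignore' });
      return Date.now() - start;
    }

    console.log('software:', timeDecode(false), 'ms');
    console.log('videotoolbox:', timeDecode(true), 'ms');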