
Media (17)
-
Matmos - Action at a Distance
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Danger Mouse & Jemini - What U Sittin’ On? (starring Cee Lo and Tha Alkaholiks)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Cornelius - Wataridori 2
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Rapture - Sister Saviour (Blackstrobe Remix)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Chuck D with Fine Arts Militia - No Meaning No
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (78)
-
Improvements to the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-select form fields; compare the two images below.
To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...) A rough sketch of what this enables follows below.
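A minimal sketch, assuming the jQuery Chosen library (here the chosen-js npm package, a placeholder choice); the SPIP plugin wires this up automatically once enabled, so this only illustrates the kind of call it applies on the public site:

// Illustrative sketch only: the SPIP Chosen plugin applies something equivalent
// automatically once enabled; the chosen-js package name is an assumption.
import $ from "jquery";
import "chosen-js"; // registers the .chosen() jQuery plugin at runtime

$(() => {
  // Enhance every multiple-select element, matching the select[multiple]
  // selector mentioned in the plugin configuration.
  // Cast to any because the plugin's typings are not assumed to be installed.
  ($("select[multiple]") as any).chosen({ width: "100%" });
});

-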
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites for publishing documents of all kinds.
It creates "media" items, namely: a "media" item is a SPIP article created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
On other sites (6388)
-
Real-time indoor streaming and music mixing
9 November 2015, by Saneet
I am working on a project where we are doing a live performance with about 6 musicians placed away from each other in a big space. The audience will be wearing headphones, and as they move around we want them to hear different effects in different areas of the venue. We are using Bluetooth beacons to calculate each user's position. We expect around 100 users and cannot tolerate more than 2 seconds of latency.
Is such a setup possible?
The way we are currently thinking of implementing it is to divide the venue into about 30 sections.
On the server we will take the input from all the musicians, mix a different stream for every section, and stream it over a local WLAN using the RTP protocol.
We will have Android and iOS apps that locate the users with the Bluetooth beacons and switch the live streams accordingly.
Presonus Studio One music mixer - provides multiple channels that can be output to devices (30 channels).
Virtual Audio Cable - used to create virtual devices that receive the output from those channels (30 devices).
FFmpeg streaming - used to create an RTP stream for each of the devices (30 streams).
Is this a good idea? Are there other ways of doing this?
Any help will be appreciated.
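A minimal sketch of the FFmpeg side of that pipeline, assuming a Windows server with Virtual Audio Cable devices and Node.js available; the dshow device names, multicast address, ports and codec settings are placeholder assumptions to adapt, not a tested configuration:

// Sketch: spawn one ffmpeg process per mixer output, each pushing an RTP
// audio stream to its own port on the local WLAN. Device names, address
// and ports below are assumptions.
import { spawn } from "child_process";

const BASE_PORT = 5004;   // first RTP port; one stream per venue section
const SECTIONS = 30;      // number of venue sections
const DEST = "239.0.0.1"; // example multicast address on the WLAN

for (let i = 0; i < SECTIONS; i++) {
  const device = `audio=Line ${i + 1} (Virtual Audio Cable)`; // placeholder dshow name
  const port = BASE_PORT + i * 2; // RTP conventionally uses even ports

  const ffmpeg = spawn("ffmpeg", [
    "-f", "dshow",        // capture from a DirectShow audio device (Windows)
    "-i", device,
    "-acodec", "aac",     // audio codec and bitrate are tuning knobs
    "-b:a", "128k",
    "-f", "rtp",
    `rtp://${DEST}:${port}`,
  ]);

  ffmpeg.stderr.on("data", (d) => process.stderr.write(`[stream ${i}] ${d}`));
}

Whether 30 simultaneous encodes plus client-side buffering stay within the 2-second latency budget would have to be measured on the actual hardware.
-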
Getting an `Invalid URL` error when I send a voice message
9 September 2023, by Ammad
When I try to send voice messages I always get an invalid URL error. I am using Whisper to convert the audio to text, but for some reason I cannot seem to pass the file to Whisper. It worked when I used this in JavaScript but not in TypeScript.


async function createFile(path: string): Promise<File> {
 const response = await fetch(path);
 const data = await response.blob();
 
 // Extract file name from the path
 const fileName = path.split('/').pop() || 'unknown';
 
 // Extract file extension and determine MIME type
 const fileExtension = fileName.split('.').pop()?.toLowerCase() || '';
 const mimeTypes: Record<string, string> = {
 'mp3': 'audio/mpeg',
 // Add more mappings as needed
 };
 const fileType = mimeTypes[fileExtension] || 'application/octet-stream';
 
 const metadata = {
 type: fileType
 };
 
 return new File([data], fileName, metadata);
}

async function sendAudioForTranscription(file_path:string) {
 try {
 
 // const audioData = fs.createReadStream(file_path);
 const audioFile = await createFile(file_path)

 const response = await openai.createTranscription(audioFile, "whisper-1");
 const transcribed = response.data.text;

 return transcribed;
 } catch (error) {
 console.error("Error transcribing the audio:", error);
 return null;
 }
}


I am new to this, so any help would be appreciated. This is the error:


Error transcribing the audio: TypeError: Failed to parse URL from src\audio_files\false_xxxxxxxxx8@c.us_B161BC6FA04DB01B8B31F5E0F83EDAD5.mp3
 at Object.fetch (node:internal/deps/undici/undici:11576:11)
 at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
 [cause]: TypeError [ERR_INVALID_URL]: Invalid URL
 at new NodeError (node:internal/errors:405:5)
 at new URL (node:internal/url:778:13)
 at new Request (node:internal/deps/undici/undici:7132:25)
 at fetch2 (node:internal/deps/undici/undici:10715:25)
 at Object.fetch (node:internal/deps/undici/undici:11574:18)
 at fetch (node:internal/process/pre_execution:270:25)
 at C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:28:32
 at C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:8:71
 at new Promise (<anonymous>)
 at __awaiter (C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:4:12)
 at createFile (C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:27:12)
 at Object.<anonymous> (C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:49:37)
 at Generator.next (<anonymous>)
 at C:\Users\Ammad Ali\Documents\Documents\alex-whatsapp-bot\build\openai\transcript.js:8:71
 at new Promise (<anonymous>) {
 input: 'src\\audio_files\\false_xxxxxxxxx8@c.us_B161BC6FA04DB01B8B31F5E0F83EDAD5.mp3',
 code: 'ERR_INVALID_URL'
 }
}


The goal is to get a response back as a voice message.
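For what it is worth, the error comes from passing a local filesystem path to fetch(), which in Node only accepts URLs. A minimal sketch of building the File from disk instead, assuming Node 18.13+ (where File is exported from node:buffer) and keeping the rest of the transcription call as in the question; untested:

// Sketch: build the File from the local path with fs instead of fetch(),
// since fetch() rejects plain filesystem paths with ERR_INVALID_URL.
import { promises as fs } from "fs";
import path from "path";
import { File } from "buffer"; // available from Node 18.13+; global in newer versions

async function createFileFromDisk(filePath: string): Promise<File> {
  const data = await fs.readFile(filePath);  // Buffer with the audio bytes
  const fileName = path.basename(filePath);  // e.g. "voice.mp3"
  const ext = path.extname(fileName).slice(1).toLowerCase();
  const mimeTypes: Record<string, string> = { mp3: "audio/mpeg", ogg: "audio/ogg" };
  const fileType = mimeTypes[ext] || "application/octet-stream";

  return new File([data], fileName, { type: fileType });
}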


-
Yes or no, will the ffmpeg API do hardware decoding on iOS?
15 January 2019, by Fattie
There seems to be conflicting information on this.
https://trac.ffmpeg.org/wiki/HWAccelIntro
Notice the first diagram: it firmly marks iOS as “Y” for VideoToolbox.
However, in the comments at the bottom it says:
VideoToolbox. VideoToolbox, only supported on macOS. H.264 decoding is available in FFmpeg/libavcodec.
And in the confusing second diagram it says “Standalone” is not done for VideoToolbox.
We have found that using ffmpeg compiled into iOS, it seems not to use hardware decoding, which is really a pain.
-
With avcodec_get_hw_config() we get AV_PIX_FMT_VIDEOTOOLBOX, AV_HWDEVICE_TYPE_VIDEOTOOLBOX, which is seemingly correct.
-
But CPU usage and frame rates clearly show everything is being done on the CPU. The code is in ff_hevc_hls_residual_coding all the time. (That is FFmpeg’s software decoder.)
-
This very long git.videolan.org diff URL here seems to suggest, again, that it should all be working.
-
Have tried every iPhone etc., of course.
-