
Media (91)
-
Spoon - Revenge!
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
My Morning Jacket - One Big Holiday
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Zap Mama - Wadidyusay?
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
David Byrne - My Fair Lady
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Beastie Boys - Now Get Busy
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
Other articles (61)
-
Customizing by adding your logo, banner, or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
-
Writing a news item
21 June 2013. Present the changes in your MediaSPIP, or news about your projects, using the news section of your MediaSPIP.
In the default MediaSPIP theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news creation form.
News creation form: for a document of type news, the default fields are: publication date (customize the publication date) (...)
On other sites (10660)
-
How to make your plugin multilingual – Introducing the Piwik Platform
29 October 2014, by Thomas Steur (Development). This is the next post in our blog series introducing the capabilities of the Piwik platform (our previous post was Generating test data – Introducing the Piwik Platform). This time you’ll learn how to equip your plugin with translations. Users of your plugin will be thankful that they can use and translate it in their own language!
Getting started
In this post, we assume that you have already set up your development environment and created a plugin. If not, visit the Piwik Developer Zone, where you’ll find the tutorial Setting up Piwik and other guides that help you develop a plugin.
Managing translations
Piwik is available in over 50 languages and comes with many translations. The core itself provides basic translations for words like “Visitor” and “Help”; they are stored in the /lang directory. In addition, each plugin can provide its own translations for the wording used in that plugin; these live in /plugins/*/lang. In those directories you’ll find one JSON file for each language. Each language file in turn consists of tokens that belong to a group.

{
    "MyPlugin": {
        "BlogPost": "Blog post",
        "MyToken": "My translation",
        "InteractionRate": "Interaction Rate"
    }
}

A group usually represents the name of a plugin, in this case “MyPlugin”. Within this group, all tokens are listed on the left side and the related translations on the right side.
Building a translation key
As you will see later, to actually translate a word or a sentence you need to know the corresponding translation key. This key is built by combining a group and a token, separated by an underscore. For instance, you can use the key MyPlugin_BlogPost to get the translation of “Blog post”. Defining a new key is as easy as adding a new entry to the “MyPlugin” group.
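For illustration, defining a hypothetical new token “Greeting” (not part of the original post) would look like this, yielding the translation key MyPlugin_Greeting:

{
    "MyPlugin": {
        "BlogPost": "Blog post",
        "Greeting": "Hello and welcome!"
    }
}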
Providing default translations
If a translation cannot be found, the English translation is used as a default. Therefore, you should always provide a default English translation for all keys in the file en.json (i.e. /plugins/MyPlugin/lang/en.json).
Adding translations for other languages
This is as easy as creating new files in the lang subdirectory of your plugin. The filename consists of a two-letter ISO 639-1 language code plus the extension .json. This means German translations go into a file named de.json, and French ones into a file named fr.json. To see the list of languages you can use, have a look at the /lang directory.
Reusing translations
As mentioned, Piwik comes with quite a lot of translations. You can and should reuse them, but be aware that a translation key might be removed or renamed in the future. It is also possible that a translation key was only added in a recent version and is therefore not available in older versions of Piwik. We do not currently announce such changes. Still, 99% of translation keys do not change, so it is usually a good idea to reuse existing translations, especially when you or your company could not otherwise provide them. To find existing translation keys, go to Settings => Translation search in your Piwik installation. The menu item only appears if development mode is enabled.
Translations in PHP
Use the Piwik::translate() function to translate any text in PHP. Simply pass an existing translation key and you will get back the translated text in the language of the current user. If no translation exists for the current language, the English translation is returned.
$translatedText = Piwik::translate('MyPlugin_BlogPost');
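The same call can also reuse a translation that ships with Piwik core; the key below is an assumption for illustration, so use the Translation search described above to look up real keys:

$helpText = Piwik::translate('General_Help'); // hypothetical core key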
Translations in Twig Templates
To translate text in Twig templates, use the translate filter.
{{ 'MyPlugin_BlogPost'|translate }}
Contributing translations to Piwik
Did you know you can contribute translations to Piwik? If you want to improve an existing translation, translate a missing one, or add a new language, go to Piwik Translations and sign up for an account. You won’t need any development knowledge to do this.
Advanced features
Of course there are more useful things you can do with translations. For instance, you can use placeholders like %s in your translations, and you can use translations in JavaScript as well.
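As a minimal sketch of the placeholder feature, assuming a hypothetical key MyPlugin_WelcomeUser defined as "Welcome, %s!" and that Piwik::translate accepts substitution values as a second argument (check the Internationalization guide for the definitive API):

// 'MyPlugin_WelcomeUser' and its text are illustrative assumptions
$greeting = Piwik::translate('MyPlugin_WelcomeUser', array('Jane'));
// $greeting would be "Welcome, Jane!" on an English install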
If you want to know more about those topics, check out our Internationalization guide. Currently the guide only covers translations, but we will cover more topics such as formatting numbers and handling currencies in the future.
Congratulations, you have learned how to make your plugin multilingual!
If you have any feedback regarding our APIs or our guides in the Developer Zone, feel free to send it to us.
-
Computer crashing when using Python tools in the same script
5 February 2023, by SL1997. I am attempting to use the speech recognition toolkit VOSK and the speaker diarization package Resemblyzer to transcribe audio and then identify the speakers in the audio.


Tools:


https://github.com/alphacep/vosk-api

https://github.com/resemble-ai/Resemblyzer

I can do both things individually, but I run into issues when trying to do them in the same Python script.


I used the following guide when setting up the diarization system:




Computer specs are as follows:


Intel(R) Core(TM) i3-7100 CPU @ 3.90GHz, 3912 MHz, 2 Core(s), 4 Logical Processor(s)

32GB RAM

The following is my code. I am not sure whether using threading is appropriate here or whether I even implemented it correctly. How can I best optimize this code to achieve the results I am looking for without crashing?


from vosk import Model, KaldiRecognizer
from pydub import AudioSegment
import json
import sys
import os
import subprocess
import datetime
from resemblyzer import preprocess_wav, VoiceEncoder
from pathlib import Path
from resemblyzer.hparams import sampling_rate
from spectralcluster import SpectralClusterer
import threading
import queue
import gc


def recognition(queue, audio, FRAME_RATE):
    # Transcribe the audio with VOSK and put the result on the queue
    model = Model("Vosk_Models/vosk-model-small-en-us-0.15")

    rec = KaldiRecognizer(model, FRAME_RATE)
    rec.SetWords(True)

    rec.AcceptWaveform(audio.raw_data)
    result = rec.Result()

    transcript = json.loads(result)  # ["text"]

    # return transcript
    queue.put(transcript)


def diarization(queue, audio):
    # Embed the audio with Resemblyzer, then cluster the embeddings into speakers
    wav = preprocess_wav(audio)
    encoder = VoiceEncoder("cpu")
    _, cont_embeds, wav_splits = encoder.embed_utterance(wav, return_partials=True, rate=16)
    print(cont_embeds.shape)

    clusterer = SpectralClusterer(
        min_clusters=2,
        max_clusters=100,
        p_percentile=0.90,
        gaussian_blur_sigma=1)

    labels = clusterer.predict(cont_embeds)

    def create_labelling(labels, wav_splits):
        # Convert per-window speaker labels into (speaker, start, end) segments
        times = [((s.start + s.stop) / 2) / sampling_rate for s in wav_splits]
        labelling = []
        start_time = 0

        for i, time in enumerate(times):
            if i > 0 and labels[i] != labels[i - 1]:
                temp = [str(labels[i - 1]), start_time, time]
                labelling.append(tuple(temp))
                start_time = time
            if i == len(times) - 1:
                temp = [str(labels[i]), start_time, time]
                labelling.append(tuple(temp))

        return labelling

    # return
    labelling = create_labelling(labels, wav_splits)
    queue.put(labelling)


def identify_speaker(queue1, queue2):
    # Match transcribed words to the diarized speaker segments
    transcript = queue1.get()
    labelling = queue2.get()

    for speaker in labelling:

        speakerID = speaker[0]
        speakerStart = speaker[1]
        speakerEnd = speaker[2]

        result = transcript['result']
        words = [r['word'] for r in result if speakerStart < r['start'] < speakerEnd]
        # return
        print("Speaker", speakerID, ":", ' '.join(words), "\n")


def main():

    queue1 = queue.Queue()
    queue2 = queue.Queue()

    FRAME_RATE = 16000
    CHANNELS = 1

    podcast = AudioSegment.from_mp3("Podcast_Audio/Film-Release-Clip.mp3")
    podcast = podcast.set_channels(CHANNELS)
    podcast = podcast.set_frame_rate(FRAME_RATE)

    first_thread = threading.Thread(target=recognition, args=(queue1, podcast, FRAME_RATE))
    second_thread = threading.Thread(target=diarization, args=(queue2, podcast))
    third_thread = threading.Thread(target=identify_speaker, args=(queue1, queue2))

    # Note: calling start() followed immediately by join() runs the threads
    # one after another, so execution here is effectively sequential
    first_thread.start()
    first_thread.join()
    gc.collect()

    second_thread.start()
    second_thread.join()
    gc.collect()

    third_thread.start()
    third_thread.join()
    gc.collect()

    # transcript = recognition(podcast, FRAME_RATE)
    #
    # labelling = diarization(podcast)
    #
    # print(identify_speaker(transcript, labelling))


if __name__ == '__main__':
    main()



When I say crash, I mean everything freezes: I have to hold down the power button on the desktop and turn it back on again. There is no blue or blank screen; it just freezes in my IDE while I am looking at my code. Any help in resolving this issue would be greatly appreciated.


-
ANSI FATE
24 August 2010, by Multimedia Mike (FATE Server). The new FATE server is shaping up well. I think most of the old configurations have been migrated to the new server. I see one new compiler for x86_64: PathScale. It’s not faring particularly well at this point.
New Tests
As I write this, I noticed that there are now an even 700 tests, twice as many as the last time I trumpeted such a milestone. (It should be noted that the new FATE system finally breaks the master regression suite down into individual tests.) Thankfully, it’s no longer necessary to wait for me to create or edit tests (anyone with FFmpeg privileges can do this), nor is it necessary to keep up with this blog to know exactly what tests are new. Now you can simply inspect the file history on tests/fate.mak and tests/fate2.mak (I think these two files are going to merge in the near future).
Vitor, as of r24865: “Add FATE test for ANSI/ASCII animation and TTY demuxer.” Eh? What’s this about? I admit I was completely removed from FFmpeg development for much of June and July, so I could have missed a lot. Fortunately, I can check the file history to see which lines were added to make this test happen. And if FATE is exercising the test, you know exactly where the samples will live. Here’s this new decoder in action on the relevant sample:
The file history fingers Suxen drol/Peter Ross for this handiwork. I might have guessed: the only person who is arguably more enamored with old, weird formats than even I. Now we wait for the day that YouTube has support for this format. I’m sure there are huge archives of these animations out there (and I wager that Trixter and Jason Scott know where).
It’s an animation — it just keeps going
Meanwhile, the FATE suite now encompasses a bunch of perceptual audio formats, thanks to the 1-off testing method and a few other techniques. These formats include Bink audio, WMA Pro, WMA voice, Vorbis, ATRAC1, ATRAC3, MS-GSM, AC3, E-AC3, NellyMoser, TrueSpeech, Intel Music Coder, QDM2, RealAudio Cooker, QCELP (just going down the source control log here), and others, no doubt.
Then there’s this curious tidbit: “Add FATE test for WMV8 DRM”. The test spec is "fate-wmv8-drm: CMD = framecrc -cryptokey 137381538c84c068111902a59c5cf6c340247c39 -i $(SAMPLES)/wmv8/wmv_drm.wmv -an". I would still like to investigate FFmpeg’s cryptographic capabilities, which I suspect are moving in a direction to function as a complete SSL stack one day.
New Platforms
As for new platforms, the new FATE system finally allows testing on OS/2 (remember that classic? It was “the totally cool way to run your computer”). Thanks to Dave Yeo for taking this on.
Further, a new MIPS-based platform recently appeared on the FATE list. This one reports itself as running on a 74Kf CPU. Googling for this processor quickly brings up Mans’ post about the Popcorn Hour device. So, congratulations to him for getting the mundane box to serve a higher purpose. Perhaps one day I’ll be able to do the same for that Belco Alpha-400 netbook.