Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (20)

  • Customizing by adding your logo, banner, or background image

    5 September 2013

    Some themes take into account three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes in your MédiaSPIP or the news of your projects on your MédiaSPIP using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news creation form.
    News creation form: for a document of type "news", the fields offered by default are: Publication date (customize the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (4167)

  • How to make your plugin multilingual – Introducing the Piwik Platform

    29 October 2014, by Thomas Steur — Development

    This is the next post in our blog series where we introduce the capabilities of the Piwik platform (our previous post was Generating test data – Introducing the Piwik Platform). This time you'll learn how to equip your plugin with translations. Users of your plugin will be very thankful that they can use and translate the plugin in their language!

    Getting started

    In this post, we assume that you have already set up your development environment and created a plugin. If not, visit the Piwik Developer Zone, where you'll find the tutorial Setting up Piwik and other guides that will help you develop a plugin.

    Managing translations

    Piwik is available in over 50 languages and comes with many translations. The core itself provides some basic translations for words like “Visitor” and “Help”. They are stored in the directory /lang. In addition, each plugin can provide its own translations for the wording used within that plugin. These are located in /plugins/*/lang. In those directories you'll find one JSON file for each language. Each language file, in turn, consists of tokens that belong to a group.

    {
       "MyPlugin":{
           "BlogPost": "Blog post",
           "MyToken": "My translation",
           "InteractionRate": "Interaction Rate"
       }
    }

    A group usually represents the name of a plugin, in this case “MyPlugin”. Within this group, all the tokens are listed on the left side and the related translations on the right side.

    Building a translation key

    As you will see later, to actually translate a word or a sentence you'll need to know the corresponding translation key. This key is built by combining a group and a token, separated by an underscore. For instance, you can use the key MyPlugin_BlogPost to get the translation of “Blog post”. Defining a new key is as easy as adding a new entry to the “MyPlugin” group.

    Providing default translations

    If a translation cannot be found, the English translation will be used as a default. Therefore, you should always provide a default translation in English for all keys in the file en.json (i.e. /plugins/MyPlugin/lang/en.json).

    Adding translations for other languages

    This is as easy as creating new files in the lang subdirectory of your plugin. The filename consists of a two-letter ISO 639-1 language code followed by the extension .json. This means German translations go into a file named de.json and French ones into a file named fr.json. To see a list of languages you can use, have a look at the /lang directory.
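
    For example, a de.json mirroring the en.json shown above could look like this (the German wordings here are purely illustrative):

    {
       "MyPlugin":{
           "BlogPost": "Blogbeitrag",
           "MyToken": "Meine Übersetzung",
           "InteractionRate": "Interaktionsrate"
       }
    }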

    Reusing translations

    As mentioned, Piwik comes with quite a lot of translations. You can and should reuse them, but be aware that a translation key might be removed or renamed in the future. It is also possible that a translation key was only added in a recent version and is therefore not available in older versions of Piwik. We do not currently announce any such changes. Still, 99% of the translation keys do not change, so it is usually a good idea to reuse existing translations, especially when you or your company would otherwise not be able to provide them. To find existing translation keys, go to Settings => Translation search in your Piwik installation. The menu item will only appear if development mode is enabled.

    Translations in PHP

    Use the Piwik::translate() function to translate any text in PHP. Simply pass any existing translation key and you will get the translated text in the language of the current user in return. The English translation will be returned if none exists for the current language.

    $translatedText = Piwik::translate('MyPlugin_BlogPost');

    Translations in Twig Templates

    To translate text in Twig templates, use the translate filter.

    {{ 'MyPlugin_BlogPost'|translate }}

    Contributing translations to Piwik

    Did you know you can contribute translations to Piwik? If you want to improve an existing translation, translate a missing one, or add a new language, go to Piwik Translations and sign up for an account. You won't need any development knowledge to do this.

    Advanced features

    Of course there are more useful things you can do with translations. For instance, you can use placeholders like %s in your translations (see the sketch below), and you can use translations in JavaScript as well. If you want to know more about those topics, check out our Internationalization guide. Currently, this guide only covers translations, but we will cover more topics like formatting numbers and handling currencies in the future.
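
    As a quick sketch of how placeholders work, assuming Piwik::translate() accepts the placeholder values as an additional argument (the key MyPlugin_NBlogPosts and its wording “You have %s blog posts” are made up for this example):

    $text = Piwik::translate('MyPlugin_NBlogPosts', 5);
    // $text would be 'You have 5 blog posts' for English users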

    Congratulations, you have learnt how to make your plugin multilingual!

    If you have any feedback regarding our APIs or our guides in the Developer Zone, feel free to send it to us.

  • Computer crashing when using Python tools in the same script

    5 February 2023, by SL1997

    I am attempting to use the speech recognition toolkit VOSK and the speaker diarization package Resemblyzer to transcribe audio and then identify the speakers in the audio.

    Tools:

    https://github.com/alphacep/vosk-api
    https://github.com/resemble-ai/Resemblyzer

    I can do both things individually, but I run into issues when trying to do them in the same Python script.

    I used the following guide when setting up the diarization system:

    https://medium.com/saarthi-ai/who-spoke-when-build-your-own-speaker-diarization-module-from-scratch-e7d725ee279

    Computer specs are as follows:

    Intel(R) Core(TM) i3-7100 CPU @ 3.90GHz, 3912 MHz, 2 Core(s), 4 Logical Processor(s)
    32GB RAM

    The following is my code. I am not too sure if using threading is appropriate, or if I even implemented it correctly. How can I best optimize this code to achieve the results I am looking for without crashing?

from vosk import Model, KaldiRecognizer
from pydub import AudioSegment
import json
import sys
import os
import subprocess
import datetime
from resemblyzer import preprocess_wav, VoiceEncoder
from pathlib import Path
from resemblyzer.hparams import sampling_rate
from spectralcluster import SpectralClusterer
import threading
import queue
import gc



def recognition(queue, audio, FRAME_RATE):

    # Load the Vosk model and transcribe the entire clip in a single pass,
    # then hand the parsed result back via the queue
    model = Model("Vosk_Models/vosk-model-small-en-us-0.15")

    rec = KaldiRecognizer(model, FRAME_RATE)
    rec.SetWords(True)

    rec.AcceptWaveform(audio.raw_data)
    result = rec.Result()

    transcript = json.loads(result)#["text"]

    #return transcript
    queue.put(transcript)



def diarization(queue, audio):

    # Embed the audio with Resemblyzer, then spectral-cluster the partial
    # embeddings to get per-segment speaker labels
    wav = preprocess_wav(audio)
    encoder = VoiceEncoder("cpu")
    _, cont_embeds, wav_splits = encoder.embed_utterance(wav, return_partials=True, rate=16)
    print(cont_embeds.shape)

    clusterer = SpectralClusterer(
        min_clusters=2,
        max_clusters=100,
        p_percentile=0.90,
        gaussian_blur_sigma=1)

    labels = clusterer.predict(cont_embeds)

    def create_labelling(labels, wav_splits):

        times = [((s.start + s.stop) / 2) / sampling_rate for s in wav_splits]
        labelling = []
        start_time = 0

        for i, time in enumerate(times):
            if i > 0 and labels[i] != labels[i - 1]:
                temp = [str(labels[i - 1]), start_time, time]
                labelling.append(tuple(temp))
                start_time = time
            if i == len(times) - 1:
                temp = [str(labels[i]), start_time, time]
                labelling.append(tuple(temp))

        return labelling

    #return
    labelling = create_labelling(labels, wav_splits)
    queue.put(labelling)



def identify_speaker(queue1, queue2):

    # Block until both the transcript and the diarization labelling are available
    transcript = queue1.get()
    labelling = queue2.get()

    for speaker in labelling:

        speakerID = speaker[0]
        speakerStart = speaker[1]
        speakerEnd = speaker[2]

        result = transcript['result']
        words = [r['word'] for r in result if speakerStart < r['start'] < speakerEnd]
        #return
        print("Speaker",speakerID,":",' '.join(words), "\n")





def main():

    queue1 = queue.Queue()
    queue2 = queue.Queue()

    FRAME_RATE = 16000
    CHANNELS = 1

    podcast = AudioSegment.from_mp3("Podcast_Audio/Film-Release-Clip.mp3")
    podcast = podcast.set_channels(CHANNELS)
    podcast = podcast.set_frame_rate(FRAME_RATE)

    first_thread = threading.Thread(target=recognition, args=(queue1, podcast, FRAME_RATE))
    second_thread = threading.Thread(target=diarization, args=(queue2, podcast))
    third_thread = threading.Thread(target=identify_speaker, args=(queue1, queue2))

    # Note: each thread is joined immediately after it starts, so the three
    # stages actually run one after another rather than concurrently
    first_thread.start()
    first_thread.join()
    gc.collect()

    second_thread.start()
    second_thread.join()
    gc.collect()

    third_thread.start()
    third_thread.join()
    gc.collect()

    # transcript = recognition(podcast,FRAME_RATE)
    #
    # labelling = diarization(podcast)
    #
    # print(identify_speaker(transcript, labelling))


if __name__ == '__main__':
    main()

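    For what it's worth, my understanding is that the conventional pattern would be to start all the threads before joining any of them, something like the sketch below (I haven't confirmed whether this alone avoids the crash):

first_thread.start()
second_thread.start()
third_thread.start()   # blocks inside identify_speaker on queue.get() until results arrive

first_thread.join()
second_thread.join()
third_thread.join()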

    When I say crash, I mean everything freezes: I have to hold down the power button on the desktop and turn it back on again. No blue/blank screen, just frozen in my IDE looking at my code. Any help in resolving this issue would be greatly appreciated.

  • Best intra-frame codec for editing? Having color issues with DNxHR and serious sync issues with ProRes

    23 January 2020, by Raulo1985

    I have a couple of questions regarding intra-frame codecs for editing purposes (in Premiere Pro).

    A LITTLE CONTEXT:

    I ripped an HDR movie to an MP4 container some months ago, using H.264 as the codec (High 10 profile, level 5, UHD, YUV with 4:2:0 subsampling). The video looks great, as it should. Now I want to edit a trailer for that movie (I edit in Adobe Premiere Pro, current version), and for fast playback I need to work with proxies or transcode the source file to an intra-frame, non-long-GOP file and use it as the source file (hopefully as losslessly as that transcoding step allows). I tried for several days to transcode the source file to DNxHR 444 10-bit (using FFmpeg and then Adobe Media Encoder), but the result was always a video with the colors messed up (sometimes very washed out, sometimes oversaturated).

    FFprobe of the resulting DNxHR file said that the color space was BT.709 (the source file is obviously BT.2020), and I don't know why. The transcoding involved upsampling, since the source file is 4:2:0 and DNxHR doesn't support it, but I tried upsampling to both 4:4:4 and 4:2:2, and those files looked exactly the same to me, and very different from the original footage (so I don't think upsampling is the cause of the color issue; more likely the apparent color space change or something wrong with the metadata). The results were the same when transcoding with Adobe Media Encoder. Anyway, I have pretty much given up on transcoding to DNxHR and using it as the source file, unless someone has an idea of what's causing this problem. I could have worked with the source file for exporting and DNxHR LB for proxies, but there were sync issues between the source file and the proxy that defeated the whole purpose of editing. ProRes is out of the picture; its sync issues were worse (several seconds of delay).

    For the record, the command that didn't work as expected (color-wise) is:

    ffmpeg -channel_layout 63 -i input.mkv -map 0:0 -c:v dnxhd -vf "scale=in_range=limited:out_range=full" -color_range 2 -profile:v dnxhr_444 -pix_fmt yuv444p10le -acodec pcm_s24le -ar 48000 -ac 6 -channel_layout 63 -map 0:2 -hide_banner output.mxf

    I also tried without the options "scale=in_range=limited:out_range=full" and "-color_range 2", with the same results. I always used the latest FFmpeg version, and I'm working on Windows 10 Pro with the latest drivers and the latest K-Lite codec pack. Video files were compared with MediaInfo, FFprobe, and visually with VLC.
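
    One thing I haven't tried yet is forcing the BT.2020/PQ color metadata on the output explicitly, something like the following (untested; the three color flags are my assumption of what the source's HDR metadata should be):

    ffmpeg -i input.mkv -map 0:0 -c:v dnxhd -profile:v dnxhr_444 -pix_fmt yuv444p10le -color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020nc -acodec pcm_s24le -ar 48000 -ac 6 -map 0:2 output.mxf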

    Well, like I said, I'm giving up on using DNxHR as the source file for my project (it would have been ideal, since it doesn't have sync issues with the DNxHR proxies, and file size is not a problem for me). A user here on the forums suggested transcoding to intra-frame H.264 and using that as the source file, which I didn't know was an option (I didn't know H.264 was capable of intra-frame, my bad). I'm aware that one should avoid unnecessary transcoding steps, but I can't work with an H.264 UHD HDR source file (ultra-slow playback), and the sync issues with proxies, no matter the codec, make it impossible to make accurate cuts.
    So, bottom line, I need to find a way to fix the color issue when transcoding to DNxHR, or try an intra-frame codec other than DNxHR that's capable of preserving all the HDR info (and then see if it has no sync issues; I'm assuming those may disappear when using intra-frame for both the source file and the proxies).
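
    For reference, my understanding of an all-intra H.264 transcode would be something like this (untested; I'm assuming -g 1 is what forces every frame to be an I-frame, and that my FFmpeg's libx264 supports 10-bit):

    ffmpeg -i input.mkv -c:v libx264 -preset fast -crf 10 -g 1 -pix_fmt yuv420p10le -c:a copy output_allintra.mkv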

    NOTE: When importing the DNxHR 444 file into Premiere Pro and looking at the Lumetri scopes tab, you can tell that the colors are clipped at 100 nits, like a regular SDR video. So apparently the color space really was reduced to BT.709, and I don't know why. The H.264 source file behaves as expected, with colors going past 100 nits.

    MY QUESTIONS:

    1) Is H.264 intra-frame a good format for editing with good playback performance?

    2) If H.264 intra-frame is a good option, what would be the advantages of the DNxHR codec (or ProRes) over it for editing purposes? Everybody suggests DNxHD/DNxHR or ProRes as intermediate codecs, but if intra-frame H.264 has the same advantages for editing and supports HDR, what would be the reason to choose another codec like DNxHR?

    3) Any ideas on what could be causing the colors not to transcode correctly from the H.264 source file to the DNxHR 444 10-bit one? The command looks OK to me, but FFprobe says the DNxHR video is BT.709, while for the source file it says BT.2020. Like I said, apparently there's something wrong with the transcoding process regarding metadata or color space.

    4) I haven't yet tried transcoding the source file to DNxHR 444 10-bit in a MOV container. I don't know how this works internally, but maybe the color issue has something to do with the container metadata or something similar. I may try this if there's no other suggestion (transcoding this kind of file, as you know, takes time, so I'll wait for some ideas first).

    NOTE: I tried to transcode the source file the same way (DNxHR 444, 10-bit, etc.) with Adobe Media Encoder 2020 and the result was the same, with colors messed up and FFprobe saying the video is BT.709. I also tried transcoding to the DNxHR HQX profile (10-bit), with the same result.

    Any help would be greatly appreciated.

    Thanks in advance.