
Keyword: - Tags -/biomaping

Other articles (112)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present the changes on your MédiaSPIP, or the news of your projects hosted on it, using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news creation form.
    News creation form: for a document of the news type, the fields offered by default are: Publication date (customise the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

On other sites (6476)

  • Stuck in installing a voicecloner via Python (module not found)

    25 November 2023, by Wimmah

    I use Python 3.11.5

    


    As a great Python n00b I enter this forum because I'm stuck with installing a voice cloner (for personal use, to do a funny trick for X-mas with my family). It's this tool that I'm trying to install: https://github.com/CorentinJ/Real-Time-Voice-Cloning

    


    With a little help from ChatGPT I came quite far, but for some reason the downloaded datasets can't be found. The instructions of the tool state:

    


    Install instructions from GitHub.

    So my tree looks like this:

    


    (base) willem@willems-air Voice cloner % tree
.
├── demo_cli.py
├── demo_toolbox.py
├── encoder_preprocess.py
├── encoder_train.py
├── saved_models
│   └── default
│       ├── encoder.pt
│       ├── synthesizer.pt
│       └── vocoder.pt
├── synthesizer_preprocess_audio.py
├── synthesizer_preprocess_embeds.py
├── synthesizer_train.py
└── vocoder_train.py

3 directories, 11 files


    


    However, when I give the command to execute the demo, I get the message that a needed module can't be found:

    


    (base) willem@willems-air Voice cloner % python demo_cli.py
    Traceback (most recent call last):
      File "/Users/willem/Desktop/Voice cloner/demo_cli.py", line 10, in <module>
        from encoder import inference as encoder
    ModuleNotFoundError: No module named 'encoder'


    I built a tree that (to me) looks in line with the installation instructions... (and of course I downloaded the modules without any errors). Here are also the first lines of demo_cli.py, where you can also see the paths:


    import argparse
    import os
    from pathlib import Path

    import librosa
    import numpy as np
    import soundfile as sf
    import torch

    from encoder import inference as encoder
    from encoder.params_model import model_embedding_size as speaker_embedding_size
    from synthesizer.inference import Synthesizer
    from utils.argutils import print_args
    from utils.default_models import ensure_default_models
    from vocoder import inference as vocoder


    if __name__ == '__main__':
        parser = argparse.ArgumentParser(
            formatter_class=argparse.ArgumentDefaultsHelpFormatter
        )
        parser.add_argument("-e", "--enc_model_fpath", type=Path,
                            default="saved_models/default/encoder.pt",


    I think I missed a quite basic step here, but by now ChatGPT is looping and can't help any more, so I need a human tip I guess ;)
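    Judging from the tree above, the packages that demo_cli.py imports (encoder, synthesizer, vocoder and utils) do not appear next to the script, which is exactly what the ModuleNotFoundError points at. A minimal diagnostic sketch, not part of the original post, that checks for those directories from the project folder (the package names are simply taken from the imports shown above):

    # check_layout.py -- hypothetical helper, not part of the Real-Time-Voice-Cloning repo.
    # demo_cli.py imports encoder, synthesizer, vocoder and utils as packages, so those
    # directories from the cloned repository must sit next to demo_cli.py.
    from pathlib import Path

    repo_root = Path(__file__).resolve().parent  # run this from the "Voice cloner" folder

    for package in ("encoder", "synthesizer", "vocoder", "utils"):
        status = "found" if (repo_root / package).is_dir() else "MISSING"
        print(f"{package:12s} {status}")

    If they show up as MISSING, cloning the full repository (rather than downloading only the top-level scripts and the pretrained models) and running python demo_cli.py from the repository root should let the imports resolve.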


    Thanks in advance!


  • sws/rgb2rgb: fix unaligned accesses in R-V V YUYV to I422p

    9 November 2023, by Rémi Denis-Courmont
    sws/rgb2rgb: fix unaligned accesses in R-V V YUYV to I422p
    

    In my personal opinion, we should not need to support unaligned YUY2
    pixel maps. They should always be aligned to at least 32 bits, while the
    current code assumes just 16 bits. However, checkasm does test with
    unaligned input bitmaps. QEMU accepts them, but real hardware does not.

    In this particular case, we can at the same time improve performance and
    handle unaligned inputs, so do just that. (For reference, the packed-to-planar
    conversion this routine performs is sketched after the changed-file list below.)

    uyvytoyuv422_c : 104379.0
    uyvytoyuv422_c : 104060.0
    uyvytoyuv422_rvv_i32 : 25284.0 (before)
    uyvytoyuv422_rvv_i32 : 19303.2 (after)

    • [DH] libswscale/riscv/rgb2rgb.c
    • [DH] libswscale/riscv/rgb2rgb_rvv.S
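    For context, uyvytoyuv422 de-interleaves a packed UYVY (4:2:2) buffer into separate Y, U and V planes. A minimal NumPy reference sketch of that conversion, an illustration only and not FFmpeg's implementation:

    import numpy as np

    def uyvy_to_yuv422p(uyvy: np.ndarray, width: int, height: int):
        # uyvy: flat uint8 array of length width*height*2, bytes ordered U0 Y0 V0 Y1 ...
        # width is assumed even, as 4:2:2 subsampling requires.
        packed = uyvy.reshape(height, width // 2, 4)
        u = packed[:, :, 0].copy()                     # U plane, width/2 samples per row
        y = packed[:, :, 1::2].reshape(height, width)  # Y plane, full width
        v = packed[:, :, 2].copy()                     # V plane, width/2 samples per row
        return y, u, v

    Variants such as yuyvtoyuv422 differ only in the byte order of the packed input; the commit above is about how the hand-written RISC-V vector version reads that packed buffer when it is less aligned than assumed.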
  • Stream mp4 file with watermark through a web using ffmpeg

    24 March 2023, by Jose A. Matarán

    I'm having problems with ffmpeg, probably due to my inexperience with this software.


    My basic need is the following: I have a series of videos with material that I want to protect so that it is not plagiarised. For this I want to add a watermark, so that when users view a video they also see some of their personal data, which discourages them from downloading and sharing it without permission.


    What I would like is to create a small Angular + Java application that does this task (invoking ffmpeg via Runtime#exec).


    I have seen that ffmpeg can stream to a server, such as ffserver, but I wonder if there is a somewhat simpler way. Something like launching the ffmpeg command from my Java application with the necessary configuration and having ffmpeg serve the video, with the watermark, over some port/protocol.


    EDIT


    I have continued to investigate and I have seen that ffmpeg can broadcast over WebRTC, but it needs an adapter. What I would like, and I don't know if it is possible, is to launch ffmpeg so that it acts as a server and the stream can be consumed from the web.
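    A minimal sketch of one possible approach, an illustration only and not from the original post: have the application launch ffmpeg with a drawtext watermark and let it write an HLS playlist that a static file server, or the Java backend itself, can serve to the browser. Python stands in here for the Runtime#exec call on the Java side; the file names, label and output folder are placeholders, and drawtext needs an ffmpeg build with libfreetype.

    # watermark_stream.py -- hypothetical sketch: burn a per-viewer watermark into a video
    # and expose it as an HLS stream that a web player can fetch over plain HTTP.
    import subprocess
    from pathlib import Path

    def start_watermarked_stream(source: Path, out_dir: Path, viewer_label: str) -> subprocess.Popen:
        out_dir.mkdir(parents=True, exist_ok=True)
        cmd = [
            "ffmpeg", "-re", "-i", str(source),
            # drawtext burns the viewer's data into every frame (requires libfreetype).
            "-vf", f"drawtext=text='{viewer_label}':x=20:y=20:fontsize=24:fontcolor=white",
            "-c:v", "libx264", "-c:a", "aac",
            # HLS output: a playlist plus rolling segments that any web page can request.
            "-f", "hls", "-hls_time", "4", "-hls_list_size", "6",
            "-hls_flags", "delete_segments",
            str(out_dir / "stream.m3u8"),
        ]
        return subprocess.Popen(cmd)

    if __name__ == "__main__":
        # The Angular front end would then point an hls.js player at /hls_out/stream.m3u8.
        start_watermarked_stream(Path("input.mp4"), Path("hls_out"), "viewer name or e-mail")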
