
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (112)
-
Customising by adding your logo, banner or background image
5 September 2013, by
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present the changes in your MédiaSPIP, or the news about your projects, on your MédiaSPIP through the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the form used to create a news item.
News item creation form: for a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or later. If needed, contact the administrator of your MédiaSpip to find out.
On other sites (6476)
-
Stuck in installing a voicecloner via Python (module not found)
25 November 2023, by Wimmah
I use Python 3.11.5


As a great Python n00b I'm posting on this forum because I'm stuck installing a voice cloner (for personal use, to do a funny trick at Christmas with my family). This is the tool I'm trying to install: https://github.com/CorentinJ/Real-Time-Voice-Cloning


With a little help from ChatGPT I got quite far, but for some reason the downloaded datasets can't be found. The tool's instructions state:


[Install instructions from GitHub]
So my tree looks like this:


(base) willem@willems-air Voice cloner % tree
.
├── demo_cli.py
├── demo_toolbox.py
├── encoder_preprocess.py
├── encoder_train.py
├── saved_models
│   └── default
│       ├── encoder.pt
│       ├── synthesizer.pt
│       └── vocoder.pt
├── synthesizer_preprocess_audio.py
├── synthesizer_preprocess_embeds.py
├── synthesizer_train.py
└── vocoder_train.py

3 directories, 11 files



However, when I give the command to execute the demo, I get the message that a needed module can't be found:


(base) willem@willems-air Voice cloner % python demo_cli.py
Traceback (most recent call last):
 File "/Users/willem/Desktop/Voice cloner/demo_cli.py", line 10, in <module>
 from encoder import inference as encoder
ModuleNotFoundError: No module named 'encoder'


I built a tree that (to me) looks in line with the installation instructions... (and of course I downloaded the modules without any errors).
Here are also the first lines of demo_cli.py, where you can also see the path:


import argparse
import os
from pathlib import Path

import librosa
import numpy as np
import soundfile as sf
import torch

from encoder import inference as encoder
from encoder.params_model import model_embedding_size as speaker_embedding_size
from synthesizer.inference import Synthesizer
from utils.argutils import print_args
from utils.default_models import ensure_default_models
from vocoder import inference as vocoder


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )
    parser.add_argument("-e", "--enc_model_fpath", type=Path,
                        default="saved_models/default/encoder.pt",



I think I missed a quite basic step here, but at this point ChatGPT is looping and can't help any more, so I need a human tip, I guess ;)
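For reference, here is a small check, just a sketch that assumes it is saved in the project root, to see whether the package directories that demo_cli.py imports from are actually present next to the scripts:

# Sketch: verify that the package directories demo_cli.py imports from exist
# next to it. A ModuleNotFoundError for 'encoder' usually means the 'encoder/'
# directory is missing, or the script is run from another working directory.
from pathlib import Path

repo_root = Path(__file__).resolve().parent  # assumed: this file lives in the project root

for pkg in ("encoder", "synthesizer", "vocoder", "utils"):
    pkg_dir = repo_root / pkg
    print(f"{pkg:12s} {'found' if pkg_dir.is_dir() else 'MISSING'}")

If any of those show up as MISSING, the usual cause is that only the top-level scripts were copied instead of cloning the whole repository.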


Thx in advance!


-
sws/rgb2rgb: fix unaligned accesses in R-V V YUYV to I422p
9 November 2023, by Rémi Denis-Courmont
sws/rgb2rgb: fix unaligned accesses in R-V V YUYV to I422p
In my personal opinion, we should not need to support unaligned YUY2 pixel maps. They should always be aligned to at least 32 bits, and the current code assumes just 16 bits. However, checkasm does test for unaligned input bitmaps. QEMU accepts it, but real hardware does not.
In this particular case, we can at the same time improve performance and handle unaligned inputs, so do just that.
uyvytoyuv422_c: 104379.0
uyvytoyuv422_c: 104060.0
uyvytoyuv422_rvv_i32: 25284.0 (before)
uyvytoyuv422_rvv_i32: 19303.2 (after)
-
Stream mp4 file with watermark through a web using ffmpeg
24 March 2023, by Jose A. Matarán
I'm having problems with ffmpeg, probably due to my inexperience with this software.


My basic need is the following: I have a series of videos with material that I want to protect so that it is not plagiarized. To do this I want to add a watermark, so that when a user views a video they also see some personal data that prevents them from downloading and sharing it without permission.


What I would like is to create a small Angular + Java application that does this task (invoking ffmpeg via Runtime#exec).

I have seen that ffmpeg can emit to a server, like ffserver, but I wonder if there is a somewhat simpler way. Something like launching the ffmpeg command from my Java application with the necessary configuration and having ffmpeg emit the video along with the watermark through some port/protocol.


EDIT


I have continued to investigate and I have seen that ffmpeg allows you to broadcast for WebRTC, but you need an adapter. What I would like (and I don't know if it is possible) is to launch ffmpeg so that it acts as a server and can be consumed from the web.
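One possible direction, offered only as a sketch and not a confirmed solution: have ffmpeg burn the text in with its drawtext filter and package the output as HLS, which a web page can play directly; the same command line could be launched from Java via Runtime#exec or ProcessBuilder. The file names, watermark text and HLS options below are illustrative, and drawtext needs an ffmpeg build with libfreetype. Shown here in Python for brevity:

# Sketch: wrap an ffmpeg invocation that overlays per-user text and emits HLS.
# All file names and options are illustrative, not a tested recipe.
import subprocess

def stream_with_watermark(src: str, out_dir: str, user_text: str) -> None:
    cmd = [
        "ffmpeg",
        "-re",                       # pace reading at native frame rate (live-like)
        "-i", src,
        "-vf",
        (f"drawtext=text='{user_text}':x=10:y=10:"
         "fontsize=24:fontcolor=white:box=1:boxcolor=black@0.5"),
        "-c:v", "libx264",
        "-c:a", "aac",
        "-f", "hls",
        "-hls_time", "4",
        "-hls_playlist_type", "event",
        f"{out_dir}/stream.m3u8",    # serve this directory with any static web server
    ]
    subprocess.run(cmd, check=True)

# Hypothetical usage:
# stream_with_watermark("input.mp4", "hls_out", "viewer: jose@example.com")

The generated .m3u8 playlist and its segments can then be served by any static web server, which avoids running ffmpeg itself as the public-facing server.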