Advanced search

Media (91)

Other articles (16)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
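    For illustration only, here is a minimal sketch of how such a queue table could be polled from Python. The PyMySQL driver, the connection parameters and the ordering are assumptions; only the field names quoted above come from the article.

# Hypothetical sketch: read pending SPIPmotion encoding tasks from the queue
# table described above. Assumes a MySQL database reachable with PyMySQL;
# host and credentials are placeholders.
import pymysql

connection = pymysql.connect(
    host="localhost",
    user="spip",
    password="secret",
    database="spip",
    cursorclass=pymysql.cursors.DictCursor,
)

with connection:
    with connection.cursor() as cursor:
        # Fields named in the article: id_spipmotion_attente (task id),
        # id_document (original document to encode), id_objet and objet
        # (the object the encoded file will be attached to, and its type).
        cursor.execute(
            "SELECT id_spipmotion_attente, id_document, id_objet, objet "
            "FROM spip_spipmotion_attentes ORDER BY id_spipmotion_attente"
        )
        for task in cursor.fetchall():
            print(task)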

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register for the project users’ mailing (...)

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative of specific uses of MediaSPIP for particular projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)

On other sites (2781)

  • Stuck installing a voice cloner via Python (module not found)

    25 November 2023, by Wimmah

    I use Python 3.11.5

    


    As a great Python n00b I enter this forum because I'm stuck installing a voice cloner (for personal use, to do a funny trick for X-mas with my family). It's this tool that I'm trying to install: https://github.com/CorentinJ/Real-Time-Voice-Cloning

    


    With a little help from ChatGPT I came quite far, but for some reason the downloaded datasets can't be found. The tool's instructions state:

    


    Install instructions from GitHub
    So my tree looks like this:

    


    (base) willem@willems-air Voice cloner % tree
.
├── demo_cli.py
├── demo_toolbox.py
├── encoder_preprocess.py
├── encoder_train.py
├── saved_models
│   └── default
│       ├── encoder.pt
│       ├── synthesizer.pt
│       └── vocoder.pt
├── synthesizer_preprocess_audio.py
├── synthesizer_preprocess_embeds.py
├── synthesizer_train.py
└── vocoder_train.py

3 directories, 11 files


    


    However, when I run the command to execute the demo, I get a message that a needed module can't be found:

    


(base) willem@willems-air Voice cloner % python demo_cli.py
Traceback (most recent call last):
  File "/Users/willem/Desktop/Voice cloner/demo_cli.py", line 10, in <module>
    from encoder import inference as encoder
ModuleNotFoundError: No module named 'encoder'


    I built a tree that (to me) looks in line with the installation instructions... (and of course I downloaded the modules without any errors). Here are also the first lines of demo_cli.py, where you can also see the paths:


import argparse
import os
from pathlib import Path

import librosa
import numpy as np
import soundfile as sf
import torch

from encoder import inference as encoder
from encoder.params_model import model_embedding_size as speaker_embedding_size
from synthesizer.inference import Synthesizer
from utils.argutils import print_args
from utils.default_models import ensure_default_models
from vocoder import inference as vocoder


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )
    parser.add_argument("-e", "--enc_model_fpath", type=Path,
                        default="saved_models/default/encoder.pt",


    I think I missed a quite basic step here, but at this point ChatGPT is looping and can't help any more, so I guess I need a human tip ;)


    Thanks in advance!
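    Judging from the tree above, a plausible cause is that the repository's package directories (encoder/, synthesizer/, vocoder/, utils/) are not present next to demo_cli.py, so "from encoder import ..." cannot be resolved. A minimal, hypothetical check, assuming it is run from the project root:

# Sanity check: demo_cli.py does "from encoder import inference", so a
# directory named "encoder" must sit next to it (or be reachable via
# sys.path / PYTHONPATH). Same for synthesizer, vocoder and utils.
import sys
from pathlib import Path

here = Path.cwd()
for package in ("encoder", "synthesizer", "vocoder", "utils"):
    print(package, "->", "found" if (here / package).is_dir() else "MISSING")
print("current working directory:", here)
print("first sys.path entry:", sys.path[0])

    If those directories report MISSING, the usual fix is to clone the full repository (git clone https://github.com/CorentinJ/Real-Time-Voice-Cloning) rather than downloading individual scripts, since the tree above only shows the top-level scripts and saved_models.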


  • How to use Intel Quick Sync/iGPU in OVH dedicated server

    3 October 2022, by Meir

    I have a dedicated server with the following hardware:


CPU: Intel(R) Xeon(R) E-2386G CPU @ 3.50GHz
Motherboard: Manufacturer: ASRockRack, Product Name: E3C252D4U-2T/OVH


    According to the Intel website, the E-2386G has Intel Quick Sync, and I want to use it. I tried to check which VGA devices I have in the system (expecting to see the Intel one in addition to the local one), and this is the output:


05:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)


    I.e., the Intel iGPU isn't recognized at all in the system. I checked which devices exist under /dev/dri, and this is the output:


ls -alh /dev/dri
total 0
drwxr-xr-x  3 root root      80 Sep 19 10:28 .
drwxr-xr-x 18 root root    4.2K Sep 20 13:05 ..
drwxr-xr-x  2 root root      60 Sep 19 10:28 by-path
crw-rw----  1 root video 226, 0 Sep 19 10:28 card0


    When I try to run the vainfo tool, I get the following results:


    Vanilla run:


vainfo
error: can't connect to X server!
libva info: VA-API version 1.7.0
libva error: vaGetDriverNameByIndex() failed with unknown libva error, driver_name = (null)
vaInitialize failed with error code -1 (unknown libva error),exit


    Run after setting export LIBVA_DRIVER_NAME=i965:


vainfo
error: can't connect to X server!
libva info: VA-API version 1.7.0
libva info: User environment variable requested driver 'i965'
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_6
libva error: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit


    Run with sudo:


sudo vainfo
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.7.0
libva error: vaGetDriverNameByIndex() failed with unknown libva error, driver_name = (null)
vaInitialize failed with error code -1 (unknown libva error),exit


    How can I use Intel Quick Sync?


    --- edit ---


    Running the suggested commands:


vainfo --display DRM
libva info: VA-API version 1.7.0
libva error: vaGetDriverNameByIndex() failed with unknown libva error, driver_name = (null)
vaInitialize failed with error code -1 (unknown libva error),exit

vainfo --display wayland
error: failed to initialize display 'wayland'

vainfo --display help
Available displays:
  wayland
  x11
  DRM

sudo journalctl -b | grep i965  (no results)
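    For reference, a small sketch of how one might confirm whether the Intel iGPU is exposed to the operating system at all, which the outputs above (only an ASPEED controller in lspci, only card0 in /dev/dri) suggest it is not. The snippet assumes a Linux host with lspci installed; it is a diagnostic sketch, not OVH- or board-specific.

# Check whether an Intel iGPU is visible to Linux at all (assumption: Linux
# host with lspci available; /dev/dri and /proc/modules are standard paths).
import os
import subprocess

# 1. Is there any Intel VGA/Display controller on the PCI bus?
pci = subprocess.run(["lspci"], capture_output=True, text=True).stdout
intel_lines = [line for line in pci.splitlines()
               if "Intel" in line and ("VGA" in line or "Display" in line)]
print("Intel GPU on PCI bus:", intel_lines or "none")

# 2. Is the i915 kernel driver loaded?
with open("/proc/modules") as modules:
    print("i915 driver loaded:", any(line.split()[0] == "i915" for line in modules))

# 3. VA-API/Quick Sync needs a DRM render node (renderD128, ...), not just card0.
print("/dev/dri contents:", os.listdir("/dev/dri") if os.path.isdir("/dev/dri") else "missing")

    If no Intel controller appears on the PCI bus, no libva driver setting will help; on server boards like this the iGPU usually has to be enabled in the BIOS/BMC first, and an iGPU of this generation typically needs the iHD (intel-media-driver) VA-API driver rather than i965.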


  • Python code mutes whole video instead of sliding a song. What shall I do?

    16 July 2023, by Armed Nun

    I am trying to split a song into 4 parts and slide the parts into random parts of a video. The problem with my code is that the final output video is muted. I want to play parts of the song at random intervals, and while the song is playing the original video should be muted. Thanks to everyone who helps.


import random
from moviepy.editor import *

def split_audio_into_parts(mp3_path, num_parts):
    audio = AudioFileClip(mp3_path)
    duration = audio.duration
    part_duration = duration / num_parts

    parts = []
    for i in range(num_parts):
        start_time = i * part_duration
        end_time = start_time + part_duration if i < num_parts - 1 else duration
        part = audio.subclip(start_time, end_time)
        parts.append(part)

    return parts

def split_video_into_segments(video_path, num_segments):
    video = VideoFileClip(video_path)
    duration = video.duration
    segment_duration = duration / num_segments

    segments = []
    for i in range(num_segments):
        start_time = i * segment_duration
        end_time = start_time + segment_duration if i < num_segments - 1 else duration
        segment = video.subclip(start_time, end_time)
        segments.append(segment)

    return segments

def insert_audio_into_segments(segments, audio_parts):
    modified_segments = []
    for segment, audio_part in zip(segments, audio_parts):
        audio_part = audio_part.volumex(0)  # Mute the audio part
        modified_segment = segment.set_audio(audio_part)
        modified_segments.append(modified_segment)

    return modified_segments

def combine_segments(segments):
    final_video = concatenate_videoclips(segments)
    return final_video

# Example usage
mp3_file_path = "C:/Users/Kris/PycharmProjects/videoeditingscript124234/DENKATA - Podvodnica Demo (1).mp3"
video_file_path = "C:/Users/Kris/PycharmProjects/videoeditingscript124234/family.guy.s21e13.1080p.web.h264-cakes[eztv.re].mkv"
num_parts = 4

audio_parts = split_audio_into_parts(mp3_file_path, num_parts)
segments = split_video_into_segments(video_file_path, num_parts)
segments = insert_audio_into_segments(segments, audio_parts)
final_video = combine_segments(segments)
final_video.write_videofile("output.mp4", codec="libx264", audio_codec="aac")


    I tried entering most stuff into ChatGPT and asking questions around forums, but without success, so let's hope I can see my solution here.
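    For what it's worth, a minimal sketch of the apparent intent, based on the moviepy 1.x API already used above: instead of muting the song with volumex(0), the song part replaces a segment's own audio (which mutes the original soundtrack only where the song plays), and the segments that carry a song part are picked at random. The random placement and the names here are illustrative assumptions, not the asker's code.

# Sketch: attach each song part to a randomly chosen video segment.
# Where a song part is placed, set_audio() replaces the segment's original
# soundtrack (so the video is "muted" there and the song is heard instead);
# all other segments keep their original audio. moviepy 1.x API.
import random

def insert_audio_into_segments(segments, audio_parts):
    chosen = random.sample(range(len(segments)), k=len(audio_parts))
    modified = list(segments)
    for index, audio_part in zip(chosen, audio_parts):
        segment = segments[index]
        # Trim the song part if it is longer than the segment it lands on.
        part = audio_part.subclip(0, min(audio_part.duration, segment.duration))
        modified[index] = segment.set_audio(part)
    return modified

    Dropping the volumex(0) call is the key change; whether the remaining segments keep their own audio or are silenced as well (for example with segment.without_audio()) is a design choice.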
