
Media (2)

Keyword: - Tags -/kml

Other articles (70)

  • Automatic backup of SPIP channels

    1 April 2010

    When setting up an open platform, it is important for the hosts to have fairly regular backups available in order to guard against any problem that might occur.
    To carry out this task we rely on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)

  • Adding notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5), with MP4 also supported by Flash.
    Audio files are encoded in MP3 and Ogg (supported by HTML5), with MP3 also supported by Flash.
    Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
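
    As a rough illustration of the kind of conversions described above (not MediaSPIP's actual encoding pipeline; the file names and codec choices here are placeholders), ffmpeg commands along these lines produce the web-friendly formats mentioned:

    ffmpeg -i source.mov -c:v libtheora -c:a libvorbis video.ogv
    ffmpeg -i source.mov -c:v libvpx -c:a libvorbis video.webm
    ffmpeg -i source.mov -c:v libx264 -c:a aac video.mp4
    ffmpeg -i source.wav -c:a libmp3lame audio.mp3
    ffmpeg -i source.wav -c:a libvorbis audio.ogg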

On other sites (8578)

  • How to export wave slices to the same bits per sample as the original file

    23 April 2019, by tzujan

    I am looping through a large wave file, via a dictionary of new file names, lengths and versions. The loop exports the individual slices as files:

    mix.export(key, format='wav')

    However, it converts the original 24-bit file to 32-bit slices. I have been doing a round trip through Pro Tools to get the files back to 24-bit, as I can't figure out either the right ffmpeg settings or how to get the slice into a subprocess.

    I have tried several variations on this theme:

    mix.export(key, format='wav', codec='pcm_s24le')

    As well as this one:

    mix.export(k, format='wav', parameters=['ffmpeg', '-i', '-acodec', 'pcm_s24le', '-ar', '48000'])

    I can't seem to get the individual slices to work in the following subprocess call. key is the file name from the key-value pair. This works well for the 32-bit export, but I am not sure how to make it work when a slice's temp file, such as /var/folders/vc/q7jggn7900l099w45463lgx40000gn/T/tmpw_6mxyg8, needs to be used instead.

    subprocess.call(['ffmpeg', '-i', key,
                    '-acodec', 'pcm_s24le', '-ar', '48000', 'output.wav'])

    Hoping for slices in exactly the same format as the original input:

    mix_file = AudioSegment.from_wav(file_name)
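
    Not part of the original post, but as a sketch of one way to get a slice into a subprocess (the helper name is made up, ffmpeg is assumed to be on the PATH, and 48000 is the rate used in the attempts above): let pydub write a temporary WAV first, then re-encode that file to 24-bit PCM with an explicit ffmpeg call.

    import os
    import subprocess
    import tempfile

    def export_24bit(segment, out_path, sample_rate=48000):
        """Write a pydub AudioSegment as 24-bit PCM by re-encoding with ffmpeg."""
        # Let pydub write the slice to a temporary WAV (32-bit in this case) first.
        with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as tmp:
            tmp_name = tmp.name
        segment.export(tmp_name, format='wav')
        try:
            # Then rewrite that file as 24-bit PCM at the desired sample rate.
            subprocess.call(['ffmpeg', '-y', '-i', tmp_name,
                             '-acodec', 'pcm_s24le', '-ar', str(sample_rate),
                             out_path])
        finally:
            os.remove(tmp_name)

    # e.g. export_24bit(mix, key) inside the loop over the dictionary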

  • ffmpeg: concatenating files creates audio artefacts

    28 October 2022, by LML

    I'm currently trying to create a video out of multiple short video files. However, the final video always has audio artefacts: at certain points it sounds like there is a short high pitch or an echo in the audio. All the audio is a text-to-speech generated voice, no music. The artefacts appear sometimes more, sometimes less, but I would obviously prefer to have none at all.

    My starting point is a long audio file (mono, with audio codec "mp3" according to ffprobe). Within that file are a number of short pauses of 4-5 seconds. I detect the silences and create individual audio files from there. Afterwards, I create an mp4 file with this audio and a still image. Up to this point, the audio is perfectly fine and sounds exactly the same as in the original file.

    After this I want to create the final video: each of the individual parts joined into one long video. There is a transition between the files to mark the change of image and audio. But even when I skip the transition and simply concatenate all of these clips, which were generated the same way, the artefacts are still present.

    The commands I use to create the different files:

    Create individual audio files:
.\ffmpeg.exe -y -hide_banner -i TTSAudio.mp3 -ss 359.944 -to 372.02479 -c copy partXY.mp3

    Create individual video files by using a .png file as the video stream and the partXY.mp3 as the audio stream:
.\ffmpeg.exe -y -hide_banner -framerate 30 -loop 1 -i XY_full.png -i partXY.mp3 -c:v libx265 -c:a copy -shortest partXY.mp4

    For concatenating the files:
.\ffmpeg.exe -y -hide_banner -i part000.mp4 -i part001.mp4 -i part002.mp4 -filter_complex "[0:v] [0:a] [1:v] [1:a] [2:v] [2:a] concat=n=3:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" -c:v copy -c:a copy final_video.mp4

    I've tried a lot of different things and codecs for the audio, without any luck. I use h265, as using h264 was causing weird video artefacts after uploading the file to YouTube.
    I have tried re-encoding instead of copying (-c:a copy) at various stages, especially for the final video, all without any luck.
    I've also used the other form of concatenation where you provide a list of files (sketched below), which created a whole different set of problems.

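    For reference, the list-based concatenation mentioned above is ffmpeg's concat demuxer, which reads the clip names from a text file; a sketch in the style of the commands above (mylist.txt is an assumed name):
.\ffmpeg.exe -y -hide_banner -f concat -safe 0 -i mylist.txt -c copy final_video.mp4

    where mylist.txt contains one line per clip:
file 'part000.mp4'
file 'part001.mp4'
file 'part002.mp4'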
    


    I've managed to filter the artefacts out by using -af "lowpass=f=2800", but that changes the voice a lot. I was also not able to spot the pitch visually when opening the audio in Audacity, for example.

    Example:
https://soundcloud.com/thelml/sets/ffmpeg-audio-artefacts/s-LNr6UaMPgz9?si=f7b30e1e64bf4333ad055fa1fe21e9ec
    Due to the files being so short, I sometimes have to replay the bugged file to hear the artefact.

    So my question is: how do I fix this without using a lowpass that basically changes the whole voice?

  • hw_base_encode: make recon_frames_ref optional

    30 August 2024, by Lynne
    hw_base_encode: make recon_frames_ref optional

    Vulkan supports some stupidly odd hardware, that unfortunately,
    most modern GPUs happen to be.
    The DPB images for encoders may be required to be preallocated
    all at once, and rather than be individual frames, be layers of
    a single frame.

    As the hw_base_encode code is written with the thought that either
    the driver or the device itself supports sane image allocation,
    Vulkan does not leave us with this option.

    So, in the case that the hardware does not support individual
    frames being used as DPBs, make the DPB frames context optional,
    and let the subsystem manage this.

    • [DH] libavcodec/hw_base_encode.c