
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (34)
-
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name | Version name | Version number
Debian | Squeeze | 6.x.x
Debian | Wheezy | 7.x.x
Debian | Jessie | 8.x.x
Ubuntu | The Precise Pangolin | 12.04 LTS
Ubuntu | The Trusty Tahr | 14.04
If you want to help us improve this list, you can give us access to a machine running a distribution not mentioned above, or send us the fixes needed to add it (...) -
Automated installation script of MediaSPIP
25 April 2011, by
To overcome the difficulties mainly caused by installing server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server running a compatible Linux distribution.
To use it, you must have SSH access to your server and a root account, which the script needs in order to install the dependencies. Contact your provider if you do not have these.
The documentation on how to use this installation script is available here.
The code of this (...) -
Amélioration de la version de base
13 September 2013
Nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images below for a comparison.
To do so, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to improve, for example select[multiple] for multiple-selection lists (...)
On other sites (7328)
-
bash: find truncates some paths [duplicate]
17 November 2023, by fweber
I'm using macOS 13.0.1 and trying to use find to loop over .mov files and convert them to MP4 using FFmpeg. Many of my paths contain spaces and special characters. I found a way of putting things together thanks to this post:

function convert_to_mp4_then_rm() {
    while IFS= read -r -d '' file
    do
        ffmpeg -i "$file" "${file%.mov}.mp4"
    done < <(find /Users/f.weber/Downloads -type f -name "*.mov" -print0)
}



This runs, but I ran into a (random?) error: it looks like some of the paths are truncated by the time they reach the ffmpeg CLI.

Example to reproduce with basic content:


ll Downloads/
total 9472
-rw-r--r-- 1 f.weber staff 2,3M 26 sep 08:56 23-09-26 08-56-50-2538.mov
-rw-r--r-- 1 f.weber staff 2,3M 26 sep 08:56 23-09-26 08-56-50-2539.mov



When I call convert_to_mp4_then_rm in a terminal, the first MOV file is processed properly, then I get the following error from FFmpeg: /Downloads/23-09-26 08-56-50-2539.mov: No such file or directory. In some conditions (e.g. when the path is longer) the truncation is more obvious and can occur in the middle of a word.

What is the explanation for this? How can I forward untruncated paths to my function's core?


Thanks!
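
A likely cause: ffmpeg reads from standard input by default, so inside the while read loop it can consume part of the NUL-separated list produced by find, and the next iteration then receives a truncated path. Adding -nostdin to the ffmpeg call (or redirecting ffmpeg's stdin from /dev/null) is the usual remedy. Below is a minimal Python sketch of the same batch conversion that loops over the files itself and starts ffmpeg once per file, so nothing shares stdin; the directory and flags mirror the question and everything else is an assumption for illustration.

# Sketch of the same batch conversion in Python: the loop runs in Python and
# ffmpeg is started once per file, so nothing shares stdin with the file list.
# The directory path mirrors the question; other details are assumptions.
from pathlib import Path
import subprocess

def convert_to_mp4(root="/Users/f.weber/Downloads"):
    for mov in Path(root).rglob("*.mov"):
        mp4 = mov.with_suffix(".mp4")
        # Arguments are passed as a list, so spaces and special characters
        # in the path need no shell quoting at all.
        subprocess.run(["ffmpeg", "-nostdin", "-i", str(mov), str(mp4)], check=True)

if __name__ == "__main__":
    convert_to_mp4()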


-
How to handle an apostrophe with os.system in ffmpeg drawtext in Python
28 September 2023, by Ishu singh
I just want to execute this code with os.system('command') for ffmpeg drawtext(), but I am unable to execute it because of the ' (apostrophe); it fails.


The code goes here ->


The \f works like \n, but I'm using it to separate words.


from PIL import ImageFont
import os

def create_lines(longline, start, end, fontsize=75, fontfile='OpenSansCondensedBold.ttf'):
    # fit_text is defined elsewhere in the original code (not shown here)
    fit = fit_text(longline, 700, fontfile)

    texts = []
    now = 0
    # breaking line on basis of '\f'
    for wordIndex in range(len(fit)):
        if fit[wordIndex] == '\f' or wordIndex == len(fit)-1:
            texts.append(fit[now:wordIndex+1].strip('\f'))
            now = wordIndex

    # adding multiple lines to video
    string = ''
    count = 0
    for line in texts:
        string += f''',drawtext=fontfile={fontfile}:fontsize={fontsize}:text='{line}':fontcolor=black:bordercolor=white:borderw=4:x=(w-text_w)/2:y=(h-text_h)/2-100+{count}:'enable=between(t,{start},{end})' '''
        count += 100

    print(string)
    return string

def createVideo(content):
    input_video = 'video.mp4'
    output_video = 'output.mp4'
    font_file = 'BebasKai.ttf'
    text_file = 'OpenSansCondensedBold.ttf'
    font_size = 75
    font_color = 'white'

    part1 = create_lines(content[1], 0.5, 7)
    part2 = create_lines(content[2], 7.5, 10)

    os.system(
        f"""ffmpeg -i {input_video} -vf "drawtext=fontfile={font_file}:fontsize={95}:text={content[0]}:fontcolor={font_color}:box=1:boxcolor=black@0.9:boxborderw=20:x=(w-text_w)/2:y=(h-text_h)/4-100{str(part1)}{str(part2)}" -c:v libx264 -c:a aac -t 10 {output_video} -y""")

my_text = ['The Brain', "Your brain can't multitask effectively", "Multitasking is a myth, it's just rapid switching between tasks"]

createVideo(my_text)





What I want is to be able to execute this correctly.
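
One way to sidestep the apostrophe problem entirely, sketched below as an illustration rather than the original script: drawtext can read its text from a file via its textfile option, so nothing in the caption needs shell or filtergraph escaping, and passing the arguments to subprocess.run as a list removes the os.system quoting layer as well. The file names, font and filter options here are placeholders.

# Sketch: burn a caption containing apostrophes into a video with drawtext,
# using drawtext's textfile= option and a subprocess argument list instead of
# an os.system command string. Paths, font and options are illustrative only.
import subprocess
import tempfile

def draw_caption(input_video, output_video, caption):
    # Write the caption to a temporary file; drawtext reads it verbatim,
    # so apostrophes, colons and commas need no escaping.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tf:
        tf.write(caption)
        textfile = tf.name

    vf = (f"drawtext=fontfile=OpenSansCondensedBold.ttf:fontsize=75:"
          f"textfile={textfile}:fontcolor=white:box=1:boxcolor=black@0.9:"
          f"x=(w-text_w)/2:y=(h-text_h)/2")
    subprocess.run(["ffmpeg", "-y", "-i", input_video, "-vf", vf,
                    "-c:v", "libx264", "-c:a", "aac", output_video], check=True)

draw_caption("video.mp4", "output.mp4", "Your brain can't multitask effectively")

If the text has to stay inline in the filter string, each apostrophe needs to be escaped for both the filtergraph and the shell, which is exactly the fragile step the textfile route avoids.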


-
Batch splitting large audio files into small fixed-length audio files in moments of silence
26 July 2023, by Haldjärvi
To train the SO-VITS-SVC neural network, we need 10-14 second voice files. As material, let's say I use phrases from some game. I have already made a batch script for decoding different files into one working format, another batch script for removing silence, as well as a batch script for combining small audio files into files of 13-14 seconds (I used Python, pydub and FFmpeg). To successfully create a training dataset automatically, all that remains is one batch script: cutting audio files longer than 14 seconds into separate files of 10-14 seconds, preferably cutting at places of silence or near-silence.


So, it is necessary to batch cut large audio files (20 seconds, 70 seconds, possibly several hundred seconds) into segments of approximately 10-14 seconds; the main task, however, is to look for the quietest place in the cut area so as not to cut phrases in the middle of a word (which is not good for model training). Is it really possible to do this efficiently, so that processing a 30-second file does not take 15 seconds but is fast? Quiet-zone detection is required only in the area of the cuts, that is, at 10-14 seconds counted from the very beginning of the file.


I would be very grateful for any help.


I tried to write a script together with ChatGPT, but all the options gave completely unpredictable results and were not even close to what I needed... I had to settle for a hard cut of the files at exactly 14000 milliseconds. However, I hope there is a chance to make a variant that cuts exactly in quiet areas.


import os
from pydub import AudioSegment

input_directory = ".../RemSilence/"
output_directory = ".../Split/"
max_duration = 14000

def split_audio_by_duration(input_file, duration):
    audio = AudioSegment.from_file(input_file)
    segments = []
    for i in range(0, len(audio), duration):
        segment = audio[i:i + duration]
        segments.append(segment)
    return segments

if __name__ == "__main__":
    os.makedirs(output_directory, exist_ok=True)
    audio_files = [os.path.join(input_directory, file) for file in os.listdir(input_directory) if file.endswith(".wav")]
    audio_files.sort(key=lambda file: len(AudioSegment.from_file(file)))
    for file in audio_files:
        audio = AudioSegment.from_file(file)
        if len(audio) > max_duration:
            segments = split_audio_by_duration(file, max_duration)
            for i, segment in enumerate(segments):
                output_filename = f"output_{len(os.listdir(output_directory))+1}.wav"
                output_file_path = os.path.join(output_directory, output_filename)
                segment.export(output_file_path, format="wav")
        else:
            output_filename = f"output_{len(os.listdir(output_directory))+1}.wav"
            output_file_path = os.path.join(output_directory, output_filename)
            audio.export(output_file_path, format="wav")
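
A minimal sketch of the silence-aware variant, assuming pydub's silence helpers are acceptable: for each cut it scans only the 10-14 second window, so the search cost depends on the number of cuts rather than on the file length, picks the longest quiet stretch found by detect_silence, and falls back to a hard cut at 14 seconds when no silence is found. The silence threshold and minimum silence length are guesses that would need tuning to the source material.

# Sketch: cut long WAV files into roughly 10-14 s segments, preferring cut
# points inside detected silence. Thresholds below are assumptions to tune.
import os
from pydub import AudioSegment
from pydub.silence import detect_silence

MIN_MS, MAX_MS = 10_000, 14_000

def find_cut(audio, start):
    # Return the cut position (ms from file start) for a segment beginning at `start`.
    window = audio[start + MIN_MS:start + MAX_MS]
    # Look for quiet stretches of at least 200 ms, relative to the file's average loudness.
    silences = detect_silence(window, min_silence_len=200,
                              silence_thresh=audio.dBFS - 16)
    if silences:
        s, e = max(silences, key=lambda se: se[1] - se[0])  # longest quiet stretch
        return start + MIN_MS + (s + e) // 2                # cut in its middle
    return start + MAX_MS                                   # fallback: hard cut

def split_on_quiet(path, out_dir):
    audio = AudioSegment.from_file(path)
    os.makedirs(out_dir, exist_ok=True)
    pos, index = 0, 1
    while len(audio) - pos > MAX_MS:
        cut = find_cut(audio, pos)
        audio[pos:cut].export(os.path.join(out_dir, f"output_{index}.wav"), format="wav")
        pos, index = cut, index + 1
    audio[pos:].export(os.path.join(out_dir, f"output_{index}.wav"), format="wav")

Cutting in the middle of the longest detected silence keeps the cut as far as possible from surrounding speech; raising detect_silence's seek_step parameter trades precision for speed on very long files.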