
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
Other articles (54)
-
Requesting the creation of a channel
12 March 2010. Depending on how the platform is configured, a user may have two different ways of requesting the creation of a channel: the first at the moment of registration, the second after registration, by filling in a request form.
Both methods ask for the same information and work in much the same way: the future user fills in a series of form fields that first of all give the administrators information about (...) -
Managing the farm
2 March 2010. The farm as a whole is managed by "super admins".
Some settings can be adjusted to meet the needs of the different channels.
To begin with, it uses the "Gestion de mutualisation" plugin. -
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, provided your MediaSPIP installation is at version 0.2 or later. If in doubt, contact the administrator of your MediaSPIP to find out.
On other sites (6957)
-
OSError: [WinError 6] The handle is invalid, Python 3.10, Windows, moviepy
21 March 2024, by ernesto casco velazquez. I'm trying to make a clip using MoviePy, but I'm getting the error
OSError: [WinError 6]


Using Windows 11, Python 3.10.11
This is the method I'm trying to replicate (the full GitHub repository is here):


def clip(
    content: str,
    video_file: str,
    outfile: str,
    image_file: str = '',
    offset: int = 0,
    duration: int = 0):
    """
    Generate the Complete Clip
    content: str - Full content text
    video_file: str - Background video
    outfile: str - Filename of output
    image_file: str - Banner to display
    offset: int - Offset starting point of background video (default: 0)
    duration: int - Limit the video (default: audio length)
    """
    audio_comp, text_comp = generate_audio_text(split_text(content))

    audio_comp_list = []
    for audio_file in track(audio_comp, description='Stitching Audio...'):
        audio_comp_list.append(AudioFileClip(audio_file))
    audio_comp_stitch = concatenate_audioclips(audio_comp_list)
    audio_comp_stitch.write_audiofile('temp_audio.mp3', fps=44100)

    audio_duration = audio_comp_stitch.duration
    if duration == 0:
        duration = audio_duration

    audio_comp_stitch.close()

    vid_clip = VideoFileClip(video_file).subclip(offset, offset + duration)
    vid_clip = vid_clip.resize((1980, 1280))
    vid_clip = vid_clip.crop(x_center=1980 / 2, y_center=1280 / 2, width=720, height=1280)

    if image_file != '':
        image_clip = ImageClip(image_file).set_duration(duration).set_position(("center", 'center')).resize(0.8)  # Adjust if the Banner is too small
        vid_clip = CompositeVideoClip([vid_clip, image_clip])

    vid_clip = CompositeVideoClip([vid_clip, concatenate_videoclips(text_comp).set_position(('center', 860))])

    vid_clip = vid_clip.set_audio(AudioFileClip('temp_audio.mp3').subclip(0, duration))
    vid_clip.write_videofile(outfile, audio_codec='aac')
    vid_clip.close()



My overly simplified code looks like this:


clip1 = TextClip(
    txt="test",
    font="Komika",  # Change Font if not found
    fontsize=32,
    color="white",
    align="center",
    method="caption",
    size=(660, None),
    stroke_width=2,
    stroke_color="black",
)
clip2 = TextClip(
    txt="webos",
    font="Komika",  # Change Font if not found
    fontsize=32,
    color="white",
    align="center",
    method="caption",
    size=(660, None),
    stroke_width=2,
    stroke_color="black",
)
conc = concatenate_videoclips([clip1, clip2]).set_position(("center", 860))



The problem happens in the concatenate_videoclips statement.


The error output:


PS C:\Users\ernes\OneDrive\Desktop\Bots\autocap\autocap_with_mp3> python .\autocap_with_mp3.py text.txt
Traceback (most recent call last):
 File "C:\Users\ernes\OneDrive\Desktop\Bots\autocap\autocap_with_mp3\autocap_with_mp3.py", line 218, in <module>
 main()
 File "C:\Users\ernes\OneDrive\Desktop\Bots\autocap\autocap_with_mp3\autocap_with_mp3.py", line 207, in main
 generate_video(
 File "C:\Users\ernes\OneDrive\Desktop\Bots\autocap\autocap_with_mp3\autocap_with_mp3.py", line 165, in generate_video
 conc = concatenate_videoclips([clip1, clip2]).set_position(("center", 860))
 File "C:\Users\ernes\AppData\Local\Programs\Python\Python310\lib\site-packages\moviepy\video\compositing\concatenate.py", line 71, in concatenate_videoclips
 tt = np.cumsum([0] + [c.duration for c in clips])
 File "C:\Users\ernes\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\core\fromnumeric.py", line 2586, in cumsum
 return _wrapfunc(a, 'cumsum', axis=axis, dtype=dtype, out=out)
 File "C:\Users\ernes\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\core\fromnumeric.py", line 56, in _wrapfunc
 return _wrapit(obj, method, *args, **kwds)
 File "C:\Users\ernes\AppData\Local\Programs\Python\Python310\lib\site-packages\numpy\core\fromnumeric.py", line 45, in _wrapit
 result = getattr(asarray(obj), method)(*args, **kwds)
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
Exception ignored in: <function at 0x000001e727215120>
Traceback (most recent call last):
 File "C:\Users\ernes\AppData\Local\Programs\Python\Python310\lib\site-packages\moviepy\audio\io\readers.py", line 254, in __del__
 self.close_proc()
 File "C:\Users\ernes\AppData\Local\Programs\Python\Python310\lib\site-packages\moviepy\audio\io\readers.py", line 149, in close_proc
 self.proc.terminate()
 File "C:\Users\ernes\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1589, in terminate
 _winapi.TerminateProcess(self._handle, 1)
OSError: [WinError 6] The handle is invalid
PS C:\Users\ernes\OneDrive\Desktop\Bots\autocap\autocap_with_mp3> 


I've tried closing the vid_clip variable and the audio clips I have throughout the code, but nothing seems to work.

I'm trying to solve this without multithreading; I saw a couple of fixes that involved it, but I don't think it is necessary in my case.
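The TypeError in the traceback comes from concatenate_videoclips computing cumulative start times from each clip's .duration, and a TextClip created without an explicit duration has duration == None. A minimal stdlib sketch (no MoviePy needed) that reproduces the failure and shows why giving each clip a duration fixes it; the durations of 2 seconds are arbitrary example values:

```python
from itertools import accumulate

def stitch_start_times(durations):
    """Mirrors what moviepy's concatenate_videoclips does internally:
    tt = np.cumsum([0] + [c.duration for c in clips])."""
    return list(accumulate([0] + list(durations)))

# A TextClip built without a duration has clip.duration == None,
# which reproduces the traceback's "int + NoneType" TypeError:
try:
    stitch_start_times([2.0, None])
except TypeError as exc:
    print("reproduced:", exc)

# Giving every clip an explicit length avoids it; in moviepy that is
#   clip1 = clip1.set_duration(2)
#   clip2 = clip2.set_duration(2)
print(stitch_start_times([2.0, 2.0]))  # [0, 2.0, 4.0]
```

Once the concatenation succeeds, the secondary "WinError 6: handle is invalid" noise should also disappear, since that exception is raised while moviepy's reader objects are being torn down after the primary failure.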


-
ffmpeg split by silence (with logic to achieve 12 split segments)
14 March 2023, by Martin
https://github.com/MartinBarker/split_by_silence


I am trying to automate the process of splitting a single audio file into 12 tracks. You can see in the image below that this 35:62-length mp3 file has 11 visible split points (where the audio is quieter), which means 12 distinct segments.



I'd like to be able to run a script that automatically finds these split points and splits my file. The first split point should be around 159 seconds, the second around 360, the third around 540, the 4th around 780, the 5th around 960, and so on, for a total of 11 split points:

1 159
2 360
3 540
4 780
5 960
6 1129
7 1309
8 1500
9 1680
10 1832
11 1980



but my test results have not been working so well:


- Goal:
  11 split points found
  12 tracks rendered

- Test 1
  SD_PARAMS="-24dB"
  MIN_FRAGMENT_DURATION="3"
  5 split points found: 361.212,785.811,790.943,969.402,2150.24
  6 tracks rendered

- Test 2
  SD_PARAMS="-24dB"
  MIN_FRAGMENT_DURATION="3"
  10 split points found: 151.422,155.026,158.526,361.212,534.254,783.667,967.253,1128.91,2150.2
  11 tracks rendered


- Test 2 Problem: Even though 12 tracks were rendered, some split points are very close together, leading to exported tracks that are very short (such as 3, 5, and 2 seconds), as well as one long track of 16 minutes.


So I added a variable MIN_SEGMENT_LENGTH and ran another test:

- Test 3
SD_PARAMS="-18dB"
MIN_FRAGMENT_DURATION="3"
MIN_SEGMENT_LENGTH=120 (02:00)

log:
_______________________
Determining split points...
split points list= 150.482,155.026,158.526,361.212,530.019,534.254,783.667,967.245,1127.67,2144.57,2150.2
1. The difference between 150.482 and 155.026 is 4.544
 diff is less than MIN_SEGMENT_LENGTH=120
2. The difference between 155.026 and 158.526 is 3.500
 diff is less than MIN_SEGMENT_LENGTH=120
3. The difference between 158.526 and 361.212 is 202.686
4. The difference between 361.212 and 530.019 is 168.807
5. The difference between 530.019 and 534.254 is 4.235
 diff is less than MIN_SEGMENT_LENGTH=120
6. The difference between 534.254 and 783.667 is 249.413
7. The difference between 783.667 and 967.245 is 183.578
8. The difference between 967.245 and 1127.67 is 160.425
9. The difference between 1127.67 and 2144.57 is 1016.90
10. The difference between 2144.57 and 2150.2 is 5.63
 diff is less than MIN_SEGMENT_LENGTH=120
_______________________
Exporting 12 tracks with ffmpeg...



I'm unsure how to change my script and variables so that, after the split points are calculated, any that would produce a segment that is too short (less than 120 seconds) are discarded and the split points regenerated.


Here is my audio file:
https://filetransfer.io/data-package/HC7GG07k#link


And here is my script, which can be run with ./split_by_silence.sh:


#!/bin/bash
# -----------------------
# SPLIT BY SILENCE
# Requirements:
#   ffmpeg
#   $ apt-get install bc
# How To Run:
#   $ ./split_by_silence.sh "full_lowq.flac" %03d_output.flac

# output title format
OUTPUTTITLE="%03d_output.mp3"
# input audio filepath
IN="/mnt/e/martinradio/rips/vinyl/L.T.D. – Gittin' Down/lowquality_example.mp3"
# output audio filepath
OUTPUTFILEPATH="/mnt/e/folder/rips"
# ffmpeg option: split input audio based on this silencedetect value
SD_PARAMS="-18dB"
# split option: minimum fragment duration
# (exported so the perl filter below can read it via $ENV{MIN_FRAGMENT_DURATION})
export MIN_FRAGMENT_DURATION=3
# minimum segment length
MIN_SEGMENT_LENGTH=120

# -----------------------
# step: ffmpeg
# goal: get comma separated list of split points (use ffmpeg to determine points where audio is at SD_PARAMS [-18db] )

echo "_______________________"
echo "Determining split points..." >& 2
SPLITS=$(
 ffmpeg -v warning -i "$IN" -af silencedetect="$SD_PARAMS",ametadata=mode=print:file=-:key=lavfi.silence_start -vn -sn -f s16le -y /dev/null \
 | grep lavfi.silence_start= \
 | cut -f 2-2 -d= \
 | perl -ne '
 our $prev;
 INIT { $prev = 0.0; }
 chomp;
 if (($_ - $prev) >= $ENV{MIN_FRAGMENT_DURATION}) {
 print "$_,";
 $prev = $_;
 }
 ' \
 | sed 's!,$!!'
)
echo "split points list= $SPLITS"
# determine if the difference between any two splits is less than MIN_SEGMENT_LENGTH seconds
IFS=',' read -ra VALUES <<< "$SPLITS"

for (( i=0; i<${#VALUES[@]}-1; i++ )); do
 diff=$(echo "${VALUES[$i+1]} - ${VALUES[$i]}" | bc)
 display_i=$((i+1))
 echo "$display_i. The difference between ${VALUES[$i]} and ${VALUES[$i+1]} is $diff"
 if (( $(echo "$diff < $MIN_SEGMENT_LENGTH" | bc -l) )); then
 echo " diff is less than MIN_SEGMENT_LENGTH=$MIN_SEGMENT_LENGTH"
 fi
done


# using the split points list, calculate how many output audio files will be created 
num=0
res="${SPLITS//[^,]}"
CHARCOUNT="${#res}"
num=$((CHARCOUNT + 2))
echo "_______________________"
echo "Exporting $num tracks with ffmpeg"

ffmpeg -i "$IN" -c copy -map 0 -f segment -segment_times "$SPLITS" "$OUTPUTFILEPATH/$OUTPUTTITLE"

echo "Done."
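One way to attack the short-segment problem is to post-filter the split list before handing it to ffmpeg: walk the points in order and keep a point only if it lies at least MIN_SEGMENT_LENGTH seconds after the last *kept* point (comparing against the previous raw point, as the logging loop above does, is not enough). A sketch, using the split list from the Test 3 log and the variable names of the script above:

```shell
#!/bin/bash
# Sketch: drop any split point closer than MIN_SEGMENT_LENGTH seconds
# to the previously kept point.
MIN_SEGMENT_LENGTH=120
SPLITS="150.482,155.026,158.526,361.212,530.019,534.254,783.667,967.245,1127.67,2144.57,2150.2"

FILTERED=$(echo "$SPLITS" | tr ',' '\n' | awk -v min="$MIN_SEGMENT_LENGTH" '
    # keep the first point, then only points far enough from the last kept one
    NR == 1 || $1 - prev >= min { printf "%s%s", sep, $1; sep = ","; prev = $1 }
')
echo "$FILTERED"
# 150.482,361.212,530.019,783.667,967.245,1127.67,2144.57
```

Note that on this input filtering keeps only 7 points (8 tracks), not the 11 points you are after; to get back up to 12 tracks you would combine the filter with a less strict SD_PARAMS threshold (or a shorter MIN_FRAGMENT_DURATION) and re-run detection, rather than relying on filtering alone.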





-
FFMPEG scaling: how to set scale so width AND height don't exceed a certain amount?
12 October 2013, by Darius. I have 2 videos: one is 500 by 100 pixels (just an example, like something recorded sideways on an iPhone), and one is 1980 x 400 pixels. I need the videos converted while maintaining their aspect ratio. I know of the -vf scale filter, such as -vf scale=-1:320, but that fixes only one dimension and scales the other to match: my 500 x 100 video would come out 1600 pixels wide and 320 pixels tall. That's bad; I need it to be at most 500 pixels tall and at most 320 pixels wide (just example sizes).
How would I configure the -vf scale filter to do that?
Using latest ffmpeg 0.11.
Recap: scale any video to max 500 height and max 320 width while keeping the aspect ratio.
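In current ffmpeg this fit-within-a-box behavior is expressed directly as -vf "scale=320:500:force_original_aspect_ratio=decrease" (that scale option postdates ffmpeg 0.11; on older builds the same effect needs min() expressions inside the filter). The underlying arithmetic is just "scale by the smaller of the two bound ratios", sketched here with the question's example sizes:

```python
def fit_within(w, h, max_w=320, max_h=500):
    # Scale factor that keeps the aspect ratio while respecting both
    # bounds - the same computation force_original_aspect_ratio=decrease
    # performs inside ffmpeg's scale filter.
    s = min(max_w / w, max_h / h)
    return round(w * s), round(h * s)

print(fit_within(500, 100))   # wide clip, width-limited -> (320, 64)
print(fit_within(1980, 400))  # -> (320, 65)
```

Note that many encoders require even dimensions, so in practice the rounded values are usually nudged to the nearest even number (e.g. with force_divisible_by=2 in newer ffmpeg).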