
Other articles (99)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
Updating from version 0.1 to 0.2
24 June 2013
An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; we no longer use ffmpeg2theora; we no longer install flvtool2, in favour of flvtool++; we no longer install ffmpeg-php, which is no longer maintained (...) -
Customising by adding your logo, banner or background image
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
On other sites (11664)
-
How to Adjust Google TTS SSML to Match Original SRT Timing?
2 April, by Alexandre Silkin
I have an .srt file where each speech segment is supposed to last a specific duration (e.g., 4 seconds). However, when I generate the speech using Google Text-to-Speech (TTS) with SSML, the resulting audio plays the same segment in a shorter time (e.g., 3 seconds).


I want to adjust the speech rate dynamically in SSML so that each segment matches its original timing. My idea is to use ffmpeg to extract the actual duration of each generated speech segment, then calculate the speech rate percentage as:
speech rate = generated duration / original duration
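
The generated duration can be read with ffprobe, which ships with ffmpeg. A minimal shell sketch of the measurement and the percentage calculation, assuming one generated audio file per segment (the file name and the 4-second original duration are placeholders based on the example above):

generated="segment_001.mp3"   # hypothetical name of the TTS output for one segment
original_duration=4.0         # duration of the same segment in the .srt file, in seconds

# ffprobe prints the container duration in seconds
generated_duration=$(ffprobe -v error -show_entries format=duration \
  -of default=noprint_wrappers=1:nokey=1 "$generated")

# speech rate (percent) = generated duration / original duration
rate=$(awk -v g="$generated_duration" -v o="$original_duration" 'BEGIN { printf "%.0f", g / o * 100 }')
echo "prosody rate for this segment: ${rate}%"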


This percentage would then be applied in SSML using the <prosody> tag, like:
<prosody rate="XX%">Text to be spoken</prosody>


How can I accurately measure the duration of each segment using ffmpeg, and what is the best way to apply the correct speech rate in SSML to match the original .srt timing?


I tried duration and my SSML should look like this:


f.write(f'\t<p>{break_until_start}{text}<break time="{value[...]}ms"></break></p>\n')



Code writing the SSML:


text = value['text']
start_time_ms = int(value['start_ms']) # Start time in milliseconds
previous_end_ms = int(subsDict.get(str(int(key) - 1), {}).get('end_ms', 0)) # Get the previous end time
gap_to_fill = max(0, start_time_ms - previous_end_ms)


text = text.replace("&", "&amp;").replace('"', "&quot;").replace("'", "&apos;").replace("<", "&lt;").replace(
 ">", "&gt;")

 break_until_start = f'<break time="{gap_to_fill}ms"></break>' if gap_to_fill > 0 else ''

 f.write(f'\t<p>{break_until_start}{text}<break time="{value[...]}ms"></break></p>\n')

 f.write('\n')



-
Google - Shaka | Deleting SegmentTimeline in manifest.mpd after container restart
27 June 2022, by burakkiymaz
Shaka is running inside a Docker container. When I restart the container, the SegmentTimeline part of the manifest.mpd file is deleted. Is it possible to append the old SegmentTimeline to the new manifest.mpd file, or to recover it after a restart?


Operating System:


NAME="CentOS Linux"
VERSION="7 (Core)"



Shaka Packager Version:


google/shaka-packager:v2.5.1



You can find my configuration file below:


CH_PATH=/some/path/$CH_NAME

/usr/bin/packager \
 'in=udp://127.0.0.1:'$PORT',stream=audio,init_segment='$CH_PATH'/audio_init.m4s,segment_template='$CH_PATH'/audio_$Time$.m4s' \
 'in=udp://127.0.0.1:'$PORT',stream=video,init_segment='$CH_PATH'/h264_360p_init.m4s,segment_template='$CH_PATH'/h264_360p_$Time$.m4s' \
 'in=udp://127.0.0.1:'$(($PORT + 1))',stream=video,init_segment='$CH_PATH'/h264_540p_init.m4s,segment_template='$CH_PATH'/h264_540p_$Time$.m4s' \
 'in=udp://127.0.0.1:'$(($PORT + 2))',stream=video,init_segment='$CH_PATH'/h264_720p_init.m4s,segment_template='$CH_PATH'/h264_720p_$Time$.m4s' \
 'in=udp://127.0.0.1:'$(($PORT + 3))',stream=video,init_segment='$CH_PATH'/h264_1080p_init.m4s,segment_template='$CH_PATH'/h264_1080p_$Time$.m4s' \
 --enable_widevine_encryption \
 --key_server_url ************ \
 --content_id ********** \
 --signer ********** \
 --aes_signing_key ************ \
 --aes_signing_iv ************* \
 --mpd_output $CH_PATH/manifest.mpd \
 --hls_playlist_type LIVE \
 --hls_master_playlist_output $CH_PATH/mn.m3u8 \
 --time_shift_buffer_depth 43200 \
 --preserved_segments_outside_live_window 43200



-
Is it possible to convert all video files in subdirectories on Google Drive using ffmpeg & rclone?
24 April 2020, by rms
Currently I'm using this code, whereby rclone fetches one file from my Google Drive, converts it using ffmpeg on a server, and moves the converted files to the same folder. It's shown below.
Step 1 generates a list over which rclone can iterate, and the conversion process begins with the second script in step 2.



step 1



rclone lsf "gdrive:/folder" --files-only > list.txt




step 2



while read file; do
 rclone copy "gdrive:/folder/""$file" . -P
 ffmpeg -i "$file" -vf scale=-1:540 -vcodec libx265 -crf 26 "${file%.*}.mkv" < /dev/null
 rm -f "$file"
 rclone move . "gdrive:/folder/" --exclude list.txt -P
done < list.txt



However, some subdirectories have nested videos to convert, which would take a long time if I had to repeat this for every folder. This brought me to my question: is it possible to modify the above process to work with subdirectories?



I've tried rclone lsf to generate the list recursively using the -R flag, but ffmpeg doesn't seem to read the files from the list to make it work. Is there a way to make this work with some tweaking, possibly?
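
One possible adaptation, sketched below on the assumption that the recursive listing keeps relative paths such as sub/dir/video.mp4: recreate each subdirectory locally before copying, run ffmpeg on the relative path, and move the result back to the matching remote folder. The dir variable and the mkdir step are additions; the remote name and ffmpeg settings are taken from the script above.

rclone lsf "gdrive:/folder" --files-only -R > list.txt

while IFS= read -r file; do
 dir=$(dirname "$file")
 mkdir -p "$dir"                                   # recreate the subdirectory locally
 rclone copy "gdrive:/folder/$file" "$dir" -P      # fetch one file into its relative path
 ffmpeg -i "$file" -vf scale=-1:540 -vcodec libx265 -crf 26 "${file%.*}.mkv" < /dev/null
 rm -f "$file"
 rclone move "$dir" "gdrive:/folder/$dir" --exclude list.txt -P   # move the converted file back
done < list.txt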