
Media (91)
-
Head down (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (62)
-
Managing object creation and editing rights
8 February 2011, by
By default, many features are restricted to administrators, but each one can be configured independently to change the minimum status required to use it, notably: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;
-
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
Uploading media and themes via FTP
31 May 2013, by
MediaSPIP also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start you will find the following folders in your FTP space: config/: the site's configuration folder; IMG/: media already processed and online on the site; local/: the site's cache directory; themes/: custom themes or stylesheets; tmp/: working folder (...)
On other sites (6205)
-
Multiple Dialogue lines of an ASS subtitle file are displayed at the same time on the video
14 January 2024, by Furkan Gözükara
I am trying to code an ASS subtitle burner.


It converts a given SRT file into an ASS subtitle.


Let me show some examples.


Below is the given SRT file, generated with Whisper:


1
00:00:00,000 --> 00:00:00,080
<u>American</u> XL Bully Dog

2
00:00:00,080 --> 00:00:00,640
American <u>XL</u> Bully Dog

3
00:00:00,640 --> 00:00:01,140
American XL <u>Bully</u> Dog

4
00:00:01,140 --> 00:00:01,280
American XL Bully <u>Dog</u>

5
00:00:01,280 --> 00:00:01,520
<u>is</u> a danger to

6
00:00:01,520 --> 00:00:01,640
is <u>a</u> danger to

7
00:00:01,640 --> 00:00:01,800
is a <u>danger</u> to

8
00:00:01,800 --> 00:00:02,220
is a danger <u>to</u>

9
00:00:02,220 --> 00:00:02,380
<u>our</u> communities, particularly our

10
00:00:02,380 --> 00:00:02,680
our <u>communities,</u> particularly our

11
00:00:02,680 --> 00:00:03,360
our communities, particularly our

12
00:00:03,360 --> 00:00:03,580
our communities, <u>particularly</u> our

13
00:00:03,580 --> 00:00:04,060
our communities, particularly <u>our</u>

14
00:00:04,060 --> 00:00:04,280
<u>children.</u>



The SRT file above is then converted into the ASS subtitle below:


[Script Info]
ScriptType: v4.00+
PlayResX: 384
PlayResY: 288

[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
Style: Default,Arial,16,&H00FFFFFF,&H0000FF00,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,1,0,2,10,10,10,1

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,00:00:00.000,00:00:00.080,Default,,0,0,0,,{\c&H00FF00&}American{\c&HFFFFFF&} XL Bully Dog
Dialogue: 0,00:00:00.080,00:00:00.640,Default,,0,0,0,,American {\c&H00FF00&}XL{\c&HFFFFFF&} Bully Dog
Dialogue: 0,00:00:00.640,00:00:01.140,Default,,0,0,0,,American XL {\c&H00FF00&}Bully{\c&HFFFFFF&} Dog
Dialogue: 0,00:00:01.140,00:00:01.280,Default,,0,0,0,,American XL Bully {\c&H00FF00&}Dog{\c&HFFFFFF&}
Dialogue: 0,00:00:01.280,00:00:01.520,Default,,0,0,0,,{\c&H00FF00&}is{\c&HFFFFFF&} a danger to
Dialogue: 0,00:00:01.520,00:00:01.640,Default,,0,0,0,,is {\c&H00FF00&}a{\c&HFFFFFF&} danger to
Dialogue: 0,00:00:01.640,00:00:01.800,Default,,0,0,0,,is a {\c&H00FF00&}danger{\c&HFFFFFF&} to
Dialogue: 0,00:00:01.800,00:00:02.220,Default,,0,0,0,,is a danger {\c&H00FF00&}to{\c&HFFFFFF&}
Dialogue: 0,00:00:02.220,00:00:02.380,Default,,0,0,0,,{\c&H00FF00&}our{\c&HFFFFFF&} communities, particularly our
Dialogue: 0,00:00:02.380,00:00:02.680,Default,,0,0,0,,our {\c&H00FF00&}communities,{\c&HFFFFFF&} particularly our
Dialogue: 0,00:00:02.680,00:00:03.360,Default,,0,0,0,,our communities, particularly our
Dialogue: 0,00:00:03.360,00:00:03.580,Default,,0,0,0,,our communities, {\c&H00FF00&}particularly{\c&HFFFFFF&} our
Dialogue: 0,00:00:03.580,00:00:04.060,Default,,0,0,0,,our communities, particularly {\c&H00FF00&}our{\c&HFFFFFF&}
Dialogue: 0,00:00:04.060,00:00:04.280,Default,,0,0,0,,{\c&H00FF00&}children.{\c&HFFFFFF&}



Whether I play the subtitle in a video player or burn it into the video via FFmpeg, multiple Dialogue lines are displayed on the screen at the same time.


I have done a lot of research on this but still haven't found the cause.


Here is a screenshot of what I mean. How can I fix this issue? What is wrong with my ASS file format?




Below is the function I use to generate that ASS format:


import re
from datetime import datetime, timedelta


def convert_srt_to_ass(srt_content):
    # ASS header
    ass_header = (
        "[Script Info]\n"
        "ScriptType: v4.00+\n"
        "PlayResX: 384\n"
        "PlayResY: 288\n\n"
        "[V4+ Styles]\n"
        "Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding\n"
        "Style: Default,Arial,16,&H00FFFFFF,&H0000FF00,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,1,0,2,10,10,10,1\n\n"
        "[Events]\n"
        "Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text\n"
    )

    ass_content = ass_header
    # Capture subtitle number, start time, end time, and text of each SRT block
    matches = list(re.finditer(
        r'(\d+)\n(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\n(.+?)\n\n',
        srt_content, re.DOTALL))

    prev_end = None

    for match in matches:
        start, end, text = match.group(2), match.group(3), match.group(4)
        start = start.replace(',', '.')
        end = end.replace(',', '.')

        # If the current cue starts at or before the previous end, pull the
        # previous end time back by 100 ms and patch the line already written
        if prev_end and start <= prev_end:
            prev_end_time = datetime.strptime(prev_end, '%H:%M:%S.%f')
            adjusted_end_time = prev_end_time - timedelta(milliseconds=100)
            prev_end = adjusted_end_time.strftime('%H:%M:%S.%f')[:-3]  # keep 3 decimal places

            ass_content = ass_content.rstrip()
            ass_content = re.sub(r'(\d{2}:\d{2}:\d{2}\.\d{3}),Default,,$',
                                 f'{prev_end},Default,,', ass_content, 1)
            ass_content += '\n'

        prev_end = end

        # Map the <u>...</u> highlight to ASS colour overrides and escape newlines
        text = text.replace('<u>', '{\\c&H00FF00&}').replace('</u>', '{\\c&HFFFFFF&}')
        text = text.replace('\n', '\\N')  # ASS line breaks
        ass_content += f"Dialogue: 0,{start},{end},Default,,0,0,0,,{text}\n"

    return ass_content
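
As a quick sanity check on the generated file, the Dialogue events can be parsed back and any consecutive pair whose time ranges overlap can be reported. This is only a minimal sketch, independent of the converter above; the helper name and the subtitles.ass path are placeholders of mine.

import re
from datetime import datetime


def find_overlapping_dialogues(ass_content):
    # Pull the start/end timestamps out of every Dialogue line
    pattern = r'Dialogue: \d+,(\d+:\d{2}:\d{2}\.\d{2,3}),(\d+:\d{2}:\d{2}\.\d{2,3}),'
    fmt = '%H:%M:%S.%f'
    events = [(datetime.strptime(s, fmt), datetime.strptime(e, fmt))
              for s, e in re.findall(pattern, ass_content)]

    # Events are written in chronological order, so comparing neighbours is enough
    overlaps = []
    for (s1, e1), (s2, e2) in zip(events, events[1:]):
        if s2 < e1:  # the next event starts before the previous one ends
            overlaps.append(((s1, e1), (s2, e2)))
    return overlaps


# Example usage with a saved subtitle file:
# print(find_overlapping_dialogues(open('subtitles.ass', encoding='utf-8').read()))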



-
What is the Relationship Between RMS Level and Amplitude?
2 July 2024, by Xavier Hugo
I have an Android recorder project, and I'm trying to implement a waveform display for recording and playing audio.


During recording, I chose to use mediaRecorder.maxAmplitude to get the data I need to draw the waveform.

During audio playback (importing other audio from storage, so I can't use the above method), I used
ffprobe -v error -f lavfi -i amovie=audioFile,asetnsamples=44100,astats=metadata=1:reset=1 -show_entries frame_tags=lavfi.astats.Overall.RMS_level -of csv=p=0
to get the data. However, their outputs look very different.

The amplitude data looks like this:


0
351
650
31987
402
443
674
432
774
1156
32139
565
532
511
355
366
628
25996
610
700
423
1317
1241
621
1078
1994
1068
1549
0



The RMS level data looks like this:


-63.081060
-47.268557
-46.641208
-29.585361
-47.808792
-46.119954
-45.888205
-46.613955
-39.633273
-29.618461
-48.102711
-45.607349
-47.897675
-48.915841
-50.470556
-51.066509
-45.216680
-29.337245
-49.955258
-47.591584
-50.107631
-38.120322
-42.553827
-45.452827
-41.609616
-37.368340
-42.241799
-53.744867



It seems there is some correlation between them (e.g., their trends are identical). I want to know how to convert RMS level to amplitude so that the waveforms of the recording and the audio playback look similar.
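
Here is a minimal sketch of the conversion I have in mind, assuming the astats RMS_level values are dBFS and that maxAmplitude is a peak sample on a 16-bit scale (0 to 32767): a decibel value maps to a linear factor via 10^(dB/20), which can then be scaled by 32767. The converted values will still sit below the recorded peaks, since RMS averages over each window while maxAmplitude reports the single loudest sample, but the two curves should have the same shape.

# Minimal sketch: map astats RMS_level (assumed dBFS) onto the 16-bit scale
# used by MediaRecorder.maxAmplitude (0..32767).
FULL_SCALE = 32767  # 16-bit PCM full scale


def rms_db_to_amplitude(rms_db):
    # dB -> linear factor, then scale to the 16-bit range
    return FULL_SCALE * (10 ** (rms_db / 20.0))


rms_levels = [-63.081060, -47.268557, -46.641208, -29.585361]
print([round(rms_db_to_amplitude(db)) for db in rms_levels])
# roughly [23, 142, 153, 1087]; lower than the recorded peaks because RMS
# averages each window while maxAmplitude reports the single loudest sample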


-
How do I properly use Flutter FFmpegKit to convert video file formats to H.264?
21 August 2024, by Spencer
I've been using GPT and articles from Medium to figure out how I'm supposed to use FFmpegKit for Flutter, but they have been no help. Perhaps I need clarification on what the tool is supposed to do, because the links below have not helped and are very outdated:


https://dev.to/usp/flutter-live-streaming-a-complete-guide-2634




https://github.com/arthenica/ffmpeg-kit/blob/main/flutter/flutter/README.md


This is the code I've been trying to run to convert a video file before I upload it to the Firestore Database.


import 'dart:io';
import 'dart:typed_data';

import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:path_provider/path_provider.dart';
import 'package:uuid/uuid.dart';

Future<Uint8List> convertToH264(Uint8List bytes) async {
  try {
    final filename = const Uuid().v4();
    final tempDir = await getTemporaryDirectory();

    // Write the incoming bytes to a temporary input file
    final tempVideoFile = File('${tempDir.path}/$filename.mp4');
    await tempVideoFile.writeAsBytes(bytes);
    final outputPath = '${tempDir.path}/$filename-aac.mp4';

    // Re-encode the video stream to H.264 and the audio stream to AAC
    await FFmpegKit.execute(
      '-i ${tempVideoFile.path} -c:v libx264 -c:a aac $outputPath',
    );
    return await File(outputPath).readAsBytes();
  } catch (e) {
    rethrow;
  }
}