
Advanced search
Media (91)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (40)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (6757)
-
Ffmpeg - How can I create HLS multiple language streams, in multiple qualities?
28 April 2022, by Daniel Ellis

Preface


I'm working on converting videos from 4K into multiple qualities with multiple languages, but I'm having issues with the overlaid audio: it sometimes loses quality and sometimes falls out of sync. (This is less of a problem with the German audio, as it is a voice-over anyway.)


As a team we are complete novices when it comes to video/audio and HLS. I'm a front-end developer with no experience in this area, so apologies if my question is poorly phrased.



Videos


I have the video in 4K format and have removed the original sound, as I have English and German audio files that need to be overlaid. I then take these files and mux them together into a .ts file like this:


$ ffmpeg -i ep03-ns-4k.mp4 -i nkit-ep3-de-output.m4a -i nkit-ep3-en-output.m4a \
> -threads 0 -muxdelay 0 -y \
> -map 0:v -map 1 -map 2 -movflags +faststart -refs 1 \
> -vcodec libx264 -acodec aac -profile:v baseline -level 30 -ar 44100 -ab 64k -f mpegts out.ts 



This outputs a 4K out.ts video with both audio tracks playing.

The hard part


This is where I'm finding it tricky: I now need to convert this single file into multiple quality levels (480, 720, 1080, 2160), which I attempt with the following command:


ffmpeg -hide_banner -y -i out.ts \
-crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -ar 48000 \
-map 0:v:0 -map 0:v:0 -map 0:v:0 -map 0:v:0 \
-c:v:0 h264 -profile:v:0 main -filter:v:0 "scale=w=848:h=480:force_original_aspect_ratio=decrease" -b:v:0 1400k -maxrate:v:0 1498k -bufsize:v:0 2100k \
-c:v:1 h264 -profile:v:1 main -filter:v:1 "scale=w=1280:h=720:force_original_aspect_ratio=decrease" -b:v:1 2800k -maxrate:v:1 2996k -bufsize:v:1 4200k \
-c:v:2 h264 -profile:v:2 main -filter:v:2 "scale=w=1920:h=1080:force_original_aspect_ratio=decrease" -b:v:2 5600k -maxrate:v:2 5992k -bufsize:v:2 8400k \
-c:v:3 h264 -profile:v:3 main -filter:v:3 "scale=w=3840:h=2160:force_original_aspect_ratio=decrease" -b:v:3 11200k -maxrate:v:3 11984k -bufsize:v:3 16800k \
-var_stream_map "v:0 v:1 v:2 v:3" \
-master_pl_name master.m3u8 \
-f hls -hls_time 4 -hls_playlist_type vod -hls_list_size 0 \
-hls_segment_filename "%v/episode-%03d.ts" "%v/episode.m3u8"



This creates the required qualities, but I'm now at a loss as to how this will work with the audio.
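For what it's worth, the approach commonly suggested for this situation (a sketch only, untested against these files, with bitrates and file names carried over from the command above) is to let a single ffmpeg run produce both the video renditions and the audio playlists, using `agroup` entries in `-var_stream_map` so the generated master.m3u8 advertises the two audio tracks as alternate renditions of each video variant:

```shell
# Sketch: four video renditions plus both audio tracks in one HLS run.
# agroup:aud ties every video variant to the shared audio group, and
# default:yes marks English as the default track in the master playlist.
ffmpeg -hide_banner -y -i out.ts \
 -crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -ar 48000 \
 -map 0:v:0 -map 0:v:0 -map 0:v:0 -map 0:v:0 -map 0:a:0 -map 0:a:1 \
 -c:v:0 h264 -filter:v:0 "scale=w=848:h=480:force_original_aspect_ratio=decrease" -b:v:0 1400k \
 -c:v:1 h264 -filter:v:1 "scale=w=1280:h=720:force_original_aspect_ratio=decrease" -b:v:1 2800k \
 -c:v:2 h264 -filter:v:2 "scale=w=1920:h=1080:force_original_aspect_ratio=decrease" -b:v:2 5600k \
 -c:v:3 h264 -filter:v:3 "scale=w=3840:h=2160:force_original_aspect_ratio=decrease" -b:v:3 11200k \
 -c:a aac -b:a 128k \
 -var_stream_map "v:0,agroup:aud v:1,agroup:aud v:2,agroup:aud v:3,agroup:aud a:0,agroup:aud,language:de a:1,agroup:aud,language:en,default:yes" \
 -master_pl_name master.m3u8 \
 -f hls -hls_time 4 -hls_playlist_type vod -hls_list_size 0 \
 -hls_segment_filename "%v/episode-%03d.ts" "%v/episode.m3u8"
```

With this layout the separate audio-segmenting commands below would no longer be needed; the hls muxer writes the audio playlists itself and links them from the master playlist.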


Audio


For the audio, I run these commands:


ffmpeg -i out.ts -threads 0 -muxdelay 0 -y -map 0:a:0 -codec copy -f segment -segment_time 4 -segment_list_size 0 -segment_list audio-de/audio-de.m3u8 -segment_format mpegts audio-de/audio-de_%d.aac
ffmpeg -i out.ts -threads 0 -muxdelay 0 -y -map 0:a:1 -codec copy -f segment -segment_time 4 -segment_list_size 0 -segment_list audio-en/audio-en.m3u8 -segment_format mpegts audio-en/audio-en_%d.aac




This creates the required audio segments.


The question


I realise this is quite an ask, but is there anything wrong with our inputs? Is there a way this can be done in a more streamlined fashion?


Any answers are greatly appreciated.


-
Convert DTS to AC3 but only if there is no AC3 track already present in container
24 March, by Domagoj

I have a sound system that does not support DTS, only AC3. I'm automating the process with a bash script that detects when a movie is added to a folder, downloads subtitles, and converts the audio track to AC3 using this command (one part of it):


ffmpeg -i "{{episode}}" -map 0:v -map 0:a:0 -map 0:a -map 0:s -c:v copy -c:a copy -c:s copy -c:a:0 ac3 -b:a:0 640k "{{directory}}"/{{episode_name}}temp2.mkv


This works without issue, and I end up with a .mkv file that contains the original DTS audio track and the newly created AC3 track. The problem is that some files already contain both AC3 and DTS tracks, and in those cases I end up with two AC3 tracks and one DTS track. Another issue is that this command is triggered every time the subtitles are updated, so it may run several times over a few days and leave the container with any number of AC3 tracks.


I need a way to detect whether the file already contains an AC3 track before initiating the command above, but I'm not sure what that command would be. Any help is appreciated!
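Since no answer is recorded here, a common way to approach this (a hedged sketch, not from the original thread; the `$episode` variable and `has_ac3` helper are hypothetical names) is to list the audio codecs with ffprobe and only transcode when no ac3 stream is present:

```shell
# In the real script the codec list would come from ffprobe,
# one codec name per line, e.g.:
#   codecs=$(ffprobe -v error -select_streams a \
#     -show_entries stream=codec_name -of csv=p=0 "$episode")
# has_ac3 succeeds when that list contains an AC3 track.
has_ac3() {
  printf '%s\n' "$1" | grep -qx 'ac3'
}

# Simulated ffprobe output for a file with DTS and AC3 tracks:
codecs='dts
ac3'
if has_ac3 "$codecs"; then
  echo "AC3 already present - skipping conversion"
else
  echo "no AC3 - converting"
fi
```

Guarding the conversion this way also stops the subtitle-update trigger from stacking up extra AC3 tracks, since the second run sees the track created by the first.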


-
Feed output of one filter to the input of one filter multiple times with ffmpeg
27 August 2021, by kentcdodds

I have the following ffmpeg commands to create a podcast episode:


# remove all silence at start and end of the audio files
ffmpeg -i call.mp3 -af silenceremove=1:0:-50dB call1.mp3
ffmpeg -i response.mp3 -af silenceremove=1:0:-50dB response1.mp3

# remove silence longer than 1 second anywhere within the audio files
ffmpeg -i call1.mp3 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB call2.mp3
ffmpeg -i response1.mp3 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB response2.mp3

# normalize audio files
ffmpeg -i call2.mp3 -af loudnorm=I=-16:LRA=11:TP=0.0 call3.mp3
ffmpeg -i response2.mp3 -af loudnorm=I=-16:LRA=11:TP=0.0 response3.mp3

# cross fade audio files with intro/interstitial/outro
ffmpeg -i intro.mp3 -i call3.mp3 -i interstitial.mp3 -i response3.mp3 -i outro.mp3
 -filter_complex "[0][1]acrossfade=d=1:c2=nofade[a01];
 [a01][2]acrossfade=d=1:c1=nofade[a02];
 [a02][3]acrossfade=d=1:c2=nofade[a03];
 [a03][4]acrossfade=d=1:c1=nofade"
 output.mp3



While this "works" fine, I can't help but feel like it would be more efficient to do this all in one ffmpeg command. Based on what I found online this should be possible, but I don't understand the syntax well enough to know how to make it work. Here's what I tried :


ffmpeg -i intro.mp3 -i call.mp3 -i interstitial.mp3 -i response.mp3 -i outro.mp3
 -af [1]silenceremove=1:0:-50dB[trimmedCall]
 -af [3]silenceremove=1:0:-50dB[trimmedResponse]
 -af [trimmedCall]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceCall]
 -af [trimmedResponse]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceResponse]
 -af [noSilenceCall]loudnorm=I=-16:LRA=11:TP=0.0[call]
 -af [noSilenceResponse]loudnorm=I=-16:LRA=11:TP=0.0[response]
 -filter_complex "[0][call]acrossfade=d=1:c2=nofade[a01];
 [a01][2]acrossfade=d=1:c1=nofade[a02];
 [a02][response]acrossfade=d=1:c2=nofade[a03];
 [a03][4]acrossfade=d=1:c1=nofade"
 output.mp3



But I suspect I have a fundamental misunderstanding, because I got this error, which I don't understand:


Stream specifier 'call' in filtergraph description 
[0][call]acrossfade=d=1:c2=nofade[a01];
[a01][2]acrossfade=d=1:c1=nofade[a02];
[a02][response]acrossfade=d=1:c2=nofade[a03];
[a03][4]acrossfade=d=1:c1=nofade
 matches no streams.
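The error has a plain explanation: each `-af` option builds its own independent per-output filtergraph, so labels like `[call]` created there are never visible to `-filter_complex`, which only sees input streams and labels defined inside itself. The usual fix (a sketch along the lines of the commands above, untested) is to move the whole pipeline into one `-filter_complex`, chaining each input's filters with commas and separating branches with semicolons:

```shell
# Sketch: the entire podcast pipeline as a single filtergraph.
# [1] and [3] are the call and response inputs; their silence removal
# and loudness normalization are comma-chained, then the labeled
# results feed the acrossfade chain from the original attempt.
ffmpeg -i intro.mp3 -i call.mp3 -i interstitial.mp3 -i response.mp3 -i outro.mp3 \
 -filter_complex "\
[1]silenceremove=1:0:-50dB,silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB,loudnorm=I=-16:LRA=11:TP=0.0[call];\
[3]silenceremove=1:0:-50dB,silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB,loudnorm=I=-16:LRA=11:TP=0.0[response];\
[0][call]acrossfade=d=1:c2=nofade[a01];\
[a01][2]acrossfade=d=1:c1=nofade[a02];\
[a02][response]acrossfade=d=1:c2=nofade[a03];\
[a03][4]acrossfade=d=1:c1=nofade" \
 output.mp3
```

In the @ffmpeg/ffmpeg form this would be a single `-filter_complex` argument containing the same semicolon-separated graph, with the `-af` arguments removed.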



For added context, I'm running all these commands through @ffmpeg/ffmpeg, so that last command actually looks like this (in JavaScript):


await ffmpeg.run(
 '-i', 'intro.mp3',
 '-i', 'call.mp3',
 '-i', 'interstitial.mp3',
 '-i', 'response.mp3',
 '-i', 'outro.mp3',
 '-af', '[1]silenceremove=1:0:-50dB[trimmedCall]',
 '-af', '[3]silenceremove=1:0:-50dB[trimmedResponse]',
 '-af', '[trimmedCall]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceCall]',
 '-af', '[trimmedResponse]silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-50dB[noSilenceResponse]',
 '-af', '[noSilenceCall]loudnorm=I=-16:LRA=11:TP=0.0[call]',
 '-af', '[noSilenceResponse]loudnorm=I=-16:LRA=11:TP=0.0[response]',
 '-filter_complex', `
[0][call]acrossfade=d=1:c2=nofade[a01];
[a01][2]acrossfade=d=1:c1=nofade[a02];
[a02][response]acrossfade=d=1:c2=nofade[a03];
[a03][4]acrossfade=d=1:c1=nofade
 `,
 'output.mp3',
)