
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (14)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier(), so that visitors are able to edit their information on the authors page
-
Enhancing it visually
10 April 2011
MediaSPIP is based on a system of themes and templates (squelettes). The templates define where information is placed on the page, defining a specific use of the platform, while the themes provide the overall graphic design.
Anyone can propose a new graphic theme or template and make it available to the community.
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (3031)
-
x264: Downsides of a high-CRF (22) intermediate encode between conversions, instead of lossless
18 December 2019, by bobtheencoder
I have a huge collection of video files in the CRF 16-20 range, taking up terabytes of space. The only need I have for these originals is that I have to re-encode them from time to time, but the quality of those final encodes is much lower (CRF 26-28).
I understand that a lossy-to-lossy conversion ALWAYS results in some quality loss, but my question is: what if the intermediate file is almost visually lossless compared to the final output?
So, to sum up, what quality difference should I expect between the following routes?
CRF 18 (original) -----> CRF 28 (final)
CRF 18 (original) -----> CRF 22 (long-term storage) -----> Lossy CRF 28 (final)
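In ffmpeg terms, the two routes would look roughly like this (a sketch; the file names and the libx264 settings are assumptions, and audio handling is omitted):

# Route 1: encode the final directly from the original
ffmpeg -i original.mkv -c:v libx264 -crf 28 final.mp4

# Route 2: keep a CRF 22 intermediate, and encode finals from it later
ffmpeg -i original.mkv -c:v libx264 -crf 22 intermediate.mp4
ffmpeg -i intermediate.mp4 -c:v libx264 -crf 28 final.mp4
-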
Issue in FFmpegAndroid library: when I compress a video, it converts the video duration to 1 or 2 seconds
25 May 2017, by Fateh Singh Saini
I used this dependency:
compile 'com.writingminds:FFmpegAndroid:0.3.2'
I used the code below to compress video:
public static final String VIDEOCODEC = "-vcodec";
public static final String AUDIOCODEC = "-acodec";
public static final String VIDEOBITSTREAMFILTER = "-vbsf";
public static final String AUDIOBITSTREAMFILTER = "-absf";
public static final String VERBOSITY = "-v";
public static final String FILE_INPUT = "-i";
public static final String SIZE = "-s";
public static final String FRAMERATE = "-r";
public static final String FORMAT = "-f";
public static final String BITRATE_VIDEO = "-b:v";
public static final String BITRATE_AUDIO = "-b:a";
public static final String CHANNELS_AUDIO = "-ac";
public static final String FREQ_AUDIO = "-ar";

// -y overwrite output; scale to 480x360 at 25 fps; mpeg4 video at 150 kb/s;
// audio at 48000 b/s, 2 channels, 22050 Hz
String[] complexCommand = {"-y", FILE_INPUT, yourRealPath, SIZE, "480x360",
        FRAMERATE, "25", VIDEOCODEC, "mpeg4", BITRATE_VIDEO, "150k",
        BITRATE_AUDIO, "48000", CHANNELS_AUDIO, "2", FREQ_AUDIO, "22050",
        filePath};
/**
 * Executes the ffmpeg binary with the given argument list.
 */
private static String execFFmpegBinary(final String[] command) {
    try {
        ffmpeg.execute(command, new ExecuteBinaryResponseHandler() {
            @Override
            public void onFailure(String s) {
                Log.d(TAG, "FAILED with output : " + s);
            }

            @Override
            public void onSuccess(String s) {
                Log.d(TAG, "SUCCESS with output : " + s);
            }

            @Override
            public void onProgress(String s) {
                Log.d(TAG, "progress : " + s);
            }

            @Override
            public void onStart() {
                // TextUtils.join renders the argument array readably;
                // concatenating the array itself would only log its hash code
                Log.d(TAG, "Started command : ffmpeg " + TextUtils.join(" ", command));
            }

            @Override
            public void onFinish() {
                Log.d(TAG, "Finished command : ffmpeg " + TextUtils.join(" ", command));
            }
        });
    } catch (FFmpegCommandAlreadyRunningException e) {
        // the library runs one command at a time; ignored for now
    }
    return filePath;
}
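With the pieces above, the compression call would be wired up along these lines (a hypothetical call site; yourRealPath and filePath must point at real file locations first):

execFFmpegBinary(complexCommand);
-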
Normalizing audio in ffmpeg - how?
11 November 2020, by Betty Crokker
I'm creating one of those "Brady Bunch" videos for a choir, using a C# application I'm writing that uses ffmpeg for all the heavy lifting. For the most part it's working great, but I'm having trouble getting the audio levels just right.


What I'm doing right now is first "normalizing" the audio from the individual singers like this:

- Extract the audio into a WAV file using ffmpeg (see the sketch after this list)
- Load the WAV file into my application using NAudio
- Find the maximum 16-bit sample value
- When I create the merged video, specify a volume for this stream that boosts that maximum value to 32767
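For the extraction step, a minimal sketch (file names assumed; pcm_s16le produces the 16-bit samples whose maximum is then measured):

ffmpeg -i singer1.mp4 -vn -acodec pcm_s16le singer1.wav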

So, for example, if I have 3 streams: stream A's maximum is already 32767, stream B's maximum is 32000, and stream C's maximum is 16000. Each volume factor is 32767 divided by that stream's peak (32767/32767 = 1.0, 32767/32000 ≈ 1.02, 32767/16000 ≈ 2.05), so when I merge these videos I will specify


[0:a]volume=1.0,aresample=async=1:first_pts=0[aud0]
[1:a]volume=1.02,aresample=async=1:first_pts=0[aud1]
[2:a]volume=2.05,aresample=async=1:first_pts=0[aud2]
[aud0][aud1][aud2]amix=inputs=3[a]



(I have an additional "volume tweak" that lets me adjust the volume level of individual singers as necessary, but we can ignore that for this question)


I am reading the ffmpeg wiki on Audio Volume Manipulation and will implement that next, but I don't know what to do with the output it generates. It looks like I'm going to get mean and max volume levels in dB, and while I understand decibels in a "yeah, I learned about those in college 30 years ago" kind of way, I don't know how to use those values to normalize the audio of my input videos.
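A measurement pass with the wiki's volumedetect filter would look something like this (file name assumed; the statistics are printed into the ffmpeg log):

ffmpeg -i singer1.mp4 -af volumedetect -f null -

volumedetect reports mean_volume and max_volume in dB relative to full scale, so negating max_volume gives a peak-normalizing gain: a reported max_volume of -6.0 dB becomes volume=6.0dB in the filter graph.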


The problem is that in the ffmpeg output video, the audio level is quite low. If I apply the same process to the merged video that ffmpeg generated (extract the audio and inspect the WAV file), the maximum value is only 4904.


How do I implement an algorithm that automatically sets the output volume to a "reasonable" level? I realize I could simply add a manual volume filter and have a human set the level, but that means a lot of back and forth: generating the merged video, listening to it, adjusting the level, merging again, and so on. I want my application to figure out an appropriate output volume on its own (possibly with human adjustment allowed).
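For a hands-off starting point, the same wiki page also covers loudness normalization with the loudnorm filter, which targets an EBU R128 integrated loudness rather than a peak. A single-pass sketch, with illustrative target values and assumed file names:

ffmpeg -i merged.mp4 -af loudnorm=I=-16:TP=-1.5:LRA=11 normalized.mp4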


EDIT


Asking ffmpeg to determine the mean and max volume of each clip does provide those values in dB, and I can then use them to scale each input clip:


[0:a]volume=3.40dB,aresample=async=1:first_pts=0[aud0]
[1:a]volume=3.90dB,aresample=async=1:first_pts=0[aud1]
[2:a]volume=4.40dB,aresample=async=1:first_pts=0[aud2]
[3:a]volume=-0.00dB,aresample=async=1:first_pts=0[aud3]



But my final video is still strangely quiet. For now, I've added a manually-entered volume factor that gets applied at the very end :


[aud0][aud1][aud2]amix=inputs=3[a]
[a]volume=volume=3.00[b]



So my question is, in effect: how do I determine algorithmically what this final volume factor needs to be?
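One way to make that factor measured rather than guessed (a sketch reusing the volumedetect filter shown earlier): render the mix once, read its peak, and re-render with the compensating gain:

# pass 1: measure the peak of the mixed output
ffmpeg -i merged.mp4 -af volumedetect -f null -
# pass 2: if pass 1 reported, say, max_volume: -16.5 dB,
# replace the manual factor with the compensating gain
[a]volume=16.5dB[b]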


MORE EDIT


There's something deeper going on here: I just set the volume filter to 100, and the output is only slightly louder. Here are my filters, and the relevant portions of the command line:


color=size=1920x1080:c=0x0000FF [base];
[0:v] scale=576x324 [clip0];
[0:a]volume=1.48,aresample=async=1:first_pts=0[aud0];
[1:v] crop=808:1022:202:276,scale=384x486 [clip1];
[1:a]volume=1.57,aresample=async=1:first_pts=0[aud1];
[2:v] crop=1160:1010:428:70,scale=558x486 [clip2];
[2:a]volume=1.66,aresample=async=1:first_pts=0[aud2];
[3:v] crop=1326:1080:180:0,scale=576x469 [clip3];
[3:a]volume=1.70,aresample=async=1:first_pts=0[aud3];
[4:a]volume=0.20,aresample=async=1:first_pts=0[aud4];
[5:a]volume=0.73,aresample=async=1:first_pts=0[aud5];
[6:v] crop=1326:1080:276:0,scale=576x469 [clip4];
[6:a]volume=1.51,aresample=async=1:first_pts=0[aud6];
[base][clip0] overlay=shortest=1:x=32:y=158 [tmp0];
[tmp0][clip1] overlay=shortest=1:x=768:y=27 [tmp1];
[tmp1][clip2] overlay=shortest=1:x=1321:y=27 [tmp2];
[tmp2][clip3] overlay=shortest=1:x=32:y=625 [tmp3];
[tmp3][clip4] overlay=shortest=1:x=672:y=625 [tmp4];
[aud0][aud1][aud2][aud3][aud4][aud5][aud6]amix=inputs=7[a];
[a]adelay=delays=200:all=1[b];
[b]volume=volume=100.00[c];
[c]asplit[a1][a2];

ffmpeg -y ....
 -map "[tmp4]" -map "[a1]" -c:v libx264 "D:\voutput.mp4" 
 -map "[a2]" "D:\aoutput.mp3""



When I do this, the audio I want is louder (loud enough to clip and get distorted), but definitely not 100x louder.
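A note on that ceiling: once samples reach digital full scale they clip, so gain applied after amix mostly adds distortion rather than loudness, and amix itself scales each input down (with 7 inputs, each is attenuated to roughly 1/7). A sketch that compensates right at the mix and then caps the peak with ffmpeg's alimiter filter may behave more predictably (the 7.0 factor and 0.9 limit are illustrative):

[aud0][aud1][aud2][aud3][aud4][aud5][aud6]amix=inputs=7[a];
[a]volume=7.0,alimiter=limit=0.9[b];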