
Media (1)
-
Video of a bee in portrait orientation
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (108)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First of all, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
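This excerpt does not show the exact commands SPIPMotion runs for those two extra actions; purely as a generic illustration (file names are placeholders), probing the streams and extracting a thumbnail can be done from the command line along these lines:

ffprobe -v quiet -print_format json -show_streams source.mp4
ffmpeg -i source.mp4 -ss 00:00:01 -frames:v 1 thumbnail.jpg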
On other sites (10794)
-
Normalizing audio in ffmpeg - how?
11 November 2020, by Betty Crokker
I'm creating one of those "Brady Bunch" videos for a choir, using a C# application I'm writing that uses ffmpeg for all the heavy lifting, and for the most part it's working great, but I'm having trouble getting the audio levels just right.


What I'm doing right now is first "normalizing" the audio from the individual singers like this:


- Extract audio into a WAV file using ffmpeg
- Load the WAV file into my application using NAudio
- Find the maximum 16-bit value
- When I create the merged video, specify a volume for this stream that boosts the maximum value to 32767
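For reference, the extraction step is a plain ffmpeg call along these lines (the file name is a placeholder, not from the question), and the per-stream gain is then simply 32767 divided by the largest absolute sample value found in the WAV, e.g. 32767 / 32000 ≈ 1.02:

ffmpeg -i singer1.mp4 -vn -c:a pcm_s16le -ar 44100 -ac 2 singer1.wav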
So, for example, if I have 3 streams: stream A's maximum audio is 32767 already, stream B's maximum audio is 32000, and stream C's maximum audio is 16000, then when I merge these videos I will specify


[0:a]volume=1.0,aresample=async=1:first_pts=0[aud0]
[1:a]volume=1.02,aresample=async=1:first_pts=0[aud1]
[2:a]volume=2.05,aresample=async=1:first_pts=0[aud2]
[aud0][aud1][aud2]amix=inputs=3[a]
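A side note that may explain part of what follows (an observation about ffmpeg's defaults, not something stated in the question): amix attenuates its inputs, roughly in proportion to the number of inputs, to avoid clipping, so a three-way mix of individually normalized streams comes out well below full scale. Newer ffmpeg builds expose a normalize option on amix to disable that scaling, along these lines:

[aud0][aud1][aud2]amix=inputs=3:normalize=0[a]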



(I have an additional "volume tweak" that lets me adjust the volume level of individual singers as necessary, but we can ignore that for this question)


I am reading the ffmpeg wiki on Audio Volume Manipulation, and I will implement that next, but I don't know what to do with the output it generates. It looks like I'm going to get mean and max volume levels in dB, and while I understand decibels in a "yeah, I learned about those in college 30 years ago" kind of way, I don't know how to use those values to normalize the audio of my input videos.
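For context, the measurement that wiki page describes is the volumedetect filter; a minimal sketch (placeholder file name) is shown below. It reports mean_volume and max_volume in dB, and if max_volume comes back as, say, -3.4 dB, then volume=3.4dB lifts that stream's peak to roughly 0 dBFS, which appears to be where the 3.40dB/3.90dB/4.40dB figures in the edit further down come from:

ffmpeg -i singer1.mp4 -filter:a volumedetect -vn -f null -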


The problem is that in the ffmpeg output video the audio level is quite low. If I apply the same process (extracting the audio and examining the WAV file) to the merged video that ffmpeg generated, the maximum value is only 4904.


How do I implement an algorithm that automatically sets the output volume to a "reasonable" level? I realize I can simply add a manual volume filter and have the human set the level, but that's going to be a lot of back & forth of generating the merged video, listening to it, adjusting the level, merging again, etc. I want a way where my application figures out an appropriate output volume (possibly with human adjustment allowed).
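One alternative worth noting, which as far as I can tell the same wiki page also covers: the loudnorm filter normalizes to a target loudness rather than to a peak value, which avoids having to pick a factor by hand. A single-pass sketch applied after the mix might look like:

[aud0][aud1][aud2]amix=inputs=3[a]
[a]loudnorm=I=-16:TP=-1.5:LRA=11[b]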


EDIT


Asking ffmpeg to determine the mean and max volume of each clip does provide mean and max volume in dB, and I can then use those values to scale each input clip:


[0:a]volume=3.40dB,aresample=async=1:first_pts=0[aud0]
[1:a]volume=3.90dB,aresample=async=1:first_pts=0[aud1]
[2:a]volume=4.40dB,aresample=async=1:first_pts=0[aud2]
[3:a]volume=-0.00dB,aresample=async=1:first_pts=0[aud3]



But my final video is still strangely quiet. For now, I've added a manually-entered volume factor that gets applied at the very end:


[aud0][aud1][aud2]amix=inputs=3[a]
[a]volume=volume=3.00[b]



So my question is, in effect, how do I determine algorithmically what this final volume factor needs to be?


MORE EDIT


There's something deeper going on here: I just set the volume filter to 100 and the output is only slightly louder. Here are my filters and the relevant portions of the command line:


color=size=1920x1080:c=0x0000FF [base];
[0:v] scale=576x324 [clip0];
[0:a]volume=1.48,aresample=async=1:first_pts=0[aud0];
[1:v] crop=808:1022:202:276,scale=384x486 [clip1];
[1:a]volume=1.57,aresample=async=1:first_pts=0[aud1];
[2:v] crop=1160:1010:428:70,scale=558x486 [clip2];
[2:a]volume=1.66,aresample=async=1:first_pts=0[aud2];
[3:v] crop=1326:1080:180:0,scale=576x469 [clip3];
[3:a]volume=1.70,aresample=async=1:first_pts=0[aud3];
[4:a]volume=0.20,aresample=async=1:first_pts=0[aud4];
[5:a]volume=0.73,aresample=async=1:first_pts=0[aud5];
[6:v] crop=1326:1080:276:0,scale=576x469 [clip4];
[6:a]volume=1.51,aresample=async=1:first_pts=0[aud6];
[base][clip0] overlay=shortest=1:x=32:y=158 [tmp0];
[tmp0][clip1] overlay=shortest=1:x=768:y=27 [tmp1];
[tmp1][clip2] overlay=shortest=1:x=1321:y=27 [tmp2];
[tmp2][clip3] overlay=shortest=1:x=32:y=625 [tmp3];
[tmp3][clip4] overlay=shortest=1:x=672:y=625 [tmp4];
[aud0][aud1][aud2][aud3][aud4][aud5][aud6]amix=inputs=7[a];
[a]adelay=delays=200:all=1[b];
[b]volume=volume=100.00[c];
[c]asplit[a1][a2];

ffmpeg -y ....
 -map "[tmp4]" -map "[a1]" -c:v libx264 "D:\voutput.mp4" 
 -map "[a2]" "D:\aoutput.mp3"



When I do this, the audio I want is louder (loud enough to clip and get distorted), but definitely not 100x louder.
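A hedged reading of why a 100x gain does not give 100x loudness: once the mixed signal reaches digital full scale it simply clips, so further gain only adds distortion rather than level. One way to pick the factor without overshooting (an assumption, not something from the question) is to run volumedetect on the mixed output itself and apply exactly the headroom it reports; for example, if the mix's max_volume were reported as -12.3 dB (an illustrative number), the make-up gain would be:

[a]volume=12.3dB[b]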


-
FFmpeg - Save %{pts\:hms} in %variable% for saving as environment variable
29 November 2020, by cmd_kevin
After closing a video in ffplay, I want the last timestamp to be saved as the value of an environment variable.


To show the current timestamp in the playback window, I can use:


ffplay -vf "drawtext=text='%{pts\:hms}'" input.mp4



So there is a variable (%{pts\:hms}) for my purposes.

To change the value of a variable (in my case %variable01%), I used:

SET variable01=value



To change the environment variable with %variable01%, I used:

SETX environment_variable_01 %variable01%

My problem is that I can't use %{pts\:hms} to define %variable01%; neither echo %{pts\:hms} nor SETX environment_variable_01 %{pts\:hms} worked.

So I'm trying to transfer the %{pts\:hms} value to %variable01%, but all of my attempts failed.

What can I do to solve my problem?


I want this environment variable so that I can resume a video at the timestamp at which I closed ffplay and the command prompt.


Edit 1: I have used the search facility but didn't find an answer, especially for this specific syntax with only one percent sign, %{pts\:hms}.

Here's my reproducible example:

ffplay -vf "drawtext=text='%{pts\:hms}'" input.mp4 & echo %{pts:hms}

Answer: %{pts\:hms}.

I also tried:


ffplay -vf "drawtext=text='%{pts\:hms}'" input.mp4 & SET variable01=%{pts\:hms}



but the answer of echo %variable01% is %{pts\:hms}.



Changing to %{pts\:hms}% didn't work either; the answer of echo is then %{pts\:hms}%.
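An editorial note on why each attempt echoes the literal text: %{pts\:hms} is a drawtext text expansion that ffplay evaluates per frame inside the filter graph; it never exists as a cmd.exe variable, so SET, SETX and echo only ever see the literal string (and with a single % sign cmd doesn't even try to expand it). Capturing a value into a shell variable only works for text that some command actually writes to standard output, for example with the usual FOR /F pattern in a batch file, sketched below; ffplay's drawn timestamp is not written there, so something would first have to print it. The command name in the sketch is a hypothetical placeholder.

REM hypothetical sketch (batch file): capture a command's stdout into a variable, then persist it
FOR /F "usebackq delims=" %%T IN (`some_command_that_prints_a_timestamp`) DO SET "variable01=%%T"
SETX environment_variable_01 %variable01%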

-
php. ffmpeg. read from input stream. How to?
3 December 2020, by rzlvmp
I'm trying to create an mp4 file from a looping jpg image, by transferring the jpg image as a stdin stream.


$cwd = getcwd();
$env = [];
 
$descriptorspec = array(
 0 => ["pipe", "r"], // stdin
 1 => ["pipe", "w"], // stdout
 2 => ["pipe", "w"] // stderr
);

$transcode_command = 'ffmpeg -y -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -f jpeg_pipe -loop 1 -i pipe:0 -c:v libx264 -r 29.97 -t 30 -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2,setsar=1" -c:a aac -shortest -movflags faststart -f mp4 /tmp/output.mp4';
$process = proc_open($transcode_command, $descriptorspec, $pipes, $cwd, $env);

$s3_client = new Aws\S3\S3Client([
 'calculate_md5' => true,
 'region' => $REGION,
 'version' => 'latest',
 'credentials' => [
 'key' => $AWS_KEY,
 'secret' => $AWS_SECRET
 ]
]);
$s3_client->registerStreamWrapper();

if (is_resource($process)) {

 $input_stream = fopen('s3://somebucket/image.jpg', 'r');

 if ($input_stream) {

 while (!feof($input_stream)) {
 fwrite($pipes[0], fread($input_stream, 1024));
 fclose($pipes[0]);
 }
 
 $output = stream_get_contents($pipes[1]);
 $error = stream_get_contents($pipes[2]);

 fclose($input_stream);
 fclose($pipes[1]);
 fclose($pipes[2]);

 print_r($output);
 print_r($error);
 }
}



And I get this response:


...
Input #0, lavfi, from 'anullsrc=channel_layout=stereo:sample_rate=44100':
 Duration: N/A, start: 0.000000, bitrate: 705 kb/s
 Stream #0:0: Audio: pcm_u8, 44100 Hz, stereo, u8, 705 kb/s
[mjpeg @ 0xa705a0] EOI missing, emulating
Input #1, jpeg_pipe, from 'pipe:0':
 Duration: N/A, bitrate: N/A
 Stream #1:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 200x200 [SAR 72:72 DAR 1:1], 25 tbr, 25 tbn, 25 tbc
Stream mapping:
 Stream #1:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
 Stream #0:0 -> #0:1 (pcm_u8 (native) -> aac (native))
[mjpeg @ 0xa70c40] overread 8
[mjpeg @ 0xa70c40] EOI missing, emulating
...



It looks like ffmpeg doesn't wait for the end of the stream (EOI missing, emulating) and tries to create the output video after the first while iteration. The resulting file is an mp4 with one broken frame.
If I set the chunk size (1024) bigger than the image size, ffmpeg creates only one complete frame.

The ffmpeg command works pretty well if I use the aws cli (see my answer). But when I try to send the input stream via PHP, ffmpeg fails.

What might the problem be?
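An editorial note, offered as an assumption about the likely cause rather than a confirmed answer: in the code above, fclose($pipes[0]) sits inside the while loop, so ffmpeg's stdin is closed right after the first 1024-byte chunk, which would explain the EOI missing, emulating warnings. The usual proc_open pattern writes all of the data first and closes the write end exactly once to signal end-of-input; a minimal sketch reusing the question's $process and $pipes:

if (is_resource($process)) {
    $input_stream = fopen('s3://somebucket/image.jpg', 'r');
    if ($input_stream) {
        // feed the whole image to ffmpeg's stdin first...
        while (!feof($input_stream)) {
            fwrite($pipes[0], fread($input_stream, 1024));
        }
        fclose($input_stream);
        // ...then close stdin once, so ffmpeg sees end-of-input
        fclose($pipes[0]);

        // drain ffmpeg's stdout/stderr after stdin is closed
        $output = stream_get_contents($pipes[1]);
        $error  = stream_get_contents($pipes[2]);
        fclose($pipes[1]);
        fclose($pipes[2]);
        proc_close($process);

        print_r($output);
        print_r($error);
    }
}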