
Media (1)
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (90)
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources, as a standalone version.
As with the previous version, the full set of software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Using it, talking about it, critiquing it
10 April 2011. The first thing to do is to talk about it, either directly with the people involved in its development or with those around you, to convince new people to use it.
The larger the community, the faster the project will evolve ...
A discussion mailing list is available for any exchange between users. -
Making files available
14 April 2011. By default, on initialization, MediaSPIP does not allow visitors to download files, whether originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible, and easy, to give visitors access to these documents in various forms.
All of this happens in the skeleton's configuration page. You need to go to the channel's administration area and choose, in the navigation, (...)
On other sites (10757)
-
Normalizing audio in ffmpeg - how?
11 November 2020, by Betty Crokker
I'm creating one of those "Brady Bunch" videos for a choir, using a C# application I'm writing that uses ffmpeg for all the heavy lifting, and for the most part it's working great, but I'm having trouble getting the audio levels just right.


What I'm doing right now is first "normalizing" the audio from the individual singers, like this:


- Extract audio into a WAV file using ffmpeg (a sample command is sketched after this list)
- Load the WAV file into my application using NAudio
- Find the maximum 16-bit sample value
- When I create the merged video, specify a volume for this stream that boosts the maximum value to 32767

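For the extraction step, a minimal command along these lines does the job (singer1.mp4 and singer1.wav are placeholder names, not from the actual pipeline):

# Extract just the audio of one singer's clip as 16-bit PCM WAV
ffmpeg -i singer1.mp4 -vn -acodec pcm_s16le singer1.wav
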
So, for example, if I have 3 streams: stream A's maximum audio is already 32767, stream B's maximum audio is 32000, and stream C's maximum audio is 16000. Then, when I merge these videos, I will specify gains of 1.0, 32767/32000 ≈ 1.02, and 32767/16000 ≈ 2.05:


[0:a]volume=1.0,aresample=async=1:first_pts=0[aud0]
[1:a]volume=1.02,aresample=async=1:first_pts=0[aud1]
[2:a]volume=2.05,aresample=async=1:first_pts=0[aud2]
[aud0][aud1][aud2]amix=inputs=3[a]



(I have an additional "volume tweak" that lets me adjust the volume level of individual singers as necessary, but we can ignore that for this question)


I am reading the ffmpeg wiki on Audio Volume Manipulation, and I will implement that next, but I don't know what to do with the output it generates. It looks like I'm going to get mean and max volume levels in dB, and while I understand decibels in a "yeah, I learned about those in college 30 years ago" kind of way, I don't know how to use those values to normalize the audio of my input videos.
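For reference, the measurement pass described there looks roughly like this (a sketch; volumedetect prints its statistics to stderr, and the numbers below are made-up example values):

# First pass: measure, discarding the output
ffmpeg -i singer1.wav -af volumedetect -f null -
# stderr will include lines such as:
#   [Parsed_volumedetect_0 @ ...] mean_volume: -21.3 dB
#   [Parsed_volumedetect_0 @ ...] max_volume: -4.4 dB

# Second pass: peak-normalize by applying the negated max_volume
ffmpeg -i singer1.wav -af volume=4.4dB singer1-normalized.wav

A max_volume of -4.4 dB means the loudest sample is 4.4 dB below full scale, so a gain of +4.4 dB brings the peak up to 0 dBFS; that is the same thing the 16-bit ratio method computes, just on a logarithmic scale.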


The problem is that, in the ffmpeg output video, the audio level is quite low. If I do the same process of extracting the audio and looking at the WAV file for the merged video that ffmpeg generated, the maximum value is only 4904.


How do I implement an algorithm that automatically sets the output volume to a "reasonable" level? I realize I can simply add a manual volume filter and have a human set the level, but that's going to be a lot of back & forth of generating the merged video, listening to it, adjusting the level, merging again, etc. I want a way for my application to figure out an appropriate output volume (possibly with human adjustment allowed).


EDIT


Asking ffmpeg to determine the mean and max volume of each clip does provide mean and max volume in dB, and I can then use those values to scale each input clip:


[0:a]volume=3.40dB,aresample=async=1:first_pts=0[aud0]
[1:a]volume=3.90dB,aresample=async=1:first_pts=0[aud1]
[2:a]volume=4.40dB,aresample=async=1:first_pts=0[aud2]
[3:a]volume=-0.00dB,aresample=async=1:first_pts=0[aud3]



But my final video is still strangely quiet. For now, I've added a manually entered volume factor that gets applied at the very end:


[aud0][aud1][aud2]amix=inputs=3[a]
[a]volume=volume=3.00[b]



So my question is, in effect: how do I determine algorithmically what this final volume factor needs to be?
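One way to answer that mechanically (a sketch, assuming peak normalization is the goal; a.mp4, b.mp4 and c.mp4 are placeholder inputs): render the mix once with volumedetect appended, read the reported max_volume, then re-render with the negated value as the final gain.

# Pass 1: build the mix, measure its peak, discard the output
ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 \
 -filter_complex "[0:a][1:a][2:a]amix=inputs=3,volumedetect[a]" \
 -map "[a]" -f null -

# Pass 2: if pass 1 reported e.g. max_volume: -16.5 dB, apply +16.5 dB
ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 \
 -filter_complex "[0:a][1:a][2:a]amix=inputs=3,volume=16.5dB[a]" \
 -map "[a]" mixed-audio.m4a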


MORE EDIT


There's something deeper going on here: I just set the volume filter to 100 and the output is only slightly louder. Here are my filters and the relevant portions of the command line:


color=size=1920x1080:c=0x0000FF [base];
[0:v] scale=576x324 [clip0];
[0:a]volume=1.48,aresample=async=1:first_pts=0[aud0];
[1:v] crop=808:1022:202:276,scale=384x486 [clip1];
[1:a]volume=1.57,aresample=async=1:first_pts=0[aud1];
[2:v] crop=1160:1010:428:70,scale=558x486 [clip2];
[2:a]volume=1.66,aresample=async=1:first_pts=0[aud2];
[3:v] crop=1326:1080:180:0,scale=576x469 [clip3];
[3:a]volume=1.70,aresample=async=1:first_pts=0[aud3];
[4:a]volume=0.20,aresample=async=1:first_pts=0[aud4];
[5:a]volume=0.73,aresample=async=1:first_pts=0[aud5];
[6:v] crop=1326:1080:276:0,scale=576x469 [clip4];
[6:a]volume=1.51,aresample=async=1:first_pts=0[aud6];
[base][clip0] overlay=shortest=1:x=32:y=158 [tmp0];
[tmp0][clip1] overlay=shortest=1:x=768:y=27 [tmp1];
[tmp1][clip2] overlay=shortest=1:x=1321:y=27 [tmp2];
[tmp2][clip3] overlay=shortest=1:x=32:y=625 [tmp3];
[tmp3][clip4] overlay=shortest=1:x=672:y=625 [tmp4];
[aud0][aud1][aud2][aud3][aud4][aud5][aud6]amix=inputs=7[a];
[a]adelay=delays=200:all=1[b];
[b]volume=volume=100.00[c];
[c]asplit[a1][a2];

ffmpeg -y ....
 -map "[tmp4]" -map "[a1]" -c:v libx264 "D:\voutput.mp4" 
 -map "[a2]" "D:\aoutput.mp3"



When I do this, the audio I want is louder (loud enough to clip and get distorted), but definitely not 100x louder.
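One plausible explanation (speculation, not something confirmed above): amix scales each of its inputs down when mixing, so with 7 inputs the mix starts out much quieter than the sources, and a plain volume gain past digital full scale can only clip, which would match the distortion. Recent FFmpeg builds expose a normalize option on amix that disables this input scaling, along the lines of:

[aud0][aud1][aud2][aud3][aud4][aud5][aud6]amix=inputs=7:normalize=0[a]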


-
ffmpeg dash Segment offset
18 March 2019, by inkubux
I'm trying to integrate live transcoding like "plex" or "emby" into my application.
I am able to serve dash content to shaka-player or dash.js, but only in 'live' mode. I want to enable seeking through the player.
I looked at plex, and to enable this they create their own mpd file with a duration, so the player has a full seek bar.
However, when seeking, the player asks for a segment number, e.g. 449. I need to stop ffmpeg and restart it with an offset (-ss <segment>), but ffmpeg just restarts the transcode session from segment 0, with an initial segment. What I want is to tell ffmpeg to start at a seek point but number its output from the requested segment onward.
When playing with hls and mpegts, I can tell ffmpeg to start output at a certain segment with the option -segment_start_number, but this is not available for dash. And plex uses its own transcoder, based on ffmpeg, with the option -skip_to_segment.
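For HLS/MPEG-TS, the resume trick looks roughly like this (a sketch, assuming 1-second segments so that segment 449 starts at t=449s; movie.mkv and the output names are placeholders):

ffmpeg -ss 449 -i movie.mkv -c:v libx264 -c:a aac \
 -f segment -segment_format mpegts -segment_time 1 \
 -segment_start_number 449 seg_%d.ts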
I tried to 'hack' around this by keeping a manual offset on my web server; even if I serve the "supposed" right segment after the seek point, dash.js and shaka-player can't recover the stream. VLC, on the other hand, is able to (it is probably more tolerant of errors in segments).
Does the supposed right segment after a seek in dash contain the initial segment, or only the media segment?
Is ffmpeg able to start segmenting dash at a given segment number (for seek and resume)?
The same technique works in hls with forced key frames and a custom m3u8 (with all the "predicted" segments), but calculating the right segment length and the right bandwidth is much harder and hackish, and dash is more tolerant of variation.
I would really like to be able to seek through my live-transcoded video.
For reference, here is the custom mpd file I serve to enable "seeking":
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" type="static" suggestedPresentationDelay="PT1S" mediaPresentationDuration="PT49M2.920S" maxSegmentDuration="PT2S" minBufferTime="PT10S">
  <Period start="PT0S" duration="PT49M2.920S">
    <AdaptationSet segmentAlignment="true">
      <SegmentTemplate timescale="1" duration="1" initialization="$RepresentationID$/initial.mp4" media="$RepresentationID$/$Number$.m4s" startNumber="1"/>
      <Representation mimeType="video/mp4" codecs="avc1.640029" bandwidth="3766000" width="1920" height="1080"/>
    </AdaptationSet>
    <AdaptationSet segmentAlignment="true">
      <SegmentTemplate timescale="1" duration="1" initialization="$RepresentationID$/initial.mp4" media="$RepresentationID$/$Number$.m4s" startNumber="1"/>
      <Representation mimeType="audio/mp4" codecs="mp4a.40.2" bandwidth="188000" audioSamplingRate="48000">
        <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="6"/>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>

And here is the ffmpeg command to pull it off:
ffmpeg -ss 0 -i movie.mkv -y -acodec aac -vcodec libx264 -f dash -min_seg_duration 1000000 -individual_header_trailer 0 -pix_fmt yuv420p -vf scale=trunc(min(max(iw\,ih*dar)\,1920)/2)*2:trunc(ow/dar/2)*2 -bsf:v h264_mp4toannexb -profile:v high -level 4.1 -map_chapters -1 -map_metadata -1 -preset veryfast -movflags frag_keyframe+empty_moov -use_template 1 -use_timeline 0 -remove_at_exit 1 -crf 23 -bufsize 7532k -maxrate 3766k -start_at_zero -threads 0 -force_key_frames expr:if(isnan(prev_forced_t),eq(t,t),gte(t,prev_forced_t+1)) -init_seg_name $RepresentationID$/0_initial.mp4 -media_seg_name $RepresentationID$/0_$Number$.m4s /transcoding_temp/Z1GVWEc/index.mpd
The media_seg_name is where I prepend the custom seek point. Let's say I want to seek to segment 1233; the template would be -media_seg_name $RepresentationID$/1233_$Number$.m4s, and the segments would be 1233_1, 1233_2, 1233_*, so I can serve the right segment after the seek. But the player does not recover and keeps downloading subsequent segments. I guess that since a new initial segment is generated, I am somehow missing the headers for continuous playback after the seek, but I'm probably wrong.
Thanks for your help
-
Track API calls in Node.js with Piwik
When using Piwik for analytics, sometimes you don't want to track only your website's visitors. Especially as modern web services usually offer RESTful APIs, why not use Piwik to track those requests as well? It really gives you a more accurate view of how users interact with your services: In which ways do your clients use your APIs compared to your website? Which of your services are used the most? And what kind of tools are consuming your API?
If you're using Node.js as your application platform, you can use piwik-tracker. It's a lightweight wrapper for Piwik's own Tracking HTTP API, which helps you track your requests.
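To give a sense of what the wrapper does under the hood, a tracking call boils down to a single HTTP GET against piwik.php (a sketch; idsite, rec, action_name and url are parameters from the Tracking HTTP API Reference, while the host and tracked URL are placeholders):

curl "http://mywebsite.com/piwik.php?idsite=1&rec=1&action_name=API%20call&url=http%3A%2F%2Fexample.com%2Fapi%2Fusers"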
First, start with installing piwik-tracker as a dependency for your project:

npm install piwik-tracker --save
Then create a new tracking instance with your Piwik URL and the site ID of the project you want to track. As Piwik requires a fully qualified URL for analytics, add it in front of the actual request URL.
var PiwikTracker = require('piwik-tracker');
// Initialize with your site ID and Piwik URL
var piwik = new PiwikTracker(1, 'http://mywebsite.com/piwik.php');
// Piwik works with absolute URLs, so you have to provide protocol and hostname
var baseUrl = 'http://example.com';
// Track a request URL:
piwik.track(baseUrl + req.url);

Of course you can do more than only tracking simple URLs: all parameters offered by Piwik's Tracking HTTP API Reference are supported, and this also includes custom variables. During Piwik API calls, those are referenced as a JSON string, so for better readability you should use JSON.stringify({}) instead of manual encoding.

piwik.track({
  // The full request URL
  url: baseUrl + req.url,
  // This will be shown as title in your Piwik backend
  action_name: 'API call',
  // User agent and language settings of the client
  ua: req.header('User-Agent'),
  lang: req.header('Accept-Language'),
  // Custom request variables
  cvar: JSON.stringify({
    '1': ['API version', 'v1'],
    '2': ['HTTP method', req.method]
  })
});

As you can see, you can pass along arbitrary fields of a Node.js request object, like HTTP header fields, status code or request method (GET, POST, PUT, etc.), as well. That should already cover most of your needs.
But so far, all requests have been tracked with the IP/hostname of your Node.js application. If you also want the API user's IP to show up in your analytics data, you have to override Piwik's default setting, which requires your secret Piwik token:
function getRemoteAddr(req) {
  if (req.ip) return req.ip;
  if (req._remoteAddress) return req._remoteAddress;
  var sock = req.socket;
  if (sock.socket) return sock.socket.remoteAddress;
  return sock.remoteAddress;
}

piwik.track({
  // …
  token_auth: '<YOUR SECRET API TOKEN>',
  cip: getRemoteAddr(req)
});

As we have now collected all the values that we wanted to track, we're basically done. But if you're using Express or restify for your backend, we can still go one step further and put all of this together into a custom middleware, which makes tracking requests even easier.
First we start off with the basic code of our new middleware and save it as lib/express-piwik-tracker.js:

// ./lib/express-piwik-tracker.js
var PiwikTracker = require('piwik-tracker');

function getRemoteAddr(req) {
  if (req.ip) return req.ip;
  if (req._remoteAddress) return req._remoteAddress;
  var sock = req.socket;
  if (sock.socket) return sock.socket.remoteAddress;
  return sock.remoteAddress;
}

exports = module.exports = function analytics(options) {
  var piwik = new PiwikTracker(options.siteId, options.piwikUrl);

  return function track(req, res, next) {
    piwik.track({
      url: options.baseUrl + req.url,
      action_name: 'API call',
      ua: req.header('User-Agent'),
      lang: req.header('Accept-Language'),
      cvar: JSON.stringify({
        '1': ['API version', 'v1'],
        '2': ['HTTP method', req.method]
      }),
      token_auth: options.piwikToken,
      cip: getRemoteAddr(req)
    });
    next();
  }
}

Now to use it in our application, we initialize it in our main app.js file:

// app.js
var express = require('express'),
    piwikTracker = require('./lib/express-piwik-tracker.js'),
    app = express();

// This tracks ALL requests to your Express application
app.use(piwikTracker({
  siteId    : 1,
  piwikUrl  : 'http://mywebsite.com/piwik.php',
  baseUrl   : 'http://example.com',
  piwikToken: '<YOUR SECRET API TOKEN>'
}));

This will now track each request going to every URL of your API. If you want to limit tracking to a certain path, you can also attach it to a single route instead:
var tracker = piwikTracker({
  siteId    : 1,
  piwikUrl  : 'http://mywebsite.com/piwik.php',
  baseUrl   : 'http://example.com',
  piwikToken: '<YOUR SECRET API TOKEN>'
});

router.get('/only/track/me', tracker, function(req, res) {
  // Your code that handles the route and responds to the request
});

And that's everything you need to track your API users alongside your regular website users.