
Other articles (106)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Improvements to the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
To do so, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)
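For reference, a minimal sketch of what the Chosen enhancement boils down to on the public site, assuming jQuery and the Chosen library are already loaded (the option shown is illustrative, not the plugin's exact configuration):

// Apply Chosen to every multiple-selection list once the page is ready.
// This mirrors what the plugin wires up when select[multiple] is specified.
jQuery(function ($) {
  $('select[multiple]').chosen({
    width: '100%' // illustrative option; the plugin's own settings may differ
  });
});
-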
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
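As a rough illustration of the fallback logic described above (this is not MediaSPIP's actual player code), a page can feature-detect native HTML5 audio support and only hand playback over to a Flash player such as Flowplayer when the tag is unavailable:

// Hypothetical sketch: prefer the native <audio> element, otherwise signal
// that a Flash-based fallback (e.g. Flowplayer) should take over.
function canPlayNativeAudio(mimeType) {
  var probe = document.createElement('audio');
  return !!(probe.canPlayType && probe.canPlayType(mimeType).replace(/no/, ''));
}

if (canPlayNativeAudio('audio/mpeg')) {
  // modern browser: let the HTML5 player handle playback
} else {
  // older browser: initialise the Flash fallback here
}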
On other sites (12386)
-
Recording voice using HTML5 and processing it with ffmpeg
22 March 2015, by user3789242
I need to use ffmpeg in my JavaScript/HTML5 project, which lets the user select the format they want the audio in. I don't know anything about ffmpeg, and although I have done a lot of research I still don't know how to use it in my project. I found an example, https://github.com/sopel39/audioconverter.js, but the problem is how to bring ffmpeg.js, which is about 8 MB, into my project. If someone can help me, I'll be very thankful.
Here is my full code. The JavaScript page:
// variables
var leftchannel = [];
var rightchannel = [];
var recorder = null;
var recording = false;
var recordingLength = 0;
var volume = null;
var audioInput = null;
var sampleRate = 44100;
var audioContext = null;
var context = null;
var outputString;
if (!navigator.getUserMedia)
navigator.getUserMedia = navigator.getUserMedia ||
navigator.webkitGetUserMedia ||
navigator.mozGetUserMedia ||
navigator.msGetUserMedia;
if (navigator.getUserMedia){
navigator.getUserMedia({audio:true}, success, function(e) {
alert('Error capturing audio.');
});
} else alert('getUserMedia not supported in this browser.');
function getVal(value)
{
// if R is pressed, we start recording
if ( value == "record"){
recording = true;
// reset the buffers for the new recording
leftchannel.length = rightchannel.length = 0;
recordingLength = 0;
document.getElementById('output').innerHTML="Recording now...";
// if S is pressed, we stop the recording and package the WAV file
} else if ( value == "stop" ){
// we stop recording
recording = false;
document.getElementById('output').innerHTML="Building wav file...";
// we flat the left and right channels down
var leftBuffer = mergeBuffers ( leftchannel, recordingLength );
var rightBuffer = mergeBuffers ( rightchannel, recordingLength );
// we interleave both channels together
var interleaved = interleave ( leftBuffer, rightBuffer );
var buffer = new ArrayBuffer(44 + interleaved.length * 2);
var view = new DataView(buffer);
// RIFF chunk descriptor
writeUTFBytes(view, 0, 'RIFF');
view.setUint32(4, 44 + interleaved.length * 2, true);
writeUTFBytes(view, 8, 'WAVE');
// FMT sub-chunk
writeUTFBytes(view, 12, 'fmt ');
view.setUint32(16, 16, true);
view.setUint16(20, 1, true);
// stereo (2 channels)
view.setUint16(22, 2, true);
view.setUint32(24, sampleRate, true);
view.setUint32(28, sampleRate * 4, true);
view.setUint16(32, 4, true);
view.setUint16(34, 16, true);
// data sub-chunk
writeUTFBytes(view, 36, 'data');
view.setUint32(40, interleaved.length * 2, true);
var lng = interleaved.length;
var index = 44;
var volume = 1;
for (var i = 0; i < lng; i++){
view.setInt16(index, interleaved[i] * (0x7FFF * volume), true);
index += 2;
}
var blob = new Blob ( [ view ], { type : 'audio/wav' } );
// let's save it locally
document.getElementById('output').innerHTML='Handing off the file now...';
var url = (window.URL || window.webkitURL).createObjectURL(blob);
var li = document.createElement('li');
var au = document.createElement('audio');
var hf = document.createElement('a');
au.controls = true;
au.src = url;
hf.href = url;
hf.download = 'audio_recording_' + new Date().getTime() + '.wav';
hf.innerHTML = hf.download;
li.appendChild(au);
li.appendChild(hf);
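// note: the next line assumes an element named recordingList exists in the HTML page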
recordingList.appendChild(li);
}
}
function success(e){
audioContext = window.AudioContext || window.webkitAudioContext;
context = new audioContext();
volume = context.createGain();
// creates an audio node from the microphone incoming stream(source)
source = context.createMediaStreamSource(e);
// connect the stream(source) to the gain node
source.connect(volume);
var bufferSize = 2048;
recorder = context.createScriptProcessor(bufferSize, 2, 2);
//node for the visualizer
analyser = context.createAnalyser();
analyser.smoothingTimeConstant = 0.3;
analyser.fftSize = 512;
splitter = context.createChannelSplitter();
//when recording happens
recorder.onaudioprocess = function(e){
if (!recording) return;
var left = e.inputBuffer.getChannelData (0);
var right = e.inputBuffer.getChannelData (1);
leftchannel.push (new Float32Array (left));
rightchannel.push (new Float32Array (right));
recordingLength += bufferSize;
// get the average for the first channel
var array = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(array);
var c=document.getElementById("myCanvas");
var ctx = c.getContext("2d");
// clear the current state
ctx.clearRect(0, 0, 1000, 325);
var gradient = ctx.createLinearGradient(0,0,0,300);
gradient.addColorStop(1,'#000000');
gradient.addColorStop(0.75,'#ff0000');
gradient.addColorStop(0.25,'#ffff00');
gradient.addColorStop(0,'#ffffff');
// set the fill style
ctx.fillStyle=gradient;
drawSpectrum(array);
function drawSpectrum(array) {
for ( var i = 0; i < (array.length); i++ ){
var value = array[i];
ctx.fillRect(i*5,325-value,3,325);
}
}
}
function getAverageVolume(array) {
var values = 0;
var average;
var length = array.length;
// get all the frequency amplitudes
for (var i = 0; i < length; i++) {
values += array[i];
}
average = values / length;
return average;
}
// we connect the recorder(node to destination(speakers))
volume.connect(splitter);
splitter.connect(analyser, 0, 0);
analyser.connect(recorder);
recorder.connect(context.destination);
}
function mergeBuffers(channelBuffer, recordingLength){
var result = new Float32Array(recordingLength);
var offset = 0;
var lng = channelBuffer.length;
for (var i = 0; i < lng; i++){
var buffer = channelBuffer[i];
result.set(buffer, offset);
offset += buffer.length;
}
return result;
}
function interleave(leftChannel, rightChannel){
var length = leftChannel.length + rightChannel.length;
var result = new Float32Array(length);
var inputIndex = 0;
for (var index = 0; index < length; ){
result[index++] = leftChannel[inputIndex];
result[index++] = rightChannel[inputIndex];
inputIndex++;
}
return result;
}
function writeUTFBytes(view, offset, string){
var lng = string.length;
for (var i = 0; i < lng; i++){
view.setUint8(offset + i, string.charCodeAt(i));
}
}

and here is the HTML code:
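As for the conversion itself, a large ffmpeg.js build is typically loaded in a Web Worker so it does not block the page: the recorded WAV bytes are posted to the worker and the converted file comes back as a message. The sketch below is an assumption about the message shape ({type: 'run', MEMFS: ...} follows one common ffmpeg.js worker convention); the audioconverter.js build linked above may expect a different API, and whether a given output format works depends on which encoders were compiled into the build.

// Hypothetical wiring: convert the recorded WAV blob with an ffmpeg.js worker.
function convertWavBlob(wavBlob, targetName, onDone) {
  var worker = new Worker('ffmpeg.js');       // the ~8 MB build, served from your own project
  var reader = new FileReader();
  reader.onload = function () {
    worker.postMessage({
      type: 'run',                            // assumed message format, see the build's README
      arguments: ['-i', 'input.wav', targetName],
      MEMFS: [{ name: 'input.wav', data: new Uint8Array(reader.result) }]
    });
  };
  worker.onmessage = function (e) {
    var msg = e.data;
    if (msg && msg.type === 'done') {         // converted file returned from the worker
      onDone(new Blob([msg.data.MEMFS[0].data]));
      worker.terminate();
    }
  };
  reader.readAsArrayBuffer(wavBlob);
}

In the recorder above, this could be called right after the WAV blob is created in getVal('stop'), for example convertWavBlob(blob, 'recording.ogg', function (converted) { /* offer it for download */ });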
-
FFmpeg remove silence with exact duration detected by detect silence
17 March 2021, by dav
I have an audio file that has some silences, which I am detecting with ffmpeg's silencedetect filter and then trying to remove with silenceremove; however, there is some strange behavior. Specifically:


1) File's Basic info based on ffprobe show_streams


Input #0, mp3, from 'my_file.mp3':
 Metadata:
 encoder : Lavf58.64.100
 Duration: 00:00:25.22, start: 0.046042, bitrate: 32 kb/s
 Stream #0:0: Audio: mp3, 24000 Hz, mono, fltp, 32 kb/s



2) Using silencedetect


ffmpeg -i my_file.mp3 -af silencedetect=noise=-50dB:d=0.2 -f null -



I get this result


[mp3float @ 000001ee50074280] overread, skip -7 enddists: -1 -1
[silencedetect @ 000001ee5008a1c0] silence_start: 6.21417
[silencedetect @ 000001ee5008a1c0] silence_end: 6.91712 | silence_duration: 0.702958
[silencedetect @ 000001ee5008a1c0] silence_start: 16.44
[silencedetect @ 000001ee5008a1c0] silence_end: 17.1547 | silence_duration: 0.714708
[mp3float @ 000001ee50074280] overread, skip -10 enddists: -3 -3
[mp3float @ 000001ee50074280] overread, skip -5 enddists: -4 -4
[silencedetect @ 000001ee5008a1c0] silence_start: 24.4501
size=N/A time=00:00:25.17 bitrate=N/A speed=1.32e+03x
video:0kB audio:1180kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[silencedetect @ 000001ee5008a1c0] silence_end: 25.176 | silence_duration: 0.725917



That also matches the values and points shown in Adobe Audition




So far all good.
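For completeness, the silencedetect log can also be parsed programmatically, so the application logic can decide exactly which detected silence to act on. A minimal Node.js sketch, assuming ffmpeg is on the PATH (the detectSilences helper and the regular expressions are mine, not part of ffmpeg):

// Run silencedetect and collect {start, end, duration} intervals from ffmpeg's stderr.
const { execFile } = require('child_process');

function detectSilences(file, callback) {
  const args = ['-i', file, '-af', 'silencedetect=noise=-50dB:d=0.2', '-f', 'null', '-'];
  execFile('ffmpeg', args, (err, stdout, stderr) => {
    if (err) return callback(err);
    const silences = [];
    let current = null;
    for (const line of stderr.split('\n')) {
      const start = line.match(/silence_start: ([\d.]+)/);
      const end = line.match(/silence_end: ([\d.]+) \| silence_duration: ([\d.]+)/);
      if (start) current = { start: parseFloat(start[1]) };
      if (end && current) {
        current.end = parseFloat(end[1]);
        current.duration = parseFloat(end[2]);
        silences.push(current);
        current = null;
      }
    }
    callback(null, silences);
  });
}

// e.g. detectSilences('my_file.mp3', (err, s) => console.log(s));
// should print the three intervals listed above.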


3) Now, based on some calculations (driven by the application's logic about what the final duration of the audio should be), I am trying to delete the silence with the "0.725917"s duration. For that, based on the ffmpeg docs (https://ffmpeg.org/ffmpeg-filters.html#silencedetect):




Trim all silence encountered from beginning to end where there is more
than 1 second of silence in audio :
silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-90dB




I run this command


ffmpeg -i my_file.mp3 -af silenceremove=stop_periods=-1:stop_threshold=-50dB:stop_duration=0.72 result1.mp3



So, I am expecting it to delete only the silence with the "0.725917"s duration (the last one in the above image); however, it is deleting the silence that starts at 16.44s with a duration of "0.714708"s. Please see the following comparison:




4) Running silencedetect on result1.mp3 with the same options gives even stranger results


ffmpeg -i result1.mp3 -af silencedetect=noise=-50dB:d=0.2 -f null -



result


[mp3float @ 0000017723404280] overread, skip -5 enddists: -4 -4
[silencedetect @ 0000017723419540] silence_start: 6.21417
[silencedetect @ 0000017723419540] silence_end: 6.92462 | silence_duration: 0.710458
[mp3float @ 0000017723404280] overread, skip -7 enddists: -6 -6
[mp3float @ 0000017723404280] overread, skip -7 enddists: -2 -2
[mp3float @ 0000017723404280] overread, skip -6 enddists: -1 -1
 Last message repeated 1 times
[silencedetect @ 0000017723419540] silence_start: 23.7308
size=N/A time=00:00:24.45 bitrate=N/A speed=1.33e+03x
video:0kB audio:1146kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[silencedetect @ 0000017723419540] silence_end: 24.456 | silence_duration: 0.725167



So, the results are:

- With a command meant to remove silences longer than 0.72 seconds, the silence that was "0.714708"s got removed, while the silence of "0.725917"s remained as is (well, actually it changed a little, as per the 3rd point)
- The first silence, which started at "6.21417" and had a duration of "0.702958"s, now suddenly has a duration of "0.710458"s
- The 3rd silence, which had started at "24.4501" (and now starts at 23.7308, obviously because the 2nd silence was removed) and had a duration of "0.725917"s, is now suddenly "0.725167"s (not a big difference, but still: why should removing a different silence change this silence's duration at all?)








Accordingly, the expected results are:

- Only the silences that match the provided condition (stop_duration=0.72) should be removed: in this specific example only the last one, but in general any silence that matches the length condition, irrespective of its position (start, end or in the middle)
- Other silences should keep exactly the same duration they had before






FFmpeg: 4.2.4-1ubuntu0.1, Ubuntu: 20.04.2


Some attempts and results while playing with ffmpeg options:


a)


ffmpeg -i my_file.mp3 -af silenceremove=stop_periods=-1:stop_threshold=-50dB:stop_duration=0.72:detection=peak tmp1.mp3



Result: the first and second silences are removed; the 3rd silence's duration remains exactly the same.


b)


ffmpeg -i my_file.mp3 -af silenceremove=stop_periods=-1:stop_threshold=-50dB:stop_duration=0.71 tmp_0.71.mp3



Result: the first and second silences are removed; the 3rd silence remains, but its duration becomes "0.72075"s.


c)


ffmpeg -i my_file.mp3 -af silenceremove=stop_periods=-1:stop_threshold=-50dB:stop_duration=0.7 tmp_0.7.mp3



Result: all 3 silences are removed.


d) the edge case


this command still removes the second silence (after which the first silence becomes exactly as in point #4 and the last silence becomes "0.721375"s)


ffmpeg -i my_file.mp3 -af silenceremove=stop_periods=-1:stop_threshold=-50dB:stop_duration=0.72335499999 tmp_0.72335499999.mp3



but this one, again, does not remove any silence:


ffmpeg -i my_file.mp3 -af silenceremove=stop_periods=-1:stop_threshold=-50dB:stop_duration=0.723355 tmp_0.723355.mp3



e) window param case 0.03


ffmpeg -i my_file.mp3 -af silenceremove=stop_periods=-1:stop_threshold=-50dB:stop_duration=0.72:window=0.03 window_0.03.mp3



does not remove any silence, but running silencedetect


ffmpeg -i window_0.03.mp3 -af silencedetect=noise=-50dB:d=0.2 -f null -



gives this result (compare with the silences in result1.mp3, from point #4)


[mp3float @ 000001c5c8824280] overread, skip -5 enddists: -4 -4
[silencedetect @ 000001c5c883a040] silence_start: 6.21417
[silencedetect @ 000001c5c883a040] silence_end: 6.92462 | silence_duration: 0.710458
[mp3float @ 000001c5c8824280] overread, skip -7 enddists: -6 -6
[mp3float @ 000001c5c8824280] overread, skip -7 enddists: -2 -2
[silencedetect @ 000001c5c883a040] silence_start: 16.4424
[silencedetect @ 000001c5c883a040] silence_end: 17.1555 | silence_duration: 0.713167
[mp3float @ 000001c5c8824280] overread, skip -6 enddists: -1 -1
 Last message repeated 1 times
[silencedetect @ 000001c5c883a040] silence_start: 24.4508
size=N/A time=00:00:25.17 bitrate=N/A speed=1.24e+03x
video:0kB audio:1180kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[silencedetect @ 000001c5c883a040] silence_end: 25.176 | silence_duration: 0.725167



f) window case 0.01


ffmpeg -i my_file.mp3 -af silenceremove=stop_periods=-1:stop_threshold=-50dB:stop_duration=0.72:window=0.01 window_0.01.mp3



removes the first and second silences; silencedetect with the same params gives the following result


[mp3float @ 000001ea631d4280] overread, skip -5 enddists: -4 -4
 Last message repeated 1 times
[mp3float @ 000001ea631d4280] overread, skip -7 enddists: -2 -2
[mp3float @ 000001ea631d4280] overread, skip -6 enddists: -1 -1
 Last message repeated 1 times
[silencedetect @ 000001ea631ea1c0] silence_start: 23.0108
size=N/A time=00:00:23.73 bitrate=N/A speed=1.2e+03x
video:0kB audio:1113kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[silencedetect @ 000001ea631ea1c0] silence_end: 23.736 | silence_duration: 0.725167




Any thoughts, ideas, or pointers are much appreciated.
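One possible workaround, rather than relying on silenceremove's duration threshold, is to cut the exact interval reported by silencedetect with ffmpeg's atrim and concat filters and re-assemble the file. A hedged Node.js sketch (the removeInterval helper is illustrative; atrim, asetpts and concat are standard ffmpeg filters):

// Remove one exact interval (e.g. 24.4501 .. 25.176) by keeping the audio
// before and after it and concatenating the two pieces.
const { execFile } = require('child_process');

function removeInterval(input, start, end, output, callback) {
  const filter =
    `[0:a]atrim=end=${start},asetpts=PTS-STARTPTS[a0];` +
    `[0:a]atrim=start=${end},asetpts=PTS-STARTPTS[a1];` +
    `[a0][a1]concat=n=2:v=0:a=1[out]`;
  execFile('ffmpeg',
    ['-i', input, '-filter_complex', filter, '-map', '[out]', output],
    callback);
}

// e.g. removeInterval('my_file.mp3', 24.4501, 25.176, 'trimmed.mp3', console.log);

Note that writing the result back to mp3 re-encodes the audio; mp3's frame granularity and encoder padding are a plausible source of the small timing drifts observed in the experiments above.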


-
Revision 34657: list unused plugins (the glob() is lousy; who can do better?)
23 January 2010, by fil@… — Log
list unused plugins (the glob() is lousy; who can do better?)