
Media (1)
-
Bee video in portrait orientation
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (65)
-
What is an editorial
21 June 2013, by
Write your point of view in an article. It will be filed in a section intended for this purpose.
An editorial is a text-only article. Its purpose is to file points of view in a dedicated section. A single editorial is featured on the home page. To read previous ones, see the dedicated section.
You can customize the editorial creation form.
Editorial creation form: in the case of a document of editorial type, the (...) -
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources in standalone version.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other changes (...) -
Making files available
14 April 2011, by
By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible and easy to let visitors access these documents in various forms.
All of this happens in the skeleton configuration page. Go to the channel's administration area and choose in the navigation (...)
On other sites (11478)
-
ffmpeg, lower fps with gpu - too many packets buffered
29 May 2020, by Alter
I'm trying to lower the fps for a large set of videos. Unfortunately, I don't have much experience with ffmpeg.



This is my current command, which is a hybrid of multiple posts. At this point it's more guesswork than anything else.



ffmpeg \
 -y -hwaccel_output_format cuda -hwaccel_device 0 -hwaccel cuvid -c:v mpeg2_cuvid \
 -i myinput.mp4 -r 25 -c:v hevc_nvenc -b:v 128K -strict -2 -movflags faststart \
 /workspace/videos/24fps_trial2/rat1-control2.mp4 -c:v hevc_nvenc




The gist of it is just that I'm trying to lower the fps and use hardware acceleration, since my dataset is large.



The message I get is:



Trailing option(s) found in the command: may be ignored.
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x453a100] decoding for stream 0 failed
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x453a100] Could not find codec parameters for stream 0 (Video: mpeg2video (hvc1 / 0x31637668), none(tv), 2704x1520, 59946 kb/s): unspecified pixel format
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/workspace/gs/Rat Controls Jan 2020 /Rat 1 Control 2 .MP4':
 Metadata:
 major_brand : mp41
 minor_version : 538120216
 compatible_brands: mp41
 creation_time : 2020-01-07T09:45:34.000000Z
 firmware : HD7.01.01.70.00
 Duration: 00:01:49.63, start: 0.000000, bitrate: 60202 kb/s
 Stream #0:0(eng): Video: mpeg2video (hvc1 / 0x31637668), none(tv), 2704x1520, 59946 kb/s, 119.88 fps, 119.88 tbr, 120k tbn, 120k tbc (default)
 Metadata:
 creation_time : 2020-01-07T09:45:34.000000Z
 handler_name : GoPro H.265
 encoder : GoPro H.265 encoder
 Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 189 kb/s (default)
 Metadata:
 creation_time : 2020-01-07T09:45:34.000000Z
 handler_name : GoPro AAC
 Stream #0:2(eng): Data: bin_data (gpmd / 0x646D7067), 29 kb/s (default)
 Metadata:
 creation_time : 2020-01-07T09:45:34.000000Z
 handler_name : GoPro MET
 Stream #0:3(eng): Data: none (fdsc / 0x63736466), 21 kb/s (default)
 Metadata:
 creation_time : 2020-01-07T09:45:34.000000Z
 handler_name : GoPro SOS
Stream mapping:
 Stream #0:0 -> #0:0 (mpeg2video (mpeg2_cuvid) -> hevc (hevc_nvenc))
 Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
Too many packets buffered for output stream 0:1.
[aac @ 0x45bf880] Qavg: 746.362
[aac @ 0x45bf880] 2 frames left in the queue on closing
Conversion failed!
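
Judging from the log, the likely culprit is the decoder choice: the GoPro file is HEVC (the hvc1 tag in the stream line), but the command forces the MPEG-2 decoder mpeg2_cuvid, so stream 0 cannot be decoded; the trailing -c:v hevc_nvenc after the output file is also simply ignored, as the first warning says. A minimal sketch of a corrected command (untested, with the paths taken from the question):

ffmpeg -y \
 -hwaccel cuvid -hwaccel_device 0 -c:v hevc_cuvid \
 -i myinput.mp4 \
 -r 25 -c:v hevc_nvenc -b:v 128K \
 -c:a copy -movflags +faststart \
 /workspace/videos/24fps_trial2/rat1-control2.mp4

All output options now come before the output file name. Note that 128K is a very low video bitrate for 2704x1520 material, so it may need raising.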




-
ffmpeg : Extract unknown data stream from video container
23 July 2020, by Pikkostack
I have a .MOV container which contains the following tracks:


Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 3840x2160 [SAR 1:1 DAR 16:9], 100619 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
Metadata:
 creation_time : 2020-07-21T22:48:24.000000Z
 handler_name : DJI.AVC
 encoder : AVC encoder
Stream #0:1(eng): Data: none (priv / 0x76697270), 87 kb/s
Metadata:
 creation_time : 2020-07-21T22:48:24.000000Z
 handler_name : DJI.Meta
Stream #0:2(eng): Subtitle: mov_text (text / 0x74786574), 2 kb/s (default)
Metadata:
 creation_time : 2020-07-21T22:48:24.000000Z
 handler_name : DJI.Subtitle



As you can see, stream #0:1, called DJI.Meta, is of an unknown data format. I just want to extract the raw data of this stream to a file. So this is the ffmpeg command I tried:


ffmpeg -i .\DJI_0001.MOV -map 0:1 metadata



But using this command results in the following error:


Unable to find a suitable output format for 'metadata'
metadata: Invalid argument



How can I tell ffmpeg that the data should not be formatted, so that only the raw data is extracted?
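
One approach that should work here (a sketch, not verified against this exact file): ffmpeg ships a raw "data" muxer, so the stream can be copied bit-for-bit as long as the output format is named explicitly, since it cannot be guessed from the extension-less file name:

ffmpeg -i DJI_0001.MOV -map 0:1 -c copy -f data metadata.bin

The -c copy avoids any re-encoding attempt, and -f data tells ffmpeg to dump the packets as-is.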


-
audio convert to mp3, pcm and vox using ffmpeg
8 July 2014, by user3789242
Please, can someone help me with the code for ffmpeg?
I have to use ffmpeg to convert a recorded voice (using HTML5) into mp3, pcm or vox depending on the user's selection.
I don't know how to write the code for ffmpeg; it would help if someone could point me to the code or libraries.
Thank you in advance. Here is my code for recording the voice with a visualizer:
// variables
var leftchannel = [];
var rightchannel = [];
var recorder = null;
var recording = false;
var recordingLength = 0;
var volume = null;
var audioInput = null;
var sampleRate = 44100;
var audioContext = null;
var context = null;
var outputString;

if (!navigator.getUserMedia)
  navigator.getUserMedia = navigator.getUserMedia ||
                           navigator.webkitGetUserMedia ||
                           navigator.mozGetUserMedia ||
                           navigator.msGetUserMedia;

if (navigator.getUserMedia) {
  navigator.getUserMedia({audio: true}, success, function(e) {
    alert('Error capturing audio.');
  });
} else alert('getUserMedia not supported in this browser.');

function getVal(value) {
  // if R is pressed, we start recording
  if (value == "record") {
    recording = true;
    // reset the buffers for the new recording
    leftchannel.length = rightchannel.length = 0;
    recordingLength = 0;
    document.getElementById('output').innerHTML = "Recording now...";
  // if S is pressed, we stop the recording and package the WAV file
  } else if (value == "stop") {
    // we stop recording
    recording = false;
    document.getElementById('output').innerHTML = "Building wav file...";
    // we flat the left and right channels down
    var leftBuffer = mergeBuffers(leftchannel, recordingLength);
    var rightBuffer = mergeBuffers(rightchannel, recordingLength);
    // we interleave both channels together
    var interleaved = interleave(leftBuffer, rightBuffer);
    var buffer = new ArrayBuffer(44 + interleaved.length * 2);
    var view = new DataView(buffer);
    // RIFF chunk descriptor
    writeUTFBytes(view, 0, 'RIFF');
    view.setUint32(4, 44 + interleaved.length * 2, true);
    writeUTFBytes(view, 8, 'WAVE');
    // FMT sub-chunk
    writeUTFBytes(view, 12, 'fmt ');
    view.setUint32(16, 16, true);
    view.setUint16(20, 1, true);
    // stereo (2 channels)
    view.setUint16(22, 2, true);
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * 4, true);
    view.setUint16(32, 4, true);
    view.setUint16(34, 16, true);
    // data sub-chunk
    writeUTFBytes(view, 36, 'data');
    view.setUint32(40, interleaved.length * 2, true);
    var lng = interleaved.length;
    var index = 44;
    var volume = 1;
    for (var i = 0; i < lng; i++) {
      view.setInt16(index, interleaved[i] * (0x7FFF * volume), true);
      index += 2;
    }
    var blob = new Blob([view], { type: 'audio/wav' });
    // let's save it locally
    document.getElementById('output').innerHTML = 'Handing off the file now...';
    var url = (window.URL || window.webkitURL).createObjectURL(blob);
    var li = document.createElement('li');
    var au = document.createElement('audio');
    var hf = document.createElement('a');
    au.controls = true;
    au.src = url;
    hf.href = url;
    hf.download = 'audio_recording_' + new Date().getTime() + '.wav';
    hf.innerHTML = hf.download;
    li.appendChild(au);
    li.appendChild(hf);
    recordingList.appendChild(li);
  }
}

function success(e) {
  audioContext = window.AudioContext || window.webkitAudioContext;
  context = new audioContext();
  volume = context.createGain();
  // creates an audio node from the microphone incoming stream (source)
  source = context.createMediaStreamSource(e);
  // connect the stream (source) to the gain node
  source.connect(volume);
  var bufferSize = 2048;
  recorder = context.createScriptProcessor(bufferSize, 2, 2);
  // node for the visualizer
  analyser = context.createAnalyser();
  analyser.smoothingTimeConstant = 0.3;
  analyser.fftSize = 512;
  splitter = context.createChannelSplitter();
  // when recording happens
  recorder.onaudioprocess = function(e) {
    if (!recording) return;
    var left = e.inputBuffer.getChannelData(0);
    var right = e.inputBuffer.getChannelData(1);
    leftchannel.push(new Float32Array(left));
    rightchannel.push(new Float32Array(right));
    recordingLength += bufferSize;
    // get the average for the first channel
    var array = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(array);
    var c = document.getElementById("myCanvas");
    var ctx = c.getContext("2d");
    // clear the current state
    ctx.clearRect(0, 0, 1000, 325);
    var gradient = ctx.createLinearGradient(0, 0, 0, 300);
    gradient.addColorStop(1, '#000000');
    gradient.addColorStop(0.75, '#ff0000');
    gradient.addColorStop(0.25, '#ffff00');
    gradient.addColorStop(0, '#ffffff');
    // set the fill style
    ctx.fillStyle = gradient;
    drawSpectrum(array);
    function drawSpectrum(array) {
      for (var i = 0; i < array.length; i++) {
        var value = array[i];
        ctx.fillRect(i * 5, 325 - value, 3, 325);
      }
    }
  };
  function getAverageVolume(array) {
    var values = 0;
    var average;
    var length = array.length;
    // get all the frequency amplitudes
    for (var i = 0; i < length; i++) {
      values += array[i];
    }
    average = values / length;
    return average;
  }
  // we connect the recorder (node to destination (speakers))
  volume.connect(splitter);
  splitter.connect(analyser, 0, 0);
  analyser.connect(recorder);
  recorder.connect(context.destination);
}

function mergeBuffers(channelBuffer, recordingLength) {
  var result = new Float32Array(recordingLength);
  var offset = 0;
  var lng = channelBuffer.length;
  for (var i = 0; i < lng; i++) {
    var buffer = channelBuffer[i];
    result.set(buffer, offset);
    offset += buffer.length;
  }
  return result;
}

function interleave(leftChannel, rightChannel) {
  var length = leftChannel.length + rightChannel.length;
  var result = new Float32Array(length);
  var inputIndex = 0;
  for (var index = 0; index < length; ) {
    result[index++] = leftChannel[inputIndex];
    result[index++] = rightChannel[inputIndex];
    inputIndex++;
  }
  return result;
}

function writeUTFBytes(view, offset, string) {
  var lng = string.length;
  for (var i = 0; i < lng; i++) {
    view.setUint8(offset + i, string.charCodeAt(i));
  }
}
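
As for the ffmpeg side of the question, the conversion would typically happen on the server once the WAV file has been uploaded. A minimal sketch with placeholder file names; mp3 needs an ffmpeg build with libmp3lame, and Dialogic vox is not a standard ffmpeg output, so check what your build supports with ffmpeg -formats:

# WAV to mp3 (VBR quality 2)
ffmpeg -i recording.wav -codec:a libmp3lame -q:a 2 recording.mp3

# WAV to headerless 16-bit little-endian PCM
ffmpeg -i recording.wav -f s16le -acodec pcm_s16le recording.pcm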