
Other articles (71)
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
MediaSPIP Player: the controls
26 May 2010
Mouse controls for the player
In addition to clicking the visible buttons of the player interface, other actions can be performed with the mouse: Click: clicking on the video, or on the audio logo, starts or pauses playback depending on its current state; Scroll wheel: when the mouse is placed over the area used by the media (hover), the wheel no longer produces its usual page-scrolling effect, but instead lowers or (...)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in the standalone version.
For a working installation, all software dependencies must be installed manually on the server.
If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (10054)
-
FFMPEG or FFPLAY, catch FFT signal in real time as floats
25 April 2021, by NVRM
Looking to extract a real-time FFT snapshot of waveform data with ffplay, with a view to creating animations.

This is exactly what I am looking to capture, but this demo uses JavaScript in a browser. (Source: my own post.)

const audio = document.getElementById('music');
audio.load();
audio.play();

// Route the <audio> element through an AnalyserNode so its FFT can be read.
const ctx = new AudioContext();
const audioSrc = ctx.createMediaElementSource(audio);
const analyser = ctx.createAnalyser();

audioSrc.connect(analyser);
analyser.connect(ctx.destination);

analyser.fftSize = 256;
const bufferLength = analyser.frequencyBinCount; // fftSize / 2 = 128 bins
const frequencyData = new Uint8Array(bufferLength);

// Poll the current magnitude (0-255) of each frequency bin once per second.
setInterval(() => {
  analyser.getByteFrequencyData(frequencyData);
  console.log(frequencyData);
}, 1000);


<audio id="music" src="http://strm112.1.fm/reggae_mobile_mp3" crossorigin="use-credentials" controls></audio>

I tried many variations on the method posted at https://trac.ffmpeg.org/wiki/Waveform.

The problem is that the output of that approach is PCM (Pulse Code Modulation) data, and it is not produced in real time.


More generally, is there a simple way to retrieve this data while the sound is playing, something like:


ffplay -fft file.mp3 > fft.json

The same thing using C: Apply FFT on pcm data and convert to a spectrogram


FFMPEG waveform filter documentation
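ffplay has no -fft option, but one workaround (a sketch only, not an FFmpeg feature) is to let ffmpeg decode the audio to raw 32-bit float PCM on stdout and pipe it into a small program that computes a DFT per window while the stream is being read. The fft_tap name, the 1024-sample window and the naive O(N²) loop below are illustrative choices; a real implementation would use FFTW or a similar FFT library.

/* fft_tap.c (hypothetical helper, not part of FFmpeg): reads raw float
 * PCM from stdin and prints one line of magnitude values per window.
 *
 * Assumed invocation, piping decoded audio out of ffmpeg:
 *   ffmpeg -i file.mp3 -f f32le -ac 1 -ar 44100 pipe:1 | ./fft_tap
 *
 * Compile with: cc -O2 fft_tap.c -o fft_tap -lm
 */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 1024 /* window size: about 23 ms of audio at 44.1 kHz */

int main(void)
{
    float win[N];
    double re, im;

    /* Read consecutive non-overlapping windows of N float samples. */
    while (fread(win, sizeof(float), N, stdin) == N) {
        /* Naive O(N^2) DFT; bin k corresponds to roughly k * 44100 / N Hz. */
        for (int k = 0; k < N / 2; k++) {
            re = im = 0.0;
            for (int n = 0; n < N; n++) {
                double phase = 2.0 * M_PI * k * n / N;
                re += win[n] * cos(phase);
                im -= win[n] * sin(phase);
            }
            printf("%.3f%c", sqrt(re * re + im * im),
                   k == N / 2 - 1 ? '\n' : ' ');
        }
        fflush(stdout);
    }
    return 0;
}

For a live stream URL the magnitudes arrive as the sound is being received; for a local file, adding -re to the ffmpeg command makes it read the input at its native rate, so the output is paced like playback.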


-
FFMPEG: Is defining a context for a codec compulsory?
28 November 2013, by sam
I have decoder code that I am trying to integrate into the ffmpeg framework. I'm referring to the HOW TO given here: http://wiki.multimedia.cx/index.php?title=FFmpeg_codec_howto
According to that article I need to define a structure in my decoder_name.c file. The example structure is shown below:
AVCodec sample_decoder =
{
    .name = "sample",
    .type = AVMEDIA_TYPE_VIDEO,
    .id = AV_CODEC_ID_SAMPLE,
    // .priv_data_size = sizeof(COOKContext),
    .init = sample_decode_init,
    .close = sample_decode_close,
    .decode = sample_decode_frame,
};

Where,
.name -> specifies the short name of my decoder.
.type -> is used to specify that it is a video decoder.
.id -> is a unique id that I'm assigning to my video decoder.
.init -> is a function pointer to the function in my decoder code that performs decoder-related initialization.
.decode -> is a function pointer to the function in my decoder code that decodes a single frame, given the input data (elementary stream).
.close -> is a function pointer to the function in my decoder that frees all allocated memory, i.e. the memory allocated in init.

However, according to the above-mentioned article there is another field called .priv_data_size which holds the size of some context. Is it compulsory to have this field? According to the article I need not define all the parameters of the AVCodec structure, and I do not have any such context for my decoder. Yet when I go through the other decoders available in libavcodec of ffmpeg, I find that every one of them defines it. Will my decoder work if I do not specify it? I'm unable to proceed because of this; please provide some guidance regarding the same. Thanks in advance.
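For what it is worth, here is a sketch of how .priv_data_size is normally used, against the circa-2013 decode API the question quotes; SampleContext and the sample_* names are placeholders, not real FFmpeg identifiers. libavcodec allocates and zeroes that many bytes per codec instance and exposes them as avctx->priv_data, so a decoder with no per-instance state can simply leave the field out of the designated initializer (it then defaults to 0 and priv_data stays NULL).

/* Illustrative sketch only, targeting the old (2013-era) libavcodec
 * decode API; the "sample" names are placeholders from the wiki example. */
#include "avcodec.h"

typedef struct SampleContext {
    int frame_count; /* whatever per-instance state the decoder keeps */
} SampleContext;

static av_cold int sample_decode_init(AVCodecContext *avctx)
{
    /* libavcodec has already allocated and zeroed priv_data_size bytes. */
    SampleContext *s = avctx->priv_data;
    s->frame_count = 0;
    return 0;
}

static int sample_decode_frame(AVCodecContext *avctx, void *data,
                               int *got_frame, AVPacket *avpkt)
{
    SampleContext *s = avctx->priv_data;
    s->frame_count++;
    /* ... decode avpkt->data into the output frame here ... */
    *got_frame = 0;
    return avpkt->size;
}

static av_cold int sample_decode_close(AVCodecContext *avctx)
{
    /* priv_data itself is freed by libavcodec; only free what init allocated. */
    return 0;
}

AVCodec ff_sample_decoder = {
    .name           = "sample",
    .type           = AVMEDIA_TYPE_VIDEO,
    .id             = AV_CODEC_ID_SAMPLE,    /* placeholder id */
    .priv_data_size = sizeof(SampleContext), /* omit or set to 0 if no context is needed */
    .init           = sample_decode_init,
    .close          = sample_decode_close,
    .decode         = sample_decode_frame,
};

Most decoders in libavcodec define such a context because they need somewhere to keep tables, reference frames and similar state; leaving .priv_data_size out is fine as long as none of the callbacks dereferences priv_data.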
-
Revision 99981: On Opera 40 on Windows 10, this caused an "Uncaught ReferenceError: ...
20 October 2016, by real3t@…
Log: On Opera 40 on Windows 10, this caused an "Uncaught ReferenceError: tableau_sites is not defined" that rendered the calculate buttons useless.