
Media (2)
-
Core Media Video
4 April 2013
Updated: June 2013
Language: French
Type: Video
-
Video of a bee in portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
Other articles (111)
-
Sites built with MediaSPIP
2 May 2011. This page presents a few of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
Automated installation script of MediaSPIP
25 April 2011. To overcome the difficulties mainly due to the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server with a compatible Linux distribution.
You must have SSH access to your server and a root account to use it, since the script installs the dependencies. Contact your provider if you do not have them.
The documentation of the use of this installation script is available here.
The code of this (...)
-
Permissions overridden by plugins
27 April 2010. MediaSPIP core:
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
On other sites (15296)
-
Revision 99981: On Opera 40 under Windows 10, it triggered an "Uncaught ReferenceError: ...
20 October 2016, by real3t@… (Log). On Opera 40 under Windows 10, it triggered an "Uncaught ReferenceError: tableau_sites is not defined" that made the calculate buttons useless.
-
FFMPEG: Is defining a context for a codec compulsory?
28 November 2013, by sam. I have decoder code and I am trying to integrate it into the ffmpeg framework. I am referring to the HOWTO given here: http://wiki.multimedia.cx/index.php?title=FFmpeg_codec_howto
According to that article, I need to define a structure in my decoder_name.c file. The example structure is shown below:
AVCodec sample_decoder =
{
    .name   = "sample",
    .type   = AVCODEC_TYPE_VIDEO,
    .id     = AVCODEC_ID_SAMPLE,
    // .priv_data_size = sizeof(COOKContext),
    .init   = sample_decode_init,
    .close  = sample_decode_close,
    .decode = sample_decode_frame,
};

Where:
.name -> specifies the short name of my decoder.
.type -> specifies that it is a video decoder.
.id -> is a unique id that I am assigning to my video decoder.
.init -> is a function pointer to the function in my decoder code that performs decoder-related initializations.
.decode -> is a function pointer to the function in my decoder code that decodes a single frame, given the input data (elementary stream).
.close -> is a function pointer to the function in my decoder that frees all allocated memory, i.e. the memory allocated in init.
However, according to the above-mentioned article there is another field, .priv_data_size, which holds the size of some context. Is it compulsory to have this .priv_data_size field? According to the article I need not define all the parameters of the AVCodec structure, and I do not have any such context for my decoder. Yet when I go through the code of the other decoders available in libavcodec in ffmpeg, I find that every one of them defines it. Will my decoder work if I do not specify it? I am unable to proceed because of this; please provide some guidance regarding the same. Thanks in advance.
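For illustration only (not part of the question): a minimal sketch of how .priv_data_size is typically used, keeping the old-style field names quoted from the HOWTO above. SampleContext is a hypothetical state struct, and sample_decode_close/sample_decode_frame are assumed to exist in the question's decoder_name.c. libavcodec allocates .priv_data_size bytes and exposes them as avctx->priv_data before init is called, so a decoder that keeps no per-stream state of its own can leave the field unset (designated initializers zero it).

/* Sketch only: a decoder that keeps its own per-stream state.
 * SampleContext is hypothetical; sample_decode_close and
 * sample_decode_frame are assumed to be defined elsewhere in
 * the same decoder_name.c. */
#include "avcodec.h"   /* libavcodec-internal header */

typedef struct SampleContext {
    int frames_decoded;                       /* example per-stream state */
} SampleContext;

static int sample_decode_init(AVCodecContext *avctx)
{
    /* libavcodec has already allocated .priv_data_size bytes here */
    SampleContext *s = avctx->priv_data;
    s->frames_decoded = 0;
    return 0;
}

AVCodec sample_decoder = {
    .name           = "sample",
    .type           = AVCODEC_TYPE_VIDEO,     /* names as quoted above */
    .id             = AVCODEC_ID_SAMPLE,
    .priv_data_size = sizeof(SampleContext),  /* drop this line (or use 0)
                                                 if the decoder is stateless */
    .init           = sample_decode_init,
    .close          = sample_decode_close,
    .decode         = sample_decode_frame,
};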
-
FFMPEG or FFPLAY, catch FFT signal in real time as floats
25 April 2021, by NVRM. Looking to extract, in real time, an FFT snapshot of the waveform data with ffplay, with a view to creating animations.

This is exactly what I am looking to catch, but this demo uses JavaScript in a browser. (Source: own post)

// Grab the <audio> element and start playback
const audio = document.getElementById('music');
audio.load();
audio.play();

// Route the element through an AnalyserNode, then on to the speakers
const ctx = new AudioContext();
const audioSrc = ctx.createMediaElementSource(audio);
const analyser = ctx.createAnalyser();

audioSrc.connect(analyser);
analyser.connect(ctx.destination);

// A 256-point FFT yields frequencyBinCount = 128 frequency bins
analyser.fftSize = 256;
const bufferLength = analyser.frequencyBinCount;
const frequencyData = new Uint8Array(bufferLength);

// Dump the byte-scaled magnitude spectrum once per second
setInterval(() => {
  analyser.getByteFrequencyData(frequencyData);
  console.log(frequencyData);
}, 1000);


<audio id="music" src="http://strm112.1.fm/reggae_mobile_mp3" crossorigin="use-URL-credentials" controls="true"></audio>

I tried many variations around the method posted on https://trac.ffmpeg.org/wiki/Waveform.

The problem is the output format for the FFT is PCM (Pulse Code Modulation), and not real time.

More generally, is there a simple way to retrieve this data while the sound is playing? Something like:

ffplay -fft file.mp3 > fft.json

Using C, same stuff: Apply FFT on pcm data and convert to a spectrogram


FFMPEG waveform filter documentation
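For illustration only (there is no -fft option in ffplay or ffmpeg): one way to approximate the JavaScript demo from the command line is to decode to raw mono float PCM with ffmpeg and pipe it into a small analysis program. The block size, sample rate and the naive DFT below are illustrative choices; a real implementation would use FFTW, KissFFT or libavutil's FFT instead of the O(N²) loop.

/* Sketch: read float PCM from stdin and print a coarse magnitude
 * spectrum per 1024-sample block, while the data streams through.
 *
 * Assumed pipeline:
 *   ffmpeg -i file.mp3 -f f32le -ac 1 -ar 44100 - | ./fftdump
 * Add -re before -i to pace decoding at roughly playback speed.
 */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N    1024   /* samples per analysis block (~23 ms at 44.1 kHz) */
#define BINS 32     /* print only the first 32 frequency bins */

int main(void)
{
    float block[N];

    while (fread(block, sizeof(float), N, stdin) == N) {
        for (int k = 0; k < BINS; k++) {
            double re = 0.0, im = 0.0;
            for (int n = 0; n < N; n++) {
                double phi = 2.0 * M_PI * k * n / N;
                re += block[n] * cos(phi);
                im -= block[n] * sin(phi);
            }
            printf("%6.1f ", sqrt(re * re + im * im));  /* bin magnitude */
        }
        putchar('\n');
        fflush(stdout);  /* emit each spectrum as soon as it is computed */
    }
    return 0;
}

Compile with cc fftdump.c -o fftdump -lm and run it at the end of the ffmpeg pipeline shown in the header comment.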