
Other articles (49)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
images: png, gif, jpg, bmp and more
audio: MP3, Ogg, Wav and more
video: AVI, MP4, OGV, mpg, mov, wmv and more
text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including:
critique of existing features and functions
articles contributed by developers, administrators, content producers and editors
screenshots to illustrate the above
translations of existing documentation into other languages
To contribute, register for the project users’ mailing (...)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
the browser you are using, including the exact version
as precise an explanation of the problem as possible
if possible, the steps taken that led to the problem
a link to the site/page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
On other sites (5097)
-
Revision 99981: On Opera 40 under Windows 10, this triggered an "Uncaught ReferenceError: ...
20 October 2016, by real3t@… — Log
On Opera 40 under Windows 10, this triggered an "Uncaught ReferenceError: tableau_sites is not defined" that made the calculate buttons useless.
-
FFMPEG: Is defining a context for a codec compulsory?
28 November 2013, by sam
I have a decoder and I'm trying to integrate it into the ffmpeg framework. I'm following the HOWTO given here: http://wiki.multimedia.cx/index.php?title=FFmpeg_codec_howto
According to that article, I need to define a structure in my decoder_name.c file. The example structure is shown below:
AVCodec sample_decoder =
{
    .name   = "sample",
    .type   = AVMEDIA_TYPE_VIDEO,
    .id     = AV_CODEC_ID_SAMPLE,
    // .priv_data_size = sizeof(COOKContext),
    .init   = sample_decode_init,
    .close  = sample_decode_close,
    .decode = sample_decode_frame,
};
Where:
.name -> specifies the short name of my decoder.
.type -> specifies that it is a video decoder.
.id -> a unique id that I'm assigning to my video decoder.
.init -> a function pointer to the function in my decoder code that performs decoder-related initialization.
.decode -> a function pointer to the function in my decoder code that decodes a single frame, given the input data (elementary stream).
.close -> a function pointer to the function in my decoder that frees all allocated memory, i.e. the memory allocated in init.
However, according to the above-mentioned article there is another field, .priv_data_size, which holds the size of some context. Is it compulsory to have this field? According to the article I need not define all the members of the AVCodec structure, and I do not have any such context for my decoder. Yet when I go through the code of the other decoders available in libavcodec, I find that every one of them defines it. Will my decoder work if I do not specify it? I'm unable to proceed because of this; please provide some guidance regarding the same.
Thanks in advance.
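
For what it's worth, a minimal sketch of how .priv_data_size is typically used, modelled on libavcodec decoders of that era; SampleContext and the sample_* names are hypothetical placeholders, not real FFmpeg symbols. When the field is set, libavcodec allocates and zeroes a block of that size before init runs and exposes it as avctx->priv_data:

#include "avcodec.h"
#include "libavutil/mem.h"

#define WORK_BUF_SIZE 1024      /* arbitrary scratch size for the sketch */

/* Hypothetical per-instance state; only needed if the decoder keeps
 * data between calls. */
typedef struct SampleContext {
    int      frame_count;
    uint8_t *work_buffer;
} SampleContext;

static int sample_decode_init(AVCodecContext *avctx)
{
    /* Already allocated and zeroed by libavcodec, because
     * .priv_data_size is set below. */
    SampleContext *s = avctx->priv_data;

    s->work_buffer = av_malloc(WORK_BUF_SIZE);
    if (!s->work_buffer)
        return AVERROR(ENOMEM);
    return 0;
}

static int sample_decode_frame(AVCodecContext *avctx, void *data,
                               int *got_frame, AVPacket *avpkt)
{
    SampleContext *s = avctx->priv_data;

    s->frame_count++;
    /* ... decode avpkt->data into the output frame here ... */
    *got_frame = 0;               /* no frame produced in this sketch */
    return avpkt->size;           /* bytes of input consumed */
}

static int sample_decode_close(AVCodecContext *avctx)
{
    SampleContext *s = avctx->priv_data;  /* the struct itself is freed
                                             by libavcodec, not here */
    av_freep(&s->work_buffer);
    return 0;
}

AVCodec sample_decoder = {
    .name           = "sample",
    .type           = AVMEDIA_TYPE_VIDEO,
    .id             = AV_CODEC_ID_SAMPLE,      /* placeholder id */
    .priv_data_size = sizeof(SampleContext),   /* omit if stateless */
    .init           = sample_decode_init,
    .close          = sample_decode_close,
    .decode         = sample_decode_frame,
};

If a decoder keeps no per-instance state at all, leaving .priv_data_size unset (zero) simply leaves avctx->priv_data NULL, so the field itself is not compulsory.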
-
FFMPEG or FFPLAY, catch FFT signal in real time as floats
25 April 2021, by NVRM
Looking to extract a real-time FFT snapshot of waveform data with ffplay, with a view to creating animations.

This is exactly what I am looking to catch, but this demo uses JavaScript in a browser. (Source: own post)

const audio = document.getElementById('music');
audio.load();
audio.play();

const ctx = new AudioContext();
const audioSrc = ctx.createMediaElementSource(audio);
const analyser = ctx.createAnalyser();

audioSrc.connect(analyser);
analyser.connect(ctx.destination);

analyser.fftSize = 256;
const bufferLength = analyser.frequencyBinCount;
const frequencyData = new Uint8Array(bufferLength);

setInterval(() => {
 analyser.getByteFrequencyData(frequencyData);
 console.log(frequencyData);
}, 1000);


<audio id="music" src="http://strm112.1.fm/reggae_mobile_mp3" crossorigin="use-credentials" controls></audio>

I tried many variations on the method posted at https://trac.ffmpeg.org/wiki/Waveform.

The problem is that the output format for the FFT is PCM (Pulse Code Modulation), and it is not real time.

Generally speaking, is there a simple way to retrieve this data while the sound is playing? Something like:

ffplay -fft file.mp3 > fft.json
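
As far as I know ffplay has no such -fft option. One workaround, sketched below under stated assumptions rather than as a definitive answer, is to have ffmpeg decode the file to raw PCM on stdout and compute the spectrum in your own process. The -re, -f f32le, -ac and -ar flags are standard ffmpeg options; file.mp3 is a placeholder; and the naive O(N²) DFT is only there to keep the example self-contained (real code would use FFTW or similar):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N    256      /* analysis window, like analyser.fftSize above */
#define RATE 44100    /* sample rate requested from ffmpeg */

int main(void)
{
    /* -re throttles decoding to the input's native rate, so the
       snapshots below arrive roughly in real time. */
    FILE *pcm = popen("ffmpeg -v quiet -re -i file.mp3 "
                      "-f f32le -ac 1 -ar 44100 -", "r");
    if (!pcm) { perror("popen"); return 1; }

    float buf[N];     /* f32le matches a little-endian host's float */
    long  window = 0;

    while (fread(buf, sizeof(float), N, pcm) == N) {
        if (++window % (RATE / N) != 0)
            continue;            /* keep ~one snapshot per second */

        /* Naive DFT magnitudes for bins 0..N/2-1. */
        for (int k = 0; k < N / 2; k++) {
            double re = 0.0, im = 0.0;
            for (int n = 0; n < N; n++) {
                double a = 2.0 * M_PI * k * n / N;
                re += buf[n] * cos(a);
                im -= buf[n] * sin(a);
            }
            printf("%.3f%c", sqrt(re * re + im * im),
                   k == N / 2 - 1 ? '\n' : ' ');
        }
        fflush(stdout);
    }
    pclose(pcm);
    return 0;
}

Built with cc dft.c -lm, each printed row is one spectrum snapshot and can be redirected to a file or piped into whatever drives the animation.
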
Using C, same idea: Apply FFT on pcm data and convert to a spectrogram

FFMPEG waveform filter documentation