
Media (3)


Other articles (57)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code and other data (OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth) and (...)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (7440)

  • FFMPEG or FFPLAY, catch FFT signal in real time as floats

    25 April 2021, by NVRM

    Looking to extract, in real time, an FFT snapshot of waveform data with ffplay, with a view to creating animations.

    This is exactly what I am looking to catch, but this demo uses JavaScript in a browser (source: own post).

    // Grab the <audio> element and start playback
    const audio = document.getElementById('music');
    audio.load();
    audio.play();

    // Route the element's output through a Web Audio AnalyserNode
    const ctx = new AudioContext();
    const audioSrc = ctx.createMediaElementSource(audio);
    const analyser = ctx.createAnalyser();

    audioSrc.connect(analyser);
    analyser.connect(ctx.destination);

    // A 256-point FFT gives frequencyBinCount = 128 bins
    analyser.fftSize = 256;
    const bufferLength = analyser.frequencyBinCount;
    const frequencyData = new Uint8Array(bufferLength);

    // Poll the current magnitude spectrum once per second
    setInterval(() => {
       analyser.getByteFrequencyData(frequencyData);
       console.log(frequencyData);
    }, 1000);

    <audio src="http://strm112.1.fm/reggae_mobile_mp3" crossorigin="use-URL-credentials" controls="true"></audio>

    I tried many variations on the method posted at https://trac.ffmpeg.org/wiki/Waveform.


    The problem is that the output format for the FFT is PCM (Pulse Code Modulation), and it is not real time.


    In a generic way, is there a simple way to retrieve this data while the sound is playing?


    ffplay -fft file.mp3 > fft.json


    Using C, same idea: Apply FFT on pcm data and convert to a spectrogram


    FFMPEG waveform filter documentation

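    As far as I can tell, neither ffplay nor ffmpeg exposes a switch that dumps FFT bins as floats; the spectrum filters (showfreqs, showspectrum) render video instead. One workaround is to let ffmpeg decode to raw PCM on stdout and compute the spectrum in a small external program while the stream plays. The sketch below is not from the original post; it assumes a 44100 Hz mono float stream, a 256-sample block, and a naive DFT in place of a real FFT library:

    /* Sketch only: read mono 32-bit float PCM that ffmpeg writes to stdout
     * and print a coarse magnitude spectrum for each 256-sample block.
     * Build: cc -O2 fft_tap.c -o fft_tap -lm
     * Run:   ffmpeg -i file.mp3 -f f32le -ac 1 -ar 44100 - 2>/dev/null | ./fft_tap
     */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N    256        /* samples per analysis block */
    #define BINS (N / 2)    /* usable frequency bins      */

    int main(void)
    {
        float block[N];

        while (fread(block, sizeof(float), N, stdin) == N) {
            for (int k = 0; k < BINS; k++) {
                double re = 0.0, im = 0.0;
                /* naive DFT of bin k; swap in FFTW or KISS FFT for speed */
                for (int n = 0; n < N; n++) {
                    double a = 2.0 * M_PI * k * n / N;
                    re += block[n] * cos(a);
                    im -= block[n] * sin(a);
                }
                printf("%.3f%c", sqrt(re * re + im * im),
                       k == BINS - 1 ? '\n' : ' ');
            }
            fflush(stdout);
        }
        return 0;
    }

    Adding -re before -i on the ffmpeg command line throttles decoding to the input's native rate, so the printed spectra stay roughly in step with playback.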

FFMPEG: Is defining a context for a codec compulsory?

    28 November 2013, by sam

    I have a decoder, and I'm trying to integrate it into the ffmpeg framework.

    I'm referring to the HOWTO given here: http://wiki.multimedia.cx/index.php?title=FFmpeg_codec_howto

    According to that article, I need to define a structure in my decoder_name.c file.

    The example structure is shown below:

    AVCodec sample_decoder =
    {
       .name           = "sample",
       .type           = AVCODEC_TYPE_VIDEO,
       .id             = AVCODEC_ID_SAMPLE,
      // .priv_data_size = sizeof(COOKContext),
       .init           = sample_decode_init,
       .close          = sample_decode_close,
       .decode         = sample_decode_frame,
    };

    Where,

    .name -> specifies the short name of my decoder.

    .type -> is used to specify that it is a video decoder.

    .id -> is a unique id that I'm assigning to my video decoder.

    .init -> is a function pointer to the function in my decoder code that performs decoder-related initializations.

    .decode -> is a function pointer to the function in my decoder code that decodes a single frame, given the input data (elementary stream).

    .close -> is a function pointer to the function in my decoder that frees all allocated memory, i.e. the memory allocated in init.

    However, my doubt is that, according to the above-mentioned article, there is another field called .priv_data_size which holds the size of some context.

    Is it compulsory to have this field .priv_data_size? According to the above article, I need not define all the parameters of the structure AVCodec, and furthermore I do not have any such context for my decoder.

    However, when I go through the code of the other decoders available in ffmpeg's libavcodec, I find that every decoder defines this. Will my decoder work if I do not specify it? I'm unable to proceed because of this. Please provide some guidance regarding the same.

    —Thanks in advance.
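
    Not an authoritative answer, but a minimal sketch of how .priv_data_size is usually wired up may clarify things: the field tells libavcodec how many bytes of per-instance state to allocate before init() is called, and the callbacks reach that memory through avctx->priv_data. My understanding is that it can stay at 0 (or be left out) when the decoder keeps no state between calls. Every name below except the FFmpeg API itself is made up:

    /* Sketch against the old-style AVCodec registration quoted above;
     * assumes it lives inside libavcodec alongside the other decoders. */
    #include <stdint.h>
    #include "libavutil/mem.h"   /* av_malloc / av_freep */
    #include "avcodec.h"

    typedef struct SampleContext {
        int      frame_count;    /* example of per-instance state        */
        uint8_t *work_buffer;    /* scratch memory reused across frames  */
    } SampleContext;

    /* decode callback assumed to be defined elsewhere in the same file */
    static int sample_decode_frame(AVCodecContext *avctx, void *data,
                                   int *got_frame, AVPacket *avpkt);

    static int sample_decode_init(AVCodecContext *avctx)
    {
        /* libavcodec has already allocated priv_data_size bytes for us */
        SampleContext *s = avctx->priv_data;
        s->frame_count = 0;
        s->work_buffer = av_malloc(1024);
        return s->work_buffer ? 0 : AVERROR(ENOMEM);
    }

    static int sample_decode_close(AVCodecContext *avctx)
    {
        SampleContext *s = avctx->priv_data;
        av_freep(&s->work_buffer);  /* priv_data itself is freed by libavcodec */
        return 0;
    }

    AVCodec sample_decoder =
    {
        .name           = "sample",
        .type           = AVMEDIA_TYPE_VIDEO,
        .id             = AV_CODEC_ID_SAMPLE,       /* placeholder id           */
        .priv_data_size = sizeof(SampleContext),    /* or 0 if no state is kept */
        .init           = sample_decode_init,
        .close          = sample_decode_close,
        .decode         = sample_decode_frame,
    };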

  • Revision 99981: On Opera 40 under Windows 10, this caused an "Uncaught ReferenceError: ...

    20 October 2016, by real3t@… — Log

    On Opera 40 under Windows 10, this caused an "Uncaught ReferenceError: tableau_sites is not defined" which made the calculate buttons useless.