
Other articles (96)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, compared to the channel sites, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared (farm) instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (5192)

  • Revision 99981: On Opera 40 under Windows 10, this triggered an "Uncaught ReferenceError: ...

    20 October 2016, by real3t@… — Log

    On Opera 40 under Windows 10, this triggered an "Uncaught ReferenceError: tableau_sites is not defined" which made the calculer buttons useless.

  • FFMPEG: Is defining a context for a codec compulsory?

    28 November 2013, by sam

    I have a decoder. I'm trying to integrate it into the FFmpeg framework.

    I'm referring to the HOWTO given here: http://wiki.multimedia.cx/index.php?title=FFmpeg_codec_howto

    According to that article, I need to define a structure in my decoder_name.c file.

    The example structure is shown below:

    AVCodec sample_decoder =
    {
       .name           = "sample",
       .type           = AVMEDIA_TYPE_VIDEO,
       .id             = AV_CODEC_ID_SAMPLE,
       // .priv_data_size = sizeof(COOKContext),
       .init           = sample_decode_init,
       .close          = sample_decode_close,
       .decode         = sample_decode_frame,
    };

    Where:

    .name -> specifies the short name of my decoder.

    .type -> is used to specify that it is a video decoder.

    .id -> is a unique id that I'm assigning to my video decoder.

    .init -> is a function pointer to the function in my decoder code that performs decoder-related initializations.

    .decode -> is a function pointer to the function in my decoder code that decodes a single frame, given the input data (elementary stream).

    .close -> is a function pointer to the function in my decoder that frees all allocated memory, i.e. the memory allocated in init.

    However, according to the above-mentioned article there is another field, called .priv_data_size, which holds the size of some context.

    Is it compulsory to have this .priv_data_size field? According to the article I need not define all the parameters of the AVCodec structure, and I do not have any such context for my decoder.

    However, when I go through the code of the other decoders available in libavcodec of ffmpeg, I find that every one of them defines it. Will my decoder work if I do not specify this? I'm unable to proceed because of this. Please provide some guidance regarding the same.

    —Thanks in advance.
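
    For context, here is a minimal sketch (not part of the original question) of how .priv_data_size is typically paired with a decoder-private context struct in the 2013-era AVCodec API. SampleContext, AV_CODEC_ID_SAMPLE and the stub callbacks are placeholders, not real FFmpeg identifiers:

    #include <libavcodec/avcodec.h>

    // Hypothetical private state carried between calls for this decoder.
    typedef struct SampleContext {
        int frame_count;
    } SampleContext;

    static int sample_decode_init(AVCodecContext *avctx)
    {
        // libavcodec allocates and zeroes priv_data_size bytes before init,
        // so avctx->priv_data already points at a SampleContext here.
        SampleContext *s = avctx->priv_data;
        s->frame_count = 0;
        return 0;
    }

    static int sample_decode_close(AVCodecContext *avctx)
    {
        // Nothing to free: the private context itself is released by libavcodec.
        return 0;
    }

    static int sample_decode_frame(AVCodecContext *avctx, void *data,
                                   int *got_frame, AVPacket *avpkt)
    {
        SampleContext *s = avctx->priv_data;
        s->frame_count++;       // state survives between decode calls
        *got_frame = 0;         // stub: no frame produced
        return avpkt->size;     // report the whole packet as consumed
    }

    AVCodec sample_decoder =
    {
        .name           = "sample",
        .type           = AVMEDIA_TYPE_VIDEO,
        .id             = AV_CODEC_ID_SAMPLE,      // placeholder codec id
        .priv_data_size = sizeof(SampleContext),   // 0 if no private state is kept
        .init           = sample_decode_init,
        .close          = sample_decode_close,
        .decode         = sample_decode_frame,
    };

    If the decoder keeps no state between calls, leaving .priv_data_size at its zero default is generally fine: the field only tells libavcodec how many bytes to allocate for avctx->priv_data before init runs.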

  • FFMPEG or FFPLAY, catch FFT signal in real time as floats

    25 April 2021, by NVRM

    Looking to extract, in real time, an FFT snapshot of waveform data with ffplay, with a view to creating animations.

    


    This is exactly what I am looking to catch, but this demo is using JavaScript in a browser. (Source: own post)

    


    

    

    // Play the <audio> element and route it through a Web Audio AnalyserNode.
    const audio = document.getElementById('music');
    audio.load();
    audio.play();

    const ctx = new AudioContext();
    const audioSrc = ctx.createMediaElementSource(audio);
    const analyser = ctx.createAnalyser();

    audioSrc.connect(analyser);
    analyser.connect(ctx.destination);

    // A 256-point FFT gives frequencyBinCount = 128 magnitude bins.
    analyser.fftSize = 256;
    const bufferLength = analyser.frequencyBinCount;
    const frequencyData = new Uint8Array(bufferLength);

    // Dump the current magnitude spectrum (0-255 per bin) once per second.
    setInterval(() => {
       analyser.getByteFrequencyData(frequencyData);
       console.log(frequencyData);
    }, 1000);

    


    <audio id="music" src="http://strm112.1.fm/reggae_mobile_mp3" crossorigin="use-credentials" controls></audio>


    I tried many variations around the method posted on https://trac.ffmpeg.org/wiki/Waveform .


    The problem is that the output format for the FFT is PCM (Pulse Code Modulation), and it is not real time.


    In a generic way, is there a simple way to retrieve this data while the sound is playing?


    ffplay -fft file.mp3 > fft.json


    Using C, same stuff: Apply FFT on pcm data and convert to a spectrogram


    FFMPEG waveform filter documentation
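
    Not an answer from the original thread, just a sketch of the kind of pipeline the question is after, assuming ffmpeg is used to stream mono 32-bit float PCM to stdout and a small C program (the file name, block size and output format below are illustrative) computes a magnitude spectrum per block while the stream plays:

    // spectrum.c: read mono f32le PCM from stdin and print a coarse magnitude
    // spectrum per block, e.g. fed by:
    //   ffmpeg -i file.mp3 -f f32le -ac 1 -ar 44100 - | ./spectrum
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N    256        /* samples per analysis block           */
    #define RATE 44100.0    /* must match the -ar value given above */

    int main(void)
    {
        float block[N];

        /* Read one block at a time until the pipe closes. */
        while (fread(block, sizeof(float), N, stdin) == N) {
            /* Naive DFT magnitude for the first N/2 bins (real input). */
            for (int k = 0; k < N / 2; k++) {
                double re = 0.0, im = 0.0;
                for (int n = 0; n < N; n++) {
                    double phase = 2.0 * M_PI * k * n / N;
                    re += block[n] * cos(phase);
                    im -= block[n] * sin(phase);
                }
                printf("%.1f Hz: %.4f\n", k * RATE / N, sqrt(re * re + im * im) / N);
            }
            printf("----\n");   /* block separator */
        }
        return 0;
    }

    Compiled with cc spectrum.c -o spectrum -lm and run as ffmpeg -i file.mp3 -f f32le -ac 1 -ar 44100 - | ./spectrum, this prints float magnitudes as the audio streams; a production version would replace the O(N^2) loop with a real FFT library and throttle the output instead of printing every block.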
