
Other articles (34)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the template; a page for the configuration of the site's home page; a page for the configuration of the sectors;
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their specific display and features (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

On other sites (6687)

  • FFMPEG: Is defining a context for a codec compulsory?

    28 November 2013, by sam

    I have a decoder, and I'm trying to integrate it into the ffmpeg framework.

    I'm referring to the HOWTO given here: http://wiki.multimedia.cx/index.php?title=FFmpeg_codec_howto

    According to that article, I need to define a structure in my decoder_name.c file.

    The example structure is shown below:

    AVCodec sample_decoder =
    {
       .name           = "sample",
       .type           = AVMEDIA_TYPE_VIDEO,
       .id             = AV_CODEC_ID_SAMPLE,
       // .priv_data_size = sizeof(COOKContext),
       .init           = sample_decode_init,
       .close          = sample_decode_close,
       .decode         = sample_decode_frame,
    };

    Where,

    .name -> specifies the short name of my decoder.

    .type -> is used to specify that it is a video decoder.

    .id -> is a unique ID that I'm assigning to my video decoder.

    .init -> is a function pointer to the function in my decoder code that performs decoder-related initialization.

    .decode -> is a function pointer to the function in my decoder code that decodes a single frame, given the input data (elementary stream).

    .close -> is a function pointer to the function in my decoder that frees all allocated memory, i.e. the memory allocated in init.

    However, my doubt is that, according to the above-mentioned article, there is another field called .priv_data_size, which holds the size of some context.

    Is it compulsory to have this .priv_data_size field? According to the above article, I need not define all the parameters of the AVCodec structure, and I do not have any such context for my decoder.

    However, when I go through the code of the other decoders in ffmpeg's libavcodec, I find that every decoder defines it. Will my decoder work if I do not specify it? I'm unable to proceed because of this; please provide some guidance regarding the same.

    —Thanks in advance.
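
    For reference: priv_data_size is only needed when the decoder keeps per-instance state. libavcodec uses it to allocate avctx->priv_data before calling init, so a decoder with no context of its own can leave the field unset (zero). Below is a minimal sketch of the stateful case, assuming an FFmpeg tree of that era; the context type, the codec ID and the function bodies are hypothetical and only illustrate how priv_data_size ties the pieces together.

    #include "avcodec.h"   /* inside the libavcodec source tree */

    /* Hypothetical per-instance state; the struct name and fields are made up. */
    typedef struct SampleContext {
        int frame_count;
    } SampleContext;

    static av_cold int sample_decode_init(AVCodecContext *avctx)
    {
        SampleContext *s = avctx->priv_data; /* allocated (and zeroed) by libavcodec
                                                because priv_data_size is non-zero */
        s->frame_count = 0;
        return 0;
    }

    static av_cold int sample_decode_close(AVCodecContext *avctx)
    {
        /* free anything allocated in init; priv_data itself is freed by libavcodec */
        return 0;
    }

    static int sample_decode_frame(AVCodecContext *avctx, void *data,
                                   int *got_frame, AVPacket *avpkt)
    {
        SampleContext *s = avctx->priv_data;
        s->frame_count++;
        *got_frame = 0;           /* a real decoder would fill an AVFrame here */
        return avpkt->size;       /* report all input bytes as consumed */
    }

    AVCodec sample_decoder = {
        .name           = "sample",
        .type           = AVMEDIA_TYPE_VIDEO,
        .id             = AV_CODEC_ID_SAMPLE,    /* hypothetical codec ID */
        .priv_data_size = sizeof(SampleContext), /* omit or set to 0 if the decoder keeps no state */
        .init           = sample_decode_init,
        .close          = sample_decode_close,
        .decode         = sample_decode_frame,
    };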

  • FFMPEG or FFPLAY, catch FFT signal in real time as floats

    25 April 2021, by NVRM

    I'm looking to extract a real-time FFT snapshot of waveform data with ffplay, with a view to creating animations.

    


    This is exactly what I am looking to catch, but this demo uses JavaScript in a browser (source: my own post):
    const audio = document.getElementById('music');
audio.load();
audio.play();

const ctx = new AudioContext();
const audioSrc = ctx.createMediaElementSource(audio);
const analyser = ctx.createAnalyser();

audioSrc.connect(analyser);
analyser.connect(ctx.destination);

analyser.fftSize = 256;
const bufferLength = analyser.frequencyBinCount;
const frequencyData = new Uint8Array(bufferLength);

setInterval(() => {
   analyser.getByteFrequencyData(frequencyData);
   console.log(frequencyData);
}, 1000);

    


    <audio src="http://strm112.1.fm/reggae_mobile_mp3" crossorigin="use-credentials" controls></audio>


    I tried many variations around the method posted on https://trac.ffmpeg.org/wiki/Waveform.


    The problem is that the output format there is PCM (Pulse Code Modulation), not FFT data, and it is not real time.


    In a generic way, is there a simple way to retrieve this data while the sound is playing? Something like:


    ffplay -fft file.mp3 > fft.json


    Using C, the same idea: Apply FFT on pcm data and convert to a spectrogram
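
    For reference, since ffplay (as far as I know) has no option to dump raw FFT data, one way to get near-real-time spectra outside the browser is to decode to raw PCM with ffmpeg and pipe the samples into a small C program that runs the FFT itself. The sketch below is illustrative only (the pipeline, program name and window size are assumptions); it uses the avfft API from the libavcodec of that era, which newer FFmpeg releases replace with libavutil/tx.h.

    /* fftdump.c - print FFT magnitudes for successive windows of mono float PCM
     * read from stdin, e.g. fed by:
     *
     *     ffmpeg -i file.mp3 -f f32le -ac 1 -ar 44100 - | ./fftdump
     *
     * Build (assuming FFmpeg development headers are installed):
     *     gcc fftdump.c -o fftdump -lavcodec -lavutil -lm
     */
    #include <stdio.h>
    #include <math.h>
    #include <libavcodec/avfft.h>

    #define FFT_BITS 8                 /* 1 << 8 = 256 samples per window, as in the JS demo */
    #define FFT_SIZE (1 << FFT_BITS)

    int main(void)
    {
        float window[FFT_SIZE];
        RDFTContext *rdft = av_rdft_init(FFT_BITS, DFT_R2C);  /* real-to-complex FFT */
        if (!rdft)
            return 1;

        /* Read one window of samples at a time; each iteration emits one spectrum,
         * so the output keeps up with the audio as long as stdin does. */
        while (fread(window, sizeof(float), FFT_SIZE, stdin) == FFT_SIZE) {
            av_rdft_calc(rdft, window);   /* in-place: window now holds the spectrum */

            /* Bins 1..N/2-1 are stored as interleaved (re, im) pairs. */
            for (int k = 1; k < FFT_SIZE / 2; k++) {
                float re = window[2 * k];
                float im = window[2 * k + 1];
                printf("%.3f ", sqrtf(re * re + im * im));
            }
            printf("\n");
            fflush(stdout);               /* make each spectrum available immediately */
        }

        av_rdft_end(rdft);
        return 0;
    }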


    FFMPEG waveform filter documentation


  • How do I know ffmpeg-php is installed?

    18 July 2014, by Rob Avery IV

    I just followed the instructions from this link on how to install ffmpeg-php on my dedicated server: http://www.ndchost.com/wiki/server-administration/install-ffmpeg

    At the bottom, it says to run the command php -i|grep ffmpeg, and if it outputs the following lines then it is installed:

    ffmpegffmpeg support (ffmpeg-php) => enabled
    ffmpeg-php version => 0.6.0
    ffmpeg.allow_persistent => 0 => 0

    When I run it, it gives me this:

    ffmpeg
    ffmpeg-php version => 0.6.0-svn
    ffmpeg-php built on => Jul 18 2014 08:46:12
    ffmpeg-php gd support  => enabled
    ffmpeg libavcodec version => Lavc52.108.0
    ffmpeg libavformat version => Lavf52.93.0
    ffmpeg swscaler version => SwS0.12.0
    ffmpeg.allow_persistent => 0 => 0
    ffmpeg.show_warnings => 0 => 0
    PWD => /usr/local/src/ffmpeg-php-0.6.0
    _SERVER["PWD"] => /usr/local/src/ffmpeg-php-0.6.0
    _ENV["PWD"] => /usr/local/src/ffmpeg-php-0.6.0

    I got two of the three lines, but one of them is not character-for-character the same.

    Is the bare ffmpeg line in my output the same as ffmpegffmpeg support (ffmpeg-php) => enabled in this context?

    EDIT:
    Running the command ffmpeg -version gives me this result:

    FFmpeg version SVN-r26402, Copyright (c) 2000-2011 the FFmpeg developers
     built on Jul 18 2014 08:41:45 with gcc 4.4.7 20120313 (Red Hat 4.4.7-3)
     configuration: --enable-libmp3lame --disable-mmx --enable-shared
     libavutil     50.36. 0 / 50.36. 0
     libavcore      0.16. 1 /  0.16. 1
     libavcodec    52.108. 0 / 52.108. 0
     libavformat   52.93. 0 / 52.93. 0
     libavdevice   52. 2. 3 / 52. 2. 3
     libavfilter    1.74. 0 /  1.74. 0
     libswscale     0.12. 0 /  0.12. 0
    FFmpeg SVN-r26402
    libavutil     50.36. 0 / 50.36. 0
    libavcore      0.16. 1 /  0.16. 1
    libavcodec    52.108. 0 / 52.108. 0
    libavformat   52.93. 0 / 52.93. 0
    libavdevice   52. 2. 3 / 52. 2. 3
    libavfilter    1.74. 0 /  1.74. 0
    libswscale     0.12. 0 /  0.12. 0