Other articles (56)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select fields; compare the two images below.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be attached to a "media" article;

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune the configuration of these menus.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is usually inserted at the top of the page, after the header block, and its identifier makes it compatible with Zpip-based templates; (...)

On other sites (3511)

  • How to stop a sound when a certain other sound is inserted into the mix in ffmpeg?

    3 April 2022, by Antonio Oliveira

    I'm using an ffmpeg command that takes a set of sounds and mixes them into a single file, separating them by certain time intervals.

    Below is the command I am currently using:

    ffmpeg -i close_hh.wav -i crash_l.wav -i crash_r.wav -i floor.wav \
           -i kick_l.wav -i kick_r.wav -i open_hh.wav -i ride.wav \
           -i snare.wav -i splash.wav -i tom_1.wav -i tom_2.wav -i tom_3.wav \
           -filter_complex "\
             [6]adelay=0|0[note_0]; \
             [0]adelay=360|360[note_1]; \
             [6]adelay=1260|1260[note_2]; \
             [0]adelay=1537|1537[note_3]; \
             [6]adelay=2494|2494[note_4]; \
             [5]adelay=2767|2767[note_5]; \
             [0]adelay=2969|2969[note_6]; \
             [6]adelay=3673|3673[note_7]; \
             [5]adelay=3924|3924[note_8]; \
             [0]adelay=4132|4132[note_9]; \
             [0][note_0][note_1][note_2][note_3][note_4][note_5][note_6][note_7][note_8][note_9]amix=inputs=11:normalize=0" \
           record.wav

    This is the resulting audio that this command generates:

    ffmpeg record.wav: https://drive.google.com/file/d/1LFV4ImLKLnRCqZRhZ7OqZy4Ecq5fwT3j/view?usp=sharing

    The purpose is to generate a drum recording, so I would like to simulate the dynamics of the hi-hat: when the closed hi-hat is played, the open hi-hat should stop sounding immediately if it is still ringing. None of the other sounds behave this way.

    One point that makes this a little more challenging is that other sounds can also be played between the open and closed hi-hat strikes, and the interruption behavior should still work normally in that case.

    Below is a recording demonstrating the expected result. (My app already produces the desired sound internally, so I simply made a quick microphone recording to illustrate it.)

    mic record.wav: https://drive.google.com/file/d/19x19Fd_URQVo-MMCmGEHIC1SjaQbpWrh/view?usp=sharing

    Notice that in the first audio (ffmpeg record.wav) the first sound (the open hi-hat) keeps playing after the second one starts.
    In the second audio (mic record.wav) the first sound stops immediately once the second sound is played.

    How should the ffmpeg command be written to get the expected result?
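
    One way to approach this (a sketch, not verified against the original sample files) is to cut each open hi-hat instance with atrim before delaying it, so that it ends exactly when the next closed hi-hat strike begins. The cut points below are derived from the adelay values in the question's command: the open hi-hat at 0 ms is cut at 360 ms, the one at 1260 ms at 1537 ms, the one at 2494 ms at 2969 ms, and the one at 3673 ms at 4132 ms.

    ```shell
    # Sketch: trim each open hi-hat note ([6]) so it ends where the next
    # closed hi-hat strike ([0]) starts, then delay it as before.
    # Durations = next closed-hh delay minus this open-hh delay.
    ffmpeg -i close_hh.wav -i crash_l.wav -i crash_r.wav -i floor.wav \
           -i kick_l.wav -i kick_r.wav -i open_hh.wav -i ride.wav \
           -i snare.wav -i splash.wav -i tom_1.wav -i tom_2.wav -i tom_3.wav \
           -filter_complex "\
             [6]atrim=end=0.360,adelay=0|0[note_0]; \
             [0]adelay=360|360[note_1]; \
             [6]atrim=end=0.277,adelay=1260|1260[note_2]; \
             [0]adelay=1537|1537[note_3]; \
             [6]atrim=end=0.475,adelay=2494|2494[note_4]; \
             [5]adelay=2767|2767[note_5]; \
             [0]adelay=2969|2969[note_6]; \
             [6]atrim=end=0.459,adelay=3673|3673[note_7]; \
             [5]adelay=3924|3924[note_8]; \
             [0]adelay=4132|4132[note_9]; \
             [0][note_0][note_1][note_2][note_3][note_4][note_5][note_6][note_7][note_8][note_9]amix=inputs=11:normalize=0" \
           record.wav
    ```

    If the hard cut produces an audible click, a short fade-out can be inserted between the trim and the delay, e.g. atrim=end=0.360,afade=t=out:st=0.33:d=0.03.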

  • Why does the output of MP3 decoding sound so delayed? (with the ffmpeg mp3lame lib)

    11 March 2014, by user3401739

    I'm recording sound and encoding it to MP3 with the ffmpeg libraries, then decoding the MP3 data right away and playing the decoded data, but it sounds very delayed.
    Here is the code:
    The first parameter of the encode function accepts the raw PCM data, with len = 44100.

    encode parameters :

    cntx_->channels = 1;
    cntx_->sample_rate = 44100;
    cntx_->sample_fmt = 6;               // 6 == AV_SAMPLE_FMT_S16P (planar signed 16-bit)
    cntx_->channel_layout =  AV_CH_LAYOUT_MONO;
    cntx_->bit_rate = 8000;
    err_ = avcodec_open2(cntx_, codec_, NULL);

    vector<unsigned char> encode(unsigned char* encode_data, unsigned int len)
    {
       vector<unsigned char> ret;
       AVPacket avpkt;
       av_init_packet(&avpkt);

       unsigned int len_encoded = 0;
       int data_left = len / 2;            // 16-bit samples: byte count / 2
       int miss_c = 0, i = 0;
       while (data_left > 0)
       {
           int sz = data_left > cntx_->frame_size ? cntx_->frame_size : data_left;
           mp3_frame_->nb_samples = sz;
           mp3_frame_->format = cntx_->sample_fmt;
           mp3_frame_->channel_layout = cntx_->channel_layout;

           int needed_size = av_samples_get_buffer_size(NULL, 1,
               mp3_frame_->nb_samples, cntx_->sample_fmt, 1);

           int r = avcodec_fill_audio_frame(mp3_frame_, 1, cntx_->sample_fmt,
               encode_data + len_encoded, needed_size, 0);

           int got_packet = -1;
           r = avcodec_encode_audio2(cntx_, &avpkt, mp3_frame_, &got_packet);
           if (got_packet){                // the encoder produced a packet
               i++;
               ret.insert(ret.end(), avpkt.data, avpkt.data + avpkt.size);
           }
           else {                          // samples were buffered, no packet yet
               miss_c++;
           }
           len_encoded += needed_size;
           data_left -= sz;
           av_free_packet(&avpkt);
       }
       return ret;
    }

    std::vector<unsigned char> decode(unsigned char* data, unsigned int len)
    {
       std::vector<unsigned char> ret;

       AVPacket avpkt;
       av_init_packet(&avpkt);
       avpkt.data = data;
       avpkt.size = len;

       AVFrame* pframe = av_frame_alloc();
       while (avpkt.size > 0){
           int got_frame = -1;
           av_frame_unref(pframe);
           int used = avcodec_decode_audio4(cntx_, pframe, &got_frame, &avpkt);
           if (used < 0){                  // decode error
               break;
           }
           if (got_frame){                 // a decoded frame is available
               ret.insert(ret.end(), pframe->data[0],
                   pframe->data[0] + pframe->linesize[0]);
           }
           avpkt.data += used;
           avpkt.size -= used;
           avpkt.dts = avpkt.pts = AV_NOPTS_VALUE;
       }
       av_frame_free(&pframe);
       return ret;
    }

    Suppose it's the 100th call to encode(data, len): that "frame" only shows up around the 150th decode call or later, and that latency is not acceptable. It seems the mp3lame encoder holds sample data back for later use, which is not what I want.
    I don't know what is going wrong. Thank you for any information.

    Today I debugged the code again; here are some details:

    encode: each PCM input has len = 23040 bytes, which is 10 times the MP3 frame size, yet each call to encode only outputs 9 frames; this causes decode to output 20736 samples, so 1 frame (2304 bytes) is lost, and the sound is noisy.

    If MP3 or MP2 encoding is not suitable for real-time voice transfer, which encoder should I choose?
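
    For what it's worth, part of the delay described above is inherent to MP3: the encoder holds samples in an internal delay buffer (encoder delay plus bit reservoir), so packets come out later than the samples went in. With the old avcodec_encode_audio2() API used above, whatever the encoder is still holding can be drained at end of input by passing a NULL frame; below is a minimal sketch, assuming the same opened encoder context cntx_ as in the question (this is a fragment, not a standalone program):

    ```c
    /* Drain packets the MP3 encoder is still buffering: after the last real
     * frame, avcodec_encode_audio2() accepts frame == NULL and keeps emitting
     * packets until got_packet comes back 0 (this needs the codec's DELAY
     * capability, which the mp3lame encoder has). */
    AVPacket avpkt;
    int got_packet = 1;
    while (got_packet) {
        av_init_packet(&avpkt);
        avpkt.data = NULL;
        avpkt.size = 0;
        if (avcodec_encode_audio2(cntx_, &avpkt, NULL, &got_packet) < 0)
            break;                          /* encoder error */
        if (got_packet) {
            /* append avpkt.data .. avpkt.data + avpkt.size to the output */
            av_free_packet(&avpkt);
        }
    }
    ```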

  • avcodec : Add explicit capability flag for encoder flushing

    10 April 2020, par Philip Langdale
    avcodec : Add explicit capability flag for encoder flushing
    

    Previously, there was no way to flush an encoder such that after
    draining, the encoder could be used again. We generally suggested
    that clients teardown and replace the encoder instance in these
    situations. However, for at least some hardware encoders, the cost of
    this teardown/replace cycle is very high, which can get in the way of
    some use cases - for example: segmented encoding with nvenc.

    To help address that use case, we added support for calling
    avcodec_flush_buffers() to nvenc and things worked in practice,
    although it was not clearly documented as to whether this should work
    or not. There was only one previous example of an encoder implementing
    the flush callback (audiotoolboxenc) and it's unclear if that was
    intentional or not. However, it was clear that calling
    avcodec_flush_buffers() on any other encoder would leave the encoder in
    an undefined state, and that's not great.

    As part of cleaning this up, this change introduces a formal capability
    flag for encoders that support flushing and ensures a flush call is a
    no-op for any other encoder. This allows client code to check if it is
    meaningful to call flush on an encoder before actually doing it.

    I have not attempted to separate the steps taken inside
    avcodec_flush_buffers() because it's not doing anything that's wrong
    for an encoder. But I did add a sanity check to reject attempts to
    flush a frame threaded encoder because I couldn't wrap my head around
    whether that code path was actually safe or not. As this combination
    doesn't exist today, we'll deal with it if it ever comes up.
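
    From client code, the new flag can be tested before attempting to reuse an encoder; a minimal sketch based on the names in this commit (a fragment, assuming an opened AVCodecContext *avctx):

    ```c
    /* Flush only if the encoder advertises support; otherwise fall back to
     * the teardown-and-replace path described above. */
    if (avctx->codec->capabilities & AV_CODEC_CAP_ENCODER_FLUSH) {
        avcodec_flush_buffers(avctx);   /* encoder is usable again after draining */
    } else {
        /* close this context and open a fresh encoder instance instead */
    }
    ```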

    • [DH] doc/APIchanges
    • [DH] libavcodec/audiotoolboxenc.c
    • [DH] libavcodec/avcodec.h
    • [DH] libavcodec/decode.c
    • [DH] libavcodec/nvenc_h264.c
    • [DH] libavcodec/nvenc_hevc.c
    • [DH] libavcodec/version.h