Advanced search

Media (91)

Other articles (86)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances of the shared farm on a regular basis. Coupled with a system Cron on the central site of the farm, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)
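
    For illustration only (neither the command nor the URL comes from the article): coupling with a system Cron typically means a crontab entry on the central server that visits the site every minute, for example:

    # Hypothetical crontab entry: hit the central site every minute so the
    # super Cron task fires and relays to every instance of the farm.
    * * * * * curl -s http://central-site.example/ > /dev/null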

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded as MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to extract the data needed by search engines, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
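
    The excerpt does not show MediaSPIP's actual encoding commands; purely as an illustration of producing one of the formats listed above (file names are hypothetical), a WebM conversion with ffmpeg could look like:

    # Hypothetical example: convert an upload to WebM (VP8 video + Vorbis audio)
    ffmpeg -i upload.mov -c:v libvpx -b:v 1M -c:a libvorbis upload.webm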

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work in Internet Explorer
    On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the problem may come from the configuration of Apache's mod_deflate module.
    If the configuration of that Apache module contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly: (...)
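
    The article's example line is truncated above and is not reproduced here; purely as an illustration (this directive is not taken from the article), a typical mod_deflate line in an Apache configuration looks like:

    # Hypothetical example of a mod_deflate directive enabling compression;
    # the article suggests such compression settings can interfere with the player.
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript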

On other sites (12147)

  • How do I use FFMPEG/libav to access the data in individual audio samples?

    15 October 2022, by Breadsnshreds

    The end goal is to visualise the audio waveform for use in DAW-like software. So I want to get each sample's value and draw it. With that in mind, I'm currently stumped trying to gain access to the values stored in each sample. For the time being, I'm just trying to access the value in the first sample - I'll build it into a loop once I have some working code.

    I started off by following the code in this example. However, LibAV/FFMPEG has been updated since then, so a lot of the code is deprecated or straight up doesn't work the same anymore.

    Based on the example above, I believe the logic is as follows:

    1. get the formatting info of the audio file
    2. get audio stream info from the format
    3. check that the codec required for the stream is an audio codec
    4. get the codec context (I think this is info about the codec) - This is where it gets kinda confusing for me
    5. create an empty packet and frame to use - packets are for holding compressed data and frames are for holding uncompressed data
    6. the format reads the first frame from the audio file into our packet
    7. pass that packet into the codec context to be decoded
    8. pass our frame to the codec context to receive the uncompressed audio data of the first frame
    9. create a buffer to hold the values and try allocating samples to it from our frame
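
    For reference, here is a minimal sketch (an editor's sketch, not the question's code) of how the send/receive decode loop is usually structured in recent FFmpeg releases. Two things worth noting: with this API the codec context is normally filled from the stream's parameters via avcodec_parameters_to_context() before avcodec_open2(), and a decoder may buffer input, so a frame is not guaranteed to come out for every packet sent in.

    // Sketch of the modern decode loop (FFmpeg 4.x+ send/receive API).
    // Assumes format, codecContext, packet, frame and stream_index are set up
    // as in the code below.
    while (av_read_frame(format, packet) >= 0) {
        if (packet->stream_index == stream_index) {
            if (avcodec_send_packet(codecContext, packet) == 0) {
                // Drain every frame the decoder is ready to emit; receive
                // returns AVERROR(EAGAIN) once it needs more input.
                while (avcodec_receive_frame(codecContext, frame) == 0) {
                    // frame->nb_samples samples per channel are now available
                    // in frame->data[], in the format given by frame->format.
                }
            }
        }
        av_packet_unref(packet); // done with this packet's data
    }
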
    From debugging my code, I can see that step 7 succeeds and the packet that was empty receives some data. In step 8, however, the frame doesn't receive any data. This is what I need help with. I gather that, for a stereo audio file, a frame should hold samples for two channels, so really I just need your help getting uncompressed data into the frame.

    I've scoured through the documentation for loads of different classes and I'm pretty sure I'm using the right classes and functions to achieve my goal, but evidently not (I'm also using Qt, so I'm using qDebug throughout, and QString to hold the URL for the audio file as path). So without further ado, here's my code :

    // Step 1 - get the formatting info of the audio file
    AVFormatContext* format = avformat_alloc_context();
    if (avformat_open_input(&format, path.toStdString().c_str(), NULL, NULL) != 0) {
        qDebug() << "Could not open file " << path;
        return -1;
    }

// Step 2 - get audio stream info from the format
    if (avformat_find_stream_info(format, NULL) < 0) {
        qDebug() << "Could not retrieve stream info from file " << path;
        return -1;
    }

// Step 3 - check that the codec required for the stream is an audio codec
    int stream_index = -1;
    for (unsigned int i = 0; i < format->nb_streams; i++) {
        if (format->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            stream_index = i;
            break;
        }
    }

    if (stream_index == -1) {
        qDebug() << "Could not retrieve audio stream from file " << path;
        return -1;
    }

// Step 4 -get the codec context
    const AVCodec *codec = avcodec_find_decoder(format->streams[stream_index]->codecpar->codec_id);
    AVCodecContext *codecContext = avcodec_alloc_context3(codec);
    avcodec_open2(codecContext, codec, NULL);
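    // Editor's note (an assumption, not part of the original question): with the
    // codecpar-based API the context is normally filled from the stream's
    // parameters before opening the codec, e.g.
    //   avcodec_parameters_to_context(codecContext, format->streams[stream_index]->codecpar);
    // and the return value of avcodec_open2() should be checked.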

// Step 5 - create an empty packet and frame to use
    AVPacket *packet = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();

// Step 6 - the format reads the first frame from the audio file into our packet
    av_read_frame(format, packet);
// Step 7 - pass that packet into the codec context to be decoded
    avcodec_send_packet(codecContext, packet);
//Step 8 - pass our frame to the codec context to receive the uncompressed audio data of the first frame
    avcodec_receive_frame(codecContext, frame);

// Step 9 - create a buffer to hold the values and try allocating samples to it from our frame
    double *buffer;
    av_samples_alloc((uint8_t**) &buffer, NULL, 1, frame->nb_samples, AV_SAMPLE_FMT_DBL, 0);
    qDebug() << "packet: " << &packet;
    qDebug() << "frame: " << frame;
    qDebug() << "buffer: " << buffer;

    For the time being, step 9 is incomplete, as you can probably tell. But for now, I need help with step 8. Am I missing a step, using the wrong function, or using the wrong class? Cheers.
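
    A hedged aside on steps 8-9 (an editor's sketch, not part of the question): decoders emit samples in their own sample format, which is often planar (for example AV_SAMPLE_FMT_FLTP), so frame->format should be inspected rather than assuming AV_SAMPLE_FMT_DBL. Assuming planar float, reading the decoded values would look roughly like this:

    // Sketch: walk the decoded samples of one frame, assuming planar float.
    // ch_layout needs FFmpeg 5.1+; older releases expose codecContext->channels.
    if (frame->format == AV_SAMPLE_FMT_FLTP) {
        for (int ch = 0; ch < codecContext->ch_layout.nb_channels; ch++) {
            const float *samples = (const float *) frame->data[ch]; // one plane per channel
            for (int i = 0; i < frame->nb_samples; i++) {
                double value = samples[i]; // i-th sample of channel ch, roughly in [-1, 1]
                (void) value;              // draw it, buffer it, etc.
            }
        }
    }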

  • Revision c8ed36432e: Non-uniform quantization experiment

    4 March 2015, by Deb Mukherjee

    Changed Paths:
     Modify /configure
     Modify /vp9/common/vp9_blockd.h
     Modify /vp9/common/vp9_onyxc_int.h
     Modify /vp9/common/vp9_quant_common.c
     Modify /vp9/common/vp9_quant_common.h
     Modify /vp9/common/vp9_rtcd_defs.pl
     Modify /vp9/decoder/vp9_decodeframe.c
     Modify /vp9/decoder/vp9_detokenize.c
     Modify /vp9/encoder/vp9_block.h
     Modify /vp9/encoder/vp9_encodemb.c
     Modify /vp9/encoder/vp9_encodemb.h
     Modify /vp9/encoder/vp9_quantize.c
     Modify /vp9/encoder/vp9_quantize.h
     Modify /vp9/encoder/vp9_rdopt.c

    Non-uniform quantization experiment

    This framework allows lower quantization bins to be shrunk down or
    expanded to match the source distribution more closely (assuming a
    generalized Gaussian-like, centrally peaky model for the coefficients) in
    an entropy-constrained sense. Specifically, the widths of bins 0-4 are
    modified as factors of the nominal quantization step size, and from bin 5
    onwards all bins have the nominal quantization step size. Further,
    different bin-width profiles as well as reconstruction values can be used
    based on the coefficient band, as well as on the quantization step size
    divided into 5 ranges.
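
    As a sketch of the implied quantizer (notation mine, not the commit message's): with nominal step size $Q$ and per-bin width factors $f_0, \dots, f_4$, the bin widths are
    $$ w_i = \begin{cases} f_i\,Q, & 0 \le i \le 4,\\ Q, & i \ge 5, \end{cases} $$
    with both the factor profile and the reconstruction value within each bin selectable per coefficient band and per quantization-step range.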

    A small gain of about 0.16% is currently observed on derflr with the
    same parameters for all q values.
    Optimizing the parameters based on the qstep value is left as a TODO for now.

    Results on derflr with all experiments on are +6.08% (up from 5.88%).

    Experiments are in progress to tune the parameters for different
    coefficient bands and quantization step ranges.

    Change-Id: I88429d8cb0777021bfbb689ef69b764eafb3a1de

  • ffmpeg: Capture x11grab video with automatic 10-minute splitting

    20 July 2015, by Leo Gallucci

    I'm aware I can split the ffmpeg-captured video in a second step, with ffmpeg again; however, it would be nice to be able to do it in one step.

    The command I use to capture is:

    ffmpeg -an -y -f x11grab \
     -framerate ${FFMPEG_FRAME_RATE} \
     -video_size ${FFMPEG_FRAME_SIZE} ${FFMPEG_CODEC_ARGS} \
     -i "${DISPLAY}.0+0,0" \
     "${VIDEO_PATH}"

    Which option can I add to that capture command to tell it to automatically split the file into 10-minute chunks or 5 MB chunks, for example?
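
    One possibility, as an editor's sketch rather than an answer from the thread (the output pattern is hypothetical): ffmpeg's segment muxer splits the output by duration at capture time, e.g. into 600-second chunks; the segment muxer has no size-based option, so the 5 MB variant is left aside.

    ffmpeg -an -y -f x11grab \
     -framerate ${FFMPEG_FRAME_RATE} \
     -video_size ${FFMPEG_FRAME_SIZE} ${FFMPEG_CODEC_ARGS} \
     -i "${DISPLAY}.0+0,0" \
     -f segment -segment_time 600 -reset_timestamps 1 \
     "capture_%03d.mp4"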