Other articles (52)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04
    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)

  • Customizing by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

On other sites (7522)

  • FFMPEG: Mapping YUV data to output buffer of decode function

    22 December 2013, by Zax

    I am modifying a video decoder from FFmpeg. The decoded output is in YUV420 format. I have three pointers, one for each of the Y, U and V planes:

    yPtr -> pointer to the luma (Y) plane
    uPtr -> pointer to the chroma Cb plane
    vPtr -> pointer to the chroma Cr plane

    However, the output pointer to which I need to map my YUV data is of type void *, and there is only one of it.

    That is, void *data is the output pointer to which I need to map my yPtr, uPtr and vPtr. How should I do this?

    One approach I have tried: allocate a new buffer whose size equals the sum of the Y, U and V data, copy the contents of yPtr, uPtr and vPtr into it, and assign this buffer's pointer to the *data output pointer.

    However, this approach is not ideal, because it requires a memcpy and has other performance drawbacks.

    Can anyone suggest an alternative? This may not be directly related to FFmpeg, but since I'm modifying decoder code in FFmpeg's libavcodec, I'm tagging it as FFmpeg.

    Edit: what I'm trying to do:

    My understanding is that if I make the void *data pointer of any decoder's decode function point to my data and set *got_frame_ptr to 1, the framework will take care of dumping this data into the YUV file. Is my understanding right?

    The function prototype of my custom video decoder, like that of any video decoder in FFmpeg, is shown below:

    static int myCustomDec_decode_frame(AVCodecContext *avctx,
               void *data, int *data_size,
               uint8_t *buf, int buf_size) {

    I'm referring to the post "FFMPEG: Explain parameters of any codecs function pointers" and assuming that I need to point *data at my YUV data, and that FFmpeg will take care of the dumping. Please provide suggestions regarding this.
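
    A minimal sketch of the zero-copy route, assuming the old-style decode callback shown above, where (for video decoders) the void *data argument is the AVFrame supplied by the caller: instead of copying, the decoder points the frame's plane pointers at its own buffers. The exact field names and the *data_size / *got_frame_ptr convention vary between libavcodec versions, and lumaStride / chromaStride are hypothetical names for the decoder's plane strides, so treat this as an illustration rather than a drop-in implementation.

    #include "avcodec.h"   /* inside libavcodec, as in the question's setup */

    /* Plane pointers and strides are assumed to be filled in elsewhere by the
     * decoder's own bitstream parsing; declared here only so the sketch is
     * self-contained. */
    static uint8_t *yPtr, *uPtr, *vPtr;
    static int lumaStride, chromaStride;

    static int myCustomDec_decode_frame(AVCodecContext *avctx,
               void *data, int *data_size,
               uint8_t *buf, int buf_size)
    {
        /* For video decoders in this API, data is the caller's AVFrame; no
         * separate output buffer needs to be allocated or copied into. */
        AVFrame *frame = data;

        frame->data[0]     = yPtr;          /* luma (Y) plane      */
        frame->data[1]     = uPtr;          /* chroma Cb (U) plane */
        frame->data[2]     = vPtr;          /* chroma Cr (V) plane */
        frame->linesize[0] = lumaStride;    /* hypothetical stride names */
        frame->linesize[1] = chromaStride;
        frame->linesize[2] = chromaStride;

        *data_size = sizeof(AVFrame);       /* signal that a frame was produced */
        return buf_size;                    /* bytes of input consumed          */
    }

    The one constraint with this approach is lifetime: the plane buffers behind yPtr, uPtr and vPtr must remain valid and unmodified until the caller has consumed the frame, since no copy is made.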

  • Analyze a video's pixel data in Flash

    9 April 2012, by simon.d

    I'm using FFmpeg to do some video analysis on my PC but I'd like to see if I can do something similar in the browser using Flash.

    Does Flash offer any way to take a video file as input, crack it open, and get access to the pixel data?

    Thanks!

  • How to use pyav or opencv to decode a live stream of raw H.264 data?

    6 May 2024, by Dery

    The data is received over a socket with no container around it; it consists of raw I, P and B frames that begin with a NAL header (something like 00 00 00 01). I am currently using pyav to decode the frames, but I can only decode the data once the second PPS (in a key frame) has been received, so that the chunk of data I send to my decode thread can begin with the PPS and SPS; otherwise decode() or demux() returns the error "non-existing PPS 0 referenced decode_slice_header error".

    I want to feed the data to a persistent decoder that remembers the previous P frame, so that after feeding it one B frame it returns a decoded video frame. Alternatively, some form of IO that can be opened as a container while another thread keeps writing data into it.

    Here is my key code:

import io
import av

# read thread: socket is the already-connected socket; read until a key frame
# arrives, then make a new io.BytesIO() to store the new data
rawFrames = io.BytesIO()
while flag_get_keyFrame:
    ....
    content = socket.recv(2048)
    rawFrames.write(content)
    ....

# decode thread: decode the content between two key frames
....
rawFrames.seek(0)
container = av.open(rawFrames)
for packet in container.demux():
    for frame in packet.decode():
        self.frames.append(frame)
....

    My code plays the video, but with a delay of 3-4 seconds, so I am not posting all of it here; I know it does not actually achieve what I want. I want to start playing the video after receiving the first key frame and to decode each subsequent frame right after it is received. With pyav, opencv, ffmpeg or something else, how can I achieve my goal?
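
    One possible low-latency route, sketched below under the assumption that PyAV's standalone CodecContext API is available and that the socket really delivers raw Annex-B H.264 (NAL units prefixed with 00 00 00 01): skip the BytesIO container entirely and feed the bytes into a single persistent h264 decoder, whose parser splits the byte stream into packets and which keeps its SPS/PPS and reference frames between calls.

import av

# A persistent decoder: it remembers SPS/PPS and reference frames between
# calls, so each new chunk can be decoded as soon as it arrives.
codec = av.CodecContext.create("h264", "r")

def feed(chunk):
    """Push freshly received bytes and yield any frames that become decodable."""
    for packet in codec.parse(chunk):        # parser splits the raw byte stream into packets
        for frame in codec.decode(packet):   # frames come out without per-GOP buffering
            yield frame

# Usage sketch inside the receive loop (sock is the already-connected socket):
# while True:
#     data = sock.recv(2048)
#     for frame in feed(data):
#         img = frame.to_ndarray(format="bgr24")   # e.g. hand the frame to OpenCV
#         ...

    The decoder still cannot emit anything before it has seen the SPS/PPS and the first key frame, and a stream that contains B frames adds at least one frame of reordering delay, but after that point each frame should come out right after its data has been fed in.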