Advanced search

Media (0)

Word: - Tags -/protocoles

No media matching your criteria is available on the site.

Other articles (110)

  • Adding user-specific information and other author-related behavior changes

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviors (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared hosting on a regular basis. Combined with a system Cron on the central site of the farm, this makes it possible to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...)

  • Use it, talk about it, critique it

    10 April 2011

    The first thing to do is to talk about it, either directly with the people involved in its development or with those around you, to convince new people to use it.
    The larger the community, the faster it will evolve...
    A mailing list is available for any exchange between users.

On other sites (13013)

  • Audio decoding using ffms2 (ffmpegsource)

    1 May 2013, by praks411

    I'm using ffms2 (ffmpegsource), a wrapper around libav, to get video and audio frames from a file.
    Video decoding is working fine; however, I'm facing some issues with audio decoding.
    FFMS2 provides a single API function, FFMS_GetAudio(FFMS_AudioSource *A, void *Buf, int64_t Start, int64_t Count, FFMS_ErrorInfo *ErrorInfo), to get the decoded audio. The decoded data is returned in a buffer provided by the user.

    For a single channel the interpretation is straightforward: the data bytes start at the first location of the user buffer. However, with two channels there are two possibilities: the decoded data can be planar or interleaved, depending on the sample format returned by FFMS_GetAudioProperties. In my case the sample format is always planar, which means the decoded data will be in two separate data planes, data[0] and data[1]. This matches what libav/ffmpeg explains, and also PortAudio, which treats planar data as two separate data planes.

    However, FFMS_GetAudio only takes a single buffer from the user. So, for planar data, can I assume
    data[0] = buf and data[1] = buf + offset, where offset is half the length of the buffer filled by FFMS_GetAudio?
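
    Concretely, the layout I'm assuming looks like the sketch below (this is only my guess at the planar layout, not something FFMS2 documents; the helper name and the bytesPerSample parameter are placeholders):

    #include <ffms.h>
    #include <cstdint>
    #include <vector>

    // Sketch only: encodes my assumption that, for planar stereo, the single
    // buffer filled by FFMS_GetAudio holds all of channel 0 first, then channel 1.
    // bytesPerSample would come from FFMS_GetAudioProperties (BitsPerSample / 8).
    void GetStereoPlanes(FFMS_AudioSource *src, int64_t start, int64_t count,
                         int bytesPerSample, FFMS_ErrorInfo *err,
                         std::vector<uint8_t> &buf,
                         uint8_t *&plane0, uint8_t *&plane1)
    {
        buf.resize(static_cast<size_t>(count) * bytesPerSample * 2); // 2 channels
        FFMS_GetAudio(src, buf.data(), start, count, err);
        plane0 = buf.data();                          // assumed: channel 0 samples
        plane1 = buf.data() + count * bytesPerSample; // assumed: offset = half the buffer
    }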

    FFMS2 does not provide any good documentation of this interpretation, so it would be a great help if someone could provide more information on this.

  • Suggestions on moving textures in system RAM to the GPU in DirectX?

    11 February 2014, by OddlyOrdinary

    I've used FFmpeg to decode a 1080p 60 fps video and have a pool of ready texture color information, stored as just a uint8_t array. I've managed to create ID3D10Texture2D textures and update them with the color information, but my performance varies between 45 and 65 fps.

    For reference, this is a test application where I'm attempting to draw the video on a mesh. There are no other objects being processed by DirectX or my application.

    My original implementation got the pixel information from the pool of decoded video frames and used a simple dynamic Texture2D with a map/memcpy/unmap cycle. The memcpy alone was very expensive, at about 20% of my runtime.
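
    That path looked roughly like the sketch below (my own reconstruction of the pattern; the frame dimensions and the RGBA format are assumptions, not details from my actual setup):

    #include <d3d10.h>
    #include <cstdint>
    #include <cstring>

    // Upload one decoded frame into a D3D10_USAGE_DYNAMIC texture via Map/Unmap.
    void UploadFrame(ID3D10Texture2D *dynamicTex, const uint8_t *pixels,
                     UINT width, UINT height)
    {
        D3D10_MAPPED_TEXTURE2D mapped;
        if (SUCCEEDED(dynamicTex->Map(0, D3D10_MAP_WRITE_DISCARD, 0, &mapped)))
        {
            // Copy row by row because the texture's RowPitch may be padded.
            for (UINT y = 0; y < height; ++y)
            {
                std::memcpy(static_cast<uint8_t *>(mapped.pData) + y * mapped.RowPitch,
                            pixels + static_cast<size_t>(y) * width * 4, // 4 bytes per RGBA pixel
                            static_cast<size_t>(width) * 4);
            }
            dynamicTex->Unmap(0);
        }
    }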

    I've since changed to decoding the video straight into a pool of D3D10_USAGE_DYNAMIC / D3D10_CPU_ACCESS_WRITE textures, so I always have textures ready before each update loop. I then have a Texture2D applied to the mesh that I'm updating with:

    ID3D10Texture2D* decodedFrame = mDecoder->GetFrame();
    if (decodedFrame) {
       // ID3D10Device* device;
       device->CopyResource(mTexture, decodedFrame);
    }

    From my understanding CopyResource should be faster, but I don't see a noticeable difference. My questions are: is there a better way? Also, for textures created with D3D10_USAGE_DYNAMIC, is there a way to tell DirectX that I intend to use them on the GPU the next frame?

    The last thing I can think of would be decoding into a D3D10_USAGE_DEFAULT texture, but I don't know how I would create it using the existing pixel information in system RAM. Suggestions would be greatly appreciated, thanks!
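
    Roughly, what I have in mind for that route is something like the sketch below (just a guess at the approach; the width, height, and RGBA format are placeholder assumptions):

    #include <d3d10.h>
    #include <cstdint>

    // Create a D3D10_USAGE_DEFAULT texture whose initial contents come straight
    // from a decoded frame that is already sitting in system RAM.
    ID3D10Texture2D *CreateDefaultTexture(ID3D10Device *device, const uint8_t *pixels,
                                          UINT width, UINT height)
    {
        D3D10_TEXTURE2D_DESC desc = {};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D10_USAGE_DEFAULT;
        desc.BindFlags = D3D10_BIND_SHADER_RESOURCE;

        D3D10_SUBRESOURCE_DATA init = {};
        init.pSysMem = pixels;        // decoded frame in system RAM
        init.SysMemPitch = width * 4; // tightly packed RGBA rows

        ID3D10Texture2D *tex = nullptr;
        device->CreateTexture2D(&desc, &init, &tex);
        return tex;
    }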

  • Read dumped RTP stream in libav

    25 May 2017, by Pawel K

    Hi, I am in need of a bit of help/guidance because I have got stuck in my research.

    The problem:

    How to convert RTP data using either GStreamer or avlib (ffmpeg), either through the API (programmatically) or with the console tools.

    Data

    I have an RTP dump that comes from RTP/RTCP over TCP, so I can get the precise start and stop for each RTP packet in the file. It's an H264 video stream dump.
    The data is in this form because I need to acquire the RTCP/RTP interleaved stream via libcurl (which I'm currently doing).

    Status

    I've tried to use ffmpeg to consume the pure RTP packets, but it seems that using RTP, either from the console or programmatically, involves "starting" the whole RTSP/RTP session business in ffmpeg. I've stopped there and, for the time being, haven't pursued this avenue further. I guess this is possible with a lower-level RTP API like ff_rtp_parse_packet(), but I'm too new to this lib to do it straight away.

    Then there is GStreamer. It has somewhat more capability to do this without programming, but for the time being I'm not able to figure out how to pass it the RTP dump I have.

    I have also tried a bit of trickery: streaming the dump via socat/nc to a UDP port and listening on it with ffplay, using an SDP file as input. There seems to be some progress, the RTP at least gets recognized, but with socat there are loads of missing packets (data sent too fast, perhaps?) and in the end the data is not visualized. When I used nc the video was badly misshapen, but at least there were not as many receive errors.
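
    For reference, the SDP file and the receiving side I've been experimenting with look roughly like this (the port, payload type, and file name are just the values I picked, not anything mandated by the dump):

    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=H264 RTP dump
    c=IN IP4 127.0.0.1
    t=0 0
    m=video 5004 RTP/AVP 96
    a=rtpmap:96 H264/90000

    ffplay -protocol_whitelist file,udp,rtp -i stream.sdp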

    One way or another the data is not properly visualized.

    I know I can depacketize the data "by hand", but the idea is to do it via some kind of library, because in the end there would also be a second stream with audio that would have to be muxed together with the video.

    I would appreciate any help on how to tackle this problem.
    Thanks.