Other articles (29)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents a few of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Support for all media types

    10 April 2011

    Unlike many modern software packages and document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

  • Other interesting software

    13 April 2011

    We don't claim to be the only ones doing what we do, and certainly not the best at it; we simply try to do it well and to keep improving.
    The list below covers software that is more or less similar to MediaSPIP, or whose goals MediaSPIP more or less shares.
    We don't know these projects and haven't tried them, but you can take a peek.
    Videopress
    Website : http://videopress.com/
    License : GNU/GPL v2
    Source code : (...)

On other sites (4716)

  • Use modern avconv syntax for codec selection in documentation and tests

    18 October 2012, by Diego Biurrun
    Use modern avconv syntax for codec selection in documentation and tests
    
    • [DBH] doc/encoders.texi
    • [DBH] doc/faq.texi
    • [DBH] doc/filters.texi
    • [DBH] tests/fate-run.sh
    • [DBH] tests/fate/demux.mak
    • [DBH] tests/fate/h264.mak
    • [DBH] tests/fate/microsoft.mak
    • [DBH] tests/fate/mp3.mak
    • [DBH] tests/fate/mpc.mak
    • [DBH] tests/fate/utvideo.mak
    • [DBH] tests/fate/video.mak
    • [DBH] tests/fate/vqf.mak
    • [DBH] tests/lavf-regression.sh
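
    As a hedged illustration of the change this commit describes (the file names below are hypothetical), the per-stream -c:v / -c:a options are the modern spelling of the older -vcodec / -acodec flags:

    # older spelling
    avconv -i input.mkv -vcodec libx264 -acodec libvorbis output.mkv
    # modern per-stream codec selection
    avconv -i input.mkv -c:v libx264 -c:a libvorbis output.mkv
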
  • Hardware Accelerated H264 Decode using DirectX11 in Unity Plugin for UWP

    8 January 2019, by rohit n

    I've built a Unity plugin for my UWP app which converts raw H264 packets to RGB data and renders them to a texture. I've used FFmpeg to do this and it works fine.

    // submit the compressed packet, then pull the decoded frame;
    // both calls return 0 on success or a negative AVERROR code
    int framefinished = avcodec_send_packet(m_pCodecCtx, &packet);
    framefinished = avcodec_receive_frame(m_pCodecCtx, m_pFrame);
    // YUV to RGB conversion and render to texture after this

    Now, I'm trying to shift to hardware-based decoding using DirectX11 DXVA2.0.

    Using this: https://docs.microsoft.com/en-us/windows/desktop/medfound/supporting-direct3d-11-video-decoding-in-media-foundation

    I was able to create a decoder (ID3D11VideoDecoder), but I don't know how to supply it with the raw H264 packets and get the YUV or NV12 data as output.
    (Or whether it's possible to render the output directly to the texture, since I can get the ID3D11Texture2D pointer.)

    So my question is: how do you send the raw H264 packets to this decoder and get the output from it?

    Also, this is for real-time operation, so I'm trying to achieve minimal latency.

    Thanks in advance!
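
    A hedged sketch of one common route (not necessarily the only one): instead of feeding ID3D11VideoDecoder yourself, you can let FFmpeg drive the D3D11 decoder through its d3d11va hwaccel, so packets are submitted exactly as in the software path and decoded frames come back as GPU textures. Variable names are illustrative and error handling is omitted:

    // create a D3D11VA device context; FFmpeg creates the ID3D11Device internally
    AVBufferRef *hwDeviceCtx = nullptr;
    av_hwdevice_ctx_create(&hwDeviceCtx, AV_HWDEVICE_TYPE_D3D11VA, nullptr, nullptr, 0);

    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->hw_device_ctx = av_buffer_ref(hwDeviceCtx);
    avcodec_open2(ctx, codec, nullptr);

    // packets go in exactly as before
    avcodec_send_packet(ctx, &packet);
    AVFrame *frame = av_frame_alloc();
    if (avcodec_receive_frame(ctx, frame) == 0) {
        // frame->format is AV_PIX_FMT_D3D11: data[0] is an ID3D11Texture2D*
        // holding NV12 data, data[1] is the texture array slice index
        ID3D11Texture2D *tex = (ID3D11Texture2D *)frame->data[0];
        UINT slice = (UINT)(intptr_t)frame->data[1];
        // copy that slice into your render texture, or call av_hwframe_transfer_data()
        // to download an NV12 AVFrame to system memory
    }

    Keeping the texture on the GPU and copying it into the Unity texture avoids the CPU readback, which is usually what matters most for latency.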

  • How do I create and initialise a DXGI_FORMAT_NV12 resource in DX12 (source is AVFrame)

    5 January 2023, by mike

    I'm trying to create an NV12 resource as the source for a video encoder in DX12. While I intend to eventually populate a resource from the GPU, what I'm trying to do now is take an ffmpeg AVFrame I already have (in AV_PIX_FMT_YUV420P format) and create a texture in DXGI_FORMAT_NV12 format using that data.

    I understand the NV12 format (https://learn.microsoft.com/en-us/windows/win32/medfound/recommended-8-bit-yuv-formats-for-video-rendering#nv12) has U and V interleaved while the AV_PIX_FMT_YUV420P doesn't.
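
    Before the upload, the planar frame has to be repacked; a small sketch of doing that with libswscale (src is assumed to be the decoded YUV420P AVFrame, error handling omitted):

    // convert the planar YUV420P frame into an NV12 frame with interleaved UV
    SwsContext *sws = sws_getContext(src->width, src->height, AV_PIX_FMT_YUV420P,
                                     src->width, src->height, AV_PIX_FMT_NV12,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    AVFrame *nv12 = av_frame_alloc();
    nv12->format = AV_PIX_FMT_NV12;
    nv12->width  = src->width;
    nv12->height = src->height;
    av_frame_get_buffer(nv12, 0);
    sws_scale(sws, src->data, src->linesize, 0, src->height, nv12->data, nv12->linesize);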

    My main question is what the D3D12_RESOURCE_DESC looks like for an NV12 texture. Do I tell it I need more than one array/mip level to make it planar? Or do I just give it a single memory address with both planes laid out as per the NV12 format, and it figures out the subresources for me based on the format?

    I understand that to read the data I define two SRVs, one for Y mapped to the red channel and a second for U and V, but it's how I initialise the resource that's confusing me.
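
    A hedged sketch of how the description is commonly filled in (device, width and height are assumed, error handling omitted): the resource is one TEXTURE2D with Format = DXGI_FORMAT_NV12 and no extra array or mip levels; the Y plane and the interleaved UV plane then appear as two subresources (plane slices 0 and 1) that you fill separately, for example through an upload buffer and CopyTextureRegion, once the frame has been repacked to NV12 as above:

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    desc.Width            = width;   // luma size; chroma sizing is implied by the format
    desc.Height           = height;
    desc.DepthOrArraySize = 1;       // a single texture; the planes are NOT array slices
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_NV12;
    desc.SampleDesc       = { 1, 0 };
    desc.Layout           = D3D12_TEXTURE_LAYOUT_UNKNOWN;

    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_DEFAULT;
    ID3D12Resource *nv12Tex = nullptr;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COPY_DEST, nullptr,
                                    IID_PPV_ARGS(&nv12Tex));

    // subresource 0 is the Y plane, subresource 1 is the interleaved UV plane;
    // GetCopyableFootprints returns one footprint per plane for the upload copy
    D3D12_PLACED_SUBRESOURCE_FOOTPRINT footprints[2];
    UINT numRows[2];
    UINT64 rowSizes[2], totalBytes;
    device->GetCopyableFootprints(&desc, 0, 2, 0, footprints, numRows, rowSizes, &totalBytes);

    For the SRVs you mention, the usual pairing is DXGI_FORMAT_R8_UNORM with PlaneSlice 0 for Y and DXGI_FORMAT_R8G8_UNORM with PlaneSlice 1 for UV.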