
Media (2)
-
Core Media Video
4 April 2013, by
Updated: June 2013
Language: French
Type: Video
-
Portrait video of a bee
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (95)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once activated, MediaSPIP init automatically puts a preconfiguration in place so the new feature works out of the box; no separate configuration step is therefore required. -
Authorizations overridden by plugins
27 April 2010, by
MediaSPIP core
autoriser_auteur_modifier() so that visitors can edit their own information on the authors page
On other sites (7759)
-
How can I transform a sequence of images into a playable video using LibVLCSharp?
9 February 2021, by adamasan
I have a sequence of images that I was able to extract from a video using LibVLCSharp, this sample to be more specific. I'm creating a small video library manager for learning purposes, and I would like to extract frames and create thumbnails that play when the user hovers the mouse over the previewer.


Using the aforementioned sample, I was able to build a WPF UI around the same logic and extract the frames from a video file. However, what I want now is to convert these extracted frames into a video file and use them as a preview for the video, just like on YouTube.


I wasn't able, however, to find out how to achieve this using LibVLCSharp or plain LibVLC. Using this answer on Super User, I was able to achieve my goal and put those frames together into a video using ffmpeg.


I haven't taken the time yet to study FFmpeg.Autogen, so I don't know whether it would let me extract the frames from the video files the same way I can with LibVLCSharp, but I'm not keen on using two libraries in my application, one to export the frames and another to turn those frames back into a video.


So, is there a way to take the output frames and convert them into a playable video using LibVLCSharp (or libvlc) itself?
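For reference, the ffmpeg route from the Super User answer boils down to a single command. A minimal sketch, assuming the extracted frames are numbered frame_0001.jpg, frame_0002.jpg, … in the current directory (the file names and the 10 fps rate are assumptions, not details from the question):

```shell
# Stitch numbered JPEG frames into an H.264 MP4 (hypothetical file names).
# -framerate sets how fast the stills play back;
# -pix_fmt yuv420p keeps the result playable in most players.
ffmpeg -framerate 10 -i frame_%04d.jpg -c:v libx264 -pix_fmt yuv420p preview.mp4
```

Such a command could be launched from C# via System.Diagnostics.Process, but whether LibVLC itself can consume an image sequence as input is exactly what remains open here.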


-
Using video frame decoded by FFMPEG via CUDA as texture or image from OpenGL
29 May 2022, by JPNotADragon
Using the example provided here, I was able to get FFMPEG to decode an HEVC-encoded video via CUDA (as evidenced by Task Manager showing my test program using 75% of the GPU).


I would like to go one step further and display those frames directly from the GPU: it would be a pity to have to download them to the CPU (as the above sample code does) only to re-upload them to the GPU as OpenGL textures, especially since the YUV-to-RGB conversion is also more efficient on the GPU (and I already have a shader doing that).


EDIT: to clarify what I'm trying to do, here is the list of decoding devices reported by FFMPEG via avcodec_get_hw_config() and av_hwdevice_get_type_name():

Hardware configurations:
0: Device Type: dxva2
1: Device Type: (none)
2: Device Type: d3d11va
3: Device Type: cuda



As talonmies has pointed out in the comments, that last one is actually misnamed: the decoding is not done via CUDA compute but by dedicated hardware (SIP) on the GPU chip. There is no doubt, however, that this is the device I need to use, though it would more properly be designated "NVDEC".


EDIT #2: looking through cuviddec.c, the FFMPEG source file responsible for decoding via NVDEC, it appears that there is no option that would prevent the unnecessary copy from device (GPU) memory to host (CPU) memory. That means I will most probably have to program against the NVDEC API myself.

-
h264: don't sync pic_id between threads.
3 April 2017, by Ronald S. Bultje
This is how the ref list manager links bitstream IDs to H264Picture/Ref
objects, and is local to the producer thread. There is no need for the
consumer thread to know the bitstream IDs of its references in their
respective producer threads.

In practice, this fixes tsan warnings when running fate-h264:
WARNING: ThreadSanitizer: data race (pid=19295)
Read of size 4 at 0x7dbc0000e614 by main thread (mutexes: write M1914):
#0 ff_h264_ref_picture src/libavcodec/h264_picture.c:112 (ffmpeg+0x0000013b3709)
[..]
Previous write of size 4 at 0x7dbc0000e614 by thread T2 (mutexes: write M1917):
#0 build_def_list src/libavcodec/h264_refs.c:91 (ffmpeg+0x0000013b46cf)