
Other articles (27)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...) -
Making the files available
14 April 2011
By default, when it is first set up, MediaSPIP does not let visitors download files, whether they are originals or the result of their transformation or encoding. It only lets them view the files.
However, it is possible, and easy, to give visitors access to these documents in various forms.
All of this is done in the skeleton configuration page. Go to the channel's administration area and choose in the navigation (...) -
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in standalone form.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)
On other sites (7062)
-
How do I create and initialise a DXGI_FORMAT_NV12 resource in DX12 (source is AVFrame)?
5 January 2023, by mike
I'm trying to create an NV12 resource as a source for a video encoder in DX12. While I intend to eventually populate the resource from the GPU, what I'm trying to do now is take an ffmpeg AVFrame I already have (in AV_PIX_FMT_YUV420P format) and create a texture in DXGI_FORMAT_NV12 format using that data.

I understand the NV12 format (https://learn.microsoft.com/en-us/windows/win32/medfound/recommended-8-bit-yuv-formats-for-video-rendering#nv12) has U and V interleaved, while AV_PIX_FMT_YUV420P doesn't.

My main question is: what does the D3D12_RESOURCE_DESC look like for an NV12 texture? Do I tell it I need more than one array/mip level to make it planar? Or do I just give it a single memory address with both planes laid out as per the NV12 format, and it figures out the subresources for me based on the format?

I understand that to read the data I define two SRVs, one for Y mapped to the Red channel and a second for U and V, but it's how I initialise it that's confusing me.
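
Independent of the D3D12 descriptor question, the plane repacking the question touches on can be sketched in a few lines. This is only a minimal illustration of the NV12 byte layout, assuming three tightly packed numpy planes (a real AVFrame has per-plane linesize padding that would need to be respected); the function name and frame size are hypothetical.

import numpy as np

def yuv420p_to_nv12(y, u, v):
    """Repack separate Y, U, V planes (YUV420P) into the two NV12 planes.

    y: (H, W) uint8; u, v: (H//2, W//2) uint8.
    Returns (y_plane, uv_plane), where uv_plane is (H//2, W) with the
    chroma samples interleaved as U0 V0 U1 V1 ... per the NV12 layout.
    """
    h2, w2 = u.shape
    uv = np.empty((h2, w2 * 2), dtype=np.uint8)
    uv[:, 0::2] = u  # even byte positions hold U samples
    uv[:, 1::2] = v  # odd byte positions hold V samples
    return y, uv

# Hypothetical 1280x720 frame: the Y plane is full resolution, the
# interleaved UV plane is half the height and the full row width in bytes.
y = np.zeros((720, 1280), dtype=np.uint8)
u = np.zeros((360, 640), dtype=np.uint8)
v = np.zeros((360, 640), dtype=np.uint8)
y_plane, uv_plane = yuv420p_to_nv12(y, u, v)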

-
Encode and stream from Xbox 360 Kinect using ffmpeg
17 June 2015, by user3288346
I want to live-stream content obtained from the Kinect onto my internal network.
I have one physical machine which is my server and has Ubuntu 14.04 Server on it; I connect to it remotely. I have installed ffmpeg and ffserver and can encode and stream stored video files on the server. However, I have a few problems when using the Xbox Kinect.
I have an Xbox 360 Kinect attached through USB. I have followed this guide, https://bitbucket.org/samirmenon/scl-manips-v2/wiki/vision/kinect, but I couldn't get through the OpenCV part. When I run
$ cmake-gui ..
I get
cmake-gui: cannot connect to X server
I don't have physical access to the machine; this is probably because I'm accessing it remotely.
When I do
test@cloud-node-2:~/kinnect$ lsusb
Bus 002 Device 006: ID 045e:02ae Microsoft Corp. Xbox NUI Camera
Bus 002 Device 004: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 002 Device 005: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
Bus 002 Device 003: ID 0409:005a NEC Corp. HighSpeed Hub
Bus 002 Device 002: ID 0bda:0181 Realtek Semiconductor Corp.
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
When I do
test@cloud-node-2:~/kinnect$ ls -ltrh /dev/video*
ls: cannot access /dev/video*: No such file or directory
Therefore, I am not able to capture the video using ffmpeg.
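
For the capture step itself, a rough sketch of what the ffmpeg invocation could look like once a V4L2 device node actually exists (for example, if the Kinect RGB camera is exposed through the gspca_kinect kernel module). The device path, resolution, frame rate, and the ffserver feed URL are all assumptions that would need to match the actual setup.

# Minimal sketch, assuming the Kinect camera shows up as /dev/video0 and
# ffserver is already running with a feed called feed1.ffm; both are
# assumptions, not givens.
import subprocess

capture_cmd = [
    "ffmpeg",
    "-f", "v4l2",                 # read from a Video4Linux2 capture device
    "-framerate", "30",
    "-video_size", "640x480",
    "-i", "/dev/video0",          # assumed device node for the Kinect camera
    "http://localhost:8090/feed1.ffm",  # ffserver feed URL (placeholder)
]
subprocess.run(capture_cmd, check=True)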
-
How to create a video out of frames without saving it to disk using Python?
6 September 2022, by brenodacosta
I have a function that returns a frame as its result. I want to know how to make a video out of a for-loop with this function, without saving every frame and then creating the video.


What I have for now is something similar to:


import cv2

out = cv2.VideoWriter('video.mp4', cv2.VideoWriter_fourcc(*'DIVX'), 14.25, (500, 258))
for frame in frames:
    img_result = MyImageTreatmentFunction(frame)  # returns a numpy array image
    out.write(img_result)
out.release()



Then the video is created as video.mp4 and I can load it into memory from there. I'm wondering whether there is a way to have this video in a variable that I can easily convert to bytes later, since my purpose is to send the video via an HTTP POST.


I've looked at ffmpeg-python and OpenCV but I didn't find anything that applies to my case.
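
One common workaround, sketched below under the assumption that an ffmpeg binary is available on the PATH: pipe raw BGR frames into an ffmpeg subprocess and read the encoded container back from stdout, using fragmented MP4 so ffmpeg can write to a non-seekable pipe. The frame size and rate reuse the question's values, and frames / MyImageTreatmentFunction are the question's own placeholders.

import subprocess
import numpy as np

def encode_frames_to_mp4_bytes(frames, width=500, height=258, fps=14.25):
    """Encode an iterable of (height, width, 3) uint8 BGR frames to MP4 bytes."""
    cmd = [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "bgr24",       # raw frames arrive on stdin
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "pipe:0",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-movflags", "frag_keyframe+empty_moov",     # fragmented MP4: pipeable output
        "-f", "mp4", "pipe:1",
    ]
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    for frame in frames:
        proc.stdin.write(np.ascontiguousarray(frame, dtype=np.uint8).tobytes())
    proc.stdin.close()
    video_bytes = proc.stdout.read()
    proc.wait()
    return video_bytes

# Usage with the question's placeholders; the result can then be sent with
# e.g. requests.post(url, data=video_bytes).
# video_bytes = encode_frames_to_mp4_bytes(MyImageTreatmentFunction(f) for f in frames)

For long videos the stdout pipe can fill up while frames are still being written; draining stdout from a separate thread avoids that deadlock, but the simple version above is enough to show the idea.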