Advanced search

Media (0)

Tag: formulaire

No media matches your criterion on the site.

Other articles (25)

  • Personalising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

  • Support for all types of media

    10 April 2011

    Unlike many software packages and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google (...)

On other websites (4558)

  • bash: ffmpeg libx265 prevent output

    13 September 2015, by linux_lover

    I’d like to use the new codec x265 (libx265) to encode my video collection.

    For this I created a lovely bash script under Linux which in general works very well! But something is strange:

    I suppress ffmpeg’s output to the terminal in my own way. With x264 (the “old” one) everything works fine. But as soon as I use x265, I always get this kind of output on my terminal:

    x265 [info]: HEVC encoder version 1.7
    x265 [info]: build info [Linux][GCC 5.1.0][64 bit] 8bpp
    x265 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64
    x265 [info]: Main profile, Level-2.1 (Main tier)
    x265 [info]: Thread pool created using 2 threads
    x265 [info]: frame threads / pool features       : 1 / wpp(5 rows)
    x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
    x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
    x265 [info]: ME / range / subpel / merge         : hex / 57 / 2 / 2
    x265 [info]: Keyframe min / max / scenecut       : 25 / 250 / 40
    x265 [info]: Lookahead / bframes / badapt        : 20 / 4 / 2
    x265 [info]: b-pyramid / weightp / weightb / refs: 1 / 1 / 0 / 3
    x265 [info]: AQ: mode / str / qg-size / cu-tree  : 1 / 1.0 / 64 / 1
    x265 [info]: Rate Control / qCompress            : CRF-28.0 / 0.60
    x265 [info]: tools: rd=3 psy-rd=0.30 signhide tmvp strong-intra-smoothing
    x265 [info]: tools: deblock sao

    This is the way I encode my video with ffmpeg:

    ffmpeg -i /input/file -c:v libx265 -c:a copy -loglevel quiet /output/file.mp4 <>/dev/null 2>&1

    I thought that the

    <>/dev/null 2>&1

    and the

    -loglevel quiet

    would do this, but apparently I’m mistaken.

    How can I solve this problem?

    Thanks for your help!
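    A minimal sketch of one way this could be silenced, assuming the stray lines come from libx265's own logger rather than from ffmpeg itself: pass log-level=none through -x265-params in addition to -loglevel quiet, and use the conventional >/dev/null 2>&1 redirection. The file names below are placeholders, not the asker's paths.

    # Sketch only: log-level=none tells the x265 encoder itself to stay quiet,
    # while -loglevel quiet covers ffmpeg's own messages.
    ffmpeg -hide_banner -loglevel quiet \
        -i input.mkv \
        -c:v libx265 -x265-params log-level=none \
        -c:a copy \
        output.mp4 >/dev/null 2>&1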

  • Encode and stream from Xbox 360 kinect using ffmpeg

    17 June 2015, by user3288346

    I want to live stream content obtained from the Kinect onto my internal network.

    I have one physical machine, which is my server and has Ubuntu 14.04 Server on it. I connect to it remotely. I have installed ffmpeg and ffserver and can encode and stream stored video files on the server. However, I have a few problems when using the Xbox Kinect.

    I have an Xbox 360 Kinect which I have attached through USB. I have followed this guide: https://bitbucket.org/samirmenon/scl-manips-v2/wiki/vision/kinect, however I couldn’t get past the OpenCV part. When I run

    $ cmake-gui ..

    I get

    cmake-gui: cannot connect to X server

    I don’t have physical access to the machine. This is probably because I’m accessing it remotely.
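    One workaround, sketched under the assumption that the guide's default cache options are acceptable (the build directory name here is hypothetical), is to configure with the non-interactive cmake front end, or the curses-based ccmake, instead of cmake-gui; alternatively, connecting with ssh -X would let cmake-gui open its window on a local display.

    # Sketch: configure and build without a GUI over the SSH session.
    mkdir -p build && cd build   # hypothetical out-of-source build directory
    cmake ..                     # non-interactive configure, instead of cmake-gui
    cmake -L ..                  # optionally list the cache variables to tweak
    make -j"$(nproc)"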

    When I do

    test@cloud-node-2:~/kinnect$ lsusb
    Bus 002 Device 006: ID 045e:02ae Microsoft Corp. Xbox NUI Camera
    Bus 002 Device 004: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
    Bus 002 Device 005: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
    Bus 002 Device 003: ID 0409:005a NEC Corp. HighSpeed Hub
    Bus 002 Device 002: ID 0bda:0181 Realtek Semiconductor Corp.
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    When I do

    test@cloud-node-2:~/kinnect$ ls -ltrh /dev/video*
    ls: cannot access /dev/video*: No such file or directory

    Therefore, I am not able to capture the video using ffmpeg.
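    For the missing device node, a hedged sketch (not from the linked guide): the Kinect's camera is not a standard UVC webcam, so no /dev/video* node appears by default, but where the mainline gspca_kinect kernel module is available it can expose one that ffmpeg's v4l2 input can read. The device name, resolution and stream target below are assumptions.

    # Sketch: assumes the gspca_kinect module exists in the running kernel and
    # that /dev/video0 and udp://192.168.0.10:1234 are the desired endpoints.
    sudo modprobe gspca_kinect
    ls /dev/video*                          # a video node should now appear
    ffmpeg -f v4l2 -framerate 30 -video_size 640x480 -i /dev/video0 \
        -c:v libx264 -preset veryfast -tune zerolatency \
        -f mpegts udp://192.168.0.10:1234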

  • How do I create and initialise a DXGI_FORMAT_NV12 resource in DX12 (source is AVFrame)

    5 January 2023, by mike

    I'm trying to create an NV12 resource as a source for a video encoder in DX12. While I intend to eventually populate a resource from the GPU, what I'm trying to do now is take an ffmpeg AVFrame I already have (in AV_PIX_FMT_YUV420P format) and create a texture in DXGI_FORMAT_NV12 format using that data.

    I understand the NV12 format (https://learn.microsoft.com/en-us/windows/win32/medfound/recommended-8-bit-yuv-formats-for-video-rendering#nv12) has U and V interleaved while AV_PIX_FMT_YUV420P doesn't.

    My main question is what the D3D12_RESOURCE_DESC looks like for an NV12 texture: do I tell it I need more than one array/mip level to make it planar? Or do I just give it a single memory address with both planes laid out as per the NV12 format, and it figures out the subresources for me based on the format?

    I understand that to read the data I define two SRVs, one for Y mapped to the Red channel and a second for U and V, but it's how I initialise it that's confusing me.