Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (66)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should be attached automatically; objet, the type of object to which (...)

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, OGV and WebM (supported by HTML5), with MP4 also supported by Flash.
    Audio files are encoded as MP3 and Ogg (supported by HTML5), with MP3 also supported by Flash.
    Where possible, text is analysed in order to retrieve the data needed by search engines, and the document is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
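
    By way of illustration (these are not MediaSPIP's actual encoding settings, just typical ffmpeg invocations for this kind of conversion):

    ffmpeg -i source.mov -c:v libx264 -c:a aac web.mp4
    ffmpeg -i source.mov -c:v libvpx -c:a libvorbis web.webm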

On other sites (6690)

  • FFMPEG/NVDEC Fails When Under 7 Frames

    13 August 2021, by Meme Machine

    I was looking at the examples from NVIDIA's repository, specifically their encoding and decoding projects. I downloaded the desktop duplication project, which allows you to capture a certain number of frames from the desktop as raw H.264. I also got AppDecode, which decodes and displays frames from an input file. I noticed that if I try to capture only a single frame, it fails to decode the input file.

    Here is the output

    C:\Users\Admin>C:\Users\Admin\source\repos\video-sdk-samples\Samples\x64.Debug\AppDecD3d -d3d 11 -i C:\Users\Admin\source\repos\video-sdk-samples\nvEncDXGIOutputDuplicationSample\x64\Debug\ddatest_0.h264
GPU in use: NVIDIA GeForce RTX 2080 Super with Max-Q Design
Display with D3D11.
[INFO ][17:59:47] Media format: raw H.264 video (h264)
Session Initialization Time: 39 ms
[INFO ][17:59:47] Video Input Information
        Codec        : AVC/H.264
        Frame rate   : 30000/1000 = 30 fps
        Sequence     : Progressive
        Coded size   : [1920, 1088]
        Display area : [0, 0, 1920, 1080]
        Chroma       : YUV 420
        Bit depth    : 8
Video Decoding Params:
        Num Surfaces : 20
        Crop         : [0, 0, 0, 0]
        Resize       : 1920x1088
        Deinterlace  : Weave

Total frame decoded: 7
Session Deinitialization Time: 10 ms

C:\Users\Admin>C:\Users\Admin\source\repos\video-sdk-samples\Samples\x64.Debug\AppDecD3d -d3d 11 -i C:\Users\Admin\source\repos\video-sdk-samples\nvEncDXGIOutputDuplicationSample\x64\Debug\ddatest_0.h264
GPU in use: NVIDIA GeForce RTX 2080 Super with Max-Q Design
Display with D3D11.
[INFO ][17:59:54] Media format: raw H.264 video (h264)
[h264 @ 0000023B8AB5C3A0] decoding for stream 0 failed
Session Initialization Time: 42 ms
[INFO ][17:59:54] Video Input Information
        Codec        : AVC/H.264
        Frame rate   : 30000/1000 = 30 fps
        Sequence     : Progressive
        Coded size   : [1920, 1088]
        Display area : [0, 0, 1920, 1080]
        Chroma       : YUV 420
        Bit depth    : 8
Video Decoding Params:
        Num Surfaces : 20
        Crop         : [0, 0, 0, 0]
        Resize       : 1920x1088
        Deinterlace  : Weave

Total frame decoded: 6
Session Deinitialization Time: 10 ms


    I started from 10 frames and counted down to 6, where it eventually failed. It is important for me to know why this happens, because I plan to implement this decoder in my project and will be feeding it single frames from a stream.
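
    For context, a minimal sketch of what feeding a single packet might look like (an assumption on my part: it uses the NvDecoder wrapper from the Video Codec SDK samples, where a zero-length Decode() call signals end of stream and drains the decoder's reorder buffer):

// Sketch only: decode one raw H.264 packet, then flush.
// 'dec' is the NvDecoder from the SDK samples; 'pPacket'/'nPacketBytes'
// stand in for a single captured frame (hypothetical names).
uint8_t** ppFrame = nullptr;
int nFrameReturned = 0;

dec.Decode(pPacket, nPacketBytes, &ppFrame, &nFrameReturned);

// An empty packet marks end of stream, so buffered frames are returned.
// Very short streams that are never flushed can come back with fewer
// frames than were encoded, or fail outright.
dec.Decode(nullptr, 0, &ppFrame, &nFrameReturned);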

    Oh, and I also noticed the coded size is 1920 by 1088 instead of 1080, according to the output log. Not sure why that is occurring or if it is relevant.

  • CUDA_ERROR_INVALID_CONTEXT

    15 August 2021, by Meme Machine

    I am making a desktop sharing application based on these repositories from NVIDIA.

    https://github.com/NVIDIA/video-sdk-samples/tree/master/nvEncDXGIOutputDuplicationSample

    https://github.com/NVIDIA/video-sdk-samples/blob/master/Samples/AppDecode/AppDecD3D/

    https://github.com/NVIDIA/video-sdk-samples/tree/master/Samples/AppDecode/AppDecMem

    I intend to have a setup function that is called once when Remote Desktop is selected, and then a second function that actually displays the received frames, which is called whenever a frame is received.

    The functions below are nearly identical to the main() and NvDecD3D() functions found in the AppDecD3D and AppDecMem samples.

CUcontext cuContext = NULL; // maybe it has to do with this variable?

int setup()
{
    char szInFilePath[256] = "C:\\Users\\Admin\\Desktop\\test.h264";
    int iGpu = 0;
    int iD3d = 0;
    try
    {
        //ParseCommandLine(argc, argv, szInFilePath, NULL, iGpu, NULL, &iD3d);
        CheckInputFile(szInFilePath);

        ck(cuInit(0));
        int nGpu = 0;
        ck(cuDeviceGetCount(&nGpu));
        if (iGpu < 0 || iGpu >= nGpu)
        {
            std::ostringstream err;
            err << "GPU ordinal out of range. Should be within [" << 0 << ", " << nGpu - 1 << "]" << std::endl;
            throw std::invalid_argument(err.str());
        }
        CUdevice cuDevice = 0;
        ck(cuDeviceGet(&cuDevice, iGpu));
        char szDeviceName[80];
        ck(cuDeviceGetName(szDeviceName, sizeof(szDeviceName), cuDevice));
        std::cout << "GPU in use: " << szDeviceName << std::endl;

        ck(cuCtxCreate(&cuContext, CU_CTX_SCHED_BLOCKING_SYNC, cuDevice));
        //NvDecD3D<FramePresenterD3D11>(szInFilePath);

        std::cout << "Display with D3D11." << std::endl;
    }
    catch (const std::exception& ex)
    {
        std::cout << ex.what();
        exit(1);
    }
    return 0;
}

template<class FramePresenterType, typename = std::enable_if<std::is_base_of<FramePresenterD3D, FramePresenterType>::value>>
int NvDecD3D(char* szInFilePath)
{
    FileDataProvider dp(szInFilePath);
    FFmpegDemuxer demuxer(&dp);
    NvDecoder dec(cuContext, demuxer.GetWidth(), demuxer.GetHeight(), true, FFmpeg2NvCodecId(demuxer.GetVideoCodec()));
    FramePresenterType presenter(cuContext, demuxer.GetWidth(), demuxer.GetHeight());
    CUdeviceptr dpFrame = 0;
    ck(cuMemAlloc(&dpFrame, demuxer.GetWidth() * demuxer.GetHeight() * 4));
    int nVideoBytes = 0, nFrameReturned = 0, nFrame = 0;
    uint8_t* pVideo = NULL, ** ppFrame;

    do
    {
        demuxer.Demux(&pVideo, &nVideoBytes);
        dec.Decode(pVideo, nVideoBytes, &ppFrame, &nFrameReturned);
        if (!nFrame && nFrameReturned)
            LOG(INFO) << dec.GetVideoInfo();

        for (int i = 0; i < nFrameReturned; i++)
        {
            if (dec.GetBitDepth() == 8)
                Nv12ToBgra32((uint8_t*)ppFrame[i], dec.GetWidth(), (uint8_t*)dpFrame, 4 * dec.GetWidth(), dec.GetWidth(), dec.GetHeight());
            else
                P016ToBgra32((uint8_t*)ppFrame[i], 2 * dec.GetWidth(), (uint8_t*)dpFrame, 4 * dec.GetWidth(), dec.GetWidth(), dec.GetHeight());
            presenter.PresentDeviceFrame((uint8_t*)dpFrame, demuxer.GetWidth() * 4);
        }
        nFrame += nFrameReturned;
    } while (nVideoBytes);
    ck(cuMemFree(dpFrame));
    std::cout << "Total frame decoded: " << nFrame << std::endl;
    return 0;
}


    Notice the line NvDecD3D<FramePresenterD3D11>(szInFilePath);? I plan to call NvDecD3D() when a frame is received, so I commented out the call in setup() and moved it to my asio::async_read handler (see below).


void do_read_body()
{
    readBuffer.reserve(_read_msg.ReadLength);
    _read_msg.Body = readBuffer.data();
    auto self(shared_from_this());
    asio::async_read(_socket,
        asio::buffer(_read_msg.Body, _read_msg.ReadLength),
        [this, self](std::error_code ec, std::size_t /*length*/)
        {
            if (!ec)
            {
                if (_read_msg.CmdId == 0x5)
                {
                    std::cout << "Received a frame" << std::endl;

                    NvDecD3D<FramePresenterD3D11>(szInFilePath);
                }
                else
                {
                    std::cout << std::string(_read_msg.Body, 0, _read_msg.ReadLength) << std::endl;
                }

                do_read_header();
            }
            else
            {
                _room.leave(shared_from_this());
            }
        });
}


    However, when I execute it, I get CUDA_ERROR_INVALID_CONTEXT when cuMemAlloc() is called. If I uncomment the call to NvDecD3D() inside setup() and call it from there, however, it does not error.


    Do you have any idea what could be causing this problem? Perhaps it is related to ASIO.
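
    For what it's worth, one possibility I am considering (an assumption on my part, not something from the samples): the CUDA context created in setup() is only current on the thread that created it, and the asio handler may run on a different thread, so cuMemAlloc() would find no current context there. A minimal sketch of binding the shared context on the handler thread with the driver API's cuCtxPushCurrent()/cuCtxPopCurrent():

// Sketch only: make the context created in setup() current on the
// ASIO handler thread before decoding, then release it afterwards.
if (_read_msg.CmdId == 0x5)
{
    std::cout << "Received a frame" << std::endl;

    ck(cuCtxPushCurrent(cuContext));   // bind the shared context to this thread
    NvDecD3D<FramePresenterD3D11>(szInFilePath);
    CUcontext ctx;
    ck(cuCtxPopCurrent(&ctx));         // leave the thread with no current context
}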


  • Create drawbox with ffmpeg between specific seconds (without reencoding whole video - faster)

    31 March 2022, by protter

    My plan was to put a transparent red box over the video. The box should only be present from seconds 1 to 45. But if the video is 3 hours long, the process takes a long time, even though it only has to process 45 seconds.


    My first attempt takes too long:


    ffmpeg -i %1 -vf drawbox=0:9*ih/10:iw:ih/10:t=fill:color=red@0.5:enable='between(t,1,45)' "%~dp0transpred\%~n1%~x1"


    Then I tried splitting the video into two parts, putting the box on the first part, and then joining the two back together.


    ffmpeg -ss 00:00:00.0000 -i %1 -to 00:00:45.0000 -vf drawbox=0:9*ih/10:iw:ih/10:t=fill:color=red@0.5:enable='between(t,1,45)' "%~dp0transpred\%~n1A%~x1"


    ffmpeg -ss 00:00:45.0000 -i %1 -c:v copy -c:a copy -avoid_negative_ts make_zero "%~dp0transpred\%~n1B%~x1"


    But I don't even have to try to put these two together, because they are not split at exactly that second. I have read that this is due to timestamps and the separate video and audio streams; with -c copy, ffmpeg can only cut at keyframes, so the split point drifts.


    Now I'm trying an approach where I create a stream with the bar and then overlay the finished video with it. I haven't quite managed that yet, and I don't know if it's faster; a sketch of the idea follows. Shortening the video is very fast.
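
    Roughly what I have in mind (an untested sketch; it assumes a 1920-pixel-wide video with the bar covering the bottom tenth, and it still re-encodes the video track):

    ffmpeg -i %1 -f lavfi -i "color=c=red@0.5:s=1920x108,format=rgba" -filter_complex "[0:v][1:v]overlay=x=0:y=main_h-overlay_h:shortest=1:enable='between(t,1,45)'" -c:a copy "%~dp0transpred\%~n1%~x1"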


    EDIT (added later, as a replacement for a comment)


    Thanks for your help, I have almost done it with a slightly different approach. Unfortunately, the second part now always has no sound, no matter whether I join A and B (B has no sound) or B and A (A has no sound).


    1. First split with mkvmerge, so I have no worries about the keyframes and get the exact time:
       mkvmerge --split timestamps:00:00:45.100 A.MKV -o splitmkm.mkv

    2. Then add the bar (black, for easier testing):
       ffmpeg -i splitmkm-001.mkv -vf drawbox=0:9*ih/10:iw:ih/10:t=fill BAR1.MKV

    3. Merge (mkvmerge ends with an error):
       ffmpeg -safe 0 -f concat -i list.txt -c copy output1.mkv

    EDIT (Answer to kesh)


    This was the error: "Again, audio codec config's must match across all your concat files." The drawbox step had changed the audio codec from AC-3 to Vorbis.


    The procedure is now:


    1. mkvtoolnix\mkvmerge --split timestamps:00:00:05.100 %1 -o A_splitmkm.mkv
       With mkvmerge I get an exact split at the given time, and I don't have to learn about keyframes.

    2. ffmpeg -i A_splitmkm-001.mkv -vf drawbox=0:9*ih/10:iw:ih/10:t=fill:color=red A_BARmkm.MKV
       Create the bar.

    3. ffmpeg -i A_BARmkm.MKV -i A_splitmkm-001.mkv -map 0:v -map 1 -map -1:v -c copy A_BARwithAudio.mkv
       Put the original audio back, since the drawbox step had changed it.

    4. ffmpeg -safe 0 -f concat -i list.txt -map 0 -c copy A_output1.mkv
       Merge (see the list.txt sketch below).
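
    For reference, the list.txt for ffmpeg's concat demuxer would look something like this (filenames assumed from the split step above):

    file 'A_BARwithAudio.mkv'
    file 'A_splitmkm-002.mkv'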


    Now everything works. Thanks a lot!
