Advanced search

Media (1)

Word: - Tags - /iphone

Other articles (65)

  • Videos

    21 April 2011, by

    Like "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 tag.
    One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name one) and that each browser natively handles only certain video formats.
    Its main advantage, on the other hand, is that it benefits from native video support in browsers, doing away with the use of Flash and (...)

  • Participate in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    At present, MediaSPIP is only available in French and (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

On other sites (9195)

  • CUDA_ERROR_INVALID_CONTEXT

    15 August 2021, by Meme Machine

    I am making a desktop-sharing application based on these sample repositories from NVIDIA.

    https://github.com/NVIDIA/video-sdk-samples/tree/master/nvEncDXGIOutputDuplicationSample

    https://github.com/NVIDIA/video-sdk-samples/blob/master/Samples/AppDecode/AppDecD3D/

    https://github.com/NVIDIA/video-sdk-samples/tree/master/Samples/AppDecode/AppDecMem

    I intend to have a setup function that is called once, when Remote Desktop is selected, and a second function that actually displays the received frames, called each time a frame is received.

    The functions below are nearly identical to the main() and NvDecD3D() functions found in the AppDecD3D and AppDecMem samples.

    CUcontext cuContext = NULL; // maybe it has to do with this variable?

    int setup()
    {
        char szInFilePath[256] = "C:\\Users\\Admin\\Desktop\\test.h264";
        int iGpu = 0;
        int iD3d = 0;
        try
        {
            //ParseCommandLine(argc, argv, szInFilePath, NULL, iGpu, NULL, &iD3d);
            CheckInputFile(szInFilePath);

            ck(cuInit(0));
            int nGpu = 0;
            ck(cuDeviceGetCount(&nGpu));
            if (iGpu < 0 || iGpu >= nGpu)
            {
                std::ostringstream err;
                err << "GPU ordinal out of range. Should be within [" << 0 << ", " << nGpu - 1 << "]" << std::endl;
                throw std::invalid_argument(err.str());
            }
            CUdevice cuDevice = 0;
            ck(cuDeviceGet(&cuDevice, iGpu));
            char szDeviceName[80];
            ck(cuDeviceGetName(szDeviceName, sizeof(szDeviceName), cuDevice));
            std::cout << "GPU in use: " << szDeviceName << std::endl;

            ck(cuCtxCreate(&cuContext, CU_CTX_SCHED_BLOCKING_SYNC, cuDevice));
            //NvDecD3D<FramePresenterD3D11>(szInFilePath);

            std::cout << "Display with D3D11." << std::endl;
        }
        catch (const std::exception& ex)
        {
            std::cout << ex.what();
            exit(1);
        }
        return 0;
    }

    template<class FramePresenterType, typename = std::enable_if<std::is_base_of<FramePresenterD3D, FramePresenterType>::value>>
    int NvDecD3D(char* szInFilePath)
    {
        FileDataProvider dp(szInFilePath);
        FFmpegDemuxer demuxer(&dp);
        NvDecoder dec(cuContext, demuxer.GetWidth(), demuxer.GetHeight(), true, FFmpeg2NvCodecId(demuxer.GetVideoCodec()));
        FramePresenterType presenter(cuContext, demuxer.GetWidth(), demuxer.GetHeight());
        CUdeviceptr dpFrame = 0;
        ck(cuMemAlloc(&dpFrame, demuxer.GetWidth() * demuxer.GetHeight() * 4));
        int nVideoBytes = 0, nFrameReturned = 0, nFrame = 0;
        uint8_t* pVideo = NULL, ** ppFrame;

        do
        {
            demuxer.Demux(&pVideo, &nVideoBytes);
            dec.Decode(pVideo, nVideoBytes, &ppFrame, &nFrameReturned);
            if (!nFrame && nFrameReturned)
                LOG(INFO) << dec.GetVideoInfo();

            for (int i = 0; i < nFrameReturned; i++)
            {
                if (dec.GetBitDepth() == 8)
                    Nv12ToBgra32((uint8_t*)ppFrame[i], dec.GetWidth(), (uint8_t*)dpFrame, 4 * dec.GetWidth(), dec.GetWidth(), dec.GetHeight());
                else
                    P016ToBgra32((uint8_t*)ppFrame[i], 2 * dec.GetWidth(), (uint8_t*)dpFrame, 4 * dec.GetWidth(), dec.GetWidth(), dec.GetHeight());
                presenter.PresentDeviceFrame((uint8_t*)dpFrame, demuxer.GetWidth() * 4);
            }
            nFrame += nFrameReturned;
        } while (nVideoBytes);
        ck(cuMemFree(dpFrame));
        std::cout << "Total frame decoded: " << nFrame << std::endl;
        return 0;
    }


    Notice the line NvDecD3D<FramePresenterD3D11>(szInFilePath);? I plan to call NvDecD3D() when a frame is received, so I commented out the call in setup() and moved it into my asio::async_read handler (see below).


    void do_read_body()
    {
        readBuffer.reserve(_read_msg.ReadLength);
        _read_msg.Body = readBuffer.data();
        auto self(shared_from_this());
        asio::async_read(_socket,
            asio::buffer(_read_msg.Body, _read_msg.ReadLength),
            [this, self](std::error_code ec, std::size_t /*length*/)
            {
                if (!ec)
                {
                    if (_read_msg.CmdId == 0x5)
                    {
                        std::cout << "Received a frame" << std::endl;

                        NvDecD3D<FramePresenterD3D11>(szInFilePath);
                    }
                    else
                    {
                        std::cout << std::string(_read_msg.Body, 0, _read_msg.ReadLength) << std::endl;
                    }

                    do_read_header();
                }
                else
                {
                    _room.leave(shared_from_this());
                }
            });
    }


    However, when I execute it, I get CUDA_ERROR_INVALID_CONTEXT when cuMemAlloc() is called. If I instead uncomment the call to NvDecD3D() inside setup() and call it from there, there is no error.


    Do you have any idea what could be causing this problem? Perhaps it is related to ASIO.
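    One property of the CUDA driver API that fits these symptoms, for what it's worth: a CUcontext is current to a particular thread. cuCtxCreate() makes the new context current only on the thread that called it, and a thread with no current context gets CUDA_ERROR_INVALID_CONTEXT from calls such as cuMemAlloc(). ASIO completion handlers usually run on an io_context thread, not on the thread that ran setup(). The sketch below is a minimal, GPU-free model of that per-thread rule (the names memAlloc/g_current are illustrative, not real CUDA symbols):

    ```cpp
    #include <cassert>
    #include <string>
    #include <thread>

    // Toy model of the CUDA driver API's per-thread "current context".
    // In the real API, cuCtxCreate() makes the context current on the
    // calling thread only; other threads must call cuCtxSetCurrent().
    struct Ctx { int id; };

    thread_local Ctx* g_current = nullptr;    // models the per-thread current context

    std::string memAlloc() {                  // models cuMemAlloc()'s context check
        return g_current ? "CUDA_SUCCESS" : "CUDA_ERROR_INVALID_CONTEXT";
    }

    int main() {
        Ctx ctx{1};
        g_current = &ctx;                     // setup(): cuCtxCreate() on this thread
        assert(memAlloc() == "CUDA_SUCCESS"); // works when called from setup()

        std::string fromHandler;
        std::thread io([&] {                  // models the ASIO io_context thread
            fromHandler = memAlloc();         // no context is current here
        });
        io.join();
        assert(fromHandler == "CUDA_ERROR_INVALID_CONTEXT");

        std::thread fixed([&] {
            g_current = &ctx;                 // models cuCtxSetCurrent(cuContext)
            fromHandler = memAlloc();
        });
        fixed.join();
        assert(fromHandler == "CUDA_SUCCESS");
        return 0;
    }
    ```

    If this is indeed the cause, calling cuCtxSetCurrent(cuContext) at the top of NvDecD3D() (or at the start of the async_read handler), before any driver-API call, should make the context usable from the handler thread.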


  • FFMPEG/NVDEC Fails When Under 7 Frames

    13 August 2021, by Meme Machine

    I was looking at the examples from NVIDIA's repository, specifically their encoding and decoding projects. I downloaded the desktop duplication sample, which lets you capture a certain number of frames from the desktop as raw H.264, and AppDecode, which decodes and displays frames from an input file. I noticed that if I try to capture only a single frame, decoding the input file fails.


    Here is the output


    C:\Users\Admin>C:\Users\Admin\source\repos\video-sdk-samples\Samples\x64.Debug\AppDecD3d -d3d 11 -i C:\Users\Admin\source\repos\video-sdk-samples\nvEncDXGIOutputDuplicationSample\x64\Debug\ddatest_0.h264
    GPU in use: NVIDIA GeForce RTX 2080 Super with Max-Q Design
    Display with D3D11.
    [INFO ][17:59:47] Media format: raw H.264 video (h264)
    Session Initialization Time: 39 ms
    [INFO ][17:59:47] Video Input Information
            Codec        : AVC/H.264
            Frame rate   : 30000/1000 = 30 fps
            Sequence     : Progressive
            Coded size   : [1920, 1088]
            Display area : [0, 0, 1920, 1080]
            Chroma       : YUV 420
            Bit depth    : 8
    Video Decoding Params:
            Num Surfaces : 20
            Crop         : [0, 0, 0, 0]
            Resize       : 1920x1088
            Deinterlace  : Weave

    Total frame decoded: 7
    Session Deinitialization Time: 10 ms

    C:\Users\Admin>C:\Users\Admin\source\repos\video-sdk-samples\Samples\x64.Debug\AppDecD3d -d3d 11 -i C:\Users\Admin\source\repos\video-sdk-samples\nvEncDXGIOutputDuplicationSample\x64\Debug\ddatest_0.h264
    GPU in use: NVIDIA GeForce RTX 2080 Super with Max-Q Design
    Display with D3D11.
    [INFO ][17:59:54] Media format: raw H.264 video (h264)
    [h264 @ 0000023B8AB5C3A0] decoding for stream 0 failed
    Session Initialization Time: 42 ms
    [INFO ][17:59:54] Video Input Information
            Codec        : AVC/H.264
            Frame rate   : 30000/1000 = 30 fps
            Sequence     : Progressive
            Coded size   : [1920, 1088]
            Display area : [0, 0, 1920, 1080]
            Chroma       : YUV 420
            Bit depth    : 8
    Video Decoding Params:
            Num Surfaces : 20
            Crop         : [0, 0, 0, 0]
            Resize       : 1920x1088
            Deinterlace  : Weave

    Total frame decoded: 6
    Session Deinitialization Time: 10 ms


    I started at 10 frames and counted down; it eventually failed at 6. It is important for me to know why this happens, because I plan to build this decoder into my project and will be feeding it single frames from a stream.
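    A hedged guess at the mechanism, which I can't confirm from the log alone: decoders like NVDEC keep several frames in flight for reordering and only release them as newer input pushes them out or when an end-of-stream flush arrives. In the sample's decode loop that flush is the final Decode() call with nVideoBytes == 0; if a very short stream never reaches that path (the "decoding for stream 0 failed" line suggests the demuxer itself chokes on a stream too short to probe), the buffered tail frames are simply lost. A toy model of that buffering behaviour, with an illustrative pipeline depth:

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <deque>
    #include <vector>

    // Toy model of a decoder that keeps `depth` frames in flight for
    // reordering; frames come out only when newer input pushes them out
    // or when an explicit end-of-stream flush empties the pipeline.
    class ToyDecoder {
        std::deque<int> pipeline_;
        std::size_t depth_;
    public:
        explicit ToyDecoder(std::size_t depth) : depth_(depth) {}

        // Feed one coded frame; returns the frames ready to be emitted.
        std::vector<int> decode(int frame) {
            pipeline_.push_back(frame);
            std::vector<int> out;
            while (pipeline_.size() > depth_) {
                out.push_back(pipeline_.front());
                pipeline_.pop_front();
            }
            return out;
        }

        // End-of-stream flush: emit everything still buffered.
        std::vector<int> flush() {
            std::vector<int> out(pipeline_.begin(), pipeline_.end());
            pipeline_.clear();
            return out;
        }
    };

    int main() {
        ToyDecoder dec(4);                    // illustrative reorder depth
        std::size_t emitted = 0;
        for (int f = 0; f < 3; ++f)           // feed a stream shorter than the depth
            emitted += dec.decode(f).size();
        assert(emitted == 0);                 // nothing emitted: all still buffered
        emitted += dec.flush().size();        // like Decode(pVideo, 0, ...) at EOS
        assert(emitted == 3);                 // the flush recovers the short stream
        return 0;
    }
    ```

    If that is what is happening in your case, the fix is on the producer side (make sure the capture emits a complete, flushable stream) rather than in the decode loop, which already flushes when the demuxer reports zero bytes.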


    Also, I noticed that according to the output log the coded size is 1920x1088 instead of 1920x1080. I am not sure why that occurs or whether it is relevant.
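    The 1088 is expected behaviour, for what it's worth: H.264 codes frames in 16x16 macroblocks, so the coded height is the display height rounded up to the next multiple of 16, and the bitstream carries a crop rectangle giving the real display area, which is exactly what the log shows ("Coded size: [1920, 1088]", "Display area: [0, 0, 1920, 1080]"). A quick check of the arithmetic:

    ```cpp
    #include <cassert>

    // H.264 coded dimensions are rounded up to whole 16x16 macroblocks;
    // the stream then records a crop rectangle for the display area.
    int codedSize(int displaySize) {
        return (displaySize + 15) / 16 * 16;
    }

    int main() {
        assert(codedSize(1080) == 1088); // matches "Coded size: [1920, 1088]"
        assert(codedSize(1920) == 1920); // width is already a multiple of 16
        return 0;
    }
    ```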


  • Adding album cover art to FLAC audio files using `ffmpeg`

    27 December 2022, by user5395338

    I have ripped files from an audio CD I just bought, using the Music app on my MacBook Pro (Catalina 10.15.6). The output format was .wav, as there was no option for FLAC. My plan was to change the format using ffmpeg:


    % ffmpeg -v
    ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers


    Except for the album cover artwork, the .wav-to-.flac conversion implemented in the short bash script below seems to have worked as expected:


    #!/bin/bash
    for file in *.wav
    do
        echo $file
        ffmpeg -loglevel quiet -i "$file" -ar 48000 -c:a flac -disposition:v AnotherLand.png -vsync 0 -c:v png "${file/%.wav/.flac}"
    done


    A script very similar to this one worked some time ago on a series of FLAC-to-FLAC conversions I had to do to reduce the bit depth; in that case, though, the original FLAC files already had the artwork embedded. Since this script produced usable audio files, I decided to try adding the artwork with a second ffmpeg command.


    I did some research, which told me that there have been issues with ffmpeg (1, 2, 3, 4) when adding album artwork to FLAC files.


    I have tried several commands given in the references above, but still have not found a way to add album artwork to my FLAC files. The following command came from a highly upvoted answer that I felt would work, but it didn't:


    % ffmpeg -i "01 Grave Walker.flac" -i ./AnotherLand.png -map 0:0 -map 1:0 -codec copy -id3v2_version 3 -metadata:s:v title="Album cover" -metadata:s:v comment="Cover (front)" output.flac

    ...

    Input #0, flac, from '01 Grave Walker.flac':
      Metadata:
        encoder         : Lavf58.76.100
      Duration: 00:06:59.93, start: 0.000000, bitrate: 746 kb/s
      Stream #0:0: Audio: flac, 48000 Hz, stereo, s16
    Input #1, png_pipe, from './AnotherLand.png':
      Duration: N/A, bitrate: N/A
      Stream #1:0: Video: png, rgba(pc), 522x522, 25 fps, 25 tbr, 25 tbn, 25 tbc
    File 'output.flac' already exists. Overwrite? [y/N] y
    [flac @ 0x7fb4d701e800] Video stream #1 is not an attached picture. Ignoring
    Output #0, flac, to 'output.flac':
      Metadata:
        encoder         : Lavf58.76.100
      Stream #0:0: Audio: flac, 48000 Hz, stereo, s16
      Stream #0:1: Video: png, rgba(pc), 522x522, q=2-31, 25 fps, 25 tbr, 25 tbn, 25 tbc
        Metadata:
          title           : Album cover
          comment         : Cover (front)
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #1:0 -> #0:1 (copy)

    ...


    I don't understand the error message : Video stream #1 is not an attached picture. It seems to imply that that the artwork is "attached" (embedded ???) in the input file, but as I've specified the artwork is a separate file, this makes no sense to me.
