
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (97)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
APPENDIX: The plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); and the champs extras v2 plugin, required by inscription3 (...)
On other sites (9153)
-
CUDA_ERROR_INVALID_CONTEXT
15 August 2021, by Meme Machine
I am making a desktop-sharing application based on these repositories from NVIDIA:


https://github.com/NVIDIA/video-sdk-samples/tree/master/nvEncDXGIOutputDuplicationSample


https://github.com/NVIDIA/video-sdk-samples/blob/master/Samples/AppDecode/AppDecD3D/


https://github.com/NVIDIA/video-sdk-samples/tree/master/Samples/AppDecode/AppDecMem


I intend to have a setup function that is called once, when Remote Desktop is selected, and a second function, called whenever a frame is received, that actually displays the received frame.


The functions below are nearly identical to the main() and NvDecD3D() functions found in the AppDecD3D and AppDecMem samples.


CUcontext cuContext = NULL; // maybe it has to do with this variable?

int setup()
{
    char szInFilePath[256] = "C:\\Users\\Admin\\Desktop\\test.h264";
    int iGpu = 0;
    int iD3d = 0;
    try
    {
        //ParseCommandLine(argc, argv, szInFilePath, NULL, iGpu, NULL, &iD3d);
        CheckInputFile(szInFilePath);

        ck(cuInit(0));
        int nGpu = 0;
        ck(cuDeviceGetCount(&nGpu));
        if (iGpu < 0 || iGpu >= nGpu)
        {
            std::ostringstream err;
            err << "GPU ordinal out of range. Should be within [" << 0 << ", " << nGpu - 1 << "]" << std::endl;
            throw std::invalid_argument(err.str());
        }
        CUdevice cuDevice = 0;
        ck(cuDeviceGet(&cuDevice, iGpu));
        char szDeviceName[80];
        ck(cuDeviceGetName(szDeviceName, sizeof(szDeviceName), cuDevice));
        std::cout << "GPU in use: " << szDeviceName << std::endl;

        ck(cuCtxCreate(&cuContext, CU_CTX_SCHED_BLOCKING_SYNC, cuDevice));
        //NvDecD3D<FramePresenterD3D11>(szInFilePath);

        std::cout << "Display with D3D11." << std::endl;
    }
    catch (const std::exception& ex)
    {
        std::cout << ex.what();
        exit(1);
    }
    return 0;
}

template<class FramePresenterType, typename = std::enable_if<std::is_base_of<FramePresenterD3D, FramePresenterType>::value>>
int NvDecD3D(char* szInFilePath)
{
    FileDataProvider dp(szInFilePath);
    FFmpegDemuxer demuxer(&dp);
    NvDecoder dec(cuContext, demuxer.GetWidth(), demuxer.GetHeight(), true, FFmpeg2NvCodecId(demuxer.GetVideoCodec()));
    FramePresenterType presenter(cuContext, demuxer.GetWidth(), demuxer.GetHeight());
    CUdeviceptr dpFrame = 0;
    ck(cuMemAlloc(&dpFrame, demuxer.GetWidth() * demuxer.GetHeight() * 4));
    int nVideoBytes = 0, nFrameReturned = 0, nFrame = 0;
    uint8_t* pVideo = NULL, ** ppFrame;

    do
    {
        demuxer.Demux(&pVideo, &nVideoBytes);
        dec.Decode(pVideo, nVideoBytes, &ppFrame, &nFrameReturned);
        if (!nFrame && nFrameReturned)
            LOG(INFO) << dec.GetVideoInfo();

        for (int i = 0; i < nFrameReturned; i++)
        {
            if (dec.GetBitDepth() == 8)
                Nv12ToBgra32((uint8_t*)ppFrame[i], dec.GetWidth(), (uint8_t*)dpFrame, 4 * dec.GetWidth(), dec.GetWidth(), dec.GetHeight());
            else
                P016ToBgra32((uint8_t*)ppFrame[i], 2 * dec.GetWidth(), (uint8_t*)dpFrame, 4 * dec.GetWidth(), dec.GetWidth(), dec.GetHeight());
            presenter.PresentDeviceFrame((uint8_t*)dpFrame, demuxer.GetWidth() * 4);
        }
        nFrame += nFrameReturned;
    } while (nVideoBytes);
    ck(cuMemFree(dpFrame));
    std::cout << "Total frame decoded: " << nFrame << std::endl;
    return 0;
}


Notice the line NvDecD3D<FramePresenterD3D11>(szInFilePath); ? I plan to call NvDecD3D() when a frame is received, so I commented out the call in setup() and moved it into my asio::async_read handler (see below).

void do_read_body()
{
    readBuffer.reserve(_read_msg.ReadLength);
    _read_msg.Body = readBuffer.data();
    auto self(shared_from_this());
    asio::async_read(_socket,
        asio::buffer(_read_msg.Body, _read_msg.ReadLength),
        [this, self](std::error_code ec, std::size_t /*length*/)
        {
            if (!ec)
            {
                if (_read_msg.CmdId == 0x5)
                {
                    std::cout << "Received a frame" << std::endl;

                    NvDecD3D<FramePresenterD3D11>(szInFilePath);
                }
                else
                {
                    std::cout << std::string(_read_msg.Body, 0, _read_msg.ReadLength) << std::endl;
                }

                do_read_header();
            }
            else
            {
                _room.leave(shared_from_this());
            }
        });
}


However, when I go to execute it, I get CUDA_ERROR_INVALID_CONTEXT when cuMemAlloc() is called. If I uncomment the call to NvDecD3D() inside setup() and call it from there, it does not error, however.

Do you have any idea what could be causing this problem? Perhaps it is related to ASIO.


-
FFMPEG/NVDEC Fails When Under 7 Frames
13 August 2021, by Meme Machine
I was looking at the examples from NVIDIA's repository, specifically their encoding and decoding projects. I downloaded the desktop duplication project, which lets you capture a certain number of frames from the desktop as raw H.264. I also got AppDecode, which decodes and displays frames from an input file. I noticed that if I try to capture only a single frame, it fails to decode the input file.


Here is the output


C:\Users\Admin>C:\Users\Admin\source\repos\video-sdk-samples\Samples\x64.Debug\AppDecD3d -d3d 11 -i C:\Users\Admin\source\repos\video-sdk-samples\nvEncDXGIOutputDuplicationSample\x64\Debug\ddatest_0.h264
GPU in use: NVIDIA GeForce RTX 2080 Super with Max-Q Design
Display with D3D11.
[INFO ][17:59:47] Media format: raw H.264 video (h264)
Session Initialization Time: 39 ms
[INFO ][17:59:47] Video Input Information
 Codec : AVC/H.264
 Frame rate : 30000/1000 = 30 fps
 Sequence : Progressive
 Coded size : [1920, 1088]
 Display area : [0, 0, 1920, 1080]
 Chroma : YUV 420
 Bit depth : 8
Video Decoding Params:
 Num Surfaces : 20
 Crop : [0, 0, 0, 0]
 Resize : 1920x1088
 Deinterlace : Weave

Total frame decoded: 7
Session Deinitialization Time: 10 ms

C:\Users\Admin>C:\Users\Admin\source\repos\video-sdk-samples\Samples\x64.Debug\AppDecD3d -d3d 11 -i C:\Users\Admin\source\repos\video-sdk-samples\nvEncDXGIOutputDuplicationSample\x64\Debug\ddatest_0.h264
GPU in use: NVIDIA GeForce RTX 2080 Super with Max-Q Design
Display with D3D11.
[INFO ][17:59:54] Media format: raw H.264 video (h264)
[h264 @ 0000023B8AB5C3A0] decoding for stream 0 failed
Session Initialization Time: 42 ms
[INFO ][17:59:54] Video Input Information
 Codec : AVC/H.264
 Frame rate : 30000/1000 = 30 fps
 Sequence : Progressive
 Coded size : [1920, 1088]
 Display area : [0, 0, 1920, 1080]
 Chroma : YUV 420
 Bit depth : 8
Video Decoding Params:
 Num Surfaces : 20
 Crop : [0, 0, 0, 0]
 Resize : 1920x1088
 Deinterlace : Weave

Total frame decoded: 6
Session Deinitialization Time: 10 ms



I started from 10 frames and counted down to 6, where it eventually failed. It is important for me to know why this happens, because I plan to implement this decoder in my project and will be feeding it single frames from a stream.


Oh, and I also noticed that the coded size is 1920x1088 instead of 1920x1080, according to the output log. I am not sure why that occurs or whether it is relevant.


-
Adding album cover art to FLAC audio files using `ffmpeg`
27 December 2022, by user5395338
I have ripped files from an audio CD I just bought. I ripped them using the Music app on my MacBook Pro (Catalina 10.15.6); the output format was .wav, as there was no option for FLAC. My plan was to change the format using ffmpeg:

% ffmpeg -v
ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers



Except for the "album cover artwork" addition, the .wav-to-.flac conversion implemented in the short bash script below seems to have worked as expected:

#!/bin/bash
for file in *.wav
do
echo $file 
ffmpeg -loglevel quiet -i "$file" -ar 48000 -c:a flac -disposition:v AnotherLand.png -vsync 0 -c:v png "${file/%.wav/.flac}"
done



A script very similar to this one worked some time ago on a series of FLAC-to-FLAC conversions I had to do to reduce the bit depth. However, in that case, the original FLAC files already had the artwork embedded. Since this script produced usable audio files, I decided to try adding the artwork with a second ffmpeg command.

I did some research, which informed me that there have been issues with ffmpeg (1, 2, 3, 4) when adding album artwork to FLAC files.

I have tried several commands given in the references above, but still have not found a way to add album artwork to my FLAC files. The following command came from a highly upvoted answer, which I felt would work, but it didn't:

% ffmpeg -i "01 Grave Walker.flac" -i ./AnotherLand.png -map 0:0 -map 1:0 -codec copy -id3v2_version 3 -metadata:s:v title="Album cover" -metadata:s:v comment="Cover (front)" output.flac

...


Input #0, flac, from '01 Grave Walker.flac':
 Metadata:
 encoder : Lavf58.76.100
 Duration: 00:06:59.93, start: 0.000000, bitrate: 746 kb/s
 Stream #0:0: Audio: flac, 48000 Hz, stereo, s16
Input #1, png_pipe, from './AnotherLand.png':
 Duration: N/A, bitrate: N/A
 Stream #1:0: Video: png, rgba(pc), 522x522, 25 fps, 25 tbr, 25 tbn, 25 tbc
File 'output.flac' already exists. Overwrite? [y/N] y
[flac @ 0x7fb4d701e800] Video stream #1 is not an attached picture. Ignoring
Output #0, flac, to 'output.flac':
 Metadata:
 encoder : Lavf58.76.100
 Stream #0:0: Audio: flac, 48000 Hz, stereo, s16
 Stream #0:1: Video: png, rgba(pc), 522x522, q=2-31, 25 fps, 25 tbr, 25 tbn, 25 tbc
 Metadata:
 title : Album cover
 comment : Cover (front)
Stream mapping:
 Stream #0:0 -> #0:0 (copy)
 Stream #1:0 -> #0:1 (copy)

...




I don't understand the error message: "Video stream #1 is not an attached picture." It seems to imply that the artwork is "attached" (embedded?) in the input file, but since I specified the artwork as a separate file, this makes no sense to me.
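One reading of that message: the flac muxer embeds a video stream as cover art only when that stream carries the attached_pic disposition, which -codec copy alone does not set, so the muxer drops the PNG stream. A hedged sketch of the same command with the disposition set explicitly (file names reused from the question; behavior may vary across ffmpeg versions, so treat this as something to try rather than a confirmed fix):

```shell
# Mark the PNG stream as an attached picture so the flac muxer
# embeds it as cover art instead of ignoring it.
ffmpeg -i "01 Grave Walker.flac" -i ./AnotherLand.png \
  -map 0:a -map 1:0 -codec copy \
  -disposition:v:0 attached_pic \
  -metadata:s:v title="Album cover" -metadata:s:v comment="Cover (front)" \
  output.flac
```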