
Other articles (44)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...) -
Videos
21 April 2011
Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name no names) and that each browser natively supports only certain video formats.
Its main advantage is that videos are handled natively by the browser, which makes it possible to do without Flash and (...) -
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
On other sites (6996)
-
FFMPEG/NVDEC Fails When Under 7 Frames
13 August 2021, by Meme Machine
I was looking at the examples from NVIDIA's repository, specifically their encoding and decoding projects. I downloaded the desktop duplication project, which lets you capture a given number of frames from the desktop as raw H.264. I also got AppDecode, which decodes and displays frames from an input file. I noticed that if I try to capture only a single frame, decoding the input file fails.


Here is the output


C:\Users\Admin>C:\Users\Admin\source\repos\video-sdk-samples\Samples\x64.Debug\AppDecD3d -d3d 11 -i C:\Users\Admin\source\repos\video-sdk-samples\nvEncDXGIOutputDuplicationSample\x64\Debug\ddatest_0.h264
GPU in use: NVIDIA GeForce RTX 2080 Super with Max-Q Design
Display with D3D11.
[INFO ][17:59:47] Media format: raw H.264 video (h264)
Session Initialization Time: 39 ms
[INFO ][17:59:47] Video Input Information
 Codec : AVC/H.264
 Frame rate : 30000/1000 = 30 fps
 Sequence : Progressive
 Coded size : [1920, 1088]
 Display area : [0, 0, 1920, 1080]
 Chroma : YUV 420
 Bit depth : 8
Video Decoding Params:
 Num Surfaces : 20
 Crop : [0, 0, 0, 0]
 Resize : 1920x1088
 Deinterlace : Weave

Total frame decoded: 7
Session Deinitialization Time: 10 ms

C:\Users\Admin>C:\Users\Admin\source\repos\video-sdk-samples\Samples\x64.Debug\AppDecD3d -d3d 11 -i C:\Users\Admin\source\repos\video-sdk-samples\nvEncDXGIOutputDuplicationSample\x64\Debug\ddatest_0.h264
GPU in use: NVIDIA GeForce RTX 2080 Super with Max-Q Design
Display with D3D11.
[INFO ][17:59:54] Media format: raw H.264 video (h264)
[h264 @ 0000023B8AB5C3A0] decoding for stream 0 failed
Session Initialization Time: 42 ms
[INFO ][17:59:54] Video Input Information
 Codec : AVC/H.264
 Frame rate : 30000/1000 = 30 fps
 Sequence : Progressive
 Coded size : [1920, 1088]
 Display area : [0, 0, 1920, 1080]
 Chroma : YUV 420
 Bit depth : 8
Video Decoding Params:
 Num Surfaces : 20
 Crop : [0, 0, 0, 0]
 Resize : 1920x1088
 Deinterlace : Weave

Total frame decoded: 6
Session Deinitialization Time: 10 ms



I started from 10 frames and counted down to 6, where it eventually failed. It is important for me to know why this happens, because I plan to integrate this decoder into my project and will be feeding it single frames from a stream.


Oh, and I also noticed that according to the output log the coded size is 1920 by 1088 instead of 1920 by 1080. Not sure why that is occurring or whether it is relevant.
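One possible explanation (my own assumption, not something the samples state) is that H.264 codes frames in 16x16 macroblocks, so the coded height is just the display height rounded up to the next multiple of 16 (1080 becomes 1088), and the reported display area of [0, 0, 1920, 1080] crops the padding back off. A minimal sketch of that rounding:

#include <iostream>

int main()
{
    // H.264 macroblocks are 16x16, so coded dimensions are rounded up
    // to a multiple of 16; the display area crops the padding away.
    int displayHeight = 1080;
    int codedHeight = (displayHeight + 15) & ~15; // -> 1088
    std::cout << displayHeight << " -> coded " << codedHeight << std::endl;
    return 0;
}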


-
CUDA_ERROR_INVALID_CONTEXT
15 August 2021, by Meme Machine
I am making a desktop-sharing application based on these repositories from NVIDIA.


https://github.com/NVIDIA/video-sdk-samples/tree/master/nvEncDXGIOutputDuplicationSample


https://github.com/NVIDIA/video-sdk-samples/blob/master/Samples/AppDecode/AppDecD3D/


https://github.com/NVIDIA/video-sdk-samples/tree/master/Samples/AppDecode/AppDecMem


I intend to have a setup function that is called once when Remote Desktop is selected, and a second function, called whenever a frame is received, that actually displays the received frames.


The functions below are nearly identical to the main() and NvDecD3D() functions found in the AppDecD3D and AppDecMem repositories:


CUcontext cuContext = NULL; // maybe it has to do with this variable?

int setup()
{
    char szInFilePath[256] = "C:\\Users\\Admin\\Desktop\\test.h264";
    int iGpu = 0;
    int iD3d = 0;
    try
    {
        //ParseCommandLine(argc, argv, szInFilePath, NULL, iGpu, NULL, &iD3d);
        CheckInputFile(szInFilePath);

        ck(cuInit(0));
        int nGpu = 0;
        ck(cuDeviceGetCount(&nGpu));
        if (iGpu < 0 || iGpu >= nGpu)
        {
            std::ostringstream err;
            err << "GPU ordinal out of range. Should be within [" << 0 << ", " << nGpu - 1 << "]" << std::endl;
            throw std::invalid_argument(err.str());
        }
        CUdevice cuDevice = 0;
        ck(cuDeviceGet(&cuDevice, iGpu));
        char szDeviceName[80];
        ck(cuDeviceGetName(szDeviceName, sizeof(szDeviceName), cuDevice));
        std::cout << "GPU in use: " << szDeviceName << std::endl;

        ck(cuCtxCreate(&cuContext, CU_CTX_SCHED_BLOCKING_SYNC, cuDevice));
        //NvDecD3D<FramePresenterD3D11>(szInFilePath);

        std::cout << "Display with D3D11." << std::endl;
    }
    catch (const std::exception& ex)
    {
        std::cout << ex.what();
        exit(1);
    }
    return 0;
}

template<class FramePresenterType, typename = std::enable_if<std::is_base_of<FramePresenterD3D, FramePresenterType>::value>>
int NvDecD3D(char* szInFilePath)
{
    FileDataProvider dp(szInFilePath);
    FFmpegDemuxer demuxer(&dp);
    NvDecoder dec(cuContext, demuxer.GetWidth(), demuxer.GetHeight(), true, FFmpeg2NvCodecId(demuxer.GetVideoCodec()));
    FramePresenterType presenter(cuContext, demuxer.GetWidth(), demuxer.GetHeight());
    CUdeviceptr dpFrame = 0;
    ck(cuMemAlloc(&dpFrame, demuxer.GetWidth() * demuxer.GetHeight() * 4));
    int nVideoBytes = 0, nFrameReturned = 0, nFrame = 0;
    uint8_t* pVideo = NULL, ** ppFrame;

    do
    {
        demuxer.Demux(&pVideo, &nVideoBytes);
        dec.Decode(pVideo, nVideoBytes, &ppFrame, &nFrameReturned);
        if (!nFrame && nFrameReturned)
            LOG(INFO) << dec.GetVideoInfo();

        for (int i = 0; i < nFrameReturned; i++)
        {
            if (dec.GetBitDepth() == 8)
                Nv12ToBgra32((uint8_t*)ppFrame[i], dec.GetWidth(), (uint8_t*)dpFrame, 4 * dec.GetWidth(), dec.GetWidth(), dec.GetHeight());
            else
                P016ToBgra32((uint8_t*)ppFrame[i], 2 * dec.GetWidth(), (uint8_t*)dpFrame, 4 * dec.GetWidth(), dec.GetWidth(), dec.GetHeight());
            presenter.PresentDeviceFrame((uint8_t*)dpFrame, demuxer.GetWidth() * 4);
        }
        nFrame += nFrameReturned;
    } while (nVideoBytes);
    ck(cuMemFree(dpFrame));
    std::cout << "Total frame decoded: " << nFrame << std::endl;
    return 0;
}


Notice the line NvDecD3D<FramePresenterD3D11>(szInFilePath)? I plan to call NvDecD3D() when a frame is received. So, I commented out the call in setup() and moved it to my asio::async_read function (see below).

void do_read_body()
{
    readBuffer.reserve(_read_msg.ReadLength);
    _read_msg.Body = readBuffer.data();
    auto self(shared_from_this());
    asio::async_read(_socket,
        asio::buffer(_read_msg.Body, _read_msg.ReadLength),
        [this, self](std::error_code ec, std::size_t /*length*/)
        {
            if (!ec)
            {
                if (_read_msg.CmdId == 0x5)
                {
                    std::cout << "Received a frame" << std::endl;

                    NvDecD3D<FramePresenterD3D11>(szInFilePath);
                }
                else
                {
                    std::cout << std::string(_read_msg.Body, 0, _read_msg.ReadLength) << std::endl;
                }

                do_read_header();
            }
            else
            {
                _room.leave(shared_from_this());
            }
        });
}


However, when I go to execute it, I get CUDA_ERROR_INVALID_CONTEXT when cuMemAlloc() is called. If I uncomment the call to NvDecD3D() inside setup() and call it from there, however, it does not error.

Do you have any idea what could be causing this problem? Perhaps it is related to the ASIO.
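One thing I suspect (an assumption on my part, not something the samples document) is that the CUDA context created in setup() is simply not current on the thread that runs the ASIO completion handler, since the driver API tracks the current context per thread. A minimal sketch of binding it before decoding, using the driver call cuCtxSetCurrent() and the sample's ck() error checker:

#include <cuda.h> // driver API: cuCtxSetCurrent

// Inside do_read_body()'s completion handler, before any CUDA work:
if (_read_msg.CmdId == 0x5)
{
    std::cout << "Received a frame" << std::endl;

    // cuContext was created in setup(), possibly on a different thread;
    // make it current on this ASIO thread before NvDecD3D() calls cuMemAlloc().
    ck(cuCtxSetCurrent(cuContext));

    NvDecD3D<FramePresenterD3D11>(szInFilePath);
}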


-
imageJpeg and FFMPEG in Windows vs Linux
25 January 2020, by Tanmay Gawankar
I have working code for converting an image to a 5-second video using FFMPEG.
The problem is that the code only works for downloaded images; FFMPEG doesn't convert the image to video when the image is generated programmatically, ONLY IN LINUX.
PHP code
<?php
$downloadedF="folder/d.jpg";
$downloadedV="folder/d.mp4";
$renderedF="folder/r.jpg";
$renderedV="folder/r.mp4";
$op_d=shell_exec("ffmpeg -r 1/5 -i ".$downloadedF." -c:v libx264 -vf fps=25 -pix_fmt yuv420p ".$downloadedV);
$op_r=shell_exec("ffmpeg -r 1/5 -i ".$renderedF." -c:v libx264 -vf fps=25 -pix_fmt yuv420p ".$renderedV);
echo "Errors:<br />".$op_d."<br /><br />".$op_r;
?>

The d.mp4 (the output for the downloaded image) is generated on both Windows and Linux.
The r.mp4 (the output for the rendered image) is generated only on Windows; a 48-byte empty file is created on Linux.

System:
XAMPP on Windows 10 (development)
GoDaddy Starter plan hosting - Linux (probably Red Hat) (production)

File structure:
root folder
|-index.php
|-ffmpeg (will be ffmpeg.exe in Windows)
|-folder
|-d.jpg (random downloaded image from google)
|-d.mp4 (Will be created - video converted from downloaded image)
|-r.jpg (rendered image using php imagejpeg)
|-r.mp4 (Will be created - video converted from rendered image)

Rendered image code:
$imgFF = imagecreatetruecolor($videoWidth, $videoHeight);
//---adding text using imagettftext();
imagejpeg($imgFF, $path."-000.jpg"); //for this example, I copied the output to folder as r.jpg

Edit 1:
The return value of shell_exec has no error/output, even after adding error_reporting(E_ALL); ini_set('display_errors', 1);

Edit 2:
The log for the successful conversion can be found Here.
The log for the unsuccessful conversion of the rendered image can be found Here.

Note:
• The scenario here is minimized and the code is separated out from a longer codebase.
• In the Linux command I add ./ before ffmpeg.