
Media (2)
-
Valkaama DVD Label
4 October 2011, by
Updated: February 2013
Language: English
Type: Image
-
Podcasting Legal Guide
16 May 2011, by
Updated: May 2011
Language: English
Type: Text
Other articles (27)
-
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
APPENDIX: The plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs several plugins in addition to those of the channels in order to work properly: the plugin Gestion de la mutualisation; the plugin inscription3, to manage registrations and requests to create a shared instance when users sign up; the plugin verifier, which provides a field-verification API (used by inscription3); the plugin champs extras v2, required by inscription3 (...)
-
Emballe médias: what is it for?
4 February 2011, by
This plugin manages sites for publishing documents of all types.
It creates "media": a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a "media" article;
On other sites (7128)
-
Create drawbox with ffmpeg between specific seconds (without reencoding whole video - faster)
31 March 2022, by protter
My plan was to put a transparent red box on a video. The box should only be present from seconds 1 to 45.
But if the video is 3 hours long, the process takes a long time, even though it only has to process 45 seconds.


My first attempt takes too long:


ffmpeg -i %1 -vf drawbox=0:9*ih/10:iw:ih/10:t=fill:color=red@0.5:enable='between(t,1,45)' "%~dp0transpred\%~n1%~x1"


Then I tried splitting the video into two parts: put the box on the first part, then join the two back together.


ffmpeg -ss 00:00:00.0000 -i %1 -to 00:00:45.0000 -vf drawbox=0:9*ih/10:iw:ih/10:t=fill:color=red@0.5:enable='between(t,1,45)' "%~dp0transpred\%~n1A%~x1"


ffmpeg -ss 00:00:45.0000 -i %1 -c:v copy -c:a copy -avoid_negative_ts make_zero "%~dp0transpred\%~n1B%~x1"


But I don't even have to try joining these two, because they are not split at exactly that second. I have read that this is due to timestamps and the separate video and audio streams.


Now I'm trying an approach where I create a separate stream with the bar and then overlay it on the video. I haven't quite managed that yet, and I don't know if it's faster.
Shortening the video is very fast.
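The direction I am trying looks roughly like this (untested sketch; it assumes a 1920x1080 source, so the bottom tenth is 1920x108, and it still re-encodes the whole video stream, so it may be no faster than drawbox):

ffmpeg -i %1 -filter_complex "color=c=red@0.5:s=1920x108:d=45,format=yuva420p[bar];[0:v][bar]overlay=x=0:y=main_h-overlay_h:enable='between(t,1,45)':eof_action=pass[v]" -map "[v]" -map 0:a -c:a copy "%~dp0transpred\%~n1%~x1"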


EDIT (added later as a replacement for a comment)


Thanks for your help; I have almost done it with a slightly different approach. Unfortunately, the second part now always has no sound, no matter whether I join A and B (B has no sound) or B and A (A has no sound).


- First split with mkvmerge, so I have no worries about keyframes and get the exact time:

mkvmerge --split timestamps:00:00:45.100 A.MKV -o splitmkm.mkv

- Then add the bar (black, because it is easier to test):

ffmpeg -i splitmkm-001.mkv -vf drawbox=0:9*ih/10:iw:ih/10:t=fill BAR1.MKV

- Merge (mkvmerge ends with an error, so I use the concat demuxer with a list.txt like the sketch after this list):

ffmpeg -safe 0 -f concat -i list.txt -c copy output1.mkv
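For reference, a minimal list.txt for that concat step might look like this (file names assumed from the commands above; mkvmerge typically numbers the second part splitmkm-002.mkv):

file 'BAR1.MKV'
file 'splitmkm-002.mkv'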








EDIT (Answer to kesh)


This was the error: "Again, audio codec config's must match across all your concat files." The drawbox step had changed the audio codec from AC-3 to Vorbis.
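In hindsight, a shorter fix might have been to keep the original audio untouched during the drawbox step, so the codecs already match for concat. An untested sketch:

ffmpeg -i splitmkm-001.mkv -vf drawbox=0:9*ih/10:iw:ih/10:t=fill -c:a copy BAR1.MKV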

The procedure is now:


- mkvtoolnix\mkvmerge --split timestamps:00:00:05.100 %1 -o A_splitmkm.mkv
With mkvmerge I get an exact split at the requested time, and I don't have to learn about keyframes.
- ffmpeg -i A_splitmkm-001.mkv -vf drawbox=0:9*ih/10:iw:ih/10:t=fill:color=red A_BARmkm.MKV
Create the bar.
- ffmpeg -i A_BARmkm.MKV -i A_splitmkm-001.mkv -map 0:v -map 1 -map -1:v -c copy A_BARwithAudio.mkv
Redo the audio that drawbox changed: take the video from the drawbox output and everything except the video (i.e. the original AC-3 audio) from the split file.
- ffmpeg -safe 0 -f concat -i list.txt -map 0 -c copy A_output1.mkv
Merge.










Now everything works.
Thanks a lot!



-
CUDA_ERROR_INVALID_CONTEXT
15 August 2021, by Meme Machine
I am making a desktop-sharing application based on these repositories from NVIDIA:


https://github.com/NVIDIA/video-sdk-samples/tree/master/nvEncDXGIOutputDuplicationSample


https://github.com/NVIDIA/video-sdk-samples/blob/master/Samples/AppDecode/AppDecD3D/


https://github.com/NVIDIA/video-sdk-samples/tree/master/Samples/AppDecode/AppDecMem


I intend to have a setup function that is called once when Remote Desktop is selected, and a second function, called whenever a frame is received, that actually displays the received frames.


The functions below are nearly identical to the main() and NvDecD3D() functions found in the AppDecD3D and AppDecMem samples:


CUcontext cuContext = NULL; // maybe it has to do with this variable?

int setup()
{
    char szInFilePath[256] = "C:\\Users\\Admin\\Desktop\\test.h264";
    int iGpu = 0;
    int iD3d = 0;
    try
    {
        //ParseCommandLine(argc, argv, szInFilePath, NULL, iGpu, NULL, &iD3d);
        CheckInputFile(szInFilePath);

        ck(cuInit(0));
        int nGpu = 0;
        ck(cuDeviceGetCount(&nGpu));
        if (iGpu < 0 || iGpu >= nGpu)
        {
            std::ostringstream err;
            err << "GPU ordinal out of range. Should be within [" << 0 << ", " << nGpu - 1 << "]" << std::endl;
            throw std::invalid_argument(err.str());
        }
        CUdevice cuDevice = 0;
        ck(cuDeviceGet(&cuDevice, iGpu));
        char szDeviceName[80];
        ck(cuDeviceGetName(szDeviceName, sizeof(szDeviceName), cuDevice));
        std::cout << "GPU in use: " << szDeviceName << std::endl;

        ck(cuCtxCreate(&cuContext, CU_CTX_SCHED_BLOCKING_SYNC, cuDevice));
        //NvDecD3D<FramePresenterD3D11>(szInFilePath);

        std::cout << "Display with D3D11." << std::endl;
    }
    catch (const std::exception& ex)
    {
        std::cout << ex.what();
        exit(1);
    }
    return 0;
}

template<class FramePresenterType, typename = std::enable_if<std::is_base_of<FramePresenterD3D, FramePresenterType>::value>>
int NvDecD3D(char* szInFilePath)
{
    FileDataProvider dp(szInFilePath);
    FFmpegDemuxer demuxer(&dp);
    NvDecoder dec(cuContext, demuxer.GetWidth(), demuxer.GetHeight(), true, FFmpeg2NvCodecId(demuxer.GetVideoCodec()));
    FramePresenterType presenter(cuContext, demuxer.GetWidth(), demuxer.GetHeight());
    CUdeviceptr dpFrame = 0;
    ck(cuMemAlloc(&dpFrame, demuxer.GetWidth() * demuxer.GetHeight() * 4));
    int nVideoBytes = 0, nFrameReturned = 0, nFrame = 0;
    uint8_t* pVideo = NULL, ** ppFrame;

    do
    {
        demuxer.Demux(&pVideo, &nVideoBytes);
        dec.Decode(pVideo, nVideoBytes, &ppFrame, &nFrameReturned);
        if (!nFrame && nFrameReturned)
            LOG(INFO) << dec.GetVideoInfo();

        for (int i = 0; i < nFrameReturned; i++)
        {
            if (dec.GetBitDepth() == 8)
                Nv12ToBgra32((uint8_t*)ppFrame[i], dec.GetWidth(), (uint8_t*)dpFrame, 4 * dec.GetWidth(), dec.GetWidth(), dec.GetHeight());
            else
                P016ToBgra32((uint8_t*)ppFrame[i], 2 * dec.GetWidth(), (uint8_t*)dpFrame, 4 * dec.GetWidth(), dec.GetWidth(), dec.GetHeight());
            presenter.PresentDeviceFrame((uint8_t*)dpFrame, demuxer.GetWidth() * 4);
        }
        nFrame += nFrameReturned;
    } while (nVideoBytes);
    ck(cuMemFree(dpFrame));
    std::cout << "Total frame decoded: " << nFrame << std::endl;
    return 0;
}


Notice the line NvDecD3D<FramePresenterD3D11>(szInFilePath);? I plan to call NvDecD3D() when a frame is received, so I commented out the call in setup() and moved it into my asio::async_read handler (see below).

void do_read_body()
{
    readBuffer.reserve(_read_msg.ReadLength);
    _read_msg.Body = readBuffer.data();
    auto self(shared_from_this());
    asio::async_read(_socket,
        asio::buffer(_read_msg.Body, _read_msg.ReadLength),
        [this, self](std::error_code ec, std::size_t /*length*/)
        {
            if (!ec)
            {
                if (_read_msg.CmdId == 0x5)
                {
                    std::cout << "Received a frame" << std::endl;

                    NvDecD3D<FramePresenterD3D11>(szInFilePath);
                }
                else
                {
                    std::cout << std::string(_read_msg.Body, 0, _read_msg.ReadLength) << std::endl;
                }

                do_read_header();
            }
            else
            {
                _room.leave(shared_from_this());
            }
        });
}


However, when I go to execute it, I get CUDA_ERROR_INVALID_CONTEXT when cuMemAlloc() is called. If I uncomment the call to NvDecD3D() inside setup() and call it from there, however, it does not error.

Do you have any idea what could be causing this problem? Perhaps it is related to ASIO.
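One detail I am looking into (an assumption on my part, not verified): the CUDA driver API binds a context per thread, and cuCtxCreate() only makes the new context current on the thread that created it. Since the asio completion handler presumably runs on a different thread than setup(), no context would be current there when cuMemAlloc() runs. A minimal, untested sketch of binding the global context in the handler first:

if (_read_msg.CmdId == 0x5)
{
    std::cout << "Received a frame" << std::endl;

    // Assumption: no CUDA context is current on this asio worker thread,
    // so bind the one created in setup() before any driver-API calls.
    ck(cuCtxSetCurrent(cuContext));
    NvDecD3D<FramePresenterD3D11>(szInFilePath);
}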


-
FFMPEG/NVDEC Fails When Under 7 Frames
13 August 2021, by Meme Machine
I was looking at the examples from NVIDIA's repository, specifically their encoding and decoding projects. I downloaded the desktop duplication project, which lets you capture a certain number of frames from the desktop as raw H.264. I also got AppDecode, which decodes and displays frames from an input file. I noticed that if I try to capture only a single frame, it fails to decode the input file.


Here is the output


C:\Users\Admin>C:\Users\Admin\source\repos\video-sdk-samples\Samples\x64.Debug\AppDecD3d -d3d 11 -i C:\Users\Admin\source\repos\video-sdk-samples\nvEncDXGIOutputDuplicationSample\x64\Debug\ddatest_0.h264
GPU in use: NVIDIA GeForce RTX 2080 Super with Max-Q Design
Display with D3D11.
[INFO ][17:59:47] Media format: raw H.264 video (h264)
Session Initialization Time: 39 ms
[INFO ][17:59:47] Video Input Information
 Codec : AVC/H.264
 Frame rate : 30000/1000 = 30 fps
 Sequence : Progressive
 Coded size : [1920, 1088]
 Display area : [0, 0, 1920, 1080]
 Chroma : YUV 420
 Bit depth : 8
Video Decoding Params:
 Num Surfaces : 20
 Crop : [0, 0, 0, 0]
 Resize : 1920x1088
 Deinterlace : Weave

Total frame decoded: 7
Session Deinitialization Time: 10 ms

C:\Users\Admin>C:\Users\Admin\source\repos\video-sdk-samples\Samples\x64.Debug\AppDecD3d -d3d 11 -i C:\Users\Admin\source\repos\video-sdk-samples\nvEncDXGIOutputDuplicationSample\x64\Debug\ddatest_0.h264
GPU in use: NVIDIA GeForce RTX 2080 Super with Max-Q Design
Display with D3D11.
[INFO ][17:59:54] Media format: raw H.264 video (h264)
[h264 @ 0000023B8AB5C3A0] decoding for stream 0 failed
Session Initialization Time: 42 ms
[INFO ][17:59:54] Video Input Information
 Codec : AVC/H.264
 Frame rate : 30000/1000 = 30 fps
 Sequence : Progressive
 Coded size : [1920, 1088]
 Display area : [0, 0, 1920, 1080]
 Chroma : YUV 420
 Bit depth : 8
Video Decoding Params:
 Num Surfaces : 20
 Crop : [0, 0, 0, 0]
 Resize : 1920x1088
 Deinterlace : Weave

Total frame decoded: 6
Session Deinitialization Time: 10 ms



I started from 10 frames and counted down to 6, where it eventually failed. It is important for me to know why this happens, because I plan to implement this decoder in my project and will be feeding it single frames from a stream.
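One thing I am considering (an assumption based on the decode loop in the samples, not a confirmed cause): the parser seems to buffer a few frames for reordering and only releases them when flushed, which the sample loop triggers by passing zero bytes to Decode() at end of stream. A rough, untested sketch, with pSingleFrame/nSingleFrameBytes as placeholder names for one encoded frame from my stream:

// Untested sketch: feed one encoded frame, then flush the parser so any
// frames buffered for reordering are released. The sample loop signals
// end of stream by calling Decode() with nVideoBytes == 0.
uint8_t** ppFrame = nullptr;
int nFrameReturned = 0;
dec.Decode(pSingleFrame, nSingleFrameBytes, &ppFrame, &nFrameReturned);
dec.Decode(nullptr, 0, &ppFrame, &nFrameReturned); // flush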


Oh, and I also noticed that, according to the output log, the coded size is 1920x1088 instead of 1920x1080. I am not sure why that is occurring or whether it is relevant.