
Media (1)
-
The pirate bay from Belgium
1 April 2013
Updated: April 2013
Language: French
Type: Image
Other articles (59)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
implementation costs to be shared between several different projects / individuals
rapid deployment of multiple unique sites
creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
On other sites (9247)
-
Create video file by mixing video and audio byte arrays FFmpeg & C++
20 January 2021, by Sergey Zinovev. I capture audio and video.


Video is captured using the Desktop Duplication API and, as a result, I get Texture2D objects.
These textures are essentially char arrays.


m_immediateContext->CopyResource(currTexture, m_acquiredDesktopImage.Get());

// Map the staging texture so the CPU can read the captured BGRA pixels.
D3D11_MAPPED_SUBRESOURCE resource;
UINT subresource = D3D11CalcSubresource(0, 0, 0);

m_immediateContext->Map(currTexture, subresource, D3D11_MAP_READ_WRITE, 0, &resource);

// Copy the mapped pixels (4 bytes per pixel) into a contiguous buffer.
// Note: if resource.RowPitch differs from m_desc.Width * 4, the copy should be done row by row.
uchar * buffer = new uchar[m_desc.Height * m_desc.Width * 4];
const uchar * mappedData = static_cast<const uchar *>(resource.pData);
memcpy(buffer, mappedData, m_desc.Height * m_desc.Width * 4);


The Texture2D data is then converted to cv::Mat and the video is written using OpenCV.


Audio is captured using WASAPI and, as a result, I get samples.


BYTE * buffer = new BYTE[(numFramesAvailable * pwfx->nBlockAlign)];
memcpy(buffer, pData, numFramesAvailable * pwfx->nBlockAlign);



These samples are byte arrays, which are then written to a WAV file.


As a result, I get two files, video and audio, which are merged using FFmpeg.
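
The merge step described here would typically be a command along these lines (the file names are placeholders, not taken from the question):

ffmpeg -i video.avi -i audio.wav -c:v copy -c:a aac output.mp4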


I want to skip the creation of the intermediate video and audio files and instead create one file composed of two streams (video and audio) directly from the raw data.


To do this I need help with the FFmpeg code, specifically with creating and configuring the correct output context and output streams, and with encoding the raw data.


I've already studied FFmpeg's doc/examples, but still can't get working code, so I really need your help.
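
As a rough starting point, here is a minimal sketch modelled on the muxing example in FFmpeg's doc/examples: one output context with an H.264 video stream and an AAC audio stream, raw frames encoded and their packets interleaved into a single file. The output name "capture.mp4", the 1920x1080 / 30 fps video parameters and the 48 kHz stereo audio parameters are assumptions for illustration, not values from the question; error handling and the actual frame filling (BGRA-to-YUV conversion with sws_scale, packing WASAPI samples into audio AVFrames) are omitted.

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

struct OutStream {
    AVStream*       st  = nullptr;
    AVCodecContext* enc = nullptr;
};

// Create one stream in the output context and open its encoder.
static OutStream addStream(AVFormatContext* oc, AVCodecID id, bool isVideo) {
    OutStream o;
    const AVCodec* codec = avcodec_find_encoder(id);
    o.st  = avformat_new_stream(oc, nullptr);
    o.enc = avcodec_alloc_context3(codec);

    if (isVideo) {
        o.enc->width     = 1920;                 // assumed capture size
        o.enc->height    = 1080;
        o.enc->pix_fmt   = AV_PIX_FMT_YUV420P;   // BGRA from DDA must be converted first
        o.enc->time_base = AVRational{1, 30};    // assumed 30 fps
    } else {
        o.enc->sample_fmt  = AV_SAMPLE_FMT_FLTP; // native AAC sample format
        o.enc->sample_rate = 48000;              // assumed WASAPI mix-format rate
        av_channel_layout_default(&o.enc->ch_layout, 2); // stereo (FFmpeg 5.1+ API)
        o.enc->time_base = AVRational{1, o.enc->sample_rate};
    }
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        o.enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    avcodec_open2(o.enc, codec, nullptr);
    avcodec_parameters_from_context(o.st->codecpar, o.enc);
    o.st->time_base = o.enc->time_base;
    return o;
}

// Send one raw frame to the encoder and write the resulting packets.
static void encodeAndWrite(AVFormatContext* oc, OutStream& o, AVFrame* frame) {
    avcodec_send_frame(o.enc, frame);            // frame == nullptr flushes the encoder
    AVPacket* pkt = av_packet_alloc();
    while (avcodec_receive_packet(o.enc, pkt) == 0) {
        av_packet_rescale_ts(pkt, o.enc->time_base, o.st->time_base);
        pkt->stream_index = o.st->index;
        av_interleaved_write_frame(oc, pkt);
    }
    av_packet_free(&pkt);
}

int main() {
    AVFormatContext* oc = nullptr;
    avformat_alloc_output_context2(&oc, nullptr, nullptr, "capture.mp4"); // hypothetical output

    OutStream video = addStream(oc, AV_CODEC_ID_H264, true);
    OutStream audio = addStream(oc, AV_CODEC_ID_AAC,  false);

    if (!(oc->oformat->flags & AVFMT_NOFILE))
        avio_open(&oc->pb, "capture.mp4", AVIO_FLAG_WRITE);
    avformat_write_header(oc, nullptr);

    // Capture loop (omitted): wrap each BGRA texture / WASAPI buffer in an AVFrame,
    // set frame->pts in the encoder's time_base, then call encodeAndWrite(oc, video, f)
    // or encodeAndWrite(oc, audio, f). Finally flush both encoders with a nullptr frame.

    encodeAndWrite(oc, video, nullptr);
    encodeAndWrite(oc, audio, nullptr);
    av_write_trailer(oc);

    if (!(oc->oformat->flags & AVFMT_NOFILE))
        avio_closep(&oc->pb);
    avcodec_free_context(&video.enc);
    avcodec_free_context(&audio.enc);
    avformat_free_context(oc);
    return 0;
}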


-
Problem with processing FFmpeg video, the output video is totally black in the first 3 seconds
13 January 2021, by Trần Minh Tuấn. I want to make a slideshow using the concat demuxer, as described at this link:
https://trac.ffmpeg.org/wiki/Slideshow


Here is the text file preinputFiles.txt:


file 'path a'
duration 2
file 'path b'
duration 2
file 'path c'
duration 2
file 'path c'



and my command array is:


String[] cmd = new String[]{
 "-f","concat","-i",
 textFile,
 "-vsync", "vfr", "-pix_fmt", "yuv420p",
 dest.getAbsolutePath()};
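
For reference, this argument array corresponds roughly to the following plain command line, with preinputFiles.txt standing in for textFile and an assumed output name for dest.getAbsolutePath():

ffmpeg -f concat -i preinputFiles.txt -vsync vfr -pix_fmt yuv420p output.mp4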



The video is totally black for the first 3 seconds and plays from 3s to 9s, even though the gallery shows it as a 6-second-long video.


Thanks for the help!


-
ffmpeg encoding a video with time_base not equal to framerate does not work in HTML5 video players
1 July 2019, by Gilgamesh22. I have a time_base of 90000 with a frame rate of 30. I can generate an h264 video and have it work in VLC, but this video does not work in the browser player. If I change the time_base to 30 it works fine.
Note: I am changing the frame->pts appropriately to match the time_base.
Note: the video does not have an audio stream.

//header.h
AVCodecContext *cctx;
AVStream* stream;

Here is the non-working example code:

//source.cpp
stream->time_base = { 1, 90000 };
stream->r_frame_rate = { fps, 1 };
stream->avg_frame_rate = { fps, 1 };
cctx->codec_id = codecId;
cctx->time_base = { 1, 90000 };
cctx->framerate = { fps, 1 };
// ......
// add frame code later on; timestamps are in milliseconds
frame->pts = (timestamp - startTimeStamp) * 90;

Here is the working example code:

//source.cpp
stream->time_base = { 1, fps };
stream->r_frame_rate = { fps, 1 };
stream->avg_frame_rate = { fps, 1 };
cctx->codec_id = codecId;
cctx->time_base = { 1, fps };
cctx->framerate = { fps, 1 };
// ......
// add frame code; timestamps are in milliseconds
frame->pts = (timestamp - startTimeStamp) / (1000 / fps);

Any ideas on why the second example works and the first does not in the HTML5 video player?
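
As a side note on the arithmetic (not part of the original question): the same millisecond timestamps can be rescaled into either time_base with av_rescale_q, which makes the relationship between the two hand-written formulas explicit. A small sketch, assuming the timestamps come from the same clock as above:

extern "C" {
#include <libavutil/rational.h>
#include <libavutil/mathematics.h>
}

// Convert a capture timestamp in milliseconds into ticks of the given time_base.
static int64_t ptsFromMs(int64_t timestampMs, int64_t startMs, AVRational timeBase) {
    // av_rescale_q(a, bq, cq) computes a * bq / cq with 64-bit-safe rounding;
    // {1, 1000} expresses "milliseconds" as a time base.
    return av_rescale_q(timestampMs - startMs, AVRational{1, 1000}, timeBase);
}

// With timeBase = {1, 90000} this is (timestamp - startTimeStamp) * 90, the factor
// used in the first example; with timeBase = {1, fps} it is the rounded equivalent
// of (timestamp - startTimeStamp) / (1000 / fps) from the second example.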