
Other articles (55)
-
What is a form mask?
13 June 2013 — A form mask is the customization of the publication form for media, sections, news items, editorials, and links to external sites.
Each object's publication form can therefore be customized.
To access the form-field customization, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires".
Then select the form to modify by clicking on its object type. (...) -
MediaSPIP v0.2
21 June 2013 — MediaSPIP 0.2 is the first stable MediaSPIP release.
Its official release date is June 21, 2013, and it is announced here.
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to perform other manual (...) -
Support audio et vidéo HTML5
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match the chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (7850)
-
How to fill an AVFrame structure in order to encode a YUY2 (or UYVY) video into H265
22 April, by Rich Deng — I want to compress a video stream in YUY2 or UYVY format to, say, H265. If I understand the answers in this thread correctly, I should be able to use av_image_fill_arrays() to fill the data and linesize arrays of an AVFrame object, call avcodec_send_frame(), and then avcodec_receive_packet() to get the encoded data:

bool VideoEncoder::Init(const AM_MEDIA_TYPE* pMediaType)
{
 // we should have a valid pointer
 if (pMediaType)
 {
 m_mtInput.Empty();
 m_mtInput.Set(*pMediaType);
 }
 else
 return false;

 // find encoder
 m_pCodec = m_spAVCodecDlls->avcodec_find_encoder(AV_CODEC_ID_HEVC);
 m_pCodecCtx = m_spAVCodecDlls->avcodec_alloc_context3(m_pCodec);
 if (!m_pCodec || !m_pCodecCtx)
 {
 Log.Log(_T("Failed to find or allocate codec context!"));
 return false;
 }

 AVPixelFormat ePixFmtInput = GetInputPixelFormat();
 if (CanConvertInputFormat(ePixFmtInput) == false)
 {
 return false;
 }

 // we are able to convert
 // so continue with setting it up
 int nWidth = m_mtInput.GetWidth();
 int nHeight = m_mtInput.GetHeight();

 // Set encoding parameters

 // Set bitrate (4 Mbps for 1920x1080)
 m_pCodecCtx->bit_rate = (((int64)4000000 * nWidth / 1920) * nHeight / 1080); 

 m_pCodecCtx->width = nWidth; 
 m_pCodecCtx->height = nHeight;


 // use reference time as time_base
 m_pCodecCtx->time_base.den = 10000000; 
 m_pCodecCtx->time_base.num = 1;

 SetAVRational(m_pCodecCtx->framerate, m_mtInput.GetFrameRate());
 //m_pCodecCtx->framerate = (AVRational){ 30, 1 };
 m_pCodecCtx->gop_size = 10; // GOP size
 m_pCodecCtx->max_b_frames = 1;

 // set pixel format
 m_pCodecCtx->pix_fmt = ePixFmtInput; // YUV 4:2:0 format or YUV 4:2:2

 // Open the codec
 if (m_spAVCodecDlls->avcodec_open2(m_pCodecCtx, m_pCodec, NULL) < 0)
 {
 return false;
 }

 return true;
}

bool VideoEncoder::AllocateFrame()
{

 m_pFrame = m_spAVCodecDlls->av_frame_alloc();
 if (m_pFrame == NULL)
 {
 Log.Log(_T("Failed to allocate frame object!"));
 return false;
 }

 m_pFrame->format = m_pCodecCtx->pix_fmt;
 m_pFrame->width = m_pCodecCtx->width;
 m_pFrame->height = m_pCodecCtx->height;

 m_pFrame->time_base.den = m_pCodecCtx->time_base.den;
 m_pFrame->time_base.num = m_pCodecCtx->time_base.num;


 return true;
}

bool VideoEncoder::Encode(IMediaSample* pSample)
{
 if (m_pFrame == NULL)
 {
 return false;
 }

 // get the time stamps
 REFERENCE_TIME rtStart, rtEnd;
 HRESULT hr = pSample->GetTime(&rtStart, &rtEnd);
 m_rtInputFrameStart = rtStart;
 m_rtInputFrameEnd = rtEnd;


 // get length
 int nLength = pSample->GetActualDataLength();

 // get pointer to actual sample data
 uint8_t* pData = NULL;
 hr = pSample->GetPointer(&pData);

 if (FAILED(hr) || NULL == pData)
 return false;

 m_pFrame->flags = (S_OK == pSample->IsSyncPoint()) ? (m_pFrame->flags | AV_FRAME_FLAG_KEY) : (m_pFrame->flags & ~AV_FRAME_FLAG_KEY);

 // clear old data
 for (int n = 0; n < AV_NUM_DATA_POINTERS; n++)
 {
 m_pFrame->data[n] = NULL;// (uint8_t*)aryData[n];
 m_pFrame->linesize[n] = 0;// = aryStride[n];
 }


 int nRet = 0;
 int nStride = m_mtInput.GetStride();
 AVPixelFormat ePixFmt = (AVPixelFormat)m_pFrame->format;
 nRet = m_spAVCodecDlls->av_image_fill_arrays(m_pFrame->data, m_pFrame->linesize, pData, ePixFmt, m_pFrame->width, m_pFrame->height, 32);
 if (nRet < 0)
 {
 return false;
 }

 m_pFrame->pts = (int64_t) rtStart;
 m_pFrame->duration = rtEnd - rtStart;
 nRet = m_spAVCodecDlls->avcodec_send_frame(m_pCodecCtx, m_pFrame);
 if (nRet == AVERROR(EAGAIN))
 {
 ReceivePacket();
 nRet = m_spAVCodecDlls->avcodec_send_frame(m_pCodecCtx, m_pFrame);
 }

 if (nRet < 0)
 {
 return false;
 }

 // Receive the encoded packets
 ReceivePacket();

 return true;
}

bool VideoEncoder::ReceivePacket()
{
 bool bRet = true;
 AVPacket* pkt = m_spAVCodecDlls->av_packet_alloc();
 while (m_spAVCodecDlls->avcodec_receive_packet(m_pCodecCtx, pkt) == 0)
 {
 // Write pkt->data to output file or stream
 m_pCallback->VideoEncoderWriteEncodedSample(pkt);
 if (m_OutFile.IsOpen())
 m_OutFile.Write(pkt->data, pkt->size);
 m_spAVCodecDlls->av_packet_unref(pkt);
 }
 m_spAVCodecDlls->av_packet_free(&pkt);

 return bRet;
}



I must have done something wrong. The result is not correct. For example, rather than a video with a person's face showing in the middle of the screen, I get a mostly green screen with parts of the face showing up at the lower left and lower right corners.


Can someone help me?


-
dxva2_vc1: fix signaling of intensity compensation values
12 December 2013, by Hendrik Leppkes -
Getting List of default argument values if not used in FFMPEG command execution
12 July 2020, by Pradeep Prabhu — When I use FFmpeg to capture a live IPTV stream on my Mac, I generally end up with an interrupted video. When I pass these four arguments, the capture is successful.


-reconnect 1 -reconnect_at_eof 1 -reconnect_streamed 1 -reconnect_delay_max 2


I don't want to specify these arguments every time, so I decided to change the default values in the source code. Unsure if I have done it right - I updated http.c, which contains these arguments, and re-compiled FFmpeg successfully.


I want to know whether my changes were applied. Is there a way I can list all the default values of the arguments? I could use this compiled version of FFmpeg for a week and determine whether the fix took effect, but I was wondering if there is a quicker and easier way to do it.


If this is successful, I can use this version of FFmpeg for Emby & TellyTV.
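Assuming a reasonably recent FFmpeg build, the per-protocol AVOptions, including their current default values, can be printed directly, which should show whether the rebuilt http.c defaults took effect. The stream URL below is a placeholder, not a real endpoint:

```shell
# Print all AVOptions of the HTTP protocol with their current defaults
# (the reconnect options should appear here with the values compiled in):
ffmpeg -hide_banner -h protocol=http

# For comparison, the capture command with the options passed explicitly:
ffmpeg -reconnect 1 -reconnect_at_eof 1 -reconnect_streamed 1 \
       -reconnect_delay_max 2 \
       -i "http://example.com/live/stream.m3u8" \
       -c copy capture.ts
```

If the `-h protocol=http` output shows the new defaults, the explicit flags can be dropped from the capture command.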