
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (83)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page -
Customizing categories
21 June 2013, by
Category creation form
For those who know SPIP well, a category can be likened to a section ("rubrique").
For a document of type category, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration of form templates.
For a document of type media, the fields not displayed by default are: Short description
It is also in this configuration section that one can specify the (...) -
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
On other sites (8863)
-
FFmpeg gpl (ffmpeg-4.2.1-win64-dev_and_shared) version gives different decode results (cmd query vs code)
31 May 2021, by Aleksey Timoshchenko
*All the source images I put on my Google Drive due to SO restrictions (just click on the links provided in the text)


The problem is that for .h264 I use two implementations (a cmd query and code, depending on the task) that give me different results, and I don't see any reason for this.

Before anything else, some context: I have a .bmp Bayer image; I debayer it and compress it to .h264 with the following script:

ffmpeg -y -hide_banner -i orig_bayer.bmp -vf format=gray -f rawvideo pipe: | ffmpeg -hide_banner -y -framerate 30 -f rawvideo -pixel_format bayer_rggb8 -video_size 4096x3000 -i pipe: -c:v hevc_nvenc -qp 0 -pix_fmt yuv444p res.h264




The first implementation is the cmd-query decode (image result):


ffmpeg -y -i res.h264 -vframes 1 -f image2 gpl_script_res.bmp -hide_banner




The second implementation is the code one and takes more lines (image result here).


Init ffmpeg


bool FFmpegDecoder::InitFFmpeg()
{
    m_pAVPkt = av_packet_alloc();
    m_pAVFrame = av_frame_alloc();
    m_pAVFrameRGB = av_frame_alloc();

    m_pAVFormatCtx = avformat_alloc_context();
    m_pIoCtx->initAVFormatContext(m_pAVFormatCtx);

    if (avformat_open_input(&m_pAVFormatCtx, "", nullptr, nullptr) != 0)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avformat_open_input\n");
        return false;
    }

    if (avformat_find_stream_info(m_pAVFormatCtx, nullptr) < 0)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avformat_find_stream_info\n");
        return false;
    }

    //av_dump_format(ctx_format, 0, "", false);
    for (int i = 0; i < (int)m_pAVFormatCtx->nb_streams; i++)
    {
        if (m_pAVFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            m_streamIdx = i;
            m_pAVStream = m_pAVFormatCtx->streams[i];
            break;
        }
    }
    if (m_pAVStream == nullptr)
    {
        printf("FFmpegDecoder::InitFFmpeg: failed to find video stream\n");
        return false;
    }

    m_pAVCodec = avcodec_find_decoder(m_pAVStream->codecpar->codec_id);
    if (!m_pAVCodec)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avcodec_find_decoder\n");
        return false;
    }

    m_pAVCodecCtx = avcodec_alloc_context3(m_pAVCodec);
    if (!m_pAVCodecCtx)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avcodec_alloc_context3\n");
        return false;
    }

    if (avcodec_parameters_to_context(m_pAVCodecCtx, m_pAVStream->codecpar) < 0)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avcodec_parameters_to_context\n");
        return false;
    }

    if (avcodec_open2(m_pAVCodecCtx, m_pAVCodec, nullptr) < 0)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avcodec_open2\n");
        return false;
    }

    m_pAVFrameRGB->format = AV_PIX_FMT_BGR24;
    m_pAVFrameRGB->width = m_pAVCodecCtx->width;
    m_pAVFrameRGB->height = m_pAVCodecCtx->height;
    if (av_frame_get_buffer(m_pAVFrameRGB, 32) != 0)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in av_frame_get_buffer\n");
        return false;
    }

    m_streamRotationDegrees = GetAVStreamRotation(m_pAVStream);
    m_estimatedFramesCount = 0;
    assert(m_pAVFormatCtx->nb_streams > 0);
    if (m_pAVFormatCtx->nb_streams > 0)
    {
        m_estimatedFramesCount = m_pAVFormatCtx->streams[0]->nb_frames;
    }

    return InitConvertColorSpace();
}

bool FFmpegDecoder::InitConvertColorSpace()
{
    // Init converter from the decoder's pixel format to RGB24:
    m_pSwsCtxConvertImg = sws_getContext(m_pAVCodecCtx->width, m_pAVCodecCtx->height, m_pAVCodecCtx->pix_fmt, m_pAVCodecCtx->width, m_pAVCodecCtx->height, AV_PIX_FMT_RGB24, SWS_FAST_BILINEAR, NULL, NULL, NULL);
    if (!m_pSwsCtxConvertImg)
    {
        printf("FFmpegDecoder::InitConvertColorSpace: error in sws_getContext\n");
        return false;
    }
    return true;
}



Decoding implementation


bool FFmpegDecoder::DecodeContinue(int firstFrameIdx, int maxNumFrames)
{
    if (firstFrameIdx == FIRST_FRAME_IDX_BEGINNING)
    {
        firstFrameIdx = 0;
    }

    auto lastReportedFrameIdxTillNow = GetLastReportedFrameIdx();

    if (GetLastDecodedFrameIdx() >= 0 && firstFrameIdx <= GetLastDecodedFrameIdx())
    {
        printf("FFmpegDecoder::DecodeContinue FAILED: firstFrameIdx (%d) already passed decoded (Last decoded idx: %d)\n", firstFrameIdx, GetLastDecodedFrameIdx());
        return false;
    }

    bool bRes;
    int nRet = 0;
    bool bDecodeShouldEnd = false;

    if (m_pAVPkt != nullptr)
    {
        while (nRet >= 0)
        {
            m_bCurrentAVPktSentToCodec = false;
            nRet = av_read_frame(m_pAVFormatCtx, m_pAVPkt);
            if (nRet < 0)
            {
                break;
            }
            if (m_pAVPkt->stream_index == m_streamIdx)
            {
                bRes = DecodeCurrentAVPkt(firstFrameIdx, maxNumFrames, bDecodeShouldEnd);
                if (!bRes || m_bRequestedAbort)
                {
                    av_packet_unref(m_pAVPkt);
                    return false;
                }

                if (bDecodeShouldEnd)
                {
                    av_packet_unref(m_pAVPkt);
                    return true;
                }
            }

            av_packet_unref(m_pAVPkt);
        }
        m_bCurrentAVPktSentToCodec = false;
        m_pAVPkt = nullptr;
    }

    // drain:
    bRes = DecodeCurrentAVPkt(firstFrameIdx, maxNumFrames, bDecodeShouldEnd);
    if (!bRes)
    {
        return false;
    }

    if (lastReportedFrameIdxTillNow == GetLastReportedFrameIdx())
    {
        printf("FFmpegDecoder::DecodeContinue(firstFrameIdx==%d, maxNumFrames==%d) FAILED: no new frame was decoded\n", firstFrameIdx, maxNumFrames);
        return false;
    }

    return true;
}


bool FFmpegDecoder::DecodeCurrentAVPkt(int firstFrameIdx, int maxNumFrames, bool & bDecodeShouldEnd)
{
    bDecodeShouldEnd = false;

    int ret = 0;
    if (m_bCurrentAVPktSentToCodec == false)
    {
        ret = avcodec_send_packet(m_pAVCodecCtx, m_pAVPkt);
        m_bCurrentAVPktSentToCodec = true;
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        {
            printf("FFmpegDecoder::DecodeCurrentAVPkt: error EAGAIN/AVERROR_EOF in avcodec_send_packet\n");
            return false;
        }
        if (ret < 0)
        {
            if (ret == AVERROR_INVALIDDATA)
            {
                printf("FFmpegDecoder::DecodeCurrentAVPkt: error (%d - AVERROR_INVALIDDATA) in avcodec_send_packet\n", ret);
            }
            else
            {
                printf("FFmpegDecoder::DecodeCurrentAVPkt: error (%d) in avcodec_send_packet\n", ret);
            }
            return false;
        }
    }

    ret = 0;
    while (ret >= 0)
    {
        ret = avcodec_receive_frame(m_pAVCodecCtx, m_pAVFrame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        {
            break;
        }

        IncrementLastDecodedFrameIdx();
        if (GetLastDecodedFrameIdx() < firstFrameIdx)
        {
            printf("FFmpegDecoder::DecodeCurrentAVPkt ignoring frame idx %d\n", GetLastDecodedFrameIdx());
            continue; // we don't need this frame
        }

        AVFrame * theFrame = m_pAVFrame; // default

        for (int j = 0; j < m_pAVFrame->nb_side_data; j++)
        {
            AVFrameSideData *sd = m_pAVFrame->side_data[j];
            if (sd->type == AV_FRAME_DATA_DISPLAYMATRIX)
            {
                auto ddd = av_display_rotation_get((int32_t *)sd->data);
            }
        }

        if (m_pSwsCtxConvertImg != nullptr)
        {
            if (sws_scale(m_pSwsCtxConvertImg, theFrame->data, theFrame->linesize, 0, theFrame->height, m_pAVFrameRGB->data, m_pAVFrameRGB->linesize) == 0)
            {
                printf("FFmpegDecoder::DecodeCurrentAVPkt: error in sws_scale\n");
                return false;
            }
            int numChannels = 3;
            FFmpegDecoderCallback::EPixelFormat pixFormat = FFmpegDecoderCallback::EPixelFormat::RGB;
            // Report frame to the client and update last reported frame idx:
            m_pCB->FFmpegDecoderCallback_HandleFrame(m_reqId, GetLastDecodedFrameIdx(), m_pAVFrameRGB->width, m_pAVFrameRGB->height, m_pAVFrameRGB->linesize[0], pixFormat, numChannels, m_pAVFrameRGB->data[0]);
            m_lastReportedFrameIdx = GetLastDecodedFrameIdx();
        }

        if (maxNumFrames != MAX_NUM_FRAMES_INFINITE && GetLastDecodedFrameIdx() >= (firstFrameIdx + maxNumFrames - 1))
        {
            bDecodeShouldEnd = true;
            return true;
        }
    }
    return true;
}

/*virtual*/ void FFmpegOneDecoderCtx::FFmpegDecoderCallback_HandleFrame(int reqId, int frameIdx0based, int width, int height, int widthStepBytes, EPixelFormat pixFormat, int numChannels, void * pData) /*override*/
{
    // We don't have metadata json => return the frame as is:
    m_pLastFrame->create(height, width, CV_8UC3);
    *m_pLastFrame = cv::Scalar(0, 0, 0);
    unsigned char * pSrc = reinterpret_cast<unsigned char *>(pData);
    unsigned char * pDst = m_pLastFrame->data;
    auto dstStride = m_pLastFrame->step[0];
    // Copy row by row, since source and destination strides may differ:
    for (int y = 0; y < height; ++y)
    {
        memcpy(pDst + y * dstStride, pSrc + y * widthStepBytes, numChannels * width);
    }
}


And eventually the usage looks like this:


//>>> Get frame
FFmpegMultiDecoder decoder;
decoder.HandleRequest_GetFrame(nullptr, filename, 1, image);
//<<<

//>>> Color conversion from BGR to RGB
cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
//<<<
bool isOk = cv::imwrite(save_location, image);



The problem is that if you open the two final decompressed images


- one decoded with code: https://drive.google.com/file/d/1sfTnqvHKQ2DUy0uP8POZXDw2-u3oIRfZ/view?usp=sharing
- one decoded with the cmd query: https://drive.google.com/file/d/1cwsOltk3DVtK86eLeyhiYjNeEXj0msES/view?usp=sharing


and flip from one image to the other, you'll see that the image I got via the cmd query is a little bit brighter than the one I got via code.


What is a possible problem here?


If I missed something, feel free to ask.


-
doc: Update to Doxygen 1.7.6.1
18 November 2019, by NotTsunami
This will bring our doxyfile closer to the modern world and clean up some warnings in the doxygen output during a regular build. I believe it is pretty fair to use 1.7.6.1 given it released in 2011, with the 1.7.x branch a year prior. The current branch is 1.8, which released in 2012, but I believe 1.7.6.1 is sufficient.
Updated by running doxygen -u doc/Doxygen.in with Doxygen 1.7.6.1. The only manual change was adding 'Free Lossless Audio Codec' to PROJECT_BRIEF.
-
Custom IO writes only the header, the rest of the frames seem omitted
18 September 2023, by Daniel
I'm using libavformat to read packets from RTSP and remux them into mp4 (fragmented).


Video frames are intact, meaning I don't want to transcode/modify/change anything.
Video frames shall be remuxed into mp4 in their original form (i.e. the NALUs shall remain the same).


I have updated libavformat to the latest (currently 4.4).


Here is my snippet:


//open input, probesize is set to 32, we don't need to decode anything
avformat_open_input(...)

//open output with custom io
avformat_alloc_output_context2(&ofctx, ...);
ofctx->pb = avio_alloc_context(buffer, bufsize, 1/*write flag*/, 0, 0, &writeOutput, 0);
ofctx->flags |= AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS | AVFMT_FLAG_CUSTOM_IO;

avformat_write_header(...);

//loop
av_read_frame()
LOGPACKET_DETAILS //<- this works, packets are coming
av_write_frame() //<- this doesn't work, my write callback is not called. Also tried av_interleaved_write_frame, does not seem to work either.

int writeOutput(void *opaque, uint8_t *buffer, int buffer_size) {
    printf("writeOutput: writing %d bytes: ", buffer_size);
    return buffer_size; // report the whole buffer as consumed
}



avformat_write_header works, it prints the header correctly.

I'm looking for the reason why my custom IO is not called after a frame has been read.


There must be some more flags that should be set to tell avformat not to care about decoding and just write out whatever comes in.


More information:
The input stream is VBR-encoded H264. It seems av_write_frame calls my write function only in the case of an SPS, PPS or IDR frame. Non-IDR frames are not passed at all.

Update


I found out that if I request an IDR frame every second (I can ask the encoder for it), writeOutput is called every second.

I created a test: after a client joins, I request the encoder to create IDRs at 1 Hz, 10 times. Libav calls writeOutput at 1 Hz for 10 seconds, but then the encoder sets itself back to creating an IDR only every 10 seconds. And then libav calls writeOutput only every 10 s, which makes my decoder fail. With 1 Hz IDRs, the decoder is fine.