Advanced search

Media (1)

Word: - Tags -/biomaping

Other articles (83)

  • Permissions overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Customizing categories

    21 June 2013, by

    Form for creating a category
    For those who know SPIP well, a category can be likened to a rubrique (section).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration of form masks.
    For a document of type media, the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and in MP3 (supported by Flash).
    Where possible, text is analysed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (8863)

  • FFmpeg GPL (ffmpeg-4.2.1-win64-dev_and_shared) version gives different decode results (cmd query vs code)

    31 May 2021, by Aleksey Timoshchenko

    *All the source images are on my Google Drive due to SO restrictions (just click the links provided in the text).

    The problem is that for .h264 I use two implementations (a cmd query and code, depending on the task), and they give me different results; I don't see any reason for this.

    First, some explanation: I have a .bmp Bayer image; I debayer it and compress it to .h264 with this script

    ffmpeg -y -hide_banner -i orig_bayer.bmp -vf format=gray -f rawvideo pipe: | ffmpeg -hide_banner -y -framerate 30 -f rawvideo -pixel_format bayer_rggb8 -video_size 4096x3000 -i pipe: -c:v hevc_nvenc -qp 0 -pix_fmt yuv444p res.h264

    Then, the first implementation of decoding, via a cmd query (image result):

    ffmpeg -y -i res.h264 -vframes 1 -f image2 gpl_script_res.bmp -hide_banner

    The second implementation is in code and takes more lines (image result here).

    Init ffmpeg

    bool FFmpegDecoder::InitFFmpeg()
{
    m_pAVPkt = av_packet_alloc();
    m_pAVFrame = av_frame_alloc();
    m_pAVFrameRGB = av_frame_alloc();

    m_pAVFormatCtx = avformat_alloc_context();
    m_pIoCtx->initAVFormatContext(m_pAVFormatCtx);

    if (avformat_open_input(&m_pAVFormatCtx, "", nullptr, nullptr) != 0)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avformat_open_input\n");
        return false;
    }

    if (avformat_find_stream_info(m_pAVFormatCtx, nullptr) < 0)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avformat_find_stream_info\n");
        return false;
    }

    //av_dump_format(ctx_format, 0, "", false);
    for (int i = 0; i < (int)m_pAVFormatCtx->nb_streams; i++)
    {
        if (m_pAVFormatCtx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            m_streamIdx = i;
            m_pAVStream = m_pAVFormatCtx->streams[i];
            break;
        }
    }
    if (m_pAVStream == nullptr)
    {
        printf("FFmpegDecoder::InitFFmpeg: failed to find video stream\n");
        return false;
    }

    m_pAVCodec = avcodec_find_decoder(m_pAVStream->codecpar->codec_id);
    if (!m_pAVCodec)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avcodec_find_decoder\n");
        return false;
    }

    m_pAVCodecCtx = avcodec_alloc_context3(m_pAVCodec);
    if (!m_pAVCodecCtx)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avcodec_alloc_context3\n");
        return false;
    }

    if (avcodec_parameters_to_context(m_pAVCodecCtx, m_pAVStream->codecpar) < 0)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avcodec_parameters_to_context\n");
        return false;
    }

    if (avcodec_open2(m_pAVCodecCtx, m_pAVCodec, nullptr) < 0)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in avcodec_open2\n");
        return false;
    }

    m_pAVFrameRGB->format = AV_PIX_FMT_BGR24;
    m_pAVFrameRGB->width = m_pAVCodecCtx->width;
    m_pAVFrameRGB->height = m_pAVCodecCtx->height;
    if (av_frame_get_buffer(m_pAVFrameRGB, 32) != 0)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in av_frame_get_buffer\n");
        return false;
    }

    m_streamRotationDegrees = GetAVStreamRotation(m_pAVStream);
    m_estimatedFramesCount = 0;
    assert(m_pAVFormatCtx->nb_streams > 0);
    if (m_pAVFormatCtx->nb_streams > 0)
    {
        m_estimatedFramesCount = m_pAVFormatCtx->streams[0]->nb_frames;
    }

    return InitConvertColorSpace(); 
}

bool FFmpegDecoder::InitConvertColorSpace()
{
    // Init converter from YUV420p to BGR:
    m_pSwsCtxConvertImg = sws_getContext(m_pAVCodecCtx->width, m_pAVCodecCtx->height, m_pAVCodecCtx->pix_fmt, m_pAVCodecCtx->width, m_pAVCodecCtx->height, AV_PIX_FMT_RGB24, SWS_FAST_BILINEAR, NULL, NULL, NULL);
    if (!m_pSwsCtxConvertImg)
    {
        printf("FFmpegDecoder::InitFFmpeg: error in sws_getContext\n");
        return false;
    }
    return true;
}


    Decoding impl

    bool FFmpegDecoder::DecodeContinue(int firstFrameIdx, int maxNumFrames)
{
    if (firstFrameIdx == FIRST_FRAME_IDX_BEGINNING)
    {
        firstFrameIdx = 0;
    }

    auto lastReportedFrameIdxTillNow = GetLastReportedFrameIdx();

    if (GetLastDecodedFrameIdx() >= 0 && firstFrameIdx <= GetLastDecodedFrameIdx())
    {
        printf("FFmpegDecoder::DecodeContinue FAILED: firstFrameIdx (%d) already passed decoded (Last decoded idx: %d)\n", firstFrameIdx, GetLastDecodedFrameIdx());
        return false;
    }

    bool bRes;
    int nRet = 0;
    bool bDecodeShouldEnd = false;

    if (m_pAVPkt != nullptr)
    {
        while (nRet >= 0)
        {
            m_bCurrentAVPktSentToCodec = false;
            nRet = av_read_frame(m_pAVFormatCtx, m_pAVPkt);
            if (nRet < 0)
            {
                break;
            }
            if (m_pAVPkt->stream_index == m_streamIdx)
            {
                bRes = DecodeCurrentAVPkt(firstFrameIdx, maxNumFrames, bDecodeShouldEnd);
                if (!bRes || m_bRequestedAbort)
                {
                    av_packet_unref(m_pAVPkt);
                    return false;
                }

                if (bDecodeShouldEnd)
                {
                    av_packet_unref(m_pAVPkt);
                    return true;
                }
            }

            av_packet_unref(m_pAVPkt);
        }
        m_bCurrentAVPktSentToCodec = false;
        m_pAVPkt = nullptr;
    }

    // drain:
    bRes = DecodeCurrentAVPkt(firstFrameIdx, maxNumFrames, bDecodeShouldEnd);
    if (!bRes)
    {
        return false;
    }

    if (lastReportedFrameIdxTillNow == GetLastReportedFrameIdx())
    {
        printf("FFmpegDecoder::DecodeContinue(firstFrameIdx==%d, maxNumFrames==%d) FAILED: no new frame was decoded\n", firstFrameIdx, maxNumFrames);
        return false;
    }

    return true;
}


bool FFmpegDecoder::DecodeCurrentAVPkt(int firstFrameIdx, int maxNumFrames, bool & bDecodeShouldEnd)
{
    bDecodeShouldEnd = false;

    int ret = 0;
    if (m_bCurrentAVPktSentToCodec == false)
    {
        ret = avcodec_send_packet(m_pAVCodecCtx, m_pAVPkt);
        m_bCurrentAVPktSentToCodec = true;
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        {
            printf("FFmpegDecoder::DecodeFrameImp: error EAGAIN/AVERROR_EOF in avcodec_send_packet\n");
            return false;
        }
        if (ret < 0)
        {
            if (ret == AVERROR_INVALIDDATA)
            {
                printf("FFmpegDecoder::DecodeFrameImp: error (%d - AVERROR_INVALIDDATA) in avcodec_send_packet\n", ret);
            }
            else
            {
                printf("FFmpegDecoder::DecodeFrameImp: error (%d) in avcodec_send_packet\n", ret);
            }
            return false;
        }
    }

    ret = 0;
    while (ret >= 0)
    {
        ret = avcodec_receive_frame(m_pAVCodecCtx, m_pAVFrame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        {
            break;
        }

        IncrementLastDecodedFrameIdx();
        if (GetLastDecodedFrameIdx() < firstFrameIdx)
        {
            printf("FFmpegDecoder::DecodeCurrentAVPkt ignoring frame idx %d\n", GetLastDecodedFrameIdx());
            continue;   // we don't need this frame
        }

        AVFrame * theFrame = m_pAVFrame;    // default

        for (int j = 0; j < m_pAVFrame->nb_side_data; j++)
        {
            AVFrameSideData *sd = m_pAVFrame->side_data[j];
            if (sd->type == AV_FRAME_DATA_DISPLAYMATRIX) {
                auto ddd = av_display_rotation_get((int32_t *)sd->data);
            }
        }

        if (m_pSwsCtxConvertImg != nullptr)
        {
            {
                if (sws_scale(m_pSwsCtxConvertImg, theFrame->data, theFrame->linesize, 0, theFrame->height, m_pAVFrameRGB->data, m_pAVFrameRGB->linesize) == 0)
                {
                    printf("FFmpegDecoder::DecodeFrameImp: error in sws_scale\n");
                    return false;
                }
            }
            int numChannels = 3;
            FFmpegDecoderCallback::EPixelFormat pixFormat = FFmpegDecoderCallback::EPixelFormat::RGB;
            // Report frame to the client and update last reported frame idx:
            m_pCB->FFmpegDecoderCallback_HandleFrame(m_reqId, GetLastDecodedFrameIdx(), m_pAVFrameRGB->width, m_pAVFrameRGB->height, m_pAVFrameRGB->linesize[0], pixFormat, numChannels, m_pAVFrameRGB->data[0]);
            m_lastReportedFrameIdx = GetLastDecodedFrameIdx();
        }

        if (maxNumFrames != MAX_NUM_FRAMES_INFINITE && GetLastDecodedFrameIdx() >= (firstFrameIdx + maxNumFrames - 1))
        {
            bDecodeShouldEnd = true;
            return true;
        }
    }
    return true;
}

/*virtual*/ void FFmpegOneDecoderCtx::FFmpegDecoderCallback_HandleFrame(int reqId, int frameIdx0based, int width, int height, int widthStepBytes, EPixelFormat pixFormat, int numChannels, void * pData) /*override*/
{
    // We don't have metadata json => return the frame as is:
    m_pLastFrame->create(height, width, CV_8UC3);
    *m_pLastFrame = cv::Scalar(0, 0, 0);
    unsigned char * pSrc = reinterpret_cast<unsigned char *>(pData);
    unsigned char * pDst = m_pLastFrame->data;
    auto dstStride = m_pLastFrame->step[0];
    for (int y = 0; y < height; ++y)
    {
        memcpy(pDst + y * dstStride, pSrc + y * widthStepBytes, numChannels * width);
    }
}


    And eventually usage looks like this


    //>>> Get frame
    FFmpegMultiDecoder decoder;
    decoder.HandleRequest_GetFrame(nullptr, filename, 1, image);
    //<<<

    //>>> Color conversion from BGR to RGB
    cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
    //<<<
    bool isOk = cv::imwrite(save_location, image);


    The problem is that if you open the two final decompressed images:

    1. the one produced by code: https://drive.google.com/file/d/1sfTnqvHKQ2DUy0uP8POZXDw2-u3oIRfZ/view?usp=sharing

    2. the one produced by the cmd query: https://drive.google.com/file/d/1cwsOltk3DVtK86eLeyhiYjNeEXj0msES/view?usp=sharing

    and flip between them, you will see that the image I got from the cmd query is a little brighter than the one I got from the code.


    What could be the problem here?


    If I missed something, feel free to ask.


  • doc : Update to Doxygen 1.7.6.1

    18 November 2019, by NotTsunami
    

    This will bring our doxyfile closer to the modern world and clean up some warnings in the doxygen output during a regular build. I believe it is pretty fair to require 1.7.6.1, given that it was released in 2011, with the 1.7.x branch arriving a year earlier. The current branch is 1.8, released in 2012, but I believe 1.7.6.1 is sufficient.

    Updated by running doxygen -u doc/Doxyfile.in with Doxygen 1.7.6.1. The only manual change was adding 'Free Lossless Audio Codec' to PROJECT_BRIEF.

    • [DH] doc/Doxyfile.in
  • Custom IO writes only the header, the rest of the frames seem omitted

    18 September 2023, by Daniel

    I'm using libavformat to read packets from RTSP and remux them to MP4 (fragmented).


    Video frames are intact, meaning I don't want to transcode/modify/change anything. Video frames shall be remuxed into MP4 in their original form (i.e. the NALUs shall remain the same).


    I have updated libavformat to latest (currently 4.4).


    Here is my snippet:


    //open input, probesize is set to 32, we don't need to decode anything
avformat_open_input

//open output with custom io
avformat_alloc_output_context2(&ofctx,...);
ofctx->pb = avio_alloc_context(buffer, bufsize, 1/*write flag*/, 0, 0, &writeOutput, 0);
ofctx->flags |= AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS | AVFMT_FLAG_CUSTOM_IO;

avformat_write_header(...);

//loop
av_read_frame()
LOGPACKET_DETAILS //<- this works, packets are coming
av_write_frame() //<- this doesn't work, my write callback is not called. Also tried av_interleaved_write_frame, doesn't seem to work either.

int writeOutput(void *opaque, uint8_t *buffer, int buffer_size) {
  printf("writeOutput: writing %d bytes: ", buffer_size);
  return buffer_size; // the callback must report the number of bytes consumed
}


    avformat_write_header works, it prints the header correctly.


    I'm looking for the reason on why my custom IO is not called after a frame has been read.


    There must be some additional flags to tell avformat not to care about decoding and to just write out whatever comes in.


    More information: the input stream is VBR-encoded H264. It seems av_write_frame calls my write function only in the case of an SPS, PPS or IDR frame. Non-IDR frames are not passed through at all.


    Update


    I found out that if I request an IDR frame every second (I can ask the encoder for it), writeOutput is called every second.


    I created a test: after a client joins, I request the encoder to create IDRs at 1 Hz, 10 times. Libav calls writeOutput at 1 Hz for 10 seconds, but then the encoder sets itself back to creating an IDR only every 10 seconds, and libav then calls writeOutput only every 10 s, which makes my decoder fail. With 1 Hz IDRs, the decoder is fine.
