Media (91)

Other articles (47)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a shared-hosting ("mutualisation") instance is defined by several things: the data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the instance.
    It can therefore be quite sensible to want to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

On other sites (4470)

  • The hardware decoding was successful, but the hw_frames_ctx in the received frame is empty

    15 July 2024, by mercuric taylor

    I tried to use QSV hardware decoding with FFmpeg, using the integrated graphics 730 on my computer. Here's the code I used to initialize the decoder:

    const AVCodec* codec = NULL;
    int ret;
    int err = 0;

    // Create the QSV hardware device.
    ret = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_QSV, "auto", NULL, 0);
    if (ret < 0)
    {
        char error_string[AV_ERROR_MAX_STRING_SIZE];
        av_make_error_string(error_string, AV_ERROR_MAX_STRING_SIZE, ret);
        LError("Error creating QSV device: {}", error_string);
        return NULL;
    }

    // Search for QSV decoders, either for H.264 or H.265.
    codec = avcodec_find_decoder_by_name(codec_name);
    if (!codec)
    {
        LError("Failed to find QSV decoder.");
        return NULL;
    }

    // Create a decoder context and associate it with the hardware device.
    decoder_ctx = avcodec_alloc_context3(codec);
    if (!decoder_ctx)
    {
        ret = AVERROR(ENOMEM);
        LError("Failed to allocate decoder context.\n");
        return NULL;
    }
    decoder_ctx->codec_id = AV_CODEC_ID_H264;
    decoder_ctx->opaque = &hw_device_ctx;
    decoder_ctx->get_format = get_format;

    // Open the decoder.
    if ((ret = avcodec_open2(decoder_ctx, NULL, NULL)) < 0)
    {
        LError("Failed to open decoder: %d\n", ret);
        return NULL;
    }

    parser_ctx = av_parser_init(avcodec_find_encoder_by_name(codec_name)->id);

    The following is the decoding process using this decoder:

    AVFrame* frame = av_frame_alloc();
    AVFrame* dstFrame = av_frame_alloc();

    res = avcodec_send_packet(decoder_ctx, pkt);
    if (res < 0)
    {
        return;
    }

    int num = 0;
    while (res >= 0)
    {
        res = avcodec_receive_frame(decoder_ctx, frame);
        if (res == AVERROR(EAGAIN) || res == AVERROR_EOF)
        {
            //if (res == AVERROR(EAGAIN))
            //{
            //    LInfo("AVERROR(EAGAIN):");
            //}
            //if (res == AVERROR_EOF)
            //{
            //    LInfo("AVERROR_EOF");
            //}
            // av_frame_unref(frame);
            break;
        }
        else if (res < 0)
        {
            // av_frame_unref(frame);
            return;
        }

        frameNumbers_++;
        if (frame->hw_frames_ctx == NULL)
        {
            LError("hw_frames_ctx is null");
            LError("avcodec_receive_frame return is {}", res);
        }
    }
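
    As a point of reference, here is a minimal, hedged sketch of how the received frame is usually inspected at this point (it reuses frame and the LError logging style from the snippets above and is not a drop-in fix): when the frame really lives in GPU memory its format is AV_PIX_FMT_QSV and hw_frames_ctx is set, whereas a software format such as AV_PIX_FMT_NV12 means the decoder returned a system-memory frame and a NULL hw_frames_ctx is expected.

    // Sketch only: distinguish a real hardware frame from a system-memory frame.
    // Assumes "frame" was just filled by avcodec_receive_frame() as above.
    if (frame->format == AV_PIX_FMT_QSV && frame->hw_frames_ctx)
    {
        // GPU frame: download it with av_hwframe_transfer_data() if CPU access is needed.
        AVFrame* sw_frame = av_frame_alloc();
        if (av_hwframe_transfer_data(sw_frame, frame, 0) < 0)
            LError("Failed to transfer hardware frame to system memory");
        av_frame_free(&sw_frame);
    }
    else
    {
        // frame->format is a software pixel format (e.g. AV_PIX_FMT_NV12), so the
        // decoder produced a CPU frame and hw_frames_ctx being NULL is expected.
        LError("Decoder returned a software frame, format {}", frame->format);
    }
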
    My issue is that the video decodes successfully: the return value of avcodec_receive_frame is 0, and the width and height of the AVFrame match the input video stream.

    However, **the hw_frames_ctx field of the AVFrame is empty**. Why would this happen in a successful hardware decoding scenario?

    Could it be due to some incorrect configuration? I've set up a get_format function like this:

    static enum AVPixelFormat get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
{
    while (*pix_fmts != AV_PIX_FMT_NONE) {
        if (*pix_fmts == AV_PIX_FMT_QSV) {
            DecodeContext *decode = (DecodeContext*)avctx->opaque;
            AVHWFramesContext  *frames_ctx;
            AVQSVFramesContext *frames_hwctx;
            int ret;
            /* create a pool of surfaces to be used by the decoder */
            avctx->hw_frames_ctx = av_hwframe_ctx_alloc(decode->hw_device_ref);
            if (!avctx->hw_frames_ctx)
                return AV_PIX_FMT_NONE;
            frames_ctx = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
            frames_hwctx = (AVQSVFramesContext*)frames_ctx->hwctx;
            frames_ctx->format = AV_PIX_FMT_QSV;
            frames_ctx->sw_format = avctx->sw_pix_fmt;
            frames_ctx->width = FFALIGN(avctx->coded_width, 32);
            frames_ctx->height = FFALIGN(avctx->coded_height, 32);
            frames_ctx->initial_pool_size = 32;
            frames_hwctx->frame_type = MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET;
            ret = av_hwframe_ctx_init(avctx->hw_frames_ctx);
            if (ret < 0)
                return AV_PIX_FMT_NONE;
            return AV_PIX_FMT_QSV;
        }
        pix_fmts++;
    }
    fprintf(stderr, "The QSV pixel format not offered in get_format()\n");
    return AV_PIX_FMT_NONE;
}

    But I also noticed that even though I set decoder_ctx->get_format = get_format, this callback is never actually invoked later on.
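
    As a point of comparison, here is a hedged sketch of the wiring used by FFmpeg's generic hardware-decoding path, where the device reference is handed to the codec context through hw_device_ctx rather than through opaque; with that field set, libavcodec can create the hardware frames context itself once get_format selects AV_PIX_FMT_QSV. Whether this applies to this exact setup depends on the decoder actually being opened, so treat it as a sketch rather than a confirmed fix.

    // Sketch, reusing hw_device_ctx and decoder_ctx from the first snippet.
    // Give the codec context its own reference to the QSV device.
    decoder_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
    if (!decoder_ctx->hw_device_ctx)
        return NULL;

    // get_format is then invoked once the stream parameters are known; returning
    // AV_PIX_FMT_QSV from it is enough, because libavcodec can derive hw_frames_ctx
    // from hw_device_ctx when the callback does not supply one.
    decoder_ctx->get_format = get_format;

    if ((ret = avcodec_open2(decoder_ctx, NULL, NULL)) < 0)
    {
        LError("Failed to open decoder: %d\n", ret);
        return NULL;
    }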

    I observed that my GPU is also being utilized while the program runs, which indicates that hardware decoding is indeed taking place. My next goal is to render a frame from this decoded AVFrame. My understanding is that the hw_frames_ctx of the AVFrame refers to a texture handle on the GPU, and I would like to use this field directly for D3D11 rendering to display it on screen.
    My questions are:

    1. Is the hw_frames_ctx field empty in the case of successful hardware decoding?
    2. Does it represent a texture handle on the GPU?
    3. If my rendering approach is wrong, how can I correctly render this AVFrame using D3D11?
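
    Regarding the last question, here is a hedged sketch of one common D3D11 route; it assumes a setup that differs from the code above (the decoder opened with a D3D11VA device, AV_HWDEVICE_TYPE_D3D11VA, so frames come back as AV_PIX_FMT_D3D11) and is not a confirmed answer. For that pixel format, libavutil/pixfmt.h documents that data[0] holds an ID3D11Texture2D pointer and data[1] the texture array slice, which is what a D3D11 renderer needs.

    // Hedged sketch: pulling the D3D11 texture out of an AV_PIX_FMT_D3D11 frame.
    // Requires <d3d11.h>; "frame" is a decoded AVFrame as in the loop above.
    if (frame->format == AV_PIX_FMT_D3D11)
    {
        ID3D11Texture2D* texture = (ID3D11Texture2D*)frame->data[0]; // decoded surface (usually NV12)
        intptr_t slice = (intptr_t)frame->data[1];                   // index into the texture array
        // From here the texture/slice can be copied with CopySubresourceRegion()
        // or bound through shader resource views for display; the rendering code
        // itself is outside the scope of this sketch.
        (void)texture;
        (void)slice;
    }
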
  • lavc/vaapi_encode: Add hardware config metadata

    13 April 2020, by Mark Thompson
    lavc/vaapi_encode: Add hardware config metadata

    These encoders all accept VAAPI surfaces in a hardware frames context.

    • [DH] libavcodec/vaapi_encode.c
    • [DH] libavcodec/vaapi_encode.h
    • [DH] libavcodec/vaapi_encode_h264.c
    • [DH] libavcodec/vaapi_encode_h265.c
    • [DH] libavcodec/vaapi_encode_mjpeg.c
    • [DH] libavcodec/vaapi_encode_mpeg2.c
    • [DH] libavcodec/vaapi_encode_vp8.c
    • [DH] libavcodec/vaapi_encode_vp9.c
  • lsws/yuv2rgb: Fix yuva2rgb32 on big endian hardware.

    29 October 2017, by Carl Eugen Hoyos
    lsws/yuv2rgb: Fix yuva2rgb32 on big endian hardware.

    • [DH] libswscale/version.h
    • [DH] libswscale/yuv2rgb.c