Advanced search

Media (0)

Word: - Tags -/media

No media matching your criteria is available on this site.

Other articles (112)

  • MediaSPIP 0.1 Beta version

25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • HTML5 audio and video support

13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

APPENDIX: The plugins used specifically for the farm

5 March 2010, by

The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (16290)

Revision 3273: We use notation and define a _dist criterion for notation instead

18 April 2010, by kent1 — Log

We use notation and define a _dist criterion for notation instead

  • Glitchy audio or broken video in fragmented MP4

6 September 2022, by PookyFan

I'm working on a small C++ library for muxing audio and video. It is basically a facade for FFMPEG functions and structures. The code is here, with minimal reproduction testing code here, and as of now it seems to be working fine... almost.

For the record - my MP4 file is a so-called "fragmented MP4", with headers moved to the beginning of the file in a way that allows the file to be streamed (i.e. played in a browser while it's still being buffered). That's what the movflags I'm setting in Mp4Muxer::writeHeader() are for.
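
    The post doesn't show the exact flag set, but a typical way such movflags are passed when writing the header looks roughly like the sketch below (assuming frag_keyframe+empty_moov, one common combination for streamable fragmented output; formatCtxt is the format context used later in the post):

    // hedged sketch: request fragmented MP4 output so the file can be
    // played while it is still being written/buffered
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "movflags", "frag_keyframe+empty_moov", 0);
    int ret = avformat_write_header(formatCtxt, &opts);
    av_dict_free(&opts);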

While testing this library with a raw H264 video stream and an MP3 file (the video is <1 min long, the MP3 a few minutes long), I observed that:

• if I don't limit muxing of the audio stream when it's way "ahead" of the video (and it will be, since the MP3 is longer, so eventually video frames stop coming but audio frames still come in), everything muxes just fine with no errors, but playing the output MP4 with ffplay results after just a few seconds in the following log (and also in frozen video, while the audio keeps playing):

[h264 @ 0x7f90a40ae2c0] Invalid NAL unit size (2162119 > 76779).0
    [h264 @ 0x7f90a40ae2c0] Error splitting the input into NAL units.
    [mp3float @ 0x7f90a4009540] Header missing  515KB sq=    0B f=0/0
    [h264 @ 0x7f90a40cb0c0] Invalid NAL unit size (-860010620 > 17931).
    [h264 @ 0x7f90a40cb0c0] Error splitting the input into NAL units.
    [h264 @ 0x7f90a42bf440] Invalid NAL unit size (-168012642 > 8000).
    [h264 @ 0x7f90a42bf440] Error splitting the input into NAL units.
    [h264 @ 0x7f90a42fa780] Invalid NAL unit size (-1843711407 > 5683).
    [ and it repeats...]

• even if I limit how far one stream can get "ahead" of the other, limiting it too much results in no muxed data in the output

• any intermediate level of limiting how much of one stream can be buffered in the muxer relative to the other results in glitchy audio, with the following error popping up every now and then in ffplay (the stricter the limit, the more often it is printed):

[mp3float @ 0x7f744c01b640] overread, skip -6 enddists: -1 -1=0/0

Not limiting the muxed audio (at all, or enough) relative to the muxed video also results in the following message in my muxing application:

[mp4 @ 0x55d0c6c21940] Delay between the first packet and last packet in the muxing queue is 10004898 > 10000000: forcing output

For now the fix is quite ugly and I don't even understand why it works, but before writing the MP4 header I manually set a limit for frames buffered by the muxer, like so:

formatCtxt->max_interleave_delta = 10000000LL * 10LL;

This way the muxer can store more packets of the stream that's way "ahead" of the other (the maximum difference between the DTS of the packets at the beginning and at the end of the queue is set to 10x the default; it also gets rid of the informational log mentioned above). Obviously, I'd like to resolve this more properly, without hacks like that.
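
    For what it's worth, symptoms like these often come from feeding av_interleaved_write_frame() packets in an order that is far from interleaved, which overflows the muxing queue. A hedged sketch of the usual remedy, modeled on FFmpeg's muxing.c example (the names below are illustrative, not from the post): always feed whichever stream is behind.

    // illustrative sketch: next_video_dts/next_audio_dts are the timestamps
    // of each stream's next packet, in their respective stream time bases
    if (av_compare_ts(next_video_dts, video_stream->time_base,
                      next_audio_dts, audio_stream->time_base) <= 0) {
        // video is behind (or tied): write the next video packet
        write_next_video_packet();
    } else {
        // audio is behind: write the next audio packet
        write_next_audio_packet();
    }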

I've tried various things, including manually skipping ID3 tags in the MP3 file (but it seems FFMPEG handles them just fine, and it didn't change anything). I also experimented with FLAC in MP4 instead of MP3, and while I know that's a rather experimental combination, I encountered very similar problems with glitchy audio (though no frozen video when lots of audio data gets muxed). It also seems that the scale of the glitchy-audio or frozen-video problem varies with how large the input data chunks I feed the muxer are. For now, honestly, I'm out of ideas.

  • The hardware decoding was successful, but the hw_frames_ctx in the received frame is empty

15 July 2024, by mercuric taylor

I tried to use QSV hardware decoding under ffmpeg, using the integrated graphics 730 on my computer. Here's the code I used to initialize the decoder:

const AVCodec* codec = NULL;
    int ret;
    int err = 0;
    // Create the QSV hardware device.
    ret = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_QSV, "auto", NULL, 0);
    if (ret < 0)
    {
        char error_string[AV_ERROR_MAX_STRING_SIZE];
        av_make_error_string(error_string, AV_ERROR_MAX_STRING_SIZE, ret);
        LError("Error creating QSV device: {}", error_string);
        return NULL;
    }
    // Search for QSV decoders, either for H.264 or H.265.
    codec = avcodec_find_decoder_by_name(codec_name);
    if (!codec)
    {
        LError("Failed to find QSV decoder.");
        return NULL;
    }

    // Create a decoder context and associate it with the hardware device.
    decoder_ctx = avcodec_alloc_context3(codec);
    if (!decoder_ctx)
    {
        ret = AVERROR(ENOMEM);
        LError("Failed to allocate decoder context.\n");
        return NULL;
    }
    decoder_ctx->codec_id = AV_CODEC_ID_H264;
    decoder_ctx->opaque = &hw_device_ctx;
    decoder_ctx->get_format = get_format;
    // Open the decoder.
    if ((ret = avcodec_open2(decoder_ctx, NULL, NULL)) < 0)
    {
        LError("Failed to open decoder: %d\n", ret);
        return NULL;
    }

    parser_ctx = av_parser_init(avcodec_find_encoder_by_name(codec_name)->id);

The following is the decoding process using this decoder:

AVFrame* frame = av_frame_alloc();
    AVFrame* dstFrame = av_frame_alloc();
    res = avcodec_send_packet(decoder_ctx, pkt);
    if (res < 0)
    {
        return;
    }
    int num = 0;
    while (res >= 0)
    {
        res = avcodec_receive_frame(decoder_ctx, frame);

        if (res == AVERROR(EAGAIN) || res == AVERROR_EOF)
        {
            //if (res == AVERROR(EAGAIN))
            //{
            //    LInfo("AVERROR(EAGAIN):");
            //}
            //if (res == AVERROR_EOF)
            //{
            //    // LInfo("AVERROR_EOF");
            //}
            // av_frame_unref(frame);
            break;
        }
        else if (res < 0)
        {
            // av_frame_unref(frame);
            return;
        }

        frameNumbers_++;
        if (frame->hw_frames_ctx == NULL)
        {
            LError("hw_frames_ctx is null");
            LError("avcodec_receive_frame return is {}", res);
        }

My issue is that I've successfully decoded the video: the return value of avcodec_receive_frame is 0, and the width and height of the AVFrame match the input video stream.

However, the hw_frames_ctx field of the AVFrame is empty. Why would this happen in a successful hardware decoding scenario?

Could it be due to some incorrect configuration? I've set up a get_format function like this:

static enum AVPixelFormat get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
    {
        while (*pix_fmts != AV_PIX_FMT_NONE) {
            if (*pix_fmts == AV_PIX_FMT_QSV) {
                DecodeContext *decode = (DecodeContext*)avctx->opaque;
                AVHWFramesContext  *frames_ctx;
                AVQSVFramesContext *frames_hwctx;
                int ret;
                /* create a pool of surfaces to be used by the decoder */
                avctx->hw_frames_ctx = av_hwframe_ctx_alloc(decode->hw_device_ref);
                if (!avctx->hw_frames_ctx)
                    return AV_PIX_FMT_NONE;
                frames_ctx = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
                frames_hwctx = (AVQSVFramesContext*)frames_ctx->hwctx;
                frames_ctx->format = AV_PIX_FMT_QSV;
                frames_ctx->sw_format = avctx->sw_pix_fmt;
                frames_ctx->width = FFALIGN(avctx->coded_width, 32);
                frames_ctx->height = FFALIGN(avctx->coded_height, 32);
                frames_ctx->initial_pool_size = 32;
                frames_hwctx->frame_type = MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET;
                ret = av_hwframe_ctx_init(avctx->hw_frames_ctx);
                if (ret < 0)
                    return AV_PIX_FMT_NONE;
                return AV_PIX_FMT_QSV;
            }
            pix_fmts++;
        }
        fprintf(stderr, "The QSV pixel format not offered in get_format()\n");
        return AV_PIX_FMT_NONE;
    }

But I also noticed that even though I set decoder_ctx->get_format = get_format, this function is never executed later on.
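
    One configuration that is known to make get_format fire with FFmpeg's QSV decoders is attaching the device reference to the codec context itself before avcodec_open2(), rather than only stashing it in opaque. A minimal sketch, assuming the hw_device_ctx created earlier:

    // hedged sketch: give the codec context its own reference to the QSV
    // device so the decoder can negotiate AV_PIX_FMT_QSV and attach
    // hw_frames_ctx to the frames it returns
    decoder_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
    if (!decoder_ctx->hw_device_ctx)
    {
        LError("Failed to reference QSV device.");
        return NULL;
    }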

I observed that my GPU is also being utilized during program execution, indicating successful hardware decoding. My next goal is to render frames from these decoded AVFrames. It seems like the hw_frames_ctx of an AVFrame is a texture handle on the GPU, and I wish to use this field directly for D3D11 rendering and display it on the screen.
    My questions are:

    1. Can the hw_frames_ctx field be empty even when hardware decoding has succeeded?
    2. Does it represent a texture handle on the GPU?
    3. If my rendering approach is wrong, how can I correctly render this AVFrame using D3D11?

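    As a point of reference (not from the original post): when a frame does come back with format AV_PIX_FMT_QSV, one well-known way to reach its pixels without GPU interop is to download it into a software frame, reusing the dstFrame allocated in the code above. A minimal sketch:

    // hedged sketch: copy a hardware-backed frame into system memory
    if (frame->format == AV_PIX_FMT_QSV)
    {
        if (av_hwframe_transfer_data(dstFrame, frame, 0) < 0)
            LError("Failed to transfer frame from GPU to system memory.");
    }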