Advanced search

Media (1)

Keyword: - Tags -/biomaping

Other articles (66)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document in SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder, used to transcode almost any type of video or audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: inspection tools for ogg files; Mediainfo: retrieves information from most video and audio formats;
    Additional, optional binaries flvtool2: (...)

On other sites (9861)

  • FFMPEG using AV_PIX_FMT_D3D11 gives "Error registering the input resource" from NVENC

    13 November 2024, by nbabcock

    Input frames start on the GPU as ID3D11Texture2D pointers.

    I encode them to H264 using FFmpeg + NVENC. NVENC works perfectly if I download the textures to CPU memory as format AV_PIX_FMT_BGR0, but I'd like to cut out the CPU texture download entirely and pass the GPU memory pointer directly into the encoder in its native format. I write frames like this:

int write_gpu_video_frame(ID3D11Texture2D* gpuTex, AVFormatContext* oc, OutputStream* ost) {
    AVFrame *hw_frame = ost->hw_frame;

    printf("gpuTex address = 0x%x\n", &gpuTex);

    hw_frame->data[0] = (uint8_t *) gpuTex;
    hw_frame->data[1] = (uint8_t *) (intptr_t) 0;
    hw_frame->pts     = ost->next_pts++;

    return write_frame(oc, ost->enc, ost->st, hw_frame);
    // write_frame is identical to sample code in ffmpeg repo
}


    


    Running the code with this modification gives the following error:

    


gpuTex address = 0x4582f6d0
[h264_nvenc @ 00000191233e1bc0] Error registering an input resource: invalid call (9):
[h264_nvenc @ 00000191233e1bc0] Could not register an input HW frame
Error sending a frame to the encoder: Unknown error occurred
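
    For reference, FFmpeg's pixfmt.h documents AV_PIX_FMT_D3D11 frames as carrying an ID3D11Texture2D* in data[0] and the texture array index in data[1], and hardware frames are normally taken from the encoder's hw_frames_ctx pool rather than pointed at an arbitrary application texture. The following is only a minimal sketch of that pool-based pattern, reusing the question's OutputStream type and write_frame helper and assuming gpuTex was created on the same ID3D11Device as the hardware device context:

#include <d3d11.h>
extern "C" {
#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_d3d11va.h>
}

// Hedged sketch only: assumes ost->enc->hw_frames_ctx was initialized as in
// set_hwframe_ctx() below, and that gpuTex lives on the same ID3D11Device.
static int write_gpu_video_frame_from_pool(ID3D11Texture2D* gpuTex, AVFormatContext* oc, OutputStream* ost) {
    AVFrame* hw_frame = av_frame_alloc();
    if (!hw_frame)
        return AVERROR(ENOMEM);

    // Take a frame from the D3D11 pool; FFmpeg fills in data[0] (texture)
    // and data[1] (array slice index).
    int err = av_hwframe_get_buffer(ost->enc->hw_frames_ctx, hw_frame, 0);
    if (err < 0) {
        av_frame_free(&hw_frame);
        return err;
    }

    // Copy the application texture into the pool texture, entirely on the GPU.
    AVHWFramesContext*      frames_ctx = (AVHWFramesContext*) ost->enc->hw_frames_ctx->data;
    AVD3D11VADeviceContext* device_ctx = (AVD3D11VADeviceContext*) frames_ctx->device_ctx->hwctx;
    ID3D11Texture2D*        pool_tex   = (ID3D11Texture2D*) hw_frame->data[0];
    UINT                    pool_index = (UINT) (intptr_t) hw_frame->data[1];
    device_ctx->device_context->CopySubresourceRegion(pool_tex, pool_index, 0, 0, 0, gpuTex, 0, nullptr);

    hw_frame->pts = ost->next_pts++;
    err = write_frame(oc, ost->enc, ost->st, hw_frame);  // same helper as in the question
    av_frame_free(&hw_frame);
    return err;
}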


    



    


    Here's some supplemental code used in setting up and configuring the hw context and encoder:

    


/* A few config flags */
#define ENABLE_NVENC TRUE
#define USE_D3D11 TRUE // Skip downloading textures to CPU memory and send it straight to NVENC


    


/* Init hardware frame context */
static int set_hwframe_ctx(AVCodecContext* ctx, AVBufferRef* hw_device_ctx) {
    AVBufferRef*       hw_frames_ref;
    AVHWFramesContext* frames_ctx = NULL;
    int                err        = 0;

    if (!(hw_frames_ref = av_hwframe_ctx_alloc(hw_device_ctx))) {
        fprintf(stderr, "Failed to create HW frame context.\n");
        throw;
    }
    frames_ctx                    = (AVHWFramesContext*) (hw_frames_ref->data);
    frames_ctx->format            = AV_PIX_FMT_D3D11;
    frames_ctx->sw_format         = AV_PIX_FMT_NV12;
    frames_ctx->width             = STREAM_WIDTH;
    frames_ctx->height            = STREAM_HEIGHT;
    //frames_ctx->initial_pool_size = 20;
    if ((err = av_hwframe_ctx_init(hw_frames_ref)) < 0) {
        fprintf(stderr, "Failed to initialize hw frame context. Error code: %s\n", av_err2str(err));
        av_buffer_unref(&hw_frames_ref);
        throw;
    }
    ctx->hw_frames_ctx = av_buffer_ref(hw_frames_ref);
    if (!ctx->hw_frames_ctx)
        err = AVERROR(ENOMEM);

    av_buffer_unref(&hw_frames_ref);
    return err;
}


    


/* Add an output stream. */
static void add_video_stream(
    OutputStream* ost,
    AVFormatContext* oc,
    const AVCodec** codec,
    enum AVCodecID  codec_id,
    int width,
    int height
) {
    AVCodecContext* c;
    int             i;
    bool            nvenc = false;

    /* find the encoder */
    if (ENABLE_NVENC) {
        printf("Getting nvenc encoder\n");
        *codec = avcodec_find_encoder_by_name("h264_nvenc");
        nvenc  = true;
    }
    
    if (!ENABLE_NVENC || *codec == NULL) {
        printf("Getting standard encoder\n");
        *codec = avcodec_find_encoder(codec_id);  // assign the result so *codec is not left unset
        nvenc  = false;
    }
    if (!(*codec)) {
        fprintf(stderr, "Could not find encoder for '%s'\n",
                avcodec_get_name(codec_id));
        exit(1);
    }

    ost->st = avformat_new_stream(oc, NULL);
    if (!ost->st) {
        fprintf(stderr, "Could not allocate stream\n");
        exit(1);
    }
    ost->st->id = oc->nb_streams - 1;
    c           = avcodec_alloc_context3(*codec);
    if (!c) {
        fprintf(stderr, "Could not alloc an encoding context\n");
        exit(1);
    }
    ost->enc = c;

    printf("Using video codec %s\n", avcodec_get_name(codec_id));

    c->codec_id = codec_id;
    c->bit_rate = 4000000;
    /* Resolution must be a multiple of two. */
    c->width  = STREAM_WIDTH;
    c->height = STREAM_HEIGHT;
    /* timebase: This is the fundamental unit of time (in seconds) in terms
     * of which frame timestamps are represented. For fixed-fps content,
     * timebase should be 1/framerate and timestamp increments should be
     * identical to 1. */
    ost->st->time_base = {1, STREAM_FRAME_RATE};
    c->time_base       = ost->st->time_base;
    c->gop_size = 12; /* emit one intra frame every twelve frames at most */

    if (nvenc && USE_D3D11) {
        const std::string hw_device_name = "d3d11va";
        AVHWDeviceType    device_type    = av_hwdevice_find_type_by_name(hw_device_name.c_str());

        // set up hw device context
        AVBufferRef *hw_device_ctx;
        // const char*  device = "0"; // Default GPU (may be integrated in the case of switchable graphics!)
        const char*  device = "1";
        int ret = av_hwdevice_ctx_create(&hw_device_ctx, device_type, device, nullptr, 0);

        if (ret < 0) {
            fprintf(stderr, "Could not create hwdevice context; %s", av_err2str(ret));
        }

        set_hwframe_ctx(c, hw_device_ctx);
        c->pix_fmt = AV_PIX_FMT_D3D11;
    } else if (nvenc && !USE_D3D11)
        c->pix_fmt = AV_PIX_FMT_BGR0;
    else
        c->pix_fmt = STREAM_PIX_FMT;

    if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
        /* just for testing, we also add B-frames */
        c->max_b_frames = 2;
    }

    if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
        /* Needed to avoid using macroblocks in which some coeffs overflow.
         * This does not happen with normal video, it just happens here as
         * the motion of the chroma plane does not match the luma plane. */
        c->mb_decision = 2;
    }

    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
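
    The snippets above stop before the encoder is actually opened; a minimal sketch of the open step that would typically follow add_video_stream (reusing the question's OutputStream type, with hypothetical error handling) could look like this:

/* Hedged sketch, not from the question: open the codec context prepared by
 * add_video_stream() so the pix_fmt/hw_frames_ctx chosen above take effect. */
static void open_video_stream(const AVCodec* codec, OutputStream* ost) {
    AVCodecContext* c = ost->enc;

    int ret = avcodec_open2(c, codec, nullptr);
    if (ret < 0) {
        fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
        exit(1);
    }

    /* Copy the codec parameters to the muxer-side stream. */
    ret = avcodec_parameters_from_context(ost->st->codecpar, c);
    if (ret < 0) {
        fprintf(stderr, "Could not copy the stream parameters\n");
        exit(1);
    }
}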


    


  • Pims.open Throws "UnknownFormatError" with "Invalid Argument" On One Machine But Not on Another

    12 August 2021, by KaceBellamy

    I'm trying to do some video processing for a physics experiment, and I want to do it on my much more powerful Windows desktop computer as opposed to my Mac laptop.

    


    The following code works like a dream when run as a Jupyter notebook on my Mac:

    import matplotlib as mpl
from mpl_toolkits import mplot3d
import pims
import trackpy as tp 

@pims.pipeline
def gray(image):
    return image[:, :, 1]  # Take just the green channel

frames = gray(pims.open('output.mp4'))


    


    but on my Windows machine I get this error:

---------------------------------------------------------------------------
UnknownFormatError                        Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_12152/704954007.py in <module>
      1 #Actually converts the video. Might be VERY processor intensive... or not?
----> 2 frames = gray(pims.open('output.mp4')) #Make the File Name whatever file you like!

~\miniconda3\lib\site-packages\pims\api.py in open(sequence, **kwargs)
    207             warn(message)
    208             exceptions += message + '\n'
--> 209     raise UnknownFormatError("All handlers returned exceptions:\n" + exceptions)
    210 
    211 

UnknownFormatError: All handlers returned exceptions:
<class> errored: [Errno 22] Invalid argument: 'output.mp4'
<class> errored: [Errno 22] Invalid argument: 'output.mp4'
<class> errored: Could not load meta information
=== stderr ===

ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 9.2.1 (GCC) 20200122
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
C:\Users\Callum\OneDrive - The University of Chicago\output.mp4: Invalid argument
<class> errored: MoviePy error: failed to read the duration of file output.mp4.
Here are the file infos returned by ffmpeg:

ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 9.2.1 (GCC) 20200122
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
output.mp4: Invalid argument


    output.mp4 is just a run-of-the-mill video file taken on a CCD camera; I've tried converting it to a .MOV, I've tried other video files taken on different cameras, and I've tried running the file through FFmpeg to impose a 30 fps framerate; everything I've tried works fine on my Mac and throws the above error on my Windows machine.

    For reference, I installed the necessary packages for this code on both machines this morning, so everything should be up to date and the same on both.

    Any ideas as to what's up? Thanks!

  • How to fix the error "Encoding Failed" using FFMPEG?

    17 August 2021, by Irshad Khan

    I am using FFMPEG to convert mp4 to m3u8 format on my local Ubuntu machine, but I am getting the error "Encoding Failed". My controller code is:

FFMpeg::open(public_path()."/uploads/".$filename)
        ->exportForHLS()
        ->addFormat($lowBitrate)
        ->addFormat($midBitrate)
        ->addFormat($highBitrate)
        ->toDisk(env('APP_ENV'))
        ->save(public_path().'/converted/adaptive_steve.m3u8');


    The error is:


    [screenshot of the error]


    My config file is:


    [screenshot of the config file]


    How can I solve this? Thanks.
