Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (20)

  • Automatic backup of SPIP channels

    1 April 2010, by

    As part of setting up an open platform, it is important for hosting providers to have fairly regular backups available in order to guard against any potential problem.
    To carry out this task, two SPIP plugins are used: Saveauto, which performs a regular backup of the database in the form of a MySQL dump (usable in phpmyadmin), and mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)
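    The division of labour between the two plugins can be sketched outside SPIP. The sketch below is illustrative only: `db_dump_command` merely builds a mysqldump invocation of the kind Saveauto's dump amounts to (it does not run it), and `archive_site` zips a directory the way mes_fichiers_2 archives the site's documents. All names and paths here are assumptions, not the plugins' actual API.

```python
import zipfile
from pathlib import Path


def db_dump_command(db: str, user: str, out_sql: str) -> list[str]:
    # Build (but do not execute) a mysqldump invocation; the resulting
    # .sql dump is the kind of file that can be re-imported via phpmyadmin.
    return ["mysqldump", "-u", user, db, "--result-file=" + out_sql]


def archive_site(data_dir: str, out_zip: str) -> None:
    # Recursively zip the site's important data (documents, etc.),
    # storing paths relative to the data directory.
    root = Path(data_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(root.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(root))
```

    A cron job running these two steps on a schedule would give the "regular backups" the paragraph describes.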

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
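    As a rough illustration of such a conversion step, the sketch below builds ffmpeg command lines targeting the formats named above (H.264/MP4, Theora/Ogv, VP8/WebM for video). The codec choices and flags are assumptions for illustration only; they are not the actual commands MediaSPIP runs.

```python
def encode_commands(src: str, stem: str) -> list[list[str]]:
    # One ffmpeg invocation per target container/codec pair.
    # A real pipeline would add bitrate, scaling and faststart options.
    targets = [
        ("libx264", ".mp4"),    # MP4 (HTML5 and Flash)
        ("libtheora", ".ogv"),  # Ogv (HTML5)
        ("libvpx", ".webm"),    # WebM (HTML5)
    ]
    return [["ffmpeg", "-y", "-i", src, "-c:v", codec, stem + ext]
            for codec, ext in targets]
```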

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (5678)

  • Merge commit '3a0d5e206d24d41d87a25ba16a79b2ea04c39d4c'

    18 October 2017, by James Almer
    Merge commit '3a0d5e206d24d41d87a25ba16a79b2ea04c39d4c'
    

    * commit '3a0d5e206d24d41d87a25ba16a79b2ea04c39d4c':
    arm/aarch64: vp9itxfm: Skip loading the min_eob pointer when it won't be used
    arm: vp9itxfm: Template the quarter/half idct32 function

    This commit is a noop, see
    b7a565fe71d16747209bd66955a54c9b54abc5dd
    70317b25aa35c0907720e4d2b7686408588c07aa

    Merged-by: James Almer <jamrial@gmail.com>

  • Rotation on video frame image lowers video quality and turns it green (ffmpeg, opencv)

    21 November 2013, by bindal

    I am working on an application in which I have to record video on touch, including pausing the recording, so I am using FFmpegFrameRecorder for that.

    When I record video with the rear camera, the yuvIplImage I get in
    onPreviewFrame is in portrait mode, which is correct. But when I record
    with the front camera in portrait mode, the image I get in onPreviewFrame
    is upside down. As a result, half of my video shows in the correct
    portrait orientation and the other half shows upside down, so I apply a
    rotation to yuvIplImage when recording from the front camera.

    Here is my onPreviewFrame Method

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        long frameTimeStamp = 0L;
        if (mAudioTimestamp == 0L && firstTime > 0L)
            frameTimeStamp = 1000L * (System.currentTimeMillis() - firstTime);
        else if (mLastAudioTimestamp == mAudioTimestamp)
            frameTimeStamp = mAudioTimestamp + frameTime;
        else {
            long l2 = (System.nanoTime() - mAudioTimeRecorded) / 1000L;
            frameTimeStamp = l2 + mAudioTimestamp;
            mLastAudioTimestamp = mAudioTimestamp;
        }
        synchronized (mVideoRecordLock) {
            if (recording && rec && lastSavedframe != null
                    && lastSavedframe.getFrameBytesData() != null
                    && yuvIplImage != null) {
                mVideoTimestamp += frameTime;
                if (lastSavedframe.getTimeStamp() > mVideoTimestamp)
                    mVideoTimestamp = lastSavedframe.getTimeStamp();
                try {
                    yuvIplImage.getByteBuffer().put(
                            lastSavedframe.getFrameBytesData());
                    videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());

                    // Earlier attempt (commented out): record into a new IplImage
                    // with swapped width/height:
                    // CvSize size = new CvSize(yuvIplImage.height(), yuvIplImage.width());
                    // IplImage yuvIplImage2 = opencv_core.cvCreateImage(size,
                    //         yuvIplImage.depth(), yuvIplImage.nChannels());
                    // videoRecorder.record(yuvIplImage2);
                    // Also tried: opencv_core.cvTranspose(yuvIplImage, yuvIplImage);

                    // Front camera (defaultCameraId == 1): rotate before recording.
                    if (defaultCameraId == 1) {
                        yuvIplImage = rotate(yuvIplImage, 270);
                        videoRecorder.record(yuvIplImage);
                    } else {
                        videoRecorder.record(yuvIplImage);
                    }
                } catch (com.googlecode.javacv.FrameRecorder.Exception e) {
                    e.printStackTrace();
                }
            }
            lastSavedframe = new SavedFrames(data, frameTimeStamp);
        }
    }

    Here is the rotation function:

    public static IplImage rotate(IplImage image, double angle) {
       IplImage copy = opencv_core.cvCloneImage(image);

       IplImage rotatedImage = opencv_core.cvCreateImage(
               opencv_core.cvGetSize(copy), copy.depth(), copy.nChannels());
       CvMat mapMatrix = opencv_core.cvCreateMat(2, 3, opencv_core.CV_32FC1);

       // Define Mid Point
       CvPoint2D32f centerPoint = new CvPoint2D32f();
       centerPoint.x(copy.width() / 2);
       centerPoint.y(copy.height() / 2);

       // Get Rotational Matrix
       opencv_imgproc.cv2DRotationMatrix(centerPoint, angle, 1.0, mapMatrix);
       // opencv_core.cvReleaseImage(copy);

       // Rotate the Image
       opencv_imgproc.cvWarpAffine(copy, rotatedImage, mapMatrix,
               opencv_imgproc.CV_INTER_CUBIC
                       + opencv_imgproc.CV_WARP_FILL_OUTLIERS,
               opencv_core.cvScalarAll(170));
       opencv_core.cvReleaseImage(copy);
       opencv_core.cvReleaseMat(mapMatrix);
       return rotatedImage;
    }

    But in the final output, half of the video is green.

    Thanks in advance
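    A plausible explanation for the green output (an assumption, not confirmed in the thread): the camera preview buffer is NV21/YUV420SP, i.e. a full-resolution Y plane followed by a half-resolution interleaved V/U plane. Applying cvWarpAffine to the raw buffer rotates it as one flat single-plane image, smearing luma data into the chroma plane, which typically shows up as green. Rotating each plane separately avoids that. A minimal pure-Python sketch of the idea (illustrative only, not javacv code):

```python
def rotate90_nv21(nv21: bytes, width: int, height: int) -> bytes:
    """Rotate an NV21 frame 90 degrees clockwise, plane by plane.

    NV21 layout: width*height Y bytes, then (width*height)//2 bytes of
    interleaved V/U samples at half resolution in each dimension.
    Rotating the raw buffer as one flat image would mix the two planes;
    here each plane is rotated on its own and V/U pairs stay together.
    """
    y = nv21[:width * height]
    vu = nv21[width * height:]
    out = bytearray()

    # Rotate the Y plane: output row x is input column x, read bottom-up.
    for x in range(width):
        for row in range(height - 1, -1, -1):
            out.append(y[row * width + x])

    # Rotate the half-resolution VU plane the same way, two bytes at a time.
    cw, ch = width // 2, height // 2
    for x in range(cw):
        for row in range(ch - 1, -1, -1):
            idx = (row * cw + x) * 2
            out += vu[idx:idx + 2]
    return bytes(out)
```

    In the javacv setting this would mean rotating the Y and VU regions of the buffer separately (or converting to RGB first, rotating, and converting back), rather than warping the packed YUV buffer in one pass.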

  • FFMPEG using AV_PIX_FMT_D3D11 gives "Error registering the input resource" from NVENC

    13 November 2024, by nbabcock

    Input frames start on the GPU as ID3D11Texture2D pointers.


    I encode them to H264 using FFMPEG + NVENC. NVENC works perfectly if I download the textures to CPU memory as format AV_PIX_FMT_BGR0, but I'd like to cut out the CPU texture download entirely and pass the GPU memory pointer directly into the encoder in its native format. I write frames like this:


    int write_gpu_video_frame(ID3D11Texture2D* gpuTex, AVFormatContext* oc, OutputStream* ost) {
        AVFrame *hw_frame = ost->hw_frame;

        printf("gpuTex address = 0x%x\n", &gpuTex);

        hw_frame->data[0] = (uint8_t *) gpuTex;
        hw_frame->data[1] = (uint8_t *) (intptr_t) 0;
        hw_frame->pts     = ost->next_pts++;

        return write_frame(oc, ost->enc, ost->st, hw_frame);
        // write_frame is identical to sample code in ffmpeg repo
    }


    Running the code with this modification gives the following error:


    gpuTex address = 0x4582f6d0
    [h264_nvenc @ 00000191233e1bc0] Error registering an input resource: invalid call (9):
    [h264_nvenc @ 00000191233e1bc0] Could not register an input HW frame
    Error sending a frame to the encoder: Unknown error occurred


    Here's some supplemental code used in setting up and configuring the hw context and encoder:


    /* A few config flags */
    #define ENABLE_NVENC TRUE
    #define USE_D3D11 TRUE // Skip downloading textures to CPU memory and send it straight to NVENC


    /* Init hardware frame context */
    static int set_hwframe_ctx(AVCodecContext* ctx, AVBufferRef* hw_device_ctx) {
        AVBufferRef*       hw_frames_ref;
        AVHWFramesContext* frames_ctx = NULL;
        int                err        = 0;

        if (!(hw_frames_ref = av_hwframe_ctx_alloc(hw_device_ctx))) {
            fprintf(stderr, "Failed to create HW frame context.\n");
            throw;
        }
        frames_ctx                    = (AVHWFramesContext*) (hw_frames_ref->data);
        frames_ctx->format            = AV_PIX_FMT_D3D11;
        frames_ctx->sw_format         = AV_PIX_FMT_NV12;
        frames_ctx->width             = STREAM_WIDTH;
        frames_ctx->height            = STREAM_HEIGHT;
        //frames_ctx->initial_pool_size = 20;
        if ((err = av_hwframe_ctx_init(hw_frames_ref)) < 0) {
            fprintf(stderr, "Failed to initialize hw frame context. Error code: %s\n", av_err2str(err));
            av_buffer_unref(&hw_frames_ref);
            throw;
        }
        ctx->hw_frames_ctx = av_buffer_ref(hw_frames_ref);
        if (!ctx->hw_frames_ctx)
            err = AVERROR(ENOMEM);

        av_buffer_unref(&hw_frames_ref);
        return err;
    }


    /* Add an output stream. */
    static void add_video_stream(
        OutputStream* ost,
        AVFormatContext* oc,
        const AVCodec** codec,
        enum AVCodecID  codec_id,
        int width,
        int height
    ) {
        AVCodecContext* c;
        int             i;
        bool            nvenc = false;

        /* find the encoder */
        if (ENABLE_NVENC) {
            printf("Getting nvenc encoder\n");
            *codec = avcodec_find_encoder_by_name("h264_nvenc");
            nvenc  = true;
        }

        if (!ENABLE_NVENC || *codec == NULL) {
            printf("Getting standard encoder\n");
            avcodec_find_encoder(codec_id);
            nvenc = false;
        }
        if (!(*codec)) {
            fprintf(stderr, "Could not find encoder for '%s'\n",
                    avcodec_get_name(codec_id));
            exit(1);
        }

        ost->st = avformat_new_stream(oc, NULL);
        if (!ost->st) {
            fprintf(stderr, "Could not allocate stream\n");
            exit(1);
        }
        ost->st->id = oc->nb_streams - 1;
        c           = avcodec_alloc_context3(*codec);
        if (!c) {
            fprintf(stderr, "Could not alloc an encoding context\n");
            exit(1);
        }
        ost->enc = c;

        printf("Using video codec %s\n", avcodec_get_name(codec_id));

        c->codec_id = codec_id;
        c->bit_rate = 4000000;
        /* Resolution must be a multiple of two. */
        c->width  = STREAM_WIDTH;
        c->height = STREAM_HEIGHT;
        /* timebase: This is the fundamental unit of time (in seconds) in terms
         * of which frame timestamps are represented. For fixed-fps content,
         * timebase should be 1/framerate and timestamp increments should be
         * identical to 1. */
        ost->st->time_base = {1, STREAM_FRAME_RATE};
        c->time_base       = ost->st->time_base;
        c->gop_size = 12; /* emit one intra frame every twelve frames at most */

        if (nvenc && USE_D3D11) {
            const std::string hw_device_name = "d3d11va";
            AVHWDeviceType    device_type    = av_hwdevice_find_type_by_name(hw_device_name.c_str());

            // set up hw device context
            AVBufferRef *hw_device_ctx;
            // const char*  device = "0"; // Default GPU (may be integrated in the case of switchable graphics!)
            const char*  device = "1";
            ret = av_hwdevice_ctx_create(&hw_device_ctx, device_type, device, nullptr, 0);

            if (ret < 0) {
                fprintf(stderr, "Could not create hwdevice context; %s", av_err2str(ret));
            }

            set_hwframe_ctx(c, hw_device_ctx);
            c->pix_fmt = AV_PIX_FMT_D3D11;
        } else if (nvenc && !USE_D3D11)
            c->pix_fmt = AV_PIX_FMT_BGR0;
        else
            c->pix_fmt = STREAM_PIX_FMT;

        if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
            /* just for testing, we also add B-frames */
            c->max_b_frames = 2;
        }

        if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
            /* Needed to avoid using macroblocks in which some coeffs overflow.
             * This does not happen with normal video, it just happens here as
             * the motion of the chroma plane does not match the luma plane. */
            c->mb_decision = 2;
        }

        /* Some formats want stream headers to be separate. */
        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }
