
Other articles (100)

  • Improving the base version

     13 September 2013

     Nicer multiple selection
     The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images below to compare.
     To use it, simply activate the Chosen plugin (General site configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling the use of Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Publishing on MédiaSpip

     13 June 2013

     Can I post content from an iPad tablet?
     Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • APPENDIX: The plugins used specifically for the farm

     5 March 2010, by

     The central/master site of the farm needs several additional plugins, compared to the channel sites, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (8921)

  • ffmpeg Bmp to yuv : Crash at sws_scale

     28 December 2015, by the-owner

     The context:
     I have a continuous succession of bitmaps and I want to encode them into a lightweight video format.
     I use ffmpeg version 2.8.3 (the build here), under Qt 5, the Qt IDE, and MSVC 2013 for Win32.

     The problem:
     My code crashes at sws_scale() (and sometimes at avcodec_encode_video2()). When I explore the stack, the crash occurs in sws_getCachedContext(). (I can only see the stack with these ffmpeg builds.)
     I only use these ffmpeg libraries (from the Qt .pro file):

    LIBS += -lavcodec -lavformat -lswscale -lavutil

     It's swscale that crashes. And this is the code:

    void newVideo ()
    {
       ULONG_PTR gdiplusToken;
       GdiplusStartupInput gdiplusStartupInput;
       GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);

       initBitmap (); //init bmp
       int screenWidth =  bmp.bmiHeader.biWidth;
       int screenHeight = bmp.bmiHeader.biHeight;

       AVCodec * codec;
       AVCodecContext * c = NULL;
       uint8_t * outbuf;
       int i, out_size, outbuf_size;


       avcodec_register_all();

       qDebug () << "Video encoding\n";

       // Find the mpeg1 video encoder
       codec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if (!codec)
       {
           qDebug () << "Codec not found\n";
           avcodec_close(c);
           av_free(c);
           return;
       }
       else
           qDebug () << "H264 codec found\n";

       c = avcodec_alloc_context3(codec);

       c->bit_rate = 1000000;
       c->width = 800; // resolution must be a multiple of two (1280x720),(1900x1080),(720x480)
       c->height = 600;
       c->time_base.num = 1; // framerate numerator
       c->time_base.den = 25; // framerate denominator
       c->gop_size = 30; // emit one intra frame every 30 frames
       c->max_b_frames = 1; // maximum number of b-frames between non-b-frames
       c->pix_fmt = AV_PIX_FMT_YUV420P; // conversion RGB to YUV ?
       c->codec_id = AV_CODEC_ID_H264;

       struct SwsContext* fooContext = sws_getContext(screenWidth, screenHeight,
                                                      AV_PIX_FMT_RGB32,
                                                      c->width, c->height,
                                                      AV_PIX_FMT_YUV420P,
                                                      SWS_FAST_BILINEAR,
                                                      NULL, NULL, NULL);

       // Open the encoder
       if (avcodec_open2(c, codec, NULL) < 0)
       {
           qDebug () << "Could not open codec\n";
           avcodec_close(c);
           av_free(c);
           return;
       }
       else qDebug () << "H264 codec opened\n";

       outbuf_size = 100000 + c->width*c->height*(32>>3);//*(32>>3); // alloc image and output buffer
       outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
       qDebug() << "Setting buffer size to: " << outbuf_size << "\n";

       FILE* f = fopen("TEST.mpg","wb");
       if(!f) qDebug() << "x - Cannot open video file for writing\n";
       else qDebug() << "Opened video file for writing\n";

       // encode 5 seconds of video
       for (i = 0; i < STREAM_FRAME_RATE*STREAM_DURATION; i++) //the stop condition i < 5.0*5
       {
           qDebug () << "i = " << i;
           fflush(stdout);

           HBITMAP hBmp;
           if (GetScreen(hBmp) == -1) return;
           BYTE * pPixels;// = new BYTE [bmp.bmiHeader.biSizeImage];
           pPixels = getPixels (hBmp);
           DeleteObject (hBmp);

           int nbytes = avpicture_get_size(AV_PIX_FMT_YUV420P, c->width, c->height);
           uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes*sizeof(uint8_t));
           if(!outbuffer) // check if(outbuf) instead
           {
               qDebug () << "Bytes cannot be allocated";
               return;
           }

           AVFrame* inpic = avcodec_alloc_frame(); //av_frame_alloc () ?
           AVFrame* outpic = avcodec_alloc_frame();

           outpic->pts = (int64_t)((float)i * (1000.0/((float)(c->time_base.den))) * 90);
           if (avpicture_fill((AVPicture*) inpic, (uint8_t*) pPixels, AV_PIX_FMT_RGB32,
                          screenWidth, screenHeight) < 0)
               qDebug () <<  "avpicture_fill Fill picture with image failed"; //Fill picture with image

           if(avpicture_fill((AVPicture*) outpic, outbuffer, AV_PIX_FMT_YUV420P,
                          c->width, c->height) < 0)
               qDebug () <<  "avpicture_fill failed";

           if (av_image_alloc(outpic->data, outpic->linesize, c->width, c->height,
                          c->pix_fmt, 1) < 0)
               qDebug () <<  "av_image_alloc failed";

           inpic->data[0] += inpic->linesize[0]*(screenHeight - 1); // Flipping frame
           inpic->linesize[0] = -inpic->linesize[0]; // Flipping frame

    ////////////////////////////HERE THE BUG////////////////////////////////
           sws_scale(fooContext,
                     inpic->data, inpic->linesize,
                     0, c->height,
                     outpic->data, outpic->linesize); //HERE THE BUG

           av_free_packet((AVPacket *)outbuf);
           // encode the image
           out_size = avcodec_encode_video2 (c, (AVPacket *) outbuf,
                                             (AVFrame *) outbuf_size, (int *) outpic);
     ///////////////////////THE CODE DOESN'T GO BEYOND THIS POINT/////////////////////////////////

           qDebug () << "Encoding frame" << i <<" (size=" << out_size <<"\n";
           fwrite(outbuf, 1, out_size, f);
           delete [] pPixels;
           av_free(outbuffer);
           av_free(inpic);
           av_freep(outpic);
       }

       // get the delayed frames
       for(; out_size; i++)
       {
           fflush(stdout);
           out_size = avcodec_encode_video2 (c, (AVPacket *) outbuf,
                                             (AVFrame *) outbuf_size, NULL);
           qDebug () << "Writing frame" << i <<" (size=" << out_size <<"\n";
           fwrite(outbuf, 1, out_size, f);
       }

       // add sequence end code to have a real mpeg file
       outbuf[0] = 0x00;
       outbuf[1] = 0x00;
       outbuf[2] = 0x01;
       outbuf[3] = 0xb7;
       fwrite(outbuf, 1, 4, f);
       fclose(f);

       avcodec_close(c);
       free(outbuf);
       av_free(c);
       qDebug () << "Closed codec and Freed\n";
    }

     And the output:

    Video encoding

    H264 codec found

    H264 codec opened

    Setting buffer size to:  2020000

    Opened video file for writing

    i =  0
    **CRASH**

     I thought that my bitmap wasn't good, so I crafted a bitmap just for testing; the code was:

       uint8_t* pPixels = new uint8_t[Width * 3 * Height];
       int x = 50;
       for(unsigned int i = 0; i < Width * 3 * Height; i = i + 3) // loop for generating color changing images
       {
           pPixels [i] = x % 255; //R
           pPixels [i + 1] = (x) % 255; //G
           pPixels [i + 2] = (255 - x) % 255; //B
       }

     However, the crash continues. Perhaps this proves that it's not the bitmap (pPixels) that has a problem.
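
     A side note (not from the original question): AV_PIX_FMT_RGB32 is a 4-bytes-per-pixel format, so a buffer handed to avpicture_fill(..., AV_PIX_FMT_RGB32, Width, Height) needs Width * 4 * Height bytes; a Width * 3 * Height buffer is smaller than what will be read. A version of the test allocation sized for RGB32 could look like this:

        // Sketch: a test buffer sized for AV_PIX_FMT_RGB32 (4 bytes per pixel).
        uint8_t* pPixels = new uint8_t[Width * 4 * Height];
        for (unsigned int p = 0; p < Width * 4 * Height; p += 4)
        {
            pPixels[p]     = 0;    // B (RGB32 is stored B,G,R,A in memory on little-endian)
            pPixels[p + 1] = 50;   // G
            pPixels[p + 2] = 205;  // R
            pPixels[p + 3] = 255;  // A (opaque)
        }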

     If anyone knows why I get this bug: maybe I don't set one parameter correctly? Or I use a deprecated ffmpeg function? etc.
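
     For comparison, here is a minimal sketch of how one frame is typically converted and encoded with the 2.8-era API. It is not the original code and not a verified fix; it reuses c, fooContext, pPixels, screenWidth, screenHeight, f and i from the code above and assumes pPixels is a bottom-up 32-bit (BGRA) GDI bitmap:

        AVFrame* src = av_frame_alloc();
        AVFrame* dst = av_frame_alloc();

        // Source: wrap the existing BGRA buffer (no copy).
        avpicture_fill((AVPicture*) src, (uint8_t*) pPixels, AV_PIX_FMT_RGB32,
                       screenWidth, screenHeight);
        // GDI bitmaps are bottom-up: point at the last row and use a negative stride.
        src->data[0]    += src->linesize[0] * (screenHeight - 1);
        src->linesize[0] = -src->linesize[0];

        // Destination: allocate real YUV420P planes; sws_scale() needs valid pointers here.
        dst->format = c->pix_fmt;
        dst->width  = c->width;
        dst->height = c->height;
        if (av_image_alloc(dst->data, dst->linesize, c->width, c->height, c->pix_fmt, 32) < 0)
            return;

        // Convert: the slice height is the SOURCE height, not c->height.
        sws_scale(fooContext, src->data, src->linesize, 0, screenHeight,
                  dst->data, dst->linesize);

        // Encode into a real AVPacket (avcodec_encode_video2 takes AVPacket* and AVFrame*).
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL;   // let the encoder allocate the packet buffer
        pkt.size = 0;
        dst->pts = i;      // simple increasing pts in time_base units
        int got_packet = 0;
        if (avcodec_encode_video2(c, &pkt, dst, &got_packet) == 0 && got_packet)
        {
            fwrite(pkt.data, 1, pkt.size, f);
            av_free_packet(&pkt);
        }

        av_freep(&dst->data[0]);
        av_frame_free(&dst);
        av_frame_free(&src);

     Compared with the code above: the destination planes are allocated once and never overwritten, the slice height passed to sws_scale() is the source height, and avcodec_encode_video2() receives a real AVPacket and the converted frame rather than casts of outbuf and outbuf_size.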


     EDIT 1 27/12/15

     Thanks to Ronald S. Bultje, the function sws_scale() does not crash with this code; however, it now gives me a "bad dst image pointers" error. My code:

     //DESTINATION FRAME
     if (avpicture_alloc ((AVPicture*) dst_frame, AV_PIX_FMT_YUV420P, c->width, c->height) < 0)
     {
         qDebug () <<  "# avpicture_alloc failed";
         return;
     }
     if (avpicture_fill((AVPicture*) dst_frame, NULL, AV_PIX_FMT_YUV420P,
                        c->width, c->height) < 0)
         qDebug () <<  "avpicture_fill failed";
     avcodec_align_dimensions2 (c, &c->width, &c->height, dst_frame->linesize);

     //SOURCE FRAME
     if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                        tmp_screenWidth, tmp_screenHeight) < 0)
         qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image
     avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, src_frame->linesize);

     struct SwsContext* conversionContext = sws_getContext(tmp_screenWidth, tmp_screenHeight,
                                                            AV_PIX_FMT_RGB32,
                                                            c->width, c->height,
                                                            AV_PIX_FMT_YUV420P,
                                                            SWS_FAST_BILINEAR,
                                                            NULL, NULL, NULL);

     int output_Height = sws_scale(conversionContext,
                                   src_frame->data, src_frame->linesize,
                                   0, tmp_screenHeight,
                                   dst_frame->data, dst_frame->linesize); //return 0 -> bad dst image pointers error
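
     A note on the error above (my reading, not from the original post): avpicture_fill((AVPicture*) dst_frame, NULL, ...) recomputes dst_frame->data[] as offsets from a NULL base pointer, discarding the planes that avpicture_alloc() had just allocated; those near-NULL pointers are exactly what sws_scale() then rejects as "bad dst image pointers". A minimal sketch of the destination setup without that second call:

        // Sketch: allocate the destination planes once and leave them alone.
        AVFrame* dst_frame = av_frame_alloc();
        if (avpicture_alloc((AVPicture*) dst_frame, AV_PIX_FMT_YUV420P, c->width, c->height) < 0)
        {
            qDebug() << "# avpicture_alloc failed";
            return;
        }
        // No avpicture_fill(dst_frame, NULL, ...) here: with a NULL buffer it would
        // overwrite dst_frame->data[] with offsets computed from a NULL pointer.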

     EDIT 2 28/12/15

     I have tried to follow Ronald S. Bultje's suggestion, and now I get a "bad src image pointers" error. I have investigated and worked on it for many hours, but I cannot find a solution. Here is the new snippet:

    AVFrame* src_frame = av_frame_alloc ();
    AVFrame* dst_frame = av_frame_alloc ();
    AVFrame* tmp_src_frame = av_frame_alloc ();

    /*........I do not use them until this snippet..........*/
    //DESTINATION
    //avpicture_free ((AVPicture*)dst_frame);
    avcodec_align_dimensions2 (c, &c->width, &c->height, dst_frame->linesize);
    if (avpicture_alloc ((AVPicture*) dst_frame, AV_PIX_FMT_YUV420P, c->width, c->height) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }

    //SOURCE
    //stride = src_frame->linesize [0] = ((((screenWidth * bitPerPixel) + 31) & ~31) >> 3); do I need to do that ?
    //== stride - I have gotten this formula from : https://msdn.microsoft.com/en-us/library/windows/desktop/dd318229(v=vs.85).aspx
    if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                      screenWidth, screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image
    //linesize [0] == 21760 like commented stride

    //Source TO TMP Source
    avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, tmp_src_frame->linesize);
    if (avpicture_fill((AVPicture*) tmp_src_frame, NULL, AV_PIX_FMT_RGB32,
                      tmp_screenWidth, tmp_screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image

    av_picture_copy ((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame, AV_PIX_FMT_RGB32,
                    screenWidth, screenHeight);

    struct SwsContext* conversionContext = sws_getContext(tmp_screenWidth, tmp_screenHeight,
                                                         AV_PIX_FMT_RGB32,
                                                         c->width, c->height,
                                                         AV_PIX_FMT_YUV420P,
                                                         SWS_FAST_BILINEAR,
                                                         NULL, NULL, NULL);

    int output_Height = sws_scale(conversionContext,
                                 tmp_src_frame->data, tmp_src_frame->linesize,
                                 0, tmp_screenHeight,
                                 dst_frame->data, dst_frame->linesize);
    //ffmpeg error = bad src image pointers
    // output_Height == 0
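
     This looks like the same family of problem as in EDIT 1 (again my reading, not from the original post): tmp_src_frame is only ever "filled" from a NULL buffer, so its data[] pointers are offsets from NULL and sws_scale() rejects them as "bad src image pointers". Giving the temporary frame real planes, which is essentially what EDIT 3 below switches to, could look like:

        // Sketch: allocate real planes for the temporary source frame instead of
        // filling it from a NULL pointer.
        avcodec_align_dimensions2(c, &tmp_screenWidth, &tmp_screenHeight, tmp_src_frame->linesize);
        if (avpicture_alloc((AVPicture*) tmp_src_frame, AV_PIX_FMT_RGB32,
                            tmp_screenWidth, tmp_screenHeight) < 0)
        {
            qDebug() << "# avpicture_alloc failed";
            return;
        }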

     EDIT 3

     For the temporary picture I have done an avcodec_align_dimensions2(), then an avpicture_alloc() to allocate memory, and an avpicture_fill() in order to fill the picture pointers. Below is the updated code:

    //DESTINATION
    //avpicture_free ((AVPicture*)dst_frame);
    avcodec_align_dimensions2 (c, &c->width, &c->height, dst_frame->linesize);
    if (avpicture_alloc ((AVPicture*) dst_frame, AV_PIX_FMT_YUV420P, c->width, c->height) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }

    //SOURCE
    //src_frame->linesize [0] = ((((screenWidth * bpp) + 31) & ~31) >> 3);
    //src_frame->linesize [0] = stride;
    if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                      screenWidth, screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image

    //Source TO TMP Source
    avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, tmp_src_frame->linesize);
    if (avpicture_alloc ((AVPicture*) tmp_src_frame, AV_PIX_FMT_RGB32, tmp_screenWidth, tmp_screenHeight) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }
    int outbuf_size = tmp_screenWidth*tmp_screenHeight*4;// alloc image and output buffer
     outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
    if (avpicture_fill((AVPicture*) tmp_src_frame, outbuf, AV_PIX_FMT_RGB32,
                      tmp_screenWidth, tmp_screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image
    av_picture_copy ((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame, AV_PIX_FMT_RGB32,
                    tmp_screenWidth, tmp_screenHeight);

    struct SwsContext* conversionContext = sws_getContext(tmp_screenWidth, tmp_screenHeight,
                                                         AV_PIX_FMT_RGB32,
                                                         c->width, c->height,
                                                         AV_PIX_FMT_YUV420P,
                                                         SWS_FAST_BILINEAR,
                                                         NULL, NULL, NULL);

    int output_Height = sws_scale(conversionContext,
                                 tmp_src_frame->data, tmp_src_frame->linesize,
                                 0, tmp_screenHeight,
                                 dst_frame->data, dst_frame->linesize);

     The call stack is as follows: av_picture_copy() is called, then av_image_copy(), then _VEC_memcpy(), then fastcopy_I(), and it crashes... Isn't the problem the dimensions (tmp_screenWidth/Height)? (With av_picture_copy(), can we copy a picture P1 with dimensions W1xH1 to a picture P2 with dimensions W2xH2?)

     EDIT 4

     Crash at av_picture_copy(), which calls _aligned_malloc(), then av_image_copy(), _VEC_memcpy() and fastcopy_I():

    //SOURCE
    if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                      screenWidth, screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image

    //Source TO TMP Source
    avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, tmp_src_frame->linesize);
    if (avpicture_alloc ((AVPicture*) tmp_src_frame, AV_PIX_FMT_RGB32, tmp_screenWidth, tmp_screenHeight) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }
    av_picture_copy ((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame, AV_PIX_FMT_RGB32,
                    tmp_screenWidth, tmp_screenHeight);
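
     Regarding the question at the end of EDIT 3 (my reading, not from the original post): av_picture_copy() does not rescale, it copies a width x height region and assumes both pictures are at least that large. Since src_frame was filled as screenWidth x screenHeight but the copy is done with the aligned tmp_screenWidth x tmp_screenHeight, it reads past the end of pPixels as soon as the aligned dimensions are larger, which would explain the crash inside fastcopy_I(). A sketch that copies only the region the source actually contains, leaving any resizing to sws_scale():

        // Sketch: copy no more than what the source frame really holds;
        // av_picture_copy() copies a fixed-size region, it does not resize.
        int copy_w = FFMIN(screenWidth,  tmp_screenWidth);
        int copy_h = FFMIN(screenHeight, tmp_screenHeight);
        av_picture_copy((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame,
                        AV_PIX_FMT_RGB32, copy_w, copy_h);
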
  • Encode an MPEG-1 video using DirectShow

     4 September 2015, by Kayani

     I need to know whether it is possible to encode images to MPEG-1 video using DirectShow from Microsoft. I found in their documentation that it is possible to convert a video to MPEG-1, but there is nothing about images. Images in my case arrive at runtime.
     Note: I was previously using FFmpeg to achieve this.

  • Programmatically get non-overlapping images from MP4

     15 February 2013, by Carlos F

    My ultimate goal is to get meaningful snapshots from MP4 videos that are either 30 min or 1 hour long. "Meaningful" is a bit ambitious, so I have simplified my requirements.

     The image should be crisp: non-overlapping, and ideally not blurry. Initially, I thought getting a keyframe would work, but I had no idea that keyframes could have overlapping images embedded in them, like this: [image]

     Of course, some keyframe images look like this, and those are much better:

     [image]

     I was wondering if someone might have source code to:

     Take a sequence of, say, 10-15 consecutive keyframes (jpg or png) and identify the best keyframe among them.

     This must happen entirely programmatically. I found this paper: http://research.microsoft.com/pubs/68802/blur_determination_compressed.pdf

     and felt that I could "rank" a few images based on the above paper, but then I was dissuaded by this link: Extracting DCT coefficients from encoded images and video, given that my source video is an MP4. Of course, this confuses me, because the input into the system is just a sequence of jpg images.

     Another link that is interesting is:

     Detection of Blur in Images/Video sequences

    However, I am not sure if this will work for "overlapping" images.

     Any ideas?
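
     For what it's worth, a simple and common proxy for per-image sharpness (this is not from the question, and it is not the DCT-based method of the linked paper) is the variance of the Laplacian: sharp frames contain more high-frequency content, so among a group of extracted keyframes the one with the highest variance can be taken as the "best". A sketch using OpenCV:

        // Sketch: rank decoded keyframes (jpg/png files) by variance of the Laplacian
        // and keep the sharpest one. Assumes OpenCV is available.
        #include <opencv2/opencv.hpp>
        #include <string>
        #include <vector>

        static double sharpness(const cv::Mat& img)
        {
            cv::Mat gray, lap;
            cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
            cv::Laplacian(gray, lap, CV_64F);     // high-pass the image
            cv::Scalar mean, stddev;
            cv::meanStdDev(lap, mean, stddev);
            return stddev[0] * stddev[0];         // variance of the Laplacian response
        }

        std::string bestKeyframe(const std::vector<std::string>& files)
        {
            std::string best;
            double bestScore = -1.0;
            for (const std::string& f : files)
            {
                cv::Mat img = cv::imread(f);
                if (img.empty())
                    continue;                     // skip unreadable files
                double s = sharpness(img);
                if (s > bestScore) { bestScore = s; best = f; }
            }
            return best;                          // path of the sharpest keyframe
        }

     Whether the sharpest frame is also the least "overlapping" one is not guaranteed; this is only a heuristic for ranking within a small group of candidate keyframes.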