Advanced search


Other articles (38)

  • Taking part in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    At present MediaSPIP is only available in French and (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (3730)

  • ffmpeg Bmp to yuv: Crash at sws_scale

    28 December 2015, by the-owner

    The context:
    I have a continuous succession of bitmaps that I want to encode into a lightweight video format.
    I use ffmpeg version 2.8.3 (the build here), under Qt 5, the Qt IDE, and msvc2013 for win32.

    The problem:
    My code crashes at sws_scale() (and sometimes at avcodec_encode_video2()). When I explore the stack, the crash occurs inside sws_getCachedContext(). (I can only see the stack with these ffmpeg builds.)
    I only use these ffmpeg libraries (from the Qt .pro file):

    LIBS += -lavcodec -lavformat -lswscale -lavutil

    It is swscale that misbehaves. Here is the code:

    void newVideo ()
    {
       ULONG_PTR gdiplusToken;
       GdiplusStartupInput gdiplusStartupInput;
       GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);

       initBitmap (); //init bmp
       int screenWidth =  bmp.bmiHeader.biWidth;
       int screenHeight = bmp.bmiHeader.biHeight;

       AVCodec * codec;
       AVCodecContext * c = NULL;
       uint8_t * outbuf;
       int i, out_size, outbuf_size;


       avcodec_register_all();

       qDebug () << "Video encoding\n";

       // Find the mpeg1 video encoder
       codec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if (!codec)
       {
           qDebug () << "Codec not found\n";
           avcodec_close(c);
           av_free(c);
           return;
       }
       else
           qDebug () << "H264 codec found\n";

       c = avcodec_alloc_context3(codec);

       c->bit_rate = 1000000;
       c->width = 800; // resolution must be a multiple of two (1280x720),(1900x1080),(720x480)
       c->height = 600;
       c->time_base.num = 1; // framerate numerator
       c->time_base.den = 25; // framerate denominator
       c->gop_size = 30; // emit one intra frame every ten frames
       c->max_b_frames = 1; // maximum number of b-frames between non b-frames
       c->pix_fmt = AV_PIX_FMT_YUV420P; //Converstion RGB to YUV ?
       c->codec_id = AV_CODEC_ID_H264;

       struct SwsContext* fooContext = sws_getContext(screenWidth, screenHeight,
                                                      AV_PIX_FMT_RGB32,
                                                      c->width, c->height,
                                                      AV_PIX_FMT_YUV420P,
                                                      SWS_FAST_BILINEAR,
                                                      NULL, NULL, NULL);

       // Open the encoder
       if (avcodec_open2(c, codec, NULL) < 0)
       {
           qDebug () << "Could not open codec\n";
           avcodec_close(c);
           av_free(c);
           return;
       }
       else qDebug () << "H264 codec opened\n";

       outbuf_size = 100000 + c->width*c->height*(32>>3);//*(32>>3); // alloc image and output buffer
       outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
       qDebug() << "Setting buffer size to: " << outbuf_size << "\n";

       FILE* f = fopen("TEST.mpg","wb");
       if(!f) qDebug() << "x - Cannot open video file for writing\n";
       else qDebug() << "Opened video file for writing\n";

       // encode 5 seconds of video
       for (i = 0; i < STREAM_FRAME_RATE*STREAM_DURATION; i++) //the stop condition i < 5.0*5
       {
           qDebug () << "i = " << i;
           fflush(stdout);

           HBITMAP hBmp;
           if (GetScreen(hBmp) == -1) return;
           BYTE * pPixels;// = new BYTE [bmp.bmiHeader.biSizeImage];
           pPixels = getPixels (hBmp);
           DeleteObject (hBmp);

           int nbytes = avpicture_get_size(AV_PIX_FMT_YUV420P, c->width, c->height);
           uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes*sizeof(uint8_t));
           if(!outbuffer) // check if(outbuf) instead
           {
               qDebug () << "Bytes cannot be allocated";
               return;
           }

           AVFrame* inpic = avcodec_alloc_frame(); //av_frame_alloc () ?
           AVFrame* outpic = avcodec_alloc_frame();

           outpic->pts = (int64_t)((float)i * (1000.0/((float)(c->time_base.den))) * 90);
           if (avpicture_fill((AVPicture*) inpic, (uint8_t*) pPixels, AV_PIX_FMT_RGB32,
                          screenWidth, screenHeight) < 0)
               qDebug () <<  "avpicture_fill Fill picture with image failed"; //Fill picture with image

           if(avpicture_fill((AVPicture*) outpic, outbuffer, AV_PIX_FMT_YUV420P,
                          c->width, c->height) < 0)
               qDebug () <<  "avpicture_fill failed";

           if (av_image_alloc(outpic->data, outpic->linesize, c->width, c->height,
                          c->pix_fmt, 1) < 0)
               qDebug () <<  "av_image_alloc failed";

           inpic->data[0] += inpic->linesize[0]*(screenHeight - 1); // Flipping frame
           inpic->linesize[0] = -inpic->linesize[0]; // Flipping frame

    ////////////////////////////HERE THE BUG////////////////////////////////
           sws_scale(fooContext,
                     inpic->data, inpic->linesize,
                     0, c->height,
                     outpic->data, outpic->linesize); //HERE THE BUG

           av_free_packet((AVPacket *)outbuf);
           // encode the image
           out_size = avcodec_encode_video2 (c, (AVPacket *) outbuf,
                                             (AVFrame *) outbuf_size, (int *) outpic);
    ///////////////////////THE CODE DONT GO BEYOND/////////////////////////////////

           qDebug () << "Encoding frame" << i <<" (size=" << out_size <<"\n";
           fwrite(outbuf, 1, out_size, f);
           delete [] pPixels;
           av_free(outbuffer);
           av_free(inpic);
           av_freep(outpic);
       }

       // get the delayed frames
       for(; out_size; i++)
       {
           fflush(stdout);
           out_size = avcodec_encode_video2 (c, (AVPacket *) outbuf,
                                             (AVFrame *) outbuf_size, NULL);
           qDebug () << "Writing frame" << i <<" (size=" << out_size <<"\n";
           fwrite(outbuf, 1, out_size, f);
       }

       // add sequence end code to have a real mpeg file
       outbuf[0] = 0x00;
       outbuf[1] = 0x00;
       outbuf[2] = 0x01;
       outbuf[3] = 0xb7;
       fwrite(outbuf, 1, 4, f);
       fclose(f);

       avcodec_close(c);
       free(outbuf);
       av_free(c);
       qDebug () << "Closed codec and Freed\n";
    }

    And the output:

    Video encoding

    H264 codec found

    H264 codec opened

    Setting buffer size to:  2020000

    Opened video file for writing

    i =  0
    **CRASH**

    I thought my bitmap might not be good, so I crafted a bitmap just for testing; the code was:

       uint8_t* pPixels = new uint8_t[Width * 3 * Height];
       int x = 50;
       for(unsigned int i = 0; i < Width * 3 * Height; i = i + 3) // loop for generating color changing images
       {
           pPixels [i] = x % 255; //R
           pPixels [i + 1] = (x) % 255; //G
           pPixels [i + 2] = (255 - x) % 255; //B
       }

    However, the crash continues. Perhaps this shows that the bitmap (pPixels) is not the problem.

    Does anyone know why I get this bug? Maybe I did not set a parameter correctly, or maybe I am using a deprecated ffmpeg function, etc.
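
    For comparison, here is a minimal, self-contained sketch (not the code above) of the same RGB32 to YUV420P conversion with the FFmpeg 2.8-era API, allocating the destination planes explicitly so sws_scale() never receives NULL or dangling pointers. The function name rgb32ToYuv420p() and its parameters are illustrative, and the source is assumed to be a tightly packed 32-bit bitmap like pPixels above:

    extern "C" {
    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }

    // Sketch: convert one packed RGB32 buffer into a freshly allocated YUV420P frame.
    static AVFrame *rgb32ToYuv420p(const uint8_t *pPixels,
                                   int srcWidth, int srcHeight,
                                   int dstWidth, int dstHeight)
    {
        // Wrap the source bitmap: RGB32 is 4 bytes per pixel, one packed plane.
        const uint8_t *srcData[4] = { pPixels, NULL, NULL, NULL };
        int srcLinesize[4] = { 4 * srcWidth, 0, 0, 0 };

        // Give the destination real storage; data[] and linesize[] are filled here.
        AVFrame *dst = av_frame_alloc();
        if (!dst)
            return NULL;
        dst->format = AV_PIX_FMT_YUV420P;
        dst->width  = dstWidth;
        dst->height = dstHeight;
        if (av_image_alloc(dst->data, dst->linesize,
                           dstWidth, dstHeight, AV_PIX_FMT_YUV420P, 32) < 0) {
            av_frame_free(&dst);
            return NULL;
        }

        SwsContext *sws = sws_getContext(srcWidth, srcHeight, AV_PIX_FMT_RGB32,
                                         dstWidth, dstHeight, AV_PIX_FMT_YUV420P,
                                         SWS_FAST_BILINEAR, NULL, NULL, NULL);
        if (!sws) {
            av_freep(&dst->data[0]);
            av_frame_free(&dst);
            return NULL;
        }
        sws_scale(sws, srcData, srcLinesize, 0, srcHeight,
                  dst->data, dst->linesize);
        sws_freeContext(sws);
        return dst; // free with av_freep(&frame->data[0]), then av_frame_free(&frame)
    }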


    EDIT 1 27/12/15

    Thanks to Ronald S. Bultje, the function sws_scale() no longer crashes with the code below; however, it now returns a bad dst image pointers error. My code:

    //DESTINATION FRAME
    if (avpicture_alloc ((AVPicture*) dst_frame, AV_PIX_FMT_YUV420P, c->width, c->height) < 0)
    {
        qDebug () <<  "# avpicture_alloc failed";
        return;
    }
    if (avpicture_fill((AVPicture*) dst_frame, NULL, AV_PIX_FMT_YUV420P,
                       c->width, c->height) < 0)
        qDebug () <<  "avpicture_fill failed";
    avcodec_align_dimensions2 (c, &c->width, &c->height, dst_frame->linesize);

    //SOURCE FRAME
    if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                       tmp_screenWidth, tmp_screenHeight) < 0)
        qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image
    avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, src_frame->linesize);

    struct SwsContext* conversionContext = sws_getContext(tmp_screenWidth, tmp_screenHeight,
                                                          AV_PIX_FMT_RGB32,
                                                          c->width, c->height,
                                                          AV_PIX_FMT_YUV420P,
                                                          SWS_FAST_BILINEAR,
                                                          NULL, NULL, NULL);

    int output_Height = sws_scale(conversionContext,
                                  src_frame->data, src_frame->linesize,
                                  0, tmp_screenHeight,
                                  dst_frame->data, dst_frame->linesize); //returns 0 -> bad dst image pointers error
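
    A note on what likely triggers the bad dst image pointers return (my reading of the 2.8-era API, not a confirmed fix): avpicture_fill() with a NULL buffer only computes plane offsets from a NULL base, so it overwrites the pointers that avpicture_alloc() had just set and leaves dst_frame->data[0] == NULL, which sws_scale() rejects. A sketch of the destination setup without that second fill, reusing dst_frame, c and qDebug() from the snippet above:

    // Sketch: allocate the YUV420P destination once and leave its pointers alone.
    // avpicture_alloc() already fills dst_frame->data[] and dst_frame->linesize[];
    // a later avpicture_fill(..., NULL, ...) would reset data[0] back to NULL.
    if (avpicture_alloc((AVPicture*) dst_frame, AV_PIX_FMT_YUV420P,
                        c->width, c->height) < 0)
    {
        qDebug() << "# avpicture_alloc failed";
        return;
    }
    // dst_frame is now a valid sws_scale() destination; no further fill is needed.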

    EDIT 2 28/12/15

    I have tried to follow Ronald S. Bultje's suggestion and now I get a bad src image pointers error. I have investigated and worked for many hours but I cannot find a solution. Here is the new snippet:

    AVFrame* src_frame = av_frame_alloc ();
    AVFrame* dst_frame = av_frame_alloc ();
    AVFrame* tmp_src_frame = av_frame_alloc ();

    /*........I do not use them until this snippet..........*/
    //DESTINATION
    //avpicture_free ((AVPicture*)dst_frame);
    avcodec_align_dimensions2 (c, &c->width, &c->height, dst_frame->linesize);
    if (avpicture_alloc ((AVPicture*) dst_frame, AV_PIX_FMT_YUV420P, c->width, c->height) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }

    //SOURCE
    //stride = src_frame->linesize [0] = ((((screenWidth * bitPerPixel) + 31) & ~31) >> 3); do I need to do that ?
    //== stride - I have gotten this formula from : https://msdn.microsoft.com/en-us/library/windows/desktop/dd318229(v=vs.85).aspx
    if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                      screenWidth, screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image
    //linesize [0] == 21760 like commented stride

    //Source TO TMP Source
    avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, tmp_src_frame->linesize);
    if (avpicture_fill((AVPicture*) tmp_src_frame, NULL, AV_PIX_FMT_RGB32,
                      tmp_screenWidth, tmp_screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image

    av_picture_copy ((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame, AV_PIX_FMT_RGB32,
                    screenWidth, screenHeight);

    struct SwsContext* conversionContext = sws_getContext(tmp_screenWidth, tmp_screenHeight,
                                                         AV_PIX_FMT_RGB32,
                                                         c->width, c->height,
                                                         AV_PIX_FMT_YUV420P,
                                                         SWS_FAST_BILINEAR,
                                                         NULL, NULL, NULL);

    int output_Height = sws_scale(conversionContext,
                                 tmp_src_frame->data, tmp_src_frame->linesize,
                                 0, tmp_screenHeight,
                                 dst_frame->data, dst_frame->linesize);
    //ffmpeg error = bad src image pointers
    // output_Height == 0
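
    The same reading applies here (again a sketch, not a verified fix): tmp_src_frame gets its linesize[] from avcodec_align_dimensions2() but is then filled with a NULL buffer, so its data[] pointers stay NULL and sws_scale() rejects them as bad src image pointers. Giving the temporary frame real storage before the copy, with the same variables as above, would look like:

    // Sketch: allocate actual RGB32 storage for tmp_src_frame before copying into it.
    avcodec_align_dimensions2(c, &tmp_screenWidth, &tmp_screenHeight,
                              tmp_src_frame->linesize);
    if (avpicture_alloc((AVPicture*) tmp_src_frame, AV_PIX_FMT_RGB32,
                        tmp_screenWidth, tmp_screenHeight) < 0)
    {
        qDebug() << "# avpicture_alloc failed";
        return;
    }
    // tmp_src_frame->data[0] is now non-NULL, which is essentially what the
    // avpicture_alloc() call in EDIT 3 below ends up doing.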

    EDIT 3

    For the temporary picture I call avcodec_align_dimensions2(), then avpicture_alloc() to allocate memory, and avpicture_fill() to fill the picture pointers. Below is the updated code:

    //DESTINATION
    //avpicture_free ((AVPicture*)dst_frame);
    avcodec_align_dimensions2 (c, &c->width, &c->height, dst_frame->linesize);
    if (avpicture_alloc ((AVPicture*) dst_frame, AV_PIX_FMT_YUV420P, c->width, c->height) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }

    //SOURCE
    //src_frame->linesize [0] = ((((screenWidth * bpp) + 31) & ~31) >> 3);
    //src_frame->linesize [0] = stride;
    if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                      screenWidth, screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image

    //Source TO TMP Source
    avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, tmp_src_frame->linesize);
    if (avpicture_alloc ((AVPicture*) tmp_src_frame, AV_PIX_FMT_RGB32, tmp_screenWidth, tmp_screenHeight) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }
    int outbuf_size = tmp_screenWidth*tmp_screenHeight*4;// alloc image and output buffer
    outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
    if (avpicture_fill((AVPicture*) tmp_src_frame, outbuf, AV_PIX_FMT_RGB32,
                      tmp_screenWidth, tmp_screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image
    av_picture_copy ((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame, AV_PIX_FMT_RGB32,
                    tmp_screenWidth, tmp_screenHeight);

    struct SwsContext* conversionContext = sws_getContext(tmp_screenWidth, tmp_screenHeight,
                                                         AV_PIX_FMT_RGB32,
                                                         c->width, c->height,
                                                         AV_PIX_FMT_YUV420P,
                                                         SWS_FAST_BILINEAR,
                                                         NULL, NULL, NULL);

    int output_Height = sws_scale(conversionContext,
                                 tmp_src_frame->data, tmp_src_frame->linesize,
                                 0, tmp_screenHeight,
                                 dst_frame->data, dst_frame->linesize);

    The call stack is as follows: av_picture_copy() is called, then av_image_copy(), then _VEC_memcpy(), then fastcopy_I(), and then the crash... Could the problem be the dimensions (tmp_screenWidth/Height)? (With av_picture_copy(), can we copy a picture P1 with dimensions W1xH1 to a picture P2 with dimensions W2xH2?)
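
    For what it is worth, av_picture_copy() does not rescale: it copies a single width x height region plane by plane, so both pictures must own buffers covering at least that region. Since src_frame only wraps screenWidth x screenHeight worth of pPixels, copying tmp_screenWidth x tmp_screenHeight reads past the end of the source. A hypothetical bounded copy (FFMIN is the libavutil macro):

    // Sketch: copy only the region the source actually backs; the aligned
    // temporary frame may be larger, which is fine for the destination of the
    // copy but not for its source.
    av_picture_copy((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame,
                    AV_PIX_FMT_RGB32,
                    FFMIN(screenWidth,  tmp_screenWidth),
                    FFMIN(screenHeight, tmp_screenHeight));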

    EDIT 4

    Crash at av_picture_copy(), which calls _aligned_malloc(), then av_image_copy(), _VEC_memcpy() and fastcopy_I():

    //SOURCE
    if (avpicture_fill((AVPicture*) src_frame, (uint8_t *) pPixels, AV_PIX_FMT_RGB32,
                      screenWidth, screenHeight) < 0)
       qDebug () <<  "# avpicture_fill Fill picture with image failed"; //Fill picture with image

    //Source TO TMP Source
    avcodec_align_dimensions2 (c, &tmp_screenWidth, &tmp_screenHeight, tmp_src_frame->linesize);
    if (avpicture_alloc ((AVPicture*) tmp_src_frame, AV_PIX_FMT_RGB32, tmp_screenWidth, tmp_screenHeight) < 0)
    {
       qDebug () <<  "# avpicture_alloc failed";
       return;
    }
    av_picture_copy ((AVPicture*) tmp_src_frame, (const AVPicture*) src_frame, AV_PIX_FMT_RGB32,
                    tmp_screenWidth, tmp_screenHeight);

  • Live Photos on Android implementation?

    18 November 2016, by mohit gupta

    I have come up with a project that implements Live Photos capturing on Android, and I will hopefully make it open source. Spoiler: it takes a 1.5-second shot before and after you click the picture, then combines them into a video. When I tried it, I was unable to do this with MediaRecorder on Android; I even experimented with FFMPEG, but it just does not work. Does anyone have an idea, or a little light I can follow, to implement this? My minimum supported API level is 16.

  • qsv: Fix loading multiple plugins

    15 March 2016, by Luca Barbato
    qsv: Fix loading multiple plugins


    av_get_token does not strip the trailing separator.

    • [DBH] libavcodec/qsv.c
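
    For illustration, a sketch of the general pattern this fixes (not the actual patch): av_get_token() leaves *buf pointing at the terminating separator, so a loop that walks a colon-separated plugin list has to skip that separator itself, otherwise every iteration after the first parses an empty token. The function name for_each_plugin() is hypothetical.

    #include <errno.h>
    extern "C" {
    #include <libavutil/avstring.h>
    #include <libavutil/error.h>
    #include <libavutil/mem.h>
    }

    // Sketch: iterate over a colon-separated list of plugin identifiers.
    static int for_each_plugin(const char *load_plugins)
    {
        while (load_plugins && *load_plugins) {
            char *plugin = av_get_token(&load_plugins, ":");
            if (!plugin)
                return AVERROR(ENOMEM);

            /* ... hand 'plugin' to the actual loading code here ... */

            av_freep(&plugin);
            if (*load_plugins == ':')  // av_get_token() did NOT consume the ':'
                load_plugins++;
        }
        return 0;
    }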