
Media (91)

Other articles (23)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04

    If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send us the necessary fixes to add (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects/individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (7421)

  • Interactive flash content on iOS

    20 November 2013, by Pratik Goswami

    I want to play interactive (user input/action driven) Flash content and videos on iOS devices. I have FLV files in which the user can provide input such as selecting an option, turning a page, etc.

    I have two approaches in mind for this functionality. Please correct me if I am wrong.

    1. Adobe AIR can be used on iPad devices. Does it have the ability to parse Flash content at runtime (i.e. use the Flash content as resources/a bundle)?

    2. With the help of the FFmpeg library the Flash files/videos will work, but will it provide user actions/interactions?

  • Adding Border to the image using FFmpeg

    11 July 2014, by user2882101

    I wish to add a border to a single strip of images using FFmpeg.

    I have been searching for this on Google and tried the command ffmpeg -i input.jpg -vf "drawbox=x=50:y=10:w=104:h=80:color=white" output.jpg, but I am unable to increase the border size or get my desired colour; only a black border works. How can I increase the border and change its colour for the single strip of images? Can anybody help me?
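
    A hedged sketch of two ways this is commonly done with FFmpeg's filters (the x/y/w/h values are the ones from the question above; the thickness, padding and colours are illustrative assumptions, not a definitive answer):

    # drawbox draws a rectangle; its "t" option sets the line thickness and
    # "color" accepts named or hex colours ("t=fill" would fill the box instead).
    ffmpeg -i input.jpg -vf "drawbox=x=50:y=10:w=104:h=80:color=red:t=5" output.jpg

    # pad adds a solid border around the whole image (here 10 px of white on every side).
    ffmpeg -i input.jpg -vf "pad=w=iw+20:h=ih+20:x=10:y=10:color=white" output.jpg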

  • ffmpeg video to opengl texture

    23 April 2017, by Infiniti Fizz

    I’m trying to render frames grabbed from a video and converted with ffmpeg to an OpenGL texture to be put on a quad. I’ve pretty much exhausted Google and not found an answer; well, I’ve found answers, but none of them seem to have worked.

    Basically, I am using avcodec_decode_video2() to decode the frame, then sws_scale() to convert the frame to RGB, and then glTexSubImage2D() to create an OpenGL texture from it, but I can’t seem to get anything to work.

    I’ve made sure the "destination" AVFrame has power-of-2 dimensions in the SwsContext setup. Here is my code:

    SwsContext *img_convert_ctx = sws_getContext(pCodecCtx->width,
                   pCodecCtx->height, pCodecCtx->pix_fmt, 512,
                   256, PIX_FMT_RGB24, SWS_BICUBIC, NULL,
                   NULL, NULL);

    //While still frames to read
    while(av_read_frame(pFormatCtx, &packet)>=0) {
       glClear(GL_COLOR_BUFFER_BIT);

       //If the packet is from the video stream
       if(packet.stream_index == videoStream) {
           //Decode the video
           avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

           //If we got a frame then convert it and put it into RGB buffer
           if(frameFinished) {
               printf("frame finished: %i\n", number);
               sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

               glBindTexture(GL_TEXTURE_2D, texture);
               //gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pCodecCtx->width, pCodecCtx->height, GL_RGB, GL_UNSIGNED_INT, pFrameRGB->data);
               glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, 512, 256, GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
               SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, number);
               number++;
           }
       }

       glColor3f(1,1,1);
       glBindTexture(GL_TEXTURE_2D, texture);
       glBegin(GL_QUADS);
           glTexCoord2f(0,1);
           glVertex3f(0,0,0);

           glTexCoord2f(1,1);
           glVertex3f(pCodecCtx->width,0,0);

           glTexCoord2f(1,0);
           glVertex3f(pCodecCtx->width, pCodecCtx->height,0);

           glTexCoord2f(0,0);
           glVertex3f(0,pCodecCtx->height,0);

       glEnd();

    As you can see in that code, I am also saving the frames to .ppm files just to make sure they are actually rendering, which they are.

    The file being used is a .wmv at 854x480; could this be the problem? The fact that I’m just telling it to go to 512x256?

    P.S. I’ve looked at this Stack Overflow question but it didn’t help.

    Also, I have glEnable(GL_TEXTURE_2D) and have tested it by just loading in a normal BMP.

    EDIT

    I’m getting an image on the screen now, but it is a garbled mess; I’m guessing it has something to do with changing things to a power of 2 (in the decode, the SwsContext and gluBuild2DMipmaps, as shown in my code). I’m using nearly exactly the same code as shown above, only I’ve changed glTexSubImage2D to gluBuild2DMipmaps and changed the types to GL_RGBA.

    Here is what the frame looks like:

    [Image: FFmpeg frame rendered as an OpenGL texture, garbled]
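
    As a general aside, a sketch of one common cause of this kind of shear (assuming the asker's pFrameRGB variable): OpenGL's unpack state may not match the actual row layout of the RGB buffer produced by sws_scale(), so stating the stride and alignment explicitly before the upload can help.

    // Illustrative sketch only: make GL's unpack state match the converted frame's layout.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                           // RGB24 rows are rarely 4-byte aligned
    glPixelStorei(GL_UNPACK_ROW_LENGTH, pFrameRGB->linesize[0] / 3); // row stride in pixels, in case rows are padded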

    EDIT AGAIN

    Just realised I haven’t shown the code for how pFrameRGB is set up:

    //Allocate video frame for 24bit RGB that we convert to.
    AVFrame *pFrameRGB;
    pFrameRGB = avcodec_alloc_frame();

    if(pFrameRGB == NULL) {
       return -1;
    }

    //Allocate memory for the raw data we get when converting.
    uint8_t *buffer;
    int numBytes;
    numBytes = avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
    buffer = (uint8_t *) av_malloc(numBytes*sizeof(uint8_t));

    //Associate frame with our buffer
    avpicture_fill((AVPicture *) pFrameRGB, buffer, PIX_FMT_RGB24,
       pCodecCtx->width, pCodecCtx->height);

    Now that I have changed the pixel format in avpicture_get_size to PIX_FMT_RGB24, I’ve done the same in the SwsContext and changed gluBuild2DMipmaps to use GL_RGB, and I get a slightly better image, but it looks like I’m still missing lines and it’s still a bit stretched:

    [Image: FFmpeg frame rendered as an OpenGL texture, still garbled]

    Another Edit

    After following Macke’s advice and passing the actual resolution to OpenGL, I get the frames looking nearly right, but they are still a bit skewed and in black and white; also, it’s only getting 6 fps now rather than 110 fps:

    [Image: frame rendered at the video's actual resolution, skewed and in black and white]

    P.S.

    I’ve got a function that saves the frames to images after sws_scale(), and they come out fine, in colour and everything, so something in OpenGL is making them black and white.

    LAST EDIT

    Working! Okay, I have it working now; basically, I am not padding the texture out to a power of 2 and am just using the video's own resolution.

    I got the texture showing up properly with a lucky guess at the correct glPixelStorei():

    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);

    Also, if anyone else has the problem of glTexSubImage2D() showing up blank like me, you have to fill the texture at least once with glTexImage2D(), so I use it once in the loop and then use glTexSubImage2D() after that.
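
    To make that final recipe concrete, here is a minimal sketch of the texture path described above, assuming the same variable names as the code earlier in the question (texture, pCodecCtx, pFrameRGB); the alignment of 1 is an assumption that is safe for any frame width (the 2 used above also happens to work for this video's width):

    glBindTexture(GL_TEXTURE_2D, texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // RGB24 rows are not 4-byte aligned for every width

    // With no mipmaps generated, the default min filter leaves the texture incomplete,
    // so use a non-mipmapped filter.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Allocate storage once, at the real video resolution (no power-of-two padding).
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pCodecCtx->width, pCodecCtx->height,
                 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);

    // Then, for every frame that has been decoded and converted with sws_scale():
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, pCodecCtx->width, pCodecCtx->height,
                    GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);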

    Thanks Macke and datenwolf for all your help.