
Media (91)

Other articles (111)

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around installation difficulties, caused mainly by server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it, you must have SSH access to your server and a "root" account, which is needed to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries:
    FFMpeg: the main encoder; it transcodes almost every type of video and audio file into formats playable on the web. See this tutorial for its installation;
    Oggz-tools: tools for inspecting ogg files;
    Mediainfo: retrieves information from most video and audio formats;
    Complementary, optional binaries:
    flvtool2: (...)

On other sites (8986)

  • ffmpeg video to opengl texture

    23 April 2017, by Infiniti Fizz

    I’m trying to render frames grabbed and converted from a video using ffmpeg to an OpenGL texture to be put on a quad. I’ve pretty much exhausted Google: I’ve found answers, but none of them seem to have worked.

    Basically, I am using avcodec_decode_video2() to decode each frame, then sws_scale() to convert it to RGB, and then glTexSubImage2D() to upload it as an OpenGL texture, but I can’t seem to get anything to work.

    I’ve made sure the "destination" AVFrame has power-of-2 dimensions in the SwsContext setup. Here is my code:

    SwsContext *img_convert_ctx = sws_getContext(pCodecCtx->width,
                   pCodecCtx->height, pCodecCtx->pix_fmt, 512,
                   256, PIX_FMT_RGB24, SWS_BICUBIC, NULL,
                   NULL, NULL);

    //While still frames to read
    while(av_read_frame(pFormatCtx, &packet)>=0) {
       glClear(GL_COLOR_BUFFER_BIT);

       //If the packet is from the video stream
       if(packet.stream_index == videoStream) {
           //Decode the video
           avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

           //If we got a frame then convert it and put it into RGB buffer
           if(frameFinished) {
               printf("frame finished: %i\n", number);
               sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

               glBindTexture(GL_TEXTURE_2D, texture);
               //gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pCodecCtx->width, pCodecCtx->height, GL_RGB, GL_UNSIGNED_INT, pFrameRGB->data);
               glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, 512, 256, GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
               SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, number);
               number++;
           }
       }

       glColor3f(1,1,1);
       glBindTexture(GL_TEXTURE_2D, texture);
       glBegin(GL_QUADS);
           glTexCoord2f(0,1);
           glVertex3f(0,0,0);

           glTexCoord2f(1,1);
           glVertex3f(pCodecCtx->width,0,0);

           glTexCoord2f(1,0);
           glVertex3f(pCodecCtx->width, pCodecCtx->height,0);

           glTexCoord2f(0,0);
           glVertex3f(0,pCodecCtx->height,0);

    glEnd();
    } //end of the while(av_read_frame(...)) loop

    As you can see in that code, I am also saving the frames to .ppm files just to make sure they are actually rendering, which they are.

    The file being used is a .wmv at 854x480. Could this be the problem? The fact that I’m just telling it to go to 512x256?

    P.S. I’ve looked at this Stack Overflow question but it didn’t help.

    Also, I have glEnable(GL_TEXTURE_2D) set, and I have tested the texturing by just loading in a normal BMP.

    EDIT

    I’m getting an image on the screen now, but it is a garbled mess. I’m guessing it is something to do with changing things to a power of 2 (in the decode, the SwsContext and gluBuild2DMipmaps, as shown in my code). I’m using nearly exactly the same code as shown above, only I’ve changed glTexSubImage2D to gluBuild2DMipmaps and changed the types to GL_RGBA.

    Here is what the frame looks like:

    [Image: FFmpeg frame rendered as an OpenGL texture, garbled]

    EDIT AGAIN

    Just realised I haven’t shown the code for how pFrameRGB is set up:

    //Allocate video frame for 24bit RGB that we convert to.
    AVFrame *pFrameRGB;
    pFrameRGB = avcodec_alloc_frame();

    if(pFrameRGB == NULL) {
       return -1;
    }

    //Allocate memory for the raw data we get when converting.
    uint8_t *buffer;
    int numBytes;
    numBytes = avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
    buffer = (uint8_t *) av_malloc(numBytes*sizeof(uint8_t));

    //Associate frame with our buffer
    avpicture_fill((AVPicture *) pFrameRGB, buffer, PIX_FMT_RGB24,
       pCodecCtx->width, pCodecCtx->height);

    Now that I have changed the PixelFormat in avpicture_get_size to PIX_FMT_RGB24, done the same in the SwsContext, and changed gluBuild2DMipmaps to use GL_RGB, I get a slightly better image, but it looks like I’m still missing lines and it’s still a bit stretched:

    [Image: FFmpeg frame as an OpenGL texture, still garbled]
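
    For reference, here is a minimal sketch (with hypothetical names, using the same era of the avpicture API) of keeping the sizes consistent everywhere, since the destination passed to sws_getContext(), the buffer filled with avpicture_fill() and the region uploaded with glTexSubImage2D() all have to agree:

    //Destination size used consistently below (hypothetical 512x256 target).
    const int dstW = 512, dstH = 256;

    //1. The SwsContext scales into dstW x dstH...
    struct SwsContext *ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                   pCodecCtx->pix_fmt, dstW, dstH, PIX_FMT_RGB24,
                   SWS_BICUBIC, NULL, NULL, NULL);

    //2. ...the RGB frame's buffer is sized for dstW x dstH...
    int numBytes = avpicture_get_size(PIX_FMT_RGB24, dstW, dstH);
    uint8_t *buffer = (uint8_t *) av_malloc(numBytes);
    avpicture_fill((AVPicture *) pFrameRGB, buffer, PIX_FMT_RGB24, dstW, dstH);

    //3. ...and the conversion and upload both cover exactly dstW x dstH.
    sws_scale(ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height,
       pFrameRGB->data, pFrameRGB->linesize);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, dstW, dstH,
       GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);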

    ANOTHER EDIT

    After following Macke’s advice and passing the actual resolution to OpenGL, I get the frames nearly right, but they are still a bit skewed and in black and white. Also, it’s only getting 6fps now rather than 110fps:

    [Image: frames nearly correct but slightly skewed and in black and white]

    P.S.

    I’ve got a function that saves the frames to image files after sws_scale(), and they come out fine, in full colour and everything, so something on the OpenGL side is making them black and white.

    LAST EDIT

    Working! Okay, I have it working now: basically, I am not padding the texture out to a power of 2, and am just using the video’s native resolution.

    I got the texture showing up properly with a lucky guess at the correct glPixelStorei() (which makes sense in hindsight: an 854-pixel RGB24 row is 2562 bytes, a multiple of 2 but not of the default 4-byte unpack alignment):

    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);

    Also, if anyone else has the problem of glTexSubImage2D() showing a blank texture like I did: you have to fill the texture at least once with glTexImage2D(), so I use it once in the loop and then use glTexSubImage2D() after that; see the sketch below.
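
    In case it helps anyone else, here is a minimal sketch of that allocate-once-then-update pattern (vidW and vidH are hypothetical names for the video’s native size; it assumes RGB24 output from sws_scale() and uses an unpack alignment of 1, the conservative choice for tightly packed rows):

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    //Tightly packed 24-bit RGB rows: drop the default 4-byte row alignment.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    //No mipmaps are generated, so pick a non-mipmap minification filter,
    //otherwise the texture is incomplete and samples as black/blank.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    //Allocate the texture storage once; the data pointer may be NULL here.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, vidW, vidH, 0,
       GL_RGB, GL_UNSIGNED_BYTE, NULL);

    //Then, for every decoded and converted frame, only update the contents:
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, vidW, vidH,
       GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);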

    Thanks Macke and datenwolf for all your help.

  • vaapi_encode: Check config attributes before creating config

    18 May 2016, by Mark Thompson
    vaapi_encode: Check config attributes before creating config
    

    This prevents attempts to use unsupported modes, such as low-power
    H.264 mode on non-Skylake targets. Also fixes a crash on invalid
    configuration, when trying to destroy an invalid VA config/context.

    • libavcodec/vaapi_encode.c
  • Forwarding RTSP streams to client from private networked server via proxy

    21 June 2016, by beNerd

    I have a setup with two machines (remote VPSes):

    1. Server One - This one has good processing power in terms of hardware, and its IP cannot be accessed publicly. It is privately networked to a proxy (Server Two), i.e. it can only be reached through the proxy server. It runs nodejs/expressjs and ffmpeg/ffserver on Ubuntu.

    2. Server Two - Reverse proxy. Publicly accessible. It runs nginx, which pipes requests through to Server One.

    Now, I have client apps that need to play RTSP streams configured in the ffserver instance residing on Server One. Since I cannot access Server One directly, only via the proxy, I need a mechanism where I can accept RTSP requests in my nodejs API (which receives requests from the nginx proxy via a proxy_pass config block), do some validation (session tokens here), and then, once validated, contact the underlying ffserver to ask for the stream. As soon as I receive the stream, I should be able to forward it to the requesting client.

    Possible? How?
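
    One hedged sketch of a possible building block (an untested assumption, not a known-good setup): RTSP is its own protocol, not HTTP, so an http-block proxy_pass in nginx will not carry it. nginx’s stream module can forward the raw TCP connection instead, with RTP running interleaved over that same connection; any session-token check would then still have to happen out of band, e.g. in the nodejs layer before a client is given the stream address. The private IP below is hypothetical.

    stream {
       server {
           listen 554;                  #public RTSP port on Server Two
           proxy_pass 10.0.0.2:554;     #hypothetical private address of Server One
       }
    }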