Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (82)

  • Improvements to the base version

    13 September 2013

    A nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the following two images for a comparison.
    To do this, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by activating the use of Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
    This lets channel administrators configure these menus in detail.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with skeletons based on Zpip; (...)

  • PHP5-specific configuration

    4 February 2011, by

    PHP5 is mandatory; you can install it by following this dedicated tutorial.
    It is recommended to disable safe_mode at first; however, if it is correctly configured and the necessary binaries are accessible, MediaSPIP should work correctly with safe_mode enabled.
    Specific modules
    Certain specific PHP modules must be installed, via your distribution's package manager or manually: php5-mysql for connectivity with the (...)
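    For example (an illustration, not part of the original excerpt), on a Debian-based system of that era such a module would typically be installed with apt-get install php5-mysql.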

On other sites (4841)

  • ffmpeg video to opengl texture

    23 April 2017, by Infiniti Fizz

    I’m trying to render frames grabbed and converted from a video using ffmpeg as an OpenGL texture to be put on a quad. I’ve pretty much exhausted Google without finding an answer; well, I’ve found answers, but none of them seem to have worked.

    Basically, I am using avcodec_decode_video2() to decode the frame, then sws_scale() to convert the frame to RGB, and then glTexSubImage2D() to create an OpenGL texture from it, but I can’t seem to get anything to work.

    I’ve made sure the "destination" AVFrame has power-of-two dimensions in the SWS context setup. Here is my code:

    SwsContext *img_convert_ctx = sws_getContext(pCodecCtx->width,
                   pCodecCtx->height, pCodecCtx->pix_fmt, 512,
                   256, PIX_FMT_RGB24, SWS_BICUBIC, NULL,
                   NULL, NULL);
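
    // NOTE: the destination size here is hard-coded to 512x256, while pFrameRGB
    // (set up further down) is allocated at pCodecCtx->width x pCodecCtx->height.
    // As the final edit below shows, this size/stride mismatch is what garbles the
    // glTexSubImage2D() upload; converting at the native resolution fixes it.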

    //While still frames to read
    while(av_read_frame(pFormatCtx, &packet)>=0) {
       glClear(GL_COLOR_BUFFER_BIT);

       //If the packet is from the video stream
       if(packet.stream_index == videoStream) {
           //Decode the video
           avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

           //If we got a frame then convert it and put it into RGB buffer
           if(frameFinished) {
               printf("frame finished: %i\n", number);
               sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

               glBindTexture(GL_TEXTURE_2D, texture);
               //gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pCodecCtx->width, pCodecCtx->height, GL_RGB, GL_UNSIGNED_INT, pFrameRGB->data);
               glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, 512, 256, GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
               SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, number);
               number++;
           }
       }

       glColor3f(1,1,1);
       glBindTexture(GL_TEXTURE_2D, texture);
       glBegin(GL_QUADS);
           glTexCoord2f(0,1);
           glVertex3f(0,0,0);

           glTexCoord2f(1,1);
           glVertex3f(pCodecCtx->width,0,0);

           glTexCoord2f(1,0);
           glVertex3f(pCodecCtx->width, pCodecCtx->height,0);

           glTexCoord2f(0,0);
           glVertex3f(0,pCodecCtx->height,0);

       glEnd();

       //Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
    }

    As you can see in that code, I am also saving the frames to .ppm files just to make sure they are actually rendering, which they are.

    The file being used is a .wmv at 854x480; could this be the problem? The fact that I’m just telling it to go to 512x256?

    P.S. I’ve looked at this Stack Overflow question but it didn’t help.

    Also, I have glEnable(GL_TEXTURE_2D) set as well, and have tested it by just loading in a normal BMP.

    EDIT

    I’m getting an image on the screen now, but it is a garbled mess; I’m guessing it is something to do with changing things to a power of two (in the decode, the SwsContext and gluBuild2DMipmaps, as shown in my code). I’m using nearly exactly the same code as shown above, only I’ve changed glTexSubImage2D to gluBuild2DMipmaps and changed the types to GL_RGBA.

    Here is what the frame looks like:

    [Image: FFmpeg frame rendered as a garbled OpenGL texture]

    EDIT AGAIN

    Just realised I haven’t shown the code for how pFrameRGB is set up:

    //Allocate video frame for 24bit RGB that we convert to.
    AVFrame *pFrameRGB;
    pFrameRGB = avcodec_alloc_frame();

    if(pFrameRGB == NULL) {
       return -1;
    }

    //Allocate memory for the raw data we get when converting.
    uint8_t *buffer;
    int numBytes;
    numBytes = avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
    buffer = (uint8_t *) av_malloc(numBytes*sizeof(uint8_t));

    //Associate frame with our buffer
    avpicture_fill((AVPicture *) pFrameRGB, buffer, PIX_FMT_RGB24,
       pCodecCtx->width, pCodecCtx->height);

    Now that I have changed the PixelFormat in avpicture_get_size to PIX_FMT_RGB24, done the same in the SwsContext, and changed the gluBuild2DMipmaps format to GL_RGB, I get a slightly better image, but it looks like I’m still missing lines and it’s still a bit stretched:

    [Image: FFmpeg frame as an OpenGL texture, still garbled]

    Another Edit

    After following Macke’s advice and passing the actual resolution to OpenGL, I get the frames nearly right, but they are still a bit skewed and in black and white; also, it’s only getting 6fps now rather than 110fps:

    [Image: skewed black-and-white frame]

    P.S.

    I’ve got a function that saves the frames to images after sws_scale(), and they come out fine, in colour and everything, so something on the OpenGL side is making them B&W.

    LAST EDIT

    Working! Okay, I have it working now. Basically, I am not padding the texture out to a power of two, and am just using the video’s own resolution.

    I got the texture showing up properly with a lucky guess at the correct glPixelStorei():

    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);

    Also, if anyone else has the problem of glTexSubImage2D() showing a blank texture like me: you have to fill the texture at least once with glTexImage2D(), so I use it once in the loop and then use glTexSubImage2D() after that. A sketch of the resulting setup follows below.
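
    Pulling the final fixes together, here is a minimal sketch of the working approach (not the poster’s verbatim code; texture, pCodecCtx, pFrame and pFrameRGB are assumed from the snippets above): convert at the video’s native resolution, allocate the texture once with glTexImage2D(), and update it per frame with glTexSubImage2D().

    //One-time setup: a texture at the video’s native size, no power-of-two padding.
    glBindTexture(GL_TEXTURE_2D, texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //byte-aligned rows; 2 also worked here
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pCodecCtx->width, pCodecCtx->height,
                 0, GL_RGB, GL_UNSIGNED_BYTE, NULL); //allocate storage, no data yet
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); //no mipmaps
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    //SwsContext converting to the same native size, so linesizes match the upload.
    SwsContext *img_convert_ctx = sws_getContext(
        pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
        pCodecCtx->width, pCodecCtx->height, PIX_FMT_RGB24,
        SWS_BICUBIC, NULL, NULL, NULL);

    //Per decoded frame: convert to RGB, then update the existing texture in place.
    sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0,
              pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, pCodecCtx->width, pCodecCtx->height,
                    GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);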

    Thanks Macke and datenwolf for all your help.

  • How to identify exact type or variation of a .mp4 file

    5 October 2017, by Dave502619

    Can anyone tell me if there are different variations of .mp4 file and, if so, how to identify the exact type or variation of an .mp4 file?

    The reason I need to know this is that if I pass an .mp4 through FFmpeg to create an uncompressed grayscale rgb24 AVI file, the internal structure of the output AVI will differ depending on where the .mp4 file was sourced from; i.e., the file header and inter-frame header sizes differ.

    The ffmpeg command I am using is:
    ffmpeg.exe -i source.mp4 -b 1150 -r 20.97 -g 120 -an -vf format=gray -f rawvideo -pix_fmt rgb24 -s 384x216 -vcodec rawvideo -y fileX.avi

    So far I have identified that .mp4 files generated by my Samsung S5 mobile phone differ from .mp4 files generated by Power Director 14. So I suspect there are different variations of .mp4 files.

    I have written some software which steps through the FFmpeg output .avi file to extract video frames, but it relies on fixed offset positions, so I can only make it work for one variation of .mp4.
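
    One concrete signal of such a "variation" is the major brand recorded in the leading ftyp box of the MP4 container (e.g. isom, mp42): different producers, such as phones and editing software, often write different brands there. As an illustration rather than a definitive answer, a minimal C sketch that reads it directly might look like this (ffprobe typically reports the same information as the major_brand format tag):

    #include <stdio.h>
    #include <string.h>

    //Print the major brand and compatible brands from an MP4’s leading ftyp box.
    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s file.mp4\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        //A box starts with a 32-bit big-endian size followed by a 4-char type.
        unsigned char hdr[8];
        if (fread(hdr, 1, 8, f) != 8 || memcmp(hdr + 4, "ftyp", 4) != 0) {
            fprintf(stderr, "first box is not ftyp\n");
            fclose(f);
            return 1;
        }
        unsigned long size = ((unsigned long)hdr[0] << 24) | (hdr[1] << 16)
                           | (hdr[2] << 8) | hdr[3];

        unsigned char brand[4];
        if (fread(brand, 1, 4, f) == 4)      //major_brand
            printf("major brand: %.4s\n", (char *)brand);
        fseek(f, 4, SEEK_CUR);               //skip minor_version

        //The rest of the box is a list of 4-byte compatible brands.
        for (unsigned long off = 16; off + 4 <= size; off += 4) {
            if (fread(brand, 1, 4, f) != 4) break;
            printf("compatible brand: %.4s\n", (char *)brand);
        }
        fclose(f);
        return 0;
    }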

  • Connect FFServer multiple instances

    3 March 2020, by absentio

    I am trying to deploy FFServer on Kubernetes and to use the power of distributed systems.
    It’s the first time I am using either, so I am a bit confused.
    I got my Kubernetes setup working on my bare-metal server; the LoadBalancer and CNI are working flawlessly. I then created an FFServer deployment and an FFServer service, and set up NFS storage to share ffserver.conf and the feed files, but something strange is happening.

    All my ffserver k8s pods load the ffserver.conf file with no problem. Then, when I start to stream using ffmpeg, the load balancer hands my stream to one of the servers (I’ll call it pod1). The problem is that I can get the stream to play when connecting directly to pod1, but it will not work if I try to get it from pod2, even though pod2 can read the feed.ffm written by pod1.

    The NFS storage is set up with ReadWriteMany. How can I get this to work? Is there any way to use multiple FFServer instances without having to ffmpeg to each of them one by one?