Media (91)

Other articles (86)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, OGV and WebM (supported by HTML5), with MP4 also playable via Flash.
    Audio files are encoded as MP3 and Ogg (supported by HTML5), with MP3 also playable via Flash.
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
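    As a hedged illustration (a sketch, not MediaSPIP's actual code), the format targets listed above map onto ffmpeg invocations along these lines, wrapped here in a minimal C++ driver; the input file names are placeholders:

    // Sketch only: one ffmpeg CLI call per target format named in the article.
    #include <cstdlib>

    int main()
    {
        // HTML5 video targets; MP4 doubles as the Flash-compatible format
        std::system("ffmpeg -i input.mov -c:v libx264 -c:a aac out.mp4");
        std::system("ffmpeg -i input.mov -c:v libvpx -c:a libvorbis out.webm");
        std::system("ffmpeg -i input.mov -c:v libtheora -c:a libvorbis out.ogv");
        // HTML5 audio targets; MP3 doubles as the Flash-compatible format
        std::system("ffmpeg -i input.wav -c:a libmp3lame out.mp3");
        std::system("ffmpeg -i input.wav -c:a libvorbis out.ogg");
        return 0;
    }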

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administrer" (administer) section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be activated.
    Each newly added language can still be deactivated as long as no object has been created in that language; once one has, the language becomes greyed out in the configuration and (...)

On other sites (11855)

  • Render FFmpeg AVFrame as OpenGL texture?

    5 March 2019, by ZeroDefect

    I'm attempting to render a JPEG image (1024x1024 pixels), held in an FFmpeg AVFrame, as a texture in OpenGL. What I get instead is something that appears as a 1024x1024 dark green quad:

    (screenshot: dark green quad)

    The code to render the AVFrame data in OpenGL is shown below. I have convinced myself that the raw RGB data held within the AVFrame is not solely dark green.

    // Headers assumed for this snippet (not shown in the original question);
    // load_image_to_AVFrame is the poster's helper, defined elsewhere.
    #include <GL/glut.h>
    #include <memory>
    #include <cassert>
    extern "C" {
    #include <libavutil/frame.h>   // AVFrame
    }

    GLuint g_texture = {};

    //////////////////////////////////////////////////////////////////////////
    void display()
    {
       // Clear color and depth buffers
       glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
       glMatrixMode(GL_MODELVIEW);     // Operate on model-view matrix

       glEnable(GL_TEXTURE_2D);
       GLuint texture = g_texture;
       glBindTexture(GL_TEXTURE_2D, texture);

       // Draw a quad
       glBegin(GL_QUADS);
       glVertex2i(0, 0); // top left
       glVertex2i(1024, 0); // top right
       glVertex2i(1024, 1024); // bottom right
       glVertex2i(0, 1024); // bottom left
       glEnd();

       glDisable(GL_TEXTURE_2D);
       glBindTexture(GL_TEXTURE_2D, 0);

       glFlush();
    }

    /* Initialize OpenGL Graphics */
    void initGL(int w, int h)
    {
       glViewport(0, 0, w, h); // use a screen size of WIDTH x HEIGHT
       glEnable(GL_TEXTURE_2D);     // Enable 2D texturing

       glMatrixMode(GL_PROJECTION);     // Make a simple 2D projection on the entire window
       glOrtho(0.0, w, h, 0.0, 0.0, 100.0);
       glMatrixMode(GL_MODELVIEW);    // Set the matrix mode to object modeling
       //glTranslatef( 0, 0, -15 );

       glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
       glClearDepth(0.0f);
       glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the window
    }

    //////////////////////////////////////////////////////////////////////////
    int main(int argc, char *argv[])
    {
       std::shared_ptr<AVFrame> apAVFrame;
       if (!load_image_to_AVFrame(apAVFrame, "marble.jpg"))
       {
           assert(false);
           return 1;
       }

       // From here on out, the AVFrame is RGB interleaved
       // and is sized to 1,024 x 1,024 (power of 2).

       glutInit(&argc, argv);
       glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
       glutInitWindowSize(1060, 1060);
       glutInitWindowPosition(0, 0);
       glutCreateWindow("OpenGL - Creating a texture");

       glGenTextures(1, &g_texture);

       //glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
       glBindTexture(GL_TEXTURE_2D, g_texture);
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, apAVFrame->width,
                    apAVFrame->height, 0, GL_RGB, GL_UNSIGNED_BYTE,
                    apAVFrame->data[0]);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* We will use linear interpolation for magnification filter */
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* We will use linear interpolation for minifying filter */

       initGL(1060, 1060);

       glutDisplayFunc(display);

       glutMainLoop();

       return 0;
    }

    Environment:

    • Ubuntu 18.04
    • GCC v8.2

    EDIT: As per @immibis' suggestion below, it all works when I change the rendering of the quad to:

    // Draw a quad
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex2i(0, 0); // top left
    glTexCoord2f(1, 0);
    glVertex2i(1024, 0); // top right
    glTexCoord2f(1, 1);
    glVertex2i(1024, 1024); // bottom right
    glTexCoord2f(0, 1);
    glVertex2i(0, 1024); // bottom left
    glEnd();
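
    Why the fix works: with no glTexCoord2f calls, every vertex uses the current texture coordinate, which defaults to (0, 0), so the whole quad samples a single texel, hence the uniform colour. A related pitfall when uploading AVFrame data is row padding; a minimal sketch, assuming a packed RGB24 frame:

    // If the AVFrame's rows are padded (linesize[0] > width * 3), tell GL the
    // true row length in pixels before glTexImage2D; otherwise rows shear.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, apAVFrame->linesize[0] / 3);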
  • FFMPEG to OpenGL Texture

    23 April 2014, by Spamdark

    I'm here to ask how I can convert an AVFrame to an OpenGL texture. I created a renderer that outputs the audio (the audio works) and the video, but the video is not displaying. Here is my code:

    Texture creation:

    glGenTextures(1, &_texture);
    glBindTexture(GL_TEXTURE_2D,_texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

    Code info: the _texture variable is a GLuint that holds the texture ID.

    Function that gets the AVFrame and converts it to an OpenGL texture:

    int VideoGL::NextVideoFrame(){
    // Get a packet from the queue
    AVPacket *videopacket = this->DEQUEUE(VIDEO);
    int frameFinished;
    if(videopacket!=0){
       avcodec_decode_video2(_codec_context_video, _std_frame, &frameFinished, videopacket);

       if(frameFinished){

           sws_scale(sws_ctx, _std_frame->data, _std_frame->linesize, 0, _codec_context_video->height, _rgb_frame->data, _rgb_frame->linesize);

           if(_firstrendering){
           glBindTexture(GL_TEXTURE_2D,_texture);
           glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _codec_context_video->width,_codec_context_video->height,0,GL_RGB,GL_UNSIGNED_BYTE,_rgb_frame->data[0]);

           _firstrendering = false;

           }else{

               glActiveTexture(_texture);
               glBindTexture(GL_TEXTURE_2D,_texture);
               glTexSubImage2D(GL_TEXTURE_2D,0,0,0,_codec_context_video->width,_codec_context_video->height,GL_RGB,GL_UNSIGNED_BYTE,_rgb_frame->data[0]);

           }
           av_free_packet(videopacket);
           return 0;
       }else{

           av_free_packet(videopacket);
           return -1;
       }

    }else{
       return -1;
    }
    return 0;
    }

    Code information: a thread stores the packets in a queue, and this function is called repeatedly to fetch them; it stops being called once it gets a NULL.

    That is not actually working. (I have looked at some related Stack Overflow questions, but it still doesn't work.)
    Could anyone give an example, or help me correct whatever error is in there?

    Additional data: I tried changing GL_RGB to GL_RGBA and playing with the formats, but it crashes when I use GL_RGBA (the width and height are very large, though I tried resizing them). I have also tried changing the sizes to powers of two; it still doesn't work.
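
    A plausible cause of the GL_RGBA crash, as a hedged note: glTexImage2D with format GL_RGBA reads 4 bytes per pixel, but a buffer that sws_scale filled as RGB24 holds only 3 per pixel, so GL reads past the end of the allocation. The conversion target has to match the upload format; a sketch, where w, h and src_pix_fmt are placeholders:

    // Hypothetical fix: when uploading with GL_RGBA, convert to a 4-byte
    // pixel format too (PIX_FMT_RGBA in FFmpeg of this era, AV_PIX_FMT_RGBA
    // in modern FFmpeg).
    sws_ctx = sws_getContext(w, h, src_pix_fmt,
                             w, h, AV_PIX_FMT_RGBA,
                             SWS_BILINEAR, NULL, NULL, NULL);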

    Edit 1:

    Thread function:

    DWORD WINAPI VideoGL::VidThread(LPVOID myparam){

    VideoGL * instance = (VideoGL*) myparam;
    instance->wave_audio->Start();

    int quantity=0;

    AVPacket packet;
    while(av_read_frame(instance->_format_context, &packet) >= 0){
       if(packet.stream_index==instance->videoStream){
           instance->ENQUEUE(VIDEO, &packet);
       }
       if(packet.stream_index==instance->audioStream){
           instance->ENQUEUE(AUDIO, &packet);
       }
       }
    }

    instance->ENQUEUE(AUDIO,NULL);
    instance->ENQUEUE(VIDEO,NULL);

    return 0;
    }

    Thread creation function:

    CreateThread(NULL, 0, VidThread, this, NULL, NULL);

    Where this refers to the class instance that contains the NextVideoFrame function and the _texture member.

    Solved:

    I followed some of datenwolf's tips, and now the video is displaying correctly alongside the audio:

    (screenshot)
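
    For readers who land here, a minimal sketch of the working shape of this path (the helper name and parameters are assumptions, not the poster's code, and datenwolf's actual tips were given in comments that are not reproduced in this excerpt):

    extern "C" {
    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>
    }
    #include <GL/gl.h>

    // Convert a decoded frame to packed RGB24 and update the existing texture.
    void upload_frame(GLuint tex, AVFrame *src, AVFrame *rgb, SwsContext *sws)
    {
        // Convert from the decoder's native format (e.g. YUV420P) to RGB24.
        sws_scale(sws, src->data, src->linesize, 0, src->height,
                  rgb->data, rgb->linesize);

        glBindTexture(GL_TEXTURE_2D, tex);
        // AVFrame rows may be padded; give GL the true stride in pixels.
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glPixelStorei(GL_UNPACK_ROW_LENGTH, rgb->linesize[0] / 3);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, src->width, src->height,
                        GL_RGB, GL_UNSIGNED_BYTE, rgb->data[0]);
    }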

  • rtp: Initial H.261 support

    6 December 2014, by Thomas Volkert

    The packetizer only supports splitting at GOB headers - if
    such aren't available frequently enough, it splits at any
    random byte offset (not at a macroblock boundary either, which
    would be allowed by the spec) and sends a payload header pretending
    that it starts with a GOB header.

    As long as a receiver doesn’t try to handle such cases cleverly
    but just drops broken frames, this shouldn’t matter too much
    in practice.

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • Changelog
    • libavformat/Makefile
    • libavformat/rtpdec.c
    • libavformat/rtpdec_formats.h
    • libavformat/rtpdec_h261.c
    • libavformat/rtpenc.c
    • libavformat/rtpenc.h
    • libavformat/rtpenc_h261.c
    • libavformat/sdp.c
    • libavformat/version.h
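
    For reference, the payload header the commit message refers to is the 32-bit H.261 header defined in RFC 4587; a hedged background sketch of its layout, not part of the commit:

    // RFC 4587 H.261 payload header; field widths in bits:
    //   SBIT(3) EBIT(3) I(1) V(1) GOBN(4) MBAP(5) QUANT(5) HMVD(5) VMVD(5)
    // Per the RFC, GOBN = 0 signals that the packet begins with a GOB header,
    // which is what "pretending that it starts with a GOB header" refers to.
    #include <cstdint>

    static uint32_t h261_payload_header(unsigned sbit, unsigned ebit,
                                        unsigned i, unsigned v,
                                        unsigned gobn, unsigned mbap,
                                        unsigned quant, unsigned hmvd,
                                        unsigned vmvd)
    {
        return (sbit << 29) | (ebit << 26) | (i << 25) | (v << 24) |
               (gobn << 20) | (mbap << 15) | (quant << 10) |
               (hmvd << 5) | vmvd;
    }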