Other articles (19)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
    • the browser you are using, including the exact version
    • as precise an explanation of the problem as possible
    • if possible, the steps that led to the problem
    • a link to the site / page in question
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Adding notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (4288)

  • avcodec_decode_video2 fails to decode after frame resolution change

    7 October 2016, by Krzysztof Kansy

    I’m using ffmpeg in an Android project via JNI to decode a real-time H264 video stream. On the Java side I’m only sending the byte arrays into the native module. The native code runs a loop, checking the data buffers for new data to decode. Each data chunk is processed with:

    int bytesLeft = data->GetSize();
    int parserLength = 0;
    int decodeDataLength = 0;
    int gotPicture = 0;
    const uint8_t* buffer = data->GetData();
    while (bytesLeft > 0) {
       AVPacket packet;
       av_init_packet(&packet);
       // Split the raw buffer into complete packets before decoding.
       parserLength = av_parser_parse2(_codecParser, _codecCtx, &packet.data, &packet.size, buffer, bytesLeft, AV_NOPTS_VALUE, AV_NOPTS_VALUE, AV_NOPTS_VALUE);
       bytesLeft -= parserLength;
       buffer += parserLength;

       if (packet.size > 0) {
           decodeDataLength = avcodec_decode_video2(_codecCtx, _frame, &gotPicture, &packet);
       }
       else {
           break;
       }
       av_free_packet(&packet);
    }

    if (gotPicture) {
       // pass the frame to rendering
    }

    The system works pretty well until the incoming video’s resolution changes. I need to handle the transition between 4:3 and 16:9 aspect ratios. The AVCodecContext is configured as follows:

    _codecCtx->flags2 |= CODEC_FLAG2_FAST;
    _codecCtx->thread_count = 2;
    _codecCtx->thread_type = FF_THREAD_FRAME;

    if (_codec->capabilities & CODEC_FLAG_LOW_DELAY) {
       _codecCtx->flags |= CODEC_FLAG_LOW_DELAY;
    }

    I wasn’t able to continue decoding new frames after a video resolution change. The got_picture_ptr flag, which avcodec_decode_video2 sets when a whole frame is available, was never true after that.
    This ticket made me wonder whether the issue is connected with multithreading. The only useful thing I’ve noticed is that when I change thread_type to FF_THREAD_SLICE, the decoder is not always blocked after a resolution change; about half of my attempts were successful. Switching to single-threaded processing is not possible because I need more computing power: setting the context to one thread does not solve the problem, and the decoder can no longer keep up with the incoming data.
    Everything works well after an app restart.

    I can only think of one workaround (it doesn’t really solve the problem): unloading and reloading the whole library after a stream resolution change (e.g. as mentioned here). I don’t think it’s a good one, though; it will probably introduce other bugs and take a lot of time (from the user’s viewpoint).

    Is it possible to fix this issue?

    EDIT:
    I’ve dumped the stream data that is passed to the decoding pipeline. I changed the resolution a few times while the stream was being captured. Playing the dump back with ffplay showed that at the moment the resolution changed and the preview in my application froze, ffplay managed to continue, though its preview was glitchy for a second or so. You can see the full ffplay log here. In this case the video preview stopped when I changed the resolution to 960x720 for the second time ("Reinit context to 960x720, pix_fmt: yuv420p" in the log).
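
    Short of reloading the whole library, a lighter workaround people use in this situation is to tear down and reopen just the codec context and parser once a resolution switch is detected. A minimal sketch, assuming the members shown above and an FFmpeg build that has avcodec_free_context; the helper name and the point where you call it from are illustrative, not from the original code:

    // Hypothetical helper: recreate the decoder after a resolution switch.
    // Assumes the _codec, _codecCtx and _codecParser members used above.
    int ResetDecoder() {
       av_parser_close(_codecParser);            // the parser keeps stream state too
       avcodec_free_context(&_codecCtx);         // closes the codec and frees the context

       _codecCtx = avcodec_alloc_context3(_codec);
       if (!_codecCtx) return -1;
       _codecCtx->thread_count = 2;
       _codecCtx->thread_type = FF_THREAD_SLICE; // frame threading is the suspect here
       if (avcodec_open2(_codecCtx, _codec, NULL) < 0) return -1;

       _codecParser = av_parser_init(AV_CODEC_ID_H264);
       return _codecParser ? 0 : -1;
    }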

  • Render FFmpeg AVFrame as OpenGL texture?

    5 March 2019, by ZeroDefect

    I’m attempting to render a JPEG image (1024x1024 pixels), in the form of an FFmpeg AVFrame, as a texture in OpenGL. What I get instead is something that appears as a 1024x1024 dark green quad:

    (screenshot: dark green quad)

    The code to render the AVFrame data in OpenGL is shown below. I have convinced myself that the raw RGB data held within the AVFrame is not solely dark green.

    GLuint g_texture = {};

    //////////////////////////////////////////////////////////////////////////
    void display()
    {
       // Clear color and depth buffers
       glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
       glMatrixMode(GL_MODELVIEW);     // Operate on model-view matrix

       glEnable(GL_TEXTURE_2D);
       GLuint texture = g_texture;
       glBindTexture(GL_TEXTURE_2D, texture);

       // Draw a quad
       glBegin(GL_QUADS);
       glVertex2i(0, 0); // top left
       glVertex2i(1024, 0); // top right
       glVertex2i(1024, 1024); // bottom right
       glVertex2i(0, 1024); // bottom left
       glEnd();

       glDisable(GL_TEXTURE_2D);
       glBindTexture(GL_TEXTURE_2D, 0);

       glFlush();
    }

    /* Initialize OpenGL Graphics */
    void initGL(int w, int h)
    {
       glViewport(0, 0, w, h); // use a screen size of WIDTH x HEIGHT
       glEnable(GL_TEXTURE_2D);     // Enable 2D texturing

       glMatrixMode(GL_PROJECTION);     // Make a simple 2D projection on the entire window
       glOrtho(0.0, w, h, 0.0, 0.0, 100.0);
       glMatrixMode(GL_MODELVIEW);    // Set the matrix mode to object modeling
       //glTranslatef( 0, 0, -15 );

       glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
       glClearDepth(0.0f);
       glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the window
    }

    //////////////////////////////////////////////////////////////////////////
    int main(int argc, char *argv[])
    {
       std::shared_ptr<AVFrame> apAVFrame;
       if (!load_image_to_AVFrame(apAVFrame, "marble.jpg"))
       {
           assert(false);
           return 1;
       }

       // From here on out, the AVFrame is RGB interleaved
       // and is sized to 1,024 x 1,024 (power of 2).

       glutInit(&argc, argv);
       glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
       glutInitWindowSize(1060, 1060);
       glutInitWindowPosition(0, 0);
       glutCreateWindow("OpenGL - Creating a texture");

       glGenTextures(1, &g_texture);

       //glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
       glBindTexture(GL_TEXTURE_2D, g_texture);
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, apAVFrame->width,
                    apAVFrame->height, 0, GL_RGB, GL_UNSIGNED_BYTE,
                    apAVFrame->data[0]);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* We will use linear interpolation for magnification filter */
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* We will use linear interpolation for minifying filter */

       initGL(1060, 1060);

       glutDisplayFunc(display);

       glutMainLoop();

       return 0;
    }

    Environment:

    • Ubuntu 18.04
    • GCC v8.2

    EDIT: As per @immibis’ suggestion below, it all works when I change the rendering of the quad to:

    // Draw a quad
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex2i(0, 0); // top left
    glTexCoord2f(1, 0);
    glVertex2i(1024, 0); // top right
    glTexCoord2f(1, 1);
    glVertex2i(1024, 1024); // bottom right
    glTexCoord2f(0, 1);
    glVertex2i(0, 1024); // bottom left
    glEnd();
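
    One more gotcha worth noting with this upload path: FFmpeg may pad each row of the frame, so linesize[0] can be larger than width * 3. A small hedged sketch of guarding against that before the glTexImage2D call above (standard OpenGL pixel-store calls, not from the original post; assumes packed RGB24 data):

    // Rows in an AVFrame are often padded; tell GL the real row length.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                           // RGB24 rows may not be 4-byte aligned
    glPixelStorei(GL_UNPACK_ROW_LENGTH, apAVFrame->linesize[0] / 3); // row length in pixels (3 bytes per pixel)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, apAVFrame->width,
                 apAVFrame->height, 0, GL_RGB, GL_UNSIGNED_BYTE,
                 apAVFrame->data[0]);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);                          // restore the default
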
  • FFMPEG to OpenGL Texture

    23 April 2014, by Spamdark

    I am here to ask how I can convert an AVFrame to an OpenGL texture. I created a renderer that outputs the audio (the audio is working) and the video, but the video is not showing. Here is my code:

    Texture creation:

    glGenTextures(1, &_texture);
    glBindTexture(GL_TEXTURE_2D,_texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

    Code info: the _texture variable is a GLuint that holds the texture ID.

    Function that gets the AVFrame and converts it to an OpenGL texture:

    int VideoGL::NextVideoFrame(){
       // Get a packet from the queue
       AVPacket *videopacket = this->DEQUEUE(VIDEO);
       int frameFinished;
       if(videopacket!=0){
           avcodec_decode_video2(_codec_context_video, _std_frame, &frameFinished, videopacket);

           if(frameFinished){
               // Convert the decoded frame to packed RGB
               sws_scale(sws_ctx, _std_frame->data, _std_frame->linesize, 0, _codec_context_video->height, _rgb_frame->data, _rgb_frame->linesize);

               if(_firstrendering){
                   glBindTexture(GL_TEXTURE_2D, _texture);
                   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _codec_context_video->width, _codec_context_video->height, 0, GL_RGB, GL_UNSIGNED_BYTE, _rgb_frame->data[0]);
                   _firstrendering = false;
               }else{
                   glActiveTexture(_texture);
                   glBindTexture(GL_TEXTURE_2D, _texture);
                   glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _codec_context_video->width, _codec_context_video->height, GL_RGB, GL_UNSIGNED_BYTE, _rgb_frame->data[0]);
               }
               av_free_packet(videopacket);
               return 0;
           }else{
               av_free_packet(videopacket);
               return -1;
           }
       }else{
           return -1;
       }
       return 0;
    }

    Code information: a separate thread stores packets in a queue; this function is called repeatedly to fetch them, and once it gets a NULL it stops being called.

    That’s not working. (I looked at some questions on Stack Overflow, but it’s still not working.)
    Could anyone provide an example, or help me correct whatever error is there?

    Additional data: I tried changing GL_RGB to GL_RGBA and playing with the formats, but it crashes when I try GL_RGBA (the width and height are very big, although I tried to resize them). I have also tried changing the sizes to powers of two; it still doesn’t work.

    Edit 1:

    Thread function:

    DWORD WINAPI VideoGL::VidThread(LPVOID myparam){
       VideoGL * instance = (VideoGL*) myparam;
       instance->wave_audio->Start();

       int quantity=0;

       AVPacket packet;
       while(av_read_frame(instance->_format_context, &packet) >= 0){
           if(packet.stream_index==instance->videoStream){
               instance->ENQUEUE(VIDEO, &packet);
           }
           if(packet.stream_index==instance->audioStream){
               instance->ENQUEUE(AUDIO, &packet);
           }
       }

       instance->ENQUEUE(AUDIO, NULL);
       instance->ENQUEUE(VIDEO, NULL);

       return 0;
    }

    Thread creation:

    CreateThread(NULL, 0, VidThread, this, 0, NULL);

    where this refers to the class that contains the NextVideoFrame and _texture members.

    Solved:

    I followed some of datenwolf’s tips, and now the video displays correctly, in sync with the audio:

    (screenshot)
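
    For readers hitting the same wall: the most visible suspect in the code above is glActiveTexture(_texture). glActiveTexture selects a texture unit (GL_TEXTURE0 + n), not a texture object, so passing a texture name to it is an error. A hedged sketch of the corrected upload path, assuming packed RGB24 output from sws_scale as above:

    // glActiveTexture takes a texture *unit*, not a texture object name.
    glActiveTexture(GL_TEXTURE0);            // select unit 0
    glBindTexture(GL_TEXTURE_2D, _texture);  // bind the texture object to it
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // RGB24 rows may not be 4-byte aligned
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    _codec_context_video->width, _codec_context_video->height,
                    GL_RGB, GL_UNSIGNED_BYTE, _rgb_frame->data[0]);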