
Other articles (65)

  • Accepted formats

    28 January 2010

    The following commands report the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step we (...)

  • Writing a news item

    21 June 2013

    Present the changes on your MediaSPIP, or news about your projects, using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form. For a document of the news type, the default fields are: publication date (customize the publication date) (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out further manual (...)

On other sites (4960)

  • avcodec_decode_video2 fails to decode after frame resolution change

    10 April 2021, by Krzysztof Kansy

    I'm using ffmpeg in an Android project via JNI to decode a real-time H264 video stream. On the Java side I only send the byte arrays into the native module. The native code runs a loop, checking the data buffers for new data to decode. Each data chunk is processed with:

int bytesLeft = data->GetSize();
int parserLength = 0;
int decodeDataLength = 0;
int gotPicture = 0;
const uint8_t* buffer = data->GetData();

while (bytesLeft > 0) {
    AVPacket packet;
    av_init_packet(&packet);
    // Split the raw buffer into complete packets.
    parserLength = av_parser_parse2(_codecParser, _codecCtx, &packet.data, &packet.size,
                                    buffer, bytesLeft, AV_NOPTS_VALUE, AV_NOPTS_VALUE, AV_NOPTS_VALUE);
    bytesLeft -= parserLength;
    buffer += parserLength;

    if (packet.size > 0) {
        decodeDataLength = avcodec_decode_video2(_codecCtx, _frame, &gotPicture, &packet);
    } else {
        break;
    }
    av_free_packet(&packet);
}

// Note: only the outcome of the last packet in the chunk is checked here.
if (gotPicture) {
    // pass the frame to rendering
}

    The system works well until the incoming video's resolution changes; I need to handle the transition between 4:3 and 16:9 aspect ratios. The AVCodecContext is configured as follows:

_codecCtx->flags2 |= CODEC_FLAG2_FAST;
_codecCtx->thread_count = 2;
_codecCtx->thread_type = FF_THREAD_FRAME;

// Note: this tests the flag constant CODEC_FLAG_LOW_DELAY against the
// capabilities bitmask; the two are different bit sets.
if (_codec->capabilities & CODEC_FLAG_LOW_DELAY) {
    _codecCtx->flags |= CODEC_FLAG_LOW_DELAY;
}

    I wasn't able to continue decoding new frames after the video resolution changed. The got_picture_ptr flag that avcodec_decode_video2 sets when a whole frame is available was never true after that.
    This ticket made me wonder whether the issue is connected with multithreading. The only useful thing I've noticed is that when I change thread_type to FF_THREAD_SLICE, the decoder is not always blocked after a resolution change; about half of my attempts were successful. Switching to single-threaded processing is not an option, since I need the computing power, and setting the context to one thread neither solves the problem nor keeps up with the incoming data.
    Everything works well after an app restart.

    I can only think of one workaround (it doesn't really solve the problem): unloading and reloading the whole library after a stream resolution change (e.g. as mentioned here). I don't think that's a good approach though; it will probably introduce other bugs and take a lot of time (from the user's viewpoint).
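
    A lighter variant of the same idea would be to tear down and reopen just the decoder rather than the whole library. Below is a minimal, untested sketch assuming the pre-4.0 API used above and the same _codecCtx/_codecParser members; recreateDecoder is a hypothetical helper:

// Hypothetical helper (untested sketch): rebuild only the decoder state
// after a resolution change, instead of reloading the whole library.
static bool recreateDecoder(AVCodecContext** ctx, AVCodecParserContext** parser,
                            const AVCodec* codec)
{
    av_parser_close(*parser);
    avcodec_free_context(ctx); // also closes the codec

    *ctx = avcodec_alloc_context3(codec);
    if (!*ctx)
        return false;
    (*ctx)->thread_count = 2;
    (*ctx)->thread_type = FF_THREAD_FRAME;

    *parser = av_parser_init(codec->id);
    if (!*parser)
        return false;

    return avcodec_open2(*ctx, codec, NULL) >= 0;
}

    It could be triggered whenever _codecCtx->width/_codecCtx->height no longer match the dimensions of the last successfully decoded frame.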

    Is it possible to fix this issue?

    EDIT:
    I've dumped the stream data that is passed to the decoding pipeline. I changed the resolution a few times while the stream was being captured. Playing the dump with ffplay showed that at the moment the resolution changed and the preview in my application froze, ffplay managed to continue, though its output was glitchy for a second or so. You can see the full ffplay log here. In this case the video preview stopped when I changed the resolution to 960x720 for the second time (Reinit context to 960x720, pix_fmt: yuv420p in the log).
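
    Since ffplay recovers at exactly the point where the preview freezes, one cheap thing worth trying before any full rebuild is discarding the decoder's buffered state. A sketch only; resolutionChanged is a hypothetical flag for detecting the dimension switch:

// Sketch: avcodec_flush_buffers() keeps the context configured but drops
// buffered reference frames, which may unblock decoding after the switch.
if (!gotPicture && resolutionChanged) {
    avcodec_flush_buffers(_codecCtx);
}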

  • OpenCV 4.5.2 takes a long time (>100ms) to retrieve a single frame from a webcam, C++ on Windows 10

    9 June 2021, by Mustard Tiger

    I've been having a tough time getting my webcam to work quickly with OpenCV. Frames take a very long time to read (a recorded average of 124 ms across 500 frames). I've tried three different computers (running Windows 10) with a Logitech C922 webcam. The most recent machine I tested on has a Ryzen 9 3950X with 32 GB of RAM; there is no lack of power.

    Here is the code:

cv::VideoCapture cap = cv::VideoCapture(m_cameraNum);

// Check if camera opened successfully
if (!cap.isOpened())
{
    m_logger->critical("Error opening video stream or file\n\r");
    return -1;
}

bool result = true;
result &= cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
result &= cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);

bool ready = false;
std::vector<std::string> timeLog;
timeLog.reserve(50000);
int i = 0;

while (i < 500)
{
    auto start = std::chrono::system_clock::now();

    cv::Mat img;
    ready = cap.read(img);

    // If the frame is empty, skip it and try again
    if (!ready)
    {
        timeLog.push_back("continue");
        continue;
    }

    i++;
    auto end = std::chrono::system_clock::now();
    // Per-frame read time in milliseconds
    timeLog.push_back(std::to_string(
        std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()));
}

for (auto& entry : timeLog)
    m_logger->info(entry);

cap.release();
return 0;


    Notice that I write the elapsed times to a log file at the end of execution. The average time is 124 ms in both debug and release, and there was not one instance of "continue" across half a dozen runs.


    It doesn't matter whether I use USB 2 or USB 3 ports (the camera is USB 2) or whether I run a debug or a release build; the log shows anywhere from 110 ms to 130 ms per frame. The camera works fine in other apps: OBS gets a smooth 1080p@30fps or 720p@60fps.


    Stepping through the debugger and doing a lot of Googling, I've learned the following about my system:

    • The backend chosen by default is DSHOW. GStreamer and FFMPEG are also available (the backends a build supports can be enumerated; see the sketch after this list).
    • DSHOW uses FFMPEG somehow (it needs the FFMPEG DLL), but I cannot use FFMPEG directly through OpenCV. Attempting to use cv::VideoCapture(m_cameraNum, cv::CAP_FFMPEG) always fails; it seems OpenCV's interface to FFMPEG is only capable of opening video files.
    • Microsoft really screwed up camera devices in Windows a few years back; I'm not sure whether this is related to my problem.
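
    For what it's worth, the first point can be verified on any given build by enumerating the camera backends. A small sketch; the registry API lives in opencv2/videoio/registry.hpp:

#include <opencv2/videoio/registry.hpp>
#include <iostream>

// Sketch: print the capture backends this OpenCV build actually supports.
int main()
{
    for (const auto api : cv::videoio_registry::getCameraBackends())
        std::cout << cv::videoio_registry::getBackendName(api) << "\n";
    return 0;
}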

    Here's a short list of the fixes I have tried, most taken from older SO posts (see also the sketch after this list):

    • result &= cap.set(cv::CAP_PROP_FRAME_COUNT, 30); // returns false, does nothing
    • result &= cap.set(cv::CAP_PROP_CONVERT_RGB, 0); // returns true, does nothing
    • result &= cap.set(cv::CAP_PROP_MODE, cv::VideoWriter::fourcc('M', 'J', 'P', 'G')); // returns false, does nothing
    • Set the registry key from http://alax.info/blog/1693 that should disable the new Windows camera server.
    • Updated from 4.5.0 to 4.5.2; no change.
    • Asked Device Manager to find a newer driver; no newer driver found.
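
    One more configuration worth sketching, under the assumption (not verified here) that the slow path is the camera's uncompressed YUY2 mode, which on many UVC webcams is rate-limited at 720p and would roughly match the ~8 fps observed: request the MJPG stream explicitly via CAP_PROP_FOURCC, which, unlike CAP_PROP_MODE, is the property that selects the pixel format. openFastCapture is a hypothetical helper:

#include <opencv2/videoio.hpp>

// Untested sketch: ask the DSHOW backend for the camera's MJPG stream
// instead of the default uncompressed one.
cv::VideoCapture openFastCapture(int cameraNum)
{
    cv::VideoCapture cap(cameraNum, cv::CAP_DSHOW);
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
    cap.set(cv::CAP_PROP_FPS, 30); // request 30 fps explicitly
    return cap;
}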

    I'm out of ideas. Any help?


  • Render FFmpeg AVFrame as OpenGL texture?

    5 March 2019, by ZeroDefect

    I'm attempting to render a jpeg image (1024x1024 pixels), held in an FFmpeg AVFrame, as a texture in OpenGL. What I get instead is something that appears as a 1024x1024 dark green quad:

    dark green quad screenshot

    The code to render the AVFrame data in OpenGL is shown below. I have convinced myself that the raw RGB data held within the FFmpeg AVFrame is not solely dark green.

    GLuint g_texture = {};

    //////////////////////////////////////////////////////////////////////////
    void display()
    {
       // Clear color and depth buffers
       glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
       glMatrixMode(GL_MODELVIEW);     // Operate on model-view matrix

       glEnable(GL_TEXTURE_2D);
       GLuint texture = g_texture;
       glBindTexture(GL_TEXTURE_2D, texture);

       // Draw a quad (note: no texture coordinates are specified for the vertices)
       glBegin(GL_QUADS);
       glVertex2i(0, 0); // top left
       glVertex2i(1024, 0); // top right
       glVertex2i(1024, 1024); // bottom right
       glVertex2i(0, 1024); // bottom left
       glEnd();

       glDisable(GL_TEXTURE_2D);
       glBindTexture(GL_TEXTURE_2D, 0);

       glFlush();
    }

    /* Initialize OpenGL Graphics */
    void initGL(int w, int h)
    {
       glViewport(0, 0, w, h); // use a screen size of WIDTH x HEIGHT
       glEnable(GL_TEXTURE_2D);     // Enable 2D texturing

       glMatrixMode(GL_PROJECTION);     // Make a simple 2D projection on the entire window
       glOrtho(0.0, w, h, 0.0, 0.0, 100.0);
       glMatrixMode(GL_MODELVIEW);    // Set the matrix mode to object modeling
       //glTranslatef( 0, 0, -15 );

       glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
       glClearDepth(0.0f);
       glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the window
    }

    //////////////////////////////////////////////////////////////////////////
    int main(int argc, char *argv[])
    {
       std::shared_ptr<AVFrame> apAVFrame;
       if (!load_image_to_AVFrame(apAVFrame, "marble.jpg"))
       {
           assert(false);
           return 1;
       }

       // From here on out, the AVFrame is RGB interleaved
       // and is sized to 1,024 x 1,024 (power of 2).

       glutInit(&argc, argv);
       glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
       glutInitWindowSize(1060, 1060);
       glutInitWindowPosition(0, 0);
       glutCreateWindow("OpenGL - Creating a texture");

       glGenTextures(1, &g_texture);

       //glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
       glBindTexture(GL_TEXTURE_2D, g_texture);
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, apAVFrame->width,
                    apAVFrame->height, 0, GL_RGB, GL_UNSIGNED_BYTE,
                    apAVFrame->data[0]);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* We will use linear interpolation for magnification filter */
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* We will use linear interpolation for minifying filter */

       initGL(1060, 1060);

       glutDisplayFunc(display);

       glutMainLoop();

       return 0;
    }

    Environment:

    • Ubuntu 18.04
    • GCC v8.2

    EDIT: As per @immibis' suggestion below, it all works when I change the rendering of the quad to:

    // Draw a quad
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex2i(0, 0); // top left
    glTexCoord2f(1, 0);
    glVertex2i(1024, 0); // top right
    glTexCoord2f(1, 1);
    glVertex2i(1024, 1024); // bottom right
    glTexCoord2f(0, 1);
    glVertex2i(0, 1024); // bottom left
    glEnd();

    Without the glTexCoord2f calls, every vertex used the default texture coordinate (0, 0), so the whole quad was filled with the single texel at the image's top-left corner, hence the uniform dark green.