
Other articles (71)

  • Improving the base version

    13 September 2013

    A nicer multiple select
    The Chosen plugin improves the usability of multiple-select fields; compare the two images below.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items: a "media" item is a SPIP article created automatically when a document is uploaded, whether audio, video, image, or text; only one document can be attached to a given "media" article;

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators configure these menus in detail.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: The main menu; Identifier: barrenav; This menu is usually inserted at the top of the page, after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

On other sites (6449)

  • FFmpeg C++ decoding in a separate thread

    12 June 2019, by Brigapes

    I’m trying to decode a video with FFmpeg, convert it to an OpenGL texture, and display it inside a cocos2d-x engine. I’ve managed to do that, and it displays the video as I wanted; the problem now is performance. I get a sprite update every frame (the game is fixed at 60 fps, the video is 30 fps), so at first I decoded and converted frames alternately on the main thread, which didn’t work well. Now I have a separate thread where I decode in an infinite while loop, with sleep() so it doesn’t hog the CPU.
    What I currently have set up is two PBO framebuffers and a bool flag to tell my FFmpeg thread loop to decode another frame, since I don’t know how to wait until another frame is actually needed. I’ve searched online for a solution to this kind of problem but didn’t find any answers.

    I’ve looked at this: Decoding video directly into a texture in separate thread, but it didn’t solve my problem, since it just converts YUV to RGB inside OpenGL shaders, which I haven’t done yet and which is currently not an issue.

    Additional info that might be useful: I don’t need to end the thread until application exit, and I’m open to using any video format, including lossless.

    OK, so the main decoding loop looks like this:

    //.. this is inside of a constructor / init
    // store the future so the worker thread outlives this scope
    global::global_pending_futures.push_back(std::async(std::launch::async, [=] {
           while (true) {
               if (isPlaying) {
                   this->decodeLoop();
               }
               else {
                   // idle until playback starts
                   std::this_thread::sleep_for(std::chrono::milliseconds(3));
               }
           }
       }));

    The reason I use a bool to check whether the frame was consumed is that the main decoding function takes about 5 ms to finish in debug, and the frame is then displayed roughly 11 ms later, so I can’t know when the frame was displayed, and I also don’t know how long decoding took.
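
    One way to avoid guessing at timings is to have the render thread wake the decoder explicitly through a std::condition_variable, so the decoder blocks until a buffer is marked consumed instead of polling with sleeps. A minimal sketch of the idea (buf1/buf2 stand in for the members used below; this is not a drop-in patch for the code that follows):

    #include <condition_variable>
    #include <mutex>

    std::mutex refillMutex;
    std::condition_variable refillCv;

    // Render thread: after uploading a buffer, hand it back and wake the decoder.
    void markConsumed(frameData& buf) {
        {
            std::lock_guard<std::mutex> lk(refillMutex);
            buf.needsRefill = true;
        }
        refillCv.notify_one();
    }

    // Decoder thread: block until at least one buffer needs refilling.
    void waitForWork() {
        std::unique_lock<std::mutex> lk(refillMutex);
        refillCv.wait(lk, [&] { return buf1.needsRefill || buf2.needsRefill; });
    }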

    Decode function:

    void video::decodeLoop() { //this loops in a separate thread
       frameData* buff = nullptr;
       // claim whichever buffer the render thread has marked as consumed
       if (buf1.needsRefill) {
           /// buf1.bufferLock.lock();
           buff = &buf1;
           buf1.needsRefill = false;
           firstBuff = true;
       }
       else if (buf2.needsRefill) {
           ///buf2.bufferLock.lock();
           buff = &buf2;
           buf2.needsRefill = false;
           firstBuff = false;
       }

       if (buff == nullptr) {
           // both buffers still hold undisplayed frames; back off briefly
           std::this_thread::sleep_for(std::chrono::milliseconds(1));
           return;
       }

       //pack pixel buffer?

       if (getNextFrame(buff)) {
           getCurrentRBGConvertedFrame(buff);
       }
       else { // end of stream: either stop or restart the video
           loopedTimes++;
           if (loopedTimes >= repeatTimes) {
               stop();
           }
           else {
               restartVideoPlay(&buf1); //restart both
               restartVideoPlay(&buf2);
               if (getNextFrame(buff)) {
                   getCurrentRBGConvertedFrame(buff);
               }
           }
       }
       /// buff->bufferLock.unlock();
    }

    As you can see, I first check whether a buffer has been consumed, using the bool needsRefill, and only then decode another frame into it.
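
    Note that with the bufferLock calls commented out, needsRefill is written by one thread and read by the other with no synchronization, which is a data race in C++. The smallest fix is an atomic flag; a sketch (release/acquire ordering is enough for handing a buffer between two threads):

    #include <atomic>

    std::atomic<bool> needsRefill{true};

    // Render thread, after the frame has been uploaded:
    needsRefill.store(true, std::memory_order_release);

    // Decoder thread: claim the buffer and clear the flag in one step,
    // which also removes the check-then-set race in decodeLoop().
    if (needsRefill.exchange(false, std::memory_order_acquire)) {
        // safe to overwrite this buffer's pdata now
    }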

    The frameData struct:

       struct frameData {
           frameData() {};
           ~frameData() {};

           AVFrame* frame = nullptr;
           AVPacket* pkt = nullptr;
           unsigned char* pdata = nullptr; // RGB pixels uploaded by the render thread
           bool needsRefill = true;        // shared between decoder and render threads
           std::string name = "";

           std::mutex bufferLock;

           ///unsigned int crrFrame
           GLuint pboid = 0;
       };
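
    Since frame and pkt start out unallocated, it may also help to tie their lifetime to the struct with the standard FFmpeg allocation calls. A sketch of the constructor and destructor (error handling omitted; assumes the usual extern "C" include of libavcodec/avcodec.h; the copy operations are deleted explicitly, which the std::mutex member already forces implicitly):

       frameData() {
           frame = av_frame_alloc();  // zero-initialized AVFrame
           pkt = av_packet_alloc();   // zero-initialized AVPacket
       }
       ~frameData() {
           av_frame_free(&frame);     // null-safe, resets the pointer
           av_packet_free(&pkt);
       }
       frameData(const frameData&) = delete;
       frameData& operator=(const frameData&) = delete;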

    And this is called every frame:

    void video::actualDraw() { //meant for the cocos implementation
       if (this->isVisible()) {
           if (this->getOpacity() > 0) {
               if (isPlaying) {
                   if (loopedTimes >= repeatTimes) { //ignore -1, comparing unsigned to signed
                       this->stop();
                   }
               }

               if (isPlaying) {
                   this->setVisible(true);

                   if (!display) { //skip frame
                       ///this->getNextFrame();
                       display = true;
                   }
                   else if (display) {
                       display = false;
                       auto buff = this->getData();
                       width = this->getWidth();
                       height = this->getHeight();
                       if (buff) {
                           if (buff->pdata) {
                               glBindBuffer(GL_PIXEL_UNPACK_BUFFER, buff->pboid);
                               glBufferData(GL_PIXEL_UNPACK_BUFFER, 3 * (width * height), buff->pdata, GL_DYNAMIC_DRAW);

                               // with a PBO bound, the last argument is an offset into it, not a pointer
                               glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0); ///buff->pdata);
                               glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
                           }

                           buff->needsRefill = true; // hand the buffer back to the decoder
                       }
                   }
               }
               else { this->setVisible(false); }
           }
       }
    }
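
    A note on the upload path above: calling glBufferData with a non-null pointer reallocates and copies in one step, and can stall if the GPU is still reading the previous contents. A common streaming pattern (a sketch; the orphaning trick and the GL_STREAM_DRAW usage hint are standard OpenGL, the surrounding names come from the code above) is to orphan the buffer first:

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, buff->pboid);
    // orphan the old storage so the driver need not block on in-flight reads
    glBufferData(GL_PIXEL_UNPACK_BUFFER, 3 * width * height, nullptr, GL_STREAM_DRAW);
    // then fill the fresh allocation
    glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, 3 * width * height, buff->pdata);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);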

    The getData function tells which framebuffer to use:

    video::frameData* video::getData() {
       if (firstBuff) {
           if (buf1.needsRefill == false) { // buf1 holds a fresh frame
               ///firstBuff = false;
               return &buf1;///.pdata;
           }
       }
       else { //if false
           if (buf2.needsRefill == false) { // buf2 holds a fresh frame
               ///firstBuff = true;
               return &buf2;///.pdata;
           }
       }
       return nullptr; // no buffer ready yet
    }

    I’m not sure what else to include, so I pasted the whole code to pastebin.
    video.cpp: https://pastebin.com/cWGT6APn
    video.h: https://pastebin.com/DswAXwXV

    To summarize the problem:

    How do I properly implement decoding in a separate thread, and how do I optimize the current code?

    Currently the video lags whenever some other thread or the main thread gets heavy, because decoding then can’t keep up.
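
    For the broader restructuring, the usual shape is a small producer/consumer queue of decoded frames: the decoder thread blocks while the queue is full, the render thread pops at its own 60 fps pace, and neither side polls. A compact sketch (Frame and the queue depth are placeholders, not the classes above):

    #include <condition_variable>
    #include <cstddef>
    #include <deque>
    #include <mutex>

    template <typename Frame>
    class FrameQueue {
    public:
        explicit FrameQueue(std::size_t cap) : cap_(cap) {}

        // Decoder thread: blocks while the queue is full.
        void push(Frame f) {
            std::unique_lock<std::mutex> lk(m_);
            notFull_.wait(lk, [&] { return q_.size() < cap_; });
            q_.push_back(std::move(f));
        }

        // Render thread: never blocks; returns false to reuse the last frame.
        bool tryPop(Frame& out) {
            std::lock_guard<std::mutex> lk(m_);
            if (q_.empty()) return false;
            out = std::move(q_.front());
            q_.pop_front();
            notFull_.notify_one();
            return true;
        }

    private:
        std::size_t cap_;
        std::deque<Frame> q_;
        std::mutex m_;
        std::condition_variable notFull_;
    };

    A depth of two or three frames is usually enough to absorb scheduling hiccups from other threads without adding noticeable latency at 30 fps.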

  • Android - FFmpeg Alternative to get video frames. (Due to licensing)

    3 July 2012, by Fresh_Meat

    Scenario:

    I am working on an Android project where, on one particular OpenGL page, I display videos.

    FFmpeg is used to obtain frames from the videos (since OpenGL does not support video as a texture), and I use those frames to produce a video effect.

    I am using pre-compiled FFmpeg binaries in the project.


    I was not aware of the legal implications of using the FFmpeg library.
    My superior brought this to my notice: FFmpeg legal reference

    Problem:

    I am no legal expert, so the only thing I understood is that using FFmpeg in a commercial free app (where the service must be paid for) is going to get me and my company into trouble :(

    The source, or any part of it, cannot be released in any way. (The client is very strict about this.)


    Questions:

    1) Is there an alternative to FFmpeg (under an Apache or MIT license) that I can use to obtain video frames?

    2) In OpenGL, is getting the video frames and looping through them the only way to play a video? Is there an alternative method to achieve this functionality?

  • How to Dynamically Control FFmpeg Encoding Bitrate Based on Current Network Bandwidth? [closed]

    26 November 2024, by Stove Fire

    I’ve implemented a program using FFmpeg where the server encodes OpenGL render results and sends them to the client. The client then decodes the video and renders the result onto a texture using OpenGL. What I’m trying to achieve is to adjust the encoding bitrate dynamically based on the current network bandwidth: higher bandwidth should result in higher-bitrate encoding, and lower bandwidth in lower-bitrate encoding.

    I use the libx264 encoder and RTSP.

    I found the iperf tool online, which is useful for measuring network bandwidth, but it is a command-line tool and does not have a readily available API for programmatic use.

    My questions are:

    How can I monitor the current network bandwidth programmatically in my application (preferably in C++)?

    Is there a way to integrate bandwidth monitoring into my FFmpeg encoding process so I can adjust the bitrate dynamically based on current network conditions?
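
    One practical route is to skip link probing entirely and measure your own application-level send throughput, then feed an adjusted target into the encoder between frames. A sketch of the idea (ThroughputMeter and the thresholds are hypothetical; FFmpeg's libx264 wrapper checks for a changed AVCodecContext::bit_rate between frames and reconfigures the encoder via x264_encoder_reconfig, but verify this against your FFmpeg version):

    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <cstdint>

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Hypothetical helper: counts the bytes handed to the network layer and
    // reports throughput over a sliding one-second window.
    class ThroughputMeter {
    public:
        void onBytesSent(std::size_t n) {
            bytes_ += n;
            auto now = std::chrono::steady_clock::now();
            double elapsed = std::chrono::duration<double>(now - windowStart_).count();
            if (elapsed >= 1.0) {
                bps_ = bytes_ * 8.0 / elapsed;
                bytes_ = 0;
                windowStart_ = now;
            }
        }
        double bitsPerSecond() const { return bps_; }

    private:
        std::size_t bytes_ = 0;
        double bps_ = 0.0;
        std::chrono::steady_clock::time_point windowStart_ = std::chrono::steady_clock::now();
    };

    // Call between frames: target ~80% of measured bandwidth, clamped to a sane range.
    void adjustBitrate(AVCodecContext* enc, const ThroughputMeter& meter) {
        auto target = static_cast<int64_t>(meter.bitsPerSecond() * 0.8);
        target = std::clamp<int64_t>(target, 500000, 8000000); // 0.5 to 8 Mbit/s
        if (target > 0 && target != enc->bit_rate)
            enc->bit_rate = target; // picked up on a subsequent encode call
    }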