Advanced search

Media (1)

Keyword: - Tags - / e-book

Other articles (26)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound to conventional computers as well as (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other websites (4043)

  • FFmpeg C++ decoding in a separate thread

    12 June 2019, by Brigapes

    I'm trying to decode a video with FFmpeg, convert it to an OpenGL texture and display it inside a cocos2d-x engine. I've managed to do that, and it displays the video as I wanted; the problem now is performance. I get a Sprite update every frame (the game is fixed at 60 fps, the video at 30 fps), so at first I decoded and converted on alternating frames. That didn't work great, so now I have a separate thread where I decode in an infinite while loop, with a sleep() just so it doesn't hog the CPU.
    What I currently have is 2 PBO framebuffers and a bool flag that tells my FFmpeg thread loop to decode another frame, since I don't know how to wait until the next frame is actually needed. I've searched online for a solution to this kind of problem but didn't find any answers.

    I've looked at this: Decoding video directly into a texture in separate thread. It didn't solve my problem, though, since it only deals with converting YUV to RGB inside OpenGL shaders, which I haven't done yet and which isn't currently an issue.

    Additional info that might be useful: the thread doesn't need to end until the application exits, and I'm open to using any video format, including lossless ones.

    OK, so the main decoding loop looks like this:

    //.. this is inside of a constructor / init
    //adding thread to array in order to save the thread    
    global::global_pending_futures.push_back(std::async(std::launch::async, [=] {
           while (true) {
               if (isPlaying) {
                   this->decodeLoop();
               }
               else {
                   std::this_thread::sleep_for(std::chrono::milliseconds(3));
               }
           }
       }));

    The reason I use a bool to check whether the frame was consumed is that the main decoding function takes about 5 ms to finish in debug, and then it should wait about 11 ms for the frame to display, so I can't know when the frame was displayed and I also don't know how long decoding took.
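
    One common pattern for exactly this "I can't know when the frame was displayed" problem (my suggestion, not from the post) is to block the decode thread on a std::condition_variable instead of polling a bool with sleep(). A minimal sketch with placeholder names (FrameBuffer, decodeThread, onFrameDisplayed are mine, not the original code):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Sketch only: the render thread marks the buffer consumed and notifies;
// the decode thread sleeps on the condition variable until then.
struct FrameBuffer {
    std::vector<unsigned char> pixels;
    bool needsRefill = true;
};

std::mutex mtx;
std::condition_variable cv;
FrameBuffer buffer;
bool running = true;

void decodeThread() {
    while (true) {
        std::unique_lock<std::mutex> lock(mtx);
        // Block until the displayed frame was consumed, or we are shutting down.
        cv.wait(lock, [] { return buffer.needsRefill || !running; });
        if (!running) return;
        buffer.pixels.assign(3 * 640 * 480, 0); // stand-in for decode + RGB convert
        buffer.needsRefill = false;             // frame is ready for the render thread
    }
}

// Called from the render thread right after the texture upload.
void onFrameDisplayed() {
    {
        std::lock_guard<std::mutex> lock(mtx);
        buffer.needsRefill = true;              // buffer is free again
    }
    cv.notify_one();                            // wake the decoder exactly when needed
}
```

    This wakes the decoder only when a buffer actually needs refilling, so no sleep interval has to be guessed at all.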

    Decode function :

    void video::decodeLoop() { //this should loop in a separate thread
       frameData* buff = nullptr;
       if (buf1.needsRefill) {
       /// buf1.bufferLock.lock();
           buff = &buf1;
           buf1.needsRefill = false;
           firstBuff = true;
       }
       else if (buf2.needsRefill) {
           ///buf2.bufferLock.lock();
           buff = &buf2;
           buf2.needsRefill = false;
           firstBuff = false;
       }

       if (buff == nullptr) {
           std::this_thread::sleep_for(std::chrono::milliseconds(1));
           return;//error? //wait?
       }

       //pack pixel buffer?

       if (getNextFrame(buff)) {
           getCurrentRBGConvertedFrame(buff);
       }
       else {
           loopedTimes++;
           if (loopedTimes >= repeatTimes) {
               stop();
           }
           else {
               restartVideoPlay(&buf1);//restart both
               restartVideoPlay(&buf2);
               if (getNextFrame(buff)) {
                   getCurrentRBGConvertedFrame(buff);
               }
           }
       }
    /// buff->bufferLock.unlock();

       return;
    }

    As you can tell, I first check whether the buffer was consumed, using the bool needsRefill, and only then decode another frame.

    frameData struct :

       struct frameData {
           frameData() {};
           ~frameData() {};

           AVFrame* frame = nullptr;
           AVPacket* pkt = nullptr;
           unsigned char* pdata = nullptr;
           bool needsRefill = true;
           std::string name = "";

           std::mutex bufferLock;

           ///unsigned int crrFrame
           GLuint pboid = 0;
       };
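
    Note that needsRefill is read and written from two threads while the bufferLock calls are commented out, which is a data race. One low-effort fix (a sketch of mine, not the original code; AtomicBuffer, tryClaimForDecode and releaseAfterDraw are hypothetical names) is to make the flag a std::atomic<bool> and claim a buffer with compare_exchange_strong:

```cpp
#include <atomic>

// Sketch: an atomic handoff flag for one buffer. compare_exchange_strong
// claims the buffer only if it still needs a refill, so the decode thread
// and the render thread can never both own it at the same time.
struct AtomicBuffer {
    std::atomic<bool> needsRefill{true};
    unsigned char* pdata = nullptr;
};

// Decode thread: atomically claim the buffer if it needs a refill.
bool tryClaimForDecode(AtomicBuffer& b) {
    bool expected = true;
    return b.needsRefill.compare_exchange_strong(expected, false);
}

// Render thread: after uploading the pixels, hand the buffer back.
void releaseAfterDraw(AtomicBuffer& b) {
    b.needsRefill.store(true, std::memory_order_release);
}
```

    The default sequentially consistent exchange also makes the pixel writes done before the flag flip visible to the other thread.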

    And this is called every frame :

    void video::actualDraw() { //meant for cocos implementation
       if (this->isVisible()) {
           if (this->getOpacity() > 0) {
               if (isPlaying) {
                   if (loopedTimes >= repeatTimes) { //ignore -1 because comparing unsigned to signed
                       this->stop();
                   }
               }

               if (isPlaying) {
                   this->setVisible(true);

                   if (!display) { //skip frame
                       ///this->getNextFrame();
                       display = true;
                   }
                   else if (display) {
                       display = false;
                       auto buff = this->getData();                    
                       width = this->getWidth();
                       height = this->getHeight();
                       if (buff) {
                           if (buff->pdata) {

                               glBindBuffer(GL_PIXEL_UNPACK_BUFFER, buff->pboid);
                               glBufferData(GL_PIXEL_UNPACK_BUFFER, 3 * (width*height), buff->pdata, GL_DYNAMIC_DRAW);


                               glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0); ///buff->pdata);
                               glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
                           }

                           buff->needsRefill = true;
                       }
                   }
               }
               else { this->setVisible(false); }
           }
       }
    }

    The getData function tells which framebuffer is in use:

    video::frameData* video::getData() {
       if (firstBuff) {
           if (buf1.needsRefill == false) {
               ///firstBuff = false;
               return &buf1;///.pdata;
           }
       }
       else { //if false
           if (buf2.needsRefill == false) {
               ///firstBuff = true;
               return &buf2;///.pdata;
           }
       }
       return nullptr;
    }

    I'm not sure what else to include; I pasted the whole code to pastebin:
    video.cpp : https://pastebin.com/cWGT6APn
    video.h https://pastebin.com/DswAXwXV

    To summarize the problem:

    How do I properly implement decoding in a separate thread, and how do I optimize the current code?

    Currently the video lags whenever another thread or the main thread gets heavy, because decoding can no longer keep up.

  • Kernel32 not found when using FFmpeg.Autogen 4.1.0.2 in Mono/Linux

    5 December 2024, by Robert Russell

    I'm submitting a bug report; while posting this I didn't realize I could see into FFmpeg.AutoGen from the stack trace. Anyway, I posted a bug report on GitHub:

    https://github.com/Ruslan-B/FFmpeg.AutoGen/issues/109

    I'm trying to run my code on Linux, which uses FFmpeg.AutoGen to interface with the ffmpeg libraries, and I'm getting a "kernel32 DLL not found" error that I can't figure out. He says not to post issues to GitHub for troubleshooting.
    Possibly related issue: https://github.com/Ruslan-B/FFmpeg.AutoGen/issues/89

    The first thing I tried was to include the BinaryHelper class from the example code; I tweaked it a little and added the exact paths to the Linux files.
    The second thing I did was add FFmpeg.AutoGen.dll.config; if configured right, when a Windows DLL is referenced it should map it to the Linux one.
    Stack trace:
    System.DllNotFoundException: kernel32
  at at (wrapper managed-to-native) FFmpeg.AutoGen.Native.WindowsNativeMethods.GetProcAddress(intptr,string)
  at FFmpeg.AutoGen.Native.FunctionLoader.GetFunctionPointer (System.IntPtr nativeLibraryHandle, System.String functionName) [0x00000] in D:\FFmpeg.AutoGen\FFmpeg.AutoGen\Native\FunctionLoader.cs:55
  at FFmpeg.AutoGen.Native.FunctionLoader.GetFunctionDelegate[T] (System.IntPtr nativeLibraryHandle, System.String functionName, System.Boolean throwOnError) [0x00000] in D:\FFmpeg.AutoGen\FFmpeg.AutoGen\Native\FunctionLoader.cs:28
  at FFmpeg.AutoGen.ffmpeg.GetFunctionDelegate[T] (System.IntPtr libraryHandle, System.String functionName) [0x00000] in D:\FFmpeg.AutoGen\FFmpeg.AutoGen\FFmpeg.cs:50
  at FFmpeg.AutoGen.ffmpeg+<>c.<.cctor>b__4_318 () [0x00000] in D:\FFmpeg.AutoGen\FFmpeg.AutoGen\FFmpeg.functions.export.g.cs:7163
  at FFmpeg.AutoGen.ffmpeg.avformat_alloc_context () [0x00000] in D:\FFmpeg.AutoGen\FFmpeg.AutoGen\FFmpeg.functions.export.g.cs:7176
  at FF8.FfccVaribleGroup..ctor () [0x0009c] in /home/robert/OpenVIII/FF8/FfccVaribleGroup.cs:53
  at FF8.Ffcc..ctor (System.String filename, FFmpeg.AutoGen.AVMediaType mediatype, FF8.Ffcc+FfccMode mode) [0x00008] in /home/robert/OpenVIII/FF8/Ffcc.cs:31
  at FF8.Module_movie_test.InitMovie () [0x00001] in /home/robert/OpenVIII/FF8/module_movie_test.cs:160
  at FF8.Module_movie_test.Update () [0x000c5] in /home/robert/OpenVIII/FF8/module_movie_test.cs:88
  at FF8.ModuleHandler.Update (Microsoft.Xna.Framework.GameTime gameTime) [0x000ac] in /home/robert/OpenVIII/FF8/ModuleHandler.cs:43
  at FF8.Game1.Update (Microsoft.Xna.Framework.GameTime gameTime) [0x00030] in /home/robert/OpenVIII/FF8/Game1.cs:69
  at Microsoft.Xna.Framework.Game.DoUpdate (Microsoft.Xna.Framework.GameTime gameTime) [0x00019] in <4fc8466c27384bb19c7b81b2a6a71083>:0
  at Microsoft.Xna.Framework.Game.Tick () [0x00103] in <4fc8466c27384bb19c7b81b2a6a71083>:0
  at Microsoft.Xna.Framework.SdlGamePlatform.RunLoop () [0x00021] in <4fc8466c27384bb19c7b81b2a6a71083>:0
  at Microsoft.Xna.Framework.Game.Run (Microsoft.Xna.Framework.GameRunBehavior runBehavior) [0x0008b] in <4fc8466c27384bb19c7b81b2a6a71083>:0
  at Microsoft.Xna.Framework.Game.Run () [0x0000c] in <4fc8466c27384bb19c7b81b2a6a71083>:0
  at FF8.Program.Main () [0x00007] in /home/robert/OpenVIII/FF8/Program.cs:17

    My code that triggers this:

    Format = ffmpeg.avformat_alloc_context();

    The BinaryHelper should set the path correctly for the file:

    internal static void RegisterFFmpegBinaries()
        {
            var libraryPath = "";
            switch (Environment.OSVersion.Platform)
            {
                case PlatformID.Win32NT:
                case PlatformID.Win32S:
                case PlatformID.Win32Windows:
                    var current = Environment.CurrentDirectory;
                    var probe = Path.Combine(Environment.Is64BitProcess ? "x64" : "x86");
                    while (current != null)
                    {
                        var ffmpegDirectory = Path.Combine(current, probe);
                        if (Directory.Exists(ffmpegDirectory))
                        {
                            Console.WriteLine($"FFmpeg binaries found in: {ffmpegDirectory}");
                            RegisterLibrariesSearchPath(ffmpegDirectory);
                            return;
                        }
                        current = Directory.GetParent(current)?.FullName;
                    }
                    break;
                case PlatformID.Unix:
                    libraryPath = "/usr/lib/x86_64-linux-gnu";
                    RegisterLibrariesSearchPath(libraryPath);
                    break;
                case PlatformID.MacOSX:
                    libraryPath = Environment.GetEnvironmentVariable("LD_LIBRARY_PATH");
                    RegisterLibrariesSearchPath(libraryPath);
                    break;
            }
        }

    The FFmpeg.AutoGen.dll.config:

    <configuration>
      <dllmap os="linux" dll="avutil-56.dll" target="/usr/lib/x86_64-linux-gnu/libavutil.so.56"></dllmap>
      <dllmap os="linux" dll="avcodec-58.dll" target="/usr/lib/x86_64-linux-gnu/libavcodec.so.58"></dllmap>
      <dllmap os="linux" dll="avformat-58.dll" target="/usr/lib/x86_64-linux-gnu/libavformat.so.58"></dllmap>
      <dllmap os="linux" dll="avdevice-58.dll" target="/usr/lib/x86_64-linux-gnu/libavdevice.so.58"></dllmap>
      <dllmap os="linux" dll="avfilter-7.dll" target="/usr/lib/x86_64-linux-gnu/libavfilter.so.7"></dllmap>
      <dllmap os="linux" dll="avresample-4.dll" target="/usr/lib/x86_64-linux-gnu/libavresample.so.4"></dllmap>
      <dllmap os="linux" dll="swscale-5.dll" target="/usr/lib/x86_64-linux-gnu/libswscale.so.5"></dllmap>
      <dllmap os="linux" dll="swresample-3.dll" target="/usr/lib/x86_64-linux-gnu/libswresample.so.3"></dllmap>
      <dllmap os="linux" dll="postproc-55.dll" target="/usr/lib/x86_64-linux-gnu/libpostproc.so.55"></dllmap>
    </configuration>

  • Getting OpenCV VideoWriter to work consistently across platforms for the MP4 container with H264 encoding

    28 March 2019, by Moh

    I am trying to get the OpenCV VideoWriter to work consistently across platforms for the MP4 container with H264 encoding.

    Target platforms, in order of importance: Ubuntu, Raspbian, OSX.

    Basically, my shortcoming at this point is not understanding the relationship of the FourCC code (a parameter to the OpenCV VideoWriter) to the FFmpeg backend and its requirements. I am interested in understanding the game in play rather than discussing a piece of code.

    What I want to know is: when I specify 'X264' as the FourCC code while trying to write an x.MP4 file (FFmpeg backend), and the request is marshalled to FFmpeg, what requirements/dependencies need to be satisfied by the OS for it to succeed?

    So far I have got my Python stack writing MP4 video files across Raspbian/Ubuntu/OSX, with a hack.

    On my Raspbian Stretch installation, I use 0x00000021 as the FourCC code.
    On Ubuntu (a VM on OSX) and on OSX, AVC1 works.

    Days of Googling only delivered those hacks, not a good understanding of the problem.

    Using x264 as the FourCC code leads to one of: failure, a non-portable video file, or an annoying FFmpeg warning.

    I am trying to get to the bottom of it.

    The code,

       #self.__fourCC = cv2.VideoWriter_fourcc('x', '2', '6', '4')
       self.__fourCC = cv2.VideoWriter_fourcc('a', 'v', 'c', '1')
       if PlatformUtils.isRunningOnRaspberryPi():
           self.__fourCC = 0x00000021

    I have control over the versions of both OpenCV and FFmpeg (and GStreamer too, if required). I can build them, and have built them, for Ubuntu/Raspbian.
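
    For reference (my illustration, not from the post): a FourCC is just four ASCII bytes packed into a 32-bit integer, least significant byte first, which is what cv2.VideoWriter_fourcc computes. That is why 'avc1' becomes 0x31637661, and why the Raspbian hack value 0x00000021 decodes to '!' plus three NUL bytes rather than any four-character code — it is a raw numeric tag that OpenCV's FFmpeg backend happens to accept for H.264 in MP4 on that build. A small sketch of the packing (fourcc and fourccToString are my helper names):

```cpp
#include <cstdint>
#include <string>

// Mirrors how OpenCV packs a FourCC (cv::VideoWriter::fourcc):
// four ASCII characters, least significant byte first.
constexpr std::uint32_t fourcc(char a, char b, char c, char d) {
    return  std::uint32_t(std::uint8_t(a))        |
           (std::uint32_t(std::uint8_t(b)) <<  8) |
           (std::uint32_t(std::uint8_t(c)) << 16) |
           (std::uint32_t(std::uint8_t(d)) << 24);
}

// Decode an integer back into its four characters, useful when debugging
// which code a backend actually received.
std::string fourccToString(std::uint32_t v) {
    std::string s(4, '\0');
    for (int i = 0; i < 4; ++i)
        s[i] = char((v >> (8 * i)) & 0xFF);
    return s;
}
```

    On the Python side, hex(cv2.VideoWriter_fourcc('a', 'v', 'c', '1')) shows the same packed value.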