Advanced search

Media (0)

Word: - Tags -/flash

No media matching your criteria is available on the site.

Other articles (39)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When the document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (4855)

  • Audio stuttering every couple of seconds

    27 June 2019, by Glutch

    I'm merging a video (recorded with ffmpeg; good quality, all solid) with a musicfile.mp3. However, every couple of seconds the music stutters and skips slightly. That seems very strange, since simply adding music on top of an existing video should be easy work for the engine, with no artifacts (in comparison to recording live desktop footage). Can anyone help me sort this out?

    System: macOS, MacBook Pro 2015, 16 GB RAM, 2.7 GHz i5

    ffmpeg -i "temp/1561246948349.mkv" -i "music/happy.mp3" -vcodec copy -filter_complex amix -map 0:v -map 0:a -map 1:a -shortest -b:a 144k "finished/2019-06-22/1561246948349/output.mkv"
    ffmpeg version 4.1.3 Copyright (c) 2000-2019 the FFmpeg developers
     built with Apple LLVM version 10.0.1 (clang-1001.0.46.4)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1.3_1 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-11.0.2.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-11.0.2.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-videotoolbox --disable-libjack --disable-indev=jack --enable-libaom --enable-libsoxr
     libavutil      56. 22.100 / 56. 22.100
     libavcodec     58. 35.100 / 58. 35.100
     libavformat    58. 20.100 / 58. 20.100
     libavdevice    58.  5.100 / 58.  5.100
     libavfilter     7. 40.101 /  7. 40.101
     libavresample   4.  0.  0 /  4.  0.  0
     libswscale      5.  3.100 /  5.  3.100
     libswresample   3.  3.100 /  3.  3.100
     libpostproc    55.  3.100 / 55.  3.100
    Input #0, matroska,webm, from 'temp/1561246948349.mkv':
     Metadata:
       ENCODER         : Lavf58.20.100
     Duration: 00:00:21.50, start: 0.000000, bitrate: 5834 kb/s
       Stream #0:0: Video: h264 (High 4:4:4 Predictive), yuv422p(progressive), 2880x1800, 30 fps, 30 tbr, 1k tbn, 2000k tbc (default)
       Metadata:
         ENCODER         : Lavc58.35.100 libx264
         DURATION        : 00:00:21.467000000
       Stream #0:1: Audio: vorbis, 44100 Hz, stereo, fltp (default)
       Metadata:
         ENCODER         : Lavc58.35.100 libvorbis
         DURATION        : 00:00:21.496000000
    Input #1, mp3, from 'music/happy.mp3':
     Metadata:
       album           : Random
       genre           : Jazz & Blues
     Duration: 00:15:59.84, start: 0.025057, bitrate: 186 kb/s
       Stream #1:0: Audio: mp3, 44100 Hz, stereo, fltp, 186 kb/s
       Metadata:
         encoder         : LAME3.100
    Stream mapping:
     Stream #0:1 (vorbis) -> amix:input0
     Stream #1:0 (mp3float) -> amix:input1
     amix -> Stream #0:0 (libvorbis)
     Stream #0:0 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    Output #0, matroska, to 'finished/2019-06-22/1561246948349/output.mkv':
     Metadata:
       encoder         : Lavf58.20.100
       Stream #0:0: Audio: vorbis (libvorbis) (oV[0][0] / 0x566F), 44100 Hz, stereo, fltp, 144 kb/s (default)
       Metadata:
         encoder         : Lavc58.35.100 libvorbis
       Stream #0:1: Video: h264 (High 4:4:4 Predictive) (H264 / 0x34363248), yuv422p(progressive), 2880x1800, q=2-31, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         ENCODER         : Lavc58.35.100 libx264
         DURATION        : 00:00:21.467000000
    frame=  640 fps=0.0 q=-1.0 Lsize=   15227kB time=00:00:21.46 bitrate=5810.3kbits/s speed=33.8x    
    video:14888kB audio:318kB subtitle:0kB other streams:0kB global headers:4kB muxing overhead: 0.139864%
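
    One variant I have not tried yet (a sketch reusing my paths and codec settings from above, not a verified fix): the MP3's timestamps may contain small gaps that amix propagates as periodic skips, and resampling the mixed audio with aresample's async option squeezes it back onto a clean timeline:

     ffmpeg -i "temp/1561246948349.mkv" -i "music/happy.mp3" \
      -filter_complex "[0:a][1:a]amix=inputs=2:duration=first,aresample=async=1000[aout]" \
      -map 0:v -map "[aout]" -c:v copy -c:a libvorbis -b:a 144k \
      "finished/2019-06-22/1561246948349/output.mkv"
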
  • FFmpeg C++ decoding in a separate thread

    12 June 2019, by Brigapes

    I'm trying to decode a video with FFmpeg, convert it to an OpenGL texture, and display it inside a cocos2d-x engine. I've managed to do that, and it displays the video as I wanted; the problem now is performance. I get a sprite update every frame (the game is fixed at 60 fps, the video is 30 fps), so at first I decoded and converted on alternating frames. That didn't work great, so I now have a separate thread where I decode in an infinite while loop, with a sleep() just so it doesn't hog the CPU.
    What I currently have set up is two PBO framebuffers and a bool flag to tell my FFmpeg thread loop to decode another frame, since I don't know how to make it wait until the next frame is actually needed. I've searched online for a solution to this kind of problem but didn't find any answers.

    I've looked at this: Decoding video directly into a texture in separate thread. It didn't solve my problem, though, since it only covers converting YUV to RGB inside OpenGL shaders, which I haven't done yet and which isn't the current issue.

    Additional info that might be useful: I don't need to end the thread until the application exits, and I'm open to using any video format, including lossless.

    OK, so the main decoding loop looks like this:

     //.. this is inside of a constructor / init
     //adding the thread to an array in order to keep its future alive
     global::global_pending_futures.push_back(std::async(std::launch::async, [=] {
            while (true) {
                if (isPlaying) {
                    this->decodeLoop();
                }
                else {
                    std::this_thread::sleep_for(std::chrono::milliseconds(3));
                }
            }
        }));

    The reason I use a bool to check whether the frame was consumed is that the main decoding function takes about 5 ms to finish in debug and should then wait about 11 ms before the frame is displayed; I can't know when the frame was displayed, and I also don't know how long decoding took.
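
    One idea I'm considering is a condition variable, so the decode thread blocks until the render thread has actually consumed a buffer instead of polling with short sleeps. A minimal sketch of that handshake with placeholder names (mtx, cv, frameConsumed, decodeThreadBody, onFrameUploaded; none of these are in my code yet):

     #include <condition_variable>
     #include <mutex>

     std::mutex mtx;
     std::condition_variable cv;
     bool frameConsumed = true;   // the render thread flips this after the GL upload

     void decodeThreadBody() {    // runs inside the decode thread's loop
         std::unique_lock<std::mutex> lock(mtx);
         cv.wait(lock, [] { return frameConsumed; });  // sleeps until notified, no busy wait
         frameConsumed = false;
         lock.unlock();
         // ... decode and convert the next frame here ...
     }

     void onFrameUploaded() {     // called by the render thread after the texture upload
         { std::lock_guard<std::mutex> lk(mtx); frameConsumed = true; }
         cv.notify_one();
     }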

    Decode function:

    void video::decodeLoop() { //this should loop in a separate thread
       frameData* buff = nullptr;
       if (buf1.needsRefill) {
       /// buf1.bufferLock.lock();
           buff = &buf1;
           buf1.needsRefill = false;
           firstBuff = true;
       }
       else if (buf2.needsRefill) {
           ///buf2.bufferLock.lock();
           buff = &buf2;
           buf2.needsRefill = false;
           firstBuff = false;
       }

       if (buff == nullptr) {
           std::this_thread::sleep_for(std::chrono::milliseconds(1));
           return;//error? //wait?
       }

       //pack pixel buffer?

       if (getNextFrame(buff)) {
           getCurrentRBGConvertedFrame(buff);
       }
       else {
           loopedTimes++;
           if (loopedTimes >= repeatTimes) {
               stop();
           }
           else {
               restartVideoPlay(&buf1);//restart both
               restartVideoPlay(&buf2);
               if (getNextFrame(buff)) {
                   getCurrentRBGConvertedFrame(buff);
               }
           }
       }
    /// buff->bufferLock.unlock();

       return;
    }

    As you can tell, I first check whether the buffer has been consumed via the needsRefill bool, and only then decode another frame.

    The frameData struct:

       struct frameData {
           frameData() {}
           ~frameData() {}

           AVFrame* frame = nullptr;        // initialized so cleanup can't touch garbage
           AVPacket* pkt = nullptr;
           unsigned char* pdata = nullptr;

           // shared between the decode and render threads; std::atomic<bool>
           // (from <atomic>) avoids the data race a plain bool would be here
           std::atomic<bool> needsRefill{ true };
           std::string name = "";

           std::mutex bufferLock;

           ///unsigned int crrFrame
           GLuint pboid = 0;
       };

    And this is called every frame:

    void video::actualDraw() { //meant for cocos implementation
       if (this->isVisible()) {
           if (this->getOpacity() > 0) {
               if (isPlaying) {
                    if (loopedTimes >= repeatTimes) { //ignore -1 because comparing unsigned to signed
                       this->stop();
                   }
               }

               if (isPlaying) {
                   this->setVisible(true);

                   if (!display) { //skip frame
                       ///this->getNextFrame();
                       display = true;
                   }
                   else if (display) {
                       display = false;
                       auto buff = this->getData();                    
                       width = this->getWidth();
                       height = this->getHeight();
                       if (buff) {
                           if (buff->pdata) {

                                glBindBuffer(GL_PIXEL_UNPACK_BUFFER, buff->pboid);
                                glBufferData(GL_PIXEL_UNPACK_BUFFER, 3 * (width*height), buff->pdata, GL_DYNAMIC_DRAW);

                                glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0);///buff->pdata);
                                glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
                            }

                           buff->needsRefill = true;
                       }
                   }
               }
               else { this->setVisible(false); }
           }
       }
    }
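
    One PBO detail I'm unsure about: glBufferData with the pixel data can stall if the driver is still reading the previous frame's transfer. A common streaming pattern is to orphan the buffer first and memcpy into a mapped pointer; a sketch with a hypothetical uploadFrame helper (not in my code, GL headers assumed available via cocos2d-x):

     #include <cstring>   // std::memcpy

     void uploadFrame(GLuint pboid, const unsigned char* pdata, int width, int height) {
         glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboid);
         // orphan: hand the old storage back to the driver instead of waiting on it
         glBufferData(GL_PIXEL_UNPACK_BUFFER, 3 * width * height, nullptr, GL_STREAM_DRAW);
         void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
         if (dst) {
             std::memcpy(dst, pdata, 3 * (size_t)width * (size_t)height);
             glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
         }
         // the last argument is an offset into the bound PBO, not a client pointer
         glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0);
         glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
     }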

    The getData function tells which framebuffer to use:

    video::frameData* video::getData() {
       if (firstBuff) {
           if (buf1.needsRefill == false) {
               ///firstBuff = false;
               return &buf1;///.pdata;
           }
       }
       else { //if false
           if (buf2.needsRefill == false) {
               ///firstBuff = true;
               return &buf2;///.pdata;
           }
       }
       return nullptr;
    }

    I'm not sure what else to include, so I pasted the whole code to pastebin.
    video.cpp: https://pastebin.com/cWGT6APn
    video.h: https://pastebin.com/DswAXwXV

    To summarize the problem:

    How do I properly implement decoding in a separate thread, and how do I optimize the current code?

    Currently the video lags whenever some other thread or the main thread gets heavy, because it then doesn't decode fast enough.

  • Are there any technical reasons that MP4 is more popular than WebM?

    1 June 2019, by dprogramz

    Not looking for opinions. I’m searching for data.

    As it stands, I want to become a WebM evangelist. However, I assume there are actual technical reasons why MP4 is preferred over WebM in the bigger picture. I want to know them so I can be accurate in my assessments.

    I'm working on a broadcast video messaging graphics engine (think chyron), using the Chromium engine for messaging, like OBS does. So far the results have been excellent.

    One of the best features I've found is using WebM for video. I should note I am using small (640x480 max) videos as graphics on top of a larger full-HD video.

    Not only does it seem to have a better compression-to-quality ratio than MP4 for my use case; most importantly, it has full alpha support, which allows excellent layering of video objects on top of each other in the HTML DOM, in real time, with no noticeable performance hit.

    Aside from its predecessor, FLV, I can't think of another high-quality, high-compression codec that also supports alpha. I feel like you are stuck using ProRes 4444 or the ancient Animation codec to reliably distribute video with an alpha channel.

    So, that said, are there technical reasons why WebM isn't more widely adopted than MP4?

    I already know the obvious one: there is dedicated hardware to decode MP4. But is there any technicality that would prevent a hardware WebM decoder? I really want to understand what the benefits of MP4 over WebM are, which I assume is why it is more widely used.

    Thanks!