Advanced search

Media (91)

Other articles (5)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • The plugin: Podcasts.

    14 July 2010

    The problem of podcasting is, once again, a problem that highlights the state of standardisation of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly tied to the use of iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and is notably supported by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • A selection of projects using MediaSPIP

    29 April 2011

    The examples listed below are representative of specific uses of MediaSPIP in certain projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
    MediaSPIP farm @ Infini
    The Infini association develops activities around public reception, internet access points, training, the management of innovative projects in the field of Information and Communication Technologies, and website hosting. It plays a unique role in this area (...)

On other sites (2056)

  • How do video encoding standards (like H.264) then serialize motion prediction?

    12 August 2019, by Nephilim

    Motion prediction brute-force algorithms, in a nutshell, work like this (if I'm not mistaken):

    1. Search every possible macroblock in the search window
    2. Compare each of them with the reference macroblock
    3. Take the one that is the most similar and encode the DIFFERENCE between the frames instead of the actual frame
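
    A minimal sketch of that brute-force search, assuming plain NumPy and a SAD (sum of absolute differences) cost; the function name best_match and the 16-pixel search radius are illustrative choices, not anything taken from the H.264 spec:

    # Brute-force block matching sketch (NumPy, illustrative only).
    # Real encoders use much smarter search patterns and cost functions.
    import numpy as np

    def best_match(ref_block, prev_frame, top, left, search=16):
        """Return the motion vector and residual block for the best match.

        ref_block   : 2D array of luma samples (e.g. 16x16) from the current frame
        prev_frame  : 2D luma plane of the reference frame
        (top, left) : position of ref_block in the current frame
        search      : search window radius in pixels
        """
        h, w = ref_block.shape
        best_cost, best_mv, best_res = None, (0, 0), None
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + h > prev_frame.shape[0] or x + w > prev_frame.shape[1]:
                    continue
                candidate = prev_frame[y:y + h, x:x + w]
                residual = ref_block.astype(np.int16) - candidate.astype(np.int16)
                cost = np.abs(residual).sum()            # SAD cost
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv, best_res = cost, (dy, dx), residual
        return best_mv, best_res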

    Now this in theory makes sense to me, but when it gets to the actual serialization I'm lost. We've found the most similar block. We know where it is, and from that we can calculate its distance vector. Let's say it's about 64 pixels to the right.

    Basically, when serializing this block, we do:

    • Ignore everything but luminosity (encode only Y, I think I saw this somewhere?), and take note of the difference between it and the reference block
    • Encode the motion, a distance vector
    • Encode the MSE, so we can reconstruct it

    Is the output of this a simple 2D array of luminosity values, with an appended/prepended MSE value and distance vector? Where is the compression in this? Is it just that we take out the UV component? There seem to be many resources that cover video encoders at a surface level, but it's very hard to find actual in-depth explanations of modern video encoders. Feel free to correct me on my above statements.
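
    As a hedged illustration of where the savings come from, the toy sketch below just quantizes the residual produced by the search sketch above and keeps the non-zero values; toy_encode_block and qstep are made-up names, and a real H.264 encoder additionally runs the residual through a small DCT-like integer transform and entropy-codes everything (CAVLC or CABAC), while chroma is subsampled rather than discarded:

    # Toy sketch only: a well-predicted block leaves a residual that is mostly
    # near zero, so very little survives quantization; that sparsity (plus
    # entropy coding) is where the compression comes from.
    import numpy as np

    def toy_encode_block(motion_vector, residual, qstep=8):
        q = np.round(residual / qstep).astype(np.int16)   # coarse quantization
        coeffs = [(int(i), int(v)) for i, v in enumerate(q.ravel()) if v != 0]
        # The "serialized" block is a motion vector plus a short list of
        # surviving coefficients, not a full 2D array of samples.
        return {"mv": motion_vector, "qstep": qstep, "coeffs": coeffs}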

  • Consume RTMP and distribute via WebSocket

    10 February 2018, by ShubhadeepB

    I have a Linux PC which streams video (with audio) from a webcam to an RTMP server (nginx). The nginx RTMP server then converts the video into HLS, and that HLS stream is shown in browsers. Everything works well. The only problem is the delay caused by the HLS protocol (10-20 seconds, depending on the HLS playlist size).

    I am looking for an alternative to HLS which can run on most major browsers. I cannot use WebRTC due to the lack of audio, and I cannot use Flash due to the lack of support in mobile browsers. So my question is: is there any way to consume the RTMP stream, then distribute it via WebSocket and play it on modern WebSocket-supporting browsers without any additional plugin? I am using ffmpeg to publish the RTMP stream from the Linux PC. If required, the source stream can easily be changed to another live streaming protocol such as RTSP, so if there is some other solution which can solve this problem without RTMP, I can go for that too.

    Thanks in advance.
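
    One commonly used pattern (the one taken by JSMpeg-style players) is to have ffmpeg re-encode the RTMP feed to MPEG-TS with MPEG-1 video and MP2 audio and push the raw TS bytes over a WebSocket, where a JavaScript decoder plays them. A hedged server-side sketch using the third-party Python websockets package, with the RTMP URL and port as placeholders, could look like this:

    # Sketch only, not a drop-in solution: ffmpeg pulls the RTMP feed, converts
    # it to MPEG-TS (MPEG-1 video + MP2 audio, which JSMpeg-style browser
    # players can decode), and the raw TS bytes are broadcast to WebSocket
    # clients. Requires the third-party "websockets" package.
    import asyncio
    import websockets

    RTMP_URL = "rtmp://localhost/live/stream"    # placeholder source URL
    CLIENTS = set()

    async def handler(ws):                       # older websockets versions
        CLIENTS.add(ws)                          # also pass a "path" argument
        try:
            await ws.wait_closed()
        finally:
            CLIENTS.discard(ws)

    async def pump():
        proc = await asyncio.create_subprocess_exec(
            "ffmpeg", "-i", RTMP_URL,
            "-c:v", "mpeg1video", "-c:a", "mp2", "-f", "mpegts", "pipe:1",
            stdout=asyncio.subprocess.PIPE)
        while True:
            chunk = await proc.stdout.read(4096)
            if not chunk:                        # ffmpeg exited / stream ended
                break
            for ws in list(CLIENTS):
                try:
                    await ws.send(chunk)
                except websockets.ConnectionClosed:
                    CLIENTS.discard(ws)

    async def main():
        async with websockets.serve(handler, "0.0.0.0", 8765):
            await pump()

    asyncio.run(main())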

  • Using ffmpeg to convert video, audio and thumbnails in separate threads?

    26 January 2018, by Oleksandr Kyrpa

    I convert a lot of videos, and some time ago I found that if I split the conversion into separate tasks, I can speed up the total process. For example, the audio-only ffmpeg process does not need to push raw data through the video decoder: when a video frame is read from the input file, it just skips that frame.

    Process 1, for video:

    ffmpeg -i video.mp4 -an -c:v h264 ..... -y out.h264

    Process 2, for audio:

    ffmpeg -i video.mp4 -vn -c:a aac ...... -y out.aac

    Process 3, for thumbnails:

    ffmpeg -i video.mp4 -vf fps=1/60 img%03d.jpg

    And after all the processes have completed, I need to merge the video and audio tracks into an mp4 container. Using this method I solved the bottleneck problem and sped up the total process.

    ffmpeg -i out.h264 -i out.aac -c:v copy -c:a copy -y out.mp4

    But is there any other way to speed up the conversion in modern ffmpeg?
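
    A hedged sketch of driving that three-way split from one script; the file names mirror the question, libx264 stands in for the elided encoder options, and it is only meant to show the parallel structure:

    # Sketch only: run the video, audio and thumbnail jobs concurrently, then mux.
    import subprocess

    jobs = [
        ["ffmpeg", "-y", "-i", "video.mp4", "-an", "-c:v", "libx264", "out.h264"],
        ["ffmpeg", "-y", "-i", "video.mp4", "-vn", "-c:a", "aac", "out.aac"],
        ["ffmpeg", "-y", "-i", "video.mp4", "-vf", "fps=1/60", "img%03d.jpg"],
    ]

    procs = [subprocess.Popen(cmd) for cmd in jobs]   # all three start at once
    for p in procs:
        p.wait()                                      # wait for every job to finish

    # Mux the elementary streams back into an mp4 container without re-encoding.
    subprocess.run(["ffmpeg", "-y", "-i", "out.h264", "-i", "out.aac",
                    "-c", "copy", "out.mp4"], check=True)

    Whether this beats a single multi-output ffmpeg invocation depends on how much each encoder already parallelizes internally; the split mainly helps when one job would otherwise sit idle waiting on another.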