
Other articles (41)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
    images: png, gif, jpg, bmp and more
    audio: MP3, Ogg, Wav and more
    video: AVI, MP4, OGV, mpg, mov, wmv and more
    text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (7528)

  • Fragmented mp4 cannot be played on pure html5 video

    23 November 2017, by Kevin Youngho Seo

    I need to use a video tag to serve over 3GB of video on the web.
    When the page is loaded, it takes a long time for the media element to receive the ’loadedmetadata’ event.

    I’ve found that the size of the moov box is too large (33 MB).
    So I re-encoded it with ffmpeg’s ’empty_moov + frag_keyframe’ option, but it still took a long time to fetch all the fragment information, as seen in the ’Inspector - Network’ tab in Chrome.

    Is there a way to speed up loading when playing fragmented mp4 with the html5 video tag?
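
    One approach often suggested for this situation (not taken from the question itself) is to keep a plain, non-fragmented mp4 but remux it so the moov box sits at the start of the file; the browser can then fire ’loadedmetadata’ after fetching only the head of the file. A minimal sketch, assuming ffmpeg is on the PATH and using placeholder file names, invoked from Python:

        import subprocess

        # Remux only (-c copy): no re-encode, just rewrite the container so the
        # moov box is moved to the front of the file (+faststart).
        subprocess.run(
            [
                "ffmpeg",
                "-i", "input.mp4",          # placeholder input path
                "-c", "copy",               # copy streams, no re-encode
                "-movflags", "+faststart",  # move the moov box to the start
                "output_faststart.mp4",     # placeholder output path
            ],
            check=True,
        )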

  • How does mp4 block matching work

    19 June 2019, by YAHsaves

    I’ve been working on a video encoder that uses block matching to find similar blocks on previous frames.

    For the sake of simplicity I’ll leave out most of the details, but I’m wondering if I got the block matching algorithm right.

    In order to find a block on a previous frame, my encoder uses the mean squared error of the Y channel in YUV color space.

    This works by comparing each pixel of the block we want to match with the corresponding pixel of a candidate block on the previous frame, taking the difference of each pixel pair and squaring it.

    After all the pixels are compared, the candidate block with the smallest average squared difference is chosen as the desired block.

    Now this is where I need help. My encoder looks at every possible block position in a 256x256 area and uses half-pixel searches as well. The smallest block size it can use is 4x4.
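
    For reference, a minimal sketch of the plain full-pixel MSE search described above (illustrative names only; half-pixel refinement and block subdivision left out) could look like this in Python with numpy:

        import numpy as np

        def find_best_block(prev_y, cur_y, bx, by, size=4, radius=128):
            # Exhaustive search for the block at (bx, by) of the current frame's
            # Y plane within a 256x256 window (radius 128) of the previous frame.
            target = cur_y[by:by + size, bx:bx + size].astype(np.int32)
            best_mse, best_offset = None, None
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    x, y = bx + dx, by + dy
                    if x < 0 or y < 0 or y + size > prev_y.shape[0] or x + size > prev_y.shape[1]:
                        continue  # candidate block falls outside the frame
                    candidate = prev_y[y:y + size, x:x + size].astype(np.int32)
                    mse = np.mean((target - candidate) ** 2)  # mean squared error
                    if best_mse is None or mse < best_mse:
                        best_mse, best_offset = mse, (dx, dy)
            return best_offset, best_mse  # motion vector and its matching error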

    From what I’ve read online, this is the same thing mp4 uses.

    However, I can’t find nearly as many matching blocks as mp4 appears to be able to find.

    For example, here are 2 frames I want to compress. The first will be the I frame and the second the P frame:

    [image: the two frames to be compressed]

    Now after my encoder has run, it is able to reduce the second frame by 80%, and whatever it can’t match closely enough it saves as "difference" blocks, which are grey blocks recording only the difference. They look like this:

    [image: the grey "difference" blocks]

    Now what I don’t get: saving these "difference" blocks as a jpg takes roughly 90 kB for them to be accurate enough.

    Multiply that by 24 (24 frames per second) and you get about 2160 kB per second. That’s not including the space taken up by the actual motion vectors or anything else.

    However, somehow mp4 is able to compress the video these frames come from into a mere 700 kB per second and still look better than my encoder does at much larger data rates.

    Why is this? Is there something I’m doing wrong when looking for blocks? Any help would be much appreciated.
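
    As a quick, purely illustrative check of the numbers quoted above (not part of the original post):

        # Residual "difference" blocks: roughly 90 kB per frame as jpg.
        residual_kb_per_frame = 90
        fps = 24
        residual_kb_per_second = residual_kb_per_frame * fps   # = 2160 kB/s
        mp4_kb_per_second = 700                                 # quoted mp4 rate
        print(residual_kb_per_second / mp4_kb_per_second)       # ~3.1x larger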

  • How to trace C threads in Android?

    25 September 2015, by 李天宇

    I’m working on a video player based on ffmpeg. I notice a lag every time I try to exit. To figure it out, I added some logging and found out when it happens. But since ffmpeg forks into several threads in C, it’s hard to tell where the time goes; the logs won’t tell me whether the time is being used up by another thread.

    I tried DDMS and android.os.Debug.startMethodTracing(), but I just can’t find detailed enough information.
    Is there some command that can make the OS tell me which thread takes more time?