Diary Of An x264 Developer

http://x264dev.multimedia.cx/

Articles published on the site

  • Patent skullduggery : Tandberg rips off x264 algorithm

    25 November 2010, by Dark Shikari (tags: patents, ripoffs, x264)

    Update: Tandberg claims they came up with the algorithm independently: to be fair, I can actually believe this to some extent, as I think the algorithm is way too obvious to be patented.  Of course, they also claim that the algorithm isn’t actually identical, since they don’t want to lose their patent application.

    I still don’t trust them, but it’s possible it’s merely bad research (and thus being unaware of prior art) as opposed to anything malicious.  Furthermore, word from within their office suggests they’re quite possibly being honest: supposedly the development team does not read x264 code at all.  So this might just all be very bad luck.

    Regardless, the patent is still complete tripe, and should never have been filed.

    Most importantly, stop harassing the guy whose name is on the patent (Lars): he’s just a programmer, not the management or lawyers responsible for filing the patent.  This is stupid and unnecessary.  I’ve removed the original post because of this; it can be found here for those who want to read it.

    Appendix: the details of the patent:

    I figure I’ll go over the exact correspondence between the patent and my code here.

    1. A method for calculating run and level representations of quantized transform coefficients representing pixel values included in a block of a video picture, the method comprising:

    Translation: It’s a run-level coder.

    packing, at a video processing apparatus, each quantized transform coefficient in a value interval [Max, Min] by setting all quantized transform coefficients greater than Max equal to Max, and all quantized transform coefficients less than Min equal to Min

    The quantized coefficients are clipped to a certain valid range to allow them to be packed into bytes (they start as 16-bit values).
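
    As a concrete illustration of this packing step, here is a minimal SSE2 intrinsics sketch (my own, not x264’s or Tandberg’s actual code): two registers of 16-bit coefficients are saturated down to bytes so that sixteen of them fit in a single register.

        #include <emmintrin.h>

        /* Hypothetical helper: saturate 16 signed 16-bit coefficients into 16
         * signed bytes (packsswb-style); values outside [-128, 127] are clipped.
         * Illustrative only. */
        static inline __m128i pack_coefs_to_bytes(__m128i lo16, __m128i hi16)
        {
            return _mm_packs_epi16(lo16, hi16);
        }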

    reordering, at the video processing apparatus, the quantized transform coefficients according to a predefined order depending on respective positions in the block resulting in an array C of reordered quantized transform coefficients

    This is the zigzag pattern used in H.264 (and most formats) for reordering DCT coefficients.  In x264, this is done before the run-level coder step.
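
    For reference, this is the standard 4x4 zigzag scan order, written as the raster-order indices visited from the upper-left corner towards the lower-right; reordering is then just reordered[i] = coefs[zigzag4x4[i]]:

        /* H.264 4x4 frame zigzag scan: position i of the reordered array takes
         * the coefficient at raster index zigzag4x4[i]. */
        static const int zigzag4x4[16] = {
             0,  1,  4,  8,
             5,  2,  3,  6,
             9, 12, 13, 10,
             7, 11, 14, 15
        };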

    masking, at the video processing apparatus, C by generating an array M containing ones in positions corresponding to positions of C having non-zero values, and zeros in positions corresponding to positions of C having zero values

    This is creating a bitmask based on the coefficient values, the pmovmskb step.

    generating, at the video processing apparatus, for each position containing a one in M, a run and a level representation by setting the level value equal to an occurring value in a corresponding position of C; and setting, at the video processing apparatus, for each position containing a one in M, the run value equal to the number of preceding positions relative to a current position in M since a previous occurrence of one in M.

    This is the process of creating run/level values from the bitmask.
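
    In plain C, claim 1 as a whole boils down to something like the following scalar sketch (my paraphrase of the idea, not code from x264 or the patent; the names are illustrative):

        #include <stdint.h>

        typedef struct { int run; int level; } run_level_t;

        /* Produce run/level pairs from zigzag-ordered, byte-packed coefficients.
         * "run" counts the zeros skipped since the previous nonzero value. */
        static int run_level_encode(const int8_t *coefs, int n, run_level_t *out)
        {
            int count = 0, run = 0;
            for (int i = 0; i < n; i++) {
                if (coefs[i]) {
                    out[count].run   = run;
                    out[count].level = coefs[i];
                    count++;
                    run = 0;
                } else {
                    run++;
                }
            }
            return count;   /* number of run/level pairs produced */
        }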

    Now into the detailed claims:

    2. The method according to Claim 1, wherein the masking further includes creating an array C′ from C where positions corresponding to positions of nonzero values in C are filled with ones, and positions corresponding to positions of zero values in C are filled with zeros, and creating M from C′ by extracting the most significant bit from values in respective positions of C′ and inserting the bits in corresponding positions in M.

    They’re extracting the most significant bit of the values to create a bitmask.  This is exactly what the pmovmskb in my algorithm does.

    3. The method according to Claim 2, wherein the creating of the array C′ is executed by a C++ function PCMPGTB, and the creating of M from C′ is executed by a C++ function PMOVMSKB.

    And here they use pcmpgtb (they call it a C++ function for some reason, but it’s an SSE instruction) to build the comparison mask from the input values.  This is exactly the same method I used in decimate_score.  They also use pmovmskb as mentioned.

    4. The method according to Claim 1, wherein the generating of the run and level representation further includes determining positions containing non-zero values in C by corresponding positions containing ones in M.

    5. The method according to Claim 4, wherein the determining of positions containing non-zero values in C is executed by a C++ function BSF.

    Here they iterate over the bitmask of transform coefficients using a “BSF” function to find runs, which is exactly what I did.  Of course, BSF isn’t a function, it’s an x86 instruction.
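
    Put together, the bitmask-plus-bit-scan technique looks roughly like this SSE2 intrinsics sketch (again my own illustration, not the actual x264 or Tandberg code; it assumes the 16 coefficients have already been packed into signed bytes, and __builtin_ctz stands in for BSF):

        #include <emmintrin.h>
        #include <stdint.h>

        /* Find the positions of nonzero bytes among 16 coefficients: a byte
         * compare builds the 0x00/0xFF mask array (pcmpgtb), pmovmskb extracts
         * the sign bits into a 16-bit mask, and a bit scan walks the set bits. */
        static int nonzero_positions_16(const int8_t *coefs, int *pos)
        {
            __m128i c    = _mm_loadu_si128((const __m128i *)coefs);
            __m128i zero = _mm_setzero_si128();
            __m128i nz   = _mm_or_si128(_mm_cmpgt_epi8(c, zero),
                                        _mm_cmpgt_epi8(zero, c));
            unsigned mask = _mm_movemask_epi8(nz);   /* one bit per byte */

            int n = 0;
            while (mask) {
                int i = __builtin_ctz(mask);   /* BSF: index of lowest set bit */
                pos[n++] = i;
                mask &= mask - 1;              /* clear that bit and continue  */
            }
            return n;
        }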

    6. The method according to Claim 1, wherein Max is 256 and Min is 0.

    This is almost surely a typo or mistake of some sort.  They mean the Max should be 255, not 256: 256 doesn’t fit in a uint8_t.

    7. The method according to Claim 1, wherein the predefined order follows a zigzag path of transform coefficient positions in the block starting in an upper left corner heading towards a lower right corner.

    This is a description of the typical DCT zigzag pattern (like in H.264, MPEG-2, Theora, etc).

    Everything after this part is just repeating itself with the phrase “an apparatus” added in order to make the USPTO listen to them.

  • How to contribute to open source, for companies

    18 October 2010, by Dark Shikari (tags: development, open source, x264)

    I have seen many nigh-incomprehensible attempts by companies to contribute to open source projects, including x264.  Developers are often simply boggled, wondering why the companies seem incapable of proper communication.  The companies assume the developers are being unreceptive, while the developers assume the companies are being incompetent, idiotic, or malicious.  Most of this seems to boil down to a basic lack of understanding of how open source works, resulting in a wide variety of misunderstandings.  Accordingly, this post will cover the dos and don’ts of corporate contribution to open source.

    Do: contact the project using their preferred medium of communication.

    Most open source projects use public methods of communication, such as mailing lists and IRC.  It’s not the end of the world if you mistakenly make contact with the wrong people or via the wrong medium, but be prepared to switch to the correct one once informed!  You may not be experienced using whatever form of communication the project uses, but if you refuse to communicate through proper channels, they will likely not be as inclined to assist you.  Larger open source projects are often much like companies in that they have different parts to their organization with different roles.  Don’t assume that everyone is a major developer!

    If you don’t know what to do, a good bet is often to just ask someone.

    Don’t: contact only one person.

    Open source projects are a communal effort.  Major contributions are looked over by multiple developers and are often discussed by the community as a whole.  Yet many companies tend to contact only a single person in lieu of dealing with the project proper.  This has many flaws: to begin with, it forces a single developer (who isn’t paid by you) to act as your liaison, adding yet another layer between what you want and the people you want to talk to.  Contribution to open source projects should not be a game of telephone.

    Of course, there are exceptions to this: sometimes a single developer is in charge of the entirety of some particular aspect of a project that you intend to contribute to, in which case this might not be so bad.

    Do: make clear exactly what it is you are contributing.

    Are you contributing code?  Development resources?  Money?  API documentation?  Make it as clear as possible, from the start!  How developers react, which developers get involved, and their expectations will depend heavily on what they think you are providing.  Make sure their expectations match reality.  Great confusion can result when they do not.

    This also applies in the reverse — if there’s something you need from the project, such as support or assistance with development of your patch, make that explicitly clear.

    Don’t: code dump.

    Code does not have intrinsic value: it is only useful as part of a working, living project.  Most projects react very negatively to large “dumps” of code without associated human resources.  That is, they expect you to work with them to finalize the code until it is ready to be committed.  Of course, it’s better to work with the project from the start: this avoids the situation of writing 50,000 lines of code independently and then finding that half of it needs to be rewritten.  Or, worse, writing an enormous amount of code only to find it completely unnecessary.

    Of course, the reverse option — keeping such code to yourself — is often even more costly, as it forces you to maintain the code instead of the official developers.

    Do: ignore trolls.

    As mentioned above, many projects use public communication methods — which, of course, allow anyone to communicate, by nature of being public.  Not everyone on a project’s IRC or mailing list is necessarily qualified to officially represent the project.  It is not too uncommon for a prospective corporate contributor to be turned off by the uninviting words of someone who isn’t even involved in the project, simply because they assumed that person spoke for it.  Make sure you’re dealing with the right people before drawing conclusions.

    Don’t: disappear.

    If you are going to try to be involved in a project, you need to stay in contact.  We’ve had all too many companies who simply disappear after the initial introduction.  Some tell us that we’ll need an NDA, then never provide it or send status updates.  You may know why you’re not in contact — political issues at the company, product launch crunches, a nice vacation to the Bahamas — but we don’t!  If you disappear, we will assume that you gave up.

    Above all, don’t assume that being at a large successful company makes you immune to these problems.  If anything, these problems seem to be the most common at the largest companies.  I didn’t name any names in this post, but practically every single one of these rules has been violated at some point by companies looking to contribute to x264.  In the larger scale of open source, these problems happen constantly.  Don’t fall into the same traps that many other companies have.

    If you’re an open source developer reading this post, remember it next time you see a company acting seemingly nonsensically in an attempt to contribute: it’s quite possible they just don’t know what to do.  And just because they’re doing it wrong doesn’t mean that it isn’t your responsibility to try to help them do it right.

  • H.264 and VP8 for still image coding : WebP ?

    1 October 2010, by Dark Shikari (tags: google, H.264, psychovisual optimizations, VP8)

    Update: post now contains a Theora comparison as well; see below.

    JPEG is a very old lossy image format.  By today’s standards, it’s awful compression-wise: practically every video format since the days of MPEG-2 has been able to tie or beat JPEG at its own game.  The reasons people haven’t switched to something more modern practically always boil down to a simple one — it’s just not worth the hassle.  Even if JPEG can be beaten by a factor of 2, convincing the entire world to change image formats after 20 years is nigh impossible.  Furthermore, JPEG is fast, simple, and practically guaranteed to be free of any intellectual property worries.  It’s been tried before: JPEG-2000 first, then Microsoft’s JPEG XR, both tried to unseat JPEG.  Neither got much of anywhere.

    Now Google is trying to dump yet another image format on us, “WebP”.  But really, it’s just a VP8 intra frame.  There are some obvious practical problems with this new image format in comparison to JPEG; it doesn’t even support all of JPEG’s features, let alone many of the much-wanted features JPEG was missing (alpha channel support, lossless support).  It only supports 4:2:0 chroma subsampling, while JPEG can handle 4:2:2 and 4:4:4.  Google doesn’t seem interested in adding any of these features either.

    But let’s get to the meat and see how these encoders stack up on compressing still images.  As I explained in my original analysis, VP8 has the advantage of H.264’s intra prediction, which is one of the primary reasons why H.264 has such an advantage in intra compression.  It only has i4x4 and i16x16 modes, not i8x8, so it’s not quite as fancy as H.264’s, but it comes close.

    The test files are all around 155KB; download them for the exact filesizes.  For all three, I did a binary search of quality levels to get the file sizes close.  For x264, I encoded with --tune stillimage --preset placebo.  For libvpx, I encoded with --best.  For JPEG, I encoded with ffmpeg, then applied jpgcrush, a lossless jpeg compressor.  I suspect there are better JPEG encoders out there than ffmpeg; if you have one, feel free to test it and post the results.  The source image is the 200th frame of Parkjoy, from derf’s page (fun fact: this video was shot here!  More info on the video here.).

    Files: (x264 [154KB], vp8 [155KB], jpg [156KB])

    Results (decoded to PNG): (x264, vp8, jpg)

    This seems rather embarrassing for libvpx.  Personally I think VP8 looks by far the worst of the bunch, despite JPEG’s blocking.  What’s going on here?  VP8 certainly has better entropy coding than JPEG does (by far!).  It has better intra prediction (JPEG has just DC prediction).  How could VP8 look worse?  Let’s investigate.

    VP8 uses a 4×4 transform, which tends to blur and lose more detail than JPEG’s 8×8 transform.  But that alone certainly isn’t enough to create such a dramatic difference.  Let’s investigate a hypothesis — that the problem is that libvpx is optimizing for PSNR and ignoring psychovisual considerations when encoding the image… I’ll encode with --tune psnr --preset placebo in x264, turning off all psy optimizations.  

    Files: (x264, optimized for PSNR [154KB]) [Note for the technical people: because adaptive quantization is off, to get the filesize on target I had to use a CQM here.]

    Results (decoded to PNG): (x264, optimized for PSNR)

    What a blur!  Only somewhat better than VP8, and still worse than JPEG.  And that’s using the same encoder and the same level of analysis — the only thing done differently is dropping the psy optimizations.  Thus we come back to the conclusion I’ve made over and over on this blog — the encoder matters more than the video format, and good psy optimizations are more important than anything else for compression.  libvpx, a much more powerful encoder than ffmpeg’s jpeg encoder, loses because it tries too hard to optimize for PSNR.

    These results raise an obvious question — is Google nuts?  I could understand the push for “WebP” if it was better than JPEG.  And sure, technically as a file format it is, and an encoder could be made for it that’s better than JPEG.  But note the word “could”.  Why announce it now when libvpx is still such an awful encoder?  You’d have to be nuts to try to replace JPEG with this blurry mess as-is.  Now, I don’t expect libvpx to be able to compete with x264, the best encoder in the world — but surely it should be able to beat an image format released in 1992?

    Earth to Google: make the encoder good first, then promote it as better than the alternatives.  The reverse doesn’t work quite as well.

    Addendum (added Oct. 2, 03:51):

    maikmerten gave me a Theora-encoded image to compare as well.  Here’s the PNG and the source (155KB).  And yes, that’s Theora 1.2 (Ptalarbvorm) beating VP8 handily.  Now that is embarrassing.  Guess what the main new feature of Ptalarbvorm is?  Psy optimizations…

    Addendum (added Apr. 20, 23:33):

    There’s a new webp encoder out, written from scratch by skal (available in libwebp).  It’s significantly better than libvpx — not like that says much — but it should probably beat JPEG much more readily now.  The encoder design is rather unique — it basically uses K-means for a large part of the encoding process.  It still loses to x264, but that was expected.

    (File [155KB]: http://x264.nl/developers/Dark_Shikari/imagecoding/output.ogv)
  • Announcing the world’s fastest VP8 decoder : ffvp8

    24 July 2010, by Dark Shikari (tags: VP8, ffmpeg, google, speed)

    Back when I originally reviewed VP8, I noted that the official decoder, libvpx, was rather slow.  While there was no particular reason that it should be much faster than a good H.264 decoder, it shouldn’t have been that much slower either!  So, I set out with Ronald Bultje and David Conrad to make a better one in FFmpeg.  This one would be community-developed and free from the beginning, rather than the proprietary code-dump that was libvpx.  A few weeks ago the decoder was complete enough to be bit-exact with libvpx, making it the first independent free implementation of a VP8 decoder.  Now, with the first round of optimizations complete, it should be ready for primetime.  I’ll go into some detail about the development process, but first, let’s get to the real meat of this post: the benchmarks.

    We tested on two 1080p clips: Parkjoy, a live-action 1080p clip, and the Sintel trailer, a CGI 1080p clip.  Testing was done using “time ffmpeg -vcodec {libvpx or vp8} -i input -vsync 0 -an -f null -”.  We all used the latest SVN FFmpeg at the time of this posting; the last revision optimizing the VP8 decoder was r24471.

    [Graphs: Parkjoy and Sintel decoding benchmarks; the raw numbers are in the appendix below.]

    As these benchmarks show, ffvp8 is clearly much faster than libvpx, particularly on 64-bit.  It’s even faster by a large margin on Atom, despite the fact that we haven’t even begun optimizing for it.  In many cases, ffvp8’s extra speed can make the difference between a video that plays and one that doesn’t, especially in modern browsers with software compositing engines taking up a lot of CPU time.  Want to get faster playback of VP8 videos?  The next versions of FFmpeg-based players, like VLC, will include ffvp8.  Want to get faster playback of WebM in your browser?  Lobby your browser developers to use ffvp8 instead of libvpx.  I expect Chrome to switch first, as they already use libavcodec for most of their playback system.

    Keep in mind ffvp8 is not “done” — we will continue to improve it and make it faster.  We still have a number of optimizations in the pipeline that aren’t committed yet.

    Developing ffvp8

    The initial challenge, primarily pioneered by David and Ronald, was constructing the core decoder and making it bit-exact to libvpx.  This was rather challenging, especially given the lack of a real spec.  Many parts of the spec were outright misleading and contradicted libvpx itself.  It didn’t help that the suite of official conformance tests didn’t even cover all the features used by the official encoder!  We’ve already started adding our own conformance tests to deal with this.  But I’ve complained enough in past posts about the lack of a spec; let’s get onto the gritty details.

    The next step was adding SIMD assembly for all of the important DSP functions.  VP8’s motion compensation and deblocking filter are by far the most CPU-intensive parts, much the same as in H.264.  Unlike H.264, the deblocking filter relies on a lot of internal saturation steps, which are free in SIMD but costly in a normal C implementation, making the plain C code even slower.  Of course, none of this is a particularly large problem; any sane video decoder has all this stuff in SIMD.

    I tutored Ronald in x86 SIMD and wrote most of the motion compensation, intra prediction, and some inverse transforms.  Ronald wrote the rest of the inverse transforms and a bit of the motion compensation.  He also did the most difficult part: the deblocking filter.  Deblocking filters are always a bit difficult because every one is different.  Motion compensation, by comparison, is usually very similar regardless of video format; a 6-tap filter is a 6-tap filter, and most of the variation going on is just the choice of numbers to multiply by.

    The biggest challenge in an SIMD deblocking filter is to avoid unpacking, that is, going from 8-bit to 16-bit.  Many operations in deblocking filters would naively appear to require more than 8-bit precision.  A simple example in the case of x86 is abs(a-b), where a and b are 8-bit unsigned integers.  The result of “a-b” requires a 9-bit signed integer (it can be anywhere from -255 to 255), so it can’t fit in 8 bits.  But this is quite possible to do without unpacking: (satsub(a,b) | satsub(b,a)), where “satsub” performs a saturating subtract on the two values.  If the difference is positive, satsub yields it; if it is negative, satsub yields zero.  ORing the two together yields the desired result.  This requires 4 ops on x86; unpacking would probably require at least 10, including the unpack and pack steps.
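
    In SSE2 intrinsics the trick reads something like this (a minimal sketch of the technique described above, not ffvp8’s actual assembly):

        #include <emmintrin.h>

        /* abs(a-b) on 16 unsigned bytes without unpacking to 16-bit: psubusb
         * saturates at zero, so each subtraction keeps only one direction of
         * the difference, and ORing them recombines the result. */
        static inline __m128i absdiff_u8(__m128i a, __m128i b)
        {
            __m128i d1 = _mm_subs_epu8(a, b);   /* max(a-b, 0) per byte */
            __m128i d2 = _mm_subs_epu8(b, a);   /* max(b-a, 0) per byte */
            return _mm_or_si128(d1, d2);
        }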

    After the SIMD came optimizing the C code, which still took a significant portion of the total runtime.  One of my biggest optimizations was adding aggressive “smart” prefetching to reduce cache misses.  ffvp8 prefetches the reference frames (PREVIOUS, GOLDEN, and ALTREF)… but only the ones which have been used reasonably often this frame.  This lets us prefetch everything we need without prefetching things that we probably won’t use.  libvpx very often encodes frames that almost never (but not quite never) use GOLDEN or ALTREF, so this optimization greatly reduces time spent prefetching in a lot of real videos.  There are of course countless other optimizations we made that are too long to list here as well, such as David’s entropy decoder optimizations.  I’d also like to thank Eli Friedman for his invaluable help in benchmarking a lot of these changes.
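
    As a rough sketch of that idea (the structure, field names, and threshold below are all made up; this is not ffvp8’s actual code), the logic of prefetching only the frequently used reference frames might look like:

        /* Hedged sketch: prefetch reference-frame data only for references that
         * have been used often enough in the current frame.  Everything here is
         * illustrative. */
        enum { REF_PREVIOUS = 0, REF_GOLDEN = 1, REF_ALTREF = 2 };

        typedef struct {
            const unsigned char *ref_plane[3];  /* luma planes of the refs */
            int ref_usage[3];                   /* uses so far this frame  */
        } dec_ctx_t;

        static void prefetch_refs(const dec_ctx_t *c, long offset)
        {
            for (int r = 0; r < 3; r++) {
                /* PREVIOUS is nearly always hit; GOLDEN/ALTREF only if hot */
                if (r == REF_PREVIOUS || c->ref_usage[r] > 32)
                    __builtin_prefetch(c->ref_plane[r] + offset);
            }
        }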

    What next?  Altivec (PPC) assembly is almost nonexistent, with the only functions being David’s motion compensation code.  NEON (ARM) is completely nonexistent: we’ll need that to be fast on mobile devices as well.  Of course, all this will come in due time — and as always — patches welcome!

    Appendix: the raw numbers

    Here are the raw numbers (in fps) for the graphs at the start of this post, with standard error values:

    Core i7 620QM (1.6GHz), Windows 7, 32-bit:
    Parkjoy ffvp8: 44.58 ± 0.44
    Parkjoy libvpx: 33.06 ± 0.23
    Sintel ffvp8: 74.26 ± 1.18
    Sintel libvpx: 56.11 ± 0.96

    Core i5 520M (2.4GHz), Linux, 64-bit:
    Parkjoy ffvp8: 68.29 ± 0.06
    Parkjoy libvpx: 41.06 ± 0.04
    Sintel ffvp8: 112.38 ± 0.37
    Sintel libvpx: 69.64 ± 0.09

    Core 2 T9300 (2.5GHz), Mac OS X 10.6.4, 64-bit:
    Parkjoy ffvp8: 54.09 ± 0.02
    Parkjoy libvpx: 33.68 ± 0.01
    Sintel ffvp8: 87.54 ± 0.03
    Sintel libvpx: 52.74 ± 0.04

    Core Duo (2GHz), Mac OS X 10.6.4, 32-bit:
    Parkjoy ffvp8: 21.31 ± 0.02
    Parkjoy libvpx: 17.96 ± 0.00
    Sintel ffvp8: 41.24 ± 0.01
    Sintel libvpx: 29.65 ± 0.02

    Atom N270 (1.6GHz), Linux, 32-bit:
    Parkjoy ffvp8: 15.29 ± 0.01
    Parkjoy libvpx: 12.46 ± 0.01
    Sintel ffvp8: 26.87 ± 0.05
    Sintel libvpx: 20.41 ± 0.02