Hardwarebug: Everything is broken

http://hardwarebug.org/

Articles published on the site

  • Nexus One

    19 March 2010, by Mans · Uncategorized

    I have had a Nexus One for about a week (thanks Google), and naturally I have an opinion or two about it.

    Hardware

    With the front side dominated by a touch-screen and a lone, round button, the Nexus One's appearance is similar to that of most contemporary smartphones. The reverse sports a 5 megapixel camera with LED flash, a Google logo, and a smaller HTC logo. Power button, volume control, and headphone and micro-USB sockets are found along the edges. It is with appreciation I note the lack of a front-facing camera; the silly idea of video calls is finally put to rest.

    Powering up the phone (I’m beginning to question the applicability of that word), I am immediately enamoured with the display. At 800×480 pixels, the AMOLED display is crystal-clear and easily viewable even in bright light. In a darker environment, the display automatically dims. The display does have one quirk in that the subpixel pattern doesn’t actually have a full RGB triplet for each pixel. The close-up photo below shows the pattern seen when displaying a solid white colour.

    Nexus One display close-up

    The result of this is that fine vertical lines, particularly red or blue ones, look a bit jagged. Most of the time this is not much of a problem, and I find it an acceptable compromise for the higher effective resolution it provides.

    Basic interaction

    The Android system is by now familiar, and the Nexus offers no surprises in basic usage. All the usual applications come pre-installed: browser, email, calendar, contacts, maps, and even voice calls. Many of the applications integrate with a Google account, which is nice. Calendar entries, map placemarks, etc. are automatically shared between desktop and mobile. Gone is the need for the bug-ridden custom synchronisation software with which mobile phones of the past were plagued.

    Launching applications is mostly speedy, and recently used apps are kept loaded as long as memory needs allow. Although this garbage-collection style of application management, where you are never quite sure whether an app is still running, takes a few moments of acclimatisation, it works reasonably well in day-to-day use. Most of the applications are well-behaved and save their data before terminating.

    Email

    Two email applications are included out of the box: one generic and one Gmail-only. As I do not use Gmail, I cannot comment on this application. The generic email client supports IMAP, but is rather limited in functionality. Fortunately, a much-enhanced version, K-9, is available for download. The main feature I find lacking here is threaded message view.

    The features, or lack thereof, in the email applications are not, however, of huge importance, as composing email, or any longer piece of text, is something one rather avoids on a system like this. The on-screen keyboard, while falling among the better of its kind, is still slow to use. Lack of tactile feedback means accidentally tapping the wrong key is easily done, and entering numbers or punctuation is an outright chore.

    Browser

    Whatever the Nexus lacks in email abilities, it makes up for with the browser. Surfing the web on a phone has never been this pleasant. Page rendering is quick, and zooming is fast and simple. Even pages not designed for mobile viewing are easy to read with smart reformatting almost entirely eliminating the sideways scrolling which hampered many a mobile browser of old.

    Calls and messaging

    Being a phone, the Nexus One is obviously able to make and receive calls, and it does so with ease. Entering a number or locating a stored contact are both straightforward operations. During a call, audio is clear and of adequate loudness, although I have yet to use the phone in really noisy surroundings.

    The other traditional task of a mobile phone, messaging, is also well-supported. There isn’t really much to say about this.

    Multimedia

    Having a bit of an interest in most things multimedia, I obviously tested the capabilities of the Nexus by throwing some assorted samples at it, revealing ample space for improvement. With video limited to H.264 and MPEG4, and the only supported audio codecs being AAC, MP3, Vorbis, and AMR, there are many files which will not play.

    To make matters worse, only selected combinations of audio and video will play together. Several video files I tested played without sound, yet when presented with the very same audio data alone, it was correctly decoded. As for container formats, it appears restricted to MP4/MOV, and Ogg (for Vorbis). AVI files are recognised as media files, but I was unable to find an AVI file which would play.

    With a device clearly capable of so much more, the poor multimedia support is nothing short of embarrassing.

    The Market

    Much of the hype surrounding Android revolves around the Market, Google’s virtual marketplace for app authors to sell or give away their creations. The thousands of available applications are broadly categorised, and a search function is available.

    The categorised lists are divided into free and paid sections, while search results, disappointingly, are not. To aid the decision, ratings and comments are displayed alongside the summary and screenshots of each application. Overall, the process of finding and installing an application is mostly painless. While it could certainly be improved, it could also have been much worse.

    The applications themselves are, as hinted above, beyond numerous. Sadly, quality does not quite match up to quantity. The vast majority of the apps are pointless, though occasionally mildly amusing, gimmicks of no practical value. The really good ones, and they do exist, are very hard to find unless one knows precisely what to look for.

    Battery

    Packing great performance into a pocket-size device comes with a price in battery life. The battery in the Nexus lasts a considerably shorter time than the one in my older, less feature-packed Nokia phone. To some extent this is probably a result of me actually using it a lot more, yet the end result is the same: more frequent recharging. I should probably get used to the idea of recharging the phone every other night.

    Verdict

    The Nexus One is a capable hardware platform running an OS with plenty of potential. The applications are still somewhat lacking (or very hard to find), although the basic features work reasonably well. Hopefully future Android updates will see more and better core applications integrated, and I imagine that over time, I will find third-party apps to solve my problems in a way I like. I am not putting this phone on the shelf just yet.

  • Ogg objections

    3 March 2010, by Mans · Multimedia

    The Ogg container format is being promoted by the Xiph Foundation for use with its Vorbis and Theora codecs. Unfortunately, a number of technical shortcomings in the format render it ill-suited to most, if not all, use cases. This article examines the most severe of these flaws.

    Overview of Ogg

    The basic unit in an Ogg stream is the page, consisting of a header followed by one or more packets from a single elementary stream. A page can contain up to 255 packets, and a packet can span any number of pages. The following table describes the page header.

    Field                    Size (bits)  Description
    capture_pattern          32           magic number “OggS”
    version                  8            always zero
    flags                    8            continuation, BOS, and EOS flags
    granule_position         64           abstract timestamp
    bitstream_serial_number  32           elementary stream number
    page_sequence_number     32           incremented by 1 each page
    checksum                 32           CRC of entire page
    page_segments            8            length of segment_table
    segment_table            variable     list of packet sizes
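
    As a concrete illustration, here is a minimal C sketch of how these fields could be read from a byte buffer (multi-byte fields are little-endian). The struct and function names are my own, not taken from libogg or any other implementation.

        #include <stdint.h>
        #include <string.h>

        /* Fixed part of an Ogg page header, per the table above. */
        struct ogg_page_header {
            uint8_t        version;
            uint8_t        flags;
            uint64_t       granule_position;        /* codec-defined timestamp */
            uint32_t       bitstream_serial_number; /* elementary stream number */
            uint32_t       page_sequence_number;
            uint32_t       checksum;                /* CRC of the entire page */
            uint8_t        page_segments;           /* entries in segment_table */
            const uint8_t *segment_table;           /* points into the input buffer */
        };

        static uint32_t rd32le(const uint8_t *p)
        {
            return p[0] | p[1] << 8 | (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
        }

        static uint64_t rd64le(const uint8_t *p)
        {
            return rd32le(p) | (uint64_t)rd32le(p + 4) << 32;
        }

        /* Parse one page header from buf, returning the header size in bytes
         * (27 + page_segments) or -1 if the buffer does not start with a page. */
        static int parse_page_header(const uint8_t *buf, size_t len,
                                     struct ogg_page_header *h)
        {
            if (len < 27 || memcmp(buf, "OggS", 4))
                return -1;
            h->version                 = buf[4];
            h->flags                   = buf[5];
            h->granule_position        = rd64le(buf + 6);
            h->bitstream_serial_number = rd32le(buf + 14);
            h->page_sequence_number    = rd32le(buf + 18);
            h->checksum                = rd32le(buf + 22);
            h->page_segments           = buf[26];
            if (len < 27u + h->page_segments)
                return -1;
            h->segment_table = buf + 27;
            return 27 + h->page_segments;
        }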

    Elementary stream types are identified by looking at the payload of the first few pages, which contain any setup data required by the decoders. For full details, see the official format specification.

    Generality

    Ogg, legend has it, was designed to be a general-purpose container format. To most multimedia developers, a general-purpose format is one in which encoded data of any type can be encapsulated with a minimum of effort.

    The Ogg format defined by the specification does not fit this description. For every format one wishes to use with Ogg, a complex mapping must first be defined. This mapping defines how to identify a codec, how to extract setup data, and even how timestamps are to be interpreted. All this is done differently for every codec. To correctly parse an Ogg stream, every such mapping ever defined must be known.

    Under this premise, a centralised repository of codec mappings would seem like a sensible idea, but alas, no such thing exists. It is simply impossible to obtain an exhaustive list of defined mappings, which makes the task of creating a complete implementation somewhat daunting.

    One brave soul, Tobias Waldvogel, created a mapping, OGM, capable of storing any Microsoft AVI compatible codec data in Ogg files. This format saw some use in the wild, but was frowned upon by Xiph, and it was eventually displaced by other formats.

    True generality is evidently not to be found with the Ogg format.

    A good example of a general-purpose format is Matroska. This container can trivially accommodate any codec; all it requires is a unique string to identify the codec. For codecs requiring setup data, a standard location for this is provided in the container. Furthermore, an official list of codec identifiers is maintained, meaning all information required to fully support Matroska files is available from one place.

    Matroska also has probably the greatest advantage of all: it is in active, wide-spread use. Historically, standards derived from existing practice have proven more successful than those created by a design committee.

    Overhead

    When designing a container format, one important consideration is that of overhead, i.e. the extra space required in addition to the elementary stream data being combined. For any given container, the overhead can be divided into a fixed part, independent of the total file size, and a variable part growing with increasing file size. The fixed overhead is not of much concern, its relative contribution being negligible for typical file sizes.

    The variable overhead in the Ogg format comes from the page headers, mostly from the segment_table field. This field uses a most peculiar encoding, somewhat reminiscent of Roman numerals. In Roman times, numbers were written as a sequence of symbols, each representing a value, the combined value being the sum of the constituent values.

    The segment_table field lists the sizes of all packets in the page. Each size is coded as a run of bytes with the value 255, terminated by a final byte with a smaller value; the packet size is the sum of all these bytes. Any strictly additive encoding such as this has the distinct drawback that the coded length grows linearly with the encoded value. A value of 5000, a reasonable packet size for video of moderate bitrate, requires no less than 20 bytes to encode.
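
    To see what this costs in practice, here is a small C sketch (the helper is my own, not taken from any Ogg implementation) computing the number of lacing bytes needed for a few packet sizes. A 5000-byte packet, for instance, is coded as nineteen bytes of value 255 followed by a final byte of 155.

        #include <stdio.h>

        /* Lacing bytes needed to code a packet of n bytes:
         * n/255 bytes of value 255 plus one terminating byte < 255. */
        static unsigned lacing_bytes(unsigned n)
        {
            return n / 255 + 1;
        }

        int main(void)
        {
            unsigned sizes[] = { 100, 400, 5000, 64000 };
            for (unsigned i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("%5u-byte packet -> %3u lacing bytes (%.2f%% of the payload)\n",
                       sizes[i], lacing_bytes(sizes[i]),
                       100.0 * lacing_bytes(sizes[i]) / sizes[i]);
            return 0;
        }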

    On top of this we have the 27-byte page header which, although paling in comparison to the packet size encoding, is still much larger than necessary. Starting at the top of the list:

    • The version field could be disposed of, a single-bit marker being adequate to separate this first version from hypothetical future versions. One of the unused positions in the flags field could be used for this purpose.
    • A 64-bit granule_position is complete overkill. 32 bits would be more than enough for the vast majority of use cases. In extreme cases, a one-bit flag could be used to signal an extended timestamp field.
    • 32-bit elementary stream number? Are they anticipating files with four billion elementary streams? An eight-bit field, if not smaller, would seem more appropriate here.
    • The 32-bit page_sequence_number is inexplicable. The intent is to allow detection of page loss due to transmission errors. ISO MPEG-TS uses a 4-bit counter per 188-byte packet for this purpose, and that format is used where packet loss actually happens, unlike any use of Ogg to date.
    • A mandatory 32-bit checksum is nothing but a waste of space when using a reliable storage/transmission medium. Again, a flag could be used to signal the presence of an optional checksum field.

    With the changes suggested above, the page header would shrink from 27 bytes to 12 bytes in size.
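
    Purely as a thought experiment, one possible layout along these lines (the field choices are mine, and nothing like this has been specified anywhere) does come out at 12 bytes:

        #include <stdint.h>

        /*
         * Hypothetical compacted page header, 12 bytes before the segment table:
         *   4 bytes  capture pattern
         *   1 byte   flags (version bit, continuation, BOS/EOS,
         *            "extended granule_position", "checksum present")
         *   4 bytes  granule_position (32-bit; a wider form signalled via flags)
         *   1 byte   elementary stream number
         *   1 byte   page sequence counter (modulo 256, for loss detection)
         *   1 byte   page_segments
         * An optional checksum, when flagged, would follow the segment table.
         */
        struct compact_page_header {
            uint8_t  capture_pattern[4];
            uint8_t  flags;
            uint32_t granule_position;
            uint8_t  stream_number;
            uint8_t  sequence;
            uint8_t  page_segments;
        };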

    We thus see that in an Ogg file, the packet size fields alone contribute an overhead of at least 1/255, or approximately 0.4%. This is a hard lower bound, not attainable even in theory; in reality the overhead tends to be closer to 1%.

    Contrast this with the ISO MP4 file format, which can easily achieve an overhead of less than 0.05% with a 1 Mbps elementary stream.

    Latency

    In many applications end-to-end latency is an important factor. Examples include video conferencing, telephony, live sports events, interactive gaming, etc. With the codec layer contributing as little as 10 milliseconds of latency, the amount imposed by the container becomes an important factor.

    Latency in an Ogg-based system is introduced at both the sender and the receiver. Since the page header depends on the entire contents of the page (packet sizes and checksum), a full page of packets must be buffered by the sender before a single bit can be transmitted. This sets a lower bound for the sending latency at the duration of a page.

    On the receiving side, playback cannot commence until packets from all elementary streams are available. Hence, with two streams (audio and video) interleaved at the page level, playback is delayed by at least one page duration (two if checksums are verified).

    Taking both send and receive latencies into account, the minimum end-to-end latency for Ogg is thus twice the duration of a page, triple if strict checksum verification is required. If page durations are variable, the maximum value must be used in order to avoid buffer underflows.

    Minimum latency is clearly achieved by minimising the page duration, which in turn implies sending only one packet per page. This is where the size of the page header becomes important. The header for a single-packet page is 27 + packet_size/255 bytes in size. For a 1 Mbps video stream at 25 fps this gives an overhead of approximately 1%. With a typical audio packet size of 400 bytes, the overhead becomes a staggering 7%. The average overhead for a multiplex of these two streams is 1.4%.
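
    These figures are easy to reproduce. A quick sketch, assuming one packet per page, 5000-byte video packets (1 Mbps at 25 fps) and 400-byte audio packets as above:

        #include <stdio.h>

        /* Ogg framing bytes for a page carrying a single packet of n bytes:
         * the 27-byte fixed header plus one lacing byte per started 255 bytes. */
        static double single_packet_overhead(unsigned n)
        {
            unsigned header = 27 + n / 255 + 1;
            return 100.0 * header / n;
        }

        int main(void)
        {
            printf("video, 5000-byte packets: %.1f%% overhead\n",
                   single_packet_overhead(5000)); /* roughly 1% */
            printf("audio,  400-byte packets: %.1f%% overhead\n",
                   single_packet_overhead(400));  /* roughly 7% */
            return 0;
        }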

    As it stands, the Ogg format is clearly not a good choice for a low-latency application. The key to low latency is small packets and fine-grained interleaving of streams, and although Ogg can provide both of these, by sending a single packet per page, the price in overhead is simply too high.

    ISO MPEG-PS has an overhead of 9 bytes on most packets (a 5-byte timestamp is added a few times per second), and Microsoft’s ASF has a 12-byte packet header. My suggestions for compacting the Ogg page header would bring it in line with these formats.

    Random access

    Any general-purpose container format needs to allow random access for direct seeking to any given position in the file. Despite this goal being explicitly mentioned in the Ogg specification, the format only allows the most crude of random access methods.

    While many container formats include an index allowing a time to be directly translated into an offset into the file, Ogg has nothing of this kind, the stated rationale for the omission being that it would require two-pass multiplexing, the second pass creating the index. This is obviously not true; the index could simply be written at the end of the file. Those objecting that this index would be unavailable in a streaming scenario are forgetting that seeking is impossible there regardless.

    The method for seeking suggested by the Ogg documentation is to perform a binary search on the file, after each file-level seek operation scanning for a page header, extracting the timestamp, and comparing it to the desired position. When the elementary stream encoding allows only certain packets as random access points (video key frames), a second search will have to be performed to locate the entry point closest to the desired time. In a large file (sizes upwards of 10 GB are common), 50 seeks might be required to find the correct position.

    A typical hard drive has an average seek time of roughly 10 ms, giving a total time for the seek operation of around 500 ms, an annoyingly long time. On a slow medium, such as an optical disc or files served over a network, the times are orders of magnitude longer.
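
    A back-of-the-envelope calculation (my own arithmetic, using the figures from the text) shows where numbers of this order come from:

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double file_size = 10e9;   /* a 10 GB file, as in the text   */
            double seek_time = 0.010;  /* ~10 ms average hard-drive seek */

            /* Bisecting down to byte granularity... */
            double bisect_steps = ceil(log2(file_size));  /* about 34 */
            /* ...plus a second search for the nearest key frame; the 16 here
             * is an assumption chosen to land near the 50-seek figure. */
            double total_seeks  = bisect_steps + 16;

            printf("%.0f bisection steps, ~%.0f seeks, ~%.0f ms on a hard drive\n",
                   bisect_steps, total_seeks, total_seeks * seek_time * 1000);
            return 0;
        }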

    A factor further complicating the seeking process is the possibility of header emulation within the elementary stream data. To safeguard against this, one has to read the entire page and verify the checksum. If the storage medium cannot provide data much faster than during normal playback, this provides yet another substantial delay towards finishing the seeking operation. This too applies to both network delivery and optical discs.

    Although optical disc usage is perhaps in decline today, one should bear in mind that the Ogg format was designed at a time when CDs and DVDs were rapidly gaining ground, and network-based storage is most certainly on the rise.

    The final nail in the coffin of seeking is the codec-dependent timestamp format. At each step in the seeking process, the timestamp parsing specified by the codec mapping corresponding to the current page must be invoked. If the mapping is not known, the best one can do is skip pages until one with a known mapping is found. This delays the seeking and complicates the implementation, both bad things.

    Timestamps

    A problem as old as multimedia itself is that of synchronising multiple elementary streams (e.g. audio and video) during playback; badly synchronised A/V is highly unpleasant to view. By the time Ogg was invented, solutions to this problem were long since explored and well understood. The key to proper synchronisation lies in tagging elementary stream packets with timestamps, packets carrying the same timestamp being intended for simultaneous presentation. The concept is as simple as it seems, so it is astonishing to see the amount of complexity with which the Ogg designers managed to imbue it. So bizarre is it that I have devoted an entire article to the topic, and will not cover it further here.

    Complexity

    Video and audio decoding are time-consuming tasks, so containers should be designed to minimise extra processing required. With the data volumes involved, even an act as simple as copying a packet of compressed data can have a significant impact. Once again, however, Ogg lets us down. Despite the brevity of the specification, the format is remarkably complicated to parse properly.

    The unusual and inefficient encoding of the packet sizes limits the page size to somewhat less than 64 kB. To still allow individual packets larger than this limit, it was decided to allow packets spanning multiple pages, a decision with unfortunate implications. A page-spanning packet as it arrives in the Ogg stream will be discontiguous in memory, a situation most decoders are unable to handle, and reassembly, i.e. copying, is required.
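
    In other words, a demuxer ends up doing something like the following (a simplified sketch of the idea, not code from any real demuxer; it assembles only the first packet of a page and omits bounds checking): segments are copied into a separate buffer, and the packet is complete only once a lacing value below 255 has been consumed.

        #include <stdint.h>
        #include <string.h>

        /* Append this page's leading segments to a packet being assembled.
         * Returns 1 when the packet ends within the page (a lacing value
         * below 255 was consumed), 0 if it continues on the next page. */
        static int append_page_segments(const uint8_t *lacing, unsigned nsegs,
                                        const uint8_t *payload,
                                        uint8_t *pkt, size_t *pkt_len)
        {
            for (unsigned i = 0; i < nsegs; i++) {
                memcpy(pkt + *pkt_len, payload, lacing[i]); /* the extra copy */
                *pkt_len += lacing[i];
                payload  += lacing[i];
                if (lacing[i] < 255)
                    return 1;   /* packet complete */
            }
            return 0;           /* packet spans into the following page */
        }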

    The knowledgeable reader may at this point remark that the MPEG-TS format also splits packets into pieces requiring reassembly before decoding. There is, however, a significant difference there. MPEG-TS was designed for hardware demultiplexing feeding directly into hardware decoders. In such an implementation the fragmentation is not a problem. Rather, the fine-grained interleaving is a feature allowing smaller on-chip buffers.

    Buffering is also an area in which Ogg suffers. To keep the overhead down, pages must be made as large as practically possible, and page size translates directly into demultiplexer buffer size. Playback of a file with two elementary streams thus requires 128 kB of buffer space. On a modern PC this is perhaps nothing to be concerned about, but in a small embedded system, e.g. a portable media player, it can be relevant.

    In addition to the above, a number of other issues, some of them minor, others more severe, make Ogg processing a painful experience. A selection follows:

    • 32-bit random elementary stream identifiers mean a simple table-lookup cannot be used. Instead the list of streams must be searched for a match. While trivial to do in software, it is still annoying, and a hardware demultiplexer would be significantly more complicated than with a smaller identifier.
    • Semantically ambiguous streams are possible. For example, the continuation flag (bit 1) may conflict with continuation (or lack thereof) implied by the segment table on the preceding page. Such invalid files have been spotted in the wild.
    • Concatenating independent Ogg streams forms a valid stream. While finding a use case for this strange feature is difficult, an implementation must of course be prepared to encounter such streams. Detecting and dealing with these adds pointless complexity.
    • Unusual terminology: inventing new terms for well-known concepts is confusing for the developer trying to understand the format in relation to others. A few examples:
      Ogg name           Usual name
      logical bitstream  elementary stream
      grouping           multiplexing
      lacing value       packet size (approximately)
      segment            imaginary element serving no real purpose
      granule position   timestamp

    Final words

    We have found the Ogg format to be a dubious choice in just about every situation. Why then do certain organisations and individuals persist in promoting it with such ferocity?

    When challenged, three types of reaction are characteristic of the Ogg campaigners.

    On occasion, these people will assume an apologetic tone, explaining how Ogg was only ever designed for simple audio-only streams (ignoring that it is as bad for these as for anything else), and this is no doubt true. Why then, I ask again, do they continue to tout Ogg as the one-size-fits-all solution they have already admitted it is not?

    More commonly, the Ogg proponents will respond with hand-waving arguments best summarised as “Ogg isn’t bad, it’s just different.” My reply to this assertion is twofold:

    • Being too different is bad. We live in a world where multimedia files come in many varieties, and a decent media player will need to handle the majority of them. Fortunately, most multimedia file formats share some basic traits, and they can easily be processed in the same general framework, the specifics being taken care of at the input stage. A format deviating too far from the standard model becomes problematic.
    • Ogg is bad. When every angle of examination reveals serious flaws, bad is the only fitting description.

    The third reaction bypasses all technical analysis: “Ogg is patent-free,” a claim I am not qualified to discuss directly. Assuming it is true, it still does not alter the fact that Ogg is a bad format. Being free from patents does not magically make Ogg a good choice as a file format. If all the standard formats are indeed covered by patents, the only proper solution is to design a new, good format which is not, this time hopefully avoiding the old mistakes.

  • Cat pictures

    21 February 2010, by Mans · Uncategorized

    It has come to my attention that this blog suffers a complete lack of the single most important thing on the Internet: cat pictures. Here is a feeble attempt to remedy this most shocking of shortfalls.



  • 1080p video on Beagle

    10 February 2010, by Mans · Multimedia

    FOSDEM 2010 is over, and I’d like to call it a success for FFmpeg. The eleven-strong delegation showed the stunned audience a smashing demo featuring a Beagle-powered video wall. It looked like this:

    The people who pulled this off look like this:

    The FFmpeg team

  • IJG swings again, and misses

    1 February 2010, by Mans · Multimedia

    Earlier this month the IJG unleashed version 8 of its ubiquitous libjpeg library on the world. Eager to try out the “major breakthrough in image coding technology” promised in the README file accompanying v7, I downloaded the release. A glance at the README file suggests something major indeed is afoot:

    Version 8.0 is the first release of a new generation JPEG standard to overcome the limitations of the original JPEG specification.

    The text also hints at the existence of a document detailing these marvellous new features, and a Google search later a copy has found its way onto my monitor. As I read, however, my state of mind shifts from an initial excited curiosity, through bewilderment and disbelief, finally arriving at pure merriment.

    Already on the first page it becomes clear no new JPEG standard in fact exists. All we have is an unsolicited proposal sent to the ITU-T by members of the IJG. Realising that even the most brilliant of inventions must start off as mere proposals, I carry on reading. The summary informs me that I am about to witness the introduction of three extensions to the T.81 JPEG format:

    1. An alternative coefficient scan sequence for DCT coefficient serialization
    2. A SmartScale extension in the Start-Of-Scan (SOS) marker segment
    3. A Frame Offset definition in or in addition to the Start-Of-Frame (SOF) marker segment

    Together these three extensions will, it is promised, “bring DCT based JPEG back to the forefront of state-of-the-art image coding technologies.”

    Alternative scan

    The first of the proposed extensions introduces an alternative DCT coefficient scan sequence to be used in place of the zigzag scan employed in most block transform based codecs.

    Alternative scan sequence

    The advantage of this scan would be that, combined with the existing progressive mode, it simplifies the decoding of an initial low-resolution image which is enhanced through subsequent passes. The author of the document calls this scheme “image-pyramid/hierarchical multi-resolution coding.” It is not immediately obvious to me how this constitutes even a small advance in image coding technology.

    At this point I am beginning to suspect that our friend from the IJG has been trapped in a half-world between interlaced GIF images transmitted down noisy phone lines and today’s inferno of SVC, MVC, and other buzzwords.

    (Not so) SmartScale

    Disguised behind this camel-cased moniker we encounter a method which, we are told, will provide better image quality at high compression ratios. The author has combined two well-known (to us) properties in a (to him) clever way.

    The first property concerns the perceived impact of different types of distortion in an image. When encoding with JPEG, as the quantiser is increased, the decoded image becomes ever more blocky. At a certain point, a better subjective visual quality can be achieved by down-sampling the image before encoding it, thus allowing a lower quantiser to be used. If the decoded image is scaled back up to the original size, the unpleasant, blocky appearance is replaced with a smooth blur.

    The second property belongs to the DCT where, as we all know, the top-left (DC) coefficient is the average of the entire block, its neighbours represent the lowest frequency components etc. A top-left-aligned subset of the coefficient block thus represents a low-resolution version of the full block in the spatial domain.
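
    A toy one-dimensional illustration of that property (my own throwaway code, nothing from libjpeg): take the 8-point DCT of a row of pixels, keep only the four lowest-frequency coefficients, rescale them, and apply a 4-point inverse. The result is, approximately, a half-resolution version of the row, as the comparison with plain pairwise averages in the output shows.

        #include <math.h>
        #include <stdio.h>

        #define PI 3.14159265358979323846

        /* Orthonormal DCT-II of x[0..n-1]. */
        static void dct(const double *x, double *X, int n)
        {
            for (int k = 0; k < n; k++) {
                double s = 0;
                for (int i = 0; i < n; i++)
                    s += x[i] * cos(PI * (2 * i + 1) * k / (2 * n));
                X[k] = s * sqrt((k ? 2.0 : 1.0) / n);
            }
        }

        /* Orthonormal inverse (DCT-III). */
        static void idct(const double *X, double *x, int n)
        {
            for (int i = 0; i < n; i++) {
                double s = 0;
                for (int k = 0; k < n; k++)
                    s += X[k] * sqrt((k ? 2.0 : 1.0) / n)
                              * cos(PI * (2 * i + 1) * k / (2 * n));
                x[i] = s;
            }
        }

        int main(void)
        {
            double x[8] = { 16, 24, 40, 64, 96, 128, 152, 168 }; /* a smooth row */
            double X[8], Y[4], y[4];

            dct(x, X, 8);

            /* Keep the 4 lowest-frequency coefficients, rescale by sqrt(4/8),
             * and inverse-transform at length 4: a 2:1 downscale done entirely
             * in the DCT domain. */
            for (int k = 0; k < 4; k++)
                Y[k] = X[k] * sqrt(4.0 / 8.0);
            idct(Y, y, 4);

            for (int i = 0; i < 4; i++)
                printf("dct-scaled %6.1f   pairwise average %6.1f\n",
                       y[i], (x[2 * i] + x[2 * i + 1]) / 2);
            return 0;
        }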

    In his flash of genius, our hero came up with the idea of using the DCT for down-scaling the image. Unfortunately, he appears to possess precious little knowledge of sampling theory and human visual perception. Any block-based resampling will inevitably produce sharp artefacts along the block edges. The human visual system is particularly sensitive to sharp edges, so this is one of the most unwanted types of distortion in an encoded image.

    Despite the obvious flaws in this approach, I decided to give it a try. After all, the software is already written, allowing downscaling by factors of 8/8 through 8/16.

    Using a 1280×720 test image, I encoded it with each of the nine scaling options, from unity to half size, each time adjusting the quality parameter for a final encoded file size of no more than 200000 bytes. The following table presents the encoded file size, the libjpeg quality parameter used, and the SSIM metric for each of the images.

    Scale  Size (bytes)  Quality  SSIM
    8/8    198462        59       0.940
    8/9    196337        70       0.936
    8/10   196133        79       0.934
    8/11   197179        84       0.927
    8/12   193872        89       0.915
    8/13   197153        92       0.914
    8/14   188334        94       0.899
    8/15   198911        96       0.886
    8/16   197190        97       0.869

    Although the smaller images allowed a higher quality setting to be used, the SSIM value drops significantly. Numbers may of course be misleading, but the images below speak for themselves. These are cut-outs from the full image, the original on the left, unscaled JPEG-compressed in the middle, and JPEG with 8/16 scaling to the right.

    Looking at these images, I do not need to hesitate before picking the JPEG variant I prefer.

    Frame offset

    The third and final extension proposed is quite simple and also quite pointless: a top-left cropping to be applied to the decoded image. The alleged utility of this feature would be to enable lossless cropping of a JPEG image. In a typical image workflow, however, JPEG is only used for the final published version, so the need for this feature appears quite far-fetched.

    The grand finale

    Throughout the text, the author makes references to “the fundamental DCT property for image representation.” In his own words:

    This property was found by the author during implementation of the new DCT scaling features and is after his belief one of the most important discoveries in digital image coding after releasing the JPEG standard in 1992.

    The secret is to be revealed in an annex to the main text. This annex quotes in full a post by the author to the comp.dsp Usenet group in a thread with the subject “why DCT.” Reading the entire thread proves quite amusing. A few excerpts follow.

    The actual reason is much simpler, and therefore apparently very difficult to recognize by complicated-thinking people.

    Here is the explanation:

    What are people doing when they have a bunch of images and want a quick preview? They use thumbnails! What are thumbnails? Thumbnails are small downscaled versions of the original image! If you want more details of the image, you can zoom in stepwise by enlarging (upscaling) the image.

    So with proper understanding of the fundamental DCT property, the MPEG folks could make their videos more scalable, but, as in the case of JPEG, they are unable to recognize this simple but basic property, unfortunately, and pursue rather inferior approaches in actual developments.

    These are just phrases, and they don’t explain anything. But this is typical for the current state in this field: The relevant people ignore and deny the true reasons, and thus they turn in a circle and no progress is being made.

    However, there are dark forces in action today which ignore and deny any fruitful advances in this field. That is the reason that we didn’t see any progress in JPEG for more than a decade, and as long as those forces dominate, we will see more confusion and less enlightenment. The truth is always simple, and the DCT *is* simple, but this fact is suppressed by established people who don’t want to lose their dubious position.

    I believe a trip to the Total Perspective Vortex may be in order. Perhaps his tin-foil hat will save him.