
Playing Video on a Sega Dreamcast
9 March 2011, by Multimedia Mike — Sega Dreamcast

Here’s an honest engineering question: If you were tasked to make compressed video play back on a Sega Dreamcast video game console, what video format would you choose? Personally, I would choose RoQ, the format invented for The 11th Hour computer game and later used in Quake III and other games derived from the same engine. This post explains my reasoning.
Video Background
One of the things I wanted to do when I procured a used Sega Dreamcast back in 2001 was turn it into a set-top video playback unit. This is something that a lot of people tried to do, apparently, to varying degrees of success. Interest would wane in a few years as it became easier and easier to crack an Xbox and install XBMC. The Xbox was much better suited to playing codecs that were getting big at the time, most notably MPEG-4 part 2 video (DivX/XviD).

The Dreamcast, while quite capable when it was released in 1999, was not very well-equipped to deal with an MPEG-type codec. I have recently learned that there are other hackers out there on the internet who are still trying to get the most out of this system. I was contacted for advice about how to make Theora perform better on the Dreamcast.
Interesting thing about consoles and codecs: Since you are necessarily distributing code along with your data, you have far more freedom to use whatever codecs you want for your audio and video data. This is why Vorbis and even Theora have seen quite a bit of use in video games, "internet standards" be darned. Thus, when I realized this application had no hard and fast requirement to use Theora, and that it could use any codec that fit the platform, my mind started churning. When I was programming the DC 10 years ago, I didn’t have access to the same wealth of multimedia knowledge that is currently available.

Requirements Gathering
What do we need here?
- Codec needs to run on the Sega Dreamcast; this eliminates codecs for which only binary decoder implementations are available
- Must decode 320x240 video at 30 fps; higher resolutions up to 640x480 would be desirable
- Must deliver decent quality at 12X optical read speeds (DC drive speed)
- There must be some decent, preferably free, encoder readily available; speed of encoding, however, is not important; i.e., "take as long as you need, encoder"

Theora was the go-to codec because it’s just commonly known as "the free, open source video codec". But clearly it’s not suitable for, well... any purpose, really (sorry, easy target; OW! stop throwing things!). VP8/WebM — Theora’s heir apparent — would not qualify either, as my prior experiments have already demonstrated.
Candidates
What did the big boys use for video on the Dreamcast? A lot of games relied on CRI’s Sofdec middleware which was MPEG-1 video and a custom ADPCM format. I don’t know if I have ever seen DC games that used MPEG-1 video at a higher resolution than 320x240 (though I have not searched exhaustively). The fact that CRI used a custom ADPCM format for this application may indicate that there wasn’t enough CPU power left over to decode a perceptual, transform-based audio codec alongside the 320x240 video.

A few other DC games used 4X Technologies’ 4XM format. The most notable licensee was Alone in the Dark: The New Nightmare (DC version only; PC version used Bink). This codec was DCT-based but incorporated 16-bit RGB colorspace into its design, presumably to optimize for applications like game consoles that couldn’t directly handle planar YUV. AITD:TNN’s videos were 640x360, a marked improvement over the typical Sofdec fare. I was about to write off 4XM as a contender due to lack of encoder, but the encoding tools are preserved on our samples site. A few other issues, though: the FFmpeg decoder doesn’t seem to work correctly as of this writing (and nobody has noticed yet, even though it’s tested via FATE).
What ideas do I have ? Right off the bat, I’m thinking vector quantizer (VQ). Vector quantizers are notoriously slow to compress but are blazingly fast to decompress which is why they were popular in the early days of video compression. First, there’s Cinepak. I fear that might be too simple for this application. Plus, I don’t know if existing (binary-only) compressors are very decent. It seems that they only ever had to handle small videos and I’ve heard that they can really fall over if anything more is demanded of them.
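To see why the decode side is so cheap, here is a toy sketch of vector-quantizer playback (generic VQ with a 2x2 codebook, not RoQ’s or Cinepak’s actual bitstream format): reconstructing a block is just an index lookup and a copy, with no per-pixel math.

#include <stdint.h>
#include <string.h>

/* Toy vector-quantizer "decode": each 2x2 block of the frame is
   represented by one byte indexing into a codebook of 256 2x2 cells.
   Decoding is a table lookup plus a copy, which is why VQ playback is
   so cheap on weak CPUs. (Generic VQ, not RoQ's real chunk format.) */
void vq_decode_2x2(const uint8_t *indices,            /* one per 2x2 block */
                   const uint8_t codebook[256][2][2], /* 2x2 pixel cells   */
                   uint8_t *frame, int width, int height)
{
    for (int by = 0; by < height / 2; by++) {
        for (int bx = 0; bx < width / 2; bx++) {
            const uint8_t (*cell)[2] = codebook[*indices++];
            memcpy(&frame[(by * 2)     * width + bx * 2], cell[0], 2);
            memcpy(&frame[(by * 2 + 1) * width + bx * 2], cell[1], 2);
        }
    }
}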
Sorenson Video 1 is another contender. FFmpeg has an encoder (which some allege is better than Sorenson’s original compressor). However, I fear that the wonky algorithm and colorspace might not mesh well with the Dreamcast.
My thinking quickly converged on RoQ. This was designed to run fullscreen (640x480) video on i486-class hardware. While RoQ fundamentally operates in a YUV colorspace, it’s trivial to convert it to any other colorspace during decoding and the image will be rendered in that colorspace. Plus, there are open source encoders available for the format (namely, several versions of Eric Lasota’s Switchblade encoder, one of which lives natively in FFmpeg), as well as the original proprietary encoder.
Which Library ?
There are several code choices here: FFmpeg (LGPL), Switchblade (GPL), and the original Quake 3 source code (GPL). There is one more option that I think might be easiest, which is the decoder Dr. Tim created when he reverse engineered the format in the first place. That has a very liberal "do whatever you like, but be nice and give me credit" license (probably qualifies as BSD).

This code is no longer at its original home but the Wayback Machine still had a copy, which I have now mirrored (idroq.tar.gz).
Adaptation
Dr. Tim’s code still compiles and runs great on Linux (64-bit!) with SDL output. I would like to get it ported to the Dreamcast using the same SDL output, which KallistiOS supports. Then, there is the matter of fixing the longstanding chroma bug in the original sample decoder (described here). The decoder also needs to be modified to natively render RGB565 data, as that will work best with the DC’s graphics hardware.

After making the code work, I want to profile it and test whether it can handle full-frame 640x480 playback at 30 frames/second. I will need to contrive a sample to achieve this.
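For a sense of what native RGB565 output involves, here is a small per-pixel conversion sketch using the common BT.601 constants (my own illustration; an actual decoder would likely use fixed-point math and whatever colorspace details RoQ requires):

#include <stdint.h>

/* Clamp to the 8-bit range. */
static int clamp255(int x) { return x < 0 ? 0 : (x > 255 ? 255 : x); }

/* Convert one Y/Cb/Cr sample to a packed RGB565 pixel (5 bits red,
   6 bits green, 5 bits blue), the layout the DC's graphics hardware
   handles directly. Coefficients are the usual BT.601 ones; the real
   decoder's conversion may differ. */
uint16_t ycbcr_to_rgb565(int y, int cb, int cr)
{
    int r = clamp255((int)(y + 1.402 * (cr - 128)));
    int g = clamp255((int)(y - 0.344 * (cb - 128) - 0.714 * (cr - 128)));
    int b = clamp255((int)(y + 1.772 * (cb - 128)));

    return (uint16_t)(((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3));
}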
Unfortunately, things went off the rails pretty quickly when I tried to get the RoQ decoder ported to DC/KOS. It looks like there’s a bug in KallistiOS’s minimalistic standard C library, or at least a discrepancy with my desktop Linux system. When you read to the end of a file and then seek backwards to someplace that isn’t the end, is the file still in EOF state?

According to my Linux desktop:

open file;           feof() = 0
seek to end;         feof() = 0
read one more byte;  feof() = 1
seek back to start;  feof() = 0

According to KallistiOS:

open file;           feof() = 0
seek to end;         feof() = 0
read one more byte;  feof() = 1
seek back to start;  feof() = 1

Here’s the seek_test.c program I used to test this issue:
#include <stdio.h>

int main()
{
    FILE *f;
    unsigned char byte;

    /* feof() = 0 after opening */
    f = fopen("seek_test.c", "r");

    /* feof() = 0 after seeking to the end */
    fseek(f, 0, SEEK_END);

    /* feof() = 1 after reading one byte past the end */
    fread(&byte, 1, 1, f);

    /* feof() = 0 again on Linux, but still 1 under KallistiOS */
    fseek(f, 0, SEEK_SET);

    fclose(f);

    return 0;
}
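If the KallistiOS behavior can’t be fixed in its libc, a portable defensive measure (my suggestion, not something from the original code) is to clear the stream’s indicators after seeking; ISO C’s clearerr() does exactly that:

#include <stdio.h>

/* Seek and then explicitly clear the stream's EOF/error indicators,
   for C libraries (like the KallistiOS one described above) that
   leave the EOF flag set after a successful fseek(). */
static int seek_and_clear(FILE *f, long offset, int whence)
{
    int ret = fseek(f, offset, whence);
    clearerr(f);
    return ret;
}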
Speaking of EOF, I’m about done for this evening.

What codec would you select for this task, given the requirements involved?
The Big VP8 Debug
20 November 2010, by Multimedia Mike — VP8

I hope my previous walkthrough of the VP8 4x4 intra coding process was educational. Today, I’ll be walking through an example of what happens when my toy VP8 encoder encodes an intra 16x16 block. This may prove educational to those who have never been exposed to the deep details of this or related algorithms. Also, I wanted to illustrate where I think my VP8 encoder process is going bad and generating such grotesque results.
Before I start, let me give a shout-out to Google Docs’ Drawing tool which I used to generate these diagrams. It works quite well.
Results
(Always cut to the chase in a blog post; results first.) I’m glad I composed this post. In the course of doing so, I found the problem, fixed it, and am now able to present this image that was decoded from the bitstream encoded by my toy (now working) VP8 encoder:
Yeah, I know that image doesn’t look like anything you haven’t seen before. The difference is that it has made a successful trip through my VP8 encoder.
Follow along through the encoding process and learn of the mistake...
Original Block and Subblocks
Here is the 16x16 block to be encoded :
The block is broken down into 16 4x4 subblocks for further encoding :
Prediction
The first step is to pick a prediction mode, generate a prediction block, and subtract the predictors from the macroblock. In this case, we will use DC prediction which means the predictor will be the same for each element.

In 4x4 VP8 DC intra prediction, samples outside of the frame are assumed to be 128. It’s a little different in 16x16 DC intra prediction— samples above the top row are assumed to be 127 while samples left of the leftmost column are assumed to be 129. For the top left macroblock, this still works out to 128.
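As a quick illustration (my own sketch, following the description above rather than libvpx’s code), the 16x16 DC predictor is just a rounded average of the 32 border samples:

/* 16x16 DC intra predictor: average the 16 samples above the macroblock
   and the 16 samples to its left, with rounding. For the top-left
   macroblock the borders are filled with 127 (above) and 129 (left),
   and (16*127 + 16*129 + 16) >> 5 = 128. The rounding form here is my
   assumption of the usual convention, not a spec citation. */
int dc_predictor_16x16(const unsigned char above[16],
                       const unsigned char left[16])
{
    int i, sum = 0;

    for (i = 0; i < 16; i++)
        sum += above[i] + left[i];

    return (sum + 16) >> 5;   /* divide by 32, rounding to nearest */
}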
Subtract 128 from each of the samples :
Forward Transform
Run each of the 16 prediction-removed subblocks through the forward transform. This example uses the forward transform from libvpx 0.9.5 :
I have highlighted the DC coefficients in each subblock. That’s because those receive special consideration in 16x16 intra coding.
Quantization
The Y plane AC quantizer is 4 in this example, the minimum allowed. (The Y plane DC quantizer is also 4 but doesn’t come into play for intra 16x16 coding since the DC coefficients follow a different process.) Thus, quantize (integer divide) each AC element in each subblock (we’ll ignore the DC coefficient for this part) :
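As a concrete sketch (mine, not the encoder’s actual routine), quantizing one subblock’s AC coefficients with an AC quantizer of 4 is just an integer division per coefficient:

/* Quantize the AC coefficients of one 4x4 subblock. Index 0 is the DC
   coefficient, which is skipped here because it is handled by the Y2
   process described next. Plain C integer division truncates toward
   zero, which is all this example needs. */
void quantize_ac(short coeffs[16], int ac_quant)
{
    int i;

    for (i = 1; i < 16; i++)    /* skip index 0, the DC coefficient */
        coeffs[i] /= ac_quant;  /* ac_quant is 4 in this example    */
}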
The Y2 Round Trip
Those highlighted DC coefficients from each of the 16 subblocks comprise the Y2 block. This block is transformed with a slightly different algorithm called the Walsh-Hadamard Transform (WHT). The results of this transform are then quantized (using 8 for both Y2 DC and AC in this example, as those are the smallest Y2 quantizers that VP8 allows), then zigzagged and entropy-coded along with the rest of the macroblock coefficients.
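To make the data flow concrete, here is a rough sketch of that encoder-side step (my own illustration: the transform below is a plain, unscaled 4x4 Hadamard butterfly, not bit-exact with VP8’s forward WHT, and the quantizers are the 8/8 used in this example):

/* Build and quantize the Y2 block from the 16 subblock DC coefficients.
   Illustration only; not copied from libvpx. */

static void hadamard4(short v[4])
{
    short a = v[0] + v[3], b = v[1] + v[2];
    short c = v[1] - v[2], d = v[0] - v[3];
    v[0] = a + b;  v[1] = c + d;
    v[2] = a - b;  v[3] = d - c;
}

static void wht4x4(short blk[16])   /* generic 2-D 4x4 Hadamard */
{
    short col[4];
    int i, j;

    for (i = 0; i < 4; i++)                 /* rows */
        hadamard4(&blk[i * 4]);
    for (j = 0; j < 4; j++) {               /* columns */
        for (i = 0; i < 4; i++) col[i] = blk[i * 4 + j];
        hadamard4(col);
        for (i = 0; i < 4; i++) blk[i * 4 + j] = col[i];
    }
}

/* subblock[i][0] is the DC coefficient of the i-th 4x4 Y subblock. */
void build_and_quantize_y2(const short subblock[16][16], short y2[16])
{
    int i;

    for (i = 0; i < 16; i++)
        y2[i] = subblock[i][0];   /* collect the 16 DC coefficients   */
    wht4x4(y2);                   /* second-stage transform           */
    for (i = 0; i < 16; i++)
        y2[i] /= 8;               /* Y2 DC and AC quantizers are 8 here */
}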
On the decoder side, the Y2 coefficients are decoded, de-zigzagged, dequantized and run through the inverse WHT.
And this is where I suspect that most of the error is creeping into my VP8 encoder. Observe the round-trip through the Y2 process :
As intimated, this part causes me consternation due to the wide discrepancy between the original and the reconstructed Y2 blocks. Observe the absolute difference between the 2 vectors :
That’s really significant and leads me to believe that this is where the big problem is.
What’s Wrong ?
My first suspicion is that the quantization is throwing off the process. I was disabused of this idea when I removed quantization from the equation and immediately reversed the transform :
So perhaps there is a problem with the forward WHT. Just like with the usual subblock transform, the VP8 spec doesn’t define how to perform the forward WHT, only the inverse WHT. Do I need to audition different forward WHTs from various versions of libvpx, similar to what I did with the other transform ? That doesn’t make a lot of sense— libvpx doesn’t seem to have so much trouble with basic encoding.
The Punchline
I reviewed the forward WHT code, the stuff that I plagiarized from libvpx 0.9.0. The function takes, among other parameters, a pitch value. There are 2 loops in the code. The first iterates through the rows of the input matrix— which I assumed was a 4x4 matrix. I was puzzled that during each iteration of the row loop, the input pointer was only being advanced by (pitch/2). I removed the division by 2 and the problem went away. I.e., the encoded image looks correct.

What’s up with the (pitch/2), anyway? It seems that the encoder likes to pack 2 4x4 subblocks into an 8x4 block data structure. In fact, the forward DCTs in the libvpx encoder have the same artifact. Remember how I surveyed several variations of forward DCT from different versions of libvpx? The one that proved most accurate in that test was the one I had already modified to advance the input pointer properly. Fixing the other 2 candidates yields similar results:

input       :  92  91  89  86  91  90  88  86  89  89  89  88  89  87  88  93
short 0.9.0 : -311   6   2   0   0  11  -6   1   2  -3   3   0   0   0  -2   1
inverse     :  92  91  89  86  91  90  88  87  90  89  89  88  89  87  88  93
fast 0.9.0  : -313   5   1   0   1  11  -6   1   3  -3   4   0   0   0  -2   1
inverse     :  91  91  89  86  90  90  88  86  89  89  89  88  89  87  88  93
short 0.9.5 : -312   7   1   0   1  12  -5   2   2  -3   3  -1   1   0  -2   1
inverse     :  92  91  89  86  91  90  88  86  89  89  89  88  89  87  88  93
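For anyone cribbing the same code, here is a schematic of the addressing pattern in question (an illustration of the loop structure only, not libvpx’s actual function):

/* In libvpx the rows of a 4x4 subblock are not adjacent in memory (two
   subblocks apparently share an 8x4 layout), and pitch is defined so
   that pitch/2 is the row stride in 16-bit elements. My code fed the
   function a tightly packed 4x4 block, so the pitch/2 step landed on
   the wrong samples. */
void visit_rows(const short *input, int pitch)
{
    const short *ip = input;
    int row;

    for (row = 0; row < 4; row++) {
        /* ...the real transform processes ip[0..3] here... */

        ip += pitch / 2;   /* right for libvpx's packed layout; for a
                              tightly packed 4x4 block the stride is 4 */
    }
}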
Code cribber beware !
Corrected Y2 Round Trip
Let’s look at that Y2 round trip one more time :
And another look at the error between the original and the reconstruction :
Better.
Dequantization, Prediction, Inverse Transforms, and Reconstruction
To be honest, now that I solved the major problem, I’m getting a little tired of making these pictures. Long story short, all elements of the original 16 subblocks are dequantized and their DC coefficients are filled in with the appropriate item from the reconstructed Y2 block. A base predictor block is generated (all 128 values in this case). And each Y block is run through the inverse transform and added to the predictor block. The following is the reconstruction :
And if you compare that against the original luma macroblock (I don’t feel like doing it right now), you’ll find that it’s pretty close.
I can’t believe how close I was all this time, and how long that pitch bug held me up.
Anatomy of an optimization : H.264 deblocking
As mentioned in the previous post, H.264 has an adaptive deblocking filter. But what exactly does that mean — and more importantly, what does it mean for performance ? And how can we make it as fast as possible ? In this post I’ll try to answer these questions, particularly in relation to my recent deblocking optimizations in x264.
H.264’s deblocking filter has two steps: strength calculation and the actual filter. The first step calculates the parameters for the second step. The filter runs on all the edges in each macroblock. That’s 4 vertical edges of length 16 pixels and 4 horizontal edges of length 16 pixels. The vertical edges are filtered first, from left to right, then the horizontal edges, from top to bottom (order matters!). The leftmost edge is the one between the current macroblock and the left macroblock, while the topmost edge is the one between the current macroblock and the top macroblock.
Here’s the formula for the strength calculation in progressive mode. The highest strength that applies is always selected.
- If we’re on the edge between an intra macroblock and any other macroblock: Strength 4
- If we’re on an internal edge of an intra macroblock: Strength 3
- If either side of a 4-pixel-long edge has residual data: Strength 2
- If the motion vectors on opposite sides of a 4-pixel-long edge are at least a pixel apart (in either x or y direction) or the reference frames aren’t the same: Strength 1
- Otherwise: Strength 0 (no deblocking)

These values are then thrown into a lookup table depending on the quantizer: higher quantizers have stronger deblocking. Then the actual filter is run with the appropriate parameters. Note that Strength 4 is actually a special deblocking mode that performs a much stronger filter and affects more pixels.
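Written out as code, the rules above reduce to a tiny decision per 4-pixel edge segment. This is a simplified sketch (progressive only, illustrative struct fields, not x264’s actual implementation); motion vectors are taken to be in quarter-pel units, so "at least a pixel apart" means a difference of 4 or more:

/* One side (p or q) of a 4-pixel-long edge segment. */
struct edge_side {
    int intra;         /* block is intra coded             */
    int has_residual;  /* block has nonzero coefficients   */
    int ref;           /* reference frame index            */
    int mvx, mvy;      /* motion vector, quarter-pel units */
};

static int abs_diff(int a, int b) { return a > b ? a - b : b - a; }

/* Boundary strength (0..4) for one segment; mb_edge is nonzero when the
   segment lies on a macroblock boundary. */
int deblock_strength(const struct edge_side *p, const struct edge_side *q,
                     int mb_edge)
{
    if (p->intra || q->intra)
        return mb_edge ? 4 : 3;
    if (p->has_residual || q->has_residual)
        return 2;
    if (p->ref != q->ref ||
        abs_diff(p->mvx, q->mvx) >= 4 ||   /* >= one full pixel in x */
        abs_diff(p->mvy, q->mvy) >= 4)     /* >= one full pixel in y */
        return 1;
    return 0;
}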
One can see somewhat intuitively why these strengths are chosen. The deblocker exists to get rid of sharp edges caused by the block-based nature of H.264, and so the strength depends on what exists that might cause such sharp edges. The strength calculation is a way to use existing data from the video stream to make better decisions during the deblocking process, improving compression and quality.
Both the strength calculation and the actual filter (not described here) are very complex if naively implemented. The latter can be SIMD’d with not too much difficulty ; no H.264 decoder can get away with reasonable performance without such a thing. But what about optimizing the strength calculation ? A quick analysis shows that this can be beneficial as well.
Since we have to check both horizontal and vertical edges, we have to check up to 32 pairs of coefficient counts (for residual), 16 pairs of reference frame indices, and 128 motion vector values (counting x and y as separate values). This is a lot of calculation; a naive implementation can take 500-1000 clock cycles on a modern CPU. Of course, there are a lot of shortcuts we can take. Here are some examples:
- If the macroblock uses the 8×8 transform, we only need to check 2 edges in each direction instead of 4, because we don’t deblock inside of the 8×8 blocks.
- If the macroblock is a P-skip, we only have to check the first edge in each direction, since there’s guaranteed to be no motion vector differences, reference frame differences, or residual inside of the macroblock.
- If the macroblock has no residual at all, we can skip that check.
- If we know the partition type of the macroblock, we can do motion vector checks only along the edges of the partitions.
- If the effective quantizer is so low that no deblocking would be performed no matter what, don’t bother calculating the strength.
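To give a flavor of these shortcuts, here is a sketch of the macroblock-level early outs (illustrative fields and a placeholder threshold, not x264’s actual data structures or constants):

struct mb_info {
    int transform_8x8;  /* macroblock uses the 8x8 transform   */
    int p_skip;         /* macroblock is P-skip                */
    int qp;             /* effective quantizer for this edge   */
};

#define QP_DEBLOCK_OFF 16   /* placeholder value, not x264's constant */

/* Number of edges per direction that actually need a strength check,
   or 0 when the whole macroblock can be skipped. Only some of the
   shortcuts listed above are represented. */
int edges_to_check(const struct mb_info *mb)
{
    if (mb->qp < QP_DEBLOCK_OFF)
        return 0;   /* filter would never fire at this quantizer     */
    if (mb->p_skip)
        return 1;   /* only the macroblock-boundary edge needs a check */
    if (mb->transform_8x8)
        return 2;   /* no filtering inside the 8x8 blocks             */
    return 4;       /* full set of edges                              */
}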
But even all of this doesn’t save us from ourselves. We still have to iterate over a ton of edges, checking each one. Stuff like the partition-checking logic greatly complicates the code and adds overhead even as it reduces the number of checks. And in many cases decoupling the checks to add such logic will make it slower : if the checks are coupled, we can avoid doing a motion vector check if there’s residual, since Strength 2 overrides Strength 1.
But wait. What if we could do this in SIMD, just like the actual loopfilter itself ? Sure, it seems more of a problem for C code than assembly, but there aren’t any obvious things in the way. Many years ago, Loren Merritt (pengvado) wrote the first SIMD implementation that I know of (for ffmpeg’s decoder) ; it is quite fast, so I decided to work on porting the idea to x264 to see if we could eke out a bit more speed here as well.
Before I go over what I had to do to make this change, let me first describe how deblocking is implemented in x264. Since the filter is a loopfilter, it acts “in loop” and must be done in both the encoder and decoder — hence why x264 has it too, not just decoders. At the end of encoding one row of macroblocks, x264 goes back and deblocks the row, then performs half-pixel interpolation for use in encoding the next frame.
We do it per-row for reasons of cache coherency : deblocking accesses a lot of pixels and a lot of code that wouldn’t otherwise be used, so it’s more efficient to do it in a single pass as opposed to deblocking each macroblock immediately after encoding. Then half-pixel interpolation can immediately re-use the resulting data.
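Schematically, that end-of-row flow looks something like this (hypothetical function names; x264’s real code is organized differently):

/* Hypothetical helpers standing in for x264's real deblocking and
   half-pixel interpolation routines. */
void deblock_macroblock(int mb_row, int mb_x);
void hpel_interpolate_macroblock(int mb_row, int mb_x);

/* After encoding a full row of macroblocks, deblock that row in one
   pass, then run half-pixel interpolation on it so encoding the next
   frame can immediately reuse the filtered data. */
void finish_macroblock_row(int mb_row, int mb_width)
{
    int mb_x;

    for (mb_x = 0; mb_x < mb_width; mb_x++)
        deblock_macroblock(mb_row, mb_x);
    for (mb_x = 0; mb_x < mb_width; mb_x++)
        hpel_interpolate_macroblock(mb_row, mb_x);
}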
Now to the change. First, I modified deblocking to implement a subset of the macroblock_cache_load function : spend an extra bit of effort loading the necessary data into a data structure which is much simpler to address — as an assembly implementation would need (x264_macroblock_cache_load_deblock). Then I massively cleaned up deblocking to move all of the core strength-calculation logic into a single, small function that could be converted to assembly (deblock_strength_c). Finally, I wrote the assembly functions and worked with Loren to optimize them. Here’s the result.
And the timings for the resulting assembly function on my Core i7, in cycles :
deblock_strength_c:     309
deblock_strength_mmx:    79
deblock_strength_sse2:   37
deblock_strength_ssse3:  33

Now that is a seriously nice improvement. 33 cycles on average to perform that many comparisons–that’s absurdly low, especially considering the SIMD takes no branchy shortcuts: it always checks every single edge! I walked over to my performance chart and happily crossed off a box.
But I had a hunch that I could do better. Remember, as mentioned earlier, we’re reloading all that data back into our data structures in order to address it. This isn’t that slow, but takes enough time to significantly cut down on the gain of the assembly code. And worse, less than a row ago, all this data was in the correct place to be used (when we just finished encoding the macroblock) ! But if we did the deblocking right after encoding each macroblock, the cache issues would make it too slow to be worth it (yes, I tested this). So I went back to other things, a bit annoyed that I couldn’t get the full benefit of the changes.
Then, yesterday, I was talking with Pascal, a former Xvid dev and current video hacker over at Google, about various possible x264 optimizations. He had seen my deblocking changes and we discussed that a bit as well. Then two lines hit me like a pile of bricks :
<_skal_> tried computing the strength at least ?
<_skal_> while it’s fresh

Why hadn’t I thought of that? Do the strength calculation immediately after encoding each macroblock, save the result, and then go pick it up later for the main deblocking filter. Then we can use the data right there and then for strength calculation, but we don’t have to do the whole deblock process until later.
I went and implemented it and, after working my way through a horde of bugs, eventually got a working implementation. A big catch was that of slices : deblocking normally acts between slices even though normal encoding does not, so I had to perform extra munging to get that to work. By midday today I was able to go cross yet another box off on the performance chart. And now it’s committed.
Sometimes chatting for 10 minutes with another developer is enough to spot the idea that your brain somehow managed to miss for nearly a straight week.
NB : the performance chart is on a specific test clip at a specific set of settings (super fast settings) relevant to the company I work at, so it isn’t accurate nor complete for, say, default settings.
Update : Here’s a higher resolution version of the current chart, as requested in the comments.