
Other articles (78)

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do this, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. Just sign up to the translators' mailing list to ask for more information.
    At present, MediaSPIP is only available in French and (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, as a standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

  • Making files available

    14 April 2011

    By default, when first set up, MediaSPIP does not let visitors download files, whether originals or the results of their transformation or encoding; it only lets them be viewed.
    However, it is easy to give visitors access to these documents, and in several different forms.
    All of this is handled on the template configuration page. Go to the channel's administration area and choose, in the navigation (...)

On other sites (12275)

  • Hacking the Popcorn Hour C-200

    3 May 2010, by Mans — Hardware, MIPS

    Update: A new firmware version has been released since the publication of this article. I do not know if the procedure described below will work with the new version.

    The Popcorn Hour C-200 is a Linux-based media player with impressive specifications. At its heart is a Sigma Designs SMP8643 system on chip with a 667MHz MIPS 74Kf as main CPU, several co-processors, and 512MB of DRAM attached. Gigabit Ethernet, SATA, and USB provide connectivity with the world around it. With a modest $299 price tag, the temptation to repurpose the unit as a low-power server or cheap development board is hard to resist. This article shows how such a conversion can be achieved.

    Kernel

    The PCH runs a patched Linux 2.6.22.19 kernel. A source tarball is available from the manufacturer. This contains the sources with Sigma support patches, Con Kolivas’ patch set (scheduler tweaks), and assorted unrelated changes. Properly split patches are unfortunately not available. I have created a reduced patch against vanilla 2.6.22.19 with only Sigma-specific changes, available here.

    The installed kernel has a number of features disabled, notably PTY support and oprofile. We will use kexec to load a more friendly one.

    As might be expected, the PCH kernel does not have kexec support enabled. It does, however, by virtue of using closed-source components, support module loading. This lets us turn kexec into a module and load it. A patch for this is available here. To build the module, apply the patch to the PCH sources and build using this configuration. This will produce two modules, kexec.ko and mips_kexec.ko. No other products of this build will be needed.

    The replacement kernel can be built from the PCH sources or, if one prefers, from vanilla 2.6.22.19 with the Sigma-only patch. For the latter case, this config provides a minimal starting point suitable for NFS-root.

    When configuring the kernel, make sure CONFIG_TANGOX_IGNORE_CMDLINE is enabled. Otherwise the command line will be overridden by a useless one stored in flash. A good command line can be set with CONFIG_CMDLINE (under “Kernel hacking” in menuconfig) or passed from kexec.

    Taking control

    In order to load our kexec module, we must first gain root privileges on the PCH, and here a few features of the system work to our advantage:

    1. The PCH allows mounting any NFS export to access media files stored there.
    2. There is an HTTP server running. As root.
    3. This HTTP server can be readily instructed to fetch files from an NFS mount.
    4. Files with a name ending in .cgi are executed. As root.

    All we need do to profit from this is place the kexec modules, the kexec userspace tools, and a simple script on an NFS export. Once this is done, and the mount point configured on the PCH, a simple HTTP request will send the old kernel screaming to /dev/null, our shiny new kernel taking its place.

    The rootfs

    A kernel is mostly useless without a root filesystem containing tools and applications. A number of tools for cross-compiling a full system exist, each with its strengths and weaknesses. The only thing to look out for is the version of kernel headers used (usually a linux-headers package). As we will be running an old kernel, chances are the default version is too recent. Other than this, everything should be by the book.

    Assembling the parts

    Having gathered all the pieces, it is now time to assemble the hack. The following steps are suitable for an NFS-root system. Adaptation to a disk-based system is left as an exercise.

    1. Build a rootfs for MIPS 74Kf little endian. Make sure kernel headers used are no more recent than 2.6.22.x. Include a recent version of the kexec userspace tools.
    2. Fetch and unpack the PCH kernel sources.
    3. Apply the modular kexec patch.
    4. Using this config, build the modules and install them as usual to the rootfs. The version string must be 2.6.22.19-19-4.
    5. From either the same kernel sources or plain 2.6.22.19 with Sigma patches, build a vmlinux and (optionally) modules using this config. Modify the compiled-in command line to point to the correct rootfs. Set the version string to something other than in the previous step.
    6. Copy vmlinux to any directory in the rootfs.
    7. Copy kexec.sh and kexec.cgi to the same directory as vmlinux.
    8. Export the rootfs over NFS with full read/write permissions for the PCH.
    9. Power on the PCH, and update to the latest firmware.
    10. Configure an NFS mount of the rootfs.
    11. Navigate to the rootfs in the PCH UI. A directory listing of bin, dev, etc. should be displayed.
    12. On the host system, run the kexec.sh script with the target hostname or IP address as argument.
    13. If all goes well, the new kernel will boot and mount the rootfs.

    Serial console

    A serial console is indispensable for solving boot problems. The PCH board has two UART connectors. We will use the one labeled UART0. The pinout is as follows (not standard PC pinout).

            +-----------+
           2| * * * * * |10
           1| * * * * * |9
            +-----------+
              J7 UART0
        /---------------------/ board edge

    Pin   Function
    1     +5V
    5     Rx
    6     Tx
    10    GND

    The signals are 3.3 V, so a converter such as a MAX202 is required to connect this to a PC serial port. The default port settings are 115200 bps, 8n1.

  • The Big VP8 Debug

    20 November 2010, by Multimedia Mike — VP8

    I hope my previous walkthrough of the VP8 4x4 intra coding process was educational. Today, I’ll be walking through an example of what happens when my toy VP8 encoder encodes an intra 16x16 block. This may prove educational to those who have never been exposed to the deep details of this or related algorithms. Also, I wanted to illustrate where I think my VP8 encoder process is going bad and generating such grotesque results.

    Before I start, let me give a shout-out to Google Docs’ Drawing tool which I used to generate these diagrams. It works quite well.

    Results

    (Always cut to the chase in a blog post; results first.) I'm glad I composed this post. In the course of doing so, I found the problem, fixed it, and am now able to present this image, decoded from a bitstream encoded by my now-working toy VP8 encoder:



    Yeah, I know that image doesn’t look like anything you haven’t seen before. The difference is that it has made a successful trip through my VP8 encoder.

    Follow along through the encoding process and learn of the mistake...

    Original Block and Subblocks

    Here is the 16x16 block to be encoded:



    The block is broken down into 16 4x4 subblocks for further encoding:



    Prediction

    The first step is to pick a prediction mode, generate a prediction block, and subtract the predictors from the macroblock. In this case, we will use DC prediction which means the predictor will be the same for each element.

    In 4x4 VP8 DC intra prediction, samples outside of the frame are assumed to be 128. It's a little different in 16x16 DC intra prediction — samples above the top row are assumed to be 127, while samples left of the leftmost column are assumed to be 129. For the top left macroblock, this still works out to 128.
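
    To make this concrete, here is a literal C transcription of the rule just described; the function name and interface are illustrative, not libvpx's, and real implementations may handle the frame edges slightly differently.

    #include <stdint.h>

    /* VP8 16x16 DC intra prediction as described above: average the 32
     * neighbouring samples, substituting 127 above the top row and 129
     * left of the leftmost column.  Illustrative sketch, not libvpx code. */
    static void dc_predict_16x16(uint8_t pred[16][16],
                                 const uint8_t *above, const uint8_t *left)
    {
        int sum = 0;

        for (int i = 0; i < 16; i++) {
            sum += above ? above[i] : 127;  /* off-frame above: 127 */
            sum += left  ? left[i]  : 129;  /* off-frame left: 129 */
        }

        int dc = (sum + 16) >> 5;  /* rounded mean; 128 for the top-left MB */

        for (int r = 0; r < 16; r++)
            for (int c = 0; c < 16; c++)
                pred[r][c] = (uint8_t)dc;
    }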

    Subtract 128 from each of the samples:



    Forward Transform

    Run each of the 16 prediction-removed subblocks through the forward transform. This example uses the forward transform from libvpx 0.9.5:



    I have highlighted the DC coefficients in each subblock. That’s because those receive special consideration in 16x16 intra coding.

    Quantization

    The Y plane AC quantizer is 4 in this example, the minimum allowed. (The Y plane DC quantizer is also 4 but doesn't come into play for intra 16x16 coding since the DC coefficients follow a different process.) Thus, quantize (integer divide) each AC element in each subblock (we'll ignore the DC coefficient for this part):
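
    In code this step is trivial; a minimal sketch, with names of my own choosing:

    #include <stdint.h>

    /* Quantize the 15 AC coefficients of one transformed 4x4 subblock by
     * plain integer division, as described above.  Coefficient 0 is the
     * DC, which takes the separate Y2 path.  Here q_ac == 4. */
    static void quantize_ac(int16_t coeff[16], int q_ac)
    {
        for (int i = 1; i < 16; i++)
            coeff[i] /= q_ac;   /* truncating integer divide */
    }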



    The Y2 Round Trip

    Those highlighted DC coefficients from each of the 16 subblocks comprise the Y2 block. This block is transformed with a slightly different algorithm called the Walsh-Hadamard Transform (WHT). The results of this transform are then quantized (using 8 for both Y2 DC and AC in this example, as those are the smallest Y2 quantizers that VP8 allows), then zigzagged and entropy-coded along with the rest of the macroblock coefficients.
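
    Sketched in C, the encoder side of this step looks roughly as follows. The subblock layout (raster order, coefficient 0 = DC) and the function names are my assumptions; the WHT itself is left abstract.

    #include <stdint.h>

    void forward_wht4x4(const int16_t in[16], int16_t out[16]); /* elsewhere */

    /* Build and quantize the Y2 block: gather the DC coefficient of each
     * of the 16 subblocks, apply the WHT, then integer-divide by the Y2
     * quantizers (8 for both DC and AC in this example). */
    static void encode_y2(const int16_t subblocks[16][16], int16_t y2q[16])
    {
        int16_t y2[16];

        for (int sb = 0; sb < 16; sb++)
            y2[sb] = subblocks[sb][0];  /* the highlighted DC coefficients */

        forward_wht4x4(y2, y2q);

        for (int i = 0; i < 16; i++)
            y2q[i] /= 8;                /* Y2 DC and AC quantizers, both 8 */
    }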

    On the decoder side, the Y2 coefficients are decoded, de-zigzagged, dequantized and run through the inverse WHT.

    And this is where I suspect that most of the error is creeping into my VP8 encoder. Observe the round-trip through the Y2 process:



    As intimated, this part causes me consternation due to the wide discrepancy between the original and the reconstructed Y2 blocks. Observe the absolute difference between the 2 vectors:



    That’s really significant and leads me to believe that this is where the big problem is.

    What's Wrong?

    My first suspicion is that the quantization is throwing off the process. I was disabused of this idea when I removed quantization from the equation and immediately reversed the transform:



    So perhaps there is a problem with the forward WHT. Just like with the usual subblock transform, the VP8 spec doesn't define how to perform the forward WHT, only the inverse WHT. Do I need to audition different forward WHTs from various versions of libvpx, similar to what I did with the other transform? That doesn't make a lot of sense — libvpx doesn't seem to have so much trouble with basic encoding.

    The Punchline

    I reviewed the forward WHT code, the stuff that I plagiarized from libvpx 0.9.0. The function takes, among other parameters, a pitch value. There are 2 loops in the code. The first iterates through the rows of the input matrix — which I assumed was a 4x4 matrix. I was puzzled that during each iteration of the row loop, the input pointer was only being advanced by (pitch/2). I removed the division by 2 and the problem went away. I.e., the encoded image looks correct.
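
    For readers who want to see the shape of the trap, here is a skeleton of that row loop with the arithmetic elided; this shows the structure under discussion, not libvpx's actual code.

    /* Row pass of the forward WHT.  In libvpx the input rows belong to an
     * 8x4 structure holding two 4x4 subblocks side by side, which is what
     * the (pitch/2) advance accounts for; feed it a tightly packed 4x4
     * matrix instead and each iteration lands in the middle of a row. */
    void forward_wht4x4_rows(const short *input, short *output, int pitch)
    {
        const short *ip = input;
        short *op = output;

        for (int i = 0; i < 4; i++) {
            /* ...combine ip[0..3] into op[0..3]... */
            ip += pitch / 2;   /* the advance that caused the bug */
            op += 4;
        }
        /* column pass omitted */
    }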

    What's up with the (pitch/2), anyway? It seems that the encoder likes to pack 2 4x4 subblocks into an 8x4 block data structure. In fact, the forward DCTs in the libvpx encoder have the same artifact. Remember how I surveyed several variations of forward DCT from different versions of libvpx? The one that proved most accurate in that test was the one I had already modified to advance the input pointer properly. Fixing the other 2 candidates yields similar results:

    input:         92  91  89  86  91  90  88  86  89  89  89  88  89  87  88  93
    short 0.9.0: -311   6   2   0   0  11  -6   1   2  -3   3   0   0   0  -2   1
    inverse:       92  91  89  86  91  90  88  87  90  89  89  88  89  87  88  93
    fast 0.9.0:  -313   5   1   0   1  11  -6   1   3  -3   4   0   0   0  -2   1
    inverse:       91  91  89  86  90  90  88  86  89  89  89  88  89  87  88  93
    short 0.9.5: -312   7   1   0   1  12  -5   2   2  -3   3  -1   1   0  -2   1
    inverse:       92  91  89  86  91  90  88  86  89  89  89  88  89  87  88  93
    

    Code cribber beware!

    Corrected Y2 Round Trip

    Let's look at that Y2 round trip one more time:



    And another look at the error between the original and the reconstruction:



    Better.

    Dequantization, Prediction, Inverse Transforms, and Reconstruction

    To be honest, now that I solved the major problem, I'm getting a little tired of making these pictures. Long story short, all elements of the original 16 subblocks are dequantized and their DC coefficients are filled in with the appropriate item from the reconstructed Y2 block. A base predictor block is generated (all 128 values in this case). And each Y block is run through the inverse transform and added to the predictor block. The following is the reconstruction:



    And if you compare that against the original luma macroblock (I don’t feel like doing it right now), you’ll find that it’s pretty close.
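
    For the record, here is a compact C sketch of the reconstruction pipeline just described. The layout (raster-ordered subblocks) and all names are mine, and the inverse transform is left abstract.

    #include <stdint.h>

    void inverse_transform4x4(const int16_t in[16], int16_t out[16]); /* elsewhere */

    static uint8_t clamp255(int v) { return v < 0 ? 0 : v > 255 ? 255 : v; }

    /* Dequantize each subblock, drop in its DC from the reconstructed Y2
     * block, inverse-transform, and add the all-128 predictor. */
    static void reconstruct_mb(int16_t coeffs[16][16],
                               const int16_t y2_recon[16],
                               int q_ac, uint8_t mb[16][16])
    {
        int16_t residual[16];

        for (int sb = 0; sb < 16; sb++) {
            for (int i = 1; i < 16; i++)
                coeffs[sb][i] *= q_ac;    /* dequantize the AC terms */
            coeffs[sb][0] = y2_recon[sb]; /* DC from the Y2 round trip */

            inverse_transform4x4(coeffs[sb], residual);

            int y0 = (sb / 4) * 4, x0 = (sb % 4) * 4;
            for (int i = 0; i < 16; i++)
                mb[y0 + i / 4][x0 + i % 4] =
                    clamp255(128 + residual[i]);  /* add the DC predictor */
        }
    }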

    I can’t believe how close I was all this time, and how long that pitch bug held me up.

  • Anatomy of an optimization: H.264 deblocking

    26 May 2010, by Dark Shikari — H.264, assembly, development, speed, x264

    As mentioned in the previous post, H.264 has an adaptive deblocking filter. But what exactly does that mean — and more importantly, what does it mean for performance? And how can we make it as fast as possible? In this post I'll try to answer these questions, particularly in relation to my recent deblocking optimizations in x264.

    H.264's deblocking filter has two steps: strength calculation and the actual filter. The first step calculates the parameters for the second step. The filter runs on all the edges in each macroblock. That's 4 vertical edges of length 16 pixels and 4 horizontal edges of length 16 pixels. The vertical edges are filtered first, from left to right, then the horizontal edges, from top to bottom (order matters!). The leftmost edge is the one between the current macroblock and the left macroblock, while the topmost edge is the one between the current macroblock and the top macroblock.

    Here's the formula for the strength calculation in progressive mode (a C sketch follows the list). The highest strength that applies is always selected.

    • If we're on the edge between an intra macroblock and any other macroblock: Strength 4
    • If we're on an internal edge of an intra macroblock: Strength 3
    • If either side of a 4-pixel-long edge has residual data: Strength 2
    • If the motion vectors on opposite sides of a 4-pixel-long edge are at least a pixel apart (in either x or y direction) or the reference frames aren't the same: Strength 1
    • Otherwise: Strength 0 (no deblocking)
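
    Transcribed directly into C, the decision chain looks like this; a sketch for a single 4-pixel edge segment, with parameter names of my own choosing, not x264's deblock_strength_c.

    #include <stdlib.h>

    /* Returns the boundary strength for one 4-pixel edge segment.
     * Motion vectors are in quarter-pel units, so "at least a pixel
     * apart" means a difference of 4 or more. */
    static int edge_strength(int intra_mb_edge, int intra_internal_edge,
                             int residual_either_side,
                             int mvxa, int mvya, int refa,
                             int mvxb, int mvyb, int refb)
    {
        if (intra_mb_edge)        return 4;
        if (intra_internal_edge)  return 3;
        if (residual_either_side) return 2;
        if (abs(mvxa - mvxb) >= 4 || abs(mvya - mvyb) >= 4 || refa != refb)
            return 1;
        return 0;  /* no deblocking */
    }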

    These values are then thrown into a lookup table depending on the quantizer: higher quantizers have stronger deblocking. Then the actual filter is run with the appropriate parameters. Note that Strength 4 is actually a special deblocking mode that performs a much stronger filter and affects more pixels.

    One can see somewhat intuitively why these strengths are chosen. The deblocker exists to get rid of sharp edges caused by the block-based nature of H.264, and so the strength depends on what exists that might cause such sharp edges. The strength calculation is a way to use existing data from the video stream to make better decisions during the deblocking process, improving compression and quality.

    Both the strength calculation and the actual filter (not described here) are very complex if naively implemented. The latter can be SIMD'd without too much difficulty; no H.264 decoder can get away with reasonable performance without such a thing. But what about optimizing the strength calculation? A quick analysis shows that this can be beneficial as well.

    Since we have to check both horizontal and vertical edges, we have to check up to 32 pairs of coefficient counts (for residual), 16 pairs of reference frame indices, and 128 motion vector values (counting x and y as separate values). This is a lot of calculation; a naive implementation can take 500-1000 clock cycles on a modern CPU. Of course, there are a lot of shortcuts we can take. Here are some examples:

    • If the macroblock uses the 8×8 transform, we only need to check 2 edges in each direction instead of 4, because we don’t deblock inside of the 8×8 blocks.
    • If the macroblock is a P-skip, we only have to check the first edge in each direction, since there’s guaranteed to be no motion vector differences, reference frame differences, or residual inside of the macroblock.
    • If the macroblock has no residual at all, we can skip that check.
    • If we know the partition type of the macroblock, we can do motion vector checks only along the edges of the partitions.
    • If the effective quantizer is so low that no deblocking would be performed no matter what, don’t bother calculating the strength.

    But even all of this doesn't save us from ourselves. We still have to iterate over a ton of edges, checking each one. Stuff like the partition-checking logic greatly complicates the code and adds overhead even as it reduces the number of checks. And in many cases decoupling the checks to add such logic will make it slower: if the checks are coupled, we can avoid doing a motion vector check if there's residual, since Strength 2 overrides Strength 1.

    But wait. What if we could do this in SIMD, just like the actual loopfilter itself? Sure, it seems more of a problem for C code than assembly, but there aren't any obvious things in the way. Many years ago, Loren Merritt (pengvado) wrote the first SIMD implementation that I know of (for ffmpeg's decoder); it is quite fast, so I decided to work on porting the idea to x264 to see if we could eke out a bit more speed here as well.

    Before I go over what I had to do to make this change, let me first describe how deblocking is implemented in x264. Since the filter is a loopfilter, it acts "in loop" and must be done in both the encoder and decoder — which is why x264 has it too, not just decoders. At the end of encoding one row of macroblocks, x264 goes back and deblocks the row, then performs half-pixel interpolation for use in encoding the next frame.

    We do it per-row for reasons of cache coherency : deblocking accesses a lot of pixels and a lot of code that wouldn’t otherwise be used, so it’s more efficient to do it in a single pass as opposed to deblocking each macroblock immediately after encoding. Then half-pixel interpolation can immediately re-use the resulting data.

    Now to the change. First, I modified deblocking to implement a subset of the macroblock_cache_load function : spend an extra bit of effort loading the necessary data into a data structure which is much simpler to address — as an assembly implementation would need (x264_macroblock_cache_load_deblock). Then I massively cleaned up deblocking to move all of the core strength-calculation logic into a single, small function that could be converted to assembly (deblock_strength_c). Finally, I wrote the assembly functions and worked with Loren to optimize them. Here’s the result.

    And the timings for the resulting assembly function on my Core i7, in cycles:

    deblock_strength_c:     309
    deblock_strength_mmx:    79
    deblock_strength_sse2:   37
    deblock_strength_ssse3:  33

    Now that is a seriously nice improvement. 33 cycles on average to perform that many comparisons — that's absurdly low, especially considering the SIMD takes no branchy shortcuts: it always checks every single edge! I walked over to my performance chart and happily crossed off a box.

    But I had a hunch that I could do better. Remember, as mentioned earlier, we're reloading all that data back into our data structures in order to address it. This isn't that slow, but takes enough time to significantly cut down on the gain of the assembly code. And worse, less than a row ago, all this data was in the correct place to be used (when we had just finished encoding the macroblock)! But if we did the deblocking right after encoding each macroblock, the cache issues would make it too slow to be worth it (yes, I tested this). So I went back to other things, a bit annoyed that I couldn't get the full benefit of the changes.

    Then, yesterday, I was talking with Pascal, a former Xvid dev and current video hacker over at Google, about various possible x264 optimizations. He had seen my deblocking changes and we discussed that a bit as well. Then two lines hit me like a pile of bricks:

    <_skal_> tried computing the strength at least?
    <_skal_> while it’s fresh

    Why hadn't I thought of that? Do the strength calculation immediately after encoding each macroblock, save the result, and pick it up later for the main deblocking filter. That way the strength calculation uses the data while it's fresh, but the full deblock process can still wait until later.
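
    In outline, the change looks something like this sketch; the names and data layout are illustrative, not x264's actual structures.

    #include <stdint.h>

    /* Per-macroblock deblock strengths: [vertical/horizontal][edge][segment]. */
    typedef struct {
        uint8_t bs[2][4][4];
    } mb_strengths_t;

    /* Stub: apply the strength rules while the MB data is still hot. */
    static void compute_strengths(mb_strengths_t *out) { (void)out; }

    /* Called right after each macroblock is encoded. */
    static void macroblock_encoded(mb_strengths_t *cache, int mb_xy)
    {
        compute_strengths(&cache[mb_xy]);
    }

    /* Called once per finished row: only the filter itself runs here,
     * reading the strengths stashed earlier. */
    static void deblock_row(const mb_strengths_t *cache,
                            int mb_y, int mb_width)
    {
        for (int mb_x = 0; mb_x < mb_width; mb_x++) {
            const mb_strengths_t *s = &cache[mb_y * mb_width + mb_x];
            /* ...run the loopfilter using s->bs... */
            (void)s;
        }
    }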

    I went and implemented it and, after working my way through a horde of bugs, eventually got a working implementation. A big catch was slices: deblocking normally acts between slices even though normal encoding does not, so I had to perform extra munging to get that to work. By midday today I was able to cross yet another box off on the performance chart. And now it's committed.

    Sometimes chatting for 10 minutes with another developer is enough to spot the idea that your brain somehow managed to miss for nearly a straight week.

    NB: the performance chart is for a specific test clip at a specific set of settings (super fast settings) relevant to the company I work at, so it isn't accurate or complete for, say, default settings.

    Update: Here's a higher-resolution version of the current chart, as requested in the comments.