Media (91)

Other articles (105)

  • Adding user-specific information and other changes to author-related behaviour

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to final video [standalone version]

    31 January 2010, by

    The journey of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, an SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are performed in addition to the normal behaviour: retrieval of the technical information about the file’s audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (10539)

  • The Big VP8 Debug

    20 November 2010, by Multimedia Mike — VP8

    I hope my previous walkthrough of the VP8 4x4 intra coding process was educational. Today, I’ll be walking through an example of what happens when my toy VP8 encoder encodes an intra 16x16 block. This may prove educational to those who have never been exposed to the deep details of this or related algorithms. Also, I wanted to illustrate where I think my VP8 encoder process is going bad and generating such grotesque results.

    Before I start, let me give a shout-out to Google Docs’ Drawing tool which I used to generate these diagrams. It works quite well.

    Results

    (Always cut to the chase in a blog post; results first.) I’m glad I composed this post. In the course of doing so, I found the problem, fixed it, and am now able to present this image that was decoded from the bitstream encoded by my toy VP8 encoder, now in working order:



    Yeah, I know that image doesn’t look like anything you haven’t seen before. The difference is that it has made a successful trip through my VP8 encoder.

    Follow along through the encoding process and learn of the mistake...

    Original Block and Subblocks

    Here is the 16x16 block to be encoded:



    The block is broken down into 16 4x4 subblocks for further encoding:



    Prediction

    The first step is to pick a prediction mode, generate a prediction block, and subtract the predictors from the macroblock. In this case, we will use DC prediction which means the predictor will be the same for each element.

    In 4x4 VP8 DC intra prediction, samples outside of the frame are assumed to be 128. It’s a little different in 16x16 DC intra prediction— samples above the top row are assumed to be 127 while samples left of the leftmost column are assumed to be 129. For the top left macroblock, this still works out to 128.
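
    As a minimal sketch of that rule (the function name and interface are invented for illustration; this is not libvpx code, and real edge-availability handling may differ), missing edges are filled with the assumed constants and the predictor is the rounded average of the 32 edge samples:

    /* Sketch of 16x16 DC prediction as described above; names invented.
     * "above"/"left" point to the 16 reconstructed neighbor samples on
     * each edge, or are NULL when that edge is outside the frame. */
    #include <stdint.h>

    static int dc_predictor_16x16(const uint8_t *above, const uint8_t *left)
    {
        int sum = 0, i;

        for (i = 0; i < 16; i++)
            sum += above ? above[i] : 127;  /* above the top row: assume 127 */
        for (i = 0; i < 16; i++)
            sum += left ? left[i] : 129;    /* left of the first column: assume 129 */

        return (sum + 16) >> 5;  /* rounded average of the 32 edge samples */
    }

    With both edges missing, the sum is 16*127 + 16*129 = 4096, and the rounded average is 128, matching the top-left macroblock case above.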

    Subtract 128 from each of the samples:



    Forward Transform

    Run each of the 16 prediction-removed subblocks through the forward transform. This example uses the forward transform from libvpx 0.9.5:



    I have highlighted the DC coefficients in each subblock. That’s because those receive special consideration in 16x16 intra coding.

    Quantization

    The Y plane AC quantizer is 4 in this example, the minimum allowed. (The Y plane DC quantizer is also 4 but doesn’t come into play for intra 16x16 coding since the DC coefficients follow a different process.) Thus, quantize (integer divide) each AC element in each subblock (we’ll ignore the DC coefficient for this part):
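
    As a sketch, that step is just a truncating integer division applied to coefficients 1..15 of each subblock (function name invented; a production VP8 quantizer also deals with rounding and zero-binning, which this toy version ignores):

    /* Sketch of the AC quantization described above (name invented).
     * Coefficient 0 (DC) is skipped; it travels through the Y2 block. */
    static void quantize_subblock_ac(short coeffs[16], int ac_quantizer)
    {
        int i;
        for (i = 1; i < 16; i++)
            coeffs[i] /= ac_quantizer;  /* plain truncating integer divide */
    }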



    The Y2 Round Trip

    Those highlighted DC coefficients from each of the 16 subblocks comprise the Y2 block. This block is transformed with a slightly different algorithm called the Walsh-Hadamard Transform (WHT). The results of this transform are then quantized (using 8 for both Y2 DC and AC in this example, as those are the smallest Y2 quantizers that VP8 allows), then zigzagged and entropy-coded along with the rest of the macroblock coefficients.

    On the decoder side, the Y2 coefficients are decoded, de-zigzagged, dequantized and run through the inverse WHT.

    And this is where I suspect that most of the error is creeping into my VP8 encoder. Observe the round-trip through the Y2 process:



    As intimated, this part causes me consternation due to the wide discrepancy between the original and the reconstructed Y2 blocks. Observe the absolute difference between the 2 vectors:



    That’s really significant and leads me to believe that this is where the big problem is.

    What’s Wrong?

    My first suspicion is that the quantization is throwing off the process. I was disabused of this idea when I removed quantization from the equation and immediately reversed the transform:



    So perhaps there is a problem with the forward WHT. Just like with the usual subblock transform, the VP8 spec doesn’t define how to perform the forward WHT, only the inverse WHT. Do I need to audition different forward WHTs from various versions of libvpx, similar to what I did with the other transform? That doesn’t make a lot of sense—libvpx doesn’t seem to have so much trouble with basic encoding.

    The Punchline

    I reviewed the forward WHT code, the stuff that I plagiarized from libvpx 0.9.0. The function takes, among other parameters, a pitch value. There are 2 loops in the code. The first iterates through the rows of the input matrix—which I assumed was a 4x4 matrix. I was puzzled that during each iteration of the row loop, the input pointer was only being advanced by (pitch/2). I removed the division by 2 and the problem went away, i.e., the encoded image looks correct.
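
    For the curious, the row loop in question is shaped roughly like this (a paraphrase from memory, not verbatim libvpx 0.9.0 code):

    /* Rough paraphrase of the forward WHT row pass (not verbatim libvpx).
     * input is a short*, but pitch is expressed in bytes, so the stock
     * code steps by pitch/2 shorts per row.  Feed it a pitch already in
     * units of shorts and every row lands in the wrong place, which is
     * exactly the bug described above. */
    static void forward_wht_rows(const short *input, short *output, int pitch)
    {
        int i;

        for (i = 0; i < 4; i++) {
            int a1 = input[0] + input[3];
            int b1 = input[1] + input[2];
            int c1 = input[1] - input[2];
            int d1 = input[0] - input[3];

            output[0] = a1 + b1;
            output[1] = c1 + d1;
            output[2] = a1 - b1;
            output[3] = d1 - c1;

            input  += pitch / 2;  /* the suspicious (pitch/2) */
            output += 4;
        }
        /* ... column pass omitted for brevity ... */
    }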

    What’s up with the (pitch/2), anyway? It seems that the encoder likes to pack two 4x4 subblocks into an 8x4 block data structure. In fact, the forward DCTs in the libvpx encoder have the same artifact. Remember how I surveyed several variations of forward DCT from different versions of libvpx? The one that proved most accurate in that test was the one I had already modified to advance the input pointer properly. Fixing the other 2 candidates yields similar results:

    input:        92 91 89 86 91 90 88 86 89 89 89 88 89 87 88 93
    short 0.9.0: -311  6  2  0  0 11 -6  1  2 -3  3  0  0  0 -2  1
    inverse:      92 91 89 86 91 90 88 87 90 89 89 88 89 87 88 93
    fast 0.9.0:  -313  5  1  0  1 11 -6  1  3 -3  4  0  0  0 -2  1
    inverse:      91 91 89 86 90 90 88 86 89 89 89 88 89 87 88 93
    short 0.9.5: -312  7  1  0  1 12 -5  2  2 -3  3 -1  1  0 -2  1
    inverse:      92 91 89 86 91 90 88 86 89 89 89 88 89 87 88 93
    

    Code cribber beware!

    Corrected Y2 Round Trip

    Let’s look at that Y2 round trip one more time:



    And another look at the error between the original and the reconstruction:



    Better.

    Dequantization, Prediction, Inverse Transforms, and Reconstruction

    To be honest, now that I solved the major problem, I’m getting a little tired of making these pictures. Long story short, all elements of the original 16 subblocks are dequantized and their DC coefficients are filled in with the appropriate item from the reconstructed Y2 block. A base predictor block is generated (all 128 values in this case). And each Y block is run through the inverse transform and added to the predictor block. The following is the reconstruction:



    And if you compare that against the original luma macroblock (I don’t feel like doing it right now), you’ll find that it’s pretty close.

    I can’t believe how close I was all this time, and how long that pitch bug held me up.

  • Processing Big Data Problems

    8 January 2011, by Multimedia Mike — Big Data

    I’m becoming more interested in big data problems, i.e., extracting useful information out of absurdly sized sets of input data. I know it’s a growing field and there is a lot to read on the subject. But you know how I roll—just think of a problem to solve and dive right in.

    Here’s how my adventure unfolded.

    The Corpus
    I need to run a command line program on a set of files I have collected. This corpus is on the order of 350,000 files. The files range from 7 bytes to 175 MB. Combined, they occupy around 164 GB of storage space.

    Oh, and said storage space resides on an external, USB 2.0-connected hard drive. Stop laughing.

    A file is named according to the SHA-1 hash of its data. The files are organized in a directory hierarchy according to the first 6 hex digits of the SHA-1 hash (e.g., a file named a4d5832f... is stored in a4/d5/83/a4d5832f...). All of this file hash, path, and size information is stored in an SQLite database.
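
    A sketch of that layout (function name invented for illustration):

    /* Build the corpus path described above: a file named by its SHA-1
     * hash lives under directories named after the first 6 hex digits,
     * e.g. a4d5832f... -> a4/d5/83/a4d5832f...  (name is invented). */
    #include <stdio.h>

    static void corpus_path(char *out, size_t outlen, const char *sha1hex)
    {
        snprintf(out, outlen, "%.2s/%.2s/%.2s/%s",
                 sha1hex, sha1hex + 2, sha1hex + 4, sha1hex);
    }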

    First Pass
    I wrote a Python script that read all the filenames from the database, fed them into a pool of worker processes using Python’s multiprocessing module, and wrote some resulting data for each file back to the SQLite database. My Eee PC has a single-core, hyperthreaded Atom which presents 2 CPUs to the system. Thus, 2 worker threads crunched the corpus. It took a while. It took somewhere on the order of 9 or 10 or maybe even 12 hours. It took long enough that I’m in no hurry to re-run the test and get more precise numbers.

    At least I extracted my initial set of data from the corpus. Or did I?

    Think About The Future

    A few days later, I went back to revisit the data only to notice that the SQLite database was corrupted. To add insult to that bit of injury, the script I had written to process the data was also completely corrupted (overwritten with something unrelated to Python code). BTW, this was on a RAID brick configured for redundancy. So that’s strike 3 in my personal dealings with RAID technology.

    I moved the corpus to a different external drive and also verified the files after writing (easy to do since I already had the SHA-1 hashes on record).

    The corrupted script was pretty simple to rewrite, even a little better than before. Then I got to re-run it. However, this run was on a faster machine, a hyperthreaded, quad-core beast that exposes 8 CPUs to the system. The reason I wasn’t too concerned about the poor performance with my Eee PC is that I knew I was going to be able to run it on this monster later.

    So I let the rewritten script rip. The script gave me little updates regarding its progress. As it did so, I ran some rough calculations and realized that it wasn’t predicted to finish much sooner than it would have if I were running it on the Eee PC.

    Limiting Factors
    It had been suggested to me that I/O bandwidth of the external USB drive might be a limiting factor. This is when I started to take that idea very seriously.

    The first idea I had was to move the SQLite database to a different drive. The script records data to the database for every file processed, though it only commits once every 100 UPDATEs, so at least it’s not constantly syncing the disc. I ran before and after tests with a small subset of the corpus and noticed a substantial speedup thanks to this policy change.
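
    The commit policy looks roughly like this (a sketch against the C SQLite API rather than my Python script; the corpus schema and the process_file() helper are hypothetical stand-ins):

    /* Sketch of the batching policy: one transaction per 100 UPDATEs so
     * SQLite isn't syncing the database to disk for every single file.
     * The schema and process_file() are hypothetical. */
    #include <stdio.h>
    #include <sqlite3.h>

    extern int process_file(int file_id);  /* hypothetical worker */

    static void record_results(sqlite3 *db, int n_files)
    {
        char sql[128];
        int i, in_txn = 0;

        for (i = 0; i < n_files; i++) {
            if (!in_txn) {
                sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
                in_txn = 1;
            }
            snprintf(sql, sizeof(sql),
                     "UPDATE corpus SET result=%d WHERE id=%d",
                     process_file(i), i);
            sqlite3_exec(db, sql, NULL, NULL, NULL);
            if (i % 100 == 99) {  /* commit every 100 UPDATEs */
                sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
                in_txn = 0;
            }
        }
        if (in_txn)
            sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);  /* flush the tail */
    }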

    Then I remembered hearing something about "atime", which is access time. Linux filesystems, by default, record the time that a file was last accessed. You can watch this in action by running 'stat <file> ; cat <file> > /dev/null ; stat <file>' and observing that the "Access" field has been updated to NOW(). This also means that every single file that gets read from the external drive still causes an additional write. To avoid this, I started mounting the external drive with '-o noatime', which instructs Linux not to record "last accessed" time for files.

    On the limited subset test, this more than doubled script performance. I then wondered about mounting the external drive as read-only. This had the same performance as noatime. I thought about using both options together but verified that access times are not updated for a read-only filesystem.

    A Note On Profiling
    Once you start accessing files in Linux, those files start getting cached in RAM. Thus, if you profile, say, reading a gigabyte file from a disk and get 31 MB/sec, and then repeat the same test, you’re likely to see the test complete instantaneously. That’s because the file is already sitting in memory, cached. This is useful in general application use, but not if you’re trying to profile disk performance.

    Thus, in between runs, do (as root) 'sync; echo 3 > /proc/sys/vm/drop_caches' in order to wipe caches (explained here).

    Even Better ?
    I re-ran the test using these little improvements. Now it takes somewhere around 5 or 6 hours to run.

    I contrived an artificially large file on the external drive and did some 'dd' tests to measure what the drive could really do. The drive consistently measured a bit over 31 MB/sec. If I could read and process the data at 30 MB/sec, the script would be done in about 95 minutes.

    But it’s probably rather unreasonable to expect that kind of transfer rate for lots of smaller files scattered around a filesystem. However, it can’t be that helpful to have 8 different processes constantly asking the HD for 8 different files at any one time.

    So I wrote a script called stream-corpus.py which simply fetched all the filenames from the database and loaded the contents of each in turn, leaving the data to be garbage-collected at Python’s leisure. This test completed in 174 minutes, just shy of 3 hours. I computed an average read speed of around 17 MB/sec.

    Single-Reader Script
    I began to theorize that if I only have one thread reading, performance should improve greatly. To test this hypothesis without having to do a lot of extra work, I cleared the caches and ran stream-corpus.py until 'top' reported that about half of the real memory had been filled with data. Then I let the main processing script loose on the data. As both scripts were using sorted lists of files, they iterated over the filenames in the same order.

    Result: The processing script tore through the files that had obviously been cached thanks to stream-corpus.py, then degraded drastically once it caught up to the streaming script.

    Thus, I was motivated to reorganize the processing script just slightly. Now, there is a reader thread which reads each file and stuffs the name of the file into an IPC queue that one of the worker threads can pick up and process. Note that no file data is exchanged between threads. No need—the operating system is already implicitly holding onto the file data, waiting in case someone asks for it again before something needs that bit of RAM. Technically, this approach accesses each file multiple times. But it makes little practical difference thanks to caching.

    Result: About 183 minutes to process the complete corpus (which works out to a little over 16 MB/sec).

    Why Multiprocess
    Is it even worthwhile to bother multithreading this operation? Monitoring the whole operation via 'top', most instances of the processing script are barely using any CPU time. Indeed, it’s likely that only one of the worker threads is doing any work most of the time, pulling a file out of the IPC queue as soon as the reader thread triggers its load into cache. Right now, the processing is usually pretty quick. There are cases where the processing (external program) might hang (one of the reasons I’m running this project is to find those cases); the multiprocessing architecture at least allows other processes to take over until a hanging process is timed out and killed by its monitoring process.

    Further, the processing is pretty simple now but is likely to get more intensive in future iterations. Plus, there’s the possibility that I might move everything onto a more appropriately-connected storage medium which should help alleviate the bottleneck bravely battled in this post.

    There’s also the theoretical possibility that the reader thread could read too far ahead of the processing threads. Obviously, that’s not too much of an issue in the current setup. But to guard against it, the processes could share a variable that tracks the total number of bytes that have been processed. The reader thread adds filesizes to the count while the processing threads subtract file sizes. The reader thread would delay reading more if the number got above a certain threshold.
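
    A sketch of that guard, using a shared counter (C11 atomics here purely for illustration; names and the 512 MB threshold are invented, and my script would do the equivalent with a multiprocessing shared value):

    /* Sketch of the read-ahead guard described above.  The reader adds
     * a file's size before loading it; a worker subtracts the size when
     * it finishes processing that file. */
    #include <stdatomic.h>
    #include <unistd.h>

    static atomic_long bytes_ahead;
    static const long READ_AHEAD_LIMIT = 512L * 1024 * 1024;  /* hypothetical cap */

    void reader_wait_for_room(long file_size)  /* called by the reader */
    {
        while (atomic_load(&bytes_ahead) + file_size > READ_AHEAD_LIMIT)
            usleep(10000);                     /* let the workers catch up */
        atomic_fetch_add(&bytes_ahead, file_size);
    }

    void worker_finished(long file_size)       /* called by a worker */
    {
        atomic_fetch_sub(&bytes_ahead, file_size);
    }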

    Leftovers
    I wondered if the order of accessing the files mattered. I didn’t write them to the drive in any special order. The drive is formatted with Linux ext3. I ran stream-corpus.py on all the filenames sorted by filename (remember the SHA-1 naming convention described above) and also by sorting them randomly.

    Result: It helps immensely for the filenames to be sorted. The sorted variant was a little more than twice as fast as the random variant. Maybe it has to do with accessing all the files in a single directory before moving on to another directory.

    Further, I have long been under the impression that the best read speed you can expect from USB 2.0 is 27 Mbytes/sec (even though 480 Mbit/sec is bandied about in relation to the spec). This comes from profiling I performed with an external enclosure that supports both USB 2.0 and FireWire-400 (and eSATA). FW-400 was able to read the same file at nearly 40 Mbytes/sec that USB 2.0 could only read at 27 Mbytes/sec. Other sources I have read corroborate this number. But this test (using different hardware) achieved over 31 Mbytes/sec.

  • Heroic Defender of the Stack

    27 January 2011, by Multimedia Mike — Programming

    Problem Statement

    I have been investigating stack smashing and countermeasures (stack smashing prevention, or SSP). Briefly, stack smashing occurs when a function allocates a static array on the stack and writes past the end of it, onto other local variables and eventually onto other functions’ stack frames. When it comes time to return from the function, the return address has been corrupted and the program ends up some place it really shouldn’t. In the best case, the program just crashes; in the worst case, a malicious party crafts code to exploit this malfunction.

    Further, debugging such a problem is especially obnoxious because by the time the program has crashed, it has already trashed any record (on the stack) of how it got into the errant state.

    Preventative Countermeasure

    GCC has had SSP since version 4.1. The compiler inserts SSP as additional code when the -fstack-protector command line switch is specified. Implementation-wise, SSP basically inserts a special value (the literature refers to this as the ’canary’, as in "canary in the coal mine") at the top of the stack frame when entering the function, and code before leaving the function to make sure the canary didn’t get stepped on. If something happens to the canary, the program is immediately aborted with a message to stderr about what happened. Further, gcc’s man page on my Ubuntu machine proudly trumpets that this functionality is enabled by default ever since Ubuntu 6.10.
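
    Conceptually, the inserted code behaves something like this hand-written equivalent (only a sketch; gcc’s real instrumentation works at the instruction level and differs in detail, though __stack_chk_guard and __stack_chk_fail() are the actual glibc symbols involved):

    /* Conceptual sketch of what -fstack-protector arranges.  The guard
     * value is copied above the buffer on entry and re-checked on exit;
     * __stack_chk_fail() prints the message and aborts. */
    extern unsigned long __stack_chk_guard;
    extern void __stack_chk_fail(void);

    void instrumented_function(void)
    {
        unsigned long canary = __stack_chk_guard;  /* placed above the buffer */
        char buffer[64];

        /* ... function body that might overrun buffer ... */
        (void)buffer;

        if (canary != __stack_chk_guard)           /* canary stepped on? */
            __stack_chk_fail();                    /* "*** stack smashing detected ***" */
    }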

    And that’s really all there is to it. Your code is safe from stack smashing by default. Or so the hand-wavy documentation would have you believe.

    Not exactly

    Exercising the SSP

    I wanted to see the SSP in action to make sure it was a real thing. So I wrote some code that smashes the stack in pretty brazen ways so that I could reasonably expect to trigger the SSP (see later in this post for the code). Here’s what I learned that wasn’t in any documentation:

    SSP is only emitted for functions that have static arrays of 8-bit data (i.e., [unsigned] chars). If you have static arrays of other data types (like, say, 32-bit ints), those are still fair game for stack smashing.

    Evaluating the security vs. speed/code size trade-offs, it makes sense that the compiler wouldn’t apply this protection everywhere (I can only muse about how my optimization-obsessed multimedia hacking colleagues would absolutely freak out if this code were unilaterally added to all functions). So why are only static char arrays deemed to be "vulnerable objects" (the wording that the gcc man page uses)? A security hacking colleague suggested that this is probably due to the fact that the kind of data which poses the highest risk is arrays of 8-bit input data from, e.g., network sources.

    The gcc man page also lists an option, -fstack-protector-all, that is supposed to protect all functions. The man page’s definition of "all functions" perhaps differs from my own, since invoking the option does not differ in result from plain, vanilla -fstack-protector.

    The Valgrind Connection

    "Memory trouble ? Run Valgrind !" That may as well be Valgrind’s marketing slogan. Indeed, it’s the go-to utility for finding troublesome memory-related problems and has saved me on a number of occasions. However, it must be noted that it is useless for debugging this type of problem. If you understand how Valgrind works, this makes perfect sense. Valgrind operates by watching all memory accesses and ensuring that the program is only accessing memory to which it has privileges. In the stack smashing scenario, the program is fully allowed to write to that stack space ; after all, the program recently, legitimately pushed that return value onto the stack when calling the errant, stack smashing function.

    Valgrind embodies a suite of tools. My idea for an addition to this suite would be a mechanism which tracks return values every time a call instruction is encountered. The tool could track the return values in a separate stack data structure, though this might have some thorny consequences for some more unusual program flows. Instead, it might track them in some kind of hash/dictionary data structure and warn the programmer whenever a ’ret’ instruction is returning to an address that isn’t in the dictionary.
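
    Sketched in C, the bookkeeping for such a tool might look like this (everything here is invented for illustration; a real tool would hook call/ret instructions through Valgrind’s instrumentation machinery, and a toy hash like this one would suffer collisions):

    /* Invented sketch of the proposed check: remember return addresses
     * at 'call' time, complain at 'ret' time if the target was never
     * recorded. */
    #include <stdio.h>

    #define SLOTS 65536

    static const void *known_returns[SLOTS];

    static unsigned slot_of(const void *addr)
    {
        return ((unsigned long)addr >> 2) % SLOTS;  /* toy hash */
    }

    void on_call(const void *return_addr)  /* at each call instruction */
    {
        known_returns[slot_of(return_addr)] = return_addr;
    }

    void on_ret(const void *target)        /* at each ret instruction */
    {
        if (known_returns[slot_of(target)] != target)
            fprintf(stderr, "suspicious return to %p\n", target);
    }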

    Simple Stack Smashing Code

    Here’s the code I wrote to test exactly how SSP gets invoked in gcc. Compile with ’gcc -g -O0 -Wall -fstack-protector-all -Wstack-protector stack-fun.c -o stack-fun’.

    stack-fun.c:

    /* keep outside of the stack frame */
    static int i;

    void stack_smasher32(void)
    {
        int buffer32[8];
        // uncomment this array and compile without optimizations
        // in order to force this function to compile with SSP
        // char buffer_to_trigger_ssp[8];

        for (i = 0; i < 50; i++)
            buffer32[i] = 0xA5;
    }

    void stack_smasher8(void)
    {
        char buffer8[8];
        for (i = 0; i < 50; i++)
            buffer8[i] = 0xA5;
    }

    int main()
    {
        // stack_smasher8();
        stack_smasher32();
        return 0;
    }

    The above incarnation should just produce the traditional "Segmentation fault". However, uncommenting and executing stack_smasher8() in place of stack_smasher32() should result in "*** stack smashing detected ***: ./stack-fun terminated", followed by the venerable "Segmentation fault".

    As indicated in the comments for stack_smasher32(), it’s possible to trick the compiler into emitting SSP for a function by inserting an array of at least 8 bytes (any less and SSP won’t be emitted, as documented, unless gcc’s ssp-buffer-size parameter is tweaked, e.g., with '--param ssp-buffer-size=4'). This has to be compiled with no optimization at all (-O0) or else the compiler will (quite justifiably) optimize away the unused buffer and omit SSP.

    For reference, I ran my tests on Ubuntu 10.04.1 with gcc 4.4.3 compiling the code for both x86_32 and x86_64.