
Revisiting the Belco Alpha-400
26 August 2010, by Multimedia Mike — General

Relieved of the primary FATE maintenance duties, I decided to dust off my MIPS-based Belco Alpha-400 and try to get it doing FATE cycles. And just as I was about to get FATE running, I saw that Mans had already gotten his MIPS-based Popcorn Hour device to run FATE. But here are my notes anyway.
Getting A Prompt
For my own benefit, I made a PDF to remind me precisely how to get a root prompt on the Alpha-400. The 'jailbreak' expression seems a little juvenile to me, but it appears to be in vogue right now.

Toolchain
When I last tinkered with the Alpha-400, I was trying to build a toolchain that could build binaries to run on the unit's MIPS chip, to no avail. Sometime last year, MichaelK put together x86_32-hosted toolchains that are able to build mipsel 32-bit binaries for Linux 2.4 and 2.6. The Alpha-400 uses a 2.4 kernel, and the corresponding toolchain works famously for building current FFmpeg (--disable-devices is necessary for building).

FATE Samples
Next problem: making the FATE suite available to the Alpha-400. I copied all of the FATE suite samples onto a VFAT-formatted SD card. The filename case is not preserved for all files, which confounds me since it is preserved in other cases. I tried formatting the card for ext3, but the Alpha-400 would not mount it, even though /proc/filesystems lists ext3 (perhaps it supports an older version of ext3?).

Alternative: copy all of the FATE samples to the device's rootfs. Space will be a little tight, though. Then again, there is over 600 MB of space free; I misread earlier and thought there were only 300 MB free.
Remote Execution
To perform FATE cycles on a remote device, it helps to be able to SSH into that remote device. I don't even want to know how complicated it would be to build OpenSSH for the device. However, the last time I brought up this topic, I learned about a lighter-weight SSH replacement called Dropbear. It turns out that Dropbear runs great on this MIPS computer.

Running FATE Remotely
I thought all the pieces would be in place to run FATE at this point. However, there is one more issue: running FATE on a remote system requires that the host and the target share a filesystem somehow. My personal favorite remote filesystem method is sshfs, which is supposed to work wherever there is an SSH server. That's not entirely true, though: sshfs also requires sftp-server to be installed on the server side, a program that Dropbear does not currently provide.

I'm not even going to think about getting Samba or NFS server software installed on the Alpha-400. According to the unit's /proc/filesystems file, nfs is a supported filesystem. I hate setting up NFS but may see if I can get that working anyway.
Residual Weirdness
The unit comes with the venerable BusyBox program (BusyBox v1.4.1 (2007-06-01 20:37:18 CST) multi-call binary) for most of its standard command line utilities. I noticed a quirk where BusyBox's md5sum gives weird hex characters. This might be a known/fixed issue.

Another item is that the Alpha-400's /dev/null file only has rwxr-xr-x permissions by default. This caused trouble when I first tried to scp via Dropbear as a newly created, unprivileged user.
Of ctors and dtors

18 February 2011, by Multimedia Mike — Programming, Sega Dreamcast

I haven't given up on Sega Dreamcast programming. I was able to compile a bunch of homebrew code for the DC many years ago, but I can't make it work anymore. Again, I was working with a purpose-built, open source RTOS named KallistiOS (or KOS). I can make the programs compile but not run. I had ELF files left over from years ago which still executed, but when I tried to build new ELF files, no luck: the programs crashed before even reaching my main() function.
I found the problem: ELF files are composed of a number of sections, and two of these sections are named '.ctors' and '.dtors', which stand for constructors and destructors. The KOS RTOS performs a manual traversal of the .ctors section during program initialization, and this is where things go bad. The traversal code doesn't seem to account for a .ctors section that only contains a single entry. I commented out the function that does the traversal and programs started to work, at least until it was time to exit the program and return control to the program loader. That's when the counterpart .dtors section traversal code ran and demonstrated the same problem. I'll exhibit the problematic code at the end of this post.
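For context, here is the .ctors layout that the traversal code appears to assume; this is only my reading of the code exhibited below, not anything taken from KOS documentation:

/* Assumed .ctors layout, as implied by the traversal code below:
 *
 *   ctor_list[0]     = (fptr) -1   <- sentinel placed by the KOS startup object
 *   ctor_list[1..n]  = constructor function pointers collected by the linker
 *   ctor_list[n+1]   = 0           <- terminator expected to come from crtend
 *
 * arch_ctors() scans forward from ctor_list + 1 until it finds the 0
 * terminator, then calls the entries in reverse order.  If the linker emits
 * the section without that terminator, or with nothing after the sentinel,
 * the forward scan walks off the end of the section. */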
So I’m finally tinkering with Sega Dreamcast programming once again and with a slightly better grasp of software engineering than the first time I did this.
Portable and Compatible C?

If nothing else, this low-level embedded stuff exposes you to some serious toolchain arcana, the likes of which you will likely never see working strictly in the desktop arena.

Still, this exercise makes me wonder why C code from a decade ago doesn't compile reliably now. Part of it is because gcc has gotten stricter about the syntax it will accept. In the case of this specific crashing problem, I suspect it comes down to a difference in the way the linker generates the final ELF file. I've written a list of items I have had to modify in the KOS codebase in order to get it to compile on more recent gcc versions. I wonder if it would be worth publishing the specifics, or if anyone would ever find the information useful? Oh, who am I kidding? Of course I'll write it up, perhaps publish a new version of the code, if only because that's the best chance I have of finding my own work again some years down the road.
Problematic C Code
See if this code makes any sense to you. It somehow traverses a list of 32-bit function pointers (in different directions, depending on constructors or destructors), executing each in turn. However, it appears to fall over if the list of pointers consists of a single entry.
C:

typedef void (*fptr)(void);

static fptr ctor_list[1] __attribute__((section(".ctors"))) = { (fptr) -1 };
static fptr dtor_list[1] __attribute__((section(".dtors"))) = { (fptr) -1 };

/* Call this to execute all ctors */
void arch_ctors() {
    fptr *fpp;

    /* Run up to the end of the list (defined by crtend) */
    for (fpp = ctor_list + 1; *fpp != 0; ++fpp)
        ;

    /* Now run the ctors backwards */
    while (--fpp > ctor_list)
        (**fpp)();
}

/* Call this to execute all dtors */
void arch_dtors() {
    fptr *fpp;

    /* Do the dtors forwards */
    for (fpp = dtor_list + 1; *fpp != 0; ++fpp)
        (**fpp)();
}
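For what it's worth, one defensive rewrite would bound the scan explicitly rather than trusting that a 0 terminator follows the sentinel. The sketch below is only illustrative and is not the fix KOS eventually adopted: it reuses the fptr typedef above and assumes, hypothetically, that the linker script exports __ctors_start/__ctors_end symbols delimiting the .ctors section.

/* Sketch only: bounded .ctors traversal under the assumptions stated above. */
extern fptr __ctors_start[];   /* hypothetical: first entry of .ctors   */
extern fptr __ctors_end[];     /* hypothetical: one past the last entry */

void arch_ctors_bounded(void) {
    fptr *fpp = __ctors_end;

    /* Walk backwards through the section, never leaving it, and skip the
     * -1 sentinel and the 0 terminator if they are present. */
    while (fpp > __ctors_start) {
        --fpp;
        if (*fpp == (fptr) 0 || *fpp == (fptr) -1)
            continue;
        (**fpp)();
    }
}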
Basic Video Palette Conversion
How do you take a 24-bit RGB image and convert it to an 8-bit paletted image for the purpose of compression using a codec that requires 8-bit input images? Seems simple enough, and that's what I'm tackling in this post.
Ask FFmpeg/Libav To Do It
Ideally, FFmpeg/Libav should be able to handle this automatically. Indeed, FFmpeg used to be able to, at least at the time I wrote this post about ZMBV and was unhappy with FFmpeg's default results. Somewhere along the line, FFmpeg and Libav lost the ability to do this. I suspect it got removed during some swscale refactoring.

Still, there's no telling if the old system would have computed palettes correctly for QuickTime files.
Distance Approach
When I started writing my SMC video encoder, I needed to convert RGB (from PNG files) to PAL8 colorspace. The path of least resistance was to match the pixels in the input image to the default 256-color palette that QuickTime assumes (and which is hardcoded into FFmpeg/Libav).

How to perform the matching? Find the palette entry that is closest to a given input pixel, where "closest" is the minimum distance as computed by the usual distance formula (square root of the sum of the squares of the diffs of all the components).
That means for each pixel in an image, check the pixel against 256 palette entries (early termination is possible if an acceptable threshold is met). As you might imagine, this can be a bit time-consuming. I wondered about a faster approach...
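As a rough illustration of that brute-force search (my own sketch with assumed names, not the SMC encoder's actual code), note that comparing squared distances picks the same winner, so no square root is needed:

#include <stdint.h>

/* Return the index of the palette entry closest to (r, g, b); pal is
 * assumed to hold 256 entries of 3 bytes each (R, G, B). */
static int closest_palette_entry(uint8_t pal[][3],
                                 uint8_t r, uint8_t g, uint8_t b)
{
    int i, best_index = 0;
    uint32_t best_dist = UINT32_MAX;

    for (i = 0; i < 256; i++) {
        int dr = pal[i][0] - r;
        int dg = pal[i][1] - g;
        int db = pal[i][2] - b;
        uint32_t dist = dr * dr + dg * dg + db * db;
        if (dist < best_dist) {
            best_dist = dist;
            best_index = i;
            if (dist == 0)      /* exact match; early termination */
                break;
        }
    }
    return best_index;
}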
Lookup Table
I think this is the approach that FFmpeg used to use, but I went and derived it for myself after studying the default QuickTime palette table. There's a pattern there: all of the RGB entries are combinations of six values (0x00, 0x33, 0x66, 0x99, 0xCC, and 0xFF). If you mix and match these for red, green, and blue values, you come up with 6 * 6 * 6 = 216 different colors. This happens to be identical to the web-safe color palette.

The first (0th) entry in the table is (FF, FF, FF), followed by (FF, FF, CC), (FF, FF, 99), and on down to (FF, FF, 00); at that point the green component gets knocked down a step and the next color is (FF, CC, FF). The first 36 palette entries in the table all have a red component of 0xFF. Thus, if an input RGB pixel has a red color closest to 0xFF, it must map to one of those first 36 entries.
I created a table which maps component values 0..255 to values from 5..0. Each of the R, G, and B components of an input pixel is used to index into this table, deriving 3 indices ri, gi, and bi. Finally, the index into the palette table is given by:
index = ri * 36 + gi * 6 + bi
For example, the pixel (0xFE, 0xFE, 0x01) would yield ri, gi, and bi values of 0, 0, and 5. Therefore:
index = 0 * 36 + 0 * 6 + 5
The palette index is 5, which maps to color (0xFF, 0xFF, 0x00).
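Here is a small sketch of that table approach in C (again with assumed names, not code from FFmpeg or from my encoder). The table construction reproduces the same 0..255 to 5..0 thresholds used by the Python script later in this post:

#include <stdint.h>

static uint8_t component_to_level[256];   /* maps a component value 0..255 to a level 5..0 */

/* Each component snaps to the nearest of 0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF;
 * 0xFF maps to level 0 and 0x00 maps to level 5, matching the palette order. */
static void init_component_table(void)
{
    int v;
    for (v = 0; v < 256; v++)
        component_to_level[v] = 5 - ((v + 0x19) / 0x33);
}

/* Index into the 216-color cube: ri * 36 + gi * 6 + bi. */
static int palette_index(uint8_t r, uint8_t g, uint8_t b)
{
    return component_to_level[r] * 36 +
           component_to_level[g] * 6 +
           component_to_level[b];        /* e.g. (0xFE, 0xFE, 0x01) -> 5 */
}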
Validation
So I was pretty pleased with myself for coming up with that. Now, ideally, swapping out one algorithm for another in my SMC encoder should yield identical results. That wasn't the case, initially.

One problem is that the regulation QuickTime palette actually has 40 more entries above and beyond the typical 216-entry color cube (rounding out the grand total of 256 colors). Thus, using the distance approach with the full default table provides for a little more accuracy.
However, there still seems to be a problem. Let's check our old standby, the Big Buck Bunny logo image:

Image 1: distance approach using the full 256-color QuickTime default palette
Image 2: distance approach using the 216-color palette
Image 3: table lookup approach using the 216-color palette
I can’t quite account for that big red splotch there. That’s the most notable difference between images 1 and 2 and the only visible difference between images 2 and 3.
To prove to myself that the distance approach is equivalent to the table approach, I wrote a Python script to iterate through all possible RGB combinations and verify the equivalence. If you're not up on your base 2 math, that's 2^24 or 16,777,216 colors to run through. I used Python's multiprocessing module to great effect and really maximized a Core i7 CPU with 8 hardware threads.
So I’m confident that the palette conversion techniques are sound. The red spot is probably attributable to a bug in my WIP SMC encoder.
Source Code
Update August 23, 2011: Here's the Python code I used for proving equivalence between the 2 approaches. In terms of leveraging multiple CPUs, it's possibly the best program I have written to date.

PYTHON:
#!/usr/bin/python

from multiprocessing import Pool

palette = []
pal8_table = []

def process_r(r):
    # tally how many pixels map to each of the 216 cube entries for this red value
    counts = []
    for i in xrange(216):
        counts.append(0)

    print "r = %d" % (r)
    for g in xrange(256):
        for b in xrange(256):
            # distance approach: find the nearest palette entry by squared distance
            min_dsqrd = 0xFFFFFFFF
            best_index = 0
            for i in xrange(len(palette)):
                dr = palette[i][0] - r
                dg = palette[i][1] - g
                db = palette[i][2] - b
                dsqrd = dr * dr + dg * dg + db * db
                if dsqrd < min_dsqrd:
                    min_dsqrd = dsqrd
                    best_index = i
            counts[best_index] += 1

            # check if the distance approach deviates from the table-based
            # approach (the table lookup uses the input pixel's own components)
            ri = pal8_table[r]
            gi = pal8_table[g]
            bi = pal8_table[b]
            table_index = ri * 36 + gi * 6 + bi
            if table_index != best_index:
                print "(0x%02X 0x%02X 0x%02X): distance index = %d, table index = %d" % (r, g, b, best_index, table_index)

    return counts

if __name__ == '__main__':
    counts = []
    for i in xrange(216):
        counts.append(0)

    # initialize reference palette
    color_steps = [ 0xFF, 0xCC, 0x99, 0x66, 0x33, 0x00 ]
    for r in color_steps:
        for g in color_steps:
            for b in color_steps:
                palette.append([r, g, b])

    # initialize palette conversion table (component value -> level 5..0)
    for i in range(0, 26):
        pal8_table.append(5)
    for i in range(26, 77):
        pal8_table.append(4)
    for i in range(77, 128):
        pal8_table.append(3)
    for i in range(128, 179):
        pal8_table.append(2)
    for i in range(179, 230):
        pal8_table.append(1)
    for i in range(230, 256):
        pal8_table.append(0)

    # create a pool of worker processes and break up the overall job
    pool = Pool()
    it = pool.imap_unordered(process_r, range(256))
    try:
        while 1:
            partial_counts = it.next()
            for i in xrange(216):
                counts[i] += partial_counts[i]
    except StopIteration:
        pass

    print "index, count, red, green, blue"
    for i in xrange(len(counts)):
        print "%d, %d, %d, %d, %d" % (i, counts[i], palette[i][0], palette[i][1], palette[i][2])