
Media (17)
All six entries: posted 15 September 2011; updated September 2011; Language: English; Type: Audio.
- Matmos - Action at a Distance
- DJ Dolores - Oslodum 2004 (includes (cc) sample of "Oslodum" by Gilberto Gil)
- Danger Mouse & Jemini - What U Sittin' On? (starring Cee Lo and Tha Alkaholiks)
- Cornelius - Wataridori 2
- The Rapture - Sister Saviour (Blackstrobe Remix)
- Chuck D with Fine Arts Militia - No Meaning No
Other articles (111)
-
What exactly does this script do?
18 January 2011
This script is written in bash, so it is easy to run on just about any server; note, however, that it is only compatible with a specific list of distributions (see the list of compatible distributions).
Installing MediaSPIP's dependencies
Its main role is to install all of the software dependencies required on the server side, namely:
the basic tools needed to install the remaining dependencies; the development tools: build-essential (via APT, from the official repositories); (...)
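As a rough illustration of the bootstrap step described above, pulling in the base development tools via APT looks something like this (a minimal sketch; the script's actual package list is longer and lives in the script itself):

    # Refresh the package index, then install the base build toolchain
    # from the official repositories (Debian/Ubuntu-style systems).
    apt-get update
    apt-get install -y build-essential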
-
Automated installation script of MediaSPIP
25 April 2011
To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in bash was created to ease this step on a server running a compatible Linux distribution.
You must have SSH access to your server and a root account in order to use the script, since root privileges are needed to install the dependencies. Contact your hosting provider if you do not have these.
Documentation on using the installation script is available here.
The code of this (...)
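For orientation, a typical run might look like the sketch below; the download URL and script name are hypothetical placeholders rather than the documented ones, so check the MediaSPIP documentation for the real location:

    # Log in as root (root privileges are required to install dependencies).
    ssh root@your-server.example

    # Fetch the all-in-one installer and run it. The URL and filename are
    # hypothetical placeholders; see the MediaSPIP docs for the real ones.
    wget http://example.org/mediaspip_install.sh
    chmod +x mediaspip_install.sh
    ./mediaspip_install.sh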
On other sites (10770)
-
dm365 mpeg4 encoder P-Frames
14 June 2012, by Ulterior
I am encoding video with the TI DM365 MPEG-4 encoder and containerizing it into an MP4 container with ffmpeg, using a dummy FMP4 codec to produce the headers and footers. While the container has been shown to work correctly with a similar Intel-based MPEG-4 encoder, the DM365 output turns into a mosaic whenever P-frames are used at all. Using only I-frames works, but I would like to minimize the amount of data stored.
An example of the result can be viewed here. The settings were 1 I-frame, 9 P-frames.
TI's developers haven't answered my question about this in two days, so I am trying to get help here.
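For reference, the remux step described in the question can be sketched with the ffmpeg command-line tool. File names and the frame rate are placeholders here, and the exact flags depend on the ffmpeg version:

    # Read the raw MPEG-4 elementary stream produced by the DM365 encoder.
    # Raw m4v carries no timestamps, so a frame rate must be supplied;
    # -c:v copy then muxes the bitstream into MP4 without re-encoding.
    ffmpeg -f m4v -r 25 -i dm365_output.m4v -c:v copy output.mp4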
-
Ode to the Gravis Ultrasound
1 August 2011, by Multimedia Mike — General
WARNING: This post is a bunch of nostalgia. Feel free to follow along if you recall the DOS days of the early-to-mid 1990s.
I finally let go of my Gravis Ultrasound MAX sound card a little while ago. It felt like the end of an era for me, even though I had scarcely used the card in recent memory.
The Beginning
What is the Gravis Ultrasound? Only the finest PC sound card from the classic DOS days. Back in the day (the very early 1990s), most consumer PC sound cards were Yamaha OPL FM synthesizers paired with a basic digital-to-analog converter (DAC). Gravis, a company known for game controllers, dared to break with the dominant paradigm of Sound Blaster clones and create a sound card that had 32 digital channels.
I heard about the GUS sometime in 1992 through one of the dominant online services at the time, Prodigy. Through the message boards, I learned of a promotion with Electronic Arts in which customers could pre-order a GUS at a certain discount along with 2 EA games from a selected catalog (with progressive discounts when ordering more games from the list). I know I got the DOS version of PowerMonger; I think the other was Night Shift, though that doesn't seem to be an EA title.
Anyway, 1992 saw many maddening delays of the GUS hardware. Finally, reports of GUS shipments began to trickle into the Prodigy message forums. Then one day in November 1992, mine arrived. Into the 286 machine it went, and a valiant attempt at software installation was made. A friend and I fought with the software late into the evening, trying to make this thing work reasonably. I remember grabbing a pair of old headphones sitting near the computer that were used for an ancient (even for the time) portable radio. That was the only means of sound reproduction we had available at that moment. And it still sounded incredible.
After graduating to progressively superior headphones, I would later return to that original pair only to feel my ears were being physically assaulted. Strange, they sounded fine that first night I was trying to make the GUS work. I guess this was my first understanding that the degree to which one is a snobby audiophile is all a matter of hard-earned experience.
Technology
The GUS was powered by something called a GF1, which was supposed to use a technology called wavetable synthesis. In the early days, I thought (and I wasn't alone in this) that this meant the GF1 chip had a bunch of digitized instrument samples stored in the ASIC. That wasn't it.
However, it did feature 32 digital channels at a time when most PC audio cards had 2 (plus that Yamaha FM synthesizer). There was some hemming and hawing about how the original GUS couldn't drive all 32 channels at a full 44.1 kHz ("CD quality") playback rate. It's true: if 14 channels were enabled, all could be played at 44.1 kHz. Enabling more channels started progressive degradation, and with all 32 channels, each was only playing at around 19 kHz. Still, from my emerging game-programmer perspective, that allowed for 8-channel tracker music and 6 channels of sound effects, all at the vaunted CD level of quality.
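Those two data points are consistent with the per-voice playback rate scaling inversely with the number of active voices. Here is a quick back-of-the-envelope check; the inverse-proportionality model is my inference, not an official GF1 specification:

    # Per-voice playback rate if 14 voices get the full 44100 Hz and the
    # rate scales down inversely as more voices are enabled.
    for voices in 14 20 28 32; do
        echo "$voices voices -> $((44100 * 14 / voices)) Hz"
    done
    # 32 voices -> 19293 Hz, matching the "around 19 kHz" figure above.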
Games and Compatibility
The primary reason to have a discrete sound card was for entertainment applications — ahem, games. GUS support was pretty sketchy out of the gate (ostensibly a major reason for the card's delay). While many sound cards offered Sound Blaster emulation by basically having the same hardware as Sound Blaster cards, the GUS took a software route toward emulating the SB. Doing so required a program called the Sound Blaster Operating System, or SBOS.
Oh, how awesome it was to hear the program exclaim "SBOS installed!" And how harshly it grated on your nerves after the 200th time hearing it, thanks to so many reboots and so much fiddling with options to make your games work. Also, I've always wondered if there's something special about sampling an 's' sound — does it strain the sampling frequency range? Perhaps the phrase was sampled at too low a rate, because the 's' sounds didn't come through very clearly, which is something you notice after hundreds of iterations when there are 3 's' sounds in the phrase.
Fortunately, SBOS became less relevant with the advent of Mega-Em, a separate emulator which intercepted calls to Roland MIDI systems and routed them to the very capable GUS. Roland-supporting games sounded beautiful.
Eventually, more and more DOS games were released with native Gravis support, sometimes with the help of The Miles Sound System (from our friends at Rad Game Tools — you know, the people behind Smacker and Bink). The library changelog is quite the trip down PC memory lane.
An important area where the GUS shined brightly was that of demos and music trackers. The emerging PC demo scene embraced the powerful GUS (aided, no doubt, by Gravis’ sponsorship of the community) and the coolest computer art and music of the time natively supported the card.
Programming
At this point in my life, I was a budding programmer in high school and was fairly intent on programming video games. So far, I had figured out how to make a few blips using a borrowed Sound Blaster card. I went to great lengths to learn how to program the Gravis Ultrasound.
Oh, you kids today, with your easy access to information at the tips of your fingers thanks to Google and the broader internet. I had to track down whatever information I could find through a combination of Prodigy message boards and local dialup BBSes and FidoNet message bases. Gravis was initially tight-lipped about programming information for its powerful card, as was de rigueur for hardware companies (something that largely persists to this day). But Gravis eventually saw an opportunity to one-up the incumbent Creative Labs and released a full SDK for the Ultrasound. I wanted the SDK badly.
So it was early-to-mid 1993. Gravis released an SDK. I heard that it was available on their support BBS. Their BBS with a long-distance phone number. If memory serves, the SDK was only in the neighborhood of 1.5 MB. That takes a long time to transfer via a 2400 baud modem, at a time when long-distance phone charges were still a thing and not insubstantial.
Luckily, they also put the SDK on something called an 'FTP site'. As it happened, about this time I had the opportunity to get some internet access via the local university.
Indeed, my entire motivation for initially wanting to get on the internet was to obtain special programming information. Is that nerdy enough for you?
I see that the GUS SDK is still available via the Gravis FTP site. The file GUSDK222.ZIP is dated 1998 and is less than a megabyte.
Next Generation: CD Support
So I had my original GUS by the end of 1992. That was just the first iteration of the Gravis Ultrasound; the next generation was the GUS MAX. When I was ready to get into the CD-ROM era, this was what I wanted in my computer, because the GUS MAX had CD-ROM support. That is odd to think about now, when all optical drives have SATA interfaces (and (P)ATA interfaces before that): what did CD-ROM compatibility mean back then? I wasn't quite sure. But in early 1995, I headed over to Computer City (R.I.P.) and bought a new GUS MAX and a Sony double-speed CD-ROM drive to install in the family's PC.
About the "CD-ROM compatibility": it seems that there were numerous competing interfaces in the early days of CD-ROM technology. The GUS MAX simply integrated 3 different CD-ROM controllers onto the audio card. This was superfluous to me, since the Sony drive came with an appropriate controller card anyway, though I didn't figure out that the extra controller card was unnecessary until after I installed it. No matter; computers of the day were rife with expansion ports.
(Image: the 3 different CD-ROM controllers on the GUS MAX)
Explaining The Difference
It was difficult to explain the difference in quality to those who didn't really care. Sometime during 1995, I picked up a quasi-promotional CD-ROM called "The Gravis Ultrasound Experience" from Babbage's computer store (remember when that was a thing?). As most PC software had been distributed on floppy disks up until this point, this CD-ROM was an embarrassment of riches: tons of game demos, scene demos, tracker music, and all the latest GUS drivers and support software.
Further, the CD-ROM had a number of Red Book CD audio tracks that illustrated the difference between Sound Blaster cards and the GUS. I remember loaning this to a tech-savvy coworker who disbelieved how awesome the GUS was. The coworker took it home, listened to it, and wholly agreed that the GUS audio sounded better than the SB audio in the comparison — and was thoroughly confused, because she was hearing this audio emanating from her Sound Blaster. It was the difference between real-time and pre-rendered audio, I suppose, but I failed to convey that message. I imagine the same issue comes up even today regarding real-time video rendering vs., e.g., a pre-rendered HD cinematic posted on YouTube.
Regrettably, I can’t find that CD-ROM anymore which leads me to believe that the coworker never gave it back. Too bad, because it was quite the treasure trove.
Aftermath
According to folklore I've heard, Gravis couldn't keep up as the world changed to Windows and failed to deliver decent drivers. Indeed, I remember trying to keep my GUS in service under Windows 95 well into 1998, but I eventually relented and installed some kind of more appropriate sound card that was better supported under Windows.
Of course, audio output capability has been standard issue for any PC for at least 10 years, and many people aren't even aware that discrete sound cards still exist. Real-time audio rendering has become less essential, as full musical tracks can be composed, compressed into PCM format, and delivered with the near-limitless space afforded by optical storage.
A few years ago, it was easy to pick up old GUS cards on eBay for cheap. As of this writing, there are only a few, and they're pricey (but perhaps not selling). Maybe I was just viewing during the trough of no value a few years ago.
Nowadays, of course, anyone interested in studying the old GUS or getting a nostalgia fix need only boot up the always-excellent DOSBox emulator which provides remarkable GUS emulation support.
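If you want to try it, GUS emulation is switched on in the DOSBox configuration file. The option names below are from DOSBox 0.74 as I recall them, and the patch directory is the conventional one, so check your own dosbox.conf:

    # In dosbox.conf: enable GUS emulation and point it at a directory
    # containing the ULTRASND patch set (edit the existing [gus] section).
    [gus]
    gus=true
    gusrate=44100
    gusbase=240
    gusirq=5
    gusdma=3
    ultradir=C:\ULTRASND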
-
Simply beyond ridiculous
For the past few years, various improvements on H.264 have been periodically proposed, ranging from larger transforms to better intra prediction. These finally came together in the JCT-VC meeting this past April, where over two dozen proposals were made for a next-generation video coding standard. Of course, all of these were in very rough-draft form; it will likely take years to filter them down into a usable standard. In the process, they'll pick the most useful features (hopefully) from each proposal and combine them into something a bit more sane. But, of course, it all has to start somewhere.
A number of features were common: larger block sizes, larger transform sizes, fancier interpolation filters, improved intra prediction schemes, improved motion vector prediction, increased internal bit depth, new entropy coding schemes, and so forth. A lot of these are potentially quite promising and resolve a lot of complaints I've had about H.264, so I decided to try out the proposal that appeared the most interesting: the Samsung+BBC proposal (A124), which claims compression improvements of around 40%.
The proposal combines a bouillabaisse of new features, ranging from a 12-tap interpolation filter to 1/12th-pel motion compensation and transforms as large as 64×64. Overall, I would say it's a good proposal, and I don't doubt their results given the sheer volume of useful features they've dumped into it. I was a bit worried about complexity, however, as 12-tap interpolation filters don't exactly scream "fast".
I prepared myself for the slowness of an unoptimized encoder implementation, compiled their tool, and started a test encode with their recommended settings.
I waited. The first frame, an I-frame, completed.
I took a nap.
I waited. The second frame, a P-frame, was done.
I played a game of Settlers.
I waited. The third frame, a B-frame, was done.
I worked on a term paper.
I waited. The fourth frame, a B-frame, was done.
After a full 6 hours, 8 frames had encoded. Yes, at this rate, it would take a full two weeks to encode 10 seconds of HD video. On a Core i7. This is not merely slow; this is over 1000 times slower than x264 on "placebo" mode. This is so slow that it is not merely impractical; it is impossible to even test. This encoder is apparently designed for some sort of hypothetical future computer from space. And word from other developers is that the Intel proposal is even slower.
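To sanity-check that estimate (assuming roughly 25 fps source material, which is my assumption; the frame rate isn't stated):

    # 8 frames in 6 hours works out to 45 minutes per frame.
    # 10 seconds of 25 fps video is 250 frames.
    echo "$((250 * 45 / 60)) hours"   # -> 187 hours, roughly 8 days
    # With any per-frame overhead on top, that lands on the order of the
    # two weeks quoted above.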
This has led me to suspect that there is a great deal of cheating going on in the H.265 proposals. The goal of the proposals, of course, is to pick the best feature set for the next generation video compression standard. But there is an extra motivation : organizations whose features get accepted get patents on the resulting standard, and thus income. With such large sums of money in the picture, dishonesty becomes all the more profitable.
There is a set of rules, of course, to limit how the proposals can optimize their encoders. If different encoders use different optimization techniques, the results will no longer be comparable — remember, they are trying to compare compression features, not methods of optimizing encoder-side decisions. Thus all encoders are required to use a constant quantizer, specified frame types, and so forth. But there are no limits on how slow an encoder can be or what algorithms it can use.
It would be one thing if the proposed encoder were a mere 10 times slower than the current reference; that would be reasonable, given the low level of optimization and the higher complexity of the new standard. But this is beyond ridiculous. With the prize given to whoever can eke out the most PSNR at a given quantizer at the lowest bitrate (with no limits on speed), we're just going to get an arms race of slow encoders, with every company trying to use the most ridiculous optimizations possible, even if they involve encoding the frame 100,000 times over to choose the optimal parameters. And the end result will be as I encountered here: encoders so slow that they are simply impossible to even test.
Such an arms race does little good in optimizing for reality, where we don't have 30 years to encode an HD movie: a feature that gives great compression improvements is useless if it's impossible to optimize for it in a reasonable amount of time. Certainly, once the standard is finalized, practical encoders will be written — but it makes no sense to optimize the standard for a use case that doesn't exist. And even attempting to "optimize" anything is difficult when encoding a few seconds of video takes weeks.
Update: The people involved have contacted me and insist that there was in fact no cheating going on. This is probably correct; the problem appears to be that the rules that were set out were simply not strict enough, so many changes that I would intuitively consider "cheating" are perfectly allowed, and thus everyone can do them.
I would like to apologize if I implied that the results weren't valid; they are — the Samsung+BBC proposal is definitely one of the best, which is why I picked it to test. It's just that I think any situation in which it's impossible to test your own software is unreasonable, and thus the entire situation is an inherently broken one, given the lax rules, the slow baseline encoder, and the absence of restrictions on compute time.