
Media (91)
-
Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (80)
-
User profiles
12 April 2011
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
-
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To configure support for new languages, go to the "Administrer" (administration) section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" (language management) section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language. Once an object does exist in it, the language becomes greyed out in the configuration and (...)
-
XMP PHP
13 May 2011
According to Wikipedia, XMP stands for:
Extensible Metadata Platform (XMP), an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
On other sites (8465)
-
Playing Video on a Sega Dreamcast
9 March 2011, by Multimedia Mike — Sega Dreamcast
Here’s an honest engineering question: If you were tasked to make compressed video play back on a Sega Dreamcast video game console, what video format would you choose? Personally, I would choose RoQ, the format invented for The 11th Hour computer game and later used in Quake III and other games derived from the same engine. This post explains my reasoning.
Video Background
One of the things I wanted to do when I procured a used Sega Dreamcast back in 2001 was turn it into a set-top video playback unit. This is something that a lot of people tried to do, apparently, to varying degrees of success. Interest would wane in a few years as it became easier and easier to crack an Xbox and install XBMC. The Xbox was much better suited to playing codecs that were getting big at the time, most notably MPEG-4 part 2 video (DivX/XviD).
The Dreamcast, while quite capable when it was released in 1999, was not very well-equipped to deal with an MPEG-type codec. I have recently learned that there are other hackers out there on the internet who are still trying to get the most out of this system. I was contacted for advice about how to make Theora perform better on the Dreamcast.
Interesting thing about consoles and codecs: since you are necessarily distributing code along with your data, you have far more freedom to use whatever codecs you want for your audio and video data. This is why Vorbis and even Theora have seen quite a bit of use in video games, "internet standards" be darned. Thus, when I realized this application had no hard and fast requirement to use Theora, and that it could use any codec that fit the platform, my mind started churning. When I was programming the DC 10 years ago, I didn’t have access to the same wealth of multimedia knowledge that is currently available.
Requirements Gathering
What do we need here?
- Codec needs to run on the Sega Dreamcast; this eliminates codecs for which only binary decoder implementations are available
- Must decode 320x240 video at 30 fps; higher resolutions up to 640x480 would be desirable
- Must deliver decent quality at 12X optical read speeds (DC drive speed)
- There must be some decent, preferably free, encoder readily available; speed of encoding, however, is not important; i.e., "take as long as you need, encoder"
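(For a rough sense of the 12X constraint: assuming the usual ~150 KB/s per 1X CD speed, a sustained 12X read works out to roughly 1.8 MB/s, or about 14 Mbit/s, as the ceiling for the combined audio and video streams before any seek or filesystem overhead.)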
Theora was the go-to codec because it’s just commonly known as "the free, open source video codec". But clearly it’s not suitable for, well... any purpose, really (sorry, easy target; OW! stop throwing things!). VP8/WebM — Theora’s heir apparent — would not qualify either, as my prior experiments have already demonstrated.
Candidates
What did the big boys use for video on the Dreamcast? A lot of games relied on CRI’s Sofdec middleware, which provided MPEG-1 video and a custom ADPCM format. I don’t know if I have ever seen DC games that used MPEG-1 video at a higher resolution than 320x240 (though I have not searched exhaustively). The fact that CRI used a custom ADPCM format for this application may indicate that there wasn’t enough CPU power left over to decode a perceptual, transform-based audio codec alongside the 320x240 video.
A few other DC games used 4X Technologies’ 4XM format. The most notable licensee was Alone in the Dark: The New Nightmare (DC version only; PC version used Bink). This codec was DCT-based but incorporated a 16-bit RGB colorspace into its design, presumably to optimize for applications like game consoles that couldn’t directly handle planar YUV. AITD:TNN’s videos were 640x360, a marked improvement over the typical Sofdec fare. I was about to write off 4XM as a contender due to lack of an encoder, but the encoding tools are preserved on our samples site. A few other issues, though: the FFmpeg decoder doesn’t seem to work correctly as of this writing (and nobody has noticed yet, even though it’s tested via FATE).
What ideas do I have? Right off the bat, I’m thinking vector quantizer (VQ). Vector quantizers are notoriously slow to compress but are blazingly fast to decompress, which is why they were popular in the early days of video compression. First, there’s Cinepak. I fear that might be too simple for this application. Plus, I don’t know if existing (binary-only) compressors are very decent. It seems that they only ever had to handle small videos and I’ve heard that they can really fall over if anything more is demanded of them.
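Before going through the remaining candidates, a quick illustration of why VQ decoding is so cheap; this is a sketch of my own rather than code from any of these codecs. The bitstream is essentially a stream of indices into a codebook of small pixel blocks, so decoding reduces to table lookups and copies. The 2x2 block size, 256-entry codebook and function names below are hypothetical, loosely in the spirit of Cinepak/RoQ-style schemes.

#include <stdint.h>
#include <string.h>

#define BLOCK 2               /* hypothetical 2x2 vector size */
#define CODEBOOK_SIZE 256     /* one bitstream byte selects one vector */

/* A codebook entry is a tiny block of pixels (8-bit luma only, for brevity). */
typedef struct {
    uint8_t pixels[BLOCK * BLOCK];
} vq_entry;

/* Decode one frame: each input byte indexes the codebook and expands to a
 * BLOCK x BLOCK patch of the output image; no arithmetic at all, just copies,
 * which is why VQ decoders run so fast even on weak CPUs. */
static void vq_decode_frame(const uint8_t *indices,
                            const vq_entry codebook[CODEBOOK_SIZE],
                            uint8_t *image, int width, int height)
{
    for (int y = 0; y < height; y += BLOCK) {
        for (int x = 0; x < width; x += BLOCK) {
            const vq_entry *v = &codebook[*indices++];
            for (int row = 0; row < BLOCK; row++)
                memcpy(&image[(y + row) * width + x],
                       &v->pixels[row * BLOCK], BLOCK);
        }
    }
}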
Sorenson Video 1 is another contender. FFmpeg has an encoder (which some allege is better than Sorenson’s original compressor). However, I fear that the wonky algorithm and colorspace might not mesh well with the Dreamcast.
My thinking quickly converged on RoQ. This was designed to run fullscreen (640x480) video on i486-class hardware. While RoQ fundamentally operates in a YUV colorspace, it’s trivial to convert it to any other colorspace during decoding and the image will be rendered in that colorspace. Plus, there are open source encoders available for the format (namely, several versions of Eric Lasota’s Switchblade encoder, one of which lives natively in FFmpeg), as well as the original proprietary encoder.
Which Library?
There are several code choices here: FFmpeg (LGPL), Switchblade (GPL), and the original Quake 3 source code (GPL). There is one more option that I think might be easiest, which is the decoder Dr. Tim created when he reverse engineered the format in the first place. That has a very liberal "do whatever you like, but be nice and give me credit" license (probably qualifies as BSD).
This code is no longer at its original home but the Wayback Machine still had a copy, which I have now mirrored (idroq.tar.gz).
Adaptation
Dr. Tim’s code still compiles and runs great on Linux (64-bit!) with SDL output. I would like to get it ported to the Dreamcast using the same SDL output, which KallistiOS supports. Then, there is the matter of fixing the longstanding chroma bug in the original sample decoder (described here). The decoder also needs to be modified to natively render RGB565 data, as that will work best with the DC’s graphics hardware.
After making the code work, I want to profile it and test whether it can handle full-frame 640x480 playback at 30 frames/second. I will need to contrive a sample to achieve this.
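For the curious, here is roughly what "natively render RGB565" means in practice; this is a sketch of my own, not code from Dr. Tim’s decoder. Each pixel is converted from YCbCr and packed into a single 16-bit word with 5 bits of red, 6 of green and 5 of blue. The fixed-point coefficients are the common full-range BT.601 approximations, and the helper names are made up for illustration.

#include <stdint.h>

/* Clamp an intermediate value into the 0..255 range. */
static inline uint8_t clamp8(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
}

/* Pack 8-bit R, G, B into one 16-bit RGB565 word: RRRRRGGG GGGBBBBB. */
static inline uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3));
}

/* Convert one YCbCr pixel straight to RGB565, skipping any 24-bit buffer. */
static inline uint16_t yuv_to_rgb565(uint8_t y, uint8_t cb, uint8_t cr)
{
    int d = cb - 128, e = cr - 128;
    int r = y + ((91881 * e) >> 16);              /* + 1.402 * Cr           */
    int g = y - ((22554 * d + 46802 * e) >> 16);  /* - 0.344*Cb - 0.714*Cr  */
    int b = y + ((116130 * d) >> 16);             /* + 1.772 * Cb           */
    return pack_rgb565(clamp8(r), clamp8(g), clamp8(b));
}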
Unfortunately, things went off the rails pretty quickly when I tried to get the RoQ decoder ported to DC/KOS. It looks like there’s a bug in KallistiOS’s minimalistic standard C library, or at least a discrepancy with my desktop Linux system. When you read to the end of a file and then seek backwards to someplace that isn’t the end, is the file still in EOF state?
According to my Linux desktop:
open file; feof() = 0
seek to end; feof() = 0
read one more byte; feof() = 1
seek back to start; feof() = 0
According to KallistiOS:
open file; feof() = 0
seek to end; feof() = 0
read one more byte; feof() = 1
seek back to start; feof() = 1
Here’s the seek-test.c program I used to test this issue:
C:
/* Exercise feof() around fseek(), matching the states reported above. */
#include <stdio.h>

int main()
{
    FILE *f;
    unsigned char byte;

    f = fopen("seek_test.c", "r");
    printf("open file; feof() = %d\n", feof(f));

    fseek(f, 0, SEEK_END);
    printf("seek to end; feof() = %d\n", feof(f));

    fread(&byte, 1, 1, f);
    printf("read one more byte; feof() = %d\n", feof(f));

    fseek(f, 0, SEEK_SET);
    printf("seek back to start; feof() = %d\n", feof(f));

    fclose(f);

    return 0;
}
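If the KallistiOS behaviour can’t be fixed at the libc level, a likely portable workaround (my suggestion, not something from the original post) would be to clear the stream state explicitly after seeking away from the end, since C99 says a successful fseek() clears the end-of-file indicator anyway:

    fseek(f, 0, SEEK_SET);
    clearerr(f);   /* explicitly reset the EOF (and error) indicators */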
EOF
Speaking of EOF, I’m about done for this evening.
What codec would you select for this task, given the requirements involved?
-
Anomalie #2002 (In progress): revisions_rubrique - pipeline ’post_edition’ (RETOUR)
24 February 2011, by b b
My own opinion ^^ I did of course test what goes through the pipeline and, as I said on ticket #2001: you can clearly see that the modified fields are present. Quick question: couldn’t you have commented on the previous ticket, or does the fact that I closed it mean you no longer have access to (...)
-
Simply beyond ridiculous
For the past few years, various improvements on H.264 have been periodically proposed, ranging from larger transforms to better intra prediction. These finally came together in the JCT-VC meeting this past April, where over two dozen proposals were made for a next-generation video coding standard. Of course, all of these were in very rough-draft form; it will likely take years to filter it down into a usable standard. In the process, they’ll pick the most useful features (hopefully) from each proposal and combine them into something a bit more sane. But, of course, it all has to start somewhere.
A number of features were common: larger block sizes, larger transform sizes, fancier interpolation filters, improved intra prediction schemes, improved motion vector prediction, increased internal bit depth, new entropy coding schemes, and so forth. A lot of these are potentially quite promising and resolve a lot of complaints I’ve had about H.264, so I decided to try out the proposal that appeared the most interesting: the Samsung+BBC proposal (A124), which claims compression improvements of around 40%.
The proposal combines a bouillabaisse of new features, ranging from a 12-tap interpolation filter to 1/12th-pel motion compensation and transforms as large as 64×64. Overall, I would say it’s a good proposal and I don’t doubt their results given the sheer volume of useful features they’ve dumped into it. I was a bit worried about complexity, however, as 12-tap interpolation filters don’t exactly scream “fast”.
I prepared myself for the slowness of an unoptimized encoder implementation, compiled their tool, and started a test encode with their recommended settings.
I waited. The first frame, an I-frame, completed.
I took a nap.
I waited. The second frame, a P-frame, was done.
I played a game of Settlers.
I waited. The third frame, a B-frame, was done.
I worked on a term paper.
I waited. The fourth frame, a B-frame, was done.
After a full 6 hours, 8 frames had encoded. Yes, at this rate, it would take a full two weeks to encode 10 seconds of HD video. On a Core i7. This is not merely slow; this is over 1000 times slower than x264 on “placebo” mode. This is so slow that it is not merely impractical; it is impossible to even test. This encoder is apparently designed for some sort of hypothetical future computer from space. And word from other developers is that the Intel proposal is even slower.
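(To put rough numbers on that: 8 frames in 6 hours is about 45 minutes per frame, so a 10-second clip of 300 to 600 frames, depending on frame rate, works out to somewhere between nine days and about three weeks of encoding on that machine.)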
This has led me to suspect that there is a great deal of cheating going on in the H.265 proposals. The goal of the proposals, of course, is to pick the best feature set for the next generation video compression standard. But there is an extra motivation: organizations whose features get accepted get patents on the resulting standard, and thus income. With such large sums of money in the picture, dishonesty becomes all the more profitable.
There is a set of rules, of course, to limit how the proposals can optimize their encoders. If different encoders use different optimization techniques, the results will no longer be comparable — remember, they are trying to compare compression features, not methods of optimizing encoder-side decisions. Thus all encoders are required to use a constant quantizer, specified frame types, and so forth. But there are no limits on how slow an encoder can be or what algorithms it can use.
It would be one thing if the proposed encoder was a mere 10 times slower than the current reference; that would be reasonable, given the low level of optimization and higher complexity of the new standard. But this is beyond ridiculous. With the prize given to whoever can eke out the most PSNR at a given quantizer at the lowest bitrate (with no limits on speed), we’re just going to get an arms race of slow encoders, with every company trying to use the most ridiculous optimizations possible, even if they involve encoding the frame 100,000 times over to choose the optimal parameters. And the end result will be as I encountered here: encoders so slow that they are simply impossible to even test.
Such an arms race certainly does little good in optimizing for reality where we don’t have 30 years to encode an HD movie: a feature that gives great compression improvements is useless if it’s impossible to optimize for in a reasonable amount of time. Certainly once the standard is finalized practical encoders will be written — but it makes no sense to optimize the standard for a use-case that doesn’t exist. And even attempting to “optimize” anything is difficult when encoding a few seconds of video takes weeks.
Update: The people involved have contacted me and insist that there was in fact no cheating going on. This is probably correct; the problem appears to be that the rules that were set out were simply not strict enough, making many changes that I would intuitively consider “cheating” to be perfectly allowed, and thus everyone can do it.
I would like to apologize if I implied that the results weren’t valid; they are — the Samsung-BBC proposal is definitely one of the best, which is why I picked it to test with. It’s just that I think any situation in which it’s impossible to test your own software is unreasonable, and thus the entire situation is an inherently broken one, given the lax rules, slow baseline encoder, and no restrictions on compute time.