
Media (1)
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
Other articles (84)
-
User profiles
12 April 2011
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
Users can reach the profile editor from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...) -
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" (administration) area of the site.
From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
Each newly added language can still be disabled as long as no object has been created in that language; once one exists, the language is greyed out in the configuration and (...) -
XMP PHP
13 May 2011
According to Wikipedia, XMP stands for:
Extensible Metadata Platform, an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
On other sites (6675)
-
2 GB Should Be Enough For Me
31 August 2010, by Multimedia Mike — General
My new EeePC 1201PN netbook has 2 GB of RAM. Call me shortsighted, but I feel like "that ought to be enough for me". I'm not trying to claim that it ought to be enough for everyone. I am, however, questioning the utility of swap space for those skilled in the art of computing.
(Photo caption: Technology marches on: this ancient 128 MB RAM module is larger than my digital camera's battery charger… and I just realized that comparison doesn't make any sense)
Does anyone else have this issue? It has gotten to the point where I deliberately disable swap partitions on Linux desktops I'm using ('swapoff -a'), and try not to allocate a swap partition during install time. I'm encountering Linux installers that seem to be making it tougher to do this, essentially pleading with you to create a swap partition: "Seriously, you might need 8 total gigabytes of virtual memory one day." I'm of the opinion that if 2 GB of physical memory isn't enough for my normal operation, I might need to re-examine my processes.
In the course of my normal computer usage (which is definitely not normal by the standards of a normal computer user), swap space is just another way for the software to screw things up behind the scenes. In this case, the mistake is performance-related, as the software makes poor decisions about what needs to be kept in RAM.
And then there are the netbook-oriented Linux distributions that insisted upon setting aside as swap 1/2 gigabyte of the already constrained 4 gigabytes of my Eee PC 701's on-board flash memory, never offering the choice to opt out of swap space during installation. Earmarking flash memory for swap space is generally regarded as exceptionally poor form. To be fair, I don't know that SSDs have been all that prevalent in netbooks since the very earliest units in the netbook epoch.
Am I alone in this? Does anyone else prefer to keep all of their memory physical in this day and age?
-
Using gcovr with FFmpeg
6 September 2010, by Multimedia Mike — FATE Server
When I started investigating code coverage tools to analyze FFmpeg, I knew there had to be an easier way to do what I was trying to do (obtain code coverage statistics at a macro level for the entire project). I was hoping there was a way to ask the GNU gcov tool to do this directly. John K informed me in the comments of a tool called gcovr. Like my tool from the previous post, gcovr is a Python script that aggregates data collected by gcov. gcovr proves to be a little more competent than my tool.
Results
Here is the spreadsheet of results, reflecting FATE code coverage as of this writing. All FFmpeg source files are on the same sheet this time, including header files, sorted by percent covered (ascending), then total lines (descending).
Methodology
I wasn't easily able to work with the default output from the gcovr tool, so I modified it into a tool called gcovr-csv, which creates data that spreadsheets can digest more easily.
- Build FFmpeg using '-fprofile-arcs -ftest-coverage' in both the extra cflags and extra ldflags configuration options (e.g. './configure --extra-cflags="-fprofile-arcs -ftest-coverage" --extra-ldflags="-fprofile-arcs -ftest-coverage"')
- 'make'
- 'make fate'
- From the build directory: 'gcovr-csv > output.csv'
- Massage the data a bit, deleting information about system header files (assuming you don't care how much of /usr/include/stdlib.h is covered — 66%, BTW); one way to do this is sketched below
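One possible way to do that massaging step (a hedged sketch: that each row of gcovr-csv's output begins with the source file path is my assumption, and the filter name is hypothetical):

```c
#include <stdio.h>
#include <string.h>

/* Drop CSV rows whose leading file path points into /usr/include,
 * keeping everything else.
 * Usage: ./strip-sys < output.csv > trimmed.csv */
int main(void)
{
    char line[4096];
    while (fgets(line, sizeof(line), stdin)) {
        if (strncmp(line, "/usr/include", 12) != 0)
            fputs(line, stdout);
    }
    return 0;
}
```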
Leftovers
I became aware of some spreadsheet limitations thanks to this tool:
- OpenOffice can't process percent values correctly: it imports the percent data from the CSV file but sorts it alphabetically rather than numerically.
- Google Spreadsheet expects CSV to really be comma-delimited; forget about any other delimiters. Also, line length is an issue, which is why I needed my tool to omit the uncovered line number ranges, which it does in its default state.
-
Announcing the world's fastest VP8 decoder: ffvp8
Back when I originally reviewed VP8, I noted that the official decoder, libvpx, was rather slow. While there was no particular reason that it should be much faster than a good H.264 decoder, it shouldn't have been that much slower either! So, I set out with Ronald Bultje and David Conrad to make a better one in FFmpeg. This one would be community-developed and free from the beginning, rather than the proprietary code-dump that was libvpx. A few weeks ago the decoder was complete enough to be bit-exact with libvpx, making it the first independent free implementation of a VP8 decoder. Now, with the first round of optimizations complete, it should be ready for primetime. I'll go into some detail about the development process, but first, let's get to the real meat of this post: the benchmarks.
We tested on two 1080p clips: Parkjoy, a live-action 1080p clip, and the Sintel trailer, a CGI 1080p clip. Testing was done using "time ffmpeg -vcodec libvpx -i input -vsync 0 -an -f null -" (with "-vcodec vp8" substituted to select ffvp8). We all used the latest SVN FFmpeg at the time of this posting; the last revision optimizing the VP8 decoder was r24471.
As these benchmarks show, ffvp8 is clearly much faster than libvpx, particularly on 64-bit. It's even faster by a large margin on Atom, despite the fact that we haven't even begun optimizing for it. In many cases, ffvp8's extra speed can make the difference between a video that plays and one that doesn't, especially in modern browsers with software compositing engines taking up a lot of CPU time. Want to get faster playback of VP8 videos? The next versions of FFmpeg-based players, like VLC, will include ffvp8. Want to get faster playback of WebM in your browser? Lobby your browser developers to use ffvp8 instead of libvpx. I expect Chrome to switch first, as they already use libavcodec for most of their playback system.
Keep in mind ffvp8 is not “done” — we will continue to improve it and make it faster. We still have a number of optimizations in the pipeline that aren’t committed yet.
Developing ffvp8
The initial challenge, primarily pioneered by David and Ronald, was constructing the core decoder and making it bit-exact to libvpx. This was rather challenging, especially given the lack of a real spec. Many parts of the spec were outright misleading and contradicted libvpx itself. It didn't help that the suite of official conformance tests didn't even cover all the features used by the official encoder! We've already started adding our own conformance tests to deal with this. But I've complained enough in past posts about the lack of a spec; let's get onto the gritty details.
The next step was adding SIMD assembly for all of the important DSP functions. VP8's motion compensation and deblocking filter are by far the most CPU-intensive parts, much the same as in H.264. Unlike H.264, the deblocking filter relies on a lot of internal saturation steps, which are free in SIMD but costly in a normal C implementation, making the plain C code even slower. Of course, none of this is a particularly large problem; any sane video decoder has all this stuff in SIMD.
I tutored Ronald in x86 SIMD and wrote most of the motion compensation, intra prediction, and some inverse transforms. Ronald wrote the rest of the inverse transforms and a bit of the motion compensation. He also did the most difficult part: the deblocking filter. Deblocking filters are always a bit difficult because every one is different. Motion compensation, by comparison, is usually very similar regardless of video format; a 6-tap filter is a 6-tap filter, and most of the variation is just the choice of numbers to multiply by.
The biggest challenge in a SIMD deblocking filter is to avoid unpacking, that is, going from 8-bit to 16-bit values. Many operations in deblocking filters would naively appear to require more than 8-bit precision. A simple example in the case of x86 is abs(a-b), where a and b are 8-bit unsigned integers. The result of "a-b" requires a 9-bit signed integer (it can be anywhere from -255 to 255), so it can't fit in 8 bits. But this is quite possible to do without unpacking: (satsub(a,b) | satsub(b,a)), where "satsub" performs a saturating subtract of the two values. If the difference is positive, it yields the result; if negative, it yields zero. ORing the two together yields the desired absolute value. This requires 4 ops on x86; unpacking would probably require at least 10, including the unpack and pack steps.
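A minimal sketch of this trick with SSE2 intrinsics (illustrative only, not the actual ffvp8 code; the function name and test values are mine):

```c
#include <stdio.h>
#include <stdint.h>
#include <emmintrin.h> /* SSE2; compile with e.g. gcc -msse2 */

/* |a - b| for 16 pairs of unsigned bytes without unpacking to 16-bit.
 * _mm_subs_epu8 is the saturating unsigned subtract ("satsub"): a
 * negative difference clamps to zero, so each subtraction keeps only
 * one side of the difference, and ORing the two sides together yields
 * the absolute value. */
static __m128i abs_diff_u8(__m128i a, __m128i b)
{
    return _mm_or_si128(_mm_subs_epu8(a, b), _mm_subs_epu8(b, a));
}

int main(void)
{
    uint8_t a[16] = { 10, 200, 0, 255, 7, 7, 100, 50, 1, 2, 3, 4, 5, 6, 7, 8 };
    uint8_t b[16] = { 200, 10, 255, 0, 7, 8, 50, 100, 8, 7, 6, 5, 4, 3, 2, 1 };
    uint8_t r[16];

    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    _mm_storeu_si128((__m128i *)r, abs_diff_u8(va, vb));

    for (int i = 0; i < 16; i++)
        printf("%d ", r[i]); /* each lane holds |a[i]-b[i]| */
    printf("\n");
    return 0;
}
```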
After the SIMD came optimizing the C code, which still took a significant portion of the total runtime. One of my biggest optimizations was adding aggressive "smart" prefetching to reduce cache misses. ffvp8 prefetches the reference frames (PREVIOUS, GOLDEN, and ALTREF)… but only the ones that have been used reasonably often in the current frame. This lets us prefetch everything we need without prefetching things that we probably won't use. libvpx very often encodes frames that almost never (but not quite never) use GOLDEN or ALTREF, so this optimization greatly reduces time spent prefetching in a lot of real videos. There are of course countless other optimizations, too many to list here, such as David's entropy decoder optimizations. I'd also like to thank Eli Friedman for his invaluable help in benchmarking a lot of these changes.
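As a rough illustration of the gating idea (a sketch under assumptions, not ffvp8's actual code: the struct, the usage counters, and the 1/16 threshold are all hypothetical):

```c
#include <stddef.h>
#include <stdint.h>

enum { REF_PREVIOUS, REF_GOLDEN, REF_ALTREF, NUM_REFS };

/* Hypothetical per-frame bookkeeping: how many macroblocks decoded so
 * far have referenced each reference frame. */
typedef struct {
    const uint8_t *ref_data[NUM_REFS];  /* reference frame pixel planes */
    unsigned       ref_count[NUM_REFS]; /* uses so far in this frame    */
    unsigned       mb_count;            /* macroblocks decoded so far   */
} DecoderState;

/* Prefetch the region of each reference frame that the next macroblock
 * is likely to read, but only for references used "reasonably often"
 * so far (here: by at least 1/16 of macroblocks, an arbitrary cutoff).
 * References that are almost never used, as GOLDEN and ALTREF often
 * are in libvpx-encoded streams, then cost no prefetch traffic at all. */
void prefetch_refs(const DecoderState *s, size_t next_mb_offset)
{
    for (int ref = 0; ref < NUM_REFS; ref++) {
        if (s->ref_count[ref] * 16 >= s->mb_count)
            __builtin_prefetch(s->ref_data[ref] + next_mb_offset, 0, 1);
    }
}
```

The appeal of this shape is that the check itself is a couple of cheap integer ops, while a useless prefetch of an unused reference wastes memory bandwidth on every macroblock.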
What next? AltiVec (PPC) assembly is almost nonexistent, with the only functions being David's motion compensation code. NEON (ARM) is completely nonexistent: we'll need that to be fast on mobile devices as well. Of course, all this will come in due time — and as always — patches welcome!
Appendix: the raw numbers
Here are the raw numbers (in fps) for the graphs at the start of this post, with standard error values:
Core i7 620QM (1.6 GHz), Windows 7, 32-bit:
Parkjoy ffvp8: 44.58 ± 0.44
Parkjoy libvpx: 33.06 ± 0.23
Sintel ffvp8: 74.26 ± 1.18
Sintel libvpx: 56.11 ± 0.96

Core i5 520M (2.4 GHz), Linux, 64-bit:
Parkjoy ffvp8: 68.29 ± 0.06
Parkjoy libvpx: 41.06 ± 0.04
Sintel ffvp8: 112.38 ± 0.37
Sintel libvpx: 69.64 ± 0.09

Core 2 T9300 (2.5 GHz), Mac OS X 10.6.4, 64-bit:
Parkjoy ffvp8: 54.09 ± 0.02
Parkjoy libvpx: 33.68 ± 0.01
Sintel ffvp8: 87.54 ± 0.03
Sintel libvpx: 52.74 ± 0.04

Core Duo (2 GHz), Mac OS X 10.6.4, 32-bit:
Parkjoy ffvp8: 21.31 ± 0.02
Parkjoy libvpx: 17.96 ± 0.00
Sintel ffvp8: 41.24 ± 0.01
Sintel libvpx: 29.65 ± 0.02

Atom N270 (1.6 GHz), Linux, 32-bit:
Parkjoy ffvp8: 15.29 ± 0.01
Parkjoy libvpx: 12.46 ± 0.01
Sintel ffvp8: 26.87 ± 0.05
Sintel libvpx: 20.41 ± 0.02