
Other articles (111)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP. You can, of course, add your own via the form at the bottom of the page.
-
Possible deployments
31 January 2010
Two types of deployment are possible, depending on two factors: the installation method chosen (standalone or farm), and the expected number of daily encodings and the anticipated traffic.
Video encoding is a heavy process that consumes an enormous amount of system resources (CPU and RAM), and all of this must be taken into account. The system is therefore only feasible on one or more dedicated servers.
Single-server version
The single-server version consists of using only one (...)
-
The farm’s regular Cron tasks
1 December 2010
Managing the farm relies on several repetitive tasks, known as Cron tasks, being executed at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance in the shared installation on a regular basis. Combined with a system Cron on the central site of the installation, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
On other sites (6783)
-
NAB 2010 wrapup
15 April 2010
Another year of NAB has come and gone. Making it out of Vegas with some remaining faith in humanity seems like a successful outcome. So, anything worth talking about at the show?
First off, there’s 3D. 3D is The Next Big Thing, and that was obvious to anyone who spent half a second on the show floor. Everything from camera rigs to post-production apps to display technology was all 3D, all the time. I’m not a huge fan of 3D in most cases, but the industry is at least feigning interest.
Luckily, at a show as big as NAB, there’s plenty of other cool stuff to see. So, what struck my fancy?
On the software side, Avid and Adobe were showing new versions of Media Composer and Premiere. Both sounded pretty amazing on paper, but I must say I was somewhat underwhelmed by both in reality. Premiere felt a little rough around the edges - the Mercury Playback Engine wasn’t the sort of next-generation tech that I expected. Media Composer 5 has some nice new tweaks, but it’s still rather Avid-y - which is good for Avid people, less interesting for the rest of us.
In other software news, Blackmagic Design was showing off some of what they’re doing with the DaVinci technology they acquired. Software-only DaVinci Resolve for $999 is a pretty amazing deal, and the demos were quite nice. That said, color correction is an art, so just making the technology cheaper isn’t necessarily going to dramatically change the number of folks who do it well - see Apple’s Color.
Blackmagic also has a pile of new USB 3.0 hardware devices, including the absolutely gorgeous UltraStudio Pro. Makes me pine for USB 3.0 on the Mac.
On the production side, we saw new cameras from just about everyone. To start at the high end, the Arri Alexa was absolutely stunning. Perhaps the nicest digital cinema footage I’ve seen. Not only that, but they’ve worked out a usable workflow, recording to ProRes plus RAW. At the price point they’re promising, the world is going to get a lot more difficult for RED.
Sony’s new XDCAM EX gear is another good step forward for that format. Nothing groundbreaking, but another nice progression. I was kind of hoping we’d see 4:2:2 EX gear from them, but I suppose they need to justify the disc-based formats for a while longer.
The Panasonic AG-AF100 is another interesting camera, bringing Micro Four Thirds into video. The only strange thing is the recording side - AVCHD to SD cards. While I’m thrilled to see them using SD instead of P2, it sure would have been nice to have an AVC-Intra option.
Finally, Canon’s 4:2:2 XF cams are a nice option for the ENG/EFP market. Nothing groundbreaking, aside from the extra color sampling, but it’s a nice step up from what they’ve been doing.
Speaking of Canon, it’s interesting to see the ways that the 5D and 7D have made their way into mainstream filmmaking. At one point, I thought they’d be relegated to the indie community - folks looking for nice DoF on a budget. Instead, they seem to have been adopted by a huge range of productions, from episodic TV to features. While they’re not right for everyone, the price and quality make them an easy choice in many cases.
One of the stars of the show for me was the GoPro, a small waterproof HD camera that ships with a variety of mounts, designed to be used in places where you couldn’t or wouldn’t use a more full featured camera. No LCD, just a record button and a wide angle lens. I bought two.
Those are the things that stand out for me. While there was plenty of interesting stuff to be seen, given the current economic conditions at the University, I wasn’t exactly in a shopping mindset. The show definitely felt more optimistic than it did last year, and companies are again pushing out new products. However, attendance was about 20% lower than in 2008, and that was definitely noticeable on the show floor.
-
Creating A Lossless SMC Encoder
26 April 2011, by Multimedia Mike — General
Look, I can’t explain how or why I come up with this stuff. For some reason, I thought it would be interesting to write a new encoder for the Apple SMC video codec. I can’t even remember why. I just sat down the other day, started writing, and now I have a lossless SMC encoder that I’m not sure what to do with. Maybe this is to be my new thing: writing encoders for marginal multimedia formats.
Introduction
SMC is a vector quantizer (a lossy method), but I decided to attack it from the angle of lossless encoding. Also known as the Apple Graphics Codec, SMC operates on 4x4 blocks in an 8-bit paletted colorspace. Each 4x4 block can be encoded with 1, 2, 4, 8, or 16 colors. Blocks can also be skipped (copied from the previous frame) or copied from blocks rendered immediately prior within the same frame.
Step 1: Validating Infrastructure
The goal of this step is to encode the most braindead SMC frame possible and see if FFmpeg/libav’s QuickTime muxer can create a valid file. I think the simplest frame would be one in which each vector is encoded with the single-color mode, starting with color 0 and incrementing through the palette.
Status: Successful. The only ’trick’ was to set avctx->bits_per_coded_sample to 8. (For fun, this can also be set to 40 (8 | 0x20) to specify a grayscale palette.)
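For reference, a minimal sketch of where that field gets set, following the shape of an FFmpeg encoder init function (the function name and surrounding boilerplate are simplified assumptions here, not the actual patch):

    #include <libavcodec/avcodec.h>

    /* Simplified sketch of an SMC encoder init; only the palette-depth
     * signaling described above is shown. */
    static int smc_encode_init(AVCodecContext *avctx)
    {
        /* Tells the QuickTime muxer this is 8-bit paletted video. */
        avctx->bits_per_coded_sample = 8;

        /* Setting this to 40 (8 | 0x20) instead would flag a grayscale
         * palette. */

        return 0;
    }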
Step 2: Preprocessing
The video frames will arrive at the encoder as 32-bit RGB. These will need to be converted to a paletted colorspace before encoding. I don’t want to use FFmpeg’s default dithering approach, as this will result in a substantial loss of quality as described in this post. I would rather maintain a palette built from colors observed across successive frames, and error out if the total number of unique observed colors ever exceeds 256.
That’s what I would like to do. However, I noticed that FFmpeg/libav’s QuickTime muxer has never taken into account the possibility of encoding palettes. The path of least resistance in this case is to dither the input to match QuickTime’s default 8-bit palette (if a paletted QuickTime file does not specify a palette, a default 1-, 2-, 4-, or 8-bit palette is selected).
Status: Successful, if slow. I definitely need to optimize this step later.
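A sketch of the preferred (not yet implemented) palette strategy, assuming input pixels arrive as packed 32-bit XRGB values; the struct and helper names are hypothetical:

    #include <stdint.h>

    #define MAX_PALETTE 256

    typedef struct {
        uint32_t colors[MAX_PALETTE]; /* unique XRGB values seen so far */
        int      count;
    } ObservedPalette;

    /* Returns the palette index for color, adding it if unseen, or -1
     * once more than 256 unique colors have been observed. */
    static int palette_lookup_or_add(ObservedPalette *pal, uint32_t color)
    {
        for (int i = 0; i < pal->count; i++)
            if (pal->colors[i] == color)
                return i;
        if (pal->count == MAX_PALETTE)
            return -1; /* palette overflow: error out */
        pal->colors[pal->count] = color;
        return pal->count++;
    }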
Step 3: Most Naive Encoding
The most basic encoding is to "encode" each block as a 16-color block. This will actually result in a slightly larger frame size than a raw encoding, since each 4x4 block will be prepended by a byte opcode (0xE0 in this case) to indicate the encoding mode. This should demonstrate that the encoder is functioning at the most basic level.
Status: Successful. Try not to laugh too hard at Big Buck Bunny dithered to an 8-bit palette.
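A sketch of that naive pass for a single block, assuming the pixels are already palette indices in raster order (hypothetical helper, not the actual encoder code):

    #include <stdint.h>

    /* Emit one 4x4 block in 16-color mode: the 0xE0 opcode followed by
     * the 16 palette indices. Returns the advanced output pointer. */
    static uint8_t *encode_block_16color(uint8_t *dst, const uint8_t block[16])
    {
        *dst++ = 0xE0;             /* 16-color mode opcode */
        for (int i = 0; i < 16; i++)
            *dst++ = block[i];     /* one palette index per pixel */
        return dst;
    }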
Step 4: Better Representation
It seems to me that encoding this format (losslessly) will entail performing vector operations on lots of 16-element (4x4-pixel) vectors. These could be done on the frame as-is, but it strikes me as more efficient, and perhaps less error-prone, to rearrange the input images into a vector of vectors (or array of arrays, if you prefer):

    0 1 2 3  w ...
    4 5 6 7  x ...
    8 9 A B  y ...
    C D E F  z ...

becomes

    0: [0 1 2 3 4 5 6 7 8 9 A B C D E F]
    1: [...]
Status: Successful.
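Assuming frames are already 8-bit paletted and the dimensions are multiples of 4, the rearrangement might look like this (helper name is mine):

    #include <stdint.h>

    /* Copy each 4x4 block of a paletted frame into a contiguous
     * 16-element vector; vectors must hold (width/4)*(height/4)*16 bytes. */
    static void frame_to_vectors(const uint8_t *frame, int width, int height,
                                 int stride, uint8_t *vectors)
    {
        int v = 0;
        for (int by = 0; by < height; by += 4)
            for (int bx = 0; bx < width; bx += 4)
                for (int y = 0; y < 4; y++)
                    for (int x = 0; x < 4; x++)
                        vectors[v++] = frame[(by + y) * stride + (bx + x)];
    }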
Step 5: Add Interframe Skip Codes
Time to add a bit of brainpower to the proceedings: on non-keyframes, compare the current vector to the vector at the same position in the previous frame.
Test this by encoding a pair of identical frames. Ideally, all codes should be skip codes.
Status: Successful, though my vector matching function could probably be improved.
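With the vector-of-vectors layout from Step 4, the comparison reduces to a 16-byte memcmp(); a minimal sketch:

    #include <stdint.h>
    #include <string.h>

    /* A block qualifies for a skip code when its vector matches the
     * vector at the same position in the previous frame. */
    static int vector_is_skip(const uint8_t *cur, const uint8_t *prev)
    {
        return memcmp(cur, prev, 16) == 0;
    }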
Step 6: Analyze Blocks For Optimal Color Coding
This is where things get potentially interesting, algorithmically. At least, I need to figure out (or look up) an algorithm to count the unique elements in a vector.
Naive algorithm (i.e., the first thing I can think of; sketched in C after the list):
- initialize a count variable to 0
- initialize an array of 256 boolean flags to false
- for each 8-bit element in the vector:
- if flags[element] is false, set flags[element] to true and increment count
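Rendered directly in C, using the count_distinct() name that appears later in the post; the optional sorted-output parameter anticipates the modification proposed in Step 8:

    #include <stdbool.h>
    #include <stdint.h>

    /* Count unique palette indices in a 16-element vector; optionally
     * store the distinct indices, sorted, into distinct[]. */
    static int count_distinct(const uint8_t vector[16], uint8_t distinct[16])
    {
        bool seen[256] = { false };
        int count = 0;

        for (int i = 0; i < 16; i++) {
            if (!seen[vector[i]]) {
                seen[vector[i]] = true;
                count++;
            }
        }

        /* Walking the flag array in index order yields a sorted list. */
        if (distinct) {
            int n = 0;
            for (int c = 0; c < 256; c++)
                if (seen[c])
                    distinct[n++] = (uint8_t)c;
        }
        return count;
    }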
Status: Successful. Here is the distribution for the 640x360 Big Buck Bunny title (number of vectors with 1 through 16 distinct colors, respectively):
1194 4636 4113 2140 1138 568 325 154 80 36 9 5 2 0 0 0
Or, in pretty graph form (graph not reproduced here), the same data demonstrates that vectors with few distinct elements dominate.
Step 7: Encode Monochrome Blocks
At this point, the structure is starting to come together pretty well. This phase involves encoding a 0x60 opcode and a palette index when the count_distinct() function returns 1.
Status: Absolutely no problem.
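In sketch form, mirroring the 16-color helper above (this emits a single block; packing runs of identical one-color blocks is deferred to Step 9):

    #include <stdint.h>

    /* Emit one 4x4 block in single-color mode: the 0x60 opcode followed
     * by the palette index shared by all 16 pixels. */
    static uint8_t *encode_block_1color(uint8_t *dst, uint8_t color)
    {
        *dst++ = 0x60;  /* 1-color mode opcode */
        *dst++ = color; /* palette index for the whole block */
        return dst;
    }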
Step 8: Encode 2-, 4-, and 8-color Modes
This step is a little more involved. This is where SMC’s 2-, 4-, and 8-color circular palette caches come into play. E.g., when the first 2-color block is encoded, the pair of colors it uses will be inserted into entry 0 of the 2-color cache. During the next 2-color block encoding, if the block uses a pair of colors that already occurs in the cache, the encoding can reference that cache entry. Otherwise, it adds the pair to the next available cache entry, looping back around to 0 as necessary.
I think I should modify the count_distinct() function to also return a 16-byte array that contains a sorted list of the palette indices used in the vector. The color pair cache will contain 256 16-bit ints (with 32-bit ints for the quads and 64-bit ints for the octets). This will allow a slightly faster linear cache search.
Status: The 2-color encoding wasn’t too much trouble, and I was able to adapt it to the 4-color mode pretty quickly afterward. I’m still having trouble with the insane 8-color coding mode, though, so that’s commented out for the time being.
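A sketch of the 2-color cache logic as described, packing each pair into a 16-bit int so the linear search is one integer compare per entry (struct and helper names are mine):

    #include <stdint.h>

    typedef struct {
        uint16_t pairs[256];
        int      filled; /* number of valid entries, saturates at 256 */
        int      next;   /* next entry to overwrite, wraps to 0 */
    } ColorPairCache;

    /* Returns the cache index for the (sorted) color pair, inserting it
     * into the next slot, looping back around to 0, if it is new. */
    static int pair_cache_lookup_or_add(ColorPairCache *cache,
                                        uint8_t c0, uint8_t c1)
    {
        uint16_t packed = (uint16_t)((c0 << 8) | c1);

        for (int i = 0; i < cache->filled; i++)
            if (cache->pairs[i] == packed)
                return i;

        int idx = cache->next;
        cache->pairs[idx] = packed;
        cache->next = (cache->next + 1) & 255;
        if (cache->filled < 256)
            cache->filled++;
        return idx;
    }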
Step 9: Run Encoding and Putting It All Together
For each frame, convert the input pixels to a paletted format via one method or another (match to the default QuickTime palette for the first pass). Then, preprocess each vector to determine the minimum number of elements that can be used to represent it, storing the sorted list of distinct colors in a separate array. The number of elements can be 0 (only for interframes; indicates a skip block), 1, 2, 4, 8, or 16. Also during this phase, for each vector after the first, test whether the vector is the same as the previous vector. If it is, denote this fact in the preprocessed encoding (set the high bit of the element count number).
Finally, pack it into the bytestream. Iterate through the element count array and search for the longest runs of elements that are encoded with the same mode (up to 256 for skip modes, up to 16 for other modes). If the high bit of an element count is set, that indicates that a copy mode can be encoded. Look for the longest run of element counts with the high bit set and encode a copy mode.
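The run search over the per-block element counts might look like this sketch (hypothetical helper; the caps of 256 for skips and 16 for the other modes come from the description above):

    #include <stdint.h>

    /* Length of the run of blocks starting at start whose element count
     * (and thus encoding mode) matches counts[start], capped at max_run. */
    static int mode_run_length(const uint8_t *counts, int num_blocks,
                               int start, int max_run)
    {
        int len = 1;
        while (start + len < num_blocks &&
               len < max_run &&
               counts[start + len] == counts[start])
            len++;
        return len;
    }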
Status: In-process. Will finish this as motivation strikes.
-
FATE Under New Management
2 August 2010, by Multimedia Mike — FATE Server
At any given time, I have between 20 and 30 blog posts in some phase of development. Half of them seem to be contemplations regarding the design and future of my original FATE system, and are thus ready for the recycle bin at this point. Mans is a man of considerably fewer words, so I thought I would use a few words to describe the new FATE system that he put together.
Overview
Here are the distinguishing features that Mans mentioned in his announcement message:
- Test specs are part of the ffmpeg repo. They are thus properly versioned, and any developer can update them as needed.
- Support for inexact tests.
- Parallel testing on multi-core systems.
- Anyone registered with FATE can add systems.
- Client side entirely in POSIX shell script and GNU make.
- Open source backend and web interface.
- Client and backend entirely decoupled.
- Anyone can contribute patches.
Client
The FATE build/test client source code is contained in tests/fate.sh in the FFmpeg source tree. The script — as the extension implies — is a shell script. It takes a text file full of shell variables, updates the source code, configures, builds, and tests. It’s a remarkably small amount of code, especially compared to my original Python implementation. Part of this is because most of the testing logic has shifted into FFmpeg itself: the build system knows about all the FATE tests, and all of the specs are now maintained in the codebase (thanks to all who spearheaded that effort — I think it was Vitor and Mans).
The client creates a report file which contains a series of lines to be transported to the server. The first line has some information about the configuration and compiler, plus the overall status of the build/test iteration. The second line contains ’./configure’ information. Each of the remaining lines contains information about an individual FATE test, mostly in Base64 format.
Server
The server source code lives at http://git.mansr.com/?p=fateweb. It is written in Perl and plugs into a CGI-capable HTTP server. Authentication between the client and the server operates via SSH/SSL. In stark contrast to the original FATE server, there is no database component on the backend. The new system maintains information in a series of flat files.