Breaking Eggs And Making Omelettes
A blog dealing with technical multimedia matters, binary reverse engineering, and the occasional bit of video game hacking.
Resurrecting SCD
August 6, 2010, by Multimedia Mike — Reverse Engineering

When I became interested in reverse engineering all the way back in 2000, the first Win32 disassembler I stumbled across was simply called "Win32 Program Disassembler", authored by one Sang Cho. I took to calling it 'scd' for Sang Cho's Disassembler. The original program versions and source code are still available for download. I remember being able to compile v0.23 of the source code with gcc under Unix; v0.25 is a no-go due to its extensive reliance on the Win32 development environment.
I recently wanted to use scd again but had some trouble compiling. As was the case the first time I tried compiling the source code a decade ago, it's necessary to transform line endings from DOS -> Unix using 'dos2unix' (I see that this has been renamed to/replaced by 'fromdos' on Ubuntu).
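The conversion itself is trivial; here is a sketch of what dos2unix/fromdos essentially does under the hood (the file names are made up for illustration):

```shell
# DOS text files end lines with CR+LF; Unix tools expect LF alone.
# Stripping every carriage return byte is the whole job:
printf 'int main()\r\n{\r\n}\r\n' > disasm-dos.c   # stand-in for an scd source file
tr -d '\r' < disasm-dos.c > disasm.c               # drop the CR bytes
```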
Beyond this, it seems that there are some C constructs that used to be valid but are now rejected by gcc. The first looks like this:
C:
return (int) c = *(PBYTE)((int)lpFile + vCodeOffset);
Ahem, "error: lvalue required as left operand of assignment". Removing the "(int)" before the 'c' makes the problem go away. It's a strange way to write a return statement in general. However, 'c' is a global variable that is apparently part of some YACC/BISON-type output.
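For illustration, here is a minimal reconstruction of the construct and its fix. The type names mimic scd's Win32 style, but the function and buffer are my own stand-ins, not the original source:

```c
#include <assert.h>

/* Win32-style typedefs as scd's source would have had them. */
typedef unsigned char BYTE, *PBYTE;

int c;  /* global, as in scd's YACC/BISON-style scanner output */

int next_code_byte(void *lpFile, int vCodeOffset)
{
    /* Original form, rejected by modern gcc with
       "error: lvalue required as left operand of assignment":

           return (int) c = *(PBYTE)((int)lpFile + vCodeOffset);

       The result of a cast is not an lvalue, so it cannot appear on
       the left of '='. Dropping the cast assigns the byte to 'c' and
       returns the assigned value: */
    return c = *(PBYTE)((char *)lpFile + vCodeOffset);
}
```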
The second issue is when a case-switch block has a default label but with no code inside. Current gcc doesn't like that. It's necessary to at least provide a break statement after the default label.
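A hypothetical example of the same pattern (not scd's actual code): a C label must be followed by a statement, so a bare 'default:' right before the closing brace is a syntax error, and 'break;' is the minimal fix.

```c
#include <assert.h>
#include <string.h>

const char *opcode_name(int op)
{
    const char *name = "unknown";
    switch (op) {
    case 0x90:
        name = "nop";
        break;
    case 0xC3:
        name = "ret";
        break;
    default:
        break;  /* gcc rejects 'default:' with no statement after it */
    }
    return name;
}
```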
Finally, the program turns out not to be 64-bit safe. It is necessary to compile it in 32-bit mode (compile and link with the '-m32' flag or build on a 32-bit system). The resulting static 32-bit binary should run fine under a 64-bit kernel.
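Assuming a Debian/Ubuntu-style system with the 32-bit multilib package available, the build might look something like this (the source file name is illustrative, not necessarily what scd's tarball uses):

```shell
# One-time setup on a 64-bit Debian/Ubuntu host (package name may differ elsewhere):
#   sudo apt-get install gcc-multilib

fromdos disasm.c                     # or: dos2unix disasm.c
gcc -m32 -static -o scd disasm.c     # 32-bit compile/link; static so it runs anywhere
file scd                             # should report a 32-bit ELF executable
```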
Alternatively: What are some other Win32 disassemblers that work under Linux?
FATE’s New Look
August 4, 2010, by Multimedia Mike — FATE Server

The FATE main page exposes a lot of data. The manner in which it is presented has always been bounded by my extremely limited web development abilities. I wrestled with whether I should learn better web development skills first and allow that to inform any improved design, or focus on the more useful design and invest my web development learning time towards realizing that design.
Fortunately, Mans solved this conundrum with an elegantly simple solution:
The top of the page displays a status bar that illustrates -- at a glance -- how functional the codebase is. The web page source code identifies this as the failometer. It took me a few seconds to recognize what information that status bar was attempting to convey; maybe it could use a succinct explanation.
Mini-Book Review
Before Mans took over, I thought about this problem quite a bit. I needed inspiration for creating a better FATE main page and aggregating a large amount of data in a useful, easily-digested form. Looking around the web, I see no shortage of methods for visualizing data. I could start shoehorning FATE data into available methods and see what works. But I thought it would be better to take a step back and think about the best way to organize the data. My first clue came a while ago in the form of an xkcd comic: Blogofractal. Actually, the clue came from the mouseover text, which recommended Edward Tufte's "The Visual Display of Quantitative Information".
I ordered this up and plowed through it. It's an interesting read, to be sure. However, I think it illustrates what a book on multimedia and compression technology would look like if authored by yours truly-- a book of technical curiosities from epochs past that discusses little in the way of modern practical application. Tufte's book showed me lots of examples of infographics from decades and even centuries past, but I never concisely learned exactly how to present data such as FATE's main page in a more useful form.
Visualization Blog
More recently, I discovered a blog called Flowing Data, authored by a statistics Ph.D. candidate who purportedly eats, sleeps, and breathes infographics. The post 11 Ways to Visualize Changes Over Time: A Guide offers a good starting point for creating useful data presentations.

I still subscribe to and eagerly read Flowing Data. But I might not have as much use for data visualization now that Mans is on FATE duty.
FATE Under New Management
August 2, 2010, by Multimedia Mike — FATE Server

At any given time, I have between 20 and 30 blog posts in some phase of development. Half of them seem to be contemplations regarding the design and future of my original FATE system and are thus ready for the recycle bin at this point. Mans is a man of considerably fewer words, so I thought I would use a few words to describe the new FATE system that he put together.
Overview
Here are the distinguishing features that Mans mentioned in his announcement message:
- Test specs are part of the ffmpeg repo. They are thus properly versioned, and any developer can update them as needed.
- Support for inexact tests.
- Parallel testing on multi-core systems.
- Anyone registered with FATE can add systems.
- Client side entirely in POSIX shell script and GNU make.
- Open source backend and web interface.
- Client and backend entirely decoupled.
- Anyone can contribute patches.
Client
The FATE build/test client source code is contained in tests/fate.sh in the FFmpeg source tree. The script -- as the extension implies -- is a shell script. It takes a text file full of shell variables, then updates the source code, configures, builds, and tests. It's a remarkably small amount of code, especially compared to my original Python code. Part of this is because most of the testing logic has shifted into FFmpeg itself. The build system knows about all the FATE tests, and all of the specs are now maintained in the codebase (thanks to all who spearheaded that effort-- I think it was Vitor and Mans).

The client creates a report file which contains a series of lines to be transported to the server. The first line has some information about the configuration and compiler, plus the overall status of the build/test iteration. The second line contains './configure' information. Each of the remaining lines contains information about an individual FATE test, mostly in Base64 format.
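As a rough illustration of that report layout, here is a sketch in shell; the exact field order and separators used by the real fate.sh may well differ:

```shell
# Hypothetical sketch of a FATE client report file -- field names and
# separators are illustrative, not fate.sh's actual wire format.
report=fate-report.txt

# Line 1: configuration/compiler info plus overall build/test status
printf 'fate:%s:%s:%s\n' "$(date -u +%Y%m%d)" "x86_64-linux-gcc-4.4" "success" > "$report"

# Line 2: the ./configure invocation used for this run
printf 'config:%s\n' "--enable-gpl --samples=/samples" >> "$report"

# Remaining lines: one per test; free-form test output travels
# Base64-encoded so it survives as a single line of plain text
status=0
printf 'test:fate-aac:%s:%s\n' "$status" "$(printf 'test output' | base64)" >> "$report"
```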
Server
The server source code lives at http://git.mansr.com/?p=fateweb. It is written in Perl and plugs into a CGI-capable HTTP server. Authentication between the client and the server operates via SSH/SSL. In stark contrast to the original FATE server, there is no database component on the backend. The new system maintains information in a series of flat files.
Usurper of FATE
July 31, 2010, by Multimedia Mike — FATE Server

Mans sent a message to the FFmpeg-devel list today:
A new FATE
Mike's FATE system has done a great job over the last few years. It is however beginning to prove inadequate in various ways:

[various shortcomings already dissected at length on this very blog]

To address the above-mentioned issues, I have been working on a replacement system which is now ready to be announced.

Check it out: http://fate.ffmpeg.org/
Considering that he just obsoleted something I've poured a lot of time and energy into over the last 2.5 years, is my first reaction to this news supposed to be unbridled joy? Hey, I'm already on record as stating that I wouldn't mind throwing away all of FATE if there was a better alternative.
I'm pretty sure that, at this point, the original FATE server is practically obsolete. Mans is already testing all of his configurations as well as the configs I test. As soon as the other FATE installations switch over to the new server, I should be able to redirect fate.multimedia.cx -> fate.ffmpeg.org, sell most of my computers, and spend more time with my family.
Thanks, Mans!
Official RealVideo Specifications
July 29, 2010, by Multimedia Mike — General

A little birdie tipped me off to a publicly-accessible URL on the Helix community site (does anyone actually use Helix?) that contains a bunch of specifications for RealVideo 8 and 9. I have been sifting through the documents to see exactly what they contain, since the different files seem to be successive revisions of the same documents. Here are the title, date, and version of each PDF document:
- RNDecoderPerformanceARM.pdf: Decoder Performance on StrongARM and XScale; May 12, 2003; Version 1.1
- rv89_decoder_summary.pdf: RealVideo 8/9 Combo Decoder Summary; October 23, 2002; Version 1.0
- rv9_dec_external_spec_v14.pdf: RealVideo 9 External Specification; November 7, 2003; Version 1.4
- rv8_dec_external_spec_v20.pdf: RealVideo 8 External Specification; September 19, 2002; Version 2.0
- RV8DecoderExternalSpecificationv201.pdf: RealVideo 8 External Specification; October 20, 2006; Version 2.01
- RV8DecoderExternalSpecificationv202.pdf: RealVideo 8 External Specification; April 23, 2007; Version 2.02
- RV8DecoderExternalSpecificationv203.pdf: RealVideo 8 External Specification; July 20, 2007; Version 2.03
- RV8DecoderExternalSpecificationv21.pdf: RealVideo 8 External Specification; September 11, 2007; Version 2.1
- RV9DecoderExternalSpecificationv15.pdf: RealVideo 9 External Specification; January 26, 2002; Version 1.5
- RV9DecoderExternalSpecificationv16.pdf: RealVideo 9 External Specification; August 17, 2005; Version 1.6
- RV9DecoderExternalSpecificationv18.pdf: RealVideo 9 External Specification; September 11, 2007; Version 1.8
Additionally, there is an Excel spreadsheet entitled realvideo-faq.xls that appears to contain some general tech support advice for using Real's official code. There are also 3 ZIP archives which contain profiling information about the official source code (post-processing and entropy decoding top the charts, which is no big surprise).
I guess the latest versions of each document (the ones dated September 11, 2007) are worth mirroring. Unfortunately, those latest document versions use a terrible font.