
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (64)
-
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is operational right away. There is therefore no need to go through a configuration step for this. -
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.
Distribution name      Version name            Version number
Debian                 Squeeze                 6.x.x
Debian                 Wheezy                  7.x.x
Debian                 Jessie                  8.x.x
Ubuntu                 The Precise Pangolin    12.04 LTS
Ubuntu                 The Trusty Tahr         14.04
If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above or send us the necessary fixes to add (...)
On other sites (7945)
-
FFMPEG not enough data (x < y), trying to decode anyway
7 June 2016, by Forest J. Handford
I'm trying to make videos of Direct3D games using a C# app. For non-Direct3D games I stream images from Graphics.CopyFromScreen, which works. When I copy the screen from Direct3D and stream it to FFMPEG I get:
[bmp @ 00000276b0b9c280] not enough data (5070 < 129654), trying to decode anyway
An MP4 file is created, but it is always 0 bytes.
To get screenshots from Direct3D, I am using Justin Stenning’s Direct3DHook. This produces images MUCH bigger than when I get images from Graphics.CopyFromScreen (8 MB vs 136 KB). I’ve tried increasing the buffer (-bufsize) but the number on the left of the error is not impacted.
I’ve tried resizing the image to 1/6th the original. That reduces the number on the right, but does not eliminate it. Even when the number on the right is close to what I have for Graphics.CopyFromScreen I get an error. Here is a sample of the current code :
using System;
using System.Diagnostics;
using System.Threading;
using System.Drawing;
using Capture.Hook;
using Capture.Interface;
using Capture;
using System.IO;
namespace GameRecord
{
public class Video
{
private const int VID_FRAME_FPS = 8;
private const int SIZE_MODIFIER = 6;
private const double FRAMES_PER_MS = VID_FRAME_FPS * 0.001;
private const int SLEEP_INTERVAL = 2;
private const int CONSTANT_RATE_FACTOR = 18; // Lower crf = Higher Quality https://trac.ffmpeg.org/wiki/Encode/H.264
private Image image;
private Capture captureScreen;
private int processId = 0;
private Process process;
private CaptureProcess captureProcess;
private Process launchingFFMPEG;
private string arg;
private int frame = 0;
private Size? resize = null;
/// <summary>
/// Generates the Videos by gathering frames and processing via FFMPEG.
/// </summary>
public void RecordScreenTillGameEnd(string exe, OutputDirectory outputDirectory, CustomMessageBox alertBox, Thread workerThread)
{
AttachProcess(exe);
RequestD3DScreenShot();
while (image == null) ;
Logger.log.Info("Launching FFMPEG ....");
resize = new Size(image.Width / SIZE_MODIFIER, image.Height / SIZE_MODIFIER);
// H.264 can let us do 8 FPS in high res . . . but must be licensed for commercial use.
arg = "-f image2pipe -framerate " + VID_FRAME_FPS + " -i pipe:.bmp -pix_fmt yuv420p -crf " +
CONSTANT_RATE_FACTOR + " -preset ultrafast -s " + resize.Value.Width + "x" +
resize.Value.Height + " -vcodec libx264 -bufsize 30000k -y \"" +
outputDirectory.pathToVideo + "\"";
launchingFFMPEG = new Process
{
StartInfo = new ProcessStartInfo
{
FileName = "ffmpeg",
Arguments = arg,
UseShellExecute = false,
CreateNoWindow = true,
RedirectStandardInput = true,
RedirectStandardError = true
}
};
launchingFFMPEG.Start();
Stopwatch stopWatch = Stopwatch.StartNew(); // creates and starts the Stopwatch instance
do
{
Thread.Sleep(SLEEP_INTERVAL);
} while (workerThread.IsAlive);
Logger.log.Info("Total frames: " + frame + " Expected frames: " + (ExpectedFrames(stopWatch.ElapsedMilliseconds) - 1));
launchingFFMPEG.StandardInput.Close();
#if DEBUG
string line;
while ((line = launchingFFMPEG.StandardError.ReadLine()) != null)
{
Logger.log.Debug(line);
}
#endif
launchingFFMPEG.Close();
alertBox.Show();
}
void RequestD3DScreenShot()
{
captureProcess.CaptureInterface.BeginGetScreenshot(new Rectangle(0, 0, 0, 0), new TimeSpan(0, 0, 2), Callback, resize, (ImageFormat)Enum.Parse(typeof(ImageFormat), "Bitmap"));
}
private void AttachProcess(string exe)
{
Thread.Sleep(300);
Process[] processes = Process.GetProcessesByName(Path.GetFileNameWithoutExtension(exe));
foreach (Process currProcess in processes)
{
// Simply attach to the first one found.
// If the process doesn't have a mainwindowhandle yet, skip it (we need to be able to get the hwnd to set foreground etc)
if (currProcess.MainWindowHandle == IntPtr.Zero)
{
continue;
}
// Skip if the process is already hooked (and we want to hook multiple applications)
if (HookManager.IsHooked(currProcess.Id))
{
continue;
}
Direct3DVersion direct3DVersion = Direct3DVersion.AutoDetect;
CaptureConfig cc = new CaptureConfig()
{
Direct3DVersion = direct3DVersion,
ShowOverlay = false
};
processId = currProcess.Id;
process = currProcess;
var captureInterface = new CaptureInterface();
captureInterface.RemoteMessage += new MessageReceivedEvent(CaptureInterface_RemoteMessage);
captureProcess = new CaptureProcess(process, cc, captureInterface);
break;
}
Thread.Sleep(10);
if (captureProcess == null)
{
ShowUser.Exception("No executable found matching: '" + exe + "'");
}
}
/// <summary>
/// The callback for when the screenshot has been taken
/// </summary>
void Callback(IAsyncResult result)
{
using (Screenshot screenshot = captureProcess.CaptureInterface.EndGetScreenshot(result))
if (screenshot != null && screenshot.Data != null && arg != null)
{
if (image != null)
{
image.Dispose();
}
image = screenshot.ToBitmap();
// image.Save("D3DImageTest.bmp");
image.Save(launchingFFMPEG.StandardInput.BaseStream, System.Drawing.Imaging.ImageFormat.Bmp);
launchingFFMPEG.StandardInput.Flush();
frame++;
}
if (frame < 5)
{
Thread t = new Thread(new ThreadStart(RequestD3DScreenShot));
t.Start();
}
else
{
Logger.log.Info("Done getting shots from D3D.");
}
}
/// <summary>
/// Display messages from the target process
/// </summary>
private void CaptureInterface_RemoteMessage(MessageReceivedEventArgs message)
{
Logger.log.Info(message);
}
}
}
When I search the internet for the error, all I get is the FFMPEG source code, which has not proven to be illuminating. I have been able to save the image directly to disk, which makes me feel like it is not an issue with disposing the data. I have also tried only grabbing one frame, but that produces the same error, which suggests to me it is not a threading issue.
Here is the full sample of stderr :
2016-06-02 18:29:38,046 === ffmpeg version N-79143-g8ff0f6a Copyright (c) 2000-2016 the FFmpeg developers
2016-06-02 18:29:38,047 === built with gcc 5.3.0 (GCC)
2016-06-02 18:29:38,048 === configuration: --enable-gpl
--enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmfx --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
2016-06-02 18:29:38,062 === libavutil 55. 19.100 / 55. 19.100
2016-06-02 18:29:38,063 === libavcodec 57. 30.100 / 57. 30.100
2016-06-02 18:29:38,064 === libavformat 57. 29.101 / 57. 29.101
2016-06-02 18:29:38,064 === libavdevice 57. 0.101 / 57. 0.101
2016-06-02 18:29:38,065 === libavfilter 6. 40.102 / 6. 40.102
2016-06-02 18:29:38,066 === libswscale 4. 0.100 / 4. 0.100
2016-06-02 18:29:38,067 === libswresample 2. 0.101 / 2. 0.101
2016-06-02 18:29:38,068 === libpostproc 54. 0.100 / 54. 0.100
2016-06-02 18:29:38,068 === [bmp @ 000002cd7e5cc280] not enough data (13070 < 8294454), trying to decode anyway
2016-06-02 18:29:38,069 === [bmp @ 000002cd7e5cc280] not enough data (13016 < 8294400)
2016-06-02 18:29:38,069 === Input #0, image2pipe, from 'pipe:.bmp':
2016-06-02 18:29:38,262 === Duration: N/A, bitrate: N/A
2016-06-02 18:29:38,262 === Stream #0:0: Video: bmp, bgra, 1920x1080, 8 tbr, 8 tbn, 8 tbc
2016-06-02 18:29:38,263 === [libx264 @ 000002cd7e5d59a0] VBV bufsize set but maxrate unspecified, ignored
2016-06-02 18:29:38,264 === [libx264 @ 000002cd7e5d59a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
2016-06-02 18:29:38,265 === [libx264 @ 000002cd7e5d59a0] profile Constrained Baseline, level 1.1
2016-06-02 18:29:38,266 === [libx264 @ 000002cd7e5d59a0] 264 - core 148 r2665 a01e339 - H.264/MPEG-4 AVC codec - Copyleft 2003-2016 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=8 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=18.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
2016-06-02 18:29:38,463 === Output #0, mp4, to 'C:\Users\fores\AppData\Roaming\Affectiva\n_Artifacts_20160602_182857\GameplayVidOut.mp4':
2016-06-02 18:29:38,464 === Metadata:
2016-06-02 18:29:38,465 === encoder : Lavf57.29.101
2016-06-02 18:29:38,469 === Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 320x180, q=-1--1, 8 fps, 16384 tbn, 8 tbc
2016-06-02 18:29:38,470 === Metadata:
2016-06-02 18:29:38,472 === encoder : Lavc57.30.100 libx264
2016-06-02 18:29:38,474 === Side data:
2016-06-02 18:29:38,475 === cpb: bitrate max/min/avg: 0/0/0 buffer size: 30000000 vbv_delay: -1
2016-06-02 18:29:38,476 === Stream mapping:
2016-06-02 18:29:38,477 === Stream #0:0 -> #0:0 (bmp (native) -> h264 (libx264))
2016-06-02 18:29:38,480 === [bmp @ 000002cd7e5cc9a0] not enough data (13070 < 8294454), trying to decode anyway
2016-06-02 18:29:38,662 === [bmp @ 000002cd7e5cc9a0] not enough data (13016 < 8294400)
2016-06-02 18:29:38,662 === Error while decoding stream #0:0: Invalid data found when processing input
2016-06-02 18:29:38,663 === frame= 0 fps=0.0 q=0.0 Lsize= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
2016-06-02 18:29:38,663 === video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
2016-06-02 18:29:38,664 === Conversion failed!
In memory, the current image is 320 pixels wide and 180 pixels tall. The pixel format is Format32bppRgb. The horizontal and vertical resolutions seem odd: they are both 96.01199. When saved to disk, here is the ffprobe output for the file:
ffprobe version N-79143-g8ff0f6a Copyright (c) 2007-2016 the FFmpeg developers
built with gcc 5.3.0 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmfx --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
libavutil 55. 19.100 / 55. 19.100
libavcodec 57. 30.100 / 57. 30.100
libavformat 57. 29.101 / 57. 29.101
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 40.102 / 6. 40.102
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, png_pipe, from 'C:\Users\fores\git\game-playtest-tool\GamePlayTest\bin\x64\Debug\D3DFromCapture.bmp':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: png, rgba(pc), 1920x1080 [SAR 3779:3779 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
Here is a PNG version of an example screenshot from the current code (playing Portal 2):
Any ideas would be greatly appreciated. My current workaround is to save the files to the HDD and compile the video after gameplay, but it’s a far less performant option. Thank you !
-
Subtitling Sierra RBT Files
2 June 2016, by Multimedia Mike — Game Hacking
This is part 2 of the adventure started in my Subtitling Sierra VMD Files post. After I completed the VMD subtitling, The Translator discovered a wealth of animation files in a format called RBT (this apparently stands for “Robot”, but I think “Ribbit” would be a more fun name). What are we going to do? We had come so far by solving the VMD subtitling problem for Phantasmagoria. It would be a shame if the effort ground to a halt due to this.
Fortunately, the folks behind the ScummVM project already figured out enough of the format to be able to decode the RBT files in Phantasmagoria.
In the end, I was successful in creating a completely standalone tool that can take a Robot file and a subtitle file and create a new Robot file with subtitles. The source code is here (subtitle-rbt.c). Here’s what the final result looks like :
“What’s in the refrigerator?” I should note at this juncture that I am not sure whether this particular Robot file even has sound or dialogue, since I was conducting these experiments on a computer with non-working audio.
The RBT Format
I have created a new MultimediaWiki page describing the Robot Animation format based on the ScummVM source code. I have not worked with a format quite like this before. These are paletted animations which consist of a sequence of independent frames that are designed to be overlaid on top of a static background. Because of these characteristics, each frame encodes its own unique dimensions and origin coordinate within the frame. While the Phantasmagoria VMD files are usually 288×144 (usually double-sized for the benefit of a 640×400 Super VGA canvas), these frames are meant to be plotted on a game field that was roughly 576×288 (288×144 double-sized).
For example, 2 minimalist animation frames from a desk investigation Robot file :
100×147
101×149
As for compression, my first impression was that the algorithm was the same as VMD. This is wrong. It evidently uses an unmodified version of a standard algorithm called Lempel-Ziv-Stac (LZS). It shows up in several RFCs and was apparently used in MS-DOS’s transparent disk compression scheme.
Approach
Thankfully, many of the lessons I learned from the previous project are applicable to this project, including: subtitle library interfacing, subtitling in the paletted colorspace, and replacing encoded frames from the original file instead of trying to create a new file.
Here is the pitch for this project:
- Create a C program that can traverse through an input file, piece by piece, and generate an output file. The result of this should be a bitwise identical file.
- Adapt the LZS compression decoding algorithm from ScummVM into the new tool. Make the tool dump raw portable anymap (PNM) files of varying dimensions and ensure that they look correct.
- Compress using LZS.
- Stretch the frames and draw subtitles.
- More compression. Find the minimum window for each frame.
Compression
Normally, my first goal is to decompress the video and store the data in a raw form. However, this turned out to be mathematically intractable. While the format does support both compressed and uncompressed frames (even though ScummVM indicates that the uncompressed path is as yet unexercised), the goal of this project requires making the frames so large that they overflow certain parameters of the file.
A Robot file has a sequence of frames and 2 tables describing the size of each frame. One table describes the entire frame size (audio + video) while the second table describes just the video frame size. Since these tables only use 16 bits to specify a size, the maximum frame size is 65536 bytes. Leaving space for the audio portion of the frame, this only leaves a per-frame byte budget of about 63000 bytes for the video. Expanding the frame to 576×288 (165,888 pixels) would overflow this limit.
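To put that arithmetic in one place, here is a trivial sketch; the ~63000-byte video budget is the ballpark figure from the paragraph above, not an exact format constant.

#include <stdio.h>

int main(void)
{
    const unsigned max_frame_size  = 65536;     /* limit of the 16-bit size fields */
    const unsigned video_budget    = 63000;     /* rough budget after the audio portion */
    const unsigned stretched_frame = 576 * 288; /* 165888 bytes: one byte per pixel, paletted */

    printf("max frame: %u bytes, video budget: ~%u bytes, stretched frame: %u bytes -> %s\n",
           max_frame_size, video_budget, stretched_frame,
           stretched_frame > video_budget ? "overflow" : "fits");
    return 0;
}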
Anyway, the upshot is that I needed to compress the data up front.
Fortunately, the LZS compressor is pretty straightforward, at least if you have experience writing VLC-oriented codecs. While the algorithm revolves around back references, my approach was to essentially write an RLE encoder. My compressor would search for runs of data (plentiful when I started to stretch the frame for subtitling purposes). When a run length of n=3 or more of the same pixel is found, encode the pixel by itself, and then store a back reference of offset -1 and length (n-1). It took a little while to iron out a few problems, but I eventually got it to work perfectly.
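As a rough illustration of that strategy, here is a sketch of a scanline encoder. The emit_literal() and emit_backref() helpers are hypothetical stand-ins for the real bitstream writer in subtitle-rbt.c; their printf() bodies exist only so the sketch runs on its own.

#include <stdio.h>

/* Hypothetical stand-ins for the real LZS bitstream writer. */
static void emit_literal(unsigned char pixel)
{
    printf("literal %02X\n", pixel);
}

static void emit_backref(int offset, int length)
{
    printf("backref offset -%d, length %d\n", offset, length);
}

/* Run-oriented encoding as described above: code a pixel once, then a
   back reference of offset -1 that repeats it for the rest of the run. */
static void encode_scanline(const unsigned char *pixels, int width)
{
    int i = 0;
    while (i < width) {
        int run = 1;
        while (i + run < width && pixels[i + run] == pixels[i])
            run++;
        if (run >= 3) {
            emit_literal(pixels[i]);
            emit_backref(1, run - 1);
        } else {
            for (int j = 0; j < run; j++)   /* short runs stay literal */
                emit_literal(pixels[i + j]);
        }
        i += run;
    }
}

int main(void)
{
    const unsigned char row[] = { 7, 7, 7, 7, 7, 2, 3, 3 };
    encode_scanline(row, (int)sizeof(row));
    return 0;
}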
I have to say, however, that the format is a little bit weird in how it codes very large numbers. The length encoding is somewhat Golomb-like, i.e., smaller values are encoded with fewer bits. However, when it gets to large numbers, it starts encoding counts of 15 as blocks of 1111. For example, 24 is bigger than 7. Thus, emit 1111 into the bitstream and subtract 8 from 24 -> 16. Still bigger than 15, so stuff another 1111 into the bitstream and subtract 15. Now we’re at 1, so stuff 0001. So 24 is 11111111 0001. 12 bits is not too horrible. But the total number of bytes required to encode a value grows as roughly (value / 30). So a value of 300 takes around 10 bytes (80 bits) to encode.
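Here is a sketch of that length coding, reconstructed from the description above. The short codes for lengths 2 through 7 follow the standard LZS length table, which the post does not spell out, so treat this as an illustration rather than the exact code.

#include <stdio.h>

/* Print 'nbits' bits of 'value', most significant bit first. */
static void put_bits(unsigned value, int nbits)
{
    for (int i = nbits - 1; i >= 0; i--)
        putchar((value >> i) & 1 ? '1' : '0');
}

/* LZS-style length code; lengths start at 2. */
static void print_length_code(unsigned len)
{
    if (len <= 4) {
        put_bits(len - 2, 2);       /* 2 -> 00, 3 -> 01, 4 -> 10 */
    } else if (len <= 7) {
        put_bits(len + 7, 4);       /* 5 -> 1100, 6 -> 1101, 7 -> 1110 */
    } else {
        put_bits(0xF, 4);           /* the first 1111 accounts for 8 */
        len -= 8;
        while (len >= 15) {         /* each further 1111 accounts for 15 */
            put_bits(0xF, 4);
            len -= 15;
        }
        put_bits(len, 4);           /* 4-bit remainder */
    }
    putchar('\n');
}

int main(void)
{
    print_length_code(24);          /* prints 111111110001, as in the example above */
    return 0;
}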
Palette Slices
As in the VMD subtitling project, I took the subtitle color offered in the subtitle spec file as a suggestion and used Euclidean distance to match it to the closest available color in the palette. One problem, however, is that the palette is a lot smaller in these animations. According to my notes, for the set of animations I scanned, only about 80 colors were specified, starting at palette index 55. I hypothesize that different slices of the palette are reserved for different uses, e.g. animation, background, and user interface. Thus, there is a smaller number of colors to draw upon for subtitling purposes.
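A minimal sketch of that matching step, using squared Euclidean distance over a slice of the palette; the struct and function names are invented for the example, and the slice bounds in the usage comment (start 55, 80 entries) are simply the figures mentioned above.

#include <limits.h>

struct rgb { unsigned char r, g, b; };

/* Return the index of the palette entry closest to 'want' within the
   slice [first, first + count); squared distance avoids the sqrt. */
int nearest_palette_index(struct rgb want, const struct rgb *pal,
                          int first, int count)
{
    int best = first;
    long best_dist = LONG_MAX;
    for (int i = first; i < first + count; i++) {
        long dr = (long)want.r - pal[i].r;
        long dg = (long)want.g - pal[i].g;
        long db = (long)want.b - pal[i].b;
        long dist = dr * dr + dg * dg + db * db;
        if (dist < best_dist) {
            best_dist = dist;
            best = i;
        }
    }
    return best;
}

/* e.g. nearest_palette_index(subtitle_color, palette, 55, 80) for the
   ~80-color slice starting at index 55 mentioned above */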
Scaling
One bit of residual weirdness in this format is the presence of a per-frame scale factor. While most frames set this to 100 (100% scale), I have observed 70%, 80%, and 90%. ScummVM is a bit unsure about how to handle these, so I am as well. However, I eventually realized I didn’t really need to care, at least not when decoding and re-encoding the frame. Just preserve the scale factor. I intend to modify the tool further to take the scale factor into account when creating the subtitle.
The Final Resolution
Right around the time that I was composing this post, The Translator emailed me and notified me that he had found a better way to subtitle the Robot files by modifying the scripts, rendering my entire approach moot. The result is much cleaner :
Turns out that the engine supported subtitles all along
It’s a good thing that I enjoyed the challenge or I might be annoyed at this point.
See Also
- Subtitling Sierra VMD Files : My effort to subtitle the main FMV files found in Sierra games.
-
Subtitling Sierra VMD Files
1 June 2016, by Multimedia Mike — Game Hacking
I was contacted by a game translation hobbyist from Spain (henceforth known as The Translator). He had set his sights on Sierra’s 7-CD Phantasmagoria. This mammoth game was driven by a lot of FMV files and animations that have speech. These require language translation in the form of video subtitling. He’s lucky that he found possibly the one person on the whole internet who has just the right combination of skill, time, and interest to pull this off. And why would I care about helping? I guess I share a certain camaraderie with game hackers. Don’t act so surprised. You know what kind of stuff I like to work on.
The FMV format used in this game is VMD, which makes an appearance in numerous Sierra titles. FFmpeg already supports decoding this format. FFmpeg also supports subtitling video. So, ideally, all that’s necessary to support this goal is to add a muxer for the VMD format which can encode raw video and audio, which the format supports. Implement video compression as extra credit.
The pipeline that I envisioned looks like this :
VMD Subtitling Process
“Trivial!” I surmised. I just never learn, do I?
The Plan
So here’s my initial pitch, outlining the work I estimated that I would need to do towards the stated goal:
- Create a new file muxer that produces a syntactically valid VMD file with bogus video and audio data. Make sure it works with both FFmpeg’s playback system as well as the proper Phantasmagoria engine.
- Create a new video encoder that essentially operates in pass-through mode while correctly building a palette.
- Create a new basic encoder for the video frames.
A big unknown for me was exactly how subtitle handling operates in FFmpeg. Thanks to this project, I now know. I was concerned because I was pretty sure that font rendering entails anti-aliasing which bodes poorly for keeping the palette count under 256 unique colors.
Computer Science Puzzle
When pondering how to process the palette, I was excited for the opportunity to exercise actual computer science. FFmpeg converts paletted frames to full RGB frames. Then it needs to convert them back to paletted frames. I had a vague recollection of solving this problem once before when I was experimenting with a new paletted video codec. I seem to recall that I did the palette conversion in a very naive manner: I just used a static 256-element array and processed each RGB pixel of the frame, seeing if the value already occurred in the table (an O(n) lookup) and adding it otherwise.
There are more efficient algorithms, however, such as hash tables and trees. Somewhere along the line, FFmpeg helpfully acquired a rarely-used tree data structure, which was perfect for this project.
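The naive accumulation described above looks roughly like this; it is a sketch of the idea, not FFmpeg code, and FFmpeg's tree structure essentially replaces the linear search with a logarithmic lookup.

#include <stdint.h>

static uint32_t palette[256];   /* packed 0x00RRGGBB entries seen so far */
static int palette_size = 0;

/* Return the palette index for 'rgb', adding it if there is room;
   return -1 once more than 256 unique colors have been seen. */
int palette_index(uint32_t rgb)
{
    for (int i = 0; i < palette_size; i++)   /* O(n) linear search per pixel */
        if (palette[i] == rgb)
            return i;
    if (palette_size == 256)
        return -1;
    palette[palette_size] = rgb;
    return palette_size++;
}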
So I was pretty pleased with this optimization. Too bad this wouldn’t survive to the end of the effort.
Another palette-related challenge was the fact that a group of pictures would be accumulating a new palette but that palette needed to be recorded before the group. Thus, the muxer needed to have extra logic to rewind the file when the video encoder transmitted a palette change.
Video Compression
VMD has a few methods in its compression toolbox. It can use interframe differencing, it has some RLE, or it can code a frame raw. It can also use a custom LZ-like format on top of these. For early prototypes, I elected to leave each frame coded raw. After the concept was proved, I implemented the frame differencing.
Top frame compared with the middle frame yields the bottom frame: red pixels indicate changes.
Encoding only those red dots in between vast runs of unchanged pixels yielded a vast measurable improvement. The next step was to try wiring up FFmpeg’s existing LZ compression facilities to the encoder. This turned out to be implausible since VMD’s LZ variant has nothing to do with anything FFmpeg already provides. Fortunately, the LZ piece is not absolutely required and the frame differencing + RLE provides plenty of compression.
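Sketched per scanline, the differencing step amounts to finding the runs of changed pixels between runs of unchanged ones; the printf() calls here stand in for emitting real VMD skip/copy opcodes.

#include <stdio.h>

/* Report, for one scanline, which spans changed since the previous frame.
   A real encoder would emit skip codes for the unchanged gaps and pixel
   data only for the changed spans. */
void diff_scanline(const unsigned char *prev, const unsigned char *cur, int width)
{
    int i = 0;
    while (i < width) {
        if (cur[i] == prev[i]) {
            int skip = 0;
            while (i < width && cur[i] == prev[i]) { skip++; i++; }
            printf("skip %d\n", skip);
        } else {
            int start = i;
            while (i < width && cur[i] != prev[i]) i++;
            printf("copy %d pixels at %d\n", i - start, start);
        }
    }
}

int main(void)
{
    const unsigned char prev[] = { 1, 1, 1, 1, 1, 1 };
    const unsigned char cur[]  = { 1, 1, 9, 9, 1, 1 };
    diff_scanline(prev, cur, 6);
    return 0;
}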
Subtitling
I’ve never done anything, multimedia programming-wise, concerning subtitles. I guess all the entertainment I care about has always been in my native tongue. What a good excuse to program outside of my comfort zone!
First, I needed to know how to access FFmpeg’s subtitling facilities. Fortunately, The Translator did the legwork on this matter so I didn’t have to figure it out.
However, I intuitively had misgivings about this phase. I had heard that the subtitling process performs anti-aliasing. That means that the image would need to be promoted to a higher colorspace for this phase and that the anti-aliasing process would likely push the color count way past 256. Some quick tests revealed this to be the case, as the running color count would leap by several hundred colors as soon as the palette accounting algorithm encountered a subtitle.
So I dug into the subtitle subsystem. I discovered that the subtitle library operates by creating a linked list of subtitle bitmaps that the client app must render. The bitmaps are comprised of 8-bit alpha transparency values that must be composited onto the target frame (i.e., 0 = transparent, 255 = 100% opaque). For example, the letter ‘H’ :
                              (with 00s removed)
13 F8 41 00 00 00 00 68 E4    |    13 F8 41 68 E4
14 FF 44 00 00 00 00 6C EC    |    14 FF 44 6C EC
14 FF 44 00 00 00 00 6C EC    |    14 FF 44 6C EC
14 FF 44 00 00 00 00 6C EC    |    14 FF 44 6C EC
14 FF DC D0 D0 D0 D0 E4 EC    |    14 FF DC D0 D0 D0 D0 E4 EC
14 FF 7E 50 50 50 50 9A EC    |    14 FF 7E 50 50 50 50 9A EC
14 FF 44 00 00 00 00 6C EC    |    14 FF 44 6C EC
14 FF 44 00 00 00 00 6C EC    |    14 FF 44 6C EC
14 FF 44 00 00 00 00 6C EC    |    14 FF 44 6C EC
11 E0 3B 00 00 00 00 5E CE    |    11 E0 3B 5E CE
To get around the color explosion problem, I chose a threshold value and quantized values above and below to 255 and 0, respectively. Further, the process chooses an appropriate color from the existing palette rather than introducing any new colors.
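Roughly, that compositing step reduces to the following; the function and parameter names are made up for the example, and the 0x80 cutoff is an assumption since the post only says "a threshold value".

#include <stdint.h>

#define ALPHA_THRESHOLD 0x80   /* assumed cutoff; the post does not give the value */

/* Plot a subtitle glyph bitmap of 8-bit alpha values onto a paletted
   frame: alpha at or above the threshold becomes the chosen palette
   index; anything below leaves the underlying pixel untouched. */
void composite_subtitle(uint8_t *frame, int stride,
                        const uint8_t *alpha, int w, int h,
                        int x, int y, uint8_t text_color_index)
{
    for (int row = 0; row < h; row++)
        for (int col = 0; col < w; col++)
            if (alpha[row * w + col] >= ALPHA_THRESHOLD)
                frame[(y + row) * stride + (x + col)] = text_color_index;
}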
Muxing Matters
In order to force VMD into a general purpose media framework, a lot of special information needs to be passed around. Like many paletted codecs, the palette needs to be transmitted from the file demuxer to the video decoder via some side channel. For re-encoding, this also implies that the palette needs to make the trip from the video encoder to the file muxer. As if this wasn’t enough, individual VMD frames have even more data that needs to be ferried between the muxer and codec levels, including frame change boundaries. FFmpeg provides methods to do these things, but I could not always rely on the systems to relay the data in all cases. I was probably doing something wrong; I accept that. Instead, I just packed all the information at the front of an encoded frame and split it apart in the muxer.
I could not quite figure out how to get the audio and video muxed correctly. As a result, neither FFmpeg nor the Phantasmagoria engine could replay the files correctly.
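The "pack everything at the front of the encoded frame" trick mentioned above could look something like this; the layout below is invented purely for illustration and is not the format the tool actually uses.

#include <stdint.h>
#include <string.h>

/* Invented side-info header that rides at the front of each packet. */
struct frame_side_info {
    uint8_t  palette_changed;        /* nonzero if a new palette follows */
    uint8_t  palette[256 * 3];       /* only meaningful when palette_changed */
    uint16_t change_x, change_y;     /* frame change boundaries */
    uint16_t change_w, change_h;
};

/* Encoder side: prepend the side info to the compressed frame data. */
size_t pack_frame(uint8_t *out, const struct frame_side_info *info,
                  const uint8_t *compressed, size_t compressed_size)
{
    memcpy(out, info, sizeof(*info));
    memcpy(out + sizeof(*info), compressed, compressed_size);
    return sizeof(*info) + compressed_size;
}

/* Muxer side: peel the side info back off and return the real payload.
   (Works here because encoder and muxer live in the same process.) */
const uint8_t *unpack_frame(const uint8_t *packet, size_t packet_size,
                            struct frame_side_info *info, size_t *payload_size)
{
    memcpy(info, packet, sizeof(*info));
    *payload_size = packet_size - sizeof(*info);
    return packet + sizeof(*info);
}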
Plan B
Since I was having so much trouble creating an entirely new VMD file, likely due to numerous unknown bits of the file format, I thought of another angle: re-use the existing VMD file. For this approach, I kept the video encoder and file muxer that I created in the initial phase, but modified the file muxer to emit a special intermediate file. Then, I created a Python tool to repackage the original VMD file using compressed video data in the intermediate file.
For this phase, I also implemented a command line switch for FFmpeg to disable subtitle blending, to make the feature feel like less of an unofficial hack, as though this nonsense would ever have a chance of being incorporated upstream.
At this point, I was seeing some success with the complete, albeit roundabout, subtitling process. I constructed a subtitle file using “Spanish I Learned From Mexican Telenovelas” and the frames turned out fairly readable :
“she cheated on him”
“he’s a scumbag” … these random subtitles could fit surprisingly well !
The few files that I tested appeared to work fine. But then I handed off my work to The Translator and he immediately found a bunch of problems. According to my notes, the problems mostly took the form of flashing, solid color frames. Further, I found tiny, mostly imperceptible flaws in my RLE compressor, usually only detectable by running strict comparison tools; but I wasn’t satisfied.
At this point, I think I attempted to just encode the entire palette at the front of each frame, as allowed by the format, but that did not seem to fix any problems. My notes are not completely clear on this matter (likely because I was still trying to figure out the exact problem), but I think it had to do with FFmpeg inserting extra video frames in order to even out gaps in the video framerate.
Sigh, Plan C
At this point, I was getting tired of trying to force FFmpeg to do this. So I decided to minimize its involvement using lessons learned up to this point.
The next pitch:
- Create a new C program that can open an existing VMD file and output an identical VMD file. I know this sounds easy, but the specific method of copying entails interpreting individual parts of the file and writing those individual parts to the new file. This is in preparation for…
- Import the VMD video decoder functions directly into the program to decode the individual video frames and re-encode them, replacing the video frames as the file is rewritten.
- Wire up the subtitle system. During the adventure to disable subtitle blending, I accidentally learned enough about interfacing to the subtitle library to just invoke it directly.
- Rewrite the RLE method so that it is 100% correct.
Off to work I went. That part about lifting the existing VMD decoder functions out of their libavcodec nest turned out to not be that straightforward. As an alternative, I modified the decoder to dump the raw frames to an intermediate file. In doing so, I think I was able to avoid the issue of the duplicated frames that plagued the previous efforts.
Also, remember how I was really pleased with the palette conversion technique in which I was able to leverage computer science big-O theory ? By this stage, I had no reason to convert the paletted video to RGB in the first place ; all of the decoding, subtitling and re-encoding operates in the paletted colorspace.
This approach seemed to work pretty well. The final program is subtitle-vmd.c. The process is still a little weird. The modifications in my own FFmpeg fork are necessary to create an intermediate file that the new C tool can operate with.
Next Steps
The Translator has found some assorted bugs and corner cases that still need to be ironed out. Further, for extra credit, I need to find the change windows for each frame to improve compression just a little more. I don’t think I will be trying for LZ compression, though.
However, almost as soon as I had this whole system working, The Translator informed me that there is another, different movie format in play in the Phantasmagoria engine called ROBOT, with an extension of RBT. Fortunately, enough of the algorithms have been reverse engineered and re-implemented in ScummVM that I was able to sort out enough details for another subtitling project. That will be the subject of a future post.
See Also :
- Subtitling Sierra RBT Files : The followup in which I discuss how to scribble text on the other animation format