
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (64)
-
Customizing by adding your logo, banner or background image
5 September 2013 — Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013 — Present the changes in your MédiaSPIP, or the news of your projects, on your MédiaSPIP through the news section.
In MédiaSPIP’s default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news creation form.
News creation form: for a document of type “news”, the default fields are: Publication date (customize the publication date) (...)
Publishing on MédiaSPIP
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your installed MédiaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MédiaSPIP to find out.
On other sites (9542)
-
How to create a widget – Introducing the Piwik Platform
4 September 2014, by Thomas Steur — Development
This is the next post in our blog series where we introduce the capabilities of the Piwik platform (our previous post was How to create a scheduled task in Piwik). This time you’ll learn how to create a new widget. For this tutorial you will need to have basic knowledge of PHP.
What is a widget in Piwik?
Widgets can be added to your dashboards or exported via a URL to embed them on any page. Most widgets in Piwik represent a report, but a widget can display anything, for instance an RSS feed of your corporate news. If you prefer to have most of your business-relevant data in one dashboard, why not display the number of offline sales, the latest stock price, or other key metrics together with your analytics data?
Getting started
In this series of posts, we assume that you have already set up your development environment. If not, visit the Piwik Developer Zone where you’ll find the tutorial Setting up Piwik.
To summarize the things you have to do to get set up:
- Install Piwik (for instance via git).
- Activate the developer mode:
./console development:enable --full
- Generate a plugin:
./console generate:plugin --name="MyWidgetPlugin"
There should now be a folder plugins/MyWidgetPlugin.
- Activate the created plugin under Settings => Plugins.
Let’s start creating a widget
We start by using the Piwik Console to create a widget template:
./console generate:widget
The command will ask you to enter the name of the plugin the widget should belong to. I will simply use the plugin name chosen above, “MyWidgetPlugin”. It will ask you for a widget category as well. You can select any existing category, for instance “Visitors”, “Live!” or “Actions”, or you can define a new category, for instance your company name. There should now be a file
plugins/MyWidgetPlugin/Widgets.php
which already contains some examples to get you started easily:

class Widgets extends \Piwik\Plugin\Widgets
{
    /**
     * Here you can define the category the widget belongs to. You can reuse any existing widget category or define your own category.
     * @var string
     */
    protected $category = 'ExampleCompany';

    /**
     * Here you can add one or multiple widgets. You can add a widget by calling the method "addWidget()" and passing the name of the widget as well as a method name that should be called to render the widget. The method can be defined either directly here in this widget class or in the controller in case you want to reuse the same action, for instance in the menu, etc.
     */
    protected function init()
    {
        $this->addWidget('Example Widget Name', $method = 'myExampleWidget');
        $this->addWidget('Example Widget 2', $method = 'myExampleWidget', $params = array('myparam' => 'myvalue'));
    }

    /**
     * This method renders a widget as defined in "init()". It's up to you how to generate the content of the widget. As long as you return a string everything is fine. You can use for instance a "Piwik\View" to render a Twig template. In such a case don't forget to create a Twig template (eg. myViewTemplate.twig) in the "templates" directory of your plugin.
     *
     * @return string
     */
    public function myExampleWidget()
    {
        $view = new View('@MyWidgetPlugin/myViewTemplate');
        return $view->render();
    }
}
As you might have noticed, in the generated template we put emphasis on adding comments that explain directly how to continue and where to get more information. Ideally this saves you some time and you don’t even have to search for more information on our developer pages. The category is defined in the property $category and can be changed at any time. Starting from Piwik 2.6.0 the generator will directly create a translation key if necessary, to make it easy to translate the category into any language. Translations will be a topic in one of our future posts; until then you can explore this feature in our Internationalization guide.
A simple example
We can define one or multiple widgets in the init method by calling addWidget($widgetName, $methodName). To do so we define the name of a widget, which will be seen by your users, as well as the name of the method that shall render the widget.

protected $category = 'Example Company';

public function init()
{
    // Registers a widget named 'News' under the category 'Example Company'.
    // The method 'myCorporateNews' will be used to render the widget.
    $this->addWidget('News', $method = 'myCorporateNews');
}

public function myCorporateNews()
{
    return file_get_contents('http://example.com/news');
}

This example would display the content of the specified URL within the widget, as defined in the method myCorporateNews. It’s up to you how to generate the content of the widget. Any string returned by this method will be displayed within the widget. You can use for example a View to render a Twig template. For simplicity we are fetching the content from another site. A more complex version would cache this content for faster performance. Caching and views will be covered in one of our future posts as well.

Did you know? To make your life as a developer as stress-free as possible, the platform checks whether the registered method actually exists and whether the method is public. If not, Piwik will display a notification in the UI and advise you on the next step.
Checking permissions
Often you do not want the content of a widget to be visible to everyone. You can check for permissions by using one of our many convenient methods, which all start with \Piwik\Piwik::checkUser*. Just to introduce some of them:

// Make sure the current user has super user access
\Piwik\Piwik::checkUserHasSuperUserAccess();

// Make sure the current user is logged in and not anonymous
\Piwik\Piwik::checkUserIsNotAnonymous();

And here is an example of how you can use them within your widget:
public function myCorporateNews()
{
// Make sure there is an idSite URL parameter
$idSite = Common::getRequestVar('idSite', null, 'int');
// Make sure the user has at least view access for the specified site. This is useful if you want to display data that is related to the specified site.
Piwik::checkUserHasViewAccess($idSite);
$siteUrl = \Piwik\Site::getMainUrlFor($idSite);
return file_get_contents($siteUrl . '/news');
}

In case any condition is not met, an exception will be thrown and an error message will be presented to the user explaining that they do not have sufficient permissions. You’ll find the documentation for those methods in the Piwik class reference.
How to test a widget
After you have created your widget you are surely wondering how to test it. First, you should write a unit or integration test, which we will cover in one of our future blog posts. Just one hint: you can use the command
./console generate:test
to create a test. To manually test a widget you can add it to a dashboard or export it.
Publishing your Plugin on the Marketplace
In case you want to share your widgets with other Piwik users you can do this by pushing your plugin to a public GitHub repository and creating a tag. Easy as that. Read more about how to distribute a plugin.
Advanced features
Isn’t it easy to create a widget? We never even created a file! Of course, based on our API design principle “The complexity of our API should never exceed the complexity of your use case”, you can accomplish more if you want: you can specify parameters that will be passed to your widget, you can create a method in the Controller instead of the Widget class to make the same method reusable for adding it to the menu, you can assign different categories to different widgets, you can remove any widgets that were added by the Piwik core or other plugins, and more.
Would you like to know more about widgets? Go to our Widgets class reference in the Piwik Developer Zone.
If you have any feedback regarding our APIs or our guides in the Developer Zone feel free to send it to us.
-
Subtitling Sierra VMD Files
1 June 2016, by Multimedia Mike — Game Hacking
I was contacted by a game translation hobbyist from Spain (henceforth known as The Translator). He had set his sights on Sierra’s 7-CD Phantasmagoria. This mammoth game was driven by a lot of FMV files and animations that have speech. These require language translation in the form of video subtitling. He’s lucky that he found possibly the one person on the whole internet who has just the right combination of skill, time, and interest to pull this off. And why would I care about helping? I guess I share a certain camaraderie with game hackers. Don’t act so surprised. You know what kind of stuff I like to work on.
The FMV format used in this game is VMD, which makes an appearance in numerous Sierra titles. FFmpeg already supports decoding this format. FFmpeg also supports subtitling video. So, ideally, all that’s necessary to support this goal is to add a muxer for the VMD format which can encode raw video and audio, which the format supports. Implement video compression as extra credit.
The pipeline that I envisioned looks like this:
VMD Subtitling Process
“Trivial!” I surmised. I just never learn, do I?
The Plan
So here’s my initial pitch, outlining the work I estimated that I would need to do towards the stated goal:
- Create a new file muxer that produces a syntactically valid VMD file with bogus video and audio data. Make sure it works with both FFmpeg’s playback system as well as the proper Phantasmagoria engine.
- Create a new video encoder that essentially operates in pass-through mode while correctly building a palette.
- Create a new basic encoder for the video frames.
A big unknown for me was exactly how subtitle handling operates in FFmpeg. Thanks to this project, I now know. I was concerned because I was pretty sure that font rendering entails anti-aliasing which bodes poorly for keeping the palette count under 256 unique colors.
Computer Science Puzzle
When pondering how to process the palette, I was excited for the opportunity to exercise actual computer science. FFmpeg converts frames from paletted frames to full RGB frames. Then it needs to convert them back to paletted frames. I had a vague recollection of solving this problem once before when I was experimenting with a new paletted video codec. I seem to recall that I did the palette conversion in a very naive manner. I just used a static 256-element array and processed each RGB pixel of the frame, seeing if the value already occurred in the table (O(n) lookup) and adding it otherwise.
There are more efficient algorithms, however, such as hash tables and trees. Somewhere along the line, FFmpeg helpfully acquired a rarely-used tree data structure, which was perfect for this project.
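As a rough illustration of the naive approach described above, here is a standalone sketch (not the code from FFmpeg or from this project; the packed 0x00RRGGBB key and the 256-entry cap are assumptions made for the example):

#include <stdint.h>

/* Naive palette accumulation: O(n) linear scan per pixel.
 * Illustrative only; a hash table or tree turns the lookup into
 * roughly O(1) or O(log n) per pixel. */
typedef struct {
    uint32_t colors[256];   /* packed 0x00RRGGBB entries */
    int      count;
} Palette;

/* Return the palette index of rgb, adding it if there is room;
 * return -1 once more than 256 unique colors have been seen. */
static int palette_lookup_or_add(Palette *pal, uint32_t rgb)
{
    for (int i = 0; i < pal->count; i++)      /* the O(n) part */
        if (pal->colors[i] == rgb)
            return i;
    if (pal->count >= 256)
        return -1;                            /* palette overflow */
    pal->colors[pal->count] = rgb;
    return pal->count++;
}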
So I was pretty pleased with this optimization. Too bad this wouldn’t survive to the end of the effort.
Another palette-related challenge was the fact that a group of pictures would be accumulating a new palette but that palette needed to be recorded before the group. Thus, the muxer needed to have extra logic to rewind the file when the video encoder transmitted a palette change.
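To make the rewind idea concrete, here is a minimal stdio sketch of reserving space for a palette chunk and patching it once the group of pictures is finished; the chunk layout is invented for illustration, and the real muxer would go through FFmpeg’s AVIOContext rather than a bare FILE handle:

#include <stdio.h>
#include <stdint.h>

/* Reserve room for a 256-entry RGB palette chunk and remember its offset. */
static long reserve_palette_chunk(FILE *f)
{
    long pos = ftell(f);
    uint8_t placeholder[3 * 256] = {0};
    fwrite(placeholder, 1, sizeof(placeholder), f);   /* hole to fill later */
    return pos;
}

/* Rewind to the reserved chunk, write the final palette, then resume. */
static void patch_palette_chunk(FILE *f, long pos, const uint8_t rgb[3 * 256])
{
    long end = ftell(f);            /* remember the current write position */
    fseek(f, pos, SEEK_SET);
    fwrite(rgb, 1, 3 * 256, f);
    fseek(f, end, SEEK_SET);
}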
Video Compression
VMD has a few methods in its compression toolbox. It can use interframe differencing, it has some RLE, or it can code a frame raw. It can also use a custom LZ-like format on top of these. For early prototypes, I elected to leave each frame coded raw. After the concept was proved, I implemented the frame differencing.
Top frame compared with the middle frame yields the bottom frame: red pixels indicate changes.
Encoding only those red dots in between vast runs of unchanged pixels yielded a huge, measurable improvement. The next step was to try wiring up FFmpeg’s existing LZ compression facilities to the encoder. This turned out to be implausible, since VMD’s LZ variant has nothing to do with anything FFmpeg already provides. Fortunately, the LZ piece is not absolutely required, and the frame differencing + RLE provides plenty of compression.
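Here is a bare-bones sketch of the frame-differencing idea: walk the previous and current frames together, emitting “skip” runs for unchanged pixels and literal runs for changed ones. The two-byte opcode format is invented for this example and has nothing to do with VMD’s actual bitstream:

#include <stddef.h>
#include <stdint.h>

/* Toy delta+RLE encoder. Opcode 0x00 <len> skips unchanged pixels;
 * opcode 0x01 <len> <data...> copies changed pixels literally.
 * out must hold up to 3*n bytes in the worst case.
 * Returns the number of bytes written. */
static size_t encode_delta_rle(const uint8_t *prev, const uint8_t *cur,
                               size_t n, uint8_t *out)
{
    size_t i = 0, o = 0;
    while (i < n) {
        size_t run = 0;
        if (prev[i] == cur[i]) {              /* run of unchanged pixels */
            while (i + run < n && run < 255 && prev[i + run] == cur[i + run])
                run++;
            out[o++] = 0x00;
            out[o++] = (uint8_t)run;
        } else {                              /* run of changed pixels */
            while (i + run < n && run < 255 && prev[i + run] != cur[i + run])
                run++;
            out[o++] = 0x01;
            out[o++] = (uint8_t)run;
            for (size_t k = 0; k < run; k++)
                out[o++] = cur[i + k];
        }
        i += run;
    }
    return o;
}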
Subtitling
I’ve never done anything, multimedia programming-wise, concerning subtitles. I guess all the entertainment I care about has always been in my native tongue. What a good excuse to program outside of my comfort zone!
First, I needed to know how to access FFmpeg’s subtitling facilities. Fortunately, The Translator did the legwork on this matter so I didn’t have to figure it out.
However, I intuitively had misgivings about this phase. I had heard that the subtitling process performs anti-aliasing. That means that the image would need to be promoted to a higher colorspace for this phase and that the anti-aliasing process would likely push the color count way past 256. Some quick tests revealed this to be the case, as the running color count would leap by several hundred colors as soon as the palette accounting algorithm encountered a subtitle.
So I dug into the subtitle subsystem. I discovered that the subtitle library operates by creating a linked list of subtitle bitmaps that the client app must render. The bitmaps are comprised of 8-bit alpha transparency values that must be composited onto the target frame (i.e., 0 = transparent, 255 = 100% opaque). For example, the letter ‘H’:

                               (with 00s removed)
13 F8 41 00 00 00 00 68 E4  |  13 F8 41 68 E4
14 FF 44 00 00 00 00 6C EC  |  14 FF 44 6C EC
14 FF 44 00 00 00 00 6C EC  |  14 FF 44 6C EC
14 FF 44 00 00 00 00 6C EC  |  14 FF 44 6C EC
14 FF DC D0 D0 D0 D0 E4 EC  |  14 FF DC D0 D0 D0 D0 E4 EC
14 FF 7E 50 50 50 50 9A EC  |  14 FF 7E 50 50 50 50 9A EC
14 FF 44 00 00 00 00 6C EC  |  14 FF 44 6C EC
14 FF 44 00 00 00 00 6C EC  |  14 FF 44 6C EC
14 FF 44 00 00 00 00 6C EC  |  14 FF 44 6C EC
11 E0 3B 00 00 00 00 5E CE  |  11 E0 3B 5E CE
To get around the color explosion problem, I chose a threshold value and quantized values above and below to 255 and 0, respectively. Further, the process chooses an appropriate color from the existing palette rather than introducing any new colors.
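In code form, the quantize-and-reuse step might look roughly like the following sketch; the threshold value of 128 and the squared-distance “nearest color” metric are assumptions for illustration, not necessarily what the final tool uses:

#include <stdint.h>

/* Find the existing palette entry closest to rgb (packed 0x00RRGGBB). */
static int nearest_palette_index(const uint32_t *pal, int count, uint32_t rgb)
{
    int best = 0;
    long best_d = -1;
    for (int i = 0; i < count; i++) {
        long dr = (long)((pal[i] >> 16) & 0xFF) - (long)((rgb >> 16) & 0xFF);
        long dg = (long)((pal[i] >>  8) & 0xFF) - (long)((rgb >>  8) & 0xFF);
        long db = (long)( pal[i]        & 0xFF) - (long)( rgb        & 0xFF);
        long d  = dr * dr + dg * dg + db * db;
        if (best_d < 0 || d < best_d) {
            best_d = d;
            best   = i;
        }
    }
    return best;
}

/* Composite one subtitle pixel onto a paletted frame: alpha below the
 * threshold leaves the frame pixel untouched; at or above it, the pixel
 * becomes the palette entry nearest to the subtitle text color. */
static uint8_t blend_subtitle_pixel(uint8_t frame_index, uint8_t alpha,
                                    uint32_t text_rgb,
                                    const uint32_t *pal, int count)
{
    if (alpha < 128)                          /* quantized to transparent */
        return frame_index;
    return (uint8_t)nearest_palette_index(pal, count, text_rgb);
}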
Muxing Matters
In order to force VMD into a general-purpose media framework, a lot of special information needs to be passed around. Like many paletted codecs, the palette needs to be transmitted from the file demuxer to the video decoder via some side channel. For re-encoding, this also implies that the palette needs to make the trip from the video encoder to the file muxer. As if this weren’t enough, individual VMD frames have even more data that needs to be ferried between the muxer and codec levels, including frame change boundaries. FFmpeg provides methods to do these things, but I could not always rely on the systems to relay the data in all cases. I was probably doing something wrong; I accept that. Instead, I just packed all the information at the front of an encoded frame and split it apart in the muxer.
I could not quite figure out how to get the audio and video muxed correctly. As a result, neither FFmpeg nor the Phantasmagoria engine could replay the files correctly.
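To make the “pack it at the front of the frame” workaround concrete, here is a rough sketch of the kind of ad-hoc layout that implies: a small invented header (palette-changed flag, change rectangle) glued onto the encoded frame by the encoder and peeled off again by the muxer. The field list is hypothetical, and a real implementation would serialize the fields explicitly rather than relying on struct layout:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Invented side-info header carried at the front of each encoded frame. */
typedef struct {
    uint8_t  palette_changed;       /* nonzero if a new palette applies */
    uint16_t change_x, change_y;    /* top-left of the changed region */
    uint16_t change_w, change_h;    /* size of the changed region */
} FrameSideInfo;

/* Encoder side: header first, then payload. Returns total bytes written. */
static size_t pack_frame(uint8_t *dst, const FrameSideInfo *info,
                         const uint8_t *payload, size_t payload_len)
{
    memcpy(dst, info, sizeof(*info));
    memcpy(dst + sizeof(*info), payload, payload_len);
    return sizeof(*info) + payload_len;
}

/* Muxer side: split the header back off; returns a pointer to the payload.
 * Assumes len >= sizeof(FrameSideInfo). */
static const uint8_t *unpack_frame(const uint8_t *src, size_t len,
                                   FrameSideInfo *info, size_t *payload_len)
{
    memcpy(info, src, sizeof(*info));
    *payload_len = len - sizeof(*info);
    return src + sizeof(*info);
}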
Plan B
Since I was having so much trouble creating an entirely new VMD file, likely due to numerous unknown bits of the file format, I thought of another angle: re-use the existing VMD file. For this approach, I kept the video encoder and file muxer that I created in the initial phase, but modified the file muxer to emit a special intermediate file. Then, I created a Python tool to repackage the original VMD file using the compressed video data in the intermediate file.
For this phase, I also implemented a command line switch for FFmpeg to disable subtitle blending, to make the feature feel like less of an unofficial hack, as though this nonsense would ever have a chance of being incorporated upstream.
At this point, I was seeing some success with the complete, albeit roundabout, subtitling process. I constructed a subtitle file using “Spanish I Learned From Mexican Telenovelas” and the frames turned out fairly readable:
“she cheated on him”
“he’s a scumbag” … these random subtitles could fit surprisingly well!
The few files that I tested appeared to work fine. But then I handed off my work to The Translator and he immediately found a bunch of problems. According to my notes, the problems mostly took the form of flashing, solid color frames. Further, I found tiny, mostly imperceptible flaws in my RLE compressor, usually only detectable by running strict comparison tools; but I wasn’t satisfied.
At this point, I think I attempted to just encode the entire palette at the front of each frame, as allowed by the format, but that did not seem to fix any problems. My notes are not completely clear on this matter (likely because I was still trying to figure out the exact problem), but I think it had to do with FFmpeg inserting extra video frames in order to even out gaps in the video framerate.
Sigh, Plan C
At this point, I was getting tired of trying to force FFmpeg to do this. So I decided to minimize its involvement using lessons learned up to this point.
The next pitch:
- Create a new C program that can open an existing VMD file and output an identical VMD file. I know this sounds easy, but the specific method of copying entails interpreting individual parts of the file and writing those individual parts to the new file. This is in preparation for…
- Import the VMD video decoder functions directly into the program to decode the individual video frames and re-encode them, replacing the video frames as the file is rewritten.
- Wire up the subtitle system. During the adventure to disable subtitle blending, I accidentally learned enough about interfacing to the subtitle library to just invoke it directly.
- Rewrite the RLE method so that it is 100% correct.
Off to work I went. That part about lifting the existing VMD decoder functions out of their libavcodec nest turned out to not be that straightforward. As an alternative, I modified the decoder to dump the raw frames to an intermediate file. In doing so, I think I was able to avoid the issue of the duplicated frames that plagued the previous efforts.
Also, remember how I was really pleased with the palette conversion technique in which I was able to leverage computer science big-O theory? By this stage, I had no reason to convert the paletted video to RGB in the first place; all of the decoding, subtitling and re-encoding operates in the paletted colorspace.
This approach seemed to work pretty well. The final program is subtitle-vmd.c. The process is still a little weird. The modifications in my own FFmpeg fork are necessary to create an intermediate file that the new C tool can operate with.
Next Steps
The Translator has found some assorted bugs and corner cases that still need to be ironed out. Further, for extra credit, I need to find the change windows for each frame to improve compression just a little more. I don’t think I will be trying for LZ compression, though.
However, almost as soon as I had this whole system working, The Translator informed me that there is another, different movie format in play in the Phantasmagoria engine called ROBOT, with an extension of RBT. Fortunately, enough of the algorithms have been reverse engineered and re-implemented in ScummVM that I was able to sort out enough details for another subtitling project. That will be the subject of a future post.
See Also:
- Subtitling Sierra RBT Files: the followup in which I discuss how to scribble text on the other animation format
The post Subtitling Sierra VMD Files first appeared on Breaking Eggs And Making Omelettes.