Advanced search

Media (0)

Word: - Tags - /xmlrpc

No media matching your criteria is available on this site.

Other articles (58)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    During installation, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
    To contribute, register for the project users’ mailing (...)

On other sites (6655)

  • Reverse Engineering Clue Chronicles Compression

    15 January 2019, by Multimedia Mike — Game Hacking

    My last post described my exploration into the 1999 computer game Clue Chronicles: Fatal Illusion. Some readers expressed interest in the details so I thought I would post a bit more about how I have investigated and what I have learned.

    It’s frustrating to need to reverse engineer a compression algorithm that is only applied to 8 files (out of a set of 140), but here we are. Still, I’m glad some others expressed interest in this challenge, as it motivated me to author this post, which in turn prompted me to test and challenge some of my assumptions.

    Spoiler: Commenter ‘m’ gave me the clue I needed: the PKWARE Data Compression Library uses the implode algorithm rather than deflate. I was able to run this .ini data through an open-source explode implementation found in libmpq and got the correct data out.

    Files To Study
    I uploaded a selection of files for others to study, should they feel so inclined. These include the main game binary (if anyone has ideas about how to isolate the decompression algorithm from the deadlisting); compressed and uncompressed examples from 2 files (newspaper.ini and Drink.ini); and the compressed version of Clue.ini, which I suspect is the root of the game’s script.

    The Story So Far
    The ad-hoc scripting language found in the Clue Chronicles game is driven by a series of .ini files that are available in both compressed and uncompressed forms, save for a handful that only come in compressed flavor. I have figured out a few obvious details of the compressed file format:

    bytes 0-3    "COMP"
    bytes 4-11   unknown
    bytes 12-15  size of uncompressed data
    bytes 16-19  size of compressed data (filesize - 20 bytes)
    bytes 20-    compressed payload
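
    As a sanity check of this layout, here is a minimal header parser (a sketch, not from the original post; the little-endian byte order and the function name are assumptions):

    import struct

    def parse_comp_header(path):
        # Parse the 20-byte "COMP" header laid out above. Little-endian
        # integers are an assumption (typical for Windows-era data).
        with open(path, "rb") as f:
            header = f.read(20)
        if header[0:4] != b"COMP":
            raise ValueError("not a COMP file")
        unknown = header[4:12]  # bytes 4-11: purpose still unknown
        uncompressed_size, compressed_size = struct.unpack_from("<II", header, 12)
        return unknown, uncompressed_size, compressed_size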
    

    The average compression ratio is on the same order as what could be achieved by running ‘gzip’ against the uncompressed files and using one of the lower number settings (i.e., favor speed vs. compression size, e.g., ‘gzip -2’ or ‘gzip -3’). Since the zlib/DEFLATE algorithm is quite widespread on every known computing platform, I thought that this would be a good candidate to test.
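
    A rough way to reproduce that estimate (a sketch; zlib’s compression levels 1-9 track gzip’s -1 through -9, and newspaper.ini is one of the files available in both forms):

    import zlib

    with open("newspaper.ini", "rb") as f:
        data = f.read()

    # Compare low-effort compression sizes against the game's compressed file.
    for level in (2, 3):
        print("level %d: %d -> %d bytes" % (level, len(data), len(zlib.compress(data, level))))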

    Exploration
    My thinking was that I could load the bytes of the compressed .ini file and feed them into Python’s zlib library, sliding through the first 100 bytes to see if any of them “catch” on the zlib decompression algorithm.

    Here is the exploration script:

    https://gist.github.com/multimediamike/c95f1a9cc58b959f4d8b2a299927d35e
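
    The gist is not reproduced here, but a minimal sketch of the same idea (the input file name is illustrative):

    import zlib

    with open("newspaper.ini.compressed", "rb") as f:
        data = f.read()

    # Slide through the first 100 bytes in case a zlib stream starts
    # somewhere past the 20-byte COMP header.
    for offset in range(100):
        try:
            payload = zlib.decompress(data[offset:])
            print("zlib data found at offset %d (%d bytes out)" % (offset, len(payload)))
            break
        except zlib.error:
            continue
    else:
        print("no valid zlib data found")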

    It didn’t work, i.e., the script did not find any valid zlib data. A commenter on my last post suggested trying bzip2, so I tried the same script with the bzip2 decompressor library. Still no luck.

    Wrong Approach
    I realized I had not tested whether this exploratory script would work on known zlib data, so I ran it on a .gz file: it failed to find zlib data there too, which suggests my assumptions were wrong. Meanwhile, if I instruct Python to compress data with zlib, dump the output to a file, and run the script against that raw zlib output, the script recognizes the data.

    I spent some time examining how zlib and gzip interact at the format level. It turns out that a gzip container wraps a raw DEFLATE stream without the zlib header and checksum, so a zlib decompressor never “catches” on it at any byte offset. This approach was doomed to failure.
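
    In Python terms, the distinction looks like this (a sketch; the wbits parameter tells zlib which container to expect):

    import zlib

    # zlib.decompress() defaults to expecting a zlib header and Adler-32
    # checksum; a gzip member wraps raw DEFLATE in a different container.
    raw_deflate = zlib.decompressobj(wbits=-zlib.MAX_WBITS)      # raw DEFLATE
    gzip_member = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)  # gzip container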

    A Closer Look At The Executable
    Installation of Clue Chronicles results in a main Windows executable named Fatal_Illusion.exe. It occurred to me to examine this again, specifically for references to something like zlib.dll. Nothing like that. However, a search for ‘compr’ shows various error messages which imply that there is PNG-related code inside (referencing IHDR and zTXt data types), even though PNG files are not present in the game’s asset mix.

    But there are also strings like “PKWARE Data Compression Library for Win32”. So I have started going down the rabbit hole of determining whether the compression is part of a ZIP format file. After all, a ZIP local file header data structure has 4-byte compressed and uncompressed sizes, as seen in this format.
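
    For comparison, the fixed portion of a ZIP local file header can be picked apart like this (a sketch based on the public ZIP format specification):

    import struct

    # Fixed 30-byte portion of a ZIP local file header; all fields little-endian.
    ZIP_LOCAL_HEADER = struct.Struct("<IHHHHHIIIHH")

    def parse_zip_local_header(buf, offset=0):
        (signature, version, flags, method, mod_time, mod_date, crc32,
         compressed_size, uncompressed_size, name_len, extra_len) = \
            ZIP_LOCAL_HEADER.unpack_from(buf, offset)
        if signature != 0x04034B50:  # "PK\x03\x04"
            raise ValueError("not a ZIP local file header")
        # Method 10 is PKWARE Data Compression Library imploding.
        return method, compressed_size, uncompressed_size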

    Binary Reverse Engineering
    At one point, I took the approach of attempting to reverse engineer the binary. When studying a deadlisting of the code, it’s easy to search for the string “COMP” and find some code that cares about these compressed files. Unfortunately, the code quickly follows an indirect jump instruction which makes it intractable to track the algorithm from a simple deadlisting.

    I also tried installing some old Microsoft dev tools on my old Windows XP box, setting some breakpoints while the game was running, and doing some old-fashioned step debugging. That was a total non-starter. According to my notes:

    Address 0x004A3C32 is the setup for the strncmp("COMP", ini_data, 4) function call. Start there.

    Problem: The game forces 640x480x256 mode, and that makes debugging very difficult.

    Just For One Game?
    I keep wondering if this engine was used for any other games. Clue Chronicles was created by EAI Interactive. As I review the list of games they are known to have created (ranging between 1997 and 2000), a few of them jump out at me as possibly being able to leverage the same engine. I have a few of them, so I checked those… nothing. Then I scrubbed some YouTube videos showing gameplay of other suspects. None of those strike me as having similar engine characteristics to Clue Chronicles. So this remains a mystery: did they really craft this engine with its own scripting language just for one game?

    The post Reverse Engineering Clue Chronicles Compression first appeared on Breaking Eggs And Making Omelettes.

  • Neutral net or neutered

    4 June 2013, by Mans — Law and liberty

    In recent weeks, a number of high-profile events, in the UK and elsewhere, have been quickly seized upon to promote a variety of schemes for monitoring or filtering Internet access. These proposals, despite their good intentions of protecting children or fighting terrorism, pose a serious threat to fundamental liberties. Although at a glance the ideas may seem like a reasonable price to pay for the prevention of some truly hideous crimes, there is more to it than first meets the eye. Internet regulation in any form whatsoever is the thin end of a wedge at whose other end we find severely restricted freedom of expression of the kind usually associated with oppressive dictatorships. Where the Internet was once a novelty, it now forms an integrated part of modern society; regulating the Internet means regulating our lives.

    Terrorism

    Following the brutal murder of British soldier Lee Rigby in Woolwich, attempts were made in the UK to revive the controversial Communications Data Bill, also dubbed the snooper’s charter. The bill would give police and security services unfettered access to details (excluding content) of all digital communication in the UK without needing so much as a warrant.

    The powers afforded by the snooper’s charter would, the argument goes, enable police to prevent crimes such as the one witnessed in Woolwich. True or not, the proposal would, if implemented, also bring about infrastructure for snooping on anyone at any time for any purpose. Once available, the temptation may become strong to extend, little by little, the legal use of these abilities to cover ever more everyday activities, all in the name of crime prevention, of course.

    In the emotional aftermath of a gruesome act, anything with the promise of preventing it happening again may seem like a good idea. At times like these it is important, more than ever, to remain rational and carefully consider all the potential consequences of legislation, not only the intended ones.

    Hate speech

    Hand in hand with terrorism goes hate speech, preachings designed to inspire violence against people of some singled-out nation, race, or other group. Naturally, hate speech is often to be found on the Internet, where it can reach large audiences while the author remains relatively protected. Naturally, we would prefer for it not to exist.

    To fulfil the utopian desire of a clean Internet, some advocate mandatory filtering by Internet service providers and search engines to remove this unwanted content. Exactly how such censoring might be implemented is, however, rarely dwelt upon, much less the consequences that inadvertent blocking of innocent material might have.

    Pornography

    Another common target of calls for filtering is pornography. While few object to the blocking of child pornography, at least in principle, the debate runs hotter when it comes to the legal variety. Pornography, it is claimed, promotes violence towards women and is immoral or generally offensive. As such it ought to be blocked in the name of the greater good.

    The conviction last week of paedophile Mark Bridger for the abduction and murder of five-year-old April Jones renewed the debate about filtering of pornography in the UK; his laptop was found to contain child pornography. John Carr of the UK government’s Council on Child Internet Safety went so far as to suggest a default blocking of all pornography, access being granted to an Internet user only once he or she had registered with some unspecified entity. Registering people wishing only to access perfectly legal material is not something we do in a democracy.

    The reality is that Google and other major search engines already remove illegal images from search results and report them to the appropriate authorities. In the UK, the Internet Watch Foundation, a non-government organisation, maintains a blacklist of what it deems ‘potentially criminal’ content, and many Internet service providers block access based on this list.

    While well-intentioned, the IWF and its blacklist should raise some concerns. Firstly, a vigilante organisation operating in secret and with no government oversight acting as the nation’s morality police has serious implications for freedom of speech. Secondly, the blocks imposed are sometimes more far-reaching than intended. In one incident, an attempt to block the cover image of the Scorpions album Virgin Killer hosted by Wikipedia (in itself a dubious decision) rendered the entire related article inaccessible and interfered with editing.

    Net neutrality

    Content filtering, or more precisely the lack thereof, is central to the concept of net neutrality. Usually discussed in the context of Internet service providers, this is the principle that the user should have equal, unfiltered access to all content. As a consequence, ISPs should not be held responsible for the content they deliver. Compare this to how the postal system works.

    The current debate shows that the principle of net neutrality is important not only at the ISP level, but should also include providers of essential services on the Internet. This means search engines should not be responsible for or be required to filter results, email hosts should not be required to scan users’ messages, and so on. No mandatory censoring can be effective without infringing the essential liberties of freedom of speech and press.

    Social networks operate in a less well-defined space. They are clearly not part of the essential Internet infrastructure, and they require that users sign up and agree to their terms and conditions. Because of this, they can include restrictions that would be unacceptable for the Internet as a whole. At the same time, social networks are growing in importance as means of communication between people, and as such they have a moral obligation to act fairly and apply their rules in a transparent manner.

    Facebook was recently under fire, accused of not taking sufficient measures to curb ‘hate speech,’ particularly against women. Eventually they pledged to review their policies and methods, and reducing the proliferation of such content will surely make the web a better place. Nevertheless, one must ask how Facebook (or another social network) might react to similar pressure from, say, a religious group demanding removal of ‘blasphemous’ content. What about demands from a foreign government ? Only yesterday, the Turkish prime minister Erdogan branded Twitter ‘a plague’ in a TV interview.

    Rather than impose upon Internet companies the burden of law enforcement, we should provide them the latitude to set their own policies as well as the legal confidence to stand firm in the face of unreasonable demands. The usual market forces will promote those acting responsibly.


  • 4K Screen Recording on 1080p Monitors [closed]

    10 April, by Souhail Benlhachemi

    I have created a basic Windows screen recording app (ffmpeg + GUI), but I noticed that the recording quality depends on the monitor used to record: a recording made on a full HD monitor looks different from one made on a 4K monitor (which is obvious).

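    For context, a typical capture in this kind of app (illustrative only; the question does not show the actual invocation) uses ffmpeg’s gdigrab device, which grabs the desktop at the monitor’s native resolution:

    import subprocess

    # gdigrab captures the Windows desktop at its native resolution, so a
    # 1080p monitor can only ever produce a 1920x1080 source; the output
    # file name is illustrative.
    subprocess.run([
        "ffmpeg",
        "-f", "gdigrab", "-framerate", "30", "-i", "desktop",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "recording.mp4",
    ], check=True)
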
    There is not much difference between the two when playing the recorded video at 100% scale, but when I zoom to 150% or more, the difference between the two recordings (1920x1080 vs. 4K) is clearly visible.

    I did some research on how to record the screen at 4K quality on a full HD monitor, and here is what I found:

    I played with the Windows Desktop Duplication API (the AcquireNextFrame function, which gives you the next frame from the swap chain). I managed to convert the buffer to a PNG image and save it locally, but as you would expect, the quality was the same as a normal screenshot, because AcquireNextFrame returns a frame after it has been rasterized.

    Then I came across the graphics pipeline. I spent some time understanding the basics and concluded that I would need to somehow intercept the pre-rasterization data (the data that comes before the Rasterizer Stage: geometry shaders, etc.), duplicate it, and do an off-screen render to a new 4K render target. But the Windows API does not allow that; the only related option in the docs is the Stream Output Stage, which is only useful for rendering your own shaders, not the ones my display is using. (I tried MinHook to intercept the data, but no luck.)

    After that, I tried a different approach: I managed to create a virtual display as an extended monitor with a 4K resolution and record it using ffmpeg. But what I see on my main monitor is different from what is on the virtual display (only an empty desktop); I would have to drag application windows onto that screen manually, which is a problem while recording: we are not seeing what we are recording.

    I found some YouTube videos about DSR (Dynamic Super Resolution). I tried it in the NVIDIA Control Panel (manually, through the GUI) and it works: I managed to make the system believe I have a 4K monitor, and the recording quality was crystal clear. But I have not found a way to do that programmatically using NVAPI, and there is no equivalent API on AMD.

    Has anyone worked on a similar project, or does anyone know of a similar project that I can use as a reference? Any suggestions?