
Other articles (56)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature works out of the box. No separate configuration step is therefore required.

  • (De)activating features (plugins)

    18 February 2011, by

    To manage adding and removing extra features (or plugins), MediaSPIP relies on SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To get there, go to the configuration area and then to the "Plugin management" page.
    MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)

On other sites (8140)

  • Further Dreamcast Hacking

    3 February 2011, by Multimedia Mike — Sega Dreamcast

    I’m still haunted by Sega Dreamcast programming, specifically the fact that I used to be able to execute custom programs on the thing (roughly 8-10 years ago) and now I cannot. I’m going to compose a post to describe my current adventures on this front. There are 3 approaches I have been using: Raw, KallistiOS, and the almighty Linux.


    Raw
    What I refer to as "raw" is an assortment of programs that lived in a small number of source files (sometimes just one ASM file) and could be compiled with the most basic SH-4 toolchain. The advantage here is that there aren’t many moving parts and not many things that can possibly go wrong, so it provides a good functional baseline.

    One of the original Dreamcast hackers was Marcus Comstedt, who still has his original DC material hosted at the reasonably easy-to-remember URL mc.pp.se/dc. I can get some of these simple demos to work, but not others.

    I also successfully assembled and ran a pair of 256-byte (!!) demos from this old DC scene page.

    KallistiOS
    KallistiOS (or just KOS) was a real-time OS developed for the DC and was popular among the DC homebrew community. All the programming I did back in the day was based around KOS. Now I can’t get any of it to work. More specifically, KOS can’t seem to make it past a certain point in its system initialization.

    The Linux Option
    I was never that excited about running Linux on my Dreamcast. For some hackers, running Linux on a given piece of consumer electronics is the highest attainable goal. Back in the day, I looked at it from a much more pragmatic perspective— I didn’t see much use in running Linux on the DC, not as much as running KOS which was developed to be a much more appropriate fit.

    However, I was able to burn a CD-R of an old binary image of Linux 2.4.5 compiled for the Dreamcast and boot it some months ago. So I at least have a feeling that this should work. I have never cross-compiled a kernel of my own (though I have compiled many, many x86 kernels in my time, so I’m not a total n00b in this regard). I figured this might be a good time to start.

    The first item that worries me is getting a functional cross-compiling toolchain. Fortunately, a little digging in the Linux kernel documentation pointed me in the direction of a bunch of ready-made toolchains hosted at kernel.org. So I grabbed one of the SH toolchains (gcc-4.3.3-nolibc) and got rolling.

    I’m well familiar with the cycle of 'make menuconfig' in order to pick configuration options, and then 'make' to build a kernel (or usually 'make zImage' or 'make bzImage' to create compressed images). For cross compiling, the primary difference seems to be editing the root Makefile in the Linux source code tree (I’m using 2.6.37, the latest stable as of this writing) and setting a value for the CROSS_COMPILE variable. Then, run 'make menuconfig' followed by 'make' as normal.

    The Linux 2.6 series is supposed to support a range of Renesas (formerly Hitachi) SH processors and board configurations. This includes reasonable defaults for the Sega Dreamcast hardware. I got it all compiling except for a series of .S files. Linus Torvalds once helped me debug a program I work on so I thought I’d see if there was something I could help debug here.

    The first issue was with ASM statements of a form similar to:

    mov #0xffffffe0, r1
    

    Now, the DC’s SH-4 is a RISC CPU. A lot of RISC architectures adopt a fixed instruction size of 32 bits. You can’t encode an entire 32-bit immediate value inside of a 32-bit instruction (there would be no room for the instruction encoding). Further, the SH series encoded instructions with a mere 16 bits. The move immediate data instruction only allows for an 8-bit, sign-extended value.

    I decided that the above statement is equivalent to:

    mov #-32, r1
    

    I’ll give this statement the benefit of the doubt that it used to work with the gcc toolchain somewhere along the line. I assume that the assembler is supposed to know enough to substitute the first form with the second.
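
    As a quick sanity check of that equivalence, here is a throwaway Python sketch (nothing to do with the kernel build itself) showing that sign-extending the low byte of 0xffffffe0 recovers -32:

    imm = 0xffffffe0

    # SH-4's 'mov #imm, Rn' encodes only the low 8 bits of the immediate,
    # so the value is only representable if sign extension recovers it.
    low_byte = imm & 0xff                          # 0xe0
    sign_extended = low_byte - 0x100 if low_byte & 0x80 else low_byte

    print(hex(low_byte))                           # 0xe0
    print(sign_extended)                           # -32
    print((sign_extended & 0xffffffff) == imm)     # True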

    The next problem is that an ’sti’ instruction shows up in a number of spots. Using Intel x86 conventions, this is a "set interrupt flag" instruction (I remember that the 6502 CPU had the same instruction mnemonic, though its interrupt flag’s operation was opposite that of the x86). The SH-4 reference manual lists no ’sti’ instruction. When it gets to these lines, the assembler complains about immediate move instructions whose data is too large, just like the instruction above. I’m guessing they must be macro’d to something else but I failed to find where. I commented out those lines for the time being. Probably not that smart, but I want to keep this moving for now.

    So I got the code to compile into a kernel file called ’vmlinux’. I’ve seen this file many times before but never thought about how to get it to run directly. The process has usually been to compress it and send it over to lilo or grub for loading, as that is the job of the bootloader. I had never even wondered what format the vmlinux file takes until now. It seems that ’vmlinux’ is just a plain old ELF file:

    $ file vmlinux
    vmlinux: ELF 32-bit LSB executable, Renesas SH,
    version 1 (SYSV), statically linked, not stripped
    

    The ’dc-tool’ program that uploads executables to the waiting bootloader on the Dreamcast is perfectly cool with accepting ELF files (and S-record files, and raw binary files). After a very lengthy upload process, execution fails (the system resets).

    For the sake of comparison, I dusted off that Linux 2.4.5 bootable Dreamcast CD-ROM and directly uploaded the vmlinux file from that disc. That works just fine (until it’s time to go to the next loading phase, i.e., finding a filesystem). Possible issues here could include the commented ’sti’ instructions (could be that they aren’t just decoration). I’m also trying to understand the memory organization— perhaps the bootloader wants the ELF to be based at a different address. Or maybe the kernel and the bootloader don’t like each other in the first place— in this case, I need to study the bootable Linux CD-ROM to see how it’s done.

    Optimism
    Even though I’m meeting with rather marginal success, this is tremendously educational. I greatly enjoy these exercises, if only for the deeper understanding of the lowest-level system details that they bring.

  • How to Use Analytics & Reports for Marketing, Sales & More

    28 September 2023, by Erin — Analytics Tips

    By now, most professionals know they should be using analytics and reports to make better business decisions. Blogs and thought leaders talk about it all the time. But most sources don’t tell you how to use analytics and reports. So marketers, salespeople and others either skim whatever reports they come across or give up on making data-driven decisions entirely. 

    But it doesn’t have to be this way.

    In this article, we’ll cover what analytics and reports are and how they differ, and we’ll give you examples of each. Then, we’ll explain how clean data comes into play and how marketing, sales, and user experience teams can use reports and analytics to uncover actionable insights.

    What’s the difference between analytics & reports?

    Many people speak of reports and analytics as if the terms are interchangeable, but they have two distinct meanings.

    A report is a collection of data presented in one place. By tracking key metrics and providing numbers, reports tell you what is happening in your business. Analytics is the study of data and the process of generating insights from data. Both rely on data and are essential for understanding and improving your business results.


    A science experiment is a helpful analogy for how reporting and analytics work together. To conduct an experiment, scientists collect data and results and compile a report of what happened. But the process doesn’t stop there. After generating a data report, scientists analyse the data and try to understand the why behind the results.

    In a business context, you collect and organise data in reports. With analytics, you then use those reports and their data to draw conclusions about what works and what doesn’t.

    Report examples

    Reports are a valuable tool for just about any part of your business, from sales to finance to human resources. For example, your finance team might collect data about spending and use it to create a report. It might show how much you spend on employee compensation, real estate, raw materials and shipping.

    On the other hand, your marketing team might benefit from a report on lead sources. This would mean collecting data on where your sales leads come from (social media, email, organic search, etc.). You could collect and present lead source data over time for a more in-depth report. This shows which sources are becoming more effective over time. With advanced tools, you can create detailed, custom reports that include multiple factors, such as time, geographical location and device type.

    Analytics examples 

    Because analytics means drawing insights from data, and reports are how that data gets collected and presented, analytics often begins with studying reports.

    In our example of a report on lead sources, an analytics professional might study the report and notice that webinars are an important source of leads. To better understand this, they might look closely at the number of leads acquired compared to how often webinars occur. If they notice that the number of webinar leads has been growing, they might conclude that the business should invest in more webinars to generate more leads. This is just one kind of insight analytics can provide.

    For another example, your human resources team might study a report on employee retention. After analysing the data, they could discover valuable insights, such as which teams have the highest turnover rate. Further analysis might help them uncover why certain teams fail to keep employees and what they can do to solve the problem.

    The importance of clean data 

    Both analytics and reporting rely on data, so it’s essential your data is clean. Clean data means you’ve audited your data, removed inaccuracies and duplicate entries, and corrected mislabelled data or errors. Basically, you want to ensure that each piece of information you’re using for reports and analytics is accurate and organised correctly.

    If your data isn’t clean and accurate, neither will your reports be. And making business decisions based on bad data can come at a considerable cost. Inaccurate data might lead you to invest in a channel that appears more valuable than it actually is. Or it could cause you to overlook opportunities for growth. Moreover, poor data maintenance, and the poor insights it produces, will erode your team’s trust in your reports and in your analytics team.

    The simplest way to maintain clean data is to be meticulous when inputting or transferring data. This can be as simple as ensuring that your sales team fills in every field of an account record. When you need to import or transfer data from other sources, you need to perform quality assurance (QA) checks to make sure data is appropriately labelled and organised. 
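
    As a rough illustration only (a generic Python sketch with hypothetical column names, not a Matomo feature), a minimal QA pass over an exported CSV of account records might look like this:

    import pandas as pd

    # Hypothetical export of account records; the column names are
    # invented for this sketch.
    records = pd.read_csv("account_records.csv")

    # Remove exact duplicate rows.
    records = records.drop_duplicates()

    # Normalise an inconsistently labelled field, e.g. lead source.
    records["lead_source"] = records["lead_source"].str.strip().str.lower()

    # Flag records with missing required fields for manual review.
    required = ["account_id", "lead_source", "country"]
    incomplete = records[records[required].isna().any(axis=1)]
    print(f"{len(incomplete)} records need attention")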

    Another way to maintain clean data is by avoiding cookies. Most web visitors reject cookie consent banners. When this happens, analysts and marketers get no data on those visitors and see only the subset of users who accept tracking. Their decisions then rest on a smaller sample, which leads to poor or inaccurate data. These banners also create a poor user experience and annoy web visitors.

    Matomo can be configured to run cookieless — which, in most countries, means you don’t need to have an annoying cookie consent screen on your site. This way, you can get more accurate data and create a better user experience.

    Marketing analytics and reports 

    Analytics and reporting help you measure and improve the effectiveness of your marketing efforts. They help you learn what’s working and what you should invest more time and money into. And bolstering the effectiveness of your marketing will create more opportunities for sales.

    One common area where marketing teams use analytics and reports is to understand and improve their keyword rankings and search engine optimization. They use web analytics platforms like Matomo to report on how their website performs for specific keywords. Insights from these reports are then used to inform changes to the website and the development of new content.

    As we mentioned above, marketing teams often use reports on lead sources to understand how their prospects and customers are learning about the brand. They might analyse their lead sources to better understand their audience. 

    For example, if your company finds that you receive a lot of leads from LinkedIn, you might decide to study the content you post there and how it differs from other platforms. You could apply a similar content approach to other channels to see if it increases lead generation. You can then study reporting on how lead source data changes after you change content strategies. This is one example of how analysing a report can lead to marketing experimentation. 

    Email and paid advertising are also marketing channels that can be optimised with reports and analysis. By studying the data around what emails and ads your audience clicks on, you can draw insights into what topics and messaging resonate with your customers.

    Marketing teams often use A/B testing to learn about audience preferences. In an A/B test, you can test two landing page versions, such as two different types of call-to-action (CTA) buttons. Matomo will generate a report showing how many people clicked each version. From those results, you may draw an insight into the design your audience prefers.

    Sales analytics and reports 

    Sales analytics and reports are used to help teams close more deals and sell more efficiently. They also help businesses understand their revenue, set goals, and optimise sales processes. And understanding your sales and revenue allows you to plan for the future.

    One of the keys to building a successful sales strategy and team is understanding your sales cycle. That’s why it’s so important for companies to analyse their lead and sales data. For business-to-business (B2B) companies in particular, the sales cycle can be a long process. But you can use reporting and analytics to learn about the stages of the buying cycle, including how long they take and how many leads proceed to the next step.

    Analysing lead and customer data also allows you to gain insights into who your customers are. With detailed account records, you can track where your customers are, what industries they come from, what their role is and how much they spend. While you can use reports to gather customer data, you also have to use analysis and qualitative information in order to build buyer personas. 

    Many sales teams use past individual and business performance to understand revenue trends. For instance, you might study historical data reports to learn how seasonality affects your revenue. If you dive deeper, you might find that seasonal trends may depend on the country where your customers live. 


    Conversely, it’s also important to analyse what internal variables are affecting revenue. You can use revenue reports to identify your top-performing sales associates. You can then try to expand and replicate that success. While sales is a field often driven by personal relationships and conversations, many types of reports allow you to learn about and improve the process.

    Website and user behaviour analytics and reports 

    More and more, businesses view their websites as an experience and user behaviour as an important part of their business. And just like sales and marketing, reporting and analytics help you better understand and optimise your web experience. 

    Many web and user behaviour metrics, like traffic source, have important implications for marketing. For example, page traffic and user flows can provide valuable insights into what your customers are interested in. This can then drive future content development and marketing campaigns.

    You can also learn about how your users navigate and use your website. A robust web analytics tool, like Matomo, can supply user session recordings and visitor tracking. For example, you could study which pages a particular user visits. But Matomo also has a feature called Transitions that provides visual reports showing where a particular page’s traffic comes from and where visitors tend to go afterward. 

    As you consider why people might be leaving your website, site performance is another important area for reporting. Most users are accustomed to near-instantaneous web experiences, so it’s worth monitoring your page load time and looking out for backend delays. In today’s world, your website experience is part of what you’re selling to customers. Don’t miss out on opportunities to impress and delight them.

    Dive into your data

    Reporting and analytics can seem like mysterious buzzwords we’re all supposed to understand already. But, like anything else, they require definitions and meaningful examples. When you dig into the topic, though, the applications for reporting and analytics are endless.

    Use these examples to identify how you can use analytics and reports in your role and department to achieve better results, whether that means higher quality leads, bigger deal size or a better user experience.

    To see how Matomo can collect accurate and reliable data and turn it into in-depth analytics and reports, start a free 21-day trial. No credit card required.

  • Parsing The Clue Chronicles

    30 December 2018, by Multimedia Mike — Game Hacking

    A long time ago, I procured a 1999 game called Clue Chronicles: Fatal Illusion, based on the classic board game Clue, a.k.a. Cluedo. At the time, I was big into collecting old, unloved PC games so that I could research obscure multimedia formats.



    Surveying the 3 CD-ROMs contained in the box packaging revealed only Smacker (SMK) videos for full motion video, which was nothing new to me or the multimedia hacking community at the time. Studying the mix of data formats present on the discs, I found a selection of straightforward formats such as WAV for audio and BMP for still images. I generally find myself more fascinated by how computer games are constructed than by playing them, and this mix of files has always triggered a strong “I could implement a new engine for this!” feeling in me, perhaps as part of the ScummVM project, which already provides the core infrastructure for reimplementing engines for 2D adventure games.

    Tying all of the assets together is a custom high-level programming language. I touched on this before in a blog post over a decade ago. The scripts are in a series of files bearing the extension .ini (usually reserved for configuration files, but we’ll let that slide). A representative sample of such a script can be found here:

    clue-chronicles-scarlet-1.txt

    What Is This Language?
    At the time I first analyzed this language, I was still primarily a C/C++-minded programmer, with a decent amount of Perl experience as a high level language, and had just started to explore Python. I assessed this language to be “mildly object oriented with C++-type comments (‘//’) and reliant upon a number of implicit library functions”. Other people saw other properties. When I look at it nowadays, it reminds me a bit more of JavaScript than C++. I think it’s sort of a Rorschach test for programming languages.

    Strangely, I sort of had this fear that I would put a lot of effort into figuring out how to parse out the language only for someone to come along and point out that it’s a well-known yet academic language that already has a great deal of supporting code and libraries available as open source. Google for “spanish dolphins far side comic” for an illustration of the feeling this would leave me with.

    It doesn’t matter in the end. Even if such libraries exist, how easy would they be to integrate into something like ScummVM? Time to focus on a workable approach to understanding and processing the format.

    Problem Scope
    So I set about to see if I could write a program to parse the language seen in these INI files. Some questions:

    1. How large is the corpus of data that I need to be sure to support?
    2. What parsing approach should I take?
    3. What is the exact language format?
    4. Other hidden challenges?

    To figure out how large the data corpus is, I counted all of the INI files on all of the discs. There are 138 unique INI files between the 3 discs. However, there are 146 unique INI files after installation. This leads to a hidden challenge described a bit later.

    What parsing approach should I take? I worried a bit too much that I might not be doing this the “right” way. I’m trying to ignore doubts like this, like how “SQL Shame” blocked me on a task for a little while a few years ago as I concerned myself that I might not be using the purest, most elegant approach to the problem. I know I covered language parsing a long time ago in my university computer science education and there is a lot of academic literature on the matter. But sometimes, you just have to charge in and experiment and prototype and see what falls out. In doing so, I expect to gain a better understanding of the problems that need to be solved and the right questions to ask, not unlike that time that I wrote a continuous integration system from scratch because I didn’t actually know that “continuous integration” was the keyword I needed.

    Next, what is the exact language format? I realized that parsing the language isn’t the first and foremost problem here– I need to know exactly what the language is. I need to know what the grammar and keywords are. In essence, I need to reverse engineer the language before I write a proper parser for it. I guess that fits in nicely with the historical aim of this blog (reverse engineering).

    Now, about the hidden challenges– I mentioned that there are 8 more INI files after the game installs itself. Okay, so what’s the big deal? For some reason, all of the INI files are in plaintext on the CD-ROM but get compressed (apparently, judging by file size ratios) when installed to the hard drive. This includes those 8 extra INI files. I thought to look inside the CAB installation archive file on the CD-ROM and the files were there… but all in compressed form. I suspect that one of the files forms the “root” of the program and is the launching point for the game.

    Parsing Approach
    I took a stab at parsing an INI file. My approach was to first perform lexical analysis on the file and create a list of 4 token types: symbols, numbers, strings, and language elements ([]{}()=.,:). Apparently, this is the kind of thing that Lex/Flex are good at. This prototyping tool is written in Python, but when I port this to ScummVM, it might be useful to call upon the services of Lex/Flex, or another lexical analyzer, for there are many. I have a feeling it will be easier to use better tools once I understand the full structure of the language based on the data available.
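
    A minimal sketch of that lexer pass (the four token categories are the real ones; the regular expressions and file layout are simplified guesses, not the actual prototype):

    import re
    from collections import Counter
    from pathlib import Path

    # One alternative per token type.
    TOKEN_RE = re.compile(
        r'(?P<string>"[^"]*")'
        r'|(?P<number>-?\d+(?:\.\d+)?)'
        r'|(?P<symbol>[A-Za-z_]\w*)'
        r'|(?P<element>[\[\]{}()=.,:])'
    )

    def tokenize(text):
        text = re.sub(r'//.*', '', text)     # drop C++-style comments
        for m in TOKEN_RE.finditer(text):
            yield m.lastgroup, m.group()

    # Tally symbol frequencies across the corpus; 'ini' stands in for
    # wherever the 138 plaintext scripts live.
    counts = Counter()
    for path in Path('ini').glob('*.ini'):
        for kind, value in tokenize(path.read_text(errors='replace')):
            if kind == 'symbol':
                counts[value.lower()] += 1

    for symbol, count in counts.most_common():
        if count > 1000:
            print(f'{count:6d} {symbol}')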

    The purpose of this tool is to explore all the possibilities of the existing corpus of INI files. To that end, I ran all 138 of the plaintext files through it, collected all of the symbols, and massaged the results, assuming that the symbols that occurred most frequently are probably core language features. These are all the symbols which occur more than 1000 times among all the scripts:

       6248 false
       5734 looping
       4390 scripts
       3877 layer
       3423 sequentialscript
       3408 setactive
       3360 file
       3257 thescreen
       3239 true
       3008 autoplay
       2914 offset
       2599 transparent
       2441 text
       2361 caption
       2276 add
       2205 ge
       2197 smackanimation
       2196 graphicscript
       2196 graphic
       1977 setstate
       1642 state
       1611 skippable
       1576 desc
       1413 delayscript
       1298 script
       1267 seconds
       1019 rect
    

    About That Compression
    I have sorted out at least these few details of the compression:

    bytes 0-3    "COMP" (a pretty strong sign that this is, in fact, compressed data)
    bytes 4-11   unknown
    bytes 12-15  size of uncompressed data
    bytes 16-19  size of compressed data (filesize - 20)
    bytes 20-    compressed payload
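
    A few lines of Python pull those fields apart (little-endian integers are an assumption on my part, but one that matches the observed sizes):

    import struct

    def read_comp_header(path):
        # Parse the 20-byte header of a 'COMP' file as laid out above.
        with open(path, 'rb') as f:
            header = f.read(20)
            payload = f.read()

        if header[0:4] != b'COMP':
            raise ValueError('not a COMP file')

        unknown = header[4:12]                  # bytes 4-11, still a mystery
        uncompressed_size, compressed_size = struct.unpack('<II', header[12:20])
        assert compressed_size == len(payload)  # i.e., filesize - 20
        return unknown, uncompressed_size, payload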
    

    The compression ratios are on the same order as gzip’s. I was hoping that it was stock zlib data. However, I have been unable to prove this. I wrote a Python script that scrubbed through the first 100 bytes of payload data and tried to get Python’s zlib.decompress to initialize– no luck. It’s frustrating to know that I’ll have to reverse engineer a compression algorithm that deals with just 8 total text files if I want to see this effort through to fruition.
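
    That probe amounts to only a few lines; here is a sketch reconstructed from the description above (not the original script):

    import sys
    import zlib

    data = open(sys.argv[1], 'rb').read()
    payload = data[20:]                  # skip the 20-byte COMP header

    # Slide through the first 100 bytes looking for anything that zlib
    # will even begin to accept as a stream header.
    for offset in range(100):
        d = zlib.decompressobj()
        try:
            d.decompress(payload[offset:offset + 64])
        except zlib.error:
            continue
        print(f'possible zlib stream at payload offset {offset}')
        break
    else:
        print('nothing in the first 100 bytes looks like a zlib stream')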

    Update, January 15, 2019
    Some folks expressed interest in trying to sort out the details of the compression format. So I have posted a followup in which I share some samples and go into deeper detail about things I have tried:

    Reverse Engineering Clue Chronicles Compression
