
Other articles (106)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To use it, enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen), enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (12386)

  • Visualizing Call Graphs Using Gephi

    1 September 2014, by Multimedia Mike — General

    When I was at university studying computer science, I took a basic chemistry course. During an accompanying lab, the teaching assistant chatted me up and asked about my major. He then said, “Computer science? Well, that’s just typing stuff, right?”

    My impulsive retort: “Sure, and chemistry is just about mixing together liquids and coming up with different colored liquids, as seen on the cover of my high school chemistry textbook, right?”


    Chemistry fun

    In fact, pure computer science has precious little to do with typing (as is joked in CS circles, computer science is about computers in the same way that astronomy is about telescopes). However, people who study computer science often pursue careers as programmers, or to put it in fancier professional language, software engineers.

    So, what’s a software engineer’s job? Isn’t it just typing? That’s where I’ve been going with this overly long setup. After thinking about it for long enough, I like to say that a software engineer’s trade is managing complexity.

    A few years ago, I discovered Gephi, an open source tool for graph and data visualization. It looked neat but I didn’t have much use for it at the time. Recently, however, I was trying to get a better handle on a large codebase. I.e., I was trying to manage the project’s complexity. And then I thought of Gephi again.

    Prior Work
    One way to get a grip on a large C codebase is to instrument it for profiling and extract details from the profiler. On Linux systems, this means compiling and linking the code using the -pg flag. After running the executable, there will be a gmon.out file which is post-processed using the gprof command.

    GNU software development tools have a reputation for being rather powerful and flexible, but also extremely raw. This first hit home when I was learning how to use the GNU tool for code coverage — gcov — and the way it outputs very raw data that you need to massage with other tools in order to get really useful intelligence.

    And so it is with gprof output. The output gives you a list of functions sorted by the amount of processing time spent in each. Then it gives you a flattened call tree. This is arranged as “during the profiled executions, function c was called by functions a and b and called functions d, e, and f; function d was called by function c and called functions g and h”.

    How can this call tree data be represented in a more instructive manner that is easier to navigate? My first impulse (and I don’t think I’m alone in this) is to convert the gprof call tree into a representation suitable for interpretation by Graphviz. Unfortunately, doing so tends to generate some enormous and unwieldy static images.

    Feeding gprof Data To Gephi
    I learned of Gephi a few years ago and recalled it when I developed an interest in gaining better perspective on a large base of alien C code. To understand what this codebase is doing for a particular use case, instrument it with gprof, gather execution data, and then study the code paths.

    How could I feed the gprof data into Gephi? Gephi supports numerous graphing formats, including an XML-based format named GEXF.

    Thus, the challenge becomes converting gprof output to GEXF.

    Which I did.
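
    The actual converter is linked at the end of this post. Purely to illustrate the shape of the conversion, here is a minimal, hypothetical Python sketch of the output half, assuming the (caller, callee, call count) triples have already been scraped from gprof’s call-graph section:

    import xml.etree.ElementTree as ET

    def edges_to_gexf(edges, path):
        # One directed GEXF graph: a node per function, a weighted edge per
        # caller/callee pair (weight = number of calls observed by gprof).
        gexf = ET.Element("gexf", xmlns="http://www.gexf.net/1.2draft", version="1.2")
        graph = ET.SubElement(gexf, "graph", defaultedgetype="directed")
        nodes = ET.SubElement(graph, "nodes")
        links = ET.SubElement(graph, "edges")
        for name in {f for caller, callee, _ in edges for f in (caller, callee)}:
            ET.SubElement(nodes, "node", id=name, label=name)
        for i, (caller, callee, count) in enumerate(edges):
            ET.SubElement(links, "edge", id=str(i), source=caller,
                          target=callee, weight=str(count))
        ET.ElementTree(gexf).write(path, encoding="UTF-8", xml_declaration=True)

    # e.g. the single hottest edge from the demonstration below:
    edges_to_gexf([("decode_coeffs_b", "iwht_iwht_4x4_add_c", 18774)], "calls.gexf")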

    Demonstration
    I have been absent from FFmpeg development for a long time, which is a pity because a lot of interesting development has occurred over the last 2-3 years after a troubling period of stagnation. I know that 2 big video codec developments have been HEVC (next in the line of MPEG codecs) and VP9 (heir to VP8’s throne). FFmpeg implements them both now.

    I decided I wanted to study the code flow of VP9. So I got the latest FFmpeg code from git and built it using the options "--extra-cflags=-pg --extra-ldflags=-pg". Annoyingly, I also needed to specify "--disable-asm" because gcc complains of some register allocation snafus when compiling inline ASM in profiling mode (and this is on x86_64). No matter; ASM isn’t necessary for understanding overall code flow.

    After compiling, the binary ‘ffmpeg_g’ will have symbols and be instrumented for profiling. I grabbed a sample from this VP9 test vector set and went to work.

    # decode the sample and discard the output; this run writes gmon.out
    ./ffmpeg_g -i vp90-2-00-quantizer-00.webm -f null /dev/null
    # post-process gmon.out into a human-readable profile
    gprof ./ffmpeg_g > vp9decode.txt
    # convert the gprof call tree into GEXF for Gephi
    convert-gprof-to-gexf.py vp9decode.txt > /bigdisk/vp9decode.gexf
    

    Gephi loads vp9decode.gexf with no problem. Using Gephi, however, can be a bit challenging if one is not versed in data exploration jargon. I recommend this Gephi getting started guide in slide deck form. Here’s what the default graph looks like:


    gprof-ffmpeg-gephi-1

    Not very pretty or helpful. BTW, that beefy arrow running from mid-top to lower-right is the call from decode_coeffs_b -> iwht_iwht_4x4_add_c. There were 18,774 calls from the former to the latter in this execution. Right now, the edge thicknesses correlate to the number of calls between the nodes, which I’m not sure is the best representation.

    Following the tutorial slide deck, I at least learned how to enable the node labels (function symbols in this case) and apply a layout algorithm. The tutorial shows the force atlas layout. Here’s what the node neighborhood looks like for probing file type:


    gprof-ffmpeg-gephi-2

    Okay, so that’s not especially surprising: avprobe_input_format3 calls all of the *_probe functions in order to automatically determine input type. Let’s find that decode_coeffs_b function and see what its neighborhood looks like:


    gprof-ffmpeg-gephi-3

    That’s not very useful. Perhaps another algorithm might help. I select the Fruchterman–Reingold algorithm instead and get a slightly more coherent representation of the decoding node neighborhood:


    gprof-ffmpeg-gephi-4

    Further Work
    Obviously, I’m just getting started with this data exploration topic. One thing I would really appreciate in such a tool is the ability to interactively traverse the graph, since that’s what I’m really hoping to get out of this experiment: watching the code flows.

    Perhaps someone else can find better use cases for visualizing call graph data. Thus, I have published the source code for this tool on GitHub.

  • Parsing The Clue Chronicles

    30 December 2018, by Multimedia Mike — Game Hacking

    A long time ago, I procured a 1999 game called Clue Chronicles: Fatal Illusion, based on the classic board game Clue, a.k.a. Cluedo. At the time, I was big into collecting old, unloved PC games so that I could research obscure multimedia formats.



    Surveying the 3 CD-ROMs contained in the box packaging revealed only Smacker (SMK) videos for full-motion video, which was nothing new to me or the multimedia hacking community at the time. Studying the mix of data formats present on the discs, I found a selection of straightforward formats such as WAV for audio and BMP for still images. I generally find myself more fascinated by how computer games are constructed rather than by playing them, and this mix of files has always triggered a strong “I could implement a new engine for this!” feeling in me, perhaps as part of the ScummVM project, which already provides the core infrastructure for reimplementing engines for 2D adventure games.

    Tying all of the assets together is a custom high-level programming language. I have touched on this before in a blog post over a decade ago. The scripts are in a series of files bearing the extension .ini (usually reserved for configuration scripts, but we’ll let that slide). A representative sample of such a script can be found here:

    clue-chronicles-scarlet-1.txt

    What Is This Language?
    At the time I first analyzed this language, I was still primarily a C/C++-minded programmer, with a decent amount of Perl experience as a high level language, and had just started to explore Python. I assessed this language to be “mildly object oriented with C++-type comments (‘//’) and reliant upon a number of implicit library functions”. Other people saw other properties. When I look at it nowadays, it reminds me a bit more of JavaScript than C++. I think it’s sort of a Rorschach test for programming languages.

    Strangely, I sort of had this fear that I would put a lot of effort into figuring out how to parse out the language only for someone to come along and point out that it’s a well-known yet academic language that already has a great deal of supporting code and libraries available as open source. Google for “spanish dolphins far side comic” for an illustration of the feeling this would leave me with.

    It doesn’t matter in the end. Even if such libraries exist, how easy would they be to integrate into something like ScummVM? Time to focus on a workable approach to understanding and processing the format.

    Problem Scope
    So I set about seeing whether I could write a program to parse the language seen in these INI files. Some questions:

    1. How large is the corpus of data that I need to be sure to support?
    2. What parsing approach should I take?
    3. What is the exact language format?
    4. Other hidden challenges?

    To figure out how large the data corpus is, I counted all of the INI files on all of the discs. There are 138 unique INI files between the 3 discs. However, there are 146 unique INI files after installation. This leads to a hidden challenge described a bit later.

    What parsing approach should I take? I worried a bit too much that I might not be doing this the “right” way. I’m trying to ignore doubts like this, like how “SQL Shame” blocked me on a task for a little while a few years ago as I concerned myself that I might not be using the purest, most elegant approach to the problem. I know I covered language parsing a long time ago in my university computer science education, and there is a lot of academic literature on the matter. But sometimes you just have to charge in and experiment and prototype and see what falls out. In doing so, I expect to gain a better understanding of the problems that need to be solved and the right questions to ask, not unlike the time I wrote a continuous integration system from scratch because I didn’t actually know that “continuous integration” was the keyword I needed.

    Next, what is the exact language format? I realized that parsing the language isn’t the first and foremost problem here; I need to know exactly what the language is. I need to know what the grammar and keywords are. In essence, I need to reverse engineer the language before I write a proper parser for it. I guess that fits in nicely with the historical aim of this blog (reverse engineering).

    Now, about the hidden challenges: I mentioned that there are 8 more INI files after the game installs itself. Okay, so what’s the big deal? For some reason, all of the INI files are in plaintext on the CD-ROM but get compressed (apparently, judging by file size ratios) when installed to the hard drive. This includes those 8 extra INI files. I thought to look inside the CAB installation archive file on the CD-ROM, and the files were there… but all in compressed form. I suspect that one of the files forms the “root” of the program and is the launching point for the game.

    Parsing Approach
    I took a stab at parsing an INI file. My approach was to first perform lexical analysis on the file and create a list of 4 token types: symbols, numbers, strings, and language elements ([]{}()=.,:). Apparently, this is the kind of thing that Lex/Flex are good at. This prototyping tool is written in Python, but when I port this to ScummVM, it might be useful to call upon the services of Lex/Flex, or another lexical analyzer, for there are many. I have a feeling it will be easier to use better tools once I understand the full structure of the language based on the data available.
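
    To make that concrete, here is a minimal, hypothetical Python sketch of such a lexer. The four token classes come straight from the description above; the exact regular expressions are my own guesses, not the real grammar:

    import re

    TOKEN = re.compile(r'''
          (?P<string>"[^"]*")            # double-quoted string literal
        | (?P<number>-?\d+(?:\.\d+)?)    # integer or decimal number
        | (?P<symbol>[A-Za-z_]\w*)       # identifier / keyword
        | (?P<element>[][{}()=.,:])      # single-character language elements
    ''', re.VERBOSE)

    def tokenize(text):
        text = re.sub(r'//.*', '', text)  # strip the C++-style comments first
        for m in TOKEN.finditer(text):
            yield m.lastgroup, m.group()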

    The purpose of this tool is to explore all the possibilities of the existing corpus of INI files. To that end, I ran all 138 of the plaintext files through it, collected all of the symbols, and massaged the results, assuming that the symbols that occurred most frequently are probably core language features. These are all the symbols which occur more than 1000 times among all the scripts:

       6248 false
       5734 looping
       4390 scripts
       3877 layer
       3423 sequentialscript
       3408 setactive
       3360 file
       3257 thescreen
       3239 true
       3008 autoplay
       2914 offset
       2599 transparent
       2441 text
       2361 caption
       2276 add
       2205 ge
       2197 smackanimation
       2196 graphicscript
       2196 graphic
       1977 setstate
       1642 state
       1611 skippable
       1576 desc
       1413 delayscript
       1298 script
       1267 seconds
       1019 rect
    
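
    For the curious, the counting and massaging step is only a few more lines of Python on top of a tokenize() helper like the sketch above (the lowercasing and the file globbing are assumptions):

    from collections import Counter
    import glob

    counts = Counter()
    for path in glob.glob('**/*.ini', recursive=True):
        with open(path, errors='replace') as f:
            counts.update(tok for kind, tok in tokenize(f.read().lower())
                          if kind == 'symbol')

    for sym, n in counts.most_common():
        if n > 1000:
            print(f'{n:8d} {sym}')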

    About That Compression
    I have sorted out at least these few details of the compression:

    bytes 0-3    "COMP" (a pretty strong sign that this is, in fact, compressed data)
    bytes 4-11   unknown
    bytes 12-15  size of uncompressed data
    bytes 16-19  size of compressed data (filesize - 20)
    bytes 20-    compressed payload
    

    The compression ratios are on the same order as gzip’s. I was hoping that it was stock zlib data. However, I have been unable to prove this. I wrote a Python script that scrubbed through the first 100 bytes of payload data and tried to get Python’s zlib.decompress to initialize; no luck. It’s frustrating to know that I’ll have to reverse engineer a compression algorithm that deals with just 8 total text files if I want to see this effort through to fruition.
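
    For reference, here is a small, hypothetical Python sketch of both experiments: it parses the header fields from the table above, then slides through the start of the payload looking for anything zlib will accept (the little-endian byte order is an assumption):

    import struct, sys, zlib

    data = open(sys.argv[1], 'rb').read()
    # Header layout from the table above; byte order assumed little-endian.
    magic, _unknown, usize, csize = struct.unpack_from('<4s8sII', data, 0)
    assert magic == b'COMP' and csize == len(data) - 20
    print('uncompressed size per header:', usize)

    # zlib fails almost immediately on a bogus stream header, so probing
    # every offset in the first 100 payload bytes is cheap.
    for skip in range(100):
        try:
            zlib.decompressobj().decompress(data[20 + skip:20 + skip + 64])
            print('possible zlib stream at payload offset', skip)
        except zlib.error:
            pass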

    Update, January 15, 2019
    Some folks expressed interest in trying to sort out the details of the compression format. So I have posted a followup in which I post some samples and go into deeper detail about the things I have tried:

    Reverse Engineering Clue Chronicles Compression


  • 6 Crucial Benefits of Conversion Rate Optimisation

    26 February 2024, by Erin

    Whether investing time or money in marketing, you want the best return on your investment. You want to get as many customers as possible with your budget and resources.

    That’s what conversion rate optimisation (CRO) aims to do. But how does it help you achieve this major goal?

    This guide explores the concrete benefits of conversion rate optimisation and how they lead to more effective marketing and ROI. We’ll also introduce specific CRO best practices to help unlock these benefits.

    What is conversion rate optimisation?

    Conversion rate optimisation (CRO) is the process of examining your website for improvements and creating tests to increase the number of visitors who take a desired action, like purchasing a product or submitting a form.

    The conversion rate is the percentage of visitors who complete a specific goal.
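
    As a quick worked example (the numbers are hypothetical): conversion rate = (conversions ÷ total visitors) × 100, so 50 orders from 2,000 visitors gives (50 ÷ 2,000) × 100 = 2.5%.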

    Illustration of what conversion rate optimisation is

    In order to improve your conversion rate, you need to figure out:

    • Where your customers come from
    • How potential customers navigate or interact with your website
    • Where potential customers are likely to exit your site (or abandon carts)
    • What patterns drive valuable actions like sign-ups and sales

    From there, you can gradually implement changes that will drive more visitors to convert. That’s the essence of conversion rate optimisation.

    6 top benefits of conversion rate optimisation (and best practices to unlock them)

    Conversion rate optimisation can help you get more out of your campaigns without investing more. CRO helps you in these six ways:

    1. Understand your visitors (and customers) better

    The main goal of CRO is to boost conversions, but it’s more than that. In the process of improving conversion rates, you’ll also gain deep insights into user behaviour, preferences, and needs.

    Using web analytics, tests and behavioural analytics, CRO helps marketers shape their website to match what users need.

    Best practices for understanding your customer:

    First, analyse how visitors act with full context (the pages they view, how long they stay and more). 

    In Matomo, you can use the Users Flow report to understand how visitors navigate through your site. This will help you visualise and identify trends in the buyer’s journey.

    User flow chart in Matomo analytics

    Then, you can dive deeper by defining and analysing journeys with Funnels. This shows you how many potential customers follow through on each step in your defined journey and helps you identify where you might have a leaky funnel.

    Goal funnel chart in Matomo analytics

    In the above Funnel Report, fewer than half of our visitors (just 44%) are moving forward in the buyer’s journey after landing on our scuba diving mask promotion page. With 56% of potential customers dropping off at this page, it’s a prime opportunity for optimising conversions.

    Think of Funnels as your map, and pages with high drop-off rates as valuable opportunities for improvement.

    Once you notice patterns, you can try to identify the why. Analyse the pages, do user testing and do your best to improve them.

    2. Deliver a better user experience

    A better understanding of your customers’ needs means you can deliver a better user experience.

    Illustration of improving the user experience

    For example, if you notice many people spend more time than expected on a particular step in the sign-up process, you can work to streamline it.

    Best practices for improving your user experience:

    To do this, you need to come up with testable hypotheses. Start by using Heatmaps and Session Recordings to visualise the user experience and understand where visitors are hesitating, experiencing points of frustration, and exiting. 

    You need to outline what drives certain patterns in behaviour — like cart abandonment for specific products — and what you think can fix them.

    Example of a heatmap in Matomo analytics

    Let’s look at an example. In the screenshot above, we used Matomo’s Heatmap feature to analyse user behaviour on our website. 

    Only 65% of visitors scroll down far enough to encounter our main call to action to “Write a Review.” This insight suggests a potential opportunity for optimisation, where we can focus efforts on encouraging more users to engage with this key element on our site.

    Once you’ve identified an area of improvement, you need to test the results of your proposed solution to the problem. The most common way to do this is with an A/B test. 

    This is a test where you create a new version of the problematic page, trying different titles, comparing long and short copy, adding or removing images, testing variations of call-to-action buttons and more. Then, you compare the results — the conversion rate — against the original. With Matomo’s A/B Testing feature, you can easily split traffic between the original and one or more variations.

    A/B testing in Matomo analytics

    In the example above from Matomo, we can see that testing different header sizes on a page revealed that the wider header led to a higher conversion rate of 47%, compared to the original rate of 35% and the smaller header’s 36%.

    Matomo’s report also analyses the “statistical significance” of the difference in results. Essentially, this is the likelihood that the difference comes from the changes you made in the variation. With a small sample size, random patterns (like one page receiving more organic search visits) can cause the differences.

    If you see a significant change over a larger sample size, you can be fairly certain that the difference is meaningful. And that’s exactly what a high statistical significance rating indicates in Matomo. 
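
    As a rough illustration with hypothetical numbers: 47 conversions out of 100 visitors versus 35 out of 100 could still be luck of the draw, but 470 out of 1,000 versus 350 out of 1,000 almost certainly is not. The larger the sample, the less room randomness has to explain the gap.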

    Once a winner is identified, you can apply the change and start a new experiment. 

    3. Create a culture of data-driven decision-making

    Marketers can no longer afford to rely on guesswork or gamble away budgets and resources. In our digital age, you must use data to get ahead of the competition. In 2021, 65% of business leaders agreed that decisions were getting more complex.

    CRO is a great way to start a company-wide focus on data-driven decision-making. 

    Best practices to start a data-driven culture:

    Don’t only test “hunches” or “best practices” — look at the data. Figure out the patterns that highlight how different types of visitors interact with your site.

    Try to answer these questions:

    • How do our most valuable customers interact with our site before purchasing?
    • How do potential customers who abandon their carts act?
    • Where do our most valuable customers come from?

    Moreover, it’s key to democratise insights by providing multiple team members access to information, fostering informed decision-making company-wide.

    4. Lower your acquisition costs and get higher ROI from all marketing efforts

    Once you make meaningful optimisations, CRO can help you lower customer acquisition costs (CAC). Getting new customers through advertising will be cheaper.

    As a result, you’ll get a better return on investment (ROI) on all your campaigns. Every ad and dollar invested will get you closer to a new customer than before. That’s the bottom line of CRO.

    Best practices to lower your CAC (customer acquisition costs) through CRO adjustments:

    The easiest way to lower acquisition costs is to understand where your customers come from. Use marketing attribution to track the results of your campaigns, revealing how each touchpoint contributes to conversions and revenue over time, beyond just last-click attribution.

    You can then compare the number of conversions to the marketing costs of each channel, to get a channel-specific breakdown of CAC.
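
    As a simple illustration (the numbers are hypothetical): CAC = channel spend ÷ new customers from that channel, so spending $5,000 on a channel that brings in 100 customers puts that channel’s CAC at $50.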

    This performance overview can help you quickly prioritise the best value channels and ads, lowering your CAC. But these are only surface-level insights. 

    You can also further lower CAC by optimising the pages these campaigns send visitors to. Start with a deep dive into your landing pages using features like Matomo’s Session Recordings or Heatmaps.

    They can help you identify issues with an unengaging user experience or content. Using these insights, you can create A/B tests, where you implement a new page that replaces problematic headlines, buttons, copy, or visuals.

    Example of a multivariate test for headlines

    When a test shows a statistically significant improvement in conversion rates, implement the new version. Repeat this over time, and you can increase your conversion rates significantly, getting more customers with the same spend. This will reduce your customer acquisition costs, and help your company grow faster without increasing your ad budget.

    5. Improve your average order value (AOV) and customer lifetime value (CLV)

    CRO isn’t only about increasing the number of customers you convert. If you adapt your approach, you can also use it to increase the revenue from each customer you bring in. 

    But you can’t do that by only tracking conversion rates; you also need to track exactly what your customers buy.

    If you only blindly optimise for CAC, you even risk lowering your CLV and the overall profitability of your campaigns. (For example, if you focus on Facebook Ads with a $6 CAC, but an average CLV of $50, over Google Ads with a $12 CAC, but a $100 CLV.)

    Best practices to track and improve CLV:

    First, integrate your analytics platform with your e-commerce (B2C) or your CRM (B2B). This will help you get a more holistic view of your customers. You don’t want the data to stop at “converted.” You want to be able to dive deep into the patterns of high-value customers.

    The sales report in Matomo’s ecommerce analytics makes it easy to break down average order value by channels, campaigns, and specific ads.

    Ecommerce sales report in Matomo analytics

    In the report above, we can see that search engines drive customers who spend significantly more, on average, than social networks — $241 vs. $184. But social networks drive a higher volume of customers and more revenue.

    To figure out which channel to focus on, you need to see how the CAC compares to the AOV (or CLV for B2B customers). Let’s say the CAC of social networks is $50, while the search engine CAC is $65. Search engine customers are more profitable — $176 vs. $134. So you may want to shift more budget toward that channel.

    To put it simply:

    Profit per customer = AOV (or CLV) – CAC

    Example:

    • Profit per customer for social networks = $184 – $50 = $134
    • Profit per customer for search engines = $241 – $65 = $176

    You can also try to A/B test changes that may increase the AOV, like creating a product bundle and recommending it on specific sales pages.

    An improvement in CLV will make your campaigns more profitable, and help stretch your advertising budget even further.

    6. Improve your content and SEO rankings

    A valuable side-effect of focusing on CRO metrics and analyses is that it can boost your SEO rankings. 

    How?

    CRO helps you improve the user experience of your website. That’s a key signal Google (and other search engines) care about when ranking webpages. 

    Illustration of how better content improves SEO rankings

    For example, Google’s algorithm considers “dwell time,” AKA how long a user stays on your page. If many users quickly return to the results page and click another result, that’s a bad sign. But if most people stay on your site for a while (or don’t return to Google at all), Google thinks your page gives the user their answer.

    As a result, Google will improve your website’s ranking in the search results.

    Best practices to make the most of CRO when it comes to SEO:

    Use A/B Testing, Heatmaps, and Session Recordings to run experiments and understand user behaviour. Test changes to headlines, page layout, imagery and more to see how it impacts the user experience. You can even experiment with completely changing the content on a page, like substituting an introduction.

    Bring your CRO-testing mindset to important pages that aren’t ranking well to improve metrics like dwell time.

    Start optimising your conversion rate today

    As you’ve seen, enjoying the benefits of CRO heavily relies on the data from a reliable web analytics solution. 

    But in an increasingly privacy-conscious world (just look at the timeline of GDPR updates and fines), you must tread carefully. One of the dilemmas that marketing managers face today is whether to prioritise data quality or privacy (and regulations).

    With Matomo, you don’t have to choose. Matomo values both data quality and privacy, adhering to stringent privacy laws like GDPR and CCPA.

    Unlike other web analytics tools, Matomo doesn’t sample data or use AI and machine learning to fill data gaps. Plus, you can track without annoying visitors with a cookie consent banner, so you capture 100% of traffic while respecting user privacy (except in Germany and the UK).

    And as you’ve already seen above, you’ll still get plenty of reports and insights to drive your CRO efforts. With User Flows, Funnels, Session Recordings, Form Analytics, and Heatmaps, you can immediately find insights to improve your bottom line.

    And our built-in A/B testing feature will help you test your hypotheses and drive reliable progress. If you’re ready to reliably optimise conversion rates (with accuracy and without privacy concerns), try Matomo for free for 21 days. No credit card required.