
Other articles (86)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. Compare the two images that follow.
    To use it, enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen), activating Chosen on the public site and specifying which form elements to improve, for example select[multiple] for multiple-selection lists (...)

  • Helping to translate it

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
    To do so, we use SPIP’s translation interface, where all of MediaSPIP’s language modules are available. Simply sign up to the translators’ mailing list to request more information.
    At present, MediaSPIP is only available in French and (...)

On other sites (13218)

  • Things I Have Learned About Emscripten

    1 September 2015, by Multimedia Mike — Cirrus Retro

    3 years ago, I released my Game Music Appreciation project, a website with a ludicrously uninspired title which allowed users a relatively frictionless method to experience a range of specialized music files related to old video games. However, the site required use of a special Chrome plugin. Ever since that initial release, my #1 most requested feature has been for a pure JavaScript version of the music player.

    “Impossible!” I exclaimed. “There’s no way JS could ever run fast enough to run these CPU emulators and audio synthesizers in real time, and allow for the visualization that I demand!” Well, I’m pleased to report that I have proven myself wrong. I recently quietly launched a new site with what I hope is a catchier title, meant to evoke a cloud-based retro-music-as-a-service product: Cirrus Retro. Right now, it’s basically the same as the old site, but without the wonky Chrome-specific technology.

    Along the way, I’ve learned a few things about using Emscripten that I thought might be useful to share with other people who wish to embark on a similar journey. This is geared more towards someone who has a stronger low-level background (such as C/C++) vs. high-level (like JavaScript).

    General Goals
    Do you want to cross-compile an entire desktop application, one that relies on an extensive GUI toolkit? That might be difficult (though I believe there is a path for porting Qt code directly with Emscripten). Your better bet might be to abstract out the core logic and processes of the program and then create a new web UI to access them.

    Do you want to compile a game that basically just paints stuff to a 2D canvas? You’re in luck! Emscripten has a porting path for SDL. Make a version of your C/C++ software that targets SDL (generally not a tall order) and then compile that with Emscripten.

    Do you just want to cross-compile some functionality that lives in a library? That’s what I’ve done with the Cirrus Retro project. For this, plan to compile the library into a JS file that exports some public functions that other, higher-level, native JS (i.e., JS written by a human and not a computer) will invoke.
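
    To make that concrete, here is a minimal, hypothetical sketch of what the calling side can look like. It uses Emscripten’s cwrap() helper on the compiled module object (the test harness later in this post calls that object player); the play_song export and its signature are invented for illustration:

    // Hypothetical example: "player" is the Emscripten-compiled module object.
    // Assume the C library was built so that it exports:
    //   int play_song(const char *filename, int track)
    var playSong = player.cwrap("play_song", "number", ["string", "number"]);

    // cwrap() converts the JS string argument to a C string automatically.
    var result = playSong("song.vgm", 0);
    console.log("play_song returned " + result);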

    Memory Levels
    When porting C/C++ software to JavaScript using Emscripten, you have to think on 2 different levels. Or perhaps you need to view JavaScript through a low-level C lens, especially if you want to write native JS code that will interact with Emscripten-compiled code. This often means somehow allocating chunks of memory via JS and passing them to the Emscripten-compiled functions. And you wouldn’t believe the type of gymnastics you need to execute to get native JS and Emscripten-compiled JS to cooperate.

    “Emscripten: Pointers and Pointers” is the best (and, really, ONLY) explanation I could find for understanding the basic mechanics of this process, at least when I started this journey. However, there’s a mistake in the explanation that left me confused for a little while, and I’m at a loss to contact the author (doesn’t anyone post a simple email address anymore?).

    To the best of my understanding, Emscripten allocates a large JS array and calls that the memory space that the compiled C/C++ code is allowed to operate in. A pointer in C/C++ code is just an index into that mighty array. Really, that’s not too far off from how a low-level program is supposed to view memory: as a flat array.

    Eventually, I just learned to cargo-cult my way through the memory allocation process. Here’s the JS code for allocating an Emscripten-compatible byte buffer, taken from my test harness (more on that later):

    var musicBuffer = fs.readFileSync(testSpec['filename']);
    var musicBufferBytes = new Uint8Array(musicBuffer);
    var bytesMalloc = player._malloc(musicBufferBytes.length);
    var bytes = new Uint8Array(player.HEAPU8.buffer, bytesMalloc, musicBufferBytes.length);
    bytes.set(new Uint8Array(musicBufferBytes.buffer));
    

    So, read the array of bytes from some input source, create a Uint8Array from the bytes, use the Emscripten _malloc() function to allocate enough bytes from the Emscripten memory array for the input bytes, then create a new array… then copy the bytes…

    You know what? It’s late and I can’t remember how it works exactly, but it does. It has been a few months since I touched that code (been fighting with front-end website tech since then). You write that memory allocation code enough times and it begins to make sense, and then you hope you don’t have to write it too many more times.
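
    For what it’s worth, the rest of the dance is short. A hedged sketch of how the buffer above gets consumed and released (_open_file is a hypothetical compiled export standing in for whatever function takes the bytes):

    // Pass the pointer (an index into the Emscripten heap) and the length
    // to the compiled code, then return the bytes to Emscripten's allocator.
    var result = player._open_file(bytesMalloc, musicBufferBytes.length);
    player._free(bytesMalloc);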

    Multithreading
    You can’t port multithreaded code to JS via Emscripten. JavaScript has no notion of threads! If you don’t understand the computer science behind this limitation, a more thorough explanation is beyond the scope of this post. But trust me, I’ve thought about it a lot. In fact, the official Emscripten literature states that you should be able to port most any C/C++ code as long as 1) none of the code is proprietary (i.e., all the raw source is available); and 2) there are no threads.

    Yes, I read about the experimental pthreads support added to Emscripten recently. Don’t get too excited; that won’t be ready and widespread for a long time to come as it relies on a new browser API. In the meantime, figure out how to make your multithreaded C/C++ code run in a single thread if you want it to run in a browser.

    Printing Facility
    Eventually, getting software to work boils down to debugging, and the most primitive tool in many a programmer’s toolbox is the humble print statement. A print statement allows you to inspect a piece of a program’s state at key junctures. Inevitably, when you try to cross-compile C/C++ code to JS using Emscripten, something is not going to work correctly in the generated JS “object code” and you will need to understand what went wrong. You’ll be pleading for a method of just inspecting one variable deep in the original C/C++ code.

    I came up with this simple printf-workalike called emprintf():

    #ifndef EMPRINTF_H
    #define EMPRINTF_H

    #include <stdio.h>
    #include <stdarg.h>
    #include <emscripten.h>

    #define MAX_MSG_LEN 1000

    /* NOTE: Don't pass format strings that contain single quote (') or newline
     * characters. */
    static void emprintf(const char *format, ...)
    {
        char msg[MAX_MSG_LEN];
        char consoleMsg[MAX_MSG_LEN + 16];
        va_list args;

        /* create the string */
        va_start(args, format);
        vsnprintf(msg, MAX_MSG_LEN, format, args);
        va_end(args);

        /* wrap the string in a console.log('') statement */
        snprintf(consoleMsg, MAX_MSG_LEN + 16, "console.log('%s')", msg);

        /* send the final string to the JavaScript console */
        emscripten_run_script(consoleMsg);
    }

    #endif /* EMPRINTF_H */

    Put it in a file called “emprint.h”. Include it into any C/C++ file where you need debugging visibility, use emprintf() as a replacement for printf() and the output will magically show up on the browser’s JavaScript debug console. Heed the comments and don’t put any single quotes or newlines in strings, and keep it under 1000 characters. I didn’t say it was perfect, but it has helped me a lot in my Emscripten adventures.

    Optimization Levels
    Remember to turn on optimization when compiling. I have empirically found that optimizing for size (-Os) leads to the best performance all around, in addition to having the smallest size. Just be sure to specify some optimization level. If you don’t, the default is -O0, which offers horrible performance when running in JS.

    Static Compression For HTTP Delivery
    JavaScript code compresses pretty efficiently, even after it has been optimized for size using -Os. I routinely see compression ratios between 3.5:1 and 5:1 using gzip.

    Web servers in this day and age are supposed to be smart enough to detect when a requesting web browser can accept gzip-compressed data and do the compression on the fly. They’re even supposed to be smart enough to cache compressed output so the same content is not recompressed for each request. I would have to set up a series of tests to establish whether either of the foregoing assertions is correct, and I can’t be bothered. Instead, I took it into my own hands. The trick is to pre-compress the JS files and then instruct the webserver to serve these files with a ‘Content-Type’ of ‘application/javascript’ and a ‘Content-Encoding’ of ‘gzip’.

    1. Compress your large Emscripten-built JS files with ‘gzip’: ‘gzip compiled-code.js’
    2. Rename them from extension .js.gz to .jsgz
    3. Tell the webserver to deliver .jsgz files with the correct Content-Type and Content-Encoding headers

    To do that last step with Apache, specify these lines:

    AddType application/javascript jsgz
    AddEncoding gzip jsgz
    

    They belong in either a directory’s .htaccess file or in the sitewide configuration (/etc/apache2/mods-available/mime.conf works on my setup).

    Build System and Build Time Optimization
    Oh goodie, build systems! I had a very specific manner in which I wanted to build my JS modules using Emscripten. Could I coerce any of the many popular build systems to do this? It has been a few months since I worked on this problem specifically, but I seem to recall that the build systems I tried would freak out at the prospect of compiling stuff to a final binary target of .js.

    I had high hopes for Bazel, which Google released while I was developing Cirrus Retro. Surely, this is software that has been battle-tested in the harshest conditions of one of the most prominent software-developing companies in the world, needing to take into account the most bizarre corner cases and still build efficiently and correctly every time. And I have little doubt that it fulfills the order. Similarly, I’m confident that Google also has a team of no fewer than 100 or so people dedicated to developing and supporting the project within the organization. When you only have, at best, 1-2 hours per night to work on projects like this, you prefer not to fight with such cutting-edge technology, and after losing 2 or 3 nights trying to make a go of Bazel, I eventually put it aside.

    I also tried to use Autotools. It failed horribly for me, mostly due to my own carelessness and lack of early-project source control.

    After that, it was strictly vanilla makefiles with no real dependency management. But you know what helps in these cases? ccache! Or at least, it would if it didn’t fail with Emscripten.

    Quick tip: ccache has trouble with LLVM unless you set the CCACHE_CPP2 environment variable (e.g., “export CCACHE_CPP2=1”). I don’t remember the specifics, but it magically fixes things. Then, the lazy build process becomes “make clean && make”.

    Testing
    If you have never used Node.js, testing Emscripten-compiled JS code might be a good opportunity to start. I was able to use Node.js to great effect for testing the individually-compiled music player modules, wiring up a series of invocations using Python for a broader test suite (wouldn’t want to go too deep down the JS rabbit hole, after all).

    Be advised that Node.js doesn’t enjoy the same kind of JIT optimizations that the browser engines leverage. Thus, in the case of time-critical code like, say, an audio synthesis library, the code might not run in real time. But as long as it produces the correct bitwise waveform, that’s good enough for continuous integration.
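
    A minimal sketch of that kind of check (the file names and the renderSong() call are hypothetical stand-ins; the real harness is more involved):

    var fs = require("fs");
    var assert = require("assert");

    // Render audio through the Emscripten-compiled module and compare it
    // byte-for-byte against a known-good reference waveform.
    var reference = fs.readFileSync("reference.pcm");
    var rendered = renderSong("song.vgm"); // hypothetical compiled-module call

    assert.ok(reference.equals(rendered), "waveform mismatch");
    console.log("bitwise match");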

    Also, if you have largely been a low-level programmer for your whole career and are generally unfamiliar with the world of single-threaded, event-driven, callback-oriented programming, you might be in for a bit of a shock. When I wanted to learn how to read the contents of a file in Node.js, this is the first tutorial I found on the matter. I thought the code presented was a parody of bad coding style:

    var fs = require("fs");
    var fileName = "foo.txt";

    fs.exists(fileName, function(exists) {
      if (exists) {
        fs.stat(fileName, function(error, stats) {
          fs.open(fileName, "r", function(error, fd) {
            var buffer = new Buffer(stats.size);

            fs.read(fd, buffer, 0, buffer.length, null, function(error, bytesRead, buffer) {
              var data = buffer.toString("utf8", 0, buffer.length);

              console.log(data);
              fs.close(fd);
            });
          });
        });
      }
    });

    Apparently, this kind of thing doesn’t raise an eyebrow in the JS world.

    Now, I understand and respect the JS programming model. But this was seriously frustrating when I first encountered it because a simple script like the one I was trying to write just has an ordered list of tasks to complete. When it asks for bytes from a file, it really has nothing better to do than to wait for the answer.

    Thankfully, it turns out that Node’s fs module includes synchronous versions of the various file access functions. So it’s all good.
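
    For comparison, the same task with the synchronous API collapses to a few lines:

    var fs = require("fs");

    // readFileSync blocks until the file has been read, which is exactly
    // what a simple sequential test script wants.
    var data = fs.readFileSync("foo.txt", "utf8");
    console.log(data);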

    Conclusion
    I’m sure I missed or underexplained some things. But if other brave souls are interested in dipping their toes in the waters of Emscripten, I hope these tips will come in handy.

    The post Things I Have Learned About Emscripten first appeared on Breaking Eggs And Making Omelettes.

  • Cohort Analysis 101: How-To, Examples & Top Tools

    13 November 2023, by Erin — Analytics Tips

    Imagine that a farmer is trying to figure out why certain hens are laying large brown eggs and others are laying average-sized white eggs.

    The farmer decides to group the hens into cohorts based on what kind of eggs they lay to make it easier to detect patterns in their day-to-day lives. After careful observation and analysis, she discovers that the hens laying big brown eggs eat more than the roost’s other hens.

    With this cohort analysis, the farmer deduces that a hen’s body weight directly corresponds to egg size. She can now develop a strategy to increase the body weight of her hens to sell more large brown eggs, which are very popular at the weekly farmers’ market.

    Cohort analysis has a myriad of applications in the world of web analytics. Like our farmer, you can use it to better understand user behaviour and reap the benefits of your efforts. This article will discuss the best practices for conducting an effective cohort analysis and compare the top cohort analysis tools for 2024. 

    What is cohort analysis?

    By definition, cohort analysis refers to a technique where users are grouped based on shared characteristics or behaviours and then examined over a specified period.

    Think of it as a marketing superpower, enabling you to comprehend user behaviours, craft personalised campaigns and allocate resources wisely, ultimately resulting in improved performance and better ROI.

    Why does cohort analysis matter?

    In web analytics, a cohort is a group of users who share a certain behaviour or characteristic. The goal of cohort analysis is to uncover patterns and compare the performance and behaviour of different cohorts over time.

    An example of a cohort is a group of users who made their first purchase during the holidays. By analysing this cohort, you could learn more about their behaviour and buying patterns. You may discover that this cohort is more likely to buy specific product categories as holiday gifts — you can then tailor future holiday marketing campaigns to include these categories. 
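
    To make the mechanics concrete, here is a minimal sketch (with made-up data and field names) of how you might compute a monthly acquisition cohort’s retention:

    // Hypothetical data shape: each user has an acquisition month and a list
    // of months in which they were active.
    var users = [
      { id: 1, firstPurchase: "2023-12", activeMonths: ["2023-12", "2024-01"] },
      { id: 2, firstPurchase: "2023-12", activeMonths: ["2023-12"] },
      { id: 3, firstPurchase: "2024-01", activeMonths: ["2024-01", "2024-02"] }
    ];

    // Share of a cohort (users acquired in cohortMonth) still active in laterMonth.
    function retention(users, cohortMonth, laterMonth) {
      var cohort = users.filter(function (u) {
        return u.firstPurchase === cohortMonth;
      });
      var retained = cohort.filter(function (u) {
        return u.activeMonths.indexOf(laterMonth) !== -1;
      });
      return cohort.length ? retained.length / cohort.length : 0;
    }

    console.log(retention(users, "2023-12", "2024-01")); // 0.5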

    Types of cohort analysis

    There are a few notable types of cohorts:

    1. Time-based cohorts are groups of users categorised by a specific time period. The farmer example at the beginning of this article is a time-based cohort.
    2. Acquisition cohorts are users acquired during a specific time frame or through a particular event or marketing channel. Analysing these cohorts can help you determine the value of different acquisition methods.
    3. Behavioural cohorts consist of users who show similar patterns of behaviour. Examples include frequent purchases with your mobile app or digital content engagement. 
    4. Demographic cohorts share common demographic characteristics like age, gender, education level and income. 
    5. Churn cohorts are buyers who have cancelled a subscription/stopped using your service within a specific time frame. Analysing churn cohorts can help you understand why customers leave.
    6. Geographic cohorts are pretty self-explanatory — you can use them to tailor your marketing efforts to specific regions. 
    7. Customer journey cohorts are based on the buyer lifecycle — from acquisition to adoption to retention. 
    8. Product usage cohorts group buyers by how they use your product/service (think basic users, power users or occasional users).

    Best practices for conducting a cohort analysis 

    So, you’ve decided you want to understand your user base better but don’t know how to go about it. Perhaps you want to reduce churn and create a more engaging user experience. In this section, we’ll walk you through the dos and don’ts of conducting an effective cohort analysis. Remember that you should tailor your cohort analysis strategy for organisation-specific goals.

    [Image: line graph of product usage cohort data, with a blue line for new users and a green line for power users]

    1. Preparing for cohort analysis:

      • First, define specific goals you want your cohort analysis to achieve. Examples include improving conversion rates or reducing churn.
      • Choosing the right time frame will help you compare short-term vs. long-term data trends. 

    2. Creating effective cohorts:

      • Define your segmentation criteria — anything from demographics to location, purchase history or user engagement level. Narrowing in on your specific segments will make your cohort analysis more precise. 
      • It’s important to find a balance between cohort size and similarity. If your cohort is too small or too diverse, you won’t be able to identify specific behavioural patterns.

    3. Performing cohort analysis:

      • Study retention rates across cohorts to identify patterns in user behaviour and engagement over time. Pay special attention to cohorts with high retention or churn rates.
      • Analysing cohorts can reveal interesting behavioural insights — how do specific cohorts interact with your website? Do they have certain preferences? Why?

    4. Visualising and interpreting data:

      • Visualising your findings can be a great way to reveal patterns. Line charts can help you spot trends, while bar charts can help you compare cohorts.
      • Guide your analytics team on how to interpret patterns in cohort data. Watch for sudden drops or spikes and what they could mean. 

    5. Continue improving:

      • User behaviour is constantly evolving, so be adaptable. Continuous tracking of user behaviour will help keep your strategies up to date. 
      • Encourage iterative analysis optimisation based on your findings. 

    The top cohort analysis tools for 2024

    In this section, we’ll go over the best cohort analysis tools for 2024, including their key features, cohort analysis dashboards, cost and pros and cons.

    1. Matomo

    [Screenshot: a cohorts graph in Matomo]

    Matomo is an open-source, GDPR-compliant web analytics solution that offers cohort analysis as a standard feature in Matomo Cloud and as a plugin for Matomo On-Premise. Pairing traditional web analytics with cohort analysis will help you gain even deeper insights into user behaviour over time.

    You can use the data you get from web analytics to identify patterns in user behaviour and target your marketing strategies to specific cohorts. 

    Key features

    • Matomo offers a cohorts table that lets you compare cohorts side-by-side, and it comes with a time series.
    • All core session and conversion metrics are also available in the Cohorts report.
    • Create custom segments based on demographics, geography, referral sources, acquisition date, device types or user behaviour.
    • Matomo provides retention analysis so you can track how many users from a specific cohort return to your website and when.
    • Flexibly analyse your cohorts with custom reports. Customise your reports by combining metrics and dimensions specific to different cohorts.
    • Create cohorts based on events or interactions with your website.
    • Intuitive, colour-coded data visualisation, so you can easily spot patterns.

    Pros

    • No setup is needed if you use the JavaScript tracker
    • You can fetch cohorts without any limit
    • 100% accurate data: no AI or machine learning data filling, and no data sampling

    Cons

    • Matomo On-Premise (self-hosted) is free, but advanced features come with additional charges
    • Servers and technical know-how are required for Matomo On-Premise. Alternatively, for those not ready for self-hosting, Matomo Cloud presents a more accessible option and starts at $19 per month.

    Price:

    • Matomo Cloud: 21-day free trial, then starts at $19 per month (includes Cohorts).
    • Matomo On-Premise: Free to self-host; Cohorts plugin: 30-day free trial, then $99 per year.

    2. Mixpanel

    Mixpanel is a product analytics tool designed to help teams better understand user behaviour. It is especially well-suited for analysing user behaviour on iOS and Android apps. It offers various cohort analytics features that can be used to identify patterns and engage your users. 

    Key features

    • Create cohorts based on criteria such as sign-up date, first purchase date, referral source, geographic location, device type or another custom event/property. 
    • Compare how different cohorts engage with your app with Mixpanel’s comparative analysis features.
    • Create interactive dashboards, charts and graphs to visualise data.
    • Mixpanel provides retention analysis tools to see how often users return to your product over time. 
    • Send targeted messages and notifications to specific cohorts to encourage user engagement, announce new features, etc. 
    • Track and analyse user behaviours within cohorts — understand how different types of users engage with your product.

    Pros

    • Easily export cohort analysis data for further analysis
    • Combined with Mixpanel reports, cohorts can be a powerful tool for improving your product

    Cons

    • With the free Mixpanel plan, you can’t save cohorts for future use
    • Enterprise-level pricing is expensive
    • Time-consuming cohort creation process

    Price: Free basic version. The growth version starts at £16/month.

    3. Amplitude

    [Screenshot: a cohorts graph in Amplitude]

    Amplitude is another product analytics solution that can help businesses track user interactions across digital platforms. Amplitude offers a standard toolkit for in-depth cohort analysis.

    Key features

    • Create cohorts based on criteria such as sign-up date, first purchase date, referral source, geographic location, device type or another custom event/property. 
    • Conduct behavioural, time-based and retention analyses.
    • Create custom reports with custom data.
    • Segment cohorts further based on additional criteria and compare multiple cohorts side-by-side.

    Pros

    • Highly customisable and flexible
    • Quick and simple setup

    Cons

    • Steep learning curve — requires significant training 
    • Slow loading speed
    • High price point compared to other tools

    Price: Free basic version. Plus version starts at £40/month (billed annually).

    4. Kissmetrics

    [Screenshot: a cohorts graph in Kissmetrics]

    Kissmetrics is a customer engagement automation platform that offers powerful analytics features. Kissmetrics provides behavioural analytics, segmentation and email campaign automation. 

    Key features

    • Create cohorts based on demographics, user behaviour, referral sources, events and specific time frames.
    • The user path tool provides path visualisation so you can identify common paths users take and spot abandonment points. 
    • Create and optimise conversion funnels.
    • Customise events, user properties, funnels, segments, cohorts and more.

    Pros

    • Powerful data visualisation options
    • Highly customisable

    Cons

    • Difficult to install
    • Not well-suited for small businesses
    • Limited integration with other tools

    Price: Starting at £21/month for 10k events (billed monthly).

    Improve your cohort analysis with Matomo

    When choosing a cohort analysis tool, consider factors such as the tool’s ease of integration with your existing systems, data accuracy, the flexibility it offers in defining cohorts, the comprehensiveness of reporting features, and its scalability to accommodate the growth of your data and analysis needs over time. Moreover, it’s essential to confirm GDPR compliance to uphold rigorous privacy standards. 

    If you’re ready to understand your users’ behaviour, take Matomo for a test drive. Paired with web analytics, this powerful combination can advance your marketing efforts. Start your 21-day free trial today — no credit card required.

  • Making Your First-Party Data Work for You and Your Customers

    11 March, by Alex Carmona

    At last count, 162 countries had enacted data privacy policies of one kind or another. These laws or regulations, without exception, intend to eliminate the use of third-party data. That puts marketing under pressure because third-party data has been the foundation of online marketing efforts since the dawn of the Internet.

    Marketers need to future-proof their operations by switching to first-party data. This will require considerable adjustment to systems and processes, but the reward will be effective marketing campaigns that satisfy privacy compliance requirements and bring the business closer to its customers.

    To do that, you’ll need a coherent first-party data strategy. That’s what this article is all about. We’ll explain the different types of personal data and discuss how to use them in marketing without compromising or breaching data privacy regulations. We’ll also discuss how to build that strategy in your business. 

    So, let’s dive in.

    The different data types

    There are four distinct types of personal data used in marketing, each subject to different data privacy regulations.

    Before getting into the different types, it’s essential to understand that all four may comprise one or more of the following:

    • Identifying data: name, email address, phone number, etc.
    • Behavioural data: website activity, app usage, wishlist content, purchase history, etc.
    • Transactional data: orders, payments, subscription details, etc.
    • Account data: communication preferences, product interests, wish lists, etc.
    • Demographic data: age, gender, income level, education, etc.
    • Geographic data: location-based information, such as zip codes or regional preferences.
    • Psychographic data: interests, hobbies and lifestyle preferences.

    First-party data

    When businesses communicate directly with customers, any data they exchange is first-party. It doesn’t matter how the interaction occurs: on the telephone, a website, a chat session, or even in person.

    Of course, the parties involved aren’t necessarily individuals. They may be companies, but people within those businesses will probably share at least some of the data with colleagues. That’s fine, so long as the data:

    • Remains confidential between the original two parties involved, and
    • Is handled and stored following applicable data privacy regulations.

    The core characteristic of first-party data is that it’s collected directly from customer interactions. This makes it reliable, accurate and inherently compliant with privacy regulations — assuming the collecting party complies with data privacy laws.

    A great example of first-party data use is in banking. Data collected from customer interactions is used to provide personalised services, detect fraud, assess credit risk and improve customer retention.

    Zero-party data

    There’s also a subset of first-party data, sometimes called zero-party data. It’s what users intentionally and proactively share with a business. It can be preferences, intentions, personal information, survey responses, support tickets, etc.

    What makes it different is that the collection of this data depends heavily on the user’s trust. Transparency is a critical factor, too; visitors expect to be informed about how you’ll use their data. Consumers also have the right to withdraw permission to use all or some of their information at any time.

    [Diagram: how a first-party data strategy is built on trust and transparency]

    Second-party data

    This data is acquired from a separate organisation that collects it firsthand. Second-party data is someone else’s first-party data that’s later shared with or sold to other businesses. The key here is that whoever owns that data must give explicit consent and be informed of whom their data is shared with.

    A good example is the cooperation between hotel chains, car rental companies, and airlines. They share joint customers’ flight data, hotel reservations, and car rental bookings, much like travel agents did before the internet undermined that business model.

    Third-party data

    This type of data is the arch-enemy of lawmakers and regulators trying to protect the personal data of citizens and residents in their country. It’s information collected by entities that have no direct relationship with the individuals whose data it is.

    Third-party data is usually gathered, aggregated, and sold by data brokers or companies, often by using third-party cookies on popular websites. It’s an entire business model — these third-party brokers sell data for marketing, analytics, or research purposes. 

    Most of the time, third-party data subjects are unaware that their data has been gathered and sold. Hence the need for strong data privacy regulations.

    Benefits of a first-party data strategy

    First-party data is reliable, accurate, and ethically sourced. It’s an essential part of any modern digital marketing strategy.

    More personalised experiences

    The most important application of first-party data is customising and personalising customers’ interactions based on real behaviours and preferences. Personalised experiences aren’t restricted to websites and can extend to all customer communication.

    The result is that company communications and marketing messages are far more relevant to customers. It allows businesses to engage more meaningfully with them, building trust and strengthening customer relationships. Inevitably, this also results in stronger customer loyalty and better customer retention.

    Greater understanding of customers

    Because first-party data is more accurate and reliable, it can be used to derive valuable insights into customer needs and wants. When all the disparate first-party data points are centralised and organised, it’s possible to uncover trends and patterns in customer behaviour that might not be apparent using other data.

    This helps businesses predict and respond to customer needs. It also allows marketing teams to be more deliberate when segmenting customers and prospects into like-minded groups. The data can also be used to create more precise personas for future campaigns or reveal how likely a customer would be to purchase in response to a campaign.

    Build trust with customers

    First-party data is unique to a business and originates from interactions with customers. It’s also data collected with consent and is “owned” by the company — if you can ever own someone else’s data. If treated like the precious resource it is, it can help businesses build trust with customers.

    However, developing that trust requires a transparent, step-by-step approach. This gradually strengthens relationships to the point where customers are more comfortable sharing the information they’re asked for.

    However, while building trust is a long and sometimes arduous process, it can be lost in an instant. That’s why first-party data must be protected like the Crown Jewels.

    [Image: the five key elements of a first-party data strategy]

    Components of a first-party data strategy

    Security is essential to any first-party data strategy, and for good reason. As Gartner puts it, a business must find the optimal balance between business outcomes and data risk mitigation. Once security is baked in, attention can turn to the different aspects of the strategy.

    Data collection

    There are many ways to collect first-party data ethically, within the law and while complying with data privacy regulations, such as Europe’s General Data Protection Regulation (GDPR). Potential sources include:

    • Website activity: forms and surveys, behavioural tracking, cookies, tracking pixels and chatbots
    • Mobile app interactions: in-app analytics, push notifications and in-app forms
    • Email marketing: newsletter sign-ups, email engagement tracking, promotions, polls and surveys
    • Events: registrations, post-event surveys and virtual event analytics
    • Social media interaction: polls and surveys, direct messages and social media analytics
    • Previous transactions: purchase history, loyalty programmes and e-receipts
    • Customer service: call centre data, live chat, chatbots and feedback forms
    • In-person interactions: in-store purchases, customer feedback and Wi-Fi sign-ins
    • Gated content: whitepapers, ebooks, podcasts, webinars and video downloads
    • Interactive content: quizzes, assessments, calculators and free tools
    • CRM platforms: customer profiles and sales data
    • Consent management: privacy policies, consent forms, preference settings

    Consent management

    It may be the final item on the list above, but it’s also a key requirement of many data privacy laws and regulations. For example, the GDPR is very clear about consent: “Processing personal data is generally prohibited, unless it is expressly allowed by law, or the data subject has consented to the processing.”

    For that reason, your first-party data strategy must incorporate various transparent consent mechanisms, such as cookie banners and opt-in forms. Crucially, you must provide customers with a mechanism to manage their preferences and revoke that consent easily if they wish to.
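
    As an illustration, here is a minimal sketch of how such an opt-in mechanism can be wired to Matomo’s JavaScript tracker consent methods (the banner callbacks are placeholders you would hook up yourself):

    var _paq = window._paq = window._paq || [];

    // Hold back all tracking until the visitor opts in.
    _paq.push(['requireConsent']);
    _paq.push(['trackPageView']);

    // Call this from your consent banner's "Accept" button.
    function onConsentGiven() {
      _paq.push(['setConsentGiven']);
    }

    // Call this if the visitor later withdraws consent.
    function onConsentRevoked() {
      _paq.push(['forgetConsentGiven']);
    }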

    Data management

    Effective first-party data management, mainly its security and storage, is critical. Most data privacy regimes restrict the transfer of personal data to other jurisdictions and even prohibit it in some instances. Many even specify where residents’ data must be stored.

    Consider this cautionary tale: the single biggest fine levied for data privacy infringement so far was €1.2 billion. The Irish Data Protection Commission imposed a massive fine on Meta for transferring EU users’ data to the US without adequate data protection mechanisms.

    Data security is critical. If first-party data is compromised, it becomes third-party data, and any customer trust developed with the business will evaporate. To add insult to injury, data regulators could come knocking. That’s why the trend is to use encryption and anonymisation techniques alongside standard access controls.

    Once security is assured, the focus is on data management. Many businesses use a Customer Data Platform. This software gathers, combines and manages data from many sources to create a complete, central customer profile. Modern CRM systems can also do that job. AI tools can help find patterns and study them. But the most important thing is to keep databases clean and well organised, to make them easier to use and to avoid data silos.

    Data activation

    Once first-party data has been collected and analysed, it needs to be activated, meaning the business uses it for its intended purpose. This is the implementation phase, where a well-constructed first-party strategy pays off.

    The activation stage is where businesses use the intelligence they gather to :

    • Personalise website and app experiences
    • Adapt marketing campaigns
    • Improve conversion rates
    • Match stated preferences
    • Cater to observed behaviours
    • Customise recommendations based on purchase history
    • Create segmented email campaigns
    • Improve retargeting efforts
    • Develop more impactful content

    Measurement and optimisation

    Because first-party data is collected directly from customers or prospects, it’s far more relevant, reliable, and specific. Your analytics and campaign tracking will be more accurate. This gives you direct and actionable insights into your audience’s behaviour, empowering you to optimise your strategies and achieve better results.

    The same goes for your collection and activation efforts. An advanced web analytics platform like Matomo lets you identify key user behaviour and optimise your tracking. Heatmaps, marketing attribution tools, user behaviour analytics and custom reports allow you to segment audiences for better traction (and collect even more first-party data).

    [Image: the five steps to developing a first-party data strategy]

    How to build a first-party data strategy

    There are five important and sequential steps to building a first-party data strategy. But this isn’t a one-time process; it must be revisited regularly as operating and regulatory environments change.

    1. Audit existing data

    Chances are that customers already freely provide a lot of first-party data in the normal course of business. The first step is to locate this data, and the easiest way to do that is by mapping the customer journey. This identifies all the touchpoints where first-party data might be found.

    2. Define objectives

    Then, it’s time to step back and figure out the goals of the first-party data strategy. Consider what you’re trying to achieve. For example:

    • Reduce churn 
    • Expand an existing loyalty programme
    • Unload excess inventory
    • Improve customer experiences

    Whatever the objectives are, they should be clear and measurable.

    3. Implement tools and technology

    The first two steps point to data gaps. Now, the focus turns to ethical web analytics with a tool like Matomo. 

    To further comply with data privacy regulations, it may also be appropriate to implement a Consent Management Platform (CMP) to help manage preferences and consent choices.

    4. Build trust with transparency

    With the tools in place, it’s time to engage customers. To build trust, keep them informed about how their data is used and remind them of their right to withdraw their consent. 

    Transparency is crucial in such engagement, as outlined in the 7 GDPR principles.

    5. Continuously improve

    Rinse and repeat. The one constant in business and life is change. As things change, they expose weaknesses or flaws in the logic behind systems and processes. That’s why a first-party data strategy needs to be continually reviewed, updated, and revised. It must adapt to changing trends, markets, regulations, etc. 

    Tools that can help

    Looking back at the different types of data, it’s clear that some are harder and more bothersome to get than others. But capturing behaviours and interactions can be easy — especially if you use tools that follow data privacy rules.

    But here’s a tip. Google Analytics 4 isn’t compliant by default, especially not with Europe’s GDPR. It may also struggle to comply with some of the newer data privacy regulations planned by different US states and other countries.

    Matomo Analytics is compliant with the GDPR and many other data privacy regulations worldwide. Because it’s open source, it can be integrated with any consent manager.

    Get started today by trying Matomo for free for 21 days, no credit card required.