Media (91)

Other articles (50)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    During installation, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)

  • MediaSPIP Init and Diogène: MediaSPIP publication types

    11 November 2010

    When a MediaSPIP site is installed, the MediaSPIP Init plugin carries out a number of operations, the main one being to create four main sections in the site and five form templates for Diogène.
    These four main sections (also called sectors) are: Medias; Sites; Editos; Actualités.
    For each of these sections, a specific form template of the same name is created. For the "Medias" section, a second "catégorie" template is created, making it possible to add (...)

  • Changing your graphic theme

    22 February 2011

    The graphic theme does not affect the actual layout of elements on the page. It only changes the appearance of those elements.
    The placement can indeed be changed, but that change is purely visual and does not affect the semantic representation of the page.
    Changing the graphic theme in use
    To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
    Then simply go to the configuration area of the (...)

On other sites (5107)

  • A Guide to Ethical Web Analytics in 2024

    17 June 2024, by Erin

    User data is more valuable and sought after than ever. 

    Ninety-four percent of respondents in Cisco’s Data Privacy Benchmark Study said their customers wouldn’t buy from them if their data weren’t protected, with 95% saying privacy was a business imperative. 

    Unfortunately, the data collection practices of most businesses are far from acceptable and often put their customers’ privacy at risk. 

    But it doesn’t have to be this way. You can ethically collect valuable and insightful customer data—you just need the right tools.

    In this article, we show you what an ethical web analytics solution can look like, why Google Analytics is a problem and how you can collect data without risking your customers’ privacy.

    What is ethical web analytics?

    Ethical web analytics put user privacy first. These platforms prioritise privacy and transparency by only collecting necessary data, avoiding implicit user identification and openly communicating data practices and tracking methods. 

    Ethical tools adhere to data protection laws like GDPR as standard (meaning businesses using these tools never have to worry about fines or disruptions). In other words, ethical web analytics refrain from exploiting and profiting from user behaviour and data. 

    Unfortunately, most traditional data solutions collect as much data as possible without users’ knowledge or consent.

    Why does digital privacy matter?

    Digital privacy matters because companies have repeatedly proven they will collect and use data for financial gain. Poor data handling also creates security risks: unsecured user data can lead to identity theft, cyberattacks and harassment.

    Big tech companies like Google and Meta are often to blame for all this. These companies collect millions of user data points — like age, gender, income, political beliefs and location. Worse still, they share this information with interested third parties.

    After public outrage over data breaches and other privacy scandals, consumers are taking active steps to disallow tracking where possible. IAPP’s Privacy and Consumer Trust Report finds that 68% of consumers across 19 countries are somewhat or very concerned about their digital privacy. 

    There’s no way around it: companies of all sizes and shapes need to consider how they handle and protect customers’ private information.

    Why should you use an ethical web analytics tool?

    When companies use ethical web analytics tools, they can build customer trust, boost their brand reputation, improve data security practices and future-proof their website tracking solution.

    Boost brand reputation

    The fallout from a data privacy scandal can be severe. 

    Just look at what happened to Facebook during the Cambridge Analytica data scandal. The eponymous consulting firm harvested 50 million Facebook profiles and used that information to target people with political messages. Due to the instant public backlash, Facebook’s stock tanked, and use of the “delete Facebook” hashtag increased by 423% in the following days.

    That’s because consumers care about data privacy, according to Deloitte’s Connected Consumer Study:

    • Almost 90% agree they should be able to view and delete data companies collect
    • 77% want the government to introduce stricter regulations
    • Half feel the benefits they get from online services outweigh data privacy concerns

    If you can prove you buck the trend by collecting data using ethical methods, it can boost your brand’s reputation. 

    Build trust with customers

    At the same time, collecting data in an ethical way can help you build customer trust. You’ll go a long way to changing consumer perceptions, too. Almost half of consumers don’t like sharing data, and 57% believe companies sell their data. 

    This additional trust should generate a positive ROI for your business. According to Cisco’s Data Privacy Benchmark Study, the average company gains $180 for every $100 they invest in privacy. 

    Improve data security

    According to IBM’s Cost of a Data Breach report, the average cost of a data breach is nearly $4.5 million. This kind of scenario becomes much less likely when you use an ethical tool that collects less data overall and anonymises the data you do collect. 

    Futureproof your web analytics solution

    The obvious risk of not complying with privacy regulations is a fine — which can be up to €20 million, or 4% of worldwide annual revenue in the case of GDPR.

    It’s not just fines and penalties you risk if you fail to comply with privacy regulations like GDPR. For some companies, especially larger ones, the biggest risk is suddenly being forced to abandon Google Analytics and switch to an ethical alternative.

    If Data Protection Authorities ban Google Analytics again, as has happened in Austria, France, and other countries, businesses will be forced to drop everything and make an immediate transition to a compliant web analytics solution.

    When an organisation’s entire marketing operation relies on data, migrating to a new solution can be incredibly painful and time-consuming. So, the sooner you switch to an ethical tool, the less of a headache the process will be. 

    The problem with Google Analytics

    Google Analytics (GA) is the most popular analytics platform in the world, but it’s a world away from being an ethical tool. Here’s why:

    You don’t have data ownership

    Google Analytics is attractive to businesses of all sizes because of its price. Everyone loves getting something for free, but there’s still a cost — your and your customers’ data.

    That’s because Google combines the data you collect with information from the millions of other websites it tracks to inform its advertising efforts. It may also use your data to train large language models like Gemini. 

    It has a rocky history with GDPR laws

    Google and EU regulators haven’t always got along. For example, the German Data Protection Authority is investigating 200,000 pending cases against websites using GA. The platform has also been banned and added back to the EU-US Data Privacy Framework several times over the past few years. 

    You can use GA to collect data about EU customers right now, but there’s no guarantee you’ll be able to do so in the future. 

    It requires a specific setup to remain compliant

    While you can currently use GA in a GDPR-compliant way — owing to its inclusion in the EU-US Data Privacy Framework — you have to set it up in a very specific way. That’s because the platform’s compliance depends on what data you collect, how you inform users and the level of consent you acquire. You’ll still need to include an extensive privacy policy on your website. 

    What does ethical web analytics look like?

    An ethical web analytics solution should put user privacy first, ensure compliance with regulations like GDPR, give businesses 100% control of the data they collect and be completely transparent about data collection and storage practices. 

    What does ethical web tracking look like?

    100% data ownership

    You don’t fully control customer data when you use Google Analytics. The search giant uses your data for its own advertising purposes and may also use it to train large language models like Gemini. 

    When you choose an ethical web analytics alternative like Matomo, you can ensure you completely own your data.

    Respects user privacy

    It’s possible to track and measure user behaviour without collecting personally identifiable information (PII). Just look at the ethical web analytics tools we’ve reviewed below. 

    These platforms respect user privacy and conform to strict privacy regulations like GDPR, CCPA and HIPAA by incorporating some or all of the following features:

    In Matomo’s case, it’s all of the above. Better still, you can check our privacy credentials yourself. Our software’s source code is open source, published on GitHub and accessible to anyone at any time.

    Compliant with government regulations

    While Google’s history with data regulations is tumultuous, an ethical web analytics platform should follow even the strictest privacy laws, including GDPR, HIPAA, CCPA, LGPD and PECR.

    But why stop there ? Matomo has been approved by the French Data Protection Authority (CNIL) as one of the few web analytics tools that French sites can use to collect data without tracking consent. So you don’t need an annoying consent banner popping up on your website anymore. 

    Complete transparency 

    Ethical web analytics tools will be upfront about their data collection practices, whether that’s in the U.S., EU, or on your own private servers. Look for a solution that refrains from collecting personally identifiable information, shows where data is stored, and lets you alter tracking methods to increase privacy even further. 

    Some solutions, like Matomo, will increase transparency further by providing open source software. Anyone can find our source code on GitHub to see exactly how our platform tracks and stores user data. This means our code is regularly examined and reviewed by a community of developers, making it more secure, too.

    Ethical web analytics solutions

    There are several options for an ethical web analytics tool. We list three of the best providers below. 

    Matomo

    Matomo is an open source web analytics tool and privacy-focused Google Analytics alternative used by over one million sites globally. 

    Screenshot example of the Matomo dashboard

    Matomo is fully compliant with prominent global privacy regulations like GDPR, CCPA and HIPAA, meaning you never have to worry about collecting consent when tracking user behaviour. 

    The data you collect is completely accurate, since Matomo doesn’t use data sampling, and it’s 100% yours. We don’t share data with third parties, and we can prove it: our product source code is publicly available on GitHub. As a community-led project, you can download and install it yourself for free.

    With Matomo, you get a full range of web analytics and behavioural analytics capabilities. That includes your standard metrics (think visitors, traffic sources, bounce rates, etc.) as well as advanced features for analysing user behaviour, like A/B Testing, Form Analytics, Heatmaps and Session Recordings.

    Migrating to Matomo is easy. You can even import historical Google Analytics data to generate meaningful insights immediately. 

    Fathom

    Fathom Analytics is a lightweight privacy-focused analytics solution that launched in 2018. It aims to be an easy-to-use Google Analytics alternative that doesn’t compromise privacy. 

    A screenshot of the Fathom website

    Like Matomo, Fathom complies with all major privacy regulations, including GDPR and CCPA. It also provides 100% accurate, unsampled reports and doesn’t share your data with third parties. 

    While Fathom provides fairly comprehensive analytics reports, it lacks some of Matomo’s more advanced features, including e-commerce tracking, heatmaps, session recordings and more.

    Plausible

    Plausible Analytics is another open source Google Analytics alternative that was built and hosted in the EU. 

    A screenshot of the Plausible website

    Launched in 2019, Plausible is a newer player in the privacy-focused analytics market. Still, its ultra-lightweight script makes it an attractive option for organisations that prioritise speed over everything else. 

    Like Matomo and Fathom, Plausible is GDPR- and CCPA-compliant by design. There’s also no cap on the amount of data you collect, no debate over whether the data is accurate (Plausible doesn’t use data sampling) and no question about who owns the data (you do).

    Matomo makes it easy to migrate to an ethical web analytics alternative

    There’s no reason to put your users’ privacy at risk, especially when there are so many benefits to choosing an ethical tool. Whether you want to avoid fines, build trust with your customers, or simply know you’re doing the right thing, choosing a privacy-focused, ethical solution like Matomo is taking a massive step in the right direction. 

    Making the switch is easy, too. Matomo is one of the few options that lets you import historical Google Analytics data, so starting from scratch is unnecessary. 

    Get started today by trying Matomo free for 21 days. No credit card required.

  • Developing A Shader-Based Video Codec

    22 June 2013, by Multimedia Mike — Outlandish Brainstorms

    Early last month, this thing called ORBX.js was in the news. It ostensibly has something to do with streaming video and codec technology, which naturally catches my interest. The hype was kicked off by Mozilla honcho Brendan Eich when he posted an article asserting that HD video decoding could be entirely performed in JavaScript. We’ve seen this kind of thing before with Broadway, an H.264 decoder implemented entirely in JS. But that exposes some very obvious limitations (notably CPU usage).

    But this new video codec promises 1080p HD playback directly in JavaScript, which is a lofty claim. How could it possibly do this? I got the impression that performance was achieved using WebGL, an extension which allows JavaScript access to accelerated 3D graphics hardware. Browsing through the conversations surrounding the ORBX.js announcement, I found this confirmation from Eich himself:

    You’re right that WebGL does heavy lifting.

    As of this writing, ORBX.js remains some kind of private tech demo. If there were a public demo available, it would necessarily be easy to reverse engineer the downloadable JavaScript decoder.

    But the announcement was enough to make me wonder how it could be possible to create a video codec which effectively leverages 3D hardware.

    Prior Art
    In theorizing about this, it continually occurs to me that I can’t possibly be the first person to attempt to do this (or the ORBX.js people, for that matter). In googling on the matter, I found various forums and Q&A posts where people asked if it were possible to, e.g., accelerate JPEG decoding and presentation using 3D hardware, with no answers. I also found a blog post which describes a plan to use 3D hardware to accelerate VP8 video decoding. It was a project done under the banner of Google’s Summer of Code in 2011, though I’m not sure which open source group mentored the effort. The project did not end up producing the shader-based VP8 codec originally chartered but mentions that “The ‘client side’ of the VP8 VDPAU implementation is working and is currently being reviewed by the libvdpau maintainers.” I’m not sure what that means. Perhaps it includes modifications to the public API that supports VP8, but is waiting for the underlying hardware to actually implement VP8 decoding blocks in hardware.

    What’s So Hard About This ?
    Video decoding is a computationally intensive task. GPUs are known to be really awesome at chewing through computationally intensive tasks. So why aren’t GPUs a natural fit for decoding video codecs?

    Generally, it boils down to parallelism, or rather the lack of opportunities for it. GPUs are really good at doing the exact same operations over lots of data at once. The problem is that decoding compressed video usually requires multiple phases that must run in sequence, and the individual phases themselves often cannot be parallelized either. In strictly mathematical terms, a compressed data stream will need to be decoded by applying a function f(x) over each data element, x_0 .. x_n. However, the function relies on having applied the function to the previous data element, i.e.:

    f(x_n) = f(f(x_{n-1}))
    

    What happens when you try to parallelize such an algorithm? Temporal rifts in the space/time continuum, if you’re in a Star Trek episode. If you’re in the real world, you’ll get incorrect, unusable data as the parallel computation is seeded with a bunch of invalid data at multiple points (which is illustrated in some of the pictures in the aforementioned blog post about accelerated VP8).
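    To make that dependency concrete, here is a minimal TypeScript sketch (with made-up input data) of delta decoding, which has the same shape of dependency as JPEG’s DC prediction: each output value requires the previous output value, which is exactly what defeats naive data-parallel execution.

    // Minimal sketch: delta decoding, where each output depends on the previous output.
    // The "diffs" array is hypothetical stand-in data, not from any real bitstream.
    function deltaDecode(diffs: number[]): number[] {
      const out: number[] = new Array(diffs.length);
      let previous = 0;
      for (let i = 0; i < diffs.length; i++) {
        // out[i] cannot be computed until "previous" (i.e. out[i - 1]) is known.
        previous = previous + diffs[i];
        out[i] = previous;
      }
      return out;
    }

    console.log(deltaDecode([5, 2, -1, 3])); // [5, 7, 6, 9]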

    Example: JPEG
    Let’s take a very general look at the various stages involved in decoding the ubiquitous JPEG format:


    High level JPEG decoding flow

    What are the opportunities to parallelize these various phases?

    • Huffman decoding (run length decoding and zig-zag reordering is assumed to be rolled into this phase): not many opportunities for parallelizing the various Huffman formats out there, including this one. Decoding most Huffman streams is necessarily a sequential operation. I once hypothesized that it would be possible to engineer a codec to achieve some parallelism during the entropy decoding phase, and later found that On2’s VP8 codec employs such a scheme. However, such a scheme is unlikely to break down to the fine level that WebGL would require.
    • Reverse DC prediction: JPEG — and many other codecs — doesn’t store full DC coefficients. It stores differences between successive DC coefficients. Reversing this process can’t be parallelized. See the discussion in the previous section.
    • Dequantize coefficients: This could be heavily parallelized. It should be noted that software decoders often don’t dequantize all coefficients. Many coefficients are 0 and it’s a waste of a multiplication operation to dequantize them. Thus, this phase is sometimes rolled into the Huffman decoding phase.
    • Invert discrete cosine transform: This seems like it could be highly parallelizable. I will be exploring this further in this post.
    • Convert YUV -> RGB for final display: This is a well-established use case for 3D acceleration (see the sketch below).
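    Here is a hedged sketch of what such a YUV -> RGB fragment shader could look like, written as a GLSL ES 1.00 string inside TypeScript. The texture and uniform names and the BT.601-style constants are illustrative assumptions, not code from ORBX.js or any shipping decoder.

    // Sketch of a WebGL 1 fragment shader that samples planar Y, U and V textures
    // and converts to RGB. Names and plane layout are assumptions for illustration.
    const yuvToRgbFragmentShader: string = `
      precision mediump float;

      varying vec2 v_texCoord;      // interpolated texture coordinate from the vertex shader
      uniform sampler2D u_yPlane;   // luma plane uploaded as a texture
      uniform sampler2D u_uPlane;   // chroma U plane
      uniform sampler2D u_vPlane;   // chroma V plane

      void main() {
        float y = texture2D(u_yPlane, v_texCoord).r;
        float u = texture2D(u_uPlane, v_texCoord).r - 0.5;
        float v = texture2D(u_vPlane, v_texCoord).r - 0.5;

        // Approximate full-range BT.601 conversion, the usual choice for JPEG data.
        float r = y + 1.402 * v;
        float g = y - 0.344 * u - 0.714 * v;
        float b = y + 1.772 * u;

        gl_FragColor = vec4(r, g, b, 1.0);
      }
    `;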

    Crash Course in 3D Shaders and Humility
    So I wanted to see if I could accelerate some parts of JPEG decoding using something called shaders. I made an effort to understand 3D programming and its associated math throughout the 1990s but 3D technology left me behind a very long time ago while I got mixed up in this multimedia stuff. So I plowed through a few books concerning WebGL (thanks to my new Safari Books Online subscription). After I learned enough about WebGL/JS to be dangerous and just enough about shader programming to be absolutely lethal, I set out to try my hand at optimizing IDCT using shaders.

    Here’s my extremely high level (and probably hopelessly naive) view of the modern GPU shader programming model:


    Basic WebGL rendering pipeline

    The WebGL program written in JavaScript drives the show. It sends a set of vertices into the WebGL system and each vertex is processed through a vertex shader. Then, each pixel that falls within a set of vertices is sent through a fragment shader to compute the final pixel attributes (R, G, B, and alpha value). Another consideration is textures: this is data that the program uploads to GPU memory and that can be accessed programmatically by the shaders.

    These shaders (vertex and fragment) are key to the GPU’s programmability. How are they programmed? Using a special C-like shading language. Thought I: “C-like language? I know C! I should be able to master this in short order!” So I charged forward with my assumptions and proceeded to get smacked down repeatedly by the overall programming paradigm. I came to recognize this as a variation of the scientific method: develop a hypothesis (in my case, a mental model of how the system works); devise an experiment (a short program) to prove or disprove the model; realize something fundamental that I was overlooking; formulate a new hypothesis and repeat.

    First Approach: Vertex Workhorse
    My first pitch goes like this:

    • Upload DCT coefficients to GPU memory in the form of textures
    • Program a vertex mesh that encapsulates 16×16 macroblocks
    • Distribute the IDCT effort among multiple vertex shaders
    • Pass transformed Y, U, and V blocks to fragment shader which will convert the samples to RGB

    So the idea is that decoding of 16×16 macroblocks is parallelized. A macroblock embodies 6 blocks:


    JPEG macroblocks

    It would be nice to process one of these 6 blocks in each vertex. But that means drawing a square with 6 vertices. How do you do that? I eventually realized that drawing with 6 vertices is the standard way to draw a square on 3D hardware: use 2 triangles, each with 3 vertices (0, 1, 2; 3, 4, 5):


    2 triangles make a square

    A vertex shader knows which (x, y) coordinates it has been assigned, so it could figure out which sections of coefficients it needs to access within the textures. But how would a vertex shader know which of the 6 blocks it should process? Solution: misappropriate the vertex’s z coordinate. It’s not used for anything else in this case.

    So I set all of that up. Then I hit a new roadblock: how to get the reconstructed Y, U, and V samples transported to the fragment shader? I have found that communicating between shaders is quite difficult. Texture memory? WebGL doesn’t allow shaders to write back to texture memory; shaders can only read it. The standard way to communicate data from a vertex shader to a fragment shader is to declare variables as “varying”. Up until this point, I knew about varying variables, but there was something I didn’t quite understand about them and it nagged at me: if 3 different executions of a vertex shader set 3 different values to a varying variable, what value is passed to the fragment shader?

    It turns out that the varying variable varies, which means that the GPU passes interpolated values to each fragment shader invocation. This completely destroys this idea.

    Second Idea: Vertex Workhorse, Take 2
    The revised pitch is to work around the interpolation issue by just having each vertex shader invocation perform all 6 block transforms. That seems like a lot of redundant work. However, I figured out that I can draw a square with only 4 vertices by arranging them in an ‘N’ pattern and asking WebGL to draw a TRIANGLE_STRIP instead of TRIANGLES. Now it’s only doing 4x the extra work, not 6x. GPUs are supposed to be great at this type of work, so it shouldn’t matter, right?
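    For reference, a minimal sketch of that 4-vertex ‘N’ arrangement drawn as a TRIANGLE_STRIP might look like this; the WebGL calls are standard, but the clip-space coordinates and the "a_position" attribute name are illustrative assumptions.

    // Sketch: draw a full-viewport square with 4 vertices using TRIANGLE_STRIP.
    // "gl" and "program" are assumed to be an existing context and a linked shader
    // program whose vertex shader declares an attribute named "a_position".
    function drawSquare(gl: WebGLRenderingContext, program: WebGLProgram): void {
      // Four clip-space vertices in an 'N' pattern:
      // bottom-left, top-left, bottom-right, top-right.
      const vertices = new Float32Array([
        -1, -1,
        -1,  1,
         1, -1,
         1,  1,
      ]);

      const buffer = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
      gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

      const positionLocation = gl.getAttribLocation(program, "a_position");
      gl.enableVertexAttribArray(positionLocation);
      gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);

      // The strip yields two triangles: (0, 1, 2) and (1, 2, 3).
      gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
    }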

    I wired up an experiment and then ran into a new problem: while I was able to transform a block (or at least pretend to), and load up a varying array (that wouldn’t vary since all vertex shaders wrote the same values) to transmit to the fragment shader, the fragment shader can’t access specific values within the varying block. To clarify, a WebGL shader can use a constant value — or a value that can be evaluated as a constant at compile time — to index into arrays; a WebGL shader cannot compute an index into an array. Per my reading, this is a WebGL security consideration and the limitation may not be present in other OpenGL(-ES) implementations.

    Not Giving Up Yet: Choking The Fragment Shader
    You might want to be sitting down for this pitch:

    • Vertex shader only interpolates texture coordinates to transmit to fragment shader
    • Fragment shader performs IDCT for a single Y sample, U sample, and V sample
    • Fragment shader converts YUV -> RGB

    Seems straightforward enough. However, that step concerning IDCT for Y, U, and V entails a gargantuan number of operations. When computing the IDCT for an entire block of samples, it’s possible to leverage a lot of redundancy in the math, which equates to far fewer overall operations. If you absolutely have to compute each sample individually, for an 8×8 block, that requires 64 multiplication/accumulation (MAC) operations per sample. For 3 color planes, and including a few extra multiplications involved in the RGB conversion, that tallies up to about 200 MACs per pixel. Then there’s the fact that this approach means doing 4x redundant work on the color planes.

    It’s crazy, but I just want to see if it can be done. My approach is to pre-compute a pile of IDCT constants in the JavaScript and transmit them to the fragment shader via uniform variables. For a first order optimization, the IDCT constants are formatted as 4-element vectors. This allows computing 16 dot products rather than 64 individual multiplication/addition operations. Ideally, GPU hardware executes the dot products faster (and there is also the possibility of lining these calculations up as matrices).
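    As a rough sketch of that 16-dot-product idea (the uniform names, the vec4 packing and the loop form are my assumptions, not the author’s actual shader), the per-sample reconstruction could look something like the GLSL snippet below. A constant-bound loop may index uniform arrays even in a fragment shader, so it stays within WebGL’s indexing restrictions; note, though, that 32 vec4 uniforms already approaches the minimum fragment-uniform budget WebGL 1 guarantees, which foreshadows the storage problem described below.

    // Sketch: one sample reconstructed as 16 vec4 dot products instead of
    // 64 scalar multiply/accumulates.
    const idctSampleSnippet: string = `
      uniform vec4 u_coeffs[16];     // 64 dequantized DCT coefficients packed as 16 vec4s
      uniform vec4 u_idctConst[16];  // 64 precomputed IDCT basis constants for THIS sample

      float reconstructSample() {
        float acc = 0.0;
        for (int i = 0; i < 16; i++) {   // constant loop bounds keep the indexing legal
          acc += dot(u_coeffs[i], u_idctConst[i]);
        }
        return acc;
      }
    `;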

    I can report that I actually got a sample correctly transformed using this approach. Just one sample, though. Then I ran into some new problems:

    Problem #1: Computing sample #1 vs. sample #0 requires a different table of 64 IDCT constants. Okay, so create a long table of 64 * 64 IDCT constants. However, this suffers from the same problem as seen in the previous approach: I can’t dynamically compute the index into this array. What’s the alternative? Maintain 64 separate named arrays and implement 64 branches, when branching of any kind is ill-advised in shader programming to begin with? I started to go down this path until I ran into…

    Problem #2: Shaders can only be so large. 64 * 64 floats (4 bytes each) requires 16 kbytes of data and this well exceeds the amount of shader storage that I can assume is allowed. That brings this path of exploration to a screeching halt.

    Further Brainstorming
    I suppose I could forgo pre-computing the constants and directly compute the IDCT for each sample, which would entail lots more multiplications as well as 128 cosine calculations per sample (384 considering all 3 color planes). I’m a little stuck with the transform idea right now. Maybe there are some other transforms I could try.

    Another idea would be vector quantization. What little ORBX.js literature is available indicates that there is a method to allow real-time streaming but that it requires GPU assistance to yield enough horsepower to make it feasible. When I think of such severe asymmetry between compression and decompression, my mind drifts towards VQ algorithms. As I come to understand the benefits and limitations of GPU acceleration, I think I can envision a way that something similar to SVQ1, with its copious, hierarchical vector tables stored as textures, could be implemented using shaders.

    So far, this all pertains to intra-coded video frames. What about opportunities for inter-coded frames? The only approach that I can envision here is to use WebGL’s readPixels() function to fetch the rasterized frame out of the GPU, and then upload it again as a new texture which a new frame processing pipeline could reference. Whether this idea is plausible would require some profiling.
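    A rough sketch of that readPixels round trip is below; the helper function is hypothetical, and whether the GPU-to-CPU copy is fast enough is precisely the profiling question raised above.

    // Sketch: pull the rasterized frame back from the GPU, then re-upload it as a
    // texture that the next frame's pipeline could sample as its reference frame.
    function captureReferenceFrame(
      gl: WebGLRenderingContext,
      width: number,
      height: number
    ): WebGLTexture | null {
      // Read the current framebuffer back into CPU memory (RGBA, 8 bits per channel).
      const pixels = new Uint8Array(width * height * 4);
      gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

      // Upload those pixels again as a new texture for the next frame to reference.
      const referenceTexture = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, referenceTexture);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                    gl.RGBA, gl.UNSIGNED_BYTE, pixels);
      // Settings that keep non-power-of-two frame sizes usable in WebGL 1.
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

      return referenceTexture;
    }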

    Using interframes in such a manner seems to imply that the entire codec would need to operate in RGB space and not YUV.

    Conclusions
    The people behind ORBX.js have apparently figured out a way to create a shader-based video codec. I have yet to even begin to reason out a plausible approach. However, I’m glad I did this exercise since I have finally broken through my ignorance regarding modern GPU shader programming. It’s nice to have a topic like multimedia that allows me a jumping-off point to explore other areas.

  • Marketing Touchpoints: Examples, KPIs, and Best Practices

    11 March 2024, by Erin

    The customer journey is rarely straightforward. Rather, each stage comprises numerous points of contact with your brand, known as marketing touchpoints. And each touchpoint is equally important to the customer experience. 

    This article will explore marketing touchpoints in detail, including how to analyse them with attribution models and which KPIs to track. It will also share tips on incorporating these touchpoints into your marketing strategy. 

    What are marketing touchpoints?

    Marketing touchpoints are the interactions that take place between brands and customers throughout the latter’s journey, either online or in person. 

    Omni-channel digital marketing illustration

    By understanding how customers interact with your brand before, during and after a purchase, you can identify the channels that contribute to starting, driving and closing buyer journeys. Not only that, but you’ll also learn how to optimise the customer experience. This can also help you:

    • Promote customer loyalty through increased customer satisfaction
    • Improve your brand reputation and foster a more positive perception of your brand, supported by social proof 
    • Build brand awareness among prospective customers 
    • Reconnect with current customers to drive repeat business

    According to a 2023 survey, social media and video-sharing platforms are the leading digital touchpoints among US consumers.

    With the customer journey divided into three stages — awareness, consideration, and decision — we can group these interactions into three touchpoint segments, depending on whether they occur before, during or after a purchase. 

    Touchpoints before a purchase

    Touchpoints before a purchase are those initial interactions between potential customers and brands that occur during the awareness stage — before they’ve made a purchase decision. 

    Here are some key touchpoints at the pre-purchase stage:

    • Customer reviews, forums, and testimonials 
    • Social media posts
    • Online ads 
    • Company events and product demos
    • Other digital touchpoints, like video content, blog posts, or infographics
    • Peer referral 

    In PwC’s 2024 Global Consumer Insights Pulse Survey, 54% of consumers listed search engines as their primary source of pre-purchase information, followed by Amazon (35%) and retailer websites (33%). 

    Here are the survey’s findings in Western Europe, specifically:

    Social channels are another major pre-purchase touchpoint; 25% of social media users aged 18 to 44 have made a purchase through a social media app over the past three months.

    Touchpoints during a purchase

    Touchpoints during a purchase occur when the prospective customer has made their purchase decision. It’s the beginning of a (hopefully) lasting relationship with them. 

    It’s important to involve both marketing and sales teams here — and to keep track of conversion metrics.

    Here are the main touchpoints at this stage:

    • Company website pages 
    • Product pages and catalogues 
    • Communication between customers and sales reps 
    • Product packaging and labelling 
    • Point-of-sale (POS) — the final touchpoint the prospective customer will reach before making the final purchasing decision 

    Touchpoints after a purchase

    You can use touchpoints after a purchase to maintain a positive relationship and keep current customers engaged. Examples of touchpoints that contribute to a good post-purchase experience for the customer include the following:

    • Thank-you emails 
    • Email newsletters 
    • Customer satisfaction surveys 
    • Cross-selling emails 
    • Renewal options 
    • Customer loyalty programs

    Email marketing remains significant across all touchpoint segments, with 44% of CMOs agreeing that it’s essential to their marketing strategy — and it also plays a particularly important role in the post-purchase experience. For 61.1% of marketing teams, email open rates are higher than 20%.

    Sixty-nine percent of consumers say they’ve stopped doing business with a brand following a bad experience, so the importance of customer service touchpoints shouldn’t be overlooked. Live chat, chatbots, self-service resources, and customer service teams are integral to the post-purchase experience.

    Attribution models: Assigning value to marketing touchpoints

    Determining the most effective touchpoints — those that directly contribute to conversions — is a process known as marketing attribution. The goal here is to identify the specific channels and points of contact with prospective customers that result in revenue for the company.

    Illustration of the marketing funnel stages

    You can use these insights to understand — and maximise — marketing return on investment (ROI). Otherwise, you risk allocating your budget to the wrong channels. 

    It’s possible to group attribution models into two categories — single-touch and multi-touch — depending on whether you assign value to one or more contributing touchpoints.

    Single-touch attribution models, where you give credit for the conversion to a single touchpoint, include the following:

    • First-touch attribution: This assigns credit for the conversion to the first interaction a customer had with a brand; however, it fails to consider lower-funnel touchpoints.
    • Last-click attribution: This focuses only on bottom-of-funnel marketing and credits the last interaction the customer had with a brand before completing a purchase.
    • Last non-direct attribution: This gives all the credit to the touchpoint immediately preceding a direct touchpoint.

    Multi-touch attribution models are more complex and distribute the credit for conversion across multiple relevant touchpoints throughout the customer journey:

    • Linear attribution: The simplest multi-touch attribution model assigns equal value to all contributing touchpoints.
    • Position-based or U-shaped attribution: This assigns the greatest value to the first and last touchpoints — 40% of the conversion credit each — and then divides the remaining 20% across all the other touchpoints.
    • Time-decay attribution: This model assigns the most credit to the customer’s most recent interactions with a brand, assuming that touchpoints occurring later in the journey have a bigger impact on the conversion.
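    To make the difference between these models concrete, here is a small TypeScript sketch of how credit could be split for linear and U-shaped attribution; the touchpoint names are made up, and this illustrates the definitions above rather than Matomo’s implementation.

    // Sketch: distribute conversion credit across an ordered list of touchpoints.
    type Credit = { touchpoint: string; credit: number };

    // Linear: every touchpoint gets an equal share.
    function linearAttribution(touchpoints: string[]): Credit[] {
      const share = 1 / touchpoints.length;
      return touchpoints.map((t) => ({ touchpoint: t, credit: share }));
    }

    // Position-based (U-shaped): 40% to the first, 40% to the last,
    // and the remaining 20% split evenly across the middle touchpoints.
    function uShapedAttribution(touchpoints: string[]): Credit[] {
      const n = touchpoints.length;
      if (n === 1) return [{ touchpoint: touchpoints[0], credit: 1 }];
      if (n === 2) return touchpoints.map((t) => ({ touchpoint: t, credit: 0.5 }));
      const middleShare = 0.2 / (n - 2);
      return touchpoints.map((t, i) => ({
        touchpoint: t,
        credit: i === 0 || i === n - 1 ? 0.4 : middleShare,
      }));
    }

    const journey = ["social ad", "blog post", "email newsletter", "product page"];
    console.log(linearAttribution(journey));  // 25% each
    console.log(uShapedAttribution(journey)); // 40%, 10%, 10%, 40%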

    Consider the following when choosing the most appropriate attribution model for your business:

    • The length of your typical sales cycle
    • Your marketing goals: increasing awareness, lead generation, driving revenue, etc.
    • How many stages and touchpoints make up your sales funnel

    Sometimes, it even makes sense to measure marketing performance using more than one attribution model.

    With the sheer volume of data that’s constantly generated across numerous online touchpoints, from your website to social media channels, it’s practically impossible to collect and analyse it manually.

    You’ll need an advanced web analytics platform to identify key touchpoints and assign value to them.

    Matomo’s Marketing Attribution feature can accurately measure the performance of different touchpoints to ensure that you’re allocating resources to the right channels. This is done in a compliant manner, without data sampling and without requiring cookie consent screens (except in Germany and the UK), ensuring both accuracy and privacy compliance.

    Customer journey KPIs for measuring marketing campaign performance 

    Measuring the impact of different touchpoints on marketing campaign performance can help you understand how customer interactions drive conversions — and how to optimise your future efforts. 

    Illustration of customer journey concept

    Clearly, this is not a one-time effort. You should continuously reevaluate the crucial touchpoints that drive the most engagement at different stages of the customer journey. 

    Web analytics platforms can provide valuable insights into ever-changing consumer behaviours and trends and help you make informed decisions. 

    At the moment, Google is the most popular solution in the web analytics industry, with a combined market share of more than 70%.

    However, if privacy, data accuracy, and GDPR compliance are a priority for you, Matomo is an alternative worth considering.

    KPIs to track before a purchase 

    During the pre-purchase stage, focus on the KPIs that measure the effectiveness of marketing activities across various online touchpoints — landing pages, email campaigns, social channels and ad placement on SERPs, for instance. 

    KPIs to track during the consideration stage include the following (a short worked example follows the list):

    • Cost-per-click (CPC): The CPC, the total cost of paid online advertising divided by the number of clicks those ads get, indicates whether you’re getting a good ROI. In the UK, the average CPC for search advertising is $1.22. Globally, it averages $0.62.
    • Engagement rate: The engagement rate, which is the total number of interactions divided by the number of followers, is useful for measuring the performance of social media touchpoints. Customer engagement also applies to other channels, like tracking average time on-page, form conversions, bounce rates, and other website interactions.
    • Click-through rate (CTR): The CTR — or the number of clicks your ads receive compared to the number of times they’re shown — helps you measure the performance of CTAs, email newsletters and pay-per-click (PPC) advertising.
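    These pre-purchase KPIs are simple ratios; the TypeScript sketch below uses hypothetical campaign figures purely for illustration.

    // Sketch: pre-purchase KPI calculations with hypothetical campaign numbers.
    const adSpend = 500;         // total paid advertising cost
    const adClicks = 820;        // clicks those ads received
    const adImpressions = 41000; // times the ads were shown
    const interactions = 1300;   // likes, comments, shares, etc.
    const followers = 26000;     // audience size

    const cpc = adSpend / adClicks;                          // cost-per-click
    const ctr = (adClicks / adImpressions) * 100;            // click-through rate, %
    const engagementRate = (interactions / followers) * 100; // engagement rate, %

    console.log(cpc.toFixed(2));            // "0.61"
    console.log(ctr.toFixed(1));            // "2.0"
    console.log(engagementRate.toFixed(1)); // "5.0"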

    KPIs to track during a purchase 

    As a potential customer moves further down the sales funnel and reaches the decision stage, where they’re ready to make the choice to purchase, you should be tracking the following (a short worked example follows the list):

    • Conversion rate: This is the percentage of leads that convert into customers by completing the desired action, relative to the total number of website visitors. It shows you whether you’re targeting the right people and providing a frictionless checkout experience.
    • Sales revenue: This refers to the quantity of products sold multiplied by the product’s price. It helps you track the company’s ability to generate profit.
    • Cost per conversion: This KPI is the total cost of online advertising in relation to the number of conversions. It measures the effectiveness of different marketing channels and the costs of converting prospective customers into buyers. It can also help forecast future ad spend.
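    A similar sketch for the purchase-stage KPIs, again with hypothetical figures:

    // Sketch: purchase-stage KPI calculations with hypothetical figures.
    const visitors = 12000;   // total website visitors
    const conversions = 360;  // completed purchases
    const unitsSold = 360;    // quantity of products sold
    const unitPrice = 49;     // price per product
    const totalAdCost = 4500; // total online advertising cost

    const conversionRate = (conversions / visitors) * 100; // %
    const salesRevenue = unitsSold * unitPrice;
    const costPerConversion = totalAdCost / conversions;

    console.log(conversionRate.toFixed(1));    // "3.0"
    console.log(salesRevenue);                 // 17640
    console.log(costPerConversion.toFixed(2)); // "12.50"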

    KPIs to track after purchase 

    At the post-purchase stage, your priority should be gathering feedback:

    Customer feedback surveys are great for collecting insights into customers’ post-purchase experience, opinions about your brand, products and services, and needs and expectations. 

    In addition to measuring customer satisfaction, these insights can help you identify points of friction, forecast future growth and revenue and spot customers at risk of churning. 

    Focus on the following customer satisfaction and retention metrics (a short example of the NPS and CSAT calculations follows the list):

    • Customer Satisfaction Score (CSAT): This metric, which is gathered through customer satisfaction surveys, helps you gauge satisfaction levels. After all, 77% of consumers consider great customer service an important driver of brand loyalty.
    • Net Promoter Score (NPS): Based on single-question customer surveys, NPS indicates how likely a customer is to recommend your business.
    • Customer Lifetime Value (CLV): The CLV is the profit you can expect to generate from one customer throughout their relationship with your company.
    • Customer Health Score (CHS): This score can assess how “healthy” the customer’s relationship with your brand is and identify at-risk customers.
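    The sketch below uses the standard NPS definition (promoters rate 9 or 10, detractors 0 to 6 on a 0-10 scale) and a typical top-two-box CSAT; the survey responses are hypothetical.

    // Sketch: standard NPS and a common CSAT calculation over hypothetical responses.

    // NPS: answers to "How likely are you to recommend us?" on a 0-10 scale.
    function netPromoterScore(ratings: number[]): number {
      const promoters = ratings.filter((r) => r >= 9).length;
      const detractors = ratings.filter((r) => r <= 6).length;
      return ((promoters - detractors) / ratings.length) * 100;
    }

    // CSAT: share of respondents choosing 4 or 5 on a 1-5 satisfaction scale.
    function csatScore(ratings: number[]): number {
      const satisfied = ratings.filter((r) => r >= 4).length;
      return (satisfied / ratings.length) * 100;
    }

    console.log(netPromoterScore([10, 9, 8, 7, 6, 10, 3, 9])); // 25
    console.log(csatScore([5, 4, 3, 5, 2, 4]));                // ~66.7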

    Marketing touchpoints: Tips and best practices

    Customer experience is more important today than ever. 

    Illustration of marketing funnel optimisation

    Salesforce’s 2022 State of the Connected Consumer report indicated that, for 88% of customers, the experience the brand provides is just as important as the product itself. 

    Here’s how you can build your customer touchpoint strategy and use effective touchpoints to improve customer satisfaction, build a loyal customer base, deliver better digital experiences and drive growth:

    Understand the customer’s end-to-end experience 

    The typical customer’s journey follows a non-linear path of individual experiences that shape their awareness and brand preference. 

    Seventy-three percent of customers expect brands to understand their needs. So, personalising each interaction and delivering targeted content at different touchpoint segments — supported by customer segmentation and tools like Matomo — should be a priority. 

    Try to put yourself in the prospective customer’s shoes and understand their motivation and needs, focusing on their end-to-end experience rather than individual interactions. 

    Create a customer journey map 

    Once you understand how prospective customers interact with your brand, it becomes easier to map their journey from the pre-purchase stage to the actual purchase and beyond. 

    By creating these visual “roadmaps,” you make sure that you’re delivering the right content on the right channels at the right times and to the right audience — the key to successful marketing.

    Identify best-performing digital touchpoints 

    You can use insights from marketing attribution to pinpoint areas that are performing well. 

    By analysing the data provided by Matomo’s Marketing Attribution feature, you can determine which digital touchpoints are driving the most conversions or engagement, allowing you to focus your resources on optimising these channels for even greater success. 

    This targeted approach helps maximise the effectiveness of your marketing efforts and ensures a higher return on investment.

    Discover key marketing touchpoints with Matomo 

    The customer’s journey rarely follows a direct route. If you hope to reach more customers and improve their experience, you’ll need to identify and manage individual marketing touchpoints every step of the way.

    While this process looks different for every business, it’s important to remember that your customers’ experience begins long before they interact with your brand for the first time — and carries on long after they complete the purchase. 

    In order to find these touchpoints and measure their effectiveness across multiple marketing channels, you’ll have to rely on accurate data — and a powerful web analytics tool like Matomo can provide those valuable marketing insights. 

    Try Matomo free for 21 days. No credit card required.