Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (39)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • Adding notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying, and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software or translate it into any new language, enabling its adoption by new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    Currently, MediaSPIP is only available in French and (...)

On other sites (6048)

  • Understanding GDPR compliance: Key principles and requirements

    28 August, by Joe

    Any company with an online presence will likely collect customers’ personal data in the normal course of business. But those with customers residing in the European Economic Area (EEA) — basically, the European Union (EU) plus Iceland, Liechtenstein and Norway — must comply with the General Data Protection Regulation (GDPR). Companies serving UK data subjects post-Brexit must also abide by the UK GDPR, which includes certain regional variations.

    GDPR authorities are only concerned with personal data (not with non-personal or anonymous data), ensuring that it’s collected, used, and stored in a way that respects users’ rights and privacy.

    Failure to comply can present serious business risks, including:

    • Financial penalties (more about that shortly)
    • Compensation claims from data subjects for mishandling their information
    • Reputational damage (if/when a data breach does occur)
    • Disruption to operations
    • Personal accountability of executives (including potential sanctions)

    This article explores the GDPR and personal data protection, the rights it confers on European data subjects, and how those rights are enforced. We’ll wrap up with an 11-step plan for GDPR compliance. 

    Let’s begin.

    The price of non-compliance

    The largest fine levied so far for GDPR non-compliance is the €1.2 billion penalty imposed in May 2023 by the Irish Data Protection Commission (DPC) on Meta (previously Facebook), for transferring EU/EEA data subjects' personal data to the US from 16 July 2020 in breach of GDPR international data transfer rules.

    Many other fines have been levied for GDPR non-compliance, and there'll probably be a lot more in the future:

    Penalty        Company    Supervisory authority                                        Date
    €746 million   Amazon     Luxembourg National Commission for Data Protection (CNDP)    16 July 2021
    €405 million   Meta       Ireland's Data Protection Commission (DPC)                   5 September 2022
    €390 million   Meta       Ireland's Data Protection Commission (DPC)                   6 January 2023
    €345 million   TikTok     Ireland's Data Protection Commission (DPC)                   1 September 2023
    €310 million   LinkedIn   Ireland's Data Protection Commission (DPC)                   30 October 2024
    €290 million   Uber       Dutch Data Protection Authority (DPA)                        26 August 2024

    Those are big numbers. European supervisory authorities take enforcement seriously.

    So, what is personal data anyway?

    GDPR defines personal data as any information about a data subject (an identified or identifiable individual). This covers both direct identifiers (name, address, ID numbers, etc.) and indirect identifiers (IP addresses, location data, etc.). It categorises personal data into two types: general and special category.

    General data includes identifiers like names, contact details, and financial information. 

    Special category data, such as racial or ethnic origin, health data, biometric information, and sexual orientation, needs more protection. 

    The processing of special category data is only allowed under certain conditions, for example, if consent was given explicitly or if vital interests (e.g., a threat to life), legal obligations, or public interest are involved. GDPR emphasises safeguarding sensitive data due to its potential impact on individuals’ privacy and rights.

    Important GDPR terminology

    Apart from the data subject, personal data, and special category data mentioned above, GDPR introduces other legal terms and concepts organisations must understand. A data controller decides what personal data to collect and how to use it. A data processor processes the data on behalf of the data controller.

    A Data Protection Officer (DPO) oversees GDPR compliance. Processing is any operation performed on data, such as collecting, analysing or storing it. That processing must also have a lawful basis, such as consent, contract, or legitimate interests. And consent must be freely given, specific, and easily withdrawable. 

    A data breach involves unauthorised access to or loss of personal data. A Data Protection Impact Assessment (DPIA) identifies risks to individuals’ rights. Data minimisation requires organisations to minimise what data they collect. Countries in the EU/EEA have appointed a supervisory authority to enforce GDPR in their territory.

    Rights of EU/EEA data subjects under GDPR 

    GDPR grants specific rights to individuals (data subjects) who are physically present in the EU/EEA when their personal data is processed, regardless of nationality or residence status. The business’s physical or legal presence is irrelevant, as the determining factor is the data subject’s location at the time of processing.

    Non-compliance can lead to significant penalties and even criminal charges in jurisdictions where such penalties are enforced under national law. 

    To support responsible data practices, the GDPR defines key foundational rights.

    Transparency

    Two rights granted to data subjects in the EU/EEA under GDPR relate to transparency:

    1. The right to be informed (proactive, applies at data collection)
    2. The right of access (reactive, applies when the data subject makes a request)

    Together, they provide transparency by mandating that data subjects be given specific details about the processing of their data, including:

    • Company or organisation processing the data (with contact details)
    • Reasons for using the data
    • Categories of personal data involved
    • Legal basis for processing the data
    • How long data will be stored
    • Other companies, organisations, or third parties with access to the data
    • Whether data will be transferred outside the EU/EEA

    Privacy notices should meet the standards in GDPR Articles 12–14, covering what data is collected, for what purpose, and how users can exercise their rights. 

    For a deeper dive, check out: How to write a GDPR-compliant privacy notice.
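
    As a rough illustration only, the disclosure items above could be modelled as a record type; the field names below are our own invention, not terms defined by the GDPR:

      // Hypothetical sketch of the Article 13/14 disclosure items.
      // Field names are illustrative, not prescribed by the GDPR.
      interface PrivacyNoticeDetails {
        controller: { name: string; contactEmail: string }; // who processes the data
        purposes: string[];                                 // why the data is used
        dataCategories: string[];                           // what personal data is involved
        legalBasis: 'consent' | 'contract' | 'legal obligation' | 'legitimate interests';
        retentionPeriod: string;                            // how long data is stored
        recipients: string[];                               // third parties with access
        transfersOutsideEEA: boolean;                       // whether data leaves the EU/EEA
      }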

    Objections and restricted processing

    Under GDPR, individuals in the EU/EEA have the right to object to the processing of personal data in two key respects:

    1. They can object to direct marketing, after which organisations must stop processing their data immediately, with no justification required.
    2. If data is being processed on the basis of the organisation’s legitimate interests or for tasks carried out in the public interest, data subjects can object if they believe their own rights and freedoms outweigh those interests. Again, processing must stop unless the organisation proves compelling legitimate grounds outweighing the individual’s rights.

    Individuals can also request temporary restrictions on data processing when:

    • Their data isn't accurate (until it has been verified).
    • Processing is unlawful, but they prefer restriction over deletion.
    • Their data is no longer being used but must be retained for legal purposes.
    • They have objected to processing, and verification of legitimate grounds is pending.

    During restriction, the organisation can continue storing the data but may not otherwise process it, except with the individual's consent or where certain exceptions apply.

    Rectification and erasure

    Individuals have the right to rectification of errors in their data and to erasure (deletion of their data). First, they can request corrections to inaccurate or incomplete personal data. GDPR requires organisations to act without undue delay to ensure that stored data remains accurate and up to date.

    The right to erasure (aka the right to be forgotten) enables individuals to request deletion of their personal data when:

    • It’s no longer needed for its original purpose
    • They withdraw consent, and no other legal basis exists
    • Processing is unlawful
    • They object to processing, and no overriding legitimate grounds exist
    • The data must be deleted to comply with a legal obligation

    Organisations must delete data unless exemptions (e.g., legal compliance, public interest, or legal claims) apply.

    Data portability

    GDPR provides the right to data portability. People can request their personal data in a structured, common, and machine-readable format so it's easier to review or transfer to another service provider. This applies when data is:

    • Provided by the individual, either directly (e.g., name, email) or indirectly through use of a service (e.g., purchase history)
    • Processed based on consent or a contract
    • Handled using automated means

    Portability does not apply to personal data processed on the basis of legal obligations or legitimate interests; it only applies when processing is based on consent or a contract and carried out by automated means.

    Where technically feasible, GDPR also requires organisations to facilitate direct transfers of personal data to another controller at the subject’s request.
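
    As a minimal sketch, a portability export might serialise the subject's data as JSON, one common structured, machine-readable choice; the data shape here is illustrative:

      // Illustrative sketch: exporting a user's data in a structured,
      // machine-readable format (JSON) for a portability request.
      interface PortableUserData {
        profile: { name: string; email: string };           // provided directly
        purchaseHistory: { item: string; date: string }[];  // provided through use of the service
      }

      function exportUserData(data: PortableUserData): string {
        // JSON is structured, commonly used, and machine-readable.
        return JSON.stringify(data, null, 2);
      }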


    Automated decision-making and profiling

    GDPR grants EU/EEA data subjects the right not to be subject to decisions based solely on automated processing that have legal or similarly significant effects. This applies to matters affecting them, such as job screening, loan approvals, or insurance pricing. They can:

    • Request human intervention: A real person must review the decision.
    • Express their viewpoint: Provide additional information or dispute the outcome.
    • Challenge the decision: Demand justification and correction if unfair.

    For example, imagine someone applying for a loan online, and the algorithm rejects the application based on credit history. They can request a human review to ensure fairness and consider special circumstances, such as recent debt clearance.

    However, GDPR also provides for some exceptions. Automated decisions are allowed if one of the following statements is true:

    • The decision is based on the data subject's explicit consent.
    • It's necessary for entering into or performing a contract.
    • It's authorised by law, with suitable safeguards.

    How is GDPR enforced?

    GDPR enforcement is carried out primarily by national supervisory authorities in each EU/EEA country. These authorities investigate complaints, conduct audits, and impose penalties for non-compliance within their jurisdictions. In cross-border cases, they collaborate through the one-stop-shop mechanism, which designates a lead authority to coordinate enforcement.

    The European Data Protection Supervisor (EDPS) is the independent data protection authority for EU institutions and agencies. It does not supervise private-sector or national public-sector organisations and is not a general enforcer of the GDPR.

    The European Data Protection Board (EDPB) is the body responsible for ensuring consistent application of the GDPR across the EU/EEA. Made up of representatives from national supervisory authorities and the EDPS, the EDPB issues guidelines, resolves disputes between authorities, and adopts binding decisions in cross-border matters.

    The origins of GDPR

    The EU’s regulation was adopted in 2016 to replace the 1995 Data Protection Directive (DPD), which predated the digital age. As technology use increased, vast amounts of personal data were collected, analysed, and stored, often without people’s knowledge, threatening their privacy and security.

    The main motivation behind GDPR was to unify the application of data protection rules across the EU/EEA through a directly applicable regulation, rather than a directive that required separate implementation by each member state. The aim was to eliminate fragmentation, ensure consistent enforcement, and strengthen individuals’ rights.

    Enter GDPR. It was agreed after years of negotiations between the EU member states, the European Parliament, and the European Commission, formally adopted in 2016, and became fully enforceable on 25 May 2018. Unlike the DPD, which each member state had to implement separately, the GDPR has applied uniformly across the EU/EEA from that date.

    The EEA adopted the GDPR on 6 July 2018, and it went into force there on 20 July 2018. It has since become a global template, influencing data protection and privacy laws in countries like Brazil (LGPD), India, and Japan. The UK retained GDPR after Brexit, adapting it into the UK GDPR, which closely mirrors the EU version but allows for future divergence.

    Who does it apply to?

    GDPR protects the personal data of individuals in the EU/EEA. It applies to any organisation processing that data, no matter where in the world the organisation is located. This remains true even if the data is transferred outside the EU/EEA for storage and/or processing.

    Organisations are evidently struggling with this regulation, as the fines meted out so far attest. Whether those penalties are paid, reduced through negotiation, or still owed, they remain a lingering source of uncertainty for the companies involved.

    Who must comply

    GDPR applies if you:

    • Have an office or another form of establishment in the EU/EEA, or
    • Offer goods/services to data subjects located in the EU/EEA (even if free) or
    • Monitor EU/EEA data subjects' behaviour (e.g., via cookies or analytics)

    What does GDPR require?

    GDPR requires organisations to respect a clear set of data protection principles: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability. It also obliges them to ensure that they always have a valid legal basis (consent, contract, legal obligation, legitimate interests, etc.) to process the personal data.

    Data should also not be stored longer than necessary to fulfil the specific purpose for which it was collected. Appropriate organisational measures must be taken to ensure the security and integrity of the personal data and protect it from breaches, loss, or unauthorised access. Should a reportable data breach occur, it must be reported to the relevant supervisory authority within 72 hours. Affected individuals must be informed if the breach is likely to result in a high risk to their rights.
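
    As a toy illustration of the 72-hour rule (the deadline arithmetic only; the legal duty itself is more nuanced):

      // Toy sketch of the 72-hour notification window. The clock starts
      // when the organisation becomes aware of the breach.
      const NOTIFICATION_WINDOW_MS = 72 * 60 * 60 * 1000;

      function notificationDeadline(awareAt: Date): Date {
        return new Date(awareAt.getTime() + NOTIFICATION_WINDOW_MS);
      }

      function isOverdue(awareAt: Date, now: Date = new Date()): boolean {
        return now.getTime() > notificationDeadline(awareAt).getTime();
      }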

    Organisations must also demonstrate accountability by keeping detailed records of processing activities and conducting DPIAs for high-risk processing. If their core activities involve large-scale processing of special categories of data or regular and systematic monitoring of individuals, they must appoint a DPO. 

    Finally, organisations must implement adequate safeguards when transferring data outside the EU/EEA through the GDPR Chapter V mechanisms, such as adequacy decisions, Standard Contractual Clauses, Binding Corporate Rules, etc.

    By adhering to these requirements, organisations ensure compliance with GDPR and protect the data privacy and rights of EU/EEA data subjects.

    11 steps to compliance

    Once you’ve confirmed that the GDPR applies to your organisation’s processing of personal data, you can begin working toward compliance.

    Below, we’ve broken the process into eleven clear steps to help guide you.

    Step 1: Map your data: Purpose, use and legal basis

    Any organisation operating in the EU, EEA or UK and handling personal data of data subjects in those regions must audit all the personal data it currently holds. 

    Your organisation must identify the legal basis for processing all data subject to the GDPR. If no legal basis can be found or justified, the processing will not be permitted under the GDPR.

    Step 2: Consider appointing a DPO

    According to the GDPR text, a DPO is mandatory only under certain conditions, mainly relating to the volume of processing and the type of organisation. It is required for:

    • Public authorities that process personal data as a matter of course, except for courts in their judicial capacity.
    • Organisations whose core activities involve regular and systematic monitoring of data subjects on a large scale.
    • Organisations that process specific “special” data categories (as defined by the GDPR) or data relating to criminal offences as a core activity on a large scale.

    These criteria are vague: GDPR doesn't clearly define "core activity" or "large scale". If you are unsure whether your organisation falls into these categories, seek legal advice and err on the side of caution. Regardless, even if you are not required to appoint a DPO, it's a good idea to appoint someone to monitor and oversee GDPR compliance efforts internally.

    Step 3: Identify supervisory authorities

    This is generally governed by the territories in which an organisation operates. For operations that span multiple countries, however, the GDPR provides a one-stop-shop mechanism to streamline oversight.

    In such cases, a lead supervisory authority (LSA) is designated. Organisations cannot freely choose their LSA; it depends on the location of the main establishment (Art. 56 GDPR).

    Most EEA countries have only one supervisory authority. Germany is the exception: each federal state has its own DPA, and the Bundesbeauftragte für den Datenschutz und die Informationsfreiheit oversees federal matters.

    Step 4: Consider a Data Protection Impact Assessment

    GDPR requires a DPIA when processing is likely to result in a high risk to individuals’ rights and freedoms. Examples include large-scale processing of sensitive data, systematic profiling, public monitoring, or innovative technology use. A DPIA involves describing the processing, assessing necessity, identifying risks, and implementing mitigation measures.

    If the process reveals residual, unmitigated high risks, the DPIA report must be submitted to the relevant supervisory authority for consultation before the processing can proceed. Feedback can be expected within 8 weeks (extendable to 14 weeks), and its recommendations must be implemented. Conducting a DPIA is one way to ensure compliance; it also protects individuals' rights and helps avoid fines for non-compliance.

    Step 5: Establish a data breach process

    Organisations must implement systems to quickly identify breaches and assess their scope and impact. They must act immediately to contain the breach and record all the details and the actions taken.

    [Image: a bulleted list of incidents that may lead to a data breach]

    Data breaches likely to result in a risk to individuals’ rights and freedoms must be reported to the supervisory authority within 72 hours of the organisation becoming aware of the breach. If the breach is likely to result in a high risk to the individuals’ rights and freedoms, the controller has an obligation to inform the affected individuals as well. Data breach processes should also be reviewed regularly and included in staff training. 

    Here's a simplified version:

    Simplified data breach response checklist

    [ ] Detect and confirm the breach
    [ ] Contain and mitigate the impact
    [ ] Assess the severity and potential harm
    [ ] Document the breach
    [ ] Report the breach
    [ ] Inform affected individuals
    [ ] Review and improve
    [ ] Train staff in breach response protocols

    Step 6: Review websites and website form security

    Websites and the forms on them are common gateways for personal data, making them a high-value target for bad actors. Ensuring these entry points are secure is essential to protecting user data and supporting GDPR’s requirements for confidentiality, integrity, and resilience (Article 32).

    Here are some key actions to take:

    Website and form security best practices

    Use HTTPS with a valid SSL/TLS certificate: Ensure pages that collect or display personal data are served over HTTPS to encrypt data in transit and prevent interception.
    Secure all data collection forms: Validate and sanitise user input to protect against common threats, such as cross-site scripting (XSS), injection attacks, and form spam. Use security headers such as Content Security Policy (CSP) to prevent malicious script execution. Implement CAPTCHAs or other bot detection.
    Restrict access to form submissions: Store submitted data securely and restrict access to authorised personnel. Use strong passwords, enable multi-factor authentication (MFA), and apply role-based access controls (RBAC) where possible.
    Keep your website software up to date: Apply regular security patches to your CMS, plugins, and third-party libraries. Remove unused components and services that may introduce vulnerabilities.
    Monitor and test for vulnerabilities: Perform regular security scans and penetration tests to identify risks. Monitor error logs and unusual activity, especially around form endpoints.

    Taking these proactive steps to strengthen form security and reduce breach risk will support your organisation's GDPR compliance posture.
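
    To make a few of these measures concrete, here is a minimal sketch in an Express-style Node.js middleware; the header values are illustrative and would need tuning for a real site:

      // Minimal sketch: HTTPS redirection plus basic security headers,
      // written as Express middleware. Header values are illustrative.
      import express from 'express';

      const app = express();

      app.use((req, res, next) => {
        // Redirect plain HTTP to HTTPS (assumes a proxy sets x-forwarded-proto).
        if (req.headers['x-forwarded-proto'] === 'http') {
          return res.redirect(301, `https://${req.headers.host}${req.url}`);
        }
        // Only allow scripts and other resources from our own origin.
        res.setHeader('Content-Security-Policy', "default-src 'self'");
        // Ask browsers to stick to HTTPS for a year.
        res.setHeader('Strict-Transport-Security', 'max-age=31536000');
        next();
      });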

    Step 7: Consider age when required

    Under Article 8 of the GDPR, age verification is only required when:

    • Personal data is being processed on the basis of consent, and
    • The service is offered directly to children (i.e., an information society service provided online)

    In these cases, organisations must ensure the child is at least 16 years old, unless a lower age threshold has been set by national law (e.g., 13 in the UK).

    Age verification methods must be proportionate to the level of risk, aligned with the principle of data minimisation, and appropriate for the audience. Common approaches include:

    • Self-declaration with confirmation prompts
    • Email-based parental consent mechanisms
    • Content gating or notices for services not intended for children

    More intrusive methods, such as biometric estimation, government ID upload, or video verification, should be avoided unless absolutely necessary. When justified, such methods must undergo a Data Protection Impact Assessment (DPIA) and meet the requisite necessity and proportionality standards.
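
    A self-declaration age gate, the lightest of the approaches above, might look something like this sketch; the threshold parameter stands in for whatever national law applies:

      // Sketch of a minimal self-declaration age gate. Article 8 sets the
      // default threshold at 16, but national law may lower it (e.g., 13).
      function requiresParentalConsent(declaredAge: number, threshold = 16): boolean {
        return declaredAge < threshold;
      }

      // If true, route the user into a parental consent flow (for example,
      // an email-based consent mechanism) before processing personal data.
      const needsConsent = requiresParentalConsent(14); // true with the default threshold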

    Step 8: Implement double opt-in for all email lists and services

    At present, Germany is the only EU country with a clear legal mandate for double opt-in under its national GDPR implementation and ePrivacy laws. While not explicitly required elsewhere in the EU and EEA, double opt-in is widely recommended as a best practice to ensure explicit consent.

    This process confirms that the user explicitly agrees while reducing opportunities for fraud and improving compliance. It also builds trust, as customers know how you’re handling their data. A clear, up-to-date privacy policy is essential to the process. It must outline how data is used and stored and how an individual’s rights can be exercised.

    For example, obtaining consent in an email marketing campaign may involve the following steps:

    1. The user signs up for a newsletter or service.
    2. They receive a confirmation email/text message with a verification link.
    3. The user clicks the link to confirm consent.
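
    A bare-bones sketch of that flow might pair each pending signup with a random token; all names here are our own, not any particular library's API:

      // Bare-bones double opt-in sketch. Names are illustrative.
      import { randomBytes } from 'crypto';

      const pending = new Map<string, string>(); // token -> email

      // Steps 1-2: record the signup and send a confirmation link.
      function startSignup(email: string, send: (to: string, link: string) => void): void {
        const token = randomBytes(16).toString('hex');
        pending.set(token, email);
        send(email, `https://example.com/confirm?token=${token}`);
      }

      // Step 3: the user clicks the link; only now is consent recorded.
      function confirm(token: string): string | undefined {
        const email = pending.get(token);
        if (email) {
          pending.delete(token);
          // Record consent with a timestamp so it can be demonstrated later.
        }
        return email;
      }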

    Step 9: Restrict international data transfers

    GDPR restricts transfers of data subjects' personal data outside the European Economic Area (EEA).

    Such transfers are permitted only if one of the following conditions is met:

    1. Appropriate safeguards are in place, such as:
      • Standard contractual clauses (SCCs) approved by the Commission
      • Binding corporate rules (BCRs) for multinational groups
    2. The destination country has received an adequacy decision from the European Commission, as listed below.
    Countries with GDPR adequacy decisions (as of July 2025)

    Andorra             Full adequacy decision
    Argentina           Full adequacy decision
    Canada              Applies only to commercial organisations under PIPEDA
    Faroe Islands       Full adequacy decision
    Guernsey            Full adequacy decision
    Isle of Man         Full adequacy decision
    Israel              Full adequacy decision
    Japan               Adequacy with additional safeguards aligned to EU standards
    Jersey              Full adequacy decision
    New Zealand         Full adequacy decision
    Republic of Korea   Adequacy decision adopted in 2021
    Switzerland         Longstanding adequacy decision (dating back to the 2000s)
    United Kingdom      Adequacy under both GDPR and the Law Enforcement Directive (LED)
    United States       Applies only to commercial organisations certified under the EU-US Data Privacy Framework
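
    As a quick illustration, the table above could drive a first-pass transfer check in code; the list is abbreviated, and entries with scope caveats (Canada, the US) still need case-by-case review:

      // Illustrative pre-check against the adequacy list above (abbreviated).
      const ADEQUATE_DESTINATIONS = new Set([
        'Andorra', 'Argentina', 'Faroe Islands', 'Guernsey', 'Isle of Man',
        'Israel', 'Japan', 'Jersey', 'New Zealand', 'Republic of Korea',
        'Switzerland', 'United Kingdom',
      ]);

      function transferAllowed(destination: string, hasSafeguards: boolean): boolean {
        // Allowed if the destination holds an adequacy decision, or if
        // appropriate safeguards (SCCs, BCRs) are in place.
        return ADEQUATE_DESTINATIONS.has(destination) || hasSafeguards;
      }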

    Major fines (like Meta’s €1.2 billion) have already been levied for unlawful data transfers. In addition, third-party service providers and data processors charged with handling EU data must also be GDPR-compliant. 

    If personal data is processed by a third party outside the EEA, organisations must verify that contractual safeguards comply with GDPR Article 28. These processor management safeguards cover:

    • Contractual – Defines what the processor is permitted to do with personal data
    • Security – Specifies technical and organisational safeguards to protect data
    • Breach notifications – Requires processors to report breaches in a timely manner
    • Sub-processor oversight – Grants approval rights over any sub-processors
    • End-of-service handling – Ensures return or proper disposal of personal data at contract end
    • Audit rights – Allows controllers to audit processor compliance if needed

    Step 10: Record of Processing Activities (ROPA)

    GDPR obliges both data controllers and data processors to maintain a Record of Processing Activities (ROPA). This processing register details how and why personal data is processed, and it must include the following:

    • Name and contact details of the organisation (and of its DPO, if applicable)
    • Processing purposes (marketing, HR, customer service, etc.)
    • Data categories (names, emails, financial data, etc.)
    • Data subject categories (customers, employees)
    • Transfers outside the EEA (legal basis, safeguards like SCCs, etc.)
    • Retention periods for each data category
    • Security measures (encryption, access controls, etc.).

    For data controllers, the ROPA must also include the names and details of any people who receive personal data, such as services or processors. The register should also map the flow of data through the organisation (and any third parties), which is needed for audits or analysing a data breach.
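
    A sketch of how one register entry might be structured; the field names are our own, not mandated by the regulation:

      // Sketch of a single ROPA entry. Field names are illustrative.
      interface RopaEntry {
        controller: string;               // name and contact details
        dpo?: string;                     // if applicable
        purpose: string;                  // e.g. 'marketing', 'HR', 'customer service'
        dataCategories: string[];         // e.g. ['names', 'emails', 'financial data']
        dataSubjectCategories: string[];  // e.g. ['customers', 'employees']
        recipients: string[];             // services or processors receiving the data
        transfersOutsideEEA?: { destination: string; safeguard: string };
        retentionPeriod: string;          // per data category
        securityMeasures: string[];       // e.g. ['encryption', 'access controls']
      }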

    An effective ROPA depends on strong data governance. Clearly defined processes, ongoing training, and regular reviews are necessary to keep internal policies aligned with how personal data is actually handled in practice.

    Maintaining a ROPA also supports GDPR's accountability principle: organisations must be able to show compliance, not just claim it. Documented policies, audits, and training records provide the evidence needed to demonstrate this.

    Step 11: Data subject rights management

    Organisations that collect, store, analyse, or process the personal data of EEA data subjects must regularly advise customers of their rights under GDPR. In particular, they must remind data subjects of their right to submit a Data Subject Access Request (DSAR) and respond promptly to DSARs from individuals requesting access to their personal data.

    Among other things, EEA data subjects may request:

    • Confirmation that their data is being processed
    • A copy of their data
    • Information about how and why their data is being processed
    • The purposes of processing
    • Categories of personal data involved
    • Recipients or categories of recipients who receive the data
    • Data retention periods or criteria used to determine them
    • The data source (if not collected directly from the individual)

    DSARs can be refused if they’re manifestly unfounded or excessive or if providing the data would adversely affect the rights of others. But it’s advisable to use that as a last resort.
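
    A highly simplified sketch of assembling a DSAR response from the items above; the types and the lookup function are placeholders, not a real system's API:

      // Highly simplified DSAR response assembly. All names are placeholders.
      declare function findDataForSubject(id: string): Record<string, unknown> | undefined;

      interface DsarResponse {
        processingConfirmed: boolean;
        dataCopy: Record<string, unknown>;  // a copy of their data
        purposes: string[];                 // why it is processed
        dataCategories: string[];           // what is involved
        recipients: string[];               // who receives it
        retention: string;                  // how long it is kept
        source: string;                     // where it came from, if not the subject
      }

      function buildDsarResponse(subjectId: string): DsarResponse {
        const data = findDataForSubject(subjectId);
        return {
          processingConfirmed: data !== undefined,
          dataCopy: data ?? {},
          purposes: ['customer service'],
          dataCategories: ['contact details'],
          recipients: ['email service provider'],
          retention: '2 years after last activity',
          source: 'provided directly by the data subject',
        };
      }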

    GDPR compliance in practice

    GDPR compliance isn't automatic — not even with privacy-focused tools like Matomo or reconfigured platforms like Google Analytics 4.

    Regardless of which analytics solution you use, data protection laws like GDPR and the ePrivacy Directive require organisations to:

    • Track users only when it is lawful to do so, and with valid user consent when required.
    • Configure privacy settings to comply with the GDPR.
    • Only collect data that is proportionate, transparent, and serves a legitimate, disclosed purpose.

    Even the best tools can fail if they aren’t used properly. That’s why governance, intentional setup, and consistent consent management are necessary parts of compliance.
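
    For example, Matomo's JavaScript tracker exposes consent controls along these lines; treat this as a minimal sketch and check the Matomo documentation for the full, current API:

      // Minimal sketch of consent-gated tracking with Matomo's JS tracker.
      declare var _paq: unknown[][];

      _paq.push(['requireConsent']);     // do not track until consent is given

      function onUserAcceptsTracking(): void {
        _paq.push(['setConsentGiven']);  // start tracking for this visitor
      }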

    Matomo offers secure, privacy-focused GDPR analytics. It includes a built-in GDPR Manager and privacy centre to fine-tune your privacy settings.

    To get started with Matomo, you can sign up for a 21-day free trial — no credit card required. 

  • How to use Behavioural Analytics to Improve Website Performance

    20 September 2021, by Ben Erskine — Analytics Tips, Plugins, Heatmap

    User behavioural analytics (UBA) give your business unique insights into your customers. 

    Where traditional website metrics track what actions are completed or how many visitors you have, user behaviour shows the driving factors behind those actions. UBA tools such as website heatmap software provide an easy-to-read visualisation of this data. 

    Ultimately, user behaviour analysis improves website performance and conversions by boosting customer engagement, optimising positive customer experiences, and focusing on the most important part of your sales: the people who are actually buying from you.

    What is user behaviour analytics?

    User behaviour analytics (UBA) is data that shows how customers and website visitors interact with your brand online. 

    UBA is tracked using tools such as heatmaps, session recordings and data visualisation software. 

    Where traditional web analytics track metrics such as page views and bounce rates, behavioural analytics provide an even more in-depth picture of your website or funnel success. 

    For example, UBA tracks actions like:

    • How far users are scrolling down the page 
    • Which CTAs and copy they are focusing on (or not focusing on)
    • Which design elements, links or buttons they are interacting with 
    • What is happening in between each action

    Tracking user behaviour metrics helps keep visitors on your website longer because it reveals where customers may be confused or unclear, so you can fix the problem.

    What's the difference between data and behavioural analytics?

    There are a few key differences between data and behavioural analytics. While data analytics are beneficial to improving website performance, using UBA creates a more customer-centric approach to funnel building. 

    The biggest difference between data and behavioural analytics? Metric data shows which actions are happening. Behavioural analytics show you WHY they are happening.

    For example, data can show you that a customer bounced or clicked away. Behaviour analytics show you that the page took a long time to load, that they tried to click a link several times, and that they then got frustrated and clicked away.

    Key differences between data analytics and behavioural analytics:

    • What is happening versus what is driving it 
    • Track an action (e.g. click-through) versus tracking inaction (e.g. hover without clicking) 
    • Measuring completion of an action versus the flow of actions to complete action 
    • Source of traffic versus individual actions 
    • What happens when someone takes an action versus what happens in between taking action 

    Matomo heatmaps offer both website analytics and user behaviour for a comprehensive analysis.

    Why do behavioural analytics help improve website performance?

    User behaviour is important because it doesn’t matter how many website visitors you have if they don’t convert. 

    If you have a lot of traffic on mobile devices but a low CTR, heatmaps show you what is causing the low conversions. Perhaps there is a button that isn't optimised for mobile scrolling, or a pop-up that covers important copy.

    Analysing the driving factors behind each decision means that you can increase sign-ups and conversions without losing money on website traffic that never actually buys. 


    How do heatmap tools show website user behaviour analytics?

    Heatmap tools provide a visual representation of user behaviour. 

    There are several key ways that heatmap tracking can improve website performance and therefore your overall conversions.

    Firstly, heatmaps show where to optimise website structure. They use real visitor experiences to indicate whether customers have to scroll to reach important content, whether important messages are being missed, and whether CTAs are clear.

    Secondly, heatmaps provide always-on UX and usability testing for your website, identifying user frustrations and optimising their experience over time.

    They also show valuable user experience insights for A/B versions of a landing page. Not only will you see the raw conversion data, but you will also understand why one page converts more than another.

    Ultimately, heatmaps increase ROI on marketing by optimising the traffic that you are sending to your website.


    5 ways heatmaps and user behaviour analytics improve website performance and conversions

    #1. Improve customer experience

    One of the most important uses for UBA is to improve your customer experience. 

    Imagine you had a physical store. If there was something blocking customers from getting to the counter you could easily see and fix the problem. 

    It is just as important for an online store to find and fix these “roadblocks”. 

    Not only does it reduce friction in the sales funnel and make it easy for customers to buy from you, it improves their overall experience. And when 86% of buyers are willing to pay more for a great customer experience, UBA should be one of your number one priorities for growing your bottom line. 

    #2. Improve customer engagement

    Customer engagement is any interaction between a customer/product user and your business. 

    User behaviour analytics increase engagement at each customer journey touch point. 

    Using data from heatmaps will improve customer engagement because it gives you insights into how you can make your website more user friendly. This reduces friction and increases customer loyalty by making sure customers:

    • See important content 
    • Are not distracted by unnecessary elements 
    • Can easily access information or pages no matter what device they are using 
    • Are clicking on important page elements that take them further through the customer journey 

    For example, say a customer is on a sales page. A heatmap might show that pop ups or design elements like links to another page are pulling their attention away from the primary focus (i.e. the sales copy). 

    #3. Focus on a customer-centric approach

    A customer-centric approach means putting your customers at the centre of everything that you do. There is a lot of competition for your customers' hard-earned dollars, so you need to stand out. A good product or service is not enough on its own anymore.

    User behaviour analytics are at the heart of customer-centric strategies. Instead of guessing how customers interact with your online presence, tools like heatmaps give insight into exactly what customers need. 

    This, matched with an effective customer feedback strategy, gives a holistic and effective approach to improving your customer experiences.

    #4. Capture customer data across multiple channels

    Most customers won’t convert on their very first visit to a website. They might interact with your business across many channels and research your product multiple times before purchasing. 

    Multi Channel Conversion Attribution, also known as Cross Channel Attribution, lets you assign a value to each visit prior to a conversion or prior to a sale. By applying different attribution models, you get a better view on which channels actually lead to a conversion.

    User behaviour analytics like the multi channel conversion attribution that Matomo offers can show you exactly where you should focus your money to acquire new customers. 

    #5. Track and measure business objectives

    User behaviour analytics like heatmaps can show you whether you are actually hitting your targets. 

    Setting goals helps track your website performance against business objectives. 

    These include objectives such as lead generation, online sales and increased brand exposure. Matomo has a specific function for tracking goals and measuring analytics.
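
    With Matomo's JavaScript tracker, for instance, a goal conversion can also be recorded manually along these lines; the goal ID and revenue value below are placeholders:

      // Sketch: manually recording a Matomo goal conversion.
      declare var _paq: unknown[][];

      _paq.push(['trackGoal', 1]);         // goal with ID 1
      _paq.push(['trackGoal', 1, 49.99]);  // same goal, with a revenue value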

    Using a combination of UBA and data metrics will produce the most effective conversions. 

    For example, a customer reaching the payment confirmation page is a common objective to measure conversions. However, it is only tracked if they actually complete the action. Measuring on-page customer activity with heatmaps shows why they do or do not convert so you can fix issues. 

    Final thoughts on user behaviour analytics 

    User behavioural analytics (UBA) provide a unique and in-depth insight into your customers and their needs. Unlike traditional data metrics that track completed actions, UBA like heatmaps show you what happens in between each action and help fix any critical issues. 

    Heatmaps are your secret weapon to improving website performance while staying customer-centric!

    Want to know how heatmap analytics increase conversions and improve customer experience without spending more on traffic or marketing? Check out some of the other in-depth guides below.

    The Ultimate Guide to Heatmap Software

    10 Proven Ways Heatmap Software Improves Website Conversions

    Heatmap Video

    Session Recording Video

  • Developing A Shader-Based Video Codec

    22 June 2013, by Multimedia Mike — Outlandish Brainstorms

    Early last month, this thing called ORBX.js was in the news. It ostensibly has something to do with streaming video and codec technology, which naturally catches my interest. The hype was kicked off by Mozilla honcho Brendan Eich when he posted an article asserting that HD video decoding could be entirely performed in JavaScript. We’ve seen this kind of thing before using Broadway– an H.264 decoder implemented entirely in JS. But that exposes some very obvious limitations (notably CPU usage).

    But this new video codec promises 1080p HD playback directly in JavaScript, which is a lofty claim. How could it possibly do this? I got the impression that performance was achieved using WebGL, an extension which allows JavaScript access to accelerated 3D graphics hardware. Browsing through the conversations surrounding the ORBX.js announcement, I found this confirmation from Eich himself:

    You’re right that WebGL does heavy lifting.

    As of this writing, ORBX.js remains some kind of private tech demo. If there were a public demo available, it would necessarily be easy to reverse engineer the downloadable JavaScript decoder.

    But the announcement was enough to make me wonder how it could be possible to create a video codec which effectively leverages 3D hardware.

    Prior Art
    In theorizing about this, it continually occurs to me that I can’t possibly be the first person to attempt to do this (or the ORBX.js people, for that matter). In googling on the matter, I found various forums and Q&A posts where people asked if it were possible to, e.g., accelerate JPEG decoding and presentation using 3D hardware, with no answers. I also found a blog post which describes a plan to use 3D hardware to accelerate VP8 video decoding. It was a project done under the banner of Google’s Summer of Code in 2011, though I’m not sure which open source group mentored the effort. The project did not end up producing the shader-based VP8 codec originally chartered but mentions that “The ‘client side’ of the VP8 VDPAU implementation is working and is currently being reviewed by the libvdpau maintainers.” I’m not sure what that means. Perhaps it includes modifications to the public API that supports VP8, but is waiting for the underlying hardware to actually implement VP8 decoding blocks in hardware.

    What’s So Hard About This ?
    Video decoding is a computationally intensive task. GPUs are known to be really awesome at chewing through computationally intensive tasks. So why aren't GPUs a natural fit for decoding video codecs?

    Generally, it boils down to parallelism, or the lack of opportunities thereof. GPUs are really good at doing the exact same operations over lots of data at once. The problem is that decoding compressed video usually requires multiple phases that cannot be parallelized, and the individual phases often cannot be parallelized. In strictly mathematical terms, a compressed data stream will need to be decoded by applying a function f over each data element, x_0 .. x_n. However, the function relies on having applied itself to the previous data element, i.e.:

    f(x_n) = f(f(x_{n-1}))

    What happens when you try to parallelize such an algorithm? Temporal rifts in the space/time continuum, if you're in a Star Trek episode. If you're in the real world, you'll get incorrect, unusable data as the parallel computation is seeded with a bunch of invalid data at multiple points (which is illustrated in some of the pictures in the aforementioned blog post about accelerated VP8).
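
    To see the dependency in code, here is the shape of such a serial decode loop; each output needs the previous one, so the iterations cannot be split across GPU threads (a toy example, not any real codec):

      // Toy illustration of a serially-dependent decode loop. Each output
      // depends on the previous output, so iterations cannot run in parallel.
      function decodeSerial(deltas: number[]): number[] {
        const out: number[] = [];
        let prev = 0;
        for (const d of deltas) {
          prev = prev + d; // f depends on the previous result
          out.push(prev);
        }
        return out;
      }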

    Example: JPEG
    Let's take a very general look at the various stages involved in decoding the ubiquitous JPEG format:


    High level JPEG decoding flow

    What are the opportunities to parallelize these various phases?

    • Huffman decoding (run length decoding and zig-zag reordering is assumed to be rolled into this phase): not many opportunities for parallelizing the various Huffman formats out there, including this one. Decoding most Huffman streams is necessarily a sequential operation. I once hypothesized that it would be possible to engineer a codec to achieve some parallelism during the entropy decoding phase, and later found that On2's VP8 codec employs such a scheme. However, such a scheme is unlikely to break down to the fine-grained level of parallelism that WebGL would require.
    • Reverse DC prediction: JPEG — and many other codecs — doesn't store full DC coefficients. It stores differences between successive DC coefficients. Reversing this process can't be parallelized. See the discussion in the previous section.
    • Dequantize coefficients: This could be heavily parallelized (see the sketch after this list). It should be noted that software decoders often don't dequantize all coefficients. Many coefficients are 0, and it's a waste of a multiplication operation to dequantize them. Thus, this phase is sometimes rolled into the Huffman decoding phase.
    • Invert discrete cosine transform: This seems like it could be highly parallelizable. I will be exploring this further in this post.
    • Convert YUV -> RGB for final display: This is a well-established use case for 3D acceleration.
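
    Contrast the serial dependency above with dequantization: every coefficient can be handled independently, which is exactly the kind of element-wise map GPUs are built for (toy sketch):

      // Toy sketch: dequantization is a pure element-wise operation, so
      // every coefficient could in principle go to a separate GPU thread.
      function dequantize(coeffs: Int16Array, quant: Int16Array): Float32Array {
        const out = new Float32Array(coeffs.length);
        for (let i = 0; i < coeffs.length; i++) {
          out[i] = coeffs[i] * quant[i % 64]; // no dependence between iterations
        }
        return out;
      }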

    Crash Course in 3D Shaders and Humility
    So I wanted to see if I could accelerate some parts of JPEG decoding using something called shaders. I made an effort to understand 3D programming and its associated math throughout the 1990s but 3D technology left me behind a very long time ago while I got mixed up in this multimedia stuff. So I plowed through a few books concerning WebGL (thanks to my new Safari Books Online subscription). After I learned enough about WebGL/JS to be dangerous and just enough about shader programming to be absolutely lethal, I set out to try my hand at optimizing IDCT using shaders.

    Here's my extremely high-level (and probably hopelessly naive) view of the modern GPU shader programming model:


    Basic WebGL rendering pipeline

    The WebGL program written in JavaScript drives the show. It sends a set of vertices into the WebGL system, and each vertex is processed through a vertex shader. Then, each pixel that falls within a set of vertices is sent through a fragment shader to compute the final pixel attributes (R, G, B, and alpha value). Another consideration is textures: data that the program uploads to GPU memory and that the shaders can access programmatically.
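
    In code, that pipeline boils down to something like the following bare sketch, which draws a single triangle (error handling omitted):

      // Bare sketch of the pipeline described above: vertices pass through a
      // vertex shader; covered pixels pass through a fragment shader.
      const canvas = document.createElement('canvas');
      const gl = canvas.getContext('webgl') as WebGLRenderingContext;

      function compile(type: number, src: string): WebGLShader {
        const s = gl.createShader(type) as WebGLShader;
        gl.shaderSource(s, src);
        gl.compileShader(s);
        return s;
      }

      const program = gl.createProgram() as WebGLProgram;
      gl.attachShader(program, compile(gl.VERTEX_SHADER,
        'attribute vec2 pos; void main() { gl_Position = vec4(pos, 0.0, 1.0); }'));
      gl.attachShader(program, compile(gl.FRAGMENT_SHADER,
        'precision mediump float; void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }'));
      gl.linkProgram(program);
      gl.useProgram(program);

      // Upload one triangle's vertices and draw it.
      gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
      gl.bufferData(gl.ARRAY_BUFFER,
        new Float32Array([0, 1, -1, -1, 1, -1]), gl.STATIC_DRAW);
      const loc = gl.getAttribLocation(program, 'pos');
      gl.enableVertexAttribArray(loc);
      gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
      gl.drawArrays(gl.TRIANGLES, 0, 3);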

    These shaders (vertex and fragment) are key to the GPU's programmability. How are they programmed? Using a special C-like shading language. Thought I: "C-like language? I know C! I should be able to master this in short order!" So I charged forward with my assumptions and proceeded to get smacked down repeatedly by the overall programming paradigm. I came to recognize this as a variation of the scientific method: develop a hypothesis – in my case, a mental model of how the system works; develop an experiment (short program) to prove or disprove the model; realize something fundamental that I was overlooking; formulate a new hypothesis and repeat.

    First Approach: Vertex Workhorse
    My first pitch goes like this:

    • Upload DCT coefficients to GPU memory in the form of textures
    • Program a vertex mesh that encapsulates 16×16 macroblocks
    • Distribute the IDCT effort among multiple vertex shaders
    • Pass transformed Y, U, and V blocks to fragment shader which will convert the samples to RGB

    So the idea is that decoding of 16×16 macroblocks is parallelized. A macroblock embodies 6 blocks:


    JPEG macroblocks

    It would be nice to process one of these 6 blocks in each vertex. But that means drawing a square with 6 vertices. How do you do that? I eventually realized that drawing a square with 6 vertices is the recommended method for drawing a square on 3D hardware. Using 2 triangles, each with 3 vertices (0, 1, 2; 3, 4, 5):


    2 triangles make a square

    A vertex shader knows which (x, y) coordinates it has been assigned, so it could figure out which sections of coefficients it needs to access within the textures. But how would a vertex shader know which of the 6 blocks it should process? Solution: misappropriate the vertex's z coordinate. It's not used for anything else in this case.
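
    Concretely, the vertex data for one square might look like this sketch, with z smuggling the block index along:

      // A square as 2 triangles (6 vertices). Each vertex's unused z
      // coordinate carries the index (0..5) of the block that this
      // vertex shader invocation should transform.
      const vertices = new Float32Array([
        // x,  y,  z = block index
        -1, -1,  0,
         1, -1,  1,
        -1,  1,  2,
         1, -1,  3,
         1,  1,  4,
        -1,  1,  5,
      ]);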

    So I set all of that up. Then I hit a new roadblock: how to get the reconstructed Y, U, and V samples transported to the fragment shader? I have found that communicating between shaders is quite difficult. Texture memory? WebGL doesn't allow shaders to write back to texture memory; shaders can only read it. The standard way to communicate data from a vertex shader to a fragment shader is to declare variables as "varying". Up until this point, I knew about varying variables, but there was something I didn't quite understand about them and it nagged at me: if 3 different executions of a vertex shader set 3 different values to a varying variable, what value is passed to the fragment shader?

    It turns out that the varying variable varies, which means that the GPU passes interpolated values to each fragment shader invocation. This completely destroys this idea.
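
    In GLSL terms (shown here as strings), the problem looks like this: the fragment shader receives a blend of the three vertices' values, never any single one:

      // Illustrative GLSL: a varying is interpolated across the triangle,
      // so the fragment shader never sees one vertex's exact value.
      const vertexShader = `
        attribute vec3 pos;
        varying float blockValue;  // written per vertex...
        void main() {
          blockValue = pos.z;      // e.g. the block index from above
          gl_Position = vec4(pos.xy, 0.0, 1.0);
        }`;

      const fragmentShader = `
        precision mediump float;
        varying float blockValue;  // ...but arrives here interpolated
        void main() {
          gl_FragColor = vec4(vec3(blockValue / 5.0), 1.0);
        }`;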

    Second Idea: Vertex Workhorse, Take 2
    The revised pitch is to work around the interpolation issue by having each vertex shader invocation perform all 6 block transforms. That seems like a lot of redundant work. However, I figured out that I can draw a square with only 4 vertices by arranging them in an 'N' pattern and asking WebGL to draw a TRIANGLE_STRIP instead of TRIANGLES. Now it's only doing 4x the work, not 6x. GPUs are supposed to be great at this type of work, so it shouldn't matter, right?

    I wired up an experiment and then ran into a new problem: while I was able to transform a block (or at least pretend to), and load up a varying array (that wouldn't vary since all vertex shaders wrote the same values) to transmit to the fragment shader, the fragment shader can't access specific values within the varying block. To clarify, a WebGL shader can use a constant value — or a value that can be evaluated as a constant at compile time — to index into arrays; a WebGL shader cannot compute an index into an array. Per my reading, this is a WebGL security consideration, and the limitation may not be present in other OpenGL(-ES) implementations.
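
    Here is the distinction in shader code; the first lookup is fine, the second is what WebGL rejects (illustrative snippet):

      // Illustrative GLSL: constant vs. computed array indexing.
      const indexingDemo = `
        precision mediump float;
        varying float samples[8];
        uniform int which;
        void main() {
          float a = samples[3];         // OK: index known at compile time
          // float b = samples[which];  // rejected: index computed at runtime
          gl_FragColor = vec4(vec3(a), 1.0);
        }`;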

    Not Giving Up Yet: Choking The Fragment Shader
    You might want to be sitting down for this pitch:

    • Vertex shader only interpolates texture coordinates to transmit to fragment shader
    • Fragment shader performs IDCT for a single Y sample, U sample, and V sample
    • Fragment shader converts YUV -> RGB

    Seems straightforward enough. However, that step concerning IDCT for Y, U, and V entails a gargantuan number of operations. When computing the IDCT for an entire block of samples, it's possible to leverage a lot of redundancy in the math, which equates to far fewer overall operations. If you absolutely have to compute each sample individually, for an 8×8 block, that requires 64 multiplication/accumulation (MAC) operations per sample. For 3 color planes, and including a few extra multiplications involved in the RGB conversion, that tallies up to about 200 MACs per pixel. Then there's the fact that this approach means 4x redundant operations on the color planes.

    It’s crazy, but I just want to see if it can be done. My approach is to pre-compute a pile of IDCT constants in the JavaScript and transmit them to the fragment shader via uniform variables. For a first order optimization, the IDCT constants are formatted as 4-element vectors. This allows computing 16 dot products rather than 64 individual multiplication/addition operations. Ideally, GPU hardware executes the dot products faster (and there is also the possibility of lining these calculations up as matrices).
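
    The flavour of that optimization, sketched as GLSL: each sample accumulates 16 four-element dot products against precomputed constants passed in as uniforms (names are my own, not the experiment's actual code):

      // Illustrative GLSL: one sample's IDCT as 16 vec4 dot products
      // against constants precomputed in JavaScript, instead of 64
      // separate multiply/accumulate operations.
      const idctFragment = `
        precision mediump float;
        uniform vec4 idctConsts[16];  // precomputed on the JS side
        uniform vec4 coeffs[16];      // 64 DCT coefficients, packed 4-wide
        void main() {
          float acc = 0.0;
          for (int i = 0; i < 16; i++) {
            acc += dot(coeffs[i], idctConsts[i]); // 4 MACs per dot()
          }
          gl_FragColor = vec4(vec3(acc), 1.0);
        }`;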

    I can report that I actually got a sample correctly transformed using this approach. Just one sample, though. Then I ran into some new problems:

    Problem #1: Computing sample #1 vs. sample #0 requires a different table of 64 IDCT constants. Okay, so create a long table of 64 * 64 IDCT constants. However, this suffers from the same problem as seen in the previous approach: I can't dynamically compute the index into this array. What's the alternative? Maintain 64 separate named arrays and implement 64 branches, when branching of any kind is ill-advised in shader programming to begin with? I started to go down this path until I ran into…

    Problem #2: Shaders can only be so large. 64 * 64 floats (4 bytes each) requires 16 kbytes of data, and this well exceeds the amount of shader storage that I can assume is allowed. That brings this path of exploration to a screeching halt.

    Further Brainstorming
    I suppose I could forgo pre-computing the constants and directly compute the IDCT for each sample which would entail lots more multiplications as well as 128 cosine calculations per sample (384 considering all 3 color planes). I’m a little stuck with the transform idea right now. Maybe there are some other transforms I could try.

    Another idea would be vector quantization. What little ORBX.js literature is available indicates that there is a method to allow real-time streaming but that it requires GPU assistance to yield enough horsepower to make it feasible. When I think of such severe asymmetry between compression and decompression, my mind drifts towards VQ algorithms. As I come to understand the benefits and limitations of GPU acceleration, I think I can envision a way that something similar to SVQ1, with its copious, hierarchical vector tables stored as textures, could be implemented using shaders.

    So far, this all pertains to intra-coded video frames. What about opportunities for inter-coded frames? The only approach that I can envision here is to use WebGL's readPixels() function to fetch the rasterized frame out of the GPU, and then upload it again as a new texture which a new frame processing pipeline could reference. Whether this idea is plausible would require some profiling.
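
    That round trip would look roughly like this sketch; whether it is fast enough is exactly what profiling would have to answer:

      // Sketch of the GPU -> CPU -> GPU round trip for reference frames:
      // read the rasterized frame back, then re-upload it as a texture.
      function roundTripFrame(gl: WebGLRenderingContext, w: number, h: number): WebGLTexture {
        const pixels = new Uint8Array(w * h * 4);
        gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels); // GPU -> CPU

        const tex = gl.createTexture() as WebGLTexture;
        gl.bindTexture(gl.TEXTURE_2D, tex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0,
                      gl.RGBA, gl.UNSIGNED_BYTE, pixels);             // CPU -> GPU
        return tex;
      }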

    Using interframes in such a manner seems to imply that the entire codec would need to operate in RGB space and not YUV.

    Conclusions
    The people behind ORBX.js have apparently figured out a way to create a shader-based video codec. I have yet to even begin to reason out a plausible approach. However, I’m glad I did this exercise since I have finally broken through my ignorance regarding modern GPU shader programming. It’s nice to have a topic like multimedia that allows me a jumping-off point to explore other areas.