Advanced search

Media (2)

Keyword: - Tags -/documentation

Other articles (31)

  • Customise by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
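
    The two extra steps described in this excerpt, probing the technical information of the audio and video streams and extracting a thumbnail frame, can be sketched with the standard ffprobe and ffmpeg command-line tools. This is only an illustrative sketch under assumptions, not SPIPMotion's actual implementation, and the file names are placeholders.

# A hypothetical sketch, not SPIPMotion's actual code: probe the source file's
# audio/video streams with ffprobe, then extract a single frame with ffmpeg
# to use as a thumbnail. File names below are placeholders.
import json
import subprocess

def probe_streams(source):
    """Return the technical metadata of every stream in the source file."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-of", "json", source],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["streams"]

def extract_thumbnail(source, target, at_seconds=1.0):
    """Save one frame of the source video as an image file."""
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(at_seconds), "-i", source,
         "-frames:v", "1", target],
        check=True,
    )

# Example usage (placeholder file names):
# streams = probe_streams("source.mp4")
# extract_thumbnail("source.mp4", "thumbnail.jpg")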

On other sites (4458)

  • Trying to get the current FPS and Frametime value into Matplotlib title

    16 June 2022, by TiSoBr

    I'm trying to turn an exported CSV of benchmark logs into an animated graph. That works so far, but I can't get the titles on top of both plots to animate with the current FPS and frametime (in ms) values.

    


    That's the output I'm getting. It looks like it simply stores all the values in the title instead of updating them?

    


    Screengrab of the CLI output
    Screengrab of the final output (inverted)

    


    from __future__ import division
import sys, getopt
import time
import matplotlib
import numpy as np
import subprocess
import math
import re
import argparse
import os
import glob

import matplotlib.animation as animation
import matplotlib.pyplot as plt


def check_pos(arg):
    ivalue = int(arg)
    if ivalue <= 0:
        raise argparse.ArgumentTypeError("%s Not a valid positive integer value" % arg)
    return True
    
def moving_average(x, w):
    return np.convolve(x, np.ones(w), 'valid') / w
    

parser = argparse.ArgumentParser(
    description = "Example Usage python frame_scan.py -i mangohud -c '#fff' -o mymov",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("-i", "--input", help = "Input data set from mangohud", required = True, nargs='+', type=argparse.FileType('r'), default=sys.stdin)
parser.add_argument("-o", "--output", help = "Output file name", required = True, type=str, default = "")
parser.add_argument("-r", "--framerate", help = "Set the desired framerate", required = False, type=float, default = 60)
parser.add_argument("-c", "--colors", help = "Colors for the line graphs; must be in quotes", required = True, type=str, nargs='+', default = 60)
parser.add_argument("--fpslength", help = "Configures how long the data will be shown on the FPS graph", required = False, type=float, default = 5)
parser.add_argument("--fpsthickness", help = "Changes the line width for the FPS graph", required = False, type=float, default = 3)
parser.add_argument("--frametimelength", help = "Configures how long the data will be shown on the frametime graph", required = False, type=float, default = 2.5)
parser.add_argument("--frametimethickness", help = "Changes the line width for the frametime graph", required = False, type=float, default = 1.5)
parser.add_argument("--graphcolor", help = "Changes all of the line colors on the graph; expects hex value", required = False, default = '#FFF')
parser.add_argument("--graphthicknes", help = "Changes the line width of the graph", required = False, type=float, default = 1)
parser.add_argument("-ts","--textsize", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 23)
parser.add_argument("-fsM","--fpsmax", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 180)
parser.add_argument("-fsm","--fpsmin", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 0)
parser.add_argument("-fss","--fpsstep", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 30)
parser.add_argument("-ftM","--frametimemax", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 50)
parser.add_argument("-ftm","--frametimemin", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 0)
parser.add_argument("-fts","--frametimestep", help = "Changes the the size of numbers marking the ticks", required = False, type=float, default = 10)

arg = parser.parse_args()
status = False


if arg.input:
    status = True
if arg.output:
    status = True
if arg.framerate:
    status = check_pos(arg.framerate)
if arg.fpslength:
    status = check_pos(arg.fpslength)
if arg.fpsthickness:
    status = check_pos(arg.fpsthickness)
if arg.frametimelength:
    status = check_pos(arg.frametimelength)
if arg.frametimethickness:
    status = check_pos(arg.frametimethickness)
if arg.colors:
    # One color is required per input file
    if len(arg.input) == len(arg.colors):
        for i in arg.colors:
            if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", i):
                status = True
            else:
                print('{} isn\'t a valid hex value!'.format(i))
                status = False
    else:
        print('You must have the same amount of colors as files in input!')
        status = False
if arg.graphcolor:
    if re.match(r"^#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})$", arg.graphcolor):
        status = True
    else:
        print('{} isn\'t a valid hex value!'.format(arg.graphcolor))
        status = False
if arg.graphthicknes:
    status = check_pos(arg.graphthicknes)
if arg.textsize:
    status = check_pos(arg.textsize)
if not status:
    print("For a list of arguments try -h or --help") 
    exit()


# Empty output folder
files = glob.glob('/output/*')
for f in files:
    os.remove(f)


# We need to know the longest recording out of all inputs so we know when to stop the video
longest_data = 0

# Format the raw data into a list of tuples (fps, frame time in ms, time from start in micro seconds)
# The first three lines of our data are setup so we ignore them
data_formated = []
for li, i in enumerate(arg.input):
    t = 0
    sublist = []
    for line in i.readlines()[3:]:
        x = line[:-1].split(',')
        fps = float(x[0])
        frametime = int(x[1])/1000 # convert from microseconds to milliseconds
        elapsed = int(x[11])/1000 # convert from nanosecond to microseconds
        data = (fps, frametime, elapsed)
        sublist.append(data)
    # Compare the last entry of each list with the longest recording so far
    if sublist[-1][2] >= longest_data:
        longest_data = sublist[-1][2]
    data_formated.append(sublist)


max_blocksize = max(arg.fpslength, arg.frametimelength) * arg.framerate
blockSize = arg.framerate * arg.fpslength


# Get step time in microseconds
step = (1/arg.framerate) * 1000000 # 1000000 is one second in microseconds
frame_size_fps = (arg.fpslength * arg.framerate) * step
frame_size_frametime = (arg.frametimelength * arg.framerate) * step


# Total frames will have to be updated for more than one source
total_frames = int(int(longest_data) / step)


if True: # Gonna be honest, this only exists so I can collapse this block of code

    # Sets up our figures to be next to each other (horizontally) and with a ratio 3:1 to each other
    fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={'width_ratios': [3, 1]})

    # Size of whole output 1920x360 1080/3=360
    fig.set_size_inches(19.20, 3.6)

    # Make the background transparent
    fig.patch.set_alpha(0)


    # Loop through all active axes; saves a lot of lines in ax1.do_thing(x) ax2.do_thing(x)
    for axes in fig.axes:

        # Set all splines to the same color and width
        for loc, spine in axes.spines.items():
            axes.spines[loc].set_color(arg.graphcolor)
            axes.spines[loc].set_linewidth(arg.graphthicknes)

        # Make sure we don't render any data points as this will be our background
        axes.set_xlim(-(max_blocksize * step), 0)
        

        # Make both plots transparent as well as the background
        axes.patch.set_alpha(.5)
        axes.patch.set_color('#020202')

        # Change the Y axis info to be on the right side
        axes.yaxis.set_label_position("right")
        axes.yaxis.tick_right()

        # Add the white lines across the graphs; the location of the lines are based off set_{}ticks
        axes.grid(alpha=.8, b=True, which='both', axis='y', color=arg.graphcolor, linewidth=arg.graphthicknes)

        # Remove X axis info
        axes.set_xticks([])

    # Add a another Y axis so ticks are on both sides
    tmp_ax1 = ax1.secondary_yaxis("left")
    tmp_ax2 = ax2.secondary_yaxis("left")

    # Set both to the same values
    ax1.set_yticks(np.arange(arg.fpsmin, arg.fpsmax + 1, step=arg.fpsstep))
    ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))
    tmp_ax1.set_yticks(np.arange(arg.fpsmin , arg.fpsmax + 1, step=arg.fpsstep))
    tmp_ax2.set_yticks(np.arange(arg.frametimemin, arg.frametimemax + 1, step=arg.frametimestep))

    # Change the "ticks" to be white and correct size also change font size
    ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=16, labelsize=arg.textsize, labelcolor=arg.graphcolor)
    tmp_ax1.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0) # Label size of 0 disables the fps/frame numbers
    tmp_ax2.tick_params(axis='y', color=arg.graphcolor ,width=arg.graphthicknes, length=8, labelsize=0)


    # Limits Y scale
    ax1.set_ylim(arg.fpsmin,arg.fpsmax + 1)
    ax2.set_ylim(arg.frametimemin,arg.frametimemax + 1)

    # Add an empty plot
    line = ax1.plot([], lw=arg.fpsthickness)
    line2 = ax2.plot([], lw=arg.frametimethickness)

    # Sets all the data for our benchmark
    for benchmarks, color in zip(data_formated, arg.colors):
        y = moving_average([x[0] for x in benchmarks], 25)
        y2 = [x[1] for x in benchmarks]
        x = [x[2] for x in benchmarks]
        line += ax1.plot(x[12:-12],y, c=color, lw=arg.fpsthickness)
        line2 += ax2.step(x,y2, c=color, lw=arg.frametimethickness)
    
    # Add titles with values
    ax1.set_title("Avg. frames per second: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')
    ax2.set_title("Frametime in ms: {}".format(y2), color=arg.graphcolor, fontsize=20, fontweight='bold', loc='left')  

    # Removes unwanted white space; also controls the space between the two graphs
    plt.tight_layout(pad=0, h_pad=0, w_pad=2.5)
    
    fig.canvas.draw()

    # Cache the background
    axbackground = fig.canvas.copy_from_bbox(ax1.bbox)
    ax2background = fig.canvas.copy_from_bbox(ax2.bbox)


# Create an ffmpeg instance as a subprocess; we will pipe each finished frame into ffmpeg,
# encoded as Apple QuickTime (qtrle) for small(ish) file size and alpha support.
# There are free and open-source codecs that will also do this, but with much larger file sizes.
canvas_width, canvas_height = fig.canvas.get_width_height()
outf = '{}.mov'.format(arg.output)
cmdstring = ('ffmpeg',
                '-stats', '-hide_banner', '-loglevel', 'error', # Makes ffmpeg less noisy (suppresses most console output)
                '-y', '-r', '60', # set the fps of the video
                '-s', '%dx%d' % (canvas_width, canvas_height), # size of image string
                '-pix_fmt', 'argb', # format can't be changed since this is what `fig.canvas.tostring_argb()` outputs
                '-f', 'rawvideo',  '-i', '-', # tell ffmpeg to expect raw video from the pipe
                '-vcodec', 'qtrle', outf) # output encoding must support alpha channel
pipe = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

def render_frame(frame : int):

    # Set the bounds of the graph for each frame to render the correct data
    start = (frame * step) - frame_size_fps
    end = start + frame_size_fps
    ax1.set_xlim(start,end)
     
     
    start = (frame * step) - frame_size_frametime
    end = start + frame_size_frametime
    ax2.set_xlim(start,end)
    

    # Restore background
    fig.canvas.restore_region(axbackground)
    fig.canvas.restore_region(ax2background)

    # Redraw just the lines; only points within the current `set_xlim` range are drawn
    for i in line:
        ax1.draw_artist(i)
        
    for i in line2:
        ax2.draw_artist(i)

    # Fill in the axes rectangle
    fig.canvas.blit(ax1.bbox)
    fig.canvas.blit(ax2.bbox)
    
    fig.canvas.flush_events()

    # Converts the finished frame to ARGB
    string = fig.canvas.tostring_argb()
    return string




#import multiprocessing
#p = multiprocessing.Pool()
#for i, _ in enumerate(p.imap(render_frame, range(0, int(total_frames + max_blocksize))), 20):
#    pipe.stdin.write(_)
#    sys.stderr.write('\rdone {0:%}'.format(i/(total_frames + max_blocksize)))
#p.close()

# Single-threaded; not much slower than multi-threading
if __name__ == "__main__":
    for i, frame in enumerate(range(0, int(total_frames + max_blocksize))):
        # Render each frame once and pipe it straight to ffmpeg
        pipe.stdin.write(render_frame(frame))
        sys.stderr.write('\rdone {0:%}'.format(i / (total_frames + max_blocksize)))
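
    The technique the question is after, a title or label that shows the current value on every frame, can be sketched with a small, self-contained example. This is only an illustration under assumptions, not the asker's exact fix: the readout is placed inside the axes with ax.text() so it is covered by the cached background and the blitted region, and set_text() is called with the current value on each frame before draw_artist(), instead of formatting the whole list into set_title() once.

# A minimal, self-contained sketch (not the script above): update a text
# artist inside the axes on every frame so the current value is redrawn
# together with the line while blitting.
import numpy as np
import matplotlib.pyplot as plt

plt.ion()  # interactive mode so the window updates live if one is shown

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 200)
(line,) = ax.plot(x, np.sin(x))

# Keep the readout inside the axes so it is covered by the cached/blitted region
label = ax.text(0.02, 0.95, "", transform=ax.transAxes, va="top")

fig.canvas.draw()
background = fig.canvas.copy_from_bbox(ax.bbox)

for frame in range(120):
    y = np.sin(x + frame / 10)
    line.set_ydata(y)
    # set_text() replaces the previous value instead of accumulating a whole list
    label.set_text("current value: {:.2f}".format(y[-1]))
    fig.canvas.restore_region(background)
    ax.draw_artist(line)
    ax.draw_artist(label)
    fig.canvas.blit(ax.bbox)
    fig.canvas.flush_events()

    Applied to the script above, the same idea would mean updating such an in-axes text artist (or the titles) inside render_frame() with the value for the current data window, rather than formatting the whole y2 list into set_title() once before the loop.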


    


  • Google Analytics Privacy Issues: Is It Really That Bad?

    2 June 2022, by Erin

    If you find yourself asking: “What’s the deal with Google Analytics privacy?”, you probably have some second thoughts.

    Your hunch is right. Google Analytics (GA) is a popular web analytics tool, but it’s far from being perfect when it comes to respecting users’ privacy. 

    This post helps you understand the significant Google Analytics privacy concerns that users, consumers and regulators have expressed over the years.


    What Does Google Analytics Collect About Users?

    To understand Google Analytics privacy issues, you need to know how Google treats web users’ data.

    By default, Google Analytics collects the following information:

    • Session statistics — duration, page(s) viewed, etc. 
    • Referring website details — a link you came through or keyword used. 
    • Approximate geolocation — country, city. 
    • Browser and device information — mobile vs desktop, OS usage, etc. 

    Google obtains web analytics data about users via two means: an on-site Google Analytics tracking code and cookies.

    A cookie is a unique identifier (ID) assigned to each user visiting a web property. Each cookie stores two data items: a unique user ID and the website name.

    With the help of cookies, web analytics solutions can recognise returning visitors and track their actions across the website(s), as the short sketch after the list below illustrates.

    First-party vs third-party cookies
    • First-party cookies are generated by one website and collect user behaviour data from that website only.
    • Third-party cookies are generated by a third-party website object (for example, an ad) and can track user behaviour data across multiple websites. 
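
    A toy sketch of this mechanism is shown below. It is an assumption-laden illustration, not Google Analytics' actual implementation: a server hands the browser a random visitor ID in a first-party cookie on the first visit and recognises the same browser when the cookie is sent back later. The cookie name visitor_id and the handler function are hypothetical.

# A toy sketch (not Google Analytics' actual mechanism) of a first-party
# analytics cookie: issue a random visitor ID once, then recognise the
# same browser on later requests by reading that ID back.
import uuid
from http.cookies import SimpleCookie

def handle_request(cookie_header):
    """Return (visitor_id, set_cookie_header or None) for one incoming request."""
    cookies = SimpleCookie(cookie_header or "")
    if "visitor_id" in cookies:
        # Returning visitor: the stored ID links this visit to earlier ones
        return cookies["visitor_id"].value, None
    # New visitor: mint an ID and ask the browser to store it for this site only
    visitor_id = str(uuid.uuid4())
    set_cookie = "visitor_id={}; Path=/; Max-Age=63072000; SameSite=Lax".format(visitor_id)
    return visitor_id, set_cookie

# First visit: no Cookie header, so a Set-Cookie header is issued
vid, set_cookie = handle_request(None)
# Later visit: the browser sends the cookie back and is recognised
vid_again, _ = handle_request("visitor_id=" + vid)
assert vid == vid_again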

    As it’s easy to imagine, third-party cookies are a goldmine for companies selling online ads. Essentially, they allow ad platforms to continue watching how the user navigates the web after clicking a certain link. 

    Yet, people have little clue as to which data they are sharing and how it is being used. Also, user consent to tracking across websites is only marginally guaranteed by existing Google Analytics controls. 

    Why Third-Party Cookie Data Collection By GA Is Problematic 

    Cookies can transmit personally identifiable information (PII) such as name, login details, IP address, saved payment method and so on. Some of these details can end up with advertisers without consumers’ direct knowledge or consent.

    Regulatory frameworks such as General Data Protection Regulation (GDPR) in Europe and California Consumer Privacy Act (CCPA) emerged as a response to uncontrolled user behaviour tracking.

    Under regulatory pressure, Big Tech companies had to adapt their data collection process.

    Apple was the first to implement by-default third-party cookie blocking in the Safari browser. It then added a tracking consent mechanism for iPhone users, starting from iOS 15.2.

    Google, too, said it would drop third-party cookie usage after The European Commission and UK’s Competition and Markets Authority (CMA) launched antitrust investigations into its activity. 

    To shake off the data watchdogs, Google released a Privacy Sandbox — a set of progressive tech, operational and compliance changes for ensuring greater consumer privacy. 

    Google’s biggest promise: deprecate third-party cookie usage for all web and mobile products.

    Originally, Google promised to drop third-party cookies by 2022, but that didn’t happen. Instead, Google delayed cookie tracking deprecation in Chrome until the second half of 2023.

    Why did they push back on this despite hefty fines from regulators?

    Because online ads make Google a lot of money.

    In 2021, Alphabet Inc. (Google’s parent company) made $256.7 billion in revenue, of which $209.49 billion came from selling advertising.

    Lax Google Analytics privacy enforcement — and its wide usage by website owners — help Google make those billions from collecting and selling user data. 

    How Google Uses Collected Google Analytics Data for Advertising 

    Over 28 million websites (or roughly 85% of the Internet) have Google Analytics tracking codes installed. 

    Even if one day we get a Google Analytics version without cookies, it still won’t address all the privacy concerns regulators and consumers have. 

    Over the years, Google has accumulated an extensive collection of user data. The company’s engineers used it to build state-of-the-art deep learning models, now employed to build advanced user profiles. 

    Deep learning is the process of training a machine to recognise data patterns. This “knowledge” is then used to produce highly accurate predictive insights. The more data you have for model training, the better its future accuracy will be.

    Google has amassed huge deposits of data from its collection of products — GA, YouTube, Gmail, Google Docs and Google Maps among others. Now it is using this data to build an alternative, third-party-cookie-free mechanism for modelling people’s preferences, habits, lifestyles, etc.

    Their latest model is called Google Topics. 

    This comes only after Google’s failed attempt to replace cookie-based training with the Federated Learning of Cohorts (FLoC) model, which didn’t offer enough user transparency and control, among other issues.

    Google Topics
    Source: Google Blog

    Google Topics promises to limit the granularity of data advertisers get about users. 

    But it’s still a web user surveillance method. With Google Topics, the company will continue collecting user data via Chrome (and likely other Google products) — and share it with advertisers. 

    Because, as we said before: Google is in the business of profiting off consumers’ data.

    Two Major Ways Google Takes Advantage of Customer Data

    Every bit of data Google collects across its ecosystem of products can be used in two ways:

    • For ad targeting and personalisation 
    • To improve Google’s products 

    The latter also helps the former. 

    Advanced Ad Personalisation and Targeting

    GA provides the company with ample data on users’ 

    • Recent and frequent searches 
    • Location history
    • Visited websites
    • Used apps 
    • Videos and ads viewed 
    • Personal data like age or gender 

    The company’s privacy policy explicitly states that:

    Google Analytics Privacy Policy
    Source: Google

    Google also admits to using collected data to “measure the effectiveness of advertising” and “personalise content and ads you see on Google.” 

    But there are no further elaborations on how exactly customers’ data is used — and what you can do to prevent it from being shared with third parties. 

    In some cases, Google also “forgets” to inform users about its in-product tracking.

    Journalists from CNBC and The New York Times independently concluded that Google monitors users’ Gmail activity. In particular, the company scans your inbox for recent purchases, trips, flights and bills notifications. 

    While Google says that this information isn’t sold to advertisers (directly), they still may use the “saved information about your orders in other Google services”. 

    Once again, this means you have little control or knowledge of subsequent data usage. 

    Improving Product Usability 

    Google has many “arms” to collect different data points — from user’s search history to frequently-travelled physical routes. 

    They also reserve the right to use these insights for improving existing products. 

    Here’s what it means: by combining different types of data points obtained from various products, Google can piece together a detailed picture of a person’s life. Even if such user profile data is anonymised, it is still alarmingly accurate.

    Douglas Schmidt, a computer science researcher at Vanderbilt University, summarised the matter well:

    “[Google’s] business model is to collect as much data about you as possible and cross-correlate it so they can try to link your online persona with your offline persona. This tracking is just absolutely essential to their business. ‘Surveillance capitalism’ is a perfect phrase for it.”

    Google’s Data Collection Obsession Is Baked Into Its Business Model

    OK, but doesn’t Google offer some privacy controls to users? Yes. Google only sees and uses the information you voluntarily enter or permit it to access.

    But as the Washington Post correspondent points out:

    “[Big Tech] companies get to set all the rules, as long as they run those rules by consumers in convoluted terms of service that even those capable of decoding the legalistic language rarely bother to read. Other mechanisms for notice and consent, such as opt-outs and opt-ins, create similar problems. Control for the consumer is mostly an illusion.”

    Google openly claims to be “one of many ad networks that personalise ads based on your activity online”. 

    The wrinkle is that they have more data than all other advertising networks (arguably combined). This helps Google sell high-precision targeting and contextually personalised ads for billions of dollars annually.

    Given that Google has stakes in so many products, it’s really hard to de-Google your business and minimise tracking and data collection by the company.

    They are also creating a monopoly on data collection and ownership, which concerns regulators. The European Commission’s 2021 antitrust investigation states:

    “The formal investigation will notably examine whether Google is distorting competition by restricting access by third parties to user data for advertising purposes on websites and apps while reserving such data for its own use.”

    In other words: by using consumer data to its unfair advantage, Google allegedly shuts out competition.

    But that’s not the only matter worrying regulators and consumers alike. Over the years, Google also received numerous other lawsuits for breaching people’s privacy, over and over again. 


    Separately, Google has a very complex history with GDPR compliance.

    How Google Analytics Contributes to the Web Privacy Problem 

    Google Analytics is the key puzzle piece that supports Google’s data-driven business model. 

    If Google were to release a privacy-focused Google Analytics alternative, it’d lose access to valuable web user data and a big portion of its digital ad revenue.

    Remember: Google collects more data than it shares with web analytics users and advertisers. It keeps a lot of it for its own use — and keeps looking for ways to share this intel with advertisers (in a way that keeps regulators off its tail).

    For Google Analytics to become truly ethical and privacy-focused, Google would need to change its entire revenue model — which is something it is unlikely to do.

    Where does this leave Google Analytics users?

    In slippery territory. By proxy, companies using GA are complicit in Google’s shady data collection and usage practices. They become part of the problem.

    In fact, Google Analytics usage exposes a business to two types of risk:

    • Reputational. 77% of global consumers say that transparency around how data is collected and used is important to them when interacting with brands. That’s why data breaches and data misuse by brands lead to major public outrage on social media and, in some cases, boycotts.
    • Legal. EU regulators are on a continuous crusade against Google Analytics 4 (GA4) as it is in breach of GDPR. French and Austrian watchdogs have ruled the “service” illegal. Since Google Analytics is not GDPR compliant, it exposes any business using it to lawsuits (which is already happening).

    But there’s a way out.

    Choose a Privacy-Friendly Google Analytics Alternative 

    Google Analytics is a popular web analytics service, but not the only one available. You have alternatives such as Matomo. 

    Our guiding principle is respecting privacy.

    Unlike Google Analytics, we leave data ownership 100% in users’ hands. Matomo lets you implement privacy-centred controls for user data collection.

    Plus, you can self-host Matomo On-Premise or choose Matomo Cloud with data securely stored in the EU and in compliance with GDPR.

    The best part? You can try our ethical alternative to Google Analytics for free. No credit card required! Start your free 21-day trial now.

  • Google Analytics 4 and GDPR: Everything You Need to Know

    17 May 2022, by Erin

    Four years have passed since the European General Data Protection Regulation (GDPR, also known as DSGVO in German, and RGPD in French) took effect.

    That’s ample time to get compliant, especially for an organisation as big and innovative as Google. Or is it?

    If you are wondering how GDPR affects Google Analytics 4 and what the compliance status is at present, here’s the lowdown. 

    Is Google Analytics 4 GDPR Compliant?

    No. As of mid-2022, Google Analytics 4 (GA4) isn’t fully GDPR compliant. Despite adding extra privacy-focused features, GA4 still has a murky status with European regulators. After the invalidation of the Privacy Shield framework in 2020, Google has yet to put a compliant mechanism for EU-US data transfers in place. At present, the company doesn’t sufficiently protect EU citizens’ and residents’ data against US surveillance laws. This is a direct breach of GDPR.

    Google Analytics and GDPR: a Complex Relationship

    European regulators have scrutinised Google since GDPR came into effect in 2018.

    While the company took steps to prepare for GDPR provisions, it didn’t fully comply with important regulations around user data storage, transfer and security.

    The relationship between Google and EU regulators got more heated after the Court of Justice of the European Union (CJEU) invalidated the Privacy Shield — the legal basis Google relied on for EU-US data transfers. After 2020, GDPR litigation against Google followed.

    This post summarises the main milestones in this story and explains the consequences for Google Analytics users. 

    Google Analytics and GDPR Timeline

    2018: Google Analytics Meets GDPR

    In 2018, the EU adopted the General Data Protection Regulation (GDPR) — a set of privacy and data security laws, covering all member states. Every business interacting with EU citizens and/or residents had to comply.

    GDPR harmonised data protection laws across member states and laid down extra provisions for what constitutes sensitive personal information (PII). Broadly, PII includes any data about a person’s:

    • Racial or ethnic origin 
    • Employment status 
    • Religious or political beliefs
    • State of health 
    • Genetic or biometric data 
    • Financial records (such as payment method data)
    • Address and phone numbers 

    Businesses were barred from collecting this information without explicit consent (and even with it in some cases). If collected, such sensitive information is also subject to strict requirements on how it should be stored, secured, transferred and used. 

    7 Main GDPR Principles Explained 

    Article 5 of the GDPR lays out seven main principles for personal data and privacy protection:

    • Lawfulness, fairness and transparency — data must be obtained legally, collected with consent and in adherence to laws. 
    • Purpose limitation — all personal information must be collected for specified, explicit and legal purposes. 
    • Data minimisation — companies must collect only necessary and adequate data, aligned with the stated purpose. 
    • Accuracy — data accuracy must be ensured at all times. Companies must have mechanisms to erase or correct inaccurate data without delays. 
    • Storage limitation — data must be stored only for as long as the stated purpose suggests. Though there’s no upper time limit on data storage. 
    • Integrity and confidentiality (security) — companies must take measures to ensure secure data storage and prevent unlawful or unauthorised access to it. 
    • Accountability — companies must be able to demonstrate adherence to the above principles. 

    Google claimed to have taken steps to make all of their products GDPR compliant ahead of the deadline. But in practice, this wasn’t always the case.

    In March 2018, a group of publishers admonished Google for not providing them with enough tools for GDPR compliance:

    “[Y]ou refuse to provide publishers with any specific information about how you will collect, share and use the data. Placing the full burden of obtaining new consent on the publisher is untenable without providing the publisher with the specific information needed to provide sufficient transparency or to obtain the requisite specific, granular and informed consent under the GDPR.”

    The proposed Google Analytics GDPR consent form was hard to implement and lacked customisation options. In fact, Google “makes unilateral decisions” on how the collected data is stored and used. 

    Users had no way to learn about or control all intended uses of people’s data — which made compliance with the second clause impossible. 

    Unsurprisingly, Google was among the first companies to face a GDPR lawsuit (together with Facebook). 

    By 2019, the French data regulator CNIL had successfully argued that Google wasn’t sufficiently disclosing its data collection across products — and was hence in breach of GDPR. After a failed appeal, Google had to pay a €50 million fine and promise to do better.

    2019: Google Analytics 4 Announcement

    Throughout 2019, Google rightfully attempted to resolve some of its GDPR shortcomings across all products, Google Universal Analytics (UA) included. 

    They added a more visible consent mechanism for online tracking and provided extra compliance tips for users to follow. In the background, Google also made tech changes to its data processing mechanism to get on the good side of regulations.

    Though Google addressed some of the issues, it missed others. A 2019 independent investigation found that Google’s real-time bidding (RTB) ad auctions still used EU citizens’ and residents’ data without consent, thanks to a loophole called “Push Pages”. But Google managed to quickly patch this up before the allegations made it to court.

    In November 2019, Google released a beta version of its new product — Google Analytics 4, due to replace Universal Analytics.

    GA4 came with a set of new privacy-focused features for ticking GDPR boxes, such as:

    • Data deletion mechanism. Users can now request to surgically extract certain data from the Analytics servers via a new interface.
    • Shorter data retention period. You can now shorten the retention period to 2 months (instead of the default 14 months) or set a custom limit.
    • IP anonymisation. GA4 doesn’t log or store IP addresses by default.

    Google Analytics also updated its data processing terms and made changes to its privacy policy.

    Though Google made some progress, Google Analytics 4 still has many limitations — and isn’t GDPR compliant. 

    2020: Privacy Shield Invalidation Ruling

    As part of the 2018 GDPR preparations, Google named its Irish entity (Google Ireland Limited) as the “data controller” legally responsible for EEA and Swiss users’ information. 

    The company announcement says:

    Google Analytics Statement on Privacy Shield Invalidation Ruling
    Source: Google

    Initially, Google assumed that this legal change would help them ensure GDPR compliance as “legally speaking” a European entity was set in charge of European data. 

    Practically, however, EEA consumers’ data was still primarily transferred to and processed in the US — where most Google data centres are located. Until 2020, such cross-border data transfers were considered legal thanks to the Privacy Shield framework.

    But in July 2020, The EU Court of Justice ruled that this framework doesn’t provide adequate data protection to digitally transmitted data against US surveillance laws. Hence, companies like Google can no longer use it. The Swiss Federal Data Protection and Information Commissioner (FDPIC) reached the same conclusion in September 2020. 

    The invalidation of the Privacy Shield framework put Google in a tough position.

    Article 14.f of the GDPR explicitly states:

    “The controller (the company) that intends to carry out a transfer of personal data to a recipient (Analytics solution) in a third country or an international organisation must provide its users with information on the place of processing and storage of its data”.

    Invalidation of the Privacy Shield framework prohibited Google from moving data to the US. At the same time, GDPR provisions mandated that they must disclose proper data location. 

    But Google Analytics (like many other products) had no mechanism for:

    • Guaranteeing intra-EU data storage 
    • Selecting a designated regional storage location 
    • Informing users about data storage location or data transfers outside of the EU 

    These factors put Google Analytics in direct breach of GDPR — territory where it remains as of 2022.

    2020-2022: Google GDPR Breaches and Fines

    The 2020 ruling opened Google to GDPR lawsuits from country-specific data regulators.

    Google Analytics in particular came under heavy fire.

    • Sweden first fined Google for violating GDPR by not fulfilling its obligations regarding data delisting requests in 2020.
    • France rejected Google Analytics 4’s IP address anonymisation function as a sufficient measure for protecting cross-border data transfers. Even with it, US intelligence services can still access user IPs and other PII. France declared Google Analytics illegal and imposed a €150 million fine.
    • Austria also found Google Analytics GDPR non-compliant and proclaimed the service as “illegal”. The authority now seeks a fine too. 

    The Dutch Data Protection Authority and the Norwegian Data Protection Authority also found Google Analytics guilty of a GDPR breach and seek to limit Google Analytics usage.

    New privacy controls in Google Analytics 4 do not resolve the underlying issue — unregulated, non-consensual EU-US data transfer. 

    Google Analytics GDPR non-compliance effectively opens any website tracking or analysing European visitors to legal persecution.

    In fact, this is already happening. noyb, a European privacy-focused NGO, has already filed over 100 lawsuits against European websites using Google Analytics.

    2022: Privacy Shield 2.0 Negotiations

    Google isn’t the only US company affected by the Privacy Shield framework invalidation. The ruling puts thousands of digital companies at risk of non-compliance.

    To settle the matter, US and EU authorities started “peace talks” in spring 2022.

    European Commission President Ursula von der Leyen said that they are working with the Biden administration on the new agreement that will “enable predictable and trustworthy data flows between the EU and US, safeguarding the privacy and civil liberties.” 

    However, this is just the beginning of a lengthy negotiation process. The matter is far from settled and contentious issues remain, as we discussed on Twitter (come say hi!).

    For one, the US isn’t eager to modify its surveillance laws and is mostly willing to make them “proportional” to those in place in the EU. These modifications may still not satisfy the CJEU — which has the power to block the agreement’s approval or invalidate it once again.

    While these matters are getting hashed out, Google Analytics users, collecting data about EU citizens and/or residents, remain on slippery grounds. As long as they use GA4, they can be subject to GDPR-related lawsuits. 

    To Sum It Up 

    • Google Analytics 4 and Google Universal Analytics are not GDPR compliant because of Privacy Shield invalidation in 2020. 
    • French and Austrian data watchdogs named Google Analytics operations “illegal”. Swedish, Dutch and Norwegian authorities also claim it’s in breach of GDPR. 
    • Any website using GA for collecting data about European citizens and/or residents can be taken to court for GDPR violations (which is already happening). 
    • Privacy Shield 2.0 Framework discussions to regulate EU-US data transfers have only begun and may take years. Even if accepted, the new framework(s) may once again be invalidated by local data regulators as has already happened in the past. 

    Time to Get a GDPR Compliant Google Analytics Alternative 

    Retaining 100% data ownership is the optimal path to GDPR compliance.

    By selecting a transparent web analytics solution that offers 100% data ownership, you can rest assured that no “behind the scenes” data collection, processing or transfers take place. 

    Unlike Google Analytics 4, Matomo offers all of the features you need to be GDPR compliant:

    • Full data anonymisation 
    • Single-purpose data usage 
    • Easy consent and an opt-out mechanism 
    • First-party cookies usage by default 
    • Simple access to collected data
    • Fast data removals 
    • EU-based data storage for Matomo Cloud (or storage in the country of your choice with Matomo On-Premise)

    Learn about your audiences in a privacy-centred way and protect your business against unnecessary legal exposure. 

    Start your 21-day free trial (no credit card required) to see how fully GDPR-compliant website analytics works!