Other articles (98)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, MediaSPIP init automatically applies a preconfiguration so that the new feature is operational right away. No separate configuration step is therefore required for this.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8586)

  • Vedanti and Max Sound vs. Google

    14 August 2014, by Multimedia Mike — Legal/Ethical

    Vedanti Systems Limited (VSL) and Max Sound Corporation filed a lawsuit against Google recently. Ordinarily, I wouldn’t care about corporate legal battles. However, this one interests me because it’s multimedia-related. I’m curious to know how coding technology patents might hold up in a real court case.

    Here’s the most entertaining complaint in the lawsuit:

    Despite Google’s well-publicized Code of Conduct — “Don’t be Evil” — which it explains is “about doing the right thing,” “following the law,” and “acting honorably,” Google, in fact, has an established pattern of conduct which is the exact opposite of its claimed piety.

    I wonder if this is the first known case in which Google has been sued over its long-obsoleted “Don’t be evil” mantra?

    Researching The Plaintiffs

    I think I made a mistake by assuming this lawsuit might have merit. My first order of business was to see what the plaintiff organizations have produced. I have a strong feeling that these might be run-of-the-mill patent trolls.

    VSL currently has a blank web page. Further, the Wayback Machine only has pages reaching back to 2011. The earliest page lists these claims against a plain black background (I’ve highlighted some of the more boisterous claims and the passages that make it appear that Vedanti doesn’t actually produce anything but is strictly an IP organization):

    The inventions key:
    The patent and software reduced any data content, without compressing, up to a 97% total reduction of the data which also produces a lossless result. This physics based invention is often called the Holy Grail.

    Vedanti Systems Intellectual Property
    Our strategic IP portfolio is granted in all of the world’s largest technology development and use countries. A major value indemnification of our licensee products is the early date of invention filing and subsequent Issue. Vedanti IP has an intrinsic 20 year patent protection and valuation in royalties and licensing. The original data transmission art has no prior art against it.

    Vedanti Systems invented among other firsts, The Slice and Partitioning of Macroblocks within a RGB Tri level region in a frame to select or not, the pixel.

    Vedanti Systems invention is used in nearly every wireless chipset and handset in the world

    Our original pixel selection system revolutionized wireless handset communications. An example of this system “Slice” and “Macroblock Partitioning” is used throughout Satellite channel expansion, Wireless partitioning, Telecom – Video Conferencing, Surveillance Cameras, and 2010 developing Media applications.

    Vedanti Systems is a Semiconductor based software, applications, and IP Continuations Intellectual Property company.

    Let’s move on to the other plaintiff, Max Sound. They have a significantly more substantive website. They also have an Android app named Spins HD Audio, which appears to be little more than a music player based on the screenshots.

    Max Sound also has a stock ticker symbol: MAXD. Something clicked into place when I looked up their ticker symbol: while worth only a few pennies, it was worth a few more pennies after this lawsuit was announced, which might be one of the motivations behind the lawsuit.

    Here’s a trick I learned when I was looking for a new tech job last year: when I first look at a company’s website and am trying to figure out what they really do, I head straight to their jobs/careers page. A lot of corporate websites have way too much blathering corporatese that can be tough to cut through. But when I see what mix of talent and specific skills they are hoping to hire, that gives me a much better portrait of what the company does.

    The reason I bring this up is because this tech company doesn’t seem to have a jobs/careers page.

    The Lawsuit
    The core complaint centers around Patent 7974339: Optimized data transmission system and method. It was filed in July 2004 (or possibly as early as January 2002), issued in July 2011, and assigned to (purchased by?) Vedanti in May 2012. The lawsuit alleges that nearly everything Google has ever produced (or, more accurately, purchased) leverages the patented technology.

    The patent itself has 5 drawings. If you’ve ever seen a multimedia codec patent, or any whitepaper on a multimedia codec, you’ve seen these graphs before. E.g., “Raw pixels come in here -> some analysis happens here -> more analysis happens over here -> entropy coding -> final bitstream”. The text of a patent document isn’t meant to be particularly useful. I’ve tried to understand this stuff before and it never goes well. Skimming the text, I just see a blur of the words data, transmission, pixel, and matrix.

    So I read the complaint to try to figure out what this is all about. To summarize the storyline as narrated by the lawsuit, some inventors were unhappy with the state of video compression in 2001 and endeavored to create something better. So they did, and called it the VSL codec. This codec is so far undocumented on the MultimediaWiki, so it probably has yet to be seen “in the wild”. Good luck finding hard technical data on it now since searches for “VSL codec” are overwhelmed by articles about this lawsuit. Also, the original codec probably wasn’t called VSL because VSL is apparently an IP organization formed much later.

    Then, the protagonists of the lawsuit patented the codec. Then, years later, Google wanted to purchase a video codec that they could open source and use to supplant H.264.

    The complaint goes on to allege that in 2010, Google specifically contacted VSL to possibly license or acquire this mysterious VSL technology. Google was allegedly allowed to study the technology, eventually decided not to continue discussions, and shipped back the proprietary materials.

    Here’s where things get weird. When Google shipped back the materials, they allegedly shipped back a bunch of Post-It notes. The notes are alleged to contain a ton of incriminating evidence. The lawsuit claims that the notes contained such tidbits as:

    • Google was concerned that its infringement could be considered “recklessness” (the standard applicable to willful infringement);
    • Google personnel should “try” to destroy incriminating emails;
    • Google should consider a “design around” because it was facing a “risk of litigation.”

    Actually, given Google’s acquisition of On2, I can totally believe that last one (On2’s codecs have famously contained a lot of weirdness which is commonly suspected to be attributable to designing around known patents).

    Anyway, a lot of this case seems to hinge on the authenticity of these Post-It notes:

    “65. The Post-It notes are unequivocal evidence of Google’s knowledge of the ’339 Patent and infringement by Defendants”

    I wish I could find a stock photo of a stack of Post-It notes in an evidence bag.

    I’ve worked at big technology companies. Big tech companies these days are very diligent about indoctrinating employees about IP liability issues. The reason this Post-It situation strikes me as odd is because the alleged contents of the notes basically outline everything the corporate lawyers tell you NOT to do.

    Analysis
    I’m trying to determine what specific algorithms and coding techniques are alleged to be infringed. I guess I was expecting to see a specific claim that, “Our patent outlines this specific coding technique and here is unequivocal proof that Google A) uses the same technique, and B) specifically did so after looking at our patent.” I didn’t find that (well, a bit of part B, cf. the Post-It note debacle), but maybe that’s not how these patent lawsuits operate. I’ve never followed one closely before.

    Maybe it’s just a patent troll. Maybe it’s for the stock bump. I’m expecting to see pump-n-dump stock spam featuring the stock symbol MAXD anytime now.

    I’ve never been interested in following a lawsuit case carefully before. I suddenly find myself wondering if I can subscribe to the RSS feed for this case? Too much to hope for. But I found this item through Pando and maybe they’ll stay on top of it.

  • Dreamcast Anniversary Programming

    10 September 2010, by Multimedia Mike — Game Hacking

    This day last year saw a lot of nostalgia posts on the internet regarding the Sega Dreamcast, launched 10 years prior to that day (on 9/9/99). Regrettably, none of the retrospectives that I read really seemed to mention the homebrew potential, which is the aspect that interested me. On the occasion of the DC’s 11th anniversary, I wanted to remind myself how to build something for the unit and do so using modern equipment and build tools.



    Background
    Like many other programmers, I initially gained interest in programming because I desired to program video games. Not content to just plunk out games on a PC, I always had a deep, abiding ambition to program actual video game hardware. That is, I wanted to program a purpose-built video game console. The Sega Dreamcast might be the most ideal candidate to ever emerge for that task. All that was required to run your own software on the unit was the console, a PC, some free software tools, and a special connectivity measure.

    The Equipment
    Here is the hardware required (ideally) to build software for the DC:

    • The console itself (I happen to have 3 of them lying around, as pictured above)
    • Some peripherals: such as the basic DC controller, the DC keyboard (flagship title: Typing of the Dead), and the visual memory unit (VMU)


    • VGA box: The DC supported 480p gaming via a device that allowed you to connect the console straight to a VGA monitor via 15-pin D-sub. Not required for development, but very useful. I happen to have 3 of them from different third parties.


    • Finally, the connectivity measure for hooking the DC to the PC.
      There are 2 options here. The first is rare, expensive and relatively fast: a DC broadband adapter. The second is slower but much less expensive and relatively easy to come by: the DC coder’s cable. This has a DB-9 adapter on one end and a DC serial adapter on the other, with a circuit in the middle to monkey with voltage levels or some such; I’m no electrical engineer. I procured this model from the notorious Lik Sang, well before that outfit was sued out of business.


    Dealing With Legacy
    Take a look at that coder’s cable again. DB-9? When was the last time you owned a computer with one of those? And then think farther back to the last time you had occasion to plug something into one of those ports (likely a serial mouse).



    A few years ago, someone was about to toss out this Belkin USB to DB-9 serial converter when I intervened. I foresaw the day when I would dust off the coder’s cable. So now I can connect a USB serial cable to my Eee PC, which then connects via converter to a different serial cable, one which has its own conversion circuit that alters the connection to yet another type of serial cable.

    Bits is bits is bits as far as I’m concerned.



    Putting It All Together
    Now to assemble all the pieces (plus a monitor) into one development desktop:



    The monitor says “dcload 1.0.3, idle…”. That’s a custom boot CD-ROM that is patiently waiting to receive commands, code and data via the serial port.

    Getting The Software
    Back in the day, homebrew software development on the DC revolved around these components:

    • GNU binutils: for building base toolchains for the Hitachi SH-4 main CPU as well as the ARM7-based audio coprocessor
    • GNU gcc/g++: for building compilers on top of binutils for the 2 CPUs
    • Newlib: a C library intended for embedded systems
    • KallistiOS: an open source, real-time OS developed for the DC

    The DC was my first exposure to building cross compilers. I developed some software for the DC in the earlier part of the decade. Now, I am trying to figure out how I did it, especially since I think I came up with a few interesting ideas at the time.

    Struggling With the Software Legacy
    The source for KallistiOS has gone untouched since about 2004 but is still around thanks to Sourceforge. The instructions for properly building the toolchain have been lost to time, or would be were it not for the Internet Archive’s copy of a site called Hangar Eleven. Also, KallistiOS makes reference to a program called ‘dc-tool’ which is needed on the client side for communicating with dcload. I was able to find this binary at the Boob! site (well-known in DC circles).

    I was able to build the toolchain using binutils 2.20.1, gcc 4.5.1 and newlib 1.18.0. Building the toolchain is an odd process: it requires building binutils, then the C compiler, then newlib, and then the C compiler again along with the C++ compiler, because the C++ compiler depends on newlib.
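    For reference, here is a minimal sketch of that build ordering expressed as a Python script. The versions match the ones above, but the source directory names, install prefix and configure flags are illustrative assumptions on my part, not a tested recipe.

        #!/usr/bin/env python3
        """Sketch of the sh-elf cross-toolchain build order (not a tested recipe)."""
        import subprocess

        PREFIX = "/opt/dc/sh-elf"   # hypothetical install prefix
        TARGET = "sh-elf"           # bare-metal target triplet for the SH-4

        def build(src_dir, extra_args):
            """configure / make / make install inside one source tree."""
            subprocess.run(["./configure", f"--target={TARGET}", f"--prefix={PREFIX}", *extra_args],
                           cwd=src_dir, check=True)
            subprocess.run(["make"], cwd=src_dir, check=True)
            subprocess.run(["make", "install"], cwd=src_dir, check=True)

        # 1. binutils: assembler and linker for the target
        build("binutils-2.20.1", [])
        # 2. a C-only gcc, built before any C library exists
        build("gcc-4.5.1", ["--enable-languages=c", "--with-newlib", "--without-headers"])
        # 3. newlib, compiled with that minimal gcc
        build("newlib-1.18.0", [])
        # 4. gcc again, now with C++ enabled, since libstdc++ needs newlib
        build("gcc-4.5.1", ["--enable-languages=c,c++", "--with-newlib"])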

    With some effort, I got the toolchain to build KallistiOS and most of its example programs. I documented most of the tweaks I had to make, several of them exactly the same as this one that I recently discovered while resurrecting a 10-year-old C program (common construct in C programming of old?).

    Moment of Truth
    So I had some example programs built as ELF files. I told dc-tool to upload and run them on the waiting console. Unfortunately, the tool would just sort of stall, though some communication had evidently taken place. It has been many years since I have seen this in action but I recall that something more ought to be happening.

    Plan B (Hardware)
    This is the point where I remember that I have been holding onto one rather old little machine that still has a DB-9 serial port. It’s not especially ergonomic to set up. I have to run it on my floor because, to connect it to my network, I need to run a 25′ ethernet cable that just barely reaches from the other room. The machine doesn’t seem to like USB keyboards, which is a shame since I have long since ditched any PS/2 keyboards. Fortunately, the box still has an old Gentoo distro and is running sshd, a holdover from its former life as a headless box.



    Now when I run dc-tool, both the PC and DC report the upload progress while pretty overscan bars oscillate on the DC’s monitor. Now I’m back in business, until…

    Plan C (Software)
    None of these KallistiOS example programs are working. Some are even reporting catastrophic failures (register dumps) via the serial console. That’s when I remember that gcc can be a bit fickle on CPU architectures that are not, shall we say, first-class citizens. Back in the day, gcc 2.95 was a certified no-go for SH-4 development. 3.0.3 or 3.0.4 was called upon at the time. As I’m hosting this toolchain on x86_64 right now, gcc 3.0.4 can’t even be built (predates the architecture).

    One last option: as I searched through my old DC project directories, I found that I still have a lot of the resulting binaries, the ones I built 7-8 years ago. I upload a few of those and I finally see homebrew programming at work again, including this old program (described in detail here).

    Next Steps
    If I ever feel like revisiting this again, I suppose I can try some of the older 4.x series to see if they build valid programs. Alternatively, try building an x86_32-hosted 3.0.4 toolchain which ought to be a known good. And if that fails, search a little bit more to find that there are still active Dreamcast communities out there on the internet which probably have development toolchain binaries ready for download.

  • Unveiling GA4 Issues: 8 Questions from a Marketer That GA4 Can’t Answer

    8 January 2024, by Alex

    It’s hard to believe, but Universal Analytics had a lifespan of 11 years, from its announcement in March 2012. Despite occasional criticism, this service established standards for the entire web analytics industry. Many metrics and reports became benchmarks for a whole generation of marketers. It truly was an era.

    For instance, a lot of marketers got used to starting each workday by inspecting dashboards and standard traffic reports in the Universal Analytics web interface. There were so, so many of those days. They became so accustomed to Universal Analytics that they would enter reports, manipulate numbers, and play with metrics almost on autopilot, without much thought.

    However, six months have passed since the sunset of Universal Analytics – precisely on July 1, 2023, when Google stopped processing data for properties still using the previous version of Google Analytics. The days when data about visitors and their interactions with the website were more clearly structured within the UA paradigm are now in the past. GA4 has brought a plethora of opportunities to marketers, but along with those opportunities came a series of complexities.

    GA4 issues

    Since its initial announcement in 2020, GA4 has been plagued with errors and inconsistencies. It still has poor and sometimes illogical documentation, numerous restrictions, and peculiar interface decisions. But more importantly, the barrier to entry into web analytics has significantly increased.

    If you diligently follow GA4 updates, read the documentation, and possess skills in working with data (SQL and basic statistics), you probably won’t run into any problems – you know how to set up a convenient and efficient environment for your product and marketing data. But what if you’re not that proficient? That’s when issues arise.

    In this article, we try to address a series of straightforward questions that less experienced users – marketers, project managers, SEO specialists, and others – want answers to. They have no time to delve into the intricacies of GA4, but they need the fundamentals that are crucial to their work.

    Previously, in Universal Analytics, they could address these questions quickly and conveniently. Now, the situation has become, to put it mildly, more complex. We’ve identified 8 such questions for which the current version of GA4 either fails to provide answers or implies that getting an answer requires significant extra work. So, let’s dive into them one by one.

    Question 1: What are the most popular traffic sources on my website?

    Seemingly a straightforward question. What does GA4 tell us? It responds with a question: “Which traffic source parameter are you interested in?”

    GA4 traffic source

    Wait, what?

    People just want to know which sources bring them the most traffic. Is that really an issue?

    Unfortunately, yes. In GA4, there are not one, not two, but three traffic source parameters:

    1. Session source.
    2. First User Source – the source of the first session for each user.
    3. Just the source – determined at the event or conversion level.

    If you wanted to open a report and draw conclusions quickly, we have bad news for you. Before you start ranking your traffic sources by popularity, you need to do some mental work to decide which parameter to look at and in what context. And even once you decide, you need to choose between the standard reports: the User Acquisition report or the Traffic Acquisition report.

    Yes, there is a difference between them: the first uses the First User Source parameter, and the second uses the session source. And you need to figure that out too.
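    To make the distinction concrete, here is a hedged sketch of the same “top sources” question asked both ways through the GA4 Data API (the google-analytics-data Python package). The property ID is a placeholder and the call assumes credentials are already configured; the dimensions sessionSource and firstUserSource are the API counterparts of the two reports.

        from google.analytics.data_v1beta import BetaAnalyticsDataClient
        from google.analytics.data_v1beta.types import (
            DateRange, Dimension, Metric, RunReportRequest,
        )

        PROPERTY = "properties/123456789"  # placeholder GA4 property ID
        client = BetaAnalyticsDataClient()

        def top_sources(dimension: str, metric: str) -> None:
            """Run a one-dimension report and print the top rows."""
            request = RunReportRequest(
                property=PROPERTY,
                dimensions=[Dimension(name=dimension)],
                metrics=[Metric(name=metric)],
                date_ranges=[DateRange(start_date="28daysAgo", end_date="yesterday")],
                limit=10,
            )
            response = client.run_report(request)
            print(f"-- {dimension} by {metric} --")
            for row in response.rows:
                print(row.dimension_values[0].value, row.metric_values[0].value)

        # "Traffic Acquisition" view: which source started each session?
        top_sources("sessionSource", "sessions")
        # "User Acquisition" view: which source first brought each user?
        top_sources("firstUserSource", "totalUsers")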

    Question 2: What is my conversion rate?

    This question concerns everyone, and it should be simple, implying a straightforward answer. But no.

    GA4 conversion rate

    In GA4, there are three conversion metrics (yes, three):

    1. Session conversion – the percentage of sessions with a conversion.
    2. User conversion – the percentage of users who completed a conversion.
    3. First-time Purchaser Conversion – the share of active users who made their first purchase.

    Even if the last metric isn’t of much interest, GA4 users still have to choose between the remaining two. But what’s next? Which parameters should be used for comparison? Session source or user source? What if you want to see the conversion rate for a specific event? And how do you do this in explorations rather than in standard reports?

    In the end, instead of an answer to a simple question, marketers get a bunch of new questions.

    Question 3. Can I trust user and session metrics?

    Unfortunately, no. This may boggle the mind of those not well-versed in the mechanics of calculating user and session metrics, but it’s the plain truth: the numbers in GA4 and the numbers in reality can and will differ.

    GA4 confidence levels

    The reason is that GA4 uses the HyperLogLog++ statistical algorithm to count unique values. Without delving into details, it’s a mechanism for approximate estimation of a metric with a certain level of error.

    This error level is quite well-documented. For instance, for the Total Users metric, the error level is 1.63% (for a 95% confidence interval). In simple terms, this means that 100,000 users in the GA4 interface equate to 100,000 ± 1.63% in reality.
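    Put into numbers, that error band looks like this (a small illustration; the 1.63% figure is the documented precision for Total Users at a 95% confidence interval):

        reported_users = 100_000
        relative_error = 0.0163  # documented HyperLogLog++ error for Total Users (95% CI)

        low = reported_users * (1 - relative_error)
        high = reported_users * (1 + relative_error)
        print(f"{reported_users:,} reported users -> roughly {low:,.0f} to {high:,.0f} real users")
        # 100,000 reported users -> roughly 98,370 to 101,630 real users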

    Furthermore – but this is no surprise to anyone – GA4 samples data. This means that when a query spans too much data or uses a large number of parameters, the application will estimate your metrics based on a partial sample – let’s say 5, 10, or 30% of the entire population.

    It’s a reasonable trade-off, but it can (and probably will) surprise marketers – the metrics will deviate from reality. All end users can do (short of delving into raw-data methodologies) is take this error level into account in their conclusions.

    Question 4. How do I calculate First Click attribution?

    You can’t. Unfortunately, as of late, GA4 offers only three attribution models available in the Attribution tab: Last Click, Last Click For Google Ads, and Data Driven. First Click attribution is essential for understanding where and when demand is generated. In the previous version of Google Analytics (and until recently, in the current one), users could quickly apply First Click and other attribution models, compare them, and gain insights. Now, this capability is gone.

    GA4 attribution model

    Certainly, you can look at the conversion distribution considering the First User Source parameter – this serves as a rough proxy for First Click attribution. However, comparing it with other models in the Model Comparison tab won’t be possible. In the context of the GA4 interface, it makes sense to forget about non-standard attribution models.

    Question 5. How do I account for intra-session traffic?

    Intra-session traffic essentially refers to a change of traffic source within a session. Imagine a scenario where a user comes to your site organically from Google and, within a minute, comes back via an email campaign. In the previous version of Google Analytics, a new session with the traffic source “e-mail” would be created in such a case. But now, the situation has changed.

    A session now only ends in the case of a timeout – say, 30 minutes without interaction. This means a session is always attributed to the source it started with. If a user changes source within a session (clicks on an ad, arrives from an email campaign, and so on), you won’t know anything about it unless they convert. This is a significant blow to intra-session sources, since their contribution to traffic goes virtually unnoticed.

    Question 6. How can I account for users who have not consented to the use of third-party cookies?

    You can’t. Google Consent Mode settings imply several options when a user rejects the use of third-party cookies. In GA4 and BigQuery, depersonalized cookieless pings will be sent. These pings do not contain a specific client_id, session_id, or other custom dimensions. As a result, you won’t be able to count them as users or link the actions of such users together.

    Question 7. How can I compare data in explorations with the previous year?

    The maximum data retention period for a free GA4 account is 14 months. This means that if the date range is wider, you can only use standard reports. You won’t be able to compare or view cohorts or funnels for periods more than 14 months back. This makes the product less capable, because the various report formats in explorations are very convenient for comparing specific metrics in easily digestible reports.

    GA4 data retention

    Of course, you always have the option to connect BigQuery and store raw data without limitations, but this process usually requires the involvement of an advanced analyst. And precisely this option is unavailable to most marketers in small teams.

    Question 8. Is the data for yesterday accurate?

    Unknown. Google states that data processing in GA4 takes up to 48 hours. And although the process is usually faster than that, most users are still left with room for frustration. And understandably so.

    Data processing time in GA4

    What does “data processing takes 24-48 hours” mean? When will the data in reports be complete? For yesterday? Or the day before yesterday? Or for all days that were more than two days ago? Unclear. What should marketers tell their managers when they are asked whether all the data is in this report? Well, probably all of it… or maybe not… Let’s wait 48 hours…

    Undoubtedly, computational resources and time are needed for data preprocessing and aggregation. It’s okay that data for today will not be up-to-date. And probably not for yesterday either. But people just want to know when they can trust their data. Are they asking for too much: just a note that this report contains all the data sent and processed by Google Analytics?

    What should you do?

    Credit should be given to the Google team – they have done a lot to enable users to answer these questions in one form or another. For example, you can stream data to BigQuery and work with the raw data. The entry threshold for this functionality has been significantly lowered. In fact, if you are dissatisfied with the GA4 interface, you can set up an export to BigQuery and build your own reports with (almost) no restrictions.
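    As a rough illustration of what that raw-data route looks like, here is a hedged sketch that queries the standard GA4 BigQuery export with the google-cloud-bigquery Python client. The project and dataset names are placeholders; note that the traffic_source record in the export holds the first-user (acquisition) source, so a session-level breakdown would need different fields.

        from google.cloud import bigquery

        client = bigquery.Client()  # assumes credentials are already configured

        # Count distinct users by their acquisition (first-user) source over January 2024.
        query = """
        SELECT
          traffic_source.source AS first_user_source,
          COUNT(DISTINCT user_pseudo_id) AS users
        FROM `my-project.analytics_123456789.events_*`   -- placeholder project/dataset
        WHERE _TABLE_SUFFIX BETWEEN '20240101' AND '20240131'
        GROUP BY first_user_source
        ORDER BY users DESC
        LIMIT 10
        """

        for row in client.query(query).result():
            print(row.first_user_source, row.users)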

    Another strong option is the widespread availability of server-side Google Tag Manager (GTM Server Side). This allows you to quite freely modify the event model and essentially enrich each hit with various parameters, all in a first-party context. This, of course, reduces the harmful impact of most of the limitations described in this text.

    But this is not a solution.

    The users in question – marketers, managers, developers – do not want, or do not have the time, to take a deep dive into the issue. They want simple answers to seemingly simple questions. And for now, unfortunately, GA4 is more of a professional tool for analysts than a convenient instrument for generating insights for less advanced users.

    Why is this such a serious issue?

    The thing is – and this is crucial – over the past 10 years, Google has managed to create a sort of GA bubble for marketers. Many of them have become so accustomed to Google Analytics that, when faced with yet another issue, they don’t venture to explore alternative solutions but attempt to solve it on their own. And almost always, this turns out to be expensive and inconvenient.

    However, with the latest updates to GA4, it is becoming increasingly evident that this application is struggling to address even the most basic questions from users. And these questions are not fantastically complex. Much of what was described in this article is not an unsolvable mystery and is successfully addressed by other analytics services.

    Let’s try to answer some of the questions described above from the perspective of Matomo.

    Question 1: What are the most popular traffic sources? [Solved]

    In the Acquisition panel, you will find at least three easily identifiable reports – for traffic channels (All Channels), sources (Websites), and campaigns (Campaigns). 

    Channel Type Table

    With these, you can quickly and easily answer the question about the most popular traffic sources, and if needed, delve into more detailed information, such as landing pages.

    Question 2: What is my conversion rate? [Solved]

    Under Goals in Matomo, you’ll easily find the overall conversion rate for your site. Below that you’ll have access to the conversion rate of each goal you’ve set in your Matomo instance.

    Question 3: Can I trust user and session metrics? [Solved]

    Yes. With Matomo, you’re guaranteed 100% accurate data. Matomo does not apply sampling, does not employ approximate statistical algorithms, and has no analog of threshold values. Yes, that is possible, and it’s perfectly normal. If you see a number in the visits or users field, it reflects reality 100%.

    Question 4: How do I calculate First Click attribution? [Solved]

    You can do this in the Multi Attribution section, the same place where the other five attribution models available in Matomo are calculated.

    Multi Attribution feature

    You can choose a specific conversion and, in a few clicks, calculate and compare up to 3 marketing attribution models. This means you don’t have to spend several days digging through documentation trying to understand how a particular model is calculated. Have a question – get an answer.

    Question 5: How do I account for intra-session traffic? [Solved]

    Matomo creates a new visit when a user changes campaign. This means that you will accurately capture all relevant traffic as long as it is adequately tagged: no campaign will be lost within a visit, because a new utm_campaign parameter starts a new one.

    This is a crucial point, because a change of referrer alone does not create a new visit. The key lies elsewhere: accounting for all relevant traffic becomes your responsibility and depends on how well you tag it.
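    Once traffic is tagged, pulling the campaign breakdown back out is a single Reporting API call. Below is a hedged sketch using Matomo’s Referrers.getCampaigns method; the instance URL, site ID and token are placeholders.

        import requests

        MATOMO_URL = "https://analytics.example.com/index.php"  # placeholder instance URL
        params = {
            "module": "API",
            "method": "Referrers.getCampaigns",
            "idSite": 1,                 # placeholder site ID
            "period": "month",
            "date": "yesterday",
            "format": "JSON",
            "token_auth": "YOUR_TOKEN",  # placeholder auth token
        }

        # Each row is one campaign with its visit count for the period.
        for row in requests.get(MATOMO_URL, params=params, timeout=30).json():
            print(row.get("label"), row.get("nb_visits"))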

    Question 6: How can I account for users who have not consented to the use of third-party cookies? [Solved]

    Google Analytics requires users to accept a cookie consent banner with “analytics_storage=granted” to track them. If users reject cookie consent banners, however, then Google Analytics can’t track these visitors at all. They simply won’t show up in your traffic reports. 

    Matomo doesn’t require cookie consent banners (apart from in the United Kingdom and Germany) and can therefore continue to track visitors even after they have rejected a cookie consent screen. This is achieved through a config_id variable (a user-identifier equivalent that is regenerated once a day).

    Matomo doesn't need cookie consent, so you see a complete view of your traffic

    This means that virtually all of your website traffic will be tracked regardless of whether users accept a cookie consent banner or not.

    Question 7: How can I compare data in explorations with the previous year? [Solved]

    There is no limitation on data retention for your aggregated reports in Matomo. The essence of the Matomo experience lies in the reporting data, and consequently, retaining reports indefinitely is a viable option. So you can compare data for any timeframe.

    Date Comparison Selector