
Other articles (56)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are : images (png, gif, jpg, bmp and others...) ; audio (MP3, Ogg, Wav and others...) ; video (Avi, MP4, Ogv, mpg, mov, wmv and others...) ; or textual content, code and other documents (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Submitting improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
    You can use the development mailing list to announce it or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, you can also use SPIP’s SPIP-zone mailing list to (...)

On other sites (4161)

  • Google Analytics Privacy Issues : Is It Really That Bad ?

    2 June 2022, by Erin

    If you find yourself asking : “What’s the deal with Google Analytics privacy ?”, you probably have some second thoughts. 

    Your hunch is right. Google Analytics (GA) is a popular web analytics tool, but it’s far from being perfect when it comes to respecting users’ privacy. 

    This post will help you understand the serious Google Analytics privacy concerns that users, consumers and regulators have expressed over the years.

    What Does Google Analytics Collect About Users ? 

    To understand Google Analytics privacy issues, you need to know how Google treats web users’ data. 

    By default, Google Analytics collects the following information : 

    • Session statistics — duration, page(s) viewed, etc. 
    • Referring website details — a link you came through or keyword used. 
    • Approximate geolocation — country, city. 
    • Browser and device information — mobile vs desktop, OS usage, etc. 

    Google obtains web analytics data about users via two means : an on-site Google Analytics tracking code and cookies.

    A cookie is a unique identifier (ID) assigned to each user visiting a web property. Each cookie stores two data items : unique user ID and website name. 

    With the help of cookies, web analytics solutions can recognise returning visitors and track their actions across the website(s).
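
    To make the mechanism concrete, here is a rough sketch in plain browser JavaScript (not Google’s actual code ; the cookie name is made up) of what a visitor-recognition cookie boils down to :

    // Illustrative only: store a pseudo-random visitor ID in a first-party cookie
    // named "visitor_id" (a hypothetical name) and reuse it on later visits.
    function getOrCreateVisitorId() {
        var match = document.cookie.match(/(?:^|;\s*)visitor_id=([^;]+)/);
        if (match) {
            return match[1];                              // returning visitor: reuse the stored ID
        }
        var id = Math.random().toString(36).slice(2);     // new visitor: mint a fresh ID
        document.cookie = "visitor_id=" + id +
                          "; max-age=" + 60 * 60 * 24 * 365 +   // keep it for one year
                          "; path=/; SameSite=Lax";
        return id;
    }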

    First-party vs third-party cookies
    • First-party cookies are generated by one website and collect user behaviour data from that website only. 
    • Third-party cookies are generated by a third-party website object (for example, an ad) and can track user behaviour data across multiple websites. 

    As you can imagine, third-party cookies are a goldmine for companies selling online ads. Essentially, they let ad platforms keep watching how a user navigates the web after clicking a certain link. 

    Yet people have little idea of which data they are sharing or how it is being used. User consent to tracking across websites is also only marginally guaranteed by existing Google Analytics controls. 

    Why Third-Party Cookie Data Collection By GA Is Problematic 

    Cookies can transmit personally identifiable information (PII) such as names, login details, IP addresses, saved payment methods and so on. Some of these details can end up with advertisers without consumers’ direct knowledge or consent.

    Regulatory frameworks such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) emerged as a response to uncontrolled user behaviour tracking.

    Under regulatory pressure, Big Tech companies had to adapt their data collection process.

    Apple was the first to block third-party cookies by default in the Safari browser. It then added a tracking consent mechanism for iPhone users, starting from iOS 15.2. 

    Google, too, said it would drop third-party cookie usage after the European Commission and the UK’s Competition and Markets Authority (CMA) launched antitrust investigations into its activity. 

    To shake off the data watchdogs, Google released the Privacy Sandbox, a set of progressive technical, operational and compliance changes meant to ensure greater consumer privacy. 

    Google’s biggest promise : deprecating third-party cookie usage across all its web and mobile products. 

    Originally, Google promised to drop third-party cookies by 2022, but that didn’t happen. Instead, Google delayed cookie deprecation in Chrome until the second half of 2023.

    Why did they push back on this despite hefty fines from regulators ?

    Because online ads make Google a lot of money.

    In 2021, Alphabet Inc (parent company of Google), made $256.7 billion in revenue, of which $209.49 billion came from selling advertising. 

    Lax Google Analytics privacy enforcement — and its wide usage by website owners — help Google make those billions from collecting and selling user data. 

    How Google Uses Collected Google Analytics Data for Advertising 

    Over 28 million websites (or roughly 85% of the Internet) have Google Analytics tracking codes installed. 

    Even if one day we get a Google Analytics version without cookies, it still won’t address all the privacy concerns regulators and consumers have. 

    Over the years, Google has accumulated an extensive collection of user data. The company’s engineers used it to build state-of-the-art deep learning models, now employed to build advanced user profiles. 

    Deep learning is the process of training a machine to recognise patterns in data. This “knowledge” is then used to produce highly accurate predictive insights. The more data available for model training, the more accurate the model’s future predictions will be. 

    Google has amassed huge deposits of data from its collection of products — GA, YouTube, Gmail, Google Docs and Google Maps among others. Now they are using this data to build a third-party cookies-less alternative mechanism for modelling people’s preferences, habits, lifestyles, etc. 

    Their latest model is called Google Topics. 

    This comes only after Google’s failed attempt to replace cookie-based tracking with the Federated Learning of Cohorts (FLoC) model, which didn’t offer enough user transparency and control, among other issues.

    Google Topics
    Source : Google Blog

    Google Topics promises to limit the granularity of data advertisers get about users. 

    But it’s still a web user surveillance method. With Google Topics, the company will continue collecting user data via Chrome (and likely other Google products) — and share it with advertisers. 

    Because as we said before : Google is in the business of profiting off consumers’ data. 

    Two Major Ways Google Takes Advantage of Customer Data

    Every bit of data Google collects across its ecosystem of products can be used in two ways :

    • For ad targeting and personalisation 
    • To improve Google’s products 

    The latter also helps the former. 

    Advanced Ad Personalisation and Targeting

    GA provides the company with ample data on users’ 

    • Recent and frequent searches 
    • Location history
    • Visited websites
    • Used apps 
    • Videos and ads viewed 
    • Personal data like age or gender 

    The company’s privacy policy explicitly states that :

    Google Analytics Privacy Policy
    Source : Google

    Google also admits to using collected data to “measure the effectiveness of advertising” and “personalise content and ads you see on Google.” 

    But there are no further elaborations on how exactly customers’ data is used — and what you can do to prevent it from being shared with third parties. 

    In some cases, Google also “forgets” to inform users about its in-product tracking.

    Journalists from CNBC and The New York Times independently concluded that Google monitors users’ Gmail activity. In particular, the company scans your inbox for recent purchases, trips, flights and bills notifications. 

    While Google says that this information isn’t sold to advertisers (directly), they still may use the “saved information about your orders in other Google services”. 

    Once again, this means you have little control or knowledge of subsequent data usage. 

    Improving Product Usability 

    Google has many “arms” for collecting different data points, from users’ search history to frequently travelled physical routes. 

    They also reserve the right to use these insights for improving existing products. 

    Here’s what it means : by combining the different types of data points obtained from its various products, Google can piece together a detailed picture of a person’s life. Even if such user profile data is anonymised, it is still alarmingly accurate. 

    Douglas Schmidt, a computer science researcher at Vanderbilt University, summarised the matter well : 

    “[Google’s] business model is to collect as much data about you as possible and cross-correlate it so they can try to link your online persona with your offline persona. This tracking is just absolutely essential to their business. ‘Surveillance capitalism’ is a perfect phrase for it.”

    Google’s Data Collection Obsession Is Baked Into Its Business Model 

    OK, but doesn’t Google offer some privacy controls to users ? Yes. Google only sees and uses the information you voluntarily enter or permit it to access.

    But as the Washington Post correspondent points out :

    “[Big Tech] companies get to set all the rules, as long as they run those rules by consumers in convoluted terms of service that even those capable of decoding the legalistic language rarely bother to read. Other mechanisms for notice and consent, such as opt-outs and opt-ins, create similar problems. Control for the consumer is mostly an illusion.”

    Google openly claims to be “one of many ad networks that personalise ads based on your activity online”. 

    The wrinkle is that they have more data than all other advertising networks (arguably combined). This helps Google sell high-precision targeting and contextually personalised ads for billions of dollars annually.

    Given that Google has stakes in so many products, it’s really hard to de-Google your business and minimise the company’s tracking and data collection.

    Google is also building a monopoly on data collection and ownership, which concerns regulators. The European Commission’s 2021 antitrust investigation states : 

    “The formal investigation will notably examine whether Google is distorting competition by restricting access by third parties to user data for advertising purposes on websites and apps while reserving such data for its own use.”

    In other words : By using consumer data to its unfair advantage, Google allegedly shuts off competition.

    But that’s not the only matter worrying regulators and consumers alike. Over the years, Google has also faced numerous other lawsuits for repeatedly breaching people’s privacy. 

    Separately, Google has a very complex history with GDPR compliance.

    How Google Analytics Contributes to the Web Privacy Problem 

    Google Analytics is the key puzzle piece that supports Google’s data-driven business model. 

    If Google was to release a privacy-focused Google Analytics alternative, it’d lose access to valuable web users’ data and a big portion of digital ad revenues. 

    Remember : Google collects more data than it shares with web analytics users and advertisers. It keeps a lot of it for its own use, and keeps looking for ways to share this intel with advertisers (in a way that keeps regulators off its tail).

    For Google Analytics to become truly ethical and privacy-focused, Google would need to change its entire revenue model, which is something it is unlikely to do.

    Where does this leave Google Analytics users ? 

    In slippery territory. By proxy, companies using GA are complicit in Google’s shady data collection and usage practices. They become part of the problem.

    In fact, Google Analytics usage opens a business to two types of risks : 

    • Reputational. 77% of global consumers say that transparency around how data is collected and used is important to them when interacting with different brands. That’s why data breaches and data misuse by brands lead to public outrage on social media and, in some cases, boycotts. 
    • Legal. EU regulators are on a continuous crusade against Google Analytics 4 (GA4), as it is in breach of GDPR. French and Austrian watchdogs have ruled the "service" illegal. Since Google Analytics is not GDPR compliant, it exposes any business using it to lawsuits (which are already happening).

    But there’s a way out.

    Choose a Privacy-Friendly Google Analytics Alternative 

    Google Analytics is a popular web analytics service, but not the only one available. You have alternatives such as Matomo. 

    Our guiding principle is : respecting privacy.

    Unlike Google Analytics, we leave data ownership 100% in users’ hands. Matomo lets you implement privacy-centred controls for user data collection.

    Plus, you can self-host Matomo On-Premise or choose Matomo Cloud with data securely stored in the EU and in compliance with GDPR.

    The best part ? You can try our ethical alternative to Google Analytics for free. No credit card required ! Start your free 21-day trial now.

  • Google Analytics 4 and GDPR : Everything You Need to Know

    17 May 2022, by Erin

    Four years have passed since the European General Data Protection Regulation (GDPR, also known as DSGVO in German, and RGPD in French) took effect.

    That’s ample time to get compliant, especially for an organisation as big and innovative as Google. Or is it ? 

    If you are wondering how GDPR affects Google Analytics 4 and what the compliance status is at present, here’s the lowdown. 

    Is Google Analytics 4 GDPR Compliant ?

    No. As of mid-2022, Google Analytics 4 (GA4) isn’t fully GDPR compliant. Despite adding extra privacy-focused features, GA4 still has a murky status with European regulators. After the invalidation of the Privacy Shield framework in 2020, Google has yet to put a compliant arrangement for EU-US data protection in place. At present, the company doesn’t sufficiently protect EU citizens’ and residents’ data against US surveillance laws. This is a direct breach of GDPR.

    Google Analytics and GDPR : a Complex Relationship 

    European regulators have scrutinised Google since GDPR came into effect in 2018.

    While the company took steps to prepare for GDPR provisions, it didn’t fully comply with important regulations around user data storage, transfer and security.

    The relationship between Google and EU regulators got more heated after the Court of Justice of the European Union (CJEU) invalidated the Privacy Shield, the legal mechanism Google relied on for EU-US data transfers. GDPR litigation against Google followed after 2020. 

    This post summarises the main milestones in this story and explains the consequences for Google Analytics users. 

    Google Analytics and GDPR Timeline

    2018 : Google Analytics Meets GDPR 

    In 2018, the EU adopted the General Data Protection Regulation (GDPR) — a set of privacy and data security laws, covering all member states. Every business interacting with EU citizens and/or residents had to comply.

    GDPR harmonised data protection laws across member states and laid down extra provisions for what constitutes sensitive personal information (or PII). Broadly, PII includes any data about a person’s :

    • Racial or ethnic origin 
    • Employment status 
    • Religious or political beliefs
    • State of health 
    • Genetic or biometric data 
    • Financial records (such as payment method data)
    • Address and phone numbers 

    Businesses were barred from collecting this information without explicit consent (and even with it in some cases). If collected, such sensitive information is also subject to strict requirements on how it should be stored, secured, transferred and used. 

    7 Main GDPR Principles Explained 

    Article 5 of the GDPR lays out seven main GDPR principles for personal data and privacy protection : 

    • Lawfulness, fairness and transparency — data must be obtained legally, collected with consent and in adherence to laws. 
    • Purpose limitation — all personal information must be collected for specified, explicit and legal purposes. 
    • Data minimisation — companies must collect only necessary and adequate data, aligned with the stated purpose. 
    • Accuracy — data accuracy must be ensured at all times. Companies must have mechanisms to erase or correct inaccurate data without delays. 
    • Storage limitation — data must be stored only for as long as the stated purpose suggests. Though there’s no upper time limit on data storage. 
    • Integrity and confidentiality (security) — companies must take measures to ensure secure data storage and prevent unlawful or unauthorised access to it. 
    • Accountability — companies must be able to demonstrate adherence to the above principles. 

    Google claimed to have taken steps to make all of their products GDPR compliant ahead of the deadline. But in practice, this wasn’t always the case.

    In March 2018, a group of publishers admonished Google for not providing them with enough tools for GDPR compliance :

    “[Y]ou refuse to provide publishers with any specific information about how you will collect, share and use the data. Placing the full burden of obtaining new consent on the publisher is untenable without providing the publisher with the specific information needed to provide sufficient transparency or to obtain the requisite specific, granular and informed consent under the GDPR.”

    The proposed Google Analytics GDPR consent form was hard to implement and lacked customisation options. In fact, Google “makes unilateral decisions” on how the collected data is stored and used. 

    Users had no way to learn about or control all intended uses of people’s data — which made compliance with the second clause impossible. 

    Unsurprisingly, Google was among the first companies to face a GDPR lawsuit (together with Facebook). 

    By 2019, the French data regulator CNIL had successfully argued that Google wasn’t sufficiently disclosing its data collection across products and was hence in breach of GDPR. After a failed appeal, Google had to pay a €50 million fine and promise to do better. 

    2019 : Google Analytics 4 Announcement 

    Throughout 2019, Google attempted to resolve some of its GDPR shortcomings across all products, Google Universal Analytics (UA) included. 

    They added a more visible consent mechanism for online tracking and provided extra compliance tips for users to follow. In the background, Google also made tech changes to its data processing mechanism to get on the good side of regulations.

    Though Google addressed some of the issues, it missed others. A 2019 independent investigation found that Google’s real-time bidding (RTB) ad auctions still used EU citizens’ and residents’ data without consent, thanks to a loophole called “Push Pages”. Google managed to quickly patch this up before the allegations made it to court. 

    In November 2019, Google released a beta version of its new product, Google Analytics 4, due to replace Universal Analytics. 

    GA4 came with a set of new privacy-focused features for ticking GDPR boxes such as :

    • Data deletion mechanism. Users can now request to surgically extract certain data from the Analytics servers via a new interface. 
    • Shorter data retention period. The retention period can now be shortened to 2 months (instead of 14 months), or a custom limit can be set.  
    • IP Anonymisation. GA4 doesn’t log or store IP addresses by default. 

    Google Analytics also updated its data processing terms and made changes to its privacy policy.

    Though Google made some progress, Google Analytics 4 still has many limitations — and isn’t GDPR compliant. 

    2020 : Privacy Shield Invalidation Ruling 

    As part of the 2018 GDPR preparations, Google named its Irish entity (Google Ireland Limited) as the “data controller” legally responsible for EEA and Swiss users’ information. 

    The company announcement says : 

    Google Analytics Statement on Privacy Shield Invalidation Ruling
    Source : Google

    Initially, Google assumed that this legal change would help them ensure GDPR compliance as “legally speaking” a European entity was set in charge of European data. 

    Practically, however, EEA consumers’ data was still primarily transferred to and processed in the US, where most Google data centres are located. Until 2020, such cross-border data transfers were considered legal thanks to the Privacy Shield framework.

    But in July 2020, the EU Court of Justice ruled that this framework doesn’t adequately protect digitally transmitted data against US surveillance laws. Hence, companies like Google could no longer rely on it. The Swiss Federal Data Protection and Information Commissioner (FDPIC) reached the same conclusion in September 2020. 

    The invalidation of the Privacy Shield framework put Google in a tough position.

     Article 14. f of the GDPR explicitly states : 

    “The controller (the company) that intends to carry out a transfer of personal data to a recipient (Analytics solution) in a third country or an international organisation must provide its users with information on the place of processing and storage of its data”.

    Invalidation of the Privacy Shield framework prohibited Google from moving data to the US. At the same time, GDPR provisions mandated that they must disclose proper data location. 

    But Google Analytics (like many other products) had no mechanism for : 

    • Guaranteeing intra-EU data storage 
    • Selecting a designated regional storage location 
    • Informing users about data storage location or data transfers outside of the EU 

    These factors put Google Analytics in direct breach of GDPR, a territory where it remains as of 2022.

    2020-2022 : Google GDPR Breaches and Fines 

    The 2020 ruling opened Google to GDPR lawsuits from country-specific data regulators.

    Google Analytics in particular came under heavy fire. 

    • Sweden was the first to fine Google for violating GDPR, in 2020, for not fulfilling its obligations to handle data delisting requests. 
    • France rejected Google Analytics 4’s IP address anonymisation function as a sufficient measure for protecting cross-border data transfers. Even with it, US intelligence services can still access user IPs and other PII. France declared Google Analytics illegal and imposed a €150 million fine. 
    • Austria also found Google Analytics GDPR non-compliant and proclaimed the service “illegal”. The authority is now seeking a fine too. 

    The Dutch Data Protection Authority and the Norwegian Data Protection Authority have also found Google Analytics in breach of GDPR and are seeking to limit its usage. 

    New privacy controls in Google Analytics 4 do not resolve the underlying issue — unregulated, non-consensual EU-US data transfer. 

    Google Analytics’ GDPR non-compliance effectively exposes any website tracking or analysing European visitors to legal action.

    In fact, this is already happening. noyb, a European privacy-focused NGO, has already filed over 100 lawsuits against European websites using Google Analytics.

    2022 : Privacy Shield 2.0 Negotiations

    Google isn’t the only US company affected by the Privacy Shield framework invalidation. The ruling puts thousands of digital companies at risk of non-compliance.

    To settle the matter, US and EU authorities started “peace talks” in spring 2022.

    European Commission President Ursula von der Leyen said that they are working with the Biden administration on the new agreement that will “enable predictable and trustworthy data flows between the EU and US, safeguarding the privacy and civil liberties.” 

    However, it’s just the beginning of a lengthy negotiation process. The matter is far from being settled and contentious issues remain as we discussed on Twitter (come say hi !).

    For one, the US isn’t eager to modify its surveillance laws and is mostly willing to make them “proportional” to those in place in the EU. These modifications may still not satisfy the CJEU, which has the power to block the agreement during vetting or to invalidate it once again. 

    While these matters are getting hashed out, Google Analytics users, collecting data about EU citizens and/or residents, remain on slippery grounds. As long as they use GA4, they can be subject to GDPR-related lawsuits. 

    To Sum It Up 

    • Google Analytics 4 and Google Universal Analytics are not GDPR compliant because of Privacy Shield invalidation in 2020. 
    • French and Austrian data watchdogs named Google Analytics operations “illegal”. Swedish, Dutch and Norwegian authorities also claim it’s in breach of GDPR. 
    • Any website using GA for collecting data about European citizens and/or residents can be taken to court for GDPR violations (which is already happening). 
    • Privacy Shield 2.0 Framework discussions to regulate EU-US data transfers have only begun and may take years. Even if accepted, the new framework(s) may once again be invalidated by local data regulators as has already happened in the past. 

    Time to Get a GDPR Compliant Google Analytics Alternative 

    Retaining 100% data ownership is the optimal path to GDPR compliance.

    By selecting a transparent web analytics solution that offers 100% data ownership, you can rest assured that no “behind the scenes” data collection, processing or transfers take place. 

    Unlike Google Analytics 4, Matomo offers all of the features you need to be GDPR compliant : 

    • Full data anonymisation 
    • Single-purpose data usage 
    • Easy consent and an opt-out mechanism 
    • First-party cookies usage by default 
    • Simple access to collected data 
    • Fast data removals 
    • EU-based data storage for Matomo Cloud (or storage in the country of your choice with Matomo On-Premise)
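
    As a rough sketch of how such controls surface in practice, the snippet below shows a consent-aware, cookie-less configuration of Matomo’s standard JavaScript tracker. Treat it as indicative rather than definitive : the tracker URL and site ID are placeholders, and the exact method set should be checked against your Matomo version.

    var _paq = window._paq = window._paq || [];

    _paq.push(['requireConsent']);        // record nothing until the visitor consents
    _paq.push(['disableCookies']);        // run the tracker without setting any cookies
    _paq.push(['setDoNotTrack', true]);   // honour the browser's Do Not Track signal

    _paq.push(['trackPageView']);
    _paq.push(['enableLinkTracking']);

    (function() {
        var u = 'https://analytics.example.com/';   // placeholder self-hosted Matomo URL
        _paq.push(['setTrackerUrl', u + 'matomo.php']);
        _paq.push(['setSiteId', '1']);
        var d = document, g = d.createElement('script'), s = d.getElementsByTagName('script')[0];
        g.async = true;
        g.src = u + 'matomo.js';
        s.parentNode.insertBefore(g, s);
    })();

    // Later, once the visitor accepts your consent banner:
    // _paq.push(['setConsentGiven']);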

    Learn about your audiences in a privacy-centred way and protect your business against unnecessary legal exposure. 

    Start your 21-day free trial (no credit card required) to see how fully GDPR-compliant website analytics works ! 

  • Things I Have Learned About Emscripten

    1 September 2015, by Multimedia Mike — Cirrus Retro

    3 years ago, I released my Game Music Appreciation project, a website with a ludicrously uninspired title which allowed users a relatively frictionless method to experience a range of specialized music files related to old video games. However, the site required use of a special Chrome plugin. Ever since that initial release, my #1 most requested feature has been for a pure JavaScript version of the music player.

    “Impossible !” I exclaimed. “There’s no way JS could ever run fast enough to run these CPU emulators and audio synthesizers in real time, and allow for the visualization that I demand !” Well, I’m pleased to report that I have proved me wrong. I recently quietly launched a new site with what I hope is a catchier title, meant to evoke a cloud-based retro-music-as-a-service product : Cirrus Retro. Right now, it’s basically the same as the old site, but without the wonky Chrome-specific technology.

    Along the way, I’ve learned a few things about using Emscripten that I thought might be useful to share with other people who wish to embark on a similar journey. This is geared more towards someone who has a stronger low-level background (such as C/C++) vs. high-level (like JavaScript).

    General Goals
    Do you want to cross-compile an entire desktop application, one that relies on an extensive GUI toolkit ? That might be difficult (though I believe there is a path for porting Qt code directly with Emscripten). Your better wager might be to abstract out the core logic and processes of the program and then create a new web UI to access them.

    Do you want to compile a game that basically just paints stuff to a 2D canvas ? You’re in luck ! Emscripten has a porting path for SDL. Make a version of your C/C++ software that targets SDL (generally not a tall order) and then compile that with Emscripten.

    Do you just want to cross-compile some functionality that lives in a library ? That’s what I’ve done with the Cirrus Retro project. For this, plan to compile the library into a JS file that exports some public functions that other, higher-level, native JS (i.e., JS written by a human and not a computer) will invoke.
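
    For that library route, the usual pattern is to export a plain C entry point at build time and wrap it from hand-written JS. A minimal sketch (the C function name and signature here are hypothetical, not taken from Cirrus Retro) :

    // Hand-written JS talking to the Emscripten-generated module object (Module).
    // Assumes the C side defines  int render_audio(int sample_count);
    // and that it was exported when compiling.
    var renderAudio = Module.cwrap('render_audio', 'number', ['number']);

    var samplesRendered = renderAudio(4096);   // an ordinary JS call into compiled C code
    console.log('rendered ' + samplesRendered + ' samples');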

    Memory Levels
    When porting C/C++ software to JavaScript using Emscripten, you have to think on 2 different levels. Or perhaps you need to force JavaScript into a low level C lens, especially if you want to write native JS code that will interact with Emscripten-compiled code. This often means somehow allocating chunks of memory via JS and passing them to the Emscripten-compiled functions. And you wouldn’t believe the type of gymnastics you need to execute to get native JS and Emscripten-compiled JS to cooperate.

    “Emscripten : Pointers and Pointers” is the best (and, really, ONLY) explanation I could find for understanding the basic mechanics of this process, at least when I started this journey. However, there’s a mistake in the explanation that left me confused for a little while, and I’m at a loss to contact the author (doesn’t anyone post a simple email address anymore ?).

    Per the best of my understanding, Emscripten allocates a large JS array and calls that the memory space that the compiled C/C++ code is allowed to operate in. A pointer in C/C++ code will just be an index into that mighty array. Really, that’s not too far off from how a low-level program process is supposed to view memory– as a flat array.
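
    A tiny illustration (not from the original code) : a “pointer” handed back by compiled code is just a byte offset, usable directly as an index into Emscripten’s heap views.

    // ptr is a C pointer returned by compiled code, i.e. an offset into the big heap array.
    var firstByte = Module.HEAPU8[ptr];        // read one byte at that address
    var firstWord = Module.HEAP32[ptr >> 2];   // read a 32-bit int (assuming 4-byte alignment)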

    Eventually, I just learned to cargo-cult my way through the memory allocation process. Here’s the JS code for allocating an Emscripten-compatible byte buffer, taken from my test harness (more on that later) :

    var musicBuffer = fs.readFileSync(testSpec['filename']);   // raw file bytes from disk
    var musicBufferBytes = new Uint8Array(musicBuffer);        // view them as unsigned bytes
    var bytesMalloc = player._malloc(musicBufferBytes.length); // reserve space in the Emscripten heap
    var bytes = new Uint8Array(player.HEAPU8.buffer, bytesMalloc, musicBufferBytes.length); // view onto that allocation
    bytes.set(new Uint8Array(musicBufferBytes.buffer));        // copy the input bytes into the heap
    

    So, read the array of bytes from some input source, create a Uint8Array from the bytes, use the Emscripten _malloc() function to allocate enough bytes from the Emscripten memory array for the input bytes, then create a new array… then copy the bytes…

    You know what ? It’s late and I can’t remember how it works exactly, but it does. It has been a few months since I touched that code (been fighting with front-end website tech since then). You write that memory allocation code enough times and it begins to make sense, and then you hope you don’t have to write it too many more times.
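
    For what it’s worth, pieced together, the whole round trip tends to look like the sketch below. The _malloc()/_free() calls and the HEAPU8 view are standard Emscripten plumbing ; the exported _player_load() function and the file name are hypothetical.

    var fs = require('fs');

    // 1. Read the input bytes (as in the test harness above).
    var fileBytes = new Uint8Array(fs.readFileSync('song.vgm'));

    // 2. Allocate space inside the Emscripten heap and copy the bytes in.
    var ptr = player._malloc(fileBytes.length);
    player.HEAPU8.set(fileBytes, ptr);

    // 3. Hand the pointer and length to the compiled C code.
    player._player_load(ptr, fileBytes.length);

    // 4. Free the heap buffer once the compiled side no longer needs it.
    player._free(ptr);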

    Multithreading
    You can’t port multithreaded code to JS via Emscripten. JavaScript has no notion of threads ! If you don’t understand the computer science behind this limitation, a more thorough explanation is beyond the scope of this post. But trust me, I’ve thought about it a lot. In fact, the official Emscripten literature states that you should be able to port most any C/C++ code as long as 1) none of the code is proprietary (i.e., all the raw source is available) ; and 2) there are no threads.

    Yes, I read about the experimental pthreads support added to Emscripten recently. Don’t get too excited ; that won’t be ready and widespread for a long time to come as it relies on a new browser API. In the meantime, figure out how to make your multithreaded C/C++ code run in a single thread if you want it to run in a browser.

    Printing Facility
    Eventually, getting software to work boils down to debugging, and the most primitive tool in many a programmer’s toolbox is the humble print statement. A print statement allows you to inspect a piece of a program’s state at key junctures. Eventually, when you try to cross-compile C/C++ code to JS using Emscripten, something is not going to work correctly in the generated JS “object code” and you need to understand what. You’ll be pleading for a method of just inspecting one variable deep in the original C/C++ code.

    I came up with this simple printf-workalike called emprintf() :

    #ifndef EMPRINTF_H
    #define EMPRINTF_H

    #include <stdio.h>
    #include <stdarg.h>
    #include <emscripten.h>

    #define MAX_MSG_LEN 1000

    /* NOTE : Don't pass format strings that contain single quote (') or newline
     * characters. */
    static void emprintf(const char *format, ...)
    {
        char msg[MAX_MSG_LEN];
        char consoleMsg[MAX_MSG_LEN + 16];
        va_list args;

        /* create the string */
        va_start(args, format);
        vsnprintf(msg, MAX_MSG_LEN, format, args);
        va_end(args);

        /* wrap the string in a console.log('') statement */
        snprintf(consoleMsg, MAX_MSG_LEN + 16, "console.log('%s')", msg);

        /* send the final string to the JavaScript console */
        emscripten_run_script(consoleMsg);
    }

    #endif /* EMPRINTF_H */

    Put it in a file called “emprint.h”. Include it into any C/C++ file where you need debugging visibility, use emprintf() as a replacement for printf() and the output will magically show up on the browser’s JavaScript debug console. Heed the comments and don’t put any single quotes or newlines in strings, and keep it under 1000 characters. I didn’t say it was perfect, but it has helped me a lot in my Emscripten adventures.

    Optimization Levels
    Remember to turn on optimization when compiling. I have empirically found that optimizing for size (-Os) leads to the best performance all around, in addition to having the smallest size. Just be sure to specify some optimization level. If you don’t, the default is -O0 which offers horrible performance when running in JS.

    Static Compression For HTTP Delivery
    JavaScript code compresses pretty efficiently, even after it has been optimized for size using -Os. I routinely see compression ratios between 3.5:1 and 5:1 using gzip.

    Web servers in this day and age are supposed to be smart enough to detect when a requesting web browser can accept gzip-compressed data and do the compression on the fly. They’re even supposed to be smart enough to cache compressed output so the same content is not recompressed for each request. I would have to set up a series of tests to establish whether either of the foregoing assertions are correct and I can’t be bothered. Instead, I took it into my own hands. The trick is to pre-compress the JS files and then instruct the webserver to serve these files with a ‘Content-Type’ of ‘application/javascript’ and a ‘Content-Encoding’ of ‘gzip’.

    1. Compress your large Emscripten-built JS files with ‘gzip’ : ‘gzip compiled-code.js’
    2. Rename them from extension .js.gz to .jsgz
    3. Tell the webserver to deliver .jsgz files with the correct Content-Type and Content-Encoding headers

    To do that last step with Apache, specify these lines :

    AddType application/javascript jsgz
    AddEncoding gzip jsgz
    

    They belong in either a directory’s .htaccess file or in the sitewide configuration (/etc/apache2/mods-available/mime.conf works on my setup).

    Build System and Build Time Optimization
    Oh goodie, build systems ! I had a very specific manner in which I wanted to build my JS modules using Emscripten. Can I possibly coerce any of the many popular build systems to do this ? It has been a few months since I worked on this problem specifically, but I seem to recall that the build systems I tried to use would freak out at the prospect of compiling stuff to a final binary target of .js.

    I had high hopes for Bazel, which Google released while I was developing Cirrus Retro. Surely, this is software that has been battle-tested in the harshest conditions of one of the most prominent software-developing companies in the world, needing to take into account the most bizarre corner cases and still build efficiently and correctly every time. And I have little doubt that it fulfills the order. Similarly, I’m confident that Google also has a team of no fewer than 100 or so people dedicated to developing and supporting the project within the organization. When you only have, at best, 1-2 hours per night to work on projects like this, you prefer not to fight with such cutting edge technology and after losing 2 or 3 nights trying to make a go of Bazel, I eventually put it aside.

    I also tried to use Autotools. It failed horribly for me, mostly for my own carelessness and lack of early-project source control.

    After that, it was strictly vanilla makefiles with no real dependency management. But you know what helps in these cases ? ccache ! Or at least, it would if it didn’t fail with Emscripten.

    Quick tip : ccache has trouble with LLVM unless you set the CCACHE_CPP2 environment variable (e.g. : “export CCACHE_CPP2=1”). I don’t remember the specifics, but it magically fixes things. Then, the lazy build process becomes “make clean && make”.

    Testing
    If you have never used Node.js, testing Emscripten-compiled JS code might be a good opportunity to start. I was able to use Node.js to great effect for testing the individually-compiled music player modules, wiring up a series of invocations using Python for a broader test suite (wouldn’t want to go too deep down the JS rabbit hole, after all).

    Be advised that Node.js doesn’t enjoy the same kind of JIT optimizations that the browser engines leverage. Thus, in the case of time critical code like, say, an audio synthesis library, the code might not run in real time. But as long as it produces the correct bitwise waveform, that’s good enough for continuous integration.
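
    For that kind of check, a test can simply render into a file and compare it byte-for-byte against a known-good reference. A minimal sketch (the file names are hypothetical ; Buffer#equals requires a reasonably recent Node.js) :

    // Bit-exact regression check suitable for continuous integration.
    var fs = require('fs');

    var produced  = fs.readFileSync('rendered-output.pcm');    // waveform from the compiled module
    var reference = fs.readFileSync('reference-output.pcm');   // known-good waveform

    if (produced.length === reference.length && produced.equals(reference)) {
        console.log('PASS: output matches the reference bit for bit');
    } else {
        console.error('FAIL: output differs from the reference');
        process.exit(1);
    }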

    Also, if you have largely been a low-level programmer for your whole career and are generally unfamiliar with the world of single-threaded, event-driven, callback-oriented programming, you might be in for a bit of a shock. When I wanted to learn how to read the contents of a file in Node.js, this is the first tutorial I found on the matter. I thought the code presented was a parody of bad coding style :

    var fs = require("fs");
    var fileName = "foo.txt";

    fs.exists(fileName, function(exists) {
        if (exists) {
            fs.stat(fileName, function(error, stats) {
                fs.open(fileName, "r", function(error, fd) {
                    var buffer = new Buffer(stats.size);

                    fs.read(fd, buffer, 0, buffer.length, null, function(error, bytesRead, buffer) {
                        var data = buffer.toString("utf8", 0, buffer.length);

                        console.log(data);
                        fs.close(fd);
                    });
                });
            });
        }
    });

    Apparently, this kind of thing doesn’t raise an eyebrow in the JS world.

    Now, I understand and respect the JS programming model. But this was seriously frustrating when I first encountered it because a simple script like the one I was trying to write just has an ordered list of tasks to complete. When it asks for bytes from a file, it really has nothing better to do than to wait for the answer.

    Thankfully, it turns out that Node’s fs module includes synchronous versions of the various file access functions. So it’s all good.
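
    That turns the callback pyramid above into a couple of straight-line calls :

    // Synchronous equivalent of the nested-callback example above.
    var fs = require("fs");
    var fileName = "foo.txt";

    var data = fs.readFileSync(fileName, "utf8");   // blocks until the whole file is read
    console.log(data);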

    Conclusion
    I’m sure I missed or underexplained some things. But if other brave souls are interested in dipping their toes in the waters of Emscripten, I hope these tips will come in handy.

    The post Things I Have Learned About Emscripten first appeared on Breaking Eggs And Making Omelettes.