Other articles (111)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media", meaning: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around installation difficulties, mainly due to server-side software dependencies, an all-in-one bash installation script was created to ease this step on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
    Documentation on using the installation script (...)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

On other sites (8573)

  • Subtitling Sierra VMD Files

    1 June 2016, by Multimedia Mike — Game Hacking

    I was contacted by a game translation hobbyist from Spain (henceforth known as The Translator). He had set his sights on Sierra’s 7-CD Phantasmagoria. This mammoth game was driven by a lot of FMV files and animations that have speech. These require language translation in the form of video subtitling. He’s lucky that he found possibly the one person on the whole internet who has just the right combination of skill, time, and interest to pull this off. And why would I care about helping? I guess I share a certain camaraderie with game hackers. Don’t act so surprised. You know what kind of stuff I like to work on.

    The FMV format used in this game is VMD, which makes an appearance in numerous Sierra titles. FFmpeg already supports decoding this format. FFmpeg also supports subtitling video. So, ideally, all that’s necessary to support this goal is to add a muxer for the VMD format which can encode raw video and audio, which the format supports. Implement video compression as extra credit.

    The pipeline that I envisioned looks like this:

    [Figure: VMD Subtitling Process]


    “Trivial!” I surmised. I just never learn, do I?

    The Plan
    So here’s my initial pitch, outlining the work I estimated that I would need to do towards the stated goal:

    1. Create a new file muxer that produces a syntactically valid VMD file with bogus video and audio data. Make sure it works with both FFmpeg’s playback system and the proper Phantasmagoria engine.
    2. Create a new video encoder that essentially operates in pass-through mode while correctly building a palette.
    3. Create a new basic encoder for the video frames.

    A big unknown for me was exactly how subtitle handling operates in FFmpeg. Thanks to this project, I now know. I was concerned because I was pretty sure that font rendering entails anti-aliasing, which bodes poorly for keeping the palette count under 256 unique colors.

    Computer Science Puzzle
    When pondering how to process the palette, I was excited for the opportunity to exercise actual computer science. FFmpeg converts paletted frames to full RGB frames; then it needs to convert them back to paletted frames. I had a vague recollection of solving this problem once before when I was experimenting with a new paletted video codec. I seem to recall that I did the palette conversion in a very naive manner: I just used a static 256-element array and processed each RGB pixel of the frame, seeing if the value already occurred in the table (an O(n) lookup) and adding it otherwise.

    There are more efficient algorithms, however, such as hash tables and trees. Somewhere along the line, FFmpeg helpfully acquired a rarely-used tree data structure, which was perfect for this project.
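    For illustration, here is a minimal sketch of the two approaches in Python (chosen for brevity; the real code lives in FFmpeg’s C tree, and all names here are hypothetical):

    # Naive palette building: linear scan of the table for every pixel,
    # an O(n) lookup per pixel.
    def build_palette_naive(pixels):
        palette = []
        for rgb in pixels:
            if rgb not in palette:           # O(n) scan of the table
                palette.append(rgb)
        return palette

    # Faster: hash-based membership, O(1) average per lookup. (FFmpeg's
    # tree structure gives O(log n); a dict is the Python idiom.)
    def build_palette_hashed(pixels):
        palette = {}
        for rgb in pixels:
            if rgb not in palette:
                palette[rgb] = len(palette)  # value doubles as palette index
        return palette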

    So I was pretty pleased with this optimization. Too bad this wouldn’t survive to the end of the effort.

    Another palette-related challenge was the fact that a group of pictures would be accumulating a new palette but that palette needed to be recorded before the group. Thus, the muxer needed to have extra logic to rewind the file when the video encoder transmitted a palette change.
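    A common way to satisfy that ordering constraint is to reserve space for the palette, write the group, then seek back and patch the reserved slot. A minimal sketch of the idea in Python, with an invented layout (the real muxer logic is more involved):

    import struct

    def write_group(f, frames, palette):
        palette_pos = f.tell()
        f.write(b"\x00" * 768)            # placeholder: 256 RGB triplets
        for frame in frames:
            f.write(frame)
        end_pos = f.tell()
        f.seek(palette_pos)
        f.write(struct.pack("768B", *palette))  # patch in the final palette
        f.seek(end_pos)                   # resume writing at the end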

    Video Compression
    VMD has a few methods in its compression toolbox. It can use interframe differencing, it has some RLE, or it can code a frame raw. It can also use a custom LZ-like format on top of these. For early prototypes, I elected to leave each frame coded raw. After the concept was proved, I implemented the frame differencing.


    [Figures: VMD frame #1, VMD frame #2, VMD frame difference. Top frame compared with the middle frame yields the bottom frame: red pixels indicate changes.]

    Encoding only those red dots in between long runs of unchanged pixels yielded a large, measurable improvement. The next step was to try wiring up FFmpeg’s existing LZ compression facilities to the encoder. This turned out to be implausible since VMD’s LZ variant has nothing to do with anything FFmpeg already provides. Fortunately, the LZ piece is not absolutely required, and frame differencing + RLE provides plenty of compression.
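    As an illustration of that combination (not VMD’s actual bitstream layout), here is a sketch in Python that stores only the changed spans between runs of unchanged pixels:

    def diff_rle(prev, cur):
        """Encode cur against prev as (skip_run, changed_pixels) pairs."""
        out, i, n = [], 0, len(cur)
        while i < n:
            start = i
            while i < n and cur[i] == prev[i]:   # run of unchanged pixels
                i += 1
            skip = i - start
            start = i
            while i < n and cur[i] != prev[i]:   # run of changed pixels
                i += 1
            out.append((skip, cur[start:i]))     # store only the changed span
        return out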

    Subtitling
    I’ve never done anything, multimedia programming-wise, concerning subtitles. I guess all the entertainment I care about has always been in my native tongue. What a good excuse to program outside of my comfort zone!

    First, I needed to know how to access FFmpeg’s subtitling facilities. Fortunately, The Translator did the legwork on this matter so I didn’t have to figure it out.

    However, I intuitively had misgivings about this phase. I had heard that the subtitling process performs anti-aliasing. That means that the image would need to be promoted to a higher colorspace for this phase and that the anti-aliasing process would likely push the color count way past 256. Some quick tests revealed this to be the case, as the running color count would leap by several hundred colors as soon as the palette accounting algorithm encountered a subtitle.

    So I dug into the subtitle subsystem. I discovered that the subtitle library operates by creating a linked list of subtitle bitmaps that the client app must render. The bitmaps consist of 8-bit alpha transparency values that must be composited onto the target frame (i.e., 0 = transparent, 255 = 100% opaque). For example, the letter ‘H’:

                                      (with 00s removed)
    13 F8 41 00 00 00 00 68 E4  |  13 F8 41             68 E4    
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF DC D0 D0 D0 D0 E4 EC  |  14 FF DC D0 D0 D0 D0 E4 EC
    14 FF 7E 50 50 50 50 9A EC  |  14 FF 7E 50 50 50 50 9A EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    11 E0 3B 00 00 00 00 5E CE  |  11 E0 3B             5E CE
    

    To get around the color explosion problem, I chose a threshold value and quantized values above and below to 255 and 0, respectively. Further, the process chooses an appropriate color from the existing palette rather than introducing any new colors.
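    A minimal sketch of that quantization step in Python (the threshold and the palette-matching details are my assumptions, not the exact code):

    ALPHA_THRESHOLD = 128   # assumed cutoff; the real value may differ

    def composite_subtitle(frame, bitmap, palette, text_rgb=(255, 255, 255)):
        # pick the existing palette index closest to the desired text color
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        text_idx = min(range(len(palette)), key=lambda i: dist(palette[i], text_rgb))

        for pos, alpha in bitmap:            # (pixel offset, 8-bit alpha)
            if alpha >= ALPHA_THRESHOLD:     # quantize: above -> fully opaque
                frame[pos] = text_idx        # below: leave the frame untouched
        return frame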

    Muxing Matters
    In order to force VMD into a general purpose media framework, a lot of special information needs to be passed around. As with many paletted codecs, the palette needs to be transmitted from the file demuxer to the video decoder via some side channel. For re-encoding, this also implies that the palette needs to make the trip from the video encoder to the file muxer. As if this wasn’t enough, individual VMD frames have even more data that needs to be ferried between the muxer and codec levels, including frame change boundaries. FFmpeg provides methods to do these things, but I could not always rely on the systems to relay the data in all cases. I was probably doing something wrong; I accept that. Instead, I just packed all the information at the front of an encoded frame and split it apart in the muxer.
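    That workaround amounts to a small private header prepended to each encoded frame; a sketch in Python with an invented field layout (the actual header differs):

    import struct

    def pack_frame(palette_changed, palette, x, y, w, h, payload):
        header = struct.pack("<B4H", palette_changed, x, y, w, h)
        if palette_changed:
            header += struct.pack("768B", *palette)   # 256 RGB triplets
        return header + payload

    def unpack_frame(buf):
        palette_changed, x, y, w, h = struct.unpack_from("<B4H", buf)
        offset = struct.calcsize("<B4H")
        palette = None
        if palette_changed:
            palette = struct.unpack_from("768B", buf, offset)
            offset += 768
        return palette, (x, y, w, h), buf[offset:]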

    I could not quite figure out how to get the audio and video muxed correctly. As a result, neither FFmpeg nor the Phantasmagoria engine could replay the files correctly.

    Plan B
    Since I was having so much trouble creating an entirely new VMD file, likely due to numerous unknown bits of the file format, I thought of another angle: re-use the existing VMD file. For this approach, I kept the video encoder and file muxer that I created in the initial phase, but modified the file muxer to emit a special intermediate file. Then, I created a Python tool to repackage the original VMD file using compressed video data in the intermediate file.

    For this phase, I also implemented a command line switch for FFmpeg to disable subtitle blending, to make the feature feel like less of an unofficial hack, as though this nonsense would ever have a chance of being incorporated upstream.

    At this point, I was seeing some success with the complete, albeit roundabout, subtitling process. I constructed a subtitle file using “Spanish I Learned From Mexican Telenovelas” and the frames turned out fairly readable:


    [Frame: “Le puso los cuernos a él” (“she cheated on him”)]

    [Frame: “es un desgraciado” (“he’s a scumbag”)] … these random subtitles could fit surprisingly well!


    The few files that I tested appeared to work fine. But then I handed off my work to The Translator and he immediately found a bunch of problems. According to my notes, the problems mostly took the form of flashing, solid color frames. Further, I found tiny, mostly imperceptible flaws in my RLE compressor, usually only detectable by running strict comparison tools; but I wasn’t satisfied.

    At this point, I think I attempted to just encode the entire palette at the front of each frame, as allowed by the format, but that did not seem to fix any problems. My notes are not completely clear on this matter (likely because I was still trying to figure out the exact problem), but I think it had to do with FFmpeg inserting extra video frames in order to even out gaps in the video framerate.

    Sigh, Plan C
    At this point, I was getting tired of trying to force FFmpeg to do this. So I decided to minimize its involvement using lessons learned up to this point.

    The next pitch:

    1. Create a new C program that can open an existing VMD file and output an identical VMD file. I know this sounds easy, but the specific method of copying entails interpreting individual parts of the file and writing those individual parts to the new file (a sketch of this copy loop follows the list). This is in preparation for…
    2. Import the VMD video decoder functions directly into the program to decode the individual video frames and re-encode them, replacing the video frames as the file is rewritten.
    3. Wire up the subtitle system. During the adventure to disable subtitle blending, I accidentally learned enough about interfacing to the subtitle library to just invoke it directly.
    4. Rewrite the RLE method so that it is 100% correct.
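    To make step 1 concrete, here is a sketch of the copy loop in Python rather than C, with an invented record layout of [1-byte type][4-byte length][payload] (real VMD parsing walks a table of contents and is considerably more involved):

    import struct

    VIDEO = 2   # hypothetical record type tag

    def rewrite_vmd(src_path, dst_path, reencode):
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            dst.write(src.read(0x330))           # assumed fixed-size header
            while True:
                rec = src.read(5)
                if len(rec) < 5:
                    break                        # end of file
                rtype, size = struct.unpack("<BI", rec)
                payload = src.read(size)
                if rtype == VIDEO:
                    payload = reencode(payload)  # re-encode video frames...
                    rec = struct.pack("<BI", rtype, len(payload))
                dst.write(rec + payload)         # ...pass everything else through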

    Off to work I went. That part about lifting the existing VMD decoder functions out of their libavcodec nest turned out to not be that straightforward. As an alternative, I modified the decoder to dump the raw frames to an intermediate file. In doing so, I think I was able to avoid the issue of the duplicated frames that plagued the previous efforts.

    Also, remember how I was really pleased with the palette conversion technique in which I was able to leverage computer science big-O theory? By this stage, I had no reason to convert the paletted video to RGB in the first place; all of the decoding, subtitling and re-encoding operates in the paletted colorspace.

    This approach seemed to work pretty well. The final program is subtitle-vmd.c. The process is still a little weird. The modifications in my own FFmpeg fork are necessary to create an intermediate file that the new C tool can operate with.

    Next Steps
    The Translator has found some assorted bugs and corner cases that still need to be ironed out. Further, for extra credit, I need to find the change windows for each frame to improve compression just a little more. I don’t think I will be trying for LZ compression, though.

    However, almost as soon as I had this whole system working, The Translator informed me that there is another, different movie format in play in the Phantasmagoria engine called ROBOT, with an extension of RBT. Fortunately, enough of the algorithms have been reverse engineered and re-implemented in ScummVM that I was able to sort out enough details for another subtitling project. That will be the subject of a future post.


  • Matomo Celebrates 15 Years of Building an Open-Source & Transparent Web Analytics Solution

    30 June 2022, by Matthieu Aubry — About, Community

    Fifteen years ago, I realised that people (myself included) were increasingly integrating the internet into their everyday lives, and it was clear that it would only expand in the future. It was an exciting new world, but the amount of personal data shared online, the level of tracking and the lack of security were growing concerns. Google Analytics had just launched and was already gaining huge traction – so data from millions of websites started flowing into Google’s database, creating what was then the biggest centralised database about people worldwide and their actions online.

    So as a young engineering student, I decided we needed to build an open source and transparent solution that could help make the internet more secure and private while still providing organisations with powerful insights. I aimed to create a win-win solution for businesses and their digital consumers.

    And in 2007, I started developing Matomo with help from Scott Switzer and Jennifer Langdon (who offered me an internship and support).

    All thanks to the Matomo Community

    We have reached significant milestones and made major changes over the last 15 years, but we wouldn’t be where we are today without the Matomo Community.

    So I would like to celebrate and thank the hundreds of volunteer developers who have donated their time to develop Matomo, the thousands of contributors who provided feedback to improve Matomo, the countless supportive forum members, our passionate team of 40 at Matomo, the numerous translators who have translated Matomo and the 1.5 million websites that choose Matomo as their analytics platform.

    [Photo: Matomo’s birthday – team meetup in Paris in 2012]

    Matomo has been a community effort built on the shoulders of many, and we will continue to work for you. 

    So let’s look at some milestones we have achieved over the last 15 years.

    Looking back on milestones in our timeline

    2007

    • Birth of Matomo
    • First alpha version released

    2008

    • Released first public version 0.1.0

    2009

    • 50,000 websites use Matomo

    2010

    • Matomo first stable 1.0.0 released
    • Mobile app launched

    2011

    • Released Ecommerce Analytics, Custom Variables, First Party Cookies

    • Released Privacy control features (first of many privacy features to come!)

    2012

    • Released Log Analytics feature
    • 1 Million Downloads!
    • 300,000 websites worldwide use Matomo

    2013

    • Matomo is now available in 50 languages!
    • Matomo brand redesign

    2017

    • Launched Matomo Cloud service 
    • Released Multi Channel Conversion Attribution Premium Feature, Custom Reports Premium Feature, Login Saml Premium Feature, WooCommerceAnalytics Premium Feature and Heatmap & Session Recording Premium Feature 

    2021

    • 1,000,000 websites worldwide use Matomo
    • including 30,000 active Matomo for WordPress installations
    • Released SEO Web Vitals, Advertising Conversion Export and Tracking Spam Prevention feature

    2022

    • Released WP Statistics to Matomo importer

    Our efforts continue

    While we’ve seen incredible growth over the years, our work doesn’t stop there. In fact, we’re only just getting started.

    Today over 55% of the internet continues to use privacy-threatening web analytics solutions, while 1.5% uses Matomo. So there are still great strides to be made to create a more private internet, and joining the Matomo Community is one way to support this movement.

    There are many ways to get involved, too.

    So what comes next for Matomo?

    The future of Matomo is approachable, powerful and flexible. We’re strengthening the customers’ voice, expanding our resources internally (we’re continuously hiring!) and conducting rigorous customer research to craft a tool that balances usability and functionality.

    I look forward to the next 15 years and seeing what the future holds for Matomo and our community.

  • Google Analytics Privacy Issues: Is It Really That Bad?

    2 June 2022, by Erin

    If you find yourself asking: “What’s the deal with Google Analytics privacy?”, you probably have some second thoughts.

    Your hunch is right. Google Analytics (GA) is a popular web analytics tool, but it’s far from being perfect when it comes to respecting users’ privacy. 

    This post helps you understand the serious Google Analytics privacy concerns that users, consumers and regulators have expressed over the years.

    In this blog, we’ll cover:

    What Does Google Analytics Collect About Users?

    To understand Google Analytics privacy issues, you need to know how Google treats web users’ data. 

    By default, Google Analytics collects the following information:

    • Session statistics — duration, page(s) viewed, etc. 
    • Referring website details — a link you came through or keyword used. 
    • Approximate geolocation — country, city. 
    • Browser and device information — mobile vs desktop, OS usage, etc. 

    Google obtains web analytics data about users via two means: an on-site Google Analytics tracking code and cookies.

    A cookie is a unique identifier (ID) assigned to each user visiting a web property. Each cookie stores two data items: a unique user ID and the website name.

    With the help of cookies, web analytics solutions can recognise returning visitors and track their actions across the website(s).
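    As a rough illustration of the mechanism, here is a hypothetical first-party analytics cookie sketched in Python (GA’s real _ga cookie encodes its own client ID scheme):

    import uuid
    from http import cookies

    def assign_visitor_cookie(site):
        jar = cookies.SimpleCookie()
        jar["visitor_id"] = str(uuid.uuid4())   # unique user ID
        jar["visitor_id"]["domain"] = site      # scoped to the issuing site
        jar["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persists ~1 year
        return jar.output()                     # emits a Set-Cookie header

    print(assign_visitor_cookie("example.org"))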

    First-party vs third-party cookies
    • First-party cookies are generated by one website and collect user behaviour data from said website only. 
    • Third-party cookies are generated by a third-party website object (for example, an ad) and can track user behaviour data across multiple websites. 

    As you can imagine, third-party cookies are a goldmine for companies selling online ads. Essentially, they allow ad platforms to continue watching how the user navigates the web after clicking a certain link. 

    Yet, people have little clue as to which data they are sharing and how it is being used. Also, user consent to tracking across websites is only marginally guaranteed by existing Google Analytics controls. 

    Why Third-Party Cookie Data Collection By GA Is Problematic 

    Cookies can transmit personally identifiable information (PII) such as name, login details, IP address, saved payment method and so on. Some of these details can end up with advertisers without consumers’ direct knowledge or consent.

    Regulatory frameworks such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) emerged as a response to uncontrolled user behaviour tracking.

    Under regulatory pressure, Big Tech companies had to adapt their data collection process.

    Apple was the first to implement by-default third-party cookie blocking in the Safari browser. It then added a tracking consent mechanism for iPhone users, starting from iOS 15.2.

    Google, too, said it would drop third-party cookie usage after the European Commission and the UK’s Competition and Markets Authority (CMA) launched antitrust investigations into its activity. 

    To shake off the data watchdogs, Google released Privacy Sandbox — a set of progressive technical, operational and compliance changes meant to ensure greater consumer privacy. 

    Google’s biggest promise: deprecating third-party cookie usage across all of its web and mobile products. 

    Originally, Google promised to drop third-party cookies by 2022, but that didn’t happen. Instead, Google delayed cookie deprecation for Chrome until the second half of 2023.

    Why did they push back on this despite hefty fines from regulators?

    Because online ads make Google a lot of money.

    In 2021, Alphabet Inc (Google’s parent company) made $256.7 billion in revenue, of which $209.49 billion came from selling advertising. 

    Lax Google Analytics privacy enforcement — and its wide usage by website owners — help Google make those billions from collecting and selling user data. 

    How Google Uses Collected Google Analytics Data for Advertising 

    Over 28 million websites (or roughly 85% of the Internet) have Google Analytics tracking codes installed. 

    Even if one day we get a Google Analytics version without cookies, it still won’t address all the privacy concerns regulators and consumers have. 

    Over the years, Google has accumulated an extensive collection of user data. The company’s engineers used it to build state-of-the-art deep learning models, now employed to build advanced user profiles. 

    Deep learning is the process of training a machine to recognise data patterns; this “knowledge” is then used to produce highly accurate predictive insights. The more data you have for model training, the better its future accuracy will be. 

    Google has amassed huge deposits of data from its collection of products — GA, YouTube, Gmail, Google Docs and Google Maps among others. Now it is using this data to build an alternative to third-party cookies: a mechanism for modelling people’s preferences, habits, lifestyles, etc. 

    Their latest model is called Google Topics. 

    This comes only after Google’s failed attempt to replace cookie-based tracking with the Federated Learning of Cohorts (FLoC) model, which, among other issues, didn’t offer enough transparency and user control.

    [Image: Google Topics. Source: Google Blog]

    Google Topics promises to limit the granularity of data advertisers get about users. 

    But it’s still a web user surveillance method. With Google Topics, the company will continue collecting user data via Chrome (and likely other Google products) — and share it with advertisers. 

    Because as we said before : Google is in the business of profiting off consumers’ data. 

    Two Major Ways Google Takes Advantage of Customer Data

    Every bit of data Google collects across its ecosystem of products can be used in two ways:

    • For ad targeting and personalisation 
    • To improve Google’s products 

    The latter also helps the former. 

    Advanced Ad Personalisation and Targeting

    GA provides the company with ample data on users’ 

    • Recent and frequent searches 
    • Location history
    • Visited websites
    • Used apps 
    • Videos and ads viewed 
    • Personal data like age or gender 

    The company’s privacy policy states this explicitly:

    [Image: excerpt from Google’s privacy policy. Source: Google]

    Google also admits to using collected data to “measure the effectiveness of advertising” and “personalise content and ads you see on Google.” 

    But there are no further elaborations on how exactly customers’ data is used — and what you can do to prevent it from being shared with third parties. 

    In some cases, Google also “forgets” to inform users about its in-product tracking.

    Journalists from CNBC and The New York Times independently concluded that Google monitors users’ Gmail activity. In particular, the company scans your inbox for notifications about recent purchases, trips, flights and bills. 

    While Google says that this information isn’t sold to advertisers (directly), they still may use the “saved information about your orders in other Google services”. 

    Once again, this means you have little control or knowledge of subsequent data usage. 

    Improving Product Usability 

    Google has many “arms” for collecting different data points — from a user’s search history to frequently travelled physical routes. 

    They also reserve the right to use these insights for improving existing products. 

    Here’s what it means: by combining different types of data points obtained from various products, Google can piece together a detailed picture of a person’s life. Even if such user profile data is anonymised, it is still alarmingly accurate. 

    Douglas Schmidt, a computer science researcher at Vanderbilt University, summarised the matter well: 

    “[Google’s] business model is to collect as much data about you as possible and cross-correlate it so they can try to link your online persona with your offline persona. This tracking is just absolutely essential to their business. ‘Surveillance capitalism’ is a perfect phrase for it.”

    Google’s Data Collection Obsession Is Baked Into Its Business Model 

    OK, but doesn’t Google offer some privacy controls to users? Yes. Google only sees and uses the information you voluntarily enter or permit it to access. 

    But as a Washington Post correspondent points out:

    “[Big Tech] companies get to set all the rules, as long as they run those rules by consumers in convoluted terms of service that even those capable of decoding the legalistic language rarely bother to read. Other mechanisms for notice and consent, such as opt-outs and opt-ins, create similar problems. Control for the consumer is mostly an illusion.”

    Google openly claims to be “one of many ad networks that personalise ads based on your activity online”. 

    The wrinkle is that they have more data than all other advertising networks (arguably combined). This helps Google sell high-precision targeting and contextually personalised ads for billions of dollars annually.

    Given that Google has stakes in so many products, it’s really hard to de-Google your business and minimise tracking and data collection by the company.

    They are also creating a monopoly on data collection and ownership, a fact that concerns regulators. The European Commission’s 2021 antitrust investigation states: 

    “The formal investigation will notably examine whether Google is distorting competition by restricting access by third parties to user data for advertising purposes on websites and apps while reserving such data for its own use.”

    In other words : By using consumer data to its unfair advantage, Google allegedly shuts off competition.

    But that’s not the only matter worrying regulators and consumers alike. Over the years, Google also received numerous other lawsuits for breaching people’s privacy, over and over again. 


    Separately, Google has a very complex history with GDPR compliance.

    How Google Analytics Contributes to the Web Privacy Problem 

    Google Analytics is the key puzzle piece that supports Google’s data-driven business model. 

    If Google were to release a privacy-focused Google Analytics alternative, it’d lose access to valuable web user data and a big portion of its digital ad revenue. 

    Remember: Google collects more data than it shares with web analytics users and advertisers. It keeps a lot of it for its own use — and keeps looking for ways to share this intel with advertisers (in a way that keeps regulators off its tail).

    For Google Analytics to become truly ethical and privacy-focused, Google would need to change their entire revenue model — which is something they are unlikely to do.

    Where does this leave Google Analytics users? 

    In slippery territory. By proxy, companies using GA are complicit in Google’s shady data collection and usage practices. They become part of the problem.

    In fact, Google Analytics usage opens a business to two types of risk: 

    • Reputational. 77% of global consumers say that transparency around how data is collected and used is important to them when interacting with different brands. That’s why data breaches and data misuse by brands lead to major public outrage on social media and, in some cases, boycotts. 
    • Legal. EU regulators are on a continuous crusade against Google Analytics 4 (GA4) as it is in breach of GDPR. French and Austrian watchdogs ruled the “service” illegal. Since Google Analytics is not GDPR compliant, it opens any business using it to lawsuits (which is already happening).

    But there’s a way out.

    Choose a Privacy-Friendly Google Analytics Alternative 

    Google Analytics is a popular web analytics service, but not the only one available. You have alternatives such as Matomo. 

    Our guiding principle is respecting privacy.

    Unlike Google Analytics, we leave data ownership 100% in users’ hands. Matomo lets you implement privacy-centred controls for user data collection.

    Plus, you can self-host Matomo On-Premise or choose Matomo Cloud with data securely stored in the EU and in compliance with GDPR.

    The best part? You can try our ethical alternative to Google Analytics for free. No credit card required! Start your free 21-day trial now.