
Other articles (42)

  • Automatic backup of SPIP channels

    1 April 2010

    As part of setting up an open platform, it is important for hosting providers to have reasonably regular backups available to guard against any potential problem.
    This task relies on two SPIP plugins: Saveauto, which performs regular backups of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site’s important data (documents, elements (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)

On other sites (7010)

  • avfilter/f_ebur128: properly propagate true peak

    23 June, by Niklas Haas
    avfilter/f_ebur128: properly propagate true peak
    

    After 3b26b782ee, `ebur128->true_peak` was only set to the maximum of the
    current "true peak per frame" values, when it should report the true peak for
    the entire stream.

    Fixes: 3b26b782eeded9b9ab7fac013cd1a83a30d68206

    • [DH] libavfilter/f_ebur128.c
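
    As a hedged illustration of the fix this commit describes (a Python sketch of the logic, not the actual C code in libavfilter; names are hypothetical), the difference is between overwriting the peak with each frame's value and accumulating a running maximum over the whole stream:

        # frames: iterable of per-sample true-peak values for each frame.
        def true_peak_buggy(frames):
            peak = 0.0
            for frame in frames:
                peak = max(frame)             # overwritten every frame: per-frame peak only
            return peak

        def true_peak_fixed(frames):
            peak = 0.0
            for frame in frames:
                peak = max(peak, max(frame))  # running maximum across the entire stream
            return peak
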
  • Consent Mode v2: Everything You Need to Know

    7 May 2024, by Alex — Analytics Tips

    Confused about Consent Mode v2 and its impact on your website analytics? You’re not the only one.

    Google’s latest update has left many scratching their heads about data privacy and tracking. 

    In this blog, we’re getting straight to the point. We’ll break down what Consent Mode v2 is, how it works, and the impact it has.

    What is Consent Mode?

    What exactly is Google Consent Mode, and why is there so much buzz surrounding it? This question has been frustrating analysts and marketers worldwide since the beginning of this year.

    Consent Mode is the solution from Google designed to manage data collection on websites in accordance with user privacy requirements.

    This mode enables website owners to customise how Google tags respond to users’ consent status for cookie usage. At its core, Consent Mode adheres to privacy regulations such as GDPR in Europe and CCPA in California, without significant loss of analytical data.

    Diagram displaying how consent mode works

    How does Consent Mode work?

    Consent Mode operates by adjusting the behaviour of tags on a website depending on whether consent for cookie usage is provided or not. If a user does not consent to the use of analytical or advertising cookies, Google tags automatically switch to collecting a limited amount of data, ensuring privacy compliance.

    This approach allows for continued valuable insights into website traffic and user behaviour, even if users opt out of most tracking cookies.

    What types of consent are available in Consent Mode?

    As of 6 March 2024, Consent Mode v2 has become the current standard (and, for anyone using Google advertising services, practically mandatory). It incorporates four consent types (sketched in code after the lists below):

    1. ad_storage: allows for the collection and storage of data necessary for delivering personalised ads based on user actions.
    2. ad_user_data: pertains to the collection and usage of data that can be associated with the user for ad customisation and optimisation.
    3. ad_personalization: permits the use of user data for ad personalisation and providing more relevant content.
    4. analytics_storage: relates to the collection and storage of data for analytics, enabling websites to analyse user behaviour and enhance user experience.

    Additionally, in Consent Mode v2, there are two modes:

    1. Basic Consent Mode: Google tags are not used for personalised advertising and measurement if consent is not obtained.
    2. Advanced Consent Mode: Google tags may use anonymised data for personalised advertising campaigns and measurement, even if consent is not obtained.
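
    The four consent types above amount to a consent-state configuration attached to each tag. Here is a schematic sketch of that state (the real API is gtag.js in JavaScript; this Python dict only mirrors the shape of the signals, and the helper function is hypothetical):

        # Default-denied consent state with the four v2 signals named above.
        default_consent = {
            "ad_storage": "denied",
            "ad_user_data": "denied",        # new in v2
            "ad_personalization": "denied",  # new in v2
            "analytics_storage": "denied",
        }

        def update_consent(state, **choices):
            # After the user interacts with a consent banner, flip the
            # corresponding signals to "granted".
            assert all(v in {"granted", "denied"} for v in choices.values())
            return {**state, **choices}

        # Example: the user accepts analytics but refuses ad personalisation.
        session = update_consent(default_consent, analytics_storage="granted")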

    What is Consent Mode v2? (And how does it differ from Consent Mode v1?)

    Consent Mode v2 is an improved version of the original Consent Mode, offering enhanced customisation capabilities and better compliance with privacy requirements. 

    The new version introduces additional consent configuration parameters, allowing for even more precise control over which data is collected and how it’s used. The key difference between Consent Mode v2 and Consent Mode v1 lies in more granular consent management, making this tool even more flexible and powerful in safeguarding personal data.

    In Consent Mode v2, the existing markers (ad_storage and analytics_storage) are accompanied by two new markers :

    1. ad_user_data – does the user agree to their personal data being utilised for advertising purposes?
    2. ad_personalization – does the user agree to their data being employed for remarketing?

    In contrast to ad_storage and analytics_storage, these markers don’t directly affect how the tags operate on the site itself. 

    They serve as additional directives sent alongside the pings to Google services, indicating how user data can be utilised for advertising purposes.

    While ad_storage and analytics_storage serve as upstream qualifiers for data (determining which identifiers are sent with the pings), ad_user_data and ad_personalization serve as downstream instructions for Google services regarding data processing.

    How is the implementation of Consent Mode v2 going?

    The implementation of Consent Mode v2 is encountering some issues and bugs (as expected). The most important things to understand:

    1. Advanced Consent Mode v2 is essential if you have traffic and campaigns with Google Ads in the European Union.
    2. If you don’t have substantially large traffic, enabling Advanced Consent Mode v2 will likely result in a traffic drop in GA4, because this version of Consent Mode (unlike the basic one) applies behavioural modelling to users who haven’t accepted the use of cookies, and modelling behaviour requires time.

    Behavioural modelling in Consent Mode v2 means the following: the data of users who have declined tracking begins to be modelled using machine learning.

    However, training the model requires a suitable data volume. As Google’s documentation states:

    The property should collect at least 1,000 events per day with analytics_storage=’denied’ for at least 7 days. The property should have at least 1,000 daily users submitting events with analytics_storage=’granted’ for at least 7 of the previous 28 days.
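
    A rough reading of those thresholds as a check (an illustrative sketch; the function and its inputs are my own construction, not any Google API):

        def modelling_eligible(daily_denied_events, daily_granted_users):
            """daily_denied_events: events per day sent with analytics_storage='denied';
            daily_granted_users: users per day sending events with
            analytics_storage='granted', one entry per day over the previous 28 days."""
            denied_ok = sum(1 for n in daily_denied_events if n >= 1000) >= 7
            granted_ok = sum(1 for n in daily_granted_users[-28:] if n >= 1000) >= 7
            return denied_ok and granted_ok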

    Largely due to this, the market’s response to the Consent Mode v2 implementation was mixed: many reported a significant drop in traffic in their GA4 and Google Ads reports upon enabling the Advanced mode. Essentially, a portion of the data was lost because Google’s models lacked enough data for training.

    Users have regularly reported exactly that scenario since the rollout began: if your website doesn’t have enough traffic for behavioural modelling, switching to Consent Mode v2 will produce a significant drop in traffic in your Google Ads and GA4 reports. Drops of 90-95% in user and session metrics have been observed in many cases.

    In a nutshell, you should be prepared for significant data losses if you are planning to switch to Google Consent Mode v2.

    How does Consent Mode v2 impact web analytics?

    The transition to Consent Mode v2 alters the methods of user data collection and processing. The main concerns arise from the potential loss of accuracy and completeness of analytical data due to restrictions on the use of cookies and other identifiers when user consent is absent. 

    With Google Consent Mode v2, the data of visitors who have not agreed to tracking will be modelled and may not accurately reflect your actual visitors’ behaviours and actions. So as an analyst or marketer, you will not have true insights into these visitors and the data acquired will be more generalised and less accurate.

    Google Consent Mode v2 appears to be a kind of compromise band-aid solution. 

    It tries to solve these issues by using data modelling and anonymised data collection. However, it’s critical to note that there are specific limitations inherent to the modelling mechanism.

    This complicates the analysis of visitor behaviour, advertising campaigns, and website optimisation, ultimately impacting decision-making and resulting in poor website performance and marketing outcomes.

    Wrap up

    Consent Mode v2 is a mechanism of managing Google tag operations based on user consent settings. 

    It’s mandatory if you’re using Google’s advertising services, and optional (at least for Advanced mode) if you don’t advertise on Google Ads. 

    There are also indications that this technology may be unreliable from a GDPR perspective.

    Using Google Consent Mode will inevitably lead to data losses and inaccuracies in its analysis. 

    In other words, it puts your business at some risk.

  • Developing MobyCAIRO

    26 May 2021, by Multimedia Mike — General

    I recently published a tool called MobyCAIRO. The ‘CAIRO’ part stands for Computer-Assisted Image ROtation, while the ‘Moby’ prefix refers to its role in helping process artifact image scans to submit to the MobyGames database. The tool is meant to provide an accelerated workflow for rotating and cropping image scans. It works on both Windows and Linux. Hopefully, it can solve similar workflow problems for other people.

    As of this writing, MobyCAIRO has not been tested on Mac OS X yet– I expect some issues there that should be easily solvable if someone cares to test it.

    The rest of this post describes my motivations and how I arrived at the solution.

    Background
    I have scanned well in excess of 2100 images for MobyGames and other purposes in the past 16 years or so. The workflow looks like this:


    Workflow diagram

    Image workflow


    It should be noted that my original workflow featured me manually rotating the artifact on the scanner bed in order to ensure straightness, because I guess I thought that rotate functions in image editing programs constituted dark, unholy magic or something. So my workflow used to be even more arduous :


    Longer workflow diagram

    I can’t believe I had the patience to do this for hundreds of scans


    Sometime last year, I was sitting down to perform some more scanning and found myself dreading the oncoming tedium of straightening and cropping the images. This prompted a pivotal question:


    Why can’t a computer do this for me?

    After all, I have always been a huge proponent of making computers handle the most tedious, repetitive, mind-numbing, and error-prone tasks. So I did some web searching to find if there were any solutions that dealt with this. I also consulted with some like-minded folks who have to cope with the same tedious workflow.

    I came up empty-handed. So I endeavored to develop my own solution.

    Problem Statement and Prior Work

    I want to develop a workflow that can automatically rotate an image so that it is straight, and also find the most likely crop area, uniformly whitening everything outside of it (in the case of circular crops).

    As mentioned, I checked to see if any other programs could handle this, starting with my usual workhorse, Photoshop Elements. But I can’t expect the trimmed-down version to do everything. I tried to find out whether its big brother could handle the task, but couldn’t find a definitive answer. Nor could I find any other tools that seem to take an interest in optimizing this particular workflow.

    When I brought this up to some peers, I received some suggestions, including an idea that the venerable GIMP had a feature like this, but I could not find any evidence. Further, I would get responses of “Program XYZ can do image rotation and cropping.” I had to tamp down on the snark to avoid saying “Wow! An image editor that can perform rotation AND cropping? What a game-changer!” Rotation and cropping features have been table stakes for any halfway competent image editor for at least the last 25 years. I am hoping to find or create a program which can lend a bit of programmatic assistance to the task.

    Why can’t other programs handle this? The answer seems fairly obvious: image editing tools are general tools, and I want a highly customized workflow. It’s not reasonable to expect a turnkey solution to do this.

    Brainstorming An Approach
    I started with the happiest of happy cases— a disc that needed archiving (a marketing/press assets CD-ROM from a video game company, contents described here) which appeared to have some pretty clear straight lines:


    Ubisoft 2004 Product Catalog CD-ROM

    My idea was to find straight lines in the image and then rotate the image so that the longest single straight line detected becomes parallel to the horizontal.

    I just needed to figure out how to find a straight line inside of an image. Fortunately, I quickly learned that this is very much a solved problem thanks to something called the Hough transform. As a bonus, I read that this is also the tool I would want to use for finding circles, when I got to that part. The nice thing about knowing the formal algorithm to use is being able to find efficient, optimized libraries which already implement it.

    Early Prototype
    A little searching for how to perform a Hough transform in Python led me first to scikit. I was able to rapidly produce a prototype that did some basic image processing. However, running the Hough transform directly on the image and rotating according to the longest line segment discovered turned out not to yield expected results.


    Sub-optimal rotation

    It also took a very long time to chew on the 3300×3300 raw image– certainly longer than I care to wait for an accelerated workflow concept. The key, however, is that you are apparently not supposed to run the Hough transform on a raw image– you need to compute the edges first, and then attempt to determine which edges are ‘straight’. The recommended algorithm for this step is the Canny edge detector. After applying this, I get the expected rotation:
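
    For the curious, the pipeline at this stage looks roughly like the following scikit-image sketch (file name and parameters are illustrative, not MobyCAIRO's actual code; the input is assumed to be an RGB scan):

        import math
        from skimage import color, feature, io, transform

        img = io.imread("scan.png")
        edges = feature.canny(color.rgb2gray(img), sigma=2)

        # Probabilistic Hough transform on the edge map, not the raw image.
        lines = transform.probabilistic_hough_line(
            edges, threshold=10, line_length=200, line_gap=5)

        # Angle of the longest detected segment, relative to the horizontal.
        (x1, y1), (x2, y2) = max(
            lines, key=lambda l: math.hypot(l[1][0] - l[0][0], l[1][1] - l[0][1]))
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1))

        # Rotate so that segment becomes horizontal (the sign may need flipping
        # depending on the coordinate convention); fill corners with white.
        straightened = transform.rotate(img, angle, cval=1.0)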


    Perfect rotation

    The algorithm also completes in a few seconds. So this is a good early result and I was feeling pretty confident. But, again– happiest of happy cases. I should also mention at this point that I had originally envisioned a tool that I would simply run against a scanned image and it would automatically/magically make the image straight, followed by a perfect crop.

    Along came my MobyGames comrade Foxhack to disabuse me of the hope of ever developing a fully automated tool. Just try and find a usefully long straight line in this:


    Nascar 07 Xbox Scan, incorrectly rotated

    Darn it, Foxhack…

    There are straight edges, to be sure. But my initial brainstorm of rotating according to the longest straight edge looks infeasible. Further, it was at this point that we started brainstorming that perhaps we could match on rating badges, such as the standard ESRB badges omnipresent on U.S. video games. This gets into feature detection and complicates things.

    This Needs To Be Interactive
    At this point in the effort, I came to terms with the fact that the solution will need to have some element of interactivity. I will also need to get out of my safe Linux haven and figure out how to develop this on a Windows desktop, something I am not experienced with.

    I initially dreamed up an impressive beast of a program written in C++ that leverages Windows desktop GUI frameworks, OpenGL for display and real-time rotation, GPU acceleration for image analysis and processing tricks, and some novel input concepts. I thought GPU acceleration would be crucial since I have a fairly good GPU on my main Windows desktop and I hear that these things are pretty good at image processing.

    I created a list of prototyping tasks on a Trello board and made a decent amount of headway on prototyping all the various pieces that I would need to tie together in order to make this a reality. But it was ultimately slow going when you can only grab an hour or two here and there to try to get anything done.

    Settling On A Solution
    Recently, I was determined to get a set of old shareware discs archived. I ripped the data a year ago but I was blocked on the scanning task because I knew that would also involve tedious straightening and cropping. So I finally got all the scans done, which was reasonably quick. But I was determined to not manually post-process them.

    This was fairly recent, but I can’t quite recall how I managed to come across the OpenCV library and its Python bindings. OpenCV is an amazing library that provides a significant toolbox for performing image processing tasks. Not only that, it provides “just enough” UI primitives to be able to quickly create a basic GUI for your program, including image display via multiple windows, buttons, and keyboard/mouse input. Furthermore, OpenCV seems to be plenty fast enough to do everything I need in real time, just with (accelerated where appropriate) CPU processing.

    So I went to work porting the ideas from the simple standalone Python/scikit tool. I thought of a refinement to the straight line detector– instead of just finding the longest straight edge, it creates a histogram of 360 rotation angles and builds a list of lines corresponding to each angle. Then it sorts the angles by cumulative line length and allows the user to iterate through this list, which will hopefully present the most likely straightened angle up front. Further, the tool allows making fine adjustments by 1/10 of a degree via the keyboard, not the mouse. It does all this while highlighting in red the straight line segments that are parallel to the horizontal axis, per the current candidate angle.
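
    A minimal sketch of that angle-histogram idea (my own reconstruction from the description above, not MobyCAIRO's source):

        import math
        from collections import defaultdict

        def rank_angles(segments):
            """segments: iterable of (x1, y1, x2, y2) line segments from a
            Hough transform. Returns (angle, lines) pairs, most likely
            straightening angle first."""
            length_by_angle = defaultdict(float)
            lines_by_angle = defaultdict(list)
            for x1, y1, x2, y2 in segments:
                # Bucket each segment into a whole-degree bin.
                angle = round(math.degrees(math.atan2(y2 - y1, x2 - x1))) % 360
                length_by_angle[angle] += math.hypot(x2 - x1, y2 - y1)
                lines_by_angle[angle].append((x1, y1, x2, y2))
            # Angles with the greatest cumulative line length come first.
            ranked = sorted(length_by_angle, key=length_by_angle.get, reverse=True)
            return [(a, lines_by_angle[a]) for a in ranked]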


    MobyCAIRO - rotation interface

    The tool draws a light-colored grid over the frame to aid the user in visually verifying the straightness of the image. Further, the program has a mode that allows the user to see the algorithm’s detected edges:


    MobyCAIRO - show detected lines

    For the cropping phase, the program uses the Hough circle transform in a similar manner, finding the most likely circles (if the image to be processed is supposed to be a circle) and allowing the user to cycle among them while making precise adjustments via the keyboard, again, rather than the mouse.


    MobyCAIRO - assisted circle crop

    Running the Hough circle transform is a significantly more intensive operation than the line transform. When I ran it on a full 3300×3300 image, it ran for a long time. I didn’t let it run longer than a minute before forcibly ending the program. Is this approach unworkable? Not quite– it turns out that the transform is just as effective when shrinking the image to 400×400, and completes in under 2 seconds on my Core i5 CPU.
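
    That shrink-first trick might look something like this in OpenCV (parameters and file name are illustrative, and at least one detected circle is assumed):

        import cv2

        scan = cv2.imread("disc_scan.png", cv2.IMREAD_GRAYSCALE)  # e.g. 3300x3300
        scale = 400.0 / max(scan.shape)
        small = cv2.resize(scan, None, fx=scale, fy=scale)

        # Hough circle transform on the small image completes in seconds.
        circles = cv2.HoughCircles(
            cv2.medianBlur(small, 5), cv2.HOUGH_GRADIENT,
            dp=1, minDist=50, param1=100, param2=50,
            minRadius=100, maxRadius=200)

        # Map each candidate (x, y, radius) back to full-resolution coordinates.
        candidates = [(x / scale, y / scale, r / scale) for x, y, r in circles[0]]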

    For rectangular cropping, I just settled on using OpenCV’s built-in region-of-interest (ROI) facility. I tried to intelligently find the best candidate rectangle and allow fine adjustments via the keyboard, but I wasn’t having much success, so I took a path of lesser resistance.
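
    That facility is essentially a one-liner (presumably cv2.selectROI, though I am inferring the exact call; drag a box, then press ENTER or SPACE to confirm):

        import cv2

        image = cv2.imread("scan.png")
        x, y, w, h = cv2.selectROI("crop", image)
        cropped = image[y:y + h, x:x + w]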

    Packaging and Residual Weirdness
    I realized that this tool would be more useful to a broader Windows-using base of digital preservationists if they didn’t have to install Python, establish a virtual environment, and install the prerequisite dependencies. Thus, I made the effort to figure out how to wrap the entire thing up into a monolithic Windows EXE binary. It is available from the project’s Github release page (another thing I figured out for the sake of this project!).

    The binary is pretty heavy, weighing in at a bit over 50 megabytes. You might advise using compression– it IS compressed! Before I figured out the --onefile option for pyinstaller.exe, the generated dist/ subdirectory was 150 MB. Among other things, there’s a 30 MB FORTRAN BLAS library packaged in!
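
    For reference, the invocation that produces such a single-file binary is along these lines (the script name here is hypothetical):

        pyinstaller.exe --onefile mobycairo.py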

    Conclusion and Future Directions
    Once I got it all working with a simple tkinter UI up front in order to select between circle and rectangle crop modes, I unleashed the tool on 60 or so scans in bulk, using the Windows forfiles command (another learning experience). I didn’t put a clock on the effort, but it felt faster. Of course, I was bursting with pride the whole time because I was using my own tool. I just wish I had thought of it sooner. But, really, with 2100+ scans under my belt, I’m just getting started– I literally have thousands more artifacts to scan for preservation.

    The tool isn’t perfect, of course. Just tonight, I threw another scan at MobyCAIRO. Just go ahead and try to find straight lines in this specimen:


    Reading Who? Reading You! CD-ROM

    I eventually had to use the text left and right of center to line up against the grid with the manual keyboard adjustments. Still, I’m impressed by how these computer vision algorithms can see patterns I can’t, highlighting lines I never would have guessed at.

    I’m eager to play with OpenCV some more, particularly the video processing functions, perhaps even some GPU-accelerated versions.
