Advanced search

Media (0)

No media matching your criteria is available on this site.

Other articles (47)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are performed in addition to the normal behaviour: the technical information of the file's audio and video streams is retrieved, and a thumbnail is generated by extracting a (...)

On other sites (4250)

  • WebVTT as a W3C Recommendation

    2 December 2013, by silvia

    Three weeks ago I attended TPAC, the annual meeting of W3C Working Groups. One of the meetings was that of the Timed Text Working Group (TT-WG), which has been specifying TTML, the Timed Text Markup Language. It is now proposed that WebVTT also be standardised through the same Working Group.

    How did that happen, you may ask, in particular since WebVTT and TTML have in the past been portrayed as rival caption formats? How will the WebVTT spec that is currently under development in the Text Track Community Group (TT-CG) move through a Working Group process?

    I’ll explain first why there is a need for WebVTT to become a W3C Recommendation, and then how this is proposed to be part of the Timed Text Working Group deliverables, and finally how I can see this working between the TT-CG and the TT-WG.

    Advantages of a W3C Recommendation

    TTML is an XML-based markup format for captions developed back when XML was all the hotness. It has become a W3C standard (a so-called “Recommendation”) despite not having been implemented in any browsers (if you ask me: that’s actually a flaw of the W3C standardisation process: it requires only two interoperable implementations of any kind – and that could be anyone’s JavaScript library or Flash demonstrator – it doesn’t actually require browser implementations. But I digress…). To be fair, a subpart of TTML is by now implemented in Internet Explorer, but all the other major browsers have thus far rejected proposals to implement it.

    Because of its Recommendation status, TTML has become the basis for several other caption standards that other SDOs have picked up: the SMPTE’s SMPTE-TT format, the EBU’s EBU-TT format, and the DASH Industry Forum’s use of SMPTE-TT. SMPTE-TT has also become the “safe harbour” format for the US legislation on captioning as decided by the FCC. (Note that the FCC requirements for captions on the Web are actually based on a list of features rather than requiring a specific format. But that will be the topic of a different blog post…)

    WebVTT is much younger than TTML. TTML was developed as an interchange format among caption authoring systems. WebVTT was built for rendering in Web browsers and with HTML5 in mind. It meets the requirements of the <track> element and supports more than just captions/subtitles. WebVTT is popular with browser developers and has already been implemented in all major browsers (Firefox Nightly is the last to implement it – all others have support already released).
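
    As a minimal illustration (my own sketch, not from the original post; the file names are hypothetical), a WebVTT file is just plain text made of timed cues:

    WEBVTT

    00:00:01.000 --> 00:00:04.000
    - Hello, and welcome to the demo.

    00:00:04.500 --> 00:00:07.500
    Each cue is plain text with a start and an end time.

    A browser picks it up through the HTML5 <track> element:

    <video controls src="demo.mp4">
      <track kind="captions" src="captions.vtt" srclang="en" label="English" default>
    </video>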

    As we can see, and as has been proven by the HTML spec and multiple other specs: browsers don’t wait for specifications to have W3C Recommendation status before they implement them. Nor do they really care about the status of a spec – what they care about is whether a spec makes sense for the Web developer and user communities and whether it fits in the Web platform. WebVTT has obviously achieved this status, even with an evolving spec. (Note that the spec tries very hard not to break backwards compatibility, thus all past implementations will at least be compatible with the more basic features of the spec.)

    Given that Web browsers don’t need WebVTT to become a W3C standard, why then should we spend effort in moving the spec through the W3C process to become a W3C Recommendation?

    The modern Web is now much bigger than just Web browsers. Web specifications are being used in all kinds of devices including TV set-top boxes, phone and tablet apps, and even unexpected devices such as white goods. Videos are increasingly omnipresent, thus exposing deaf and hard-of-hearing users to ever-growing challenges in interacting with content on diverse devices. Some of these devices will not use auto-updating software but fixed versions, so they can’t easily adapt to new features. Thus, caption producers (both commercial and community) need to be able to author captions (and other video accessibility content as defined by the HTML5 <track> element) towards a feature set that is clearly defined to be supported by such non-updating devices.

    Understandably, device vendors in this space have a need to build their technology on standardised specifications. SDOs for such device technologies like to reference fixed specifications so the feature set is not continually updating. To reference WebVTT, they could use a snapshot of the specification at any time and reference that, but that’s not how SDOs work. They prefer referencing an officially sanctioned and tested version of a specification – for a W3C specification that means creating a W3C Recommendation of the WebVTT spec.

    Taking WebVTT on a W3C recommendation track is actually advantageous for browsers, too, because a test suite will have to be developed that proves that features are implemented in an interoperable manner. In summary, I can see the advantages and personally support the effort to take WebVTT through to a W3C Recommendation.

    Choice of Working Group

    AFAIK this is the first time that a specification developed in a Community Group is being moved onto the recommendation track. This is something that was expected when the W3C created CGs, but not something that has an established process yet.

    The first question of course is which WG would take it through to Recommendation? Would we create a new Working Group or find an existing one to move the specification through? Since WGs involve a lot of overhead, the preference was to add WebVTT to the charter of an existing WG. The two obvious candidates were the HTML WG and the TT-WG – the first because it’s where WebVTT originated and the latter because it’s the closest thematically.

    Adding a deliverable to a WG is a major undertaking. The TT-WG is currently in the process of re-chartering and thus a suggestion was made to add WebVTT to the milestones of this WG. TBH that was not my first choice. Since I’m already an editor in the HTML WG and WebVTT is very closely related to HTML and can be tested extensively as part of HTML, I preferred the HTML WG. However, adding WebVTT to the TT-WG has some advantages, too.

    Since TTML is an exchange format, lots of captions that will be created (at least professionally) will be in TTML and TTML-related formats. It makes sense to create a mapping from TTML to WebVTT for rendering in browsers. The expertise of both TTML and WebVTT experts is required to develop a good mapping – as was shown when we developed the mapping from CEA608/708 to WebVTT. Also, captioning experts are already in the TT-WG, so it helps to get a second set of eyes onto WebVTT.

    A disadvantage of moving a specification out of a CG into a WG is, however, that you potentially lose a lot of the expertise that is already involved in the development of the spec. People don’t easily re-subscribe to additional mailing lists or want the additional complexity of involving another community (see e.g. this email).

    So, a good process needs to be developed to allow everyone to contribute to the spec in the best way possible without requiring duplicate work. How can we do that?

    The forthcoming process

    At TPAC the TT-WG discussed for several hours what the next steps are in taking WebVTT through the TT-WG to recommendation status (agenda with slides). I won’t bore you with the different views – if you are keen, you can read the minutes.

    What I came away with is the following process:

    1. Fix a few more bugs in the CG until we’re happy with the feature set in the CG. This should match the feature set that we realistically expect devices to implement for a first version of the WebVTT spec.
    2. Make an FSA (Final Specification Agreement) in the CG to create a stable reference and a clean IPR position.
    3. Assuming that the TT-WG’s charter has been approved with WebVTT as a milestone, we would next bring the FSA specification into the TT-WG as FPWD (First Public Working Draft) and immediately do a Last Call which effectively freezes the feature set (this is possible because there has already been wide community review of the WebVTT spec); in parallel, the CG can continue to develop the next version of the WebVTT spec with new features (just as is happening with the HTML5 and HTML5.1 specifications).
    4. Develop a test suite and address any issues in the Last Call document (of course, also fix these issues in the CG version of the spec).
    5. As per W3C process, substantive and minor changes to Last Call documents have to be reported and raised issues addressed before the spec can progress to the next level: Candidate Recommendation status.
    6. For the next step – Proposed Recommendation status – an implementation report is necessary, and thus the test suite needs to be finalized for the given feature set. The feature set may also be reduced at this stage to just the ones implemented interoperably, leaving any other features for the next version of the spec.
    7. The final step is Recommendation status, which simply requires sufficient support and endorsement by W3C members.

    The first version of the WebVTT spec naturally has a focus on captioning (and subtitling), since this has been the dominant use case we have focused on thus far, and it is the feature set that is most interoperably implemented across browsers. It’s my expectation that the next version of WebVTT will have a lot more features related to audio descriptions, chapters and metadata. Thus, this seems a good time for a first-version feature freeze.

    There are still several obstacles to progressing WebVTT as a milestone of the TT-WG. Apart from the need to get buy-in from the TT-WG, the TT-CG, and the AC (Advisory Committee, which has to approve the new charter), we’re also looking at the license of the specification document.

    The CG specification has an open license that allows creating derivative work as long as there is attribution, while the W3C document license for documents on the recommendation track does not allow the creation of derivative work unless given explicit exceptions. This is an issue that is currently being discussed in the W3C with a proposal for a CC-BY license on the Recommendation track. However, my view is that it’s probably OK to use the different document licenses: the TT-WG will work on WebVTT 1.0 and give it a W3C document license, while the CG starts working on the next WebVTT version under the open CG license. It probably makes sense to have a less open license on a frozen spec anyway.

    Making the best of a complicated world

    WebVTT is now proposed as part of the recharter of the TT-WG. I have no idea how complicated the process will become to achieve a W3C WebVTT 1.0 Recommendation, but I am hoping that what is outlined above will be workable in such a way that all of us get to focus on progressing the technology.

    At TPAC I got the impression that the TT-WG is committed to progressing WebVTT to Recommendation status. I know that the TT-CG is committed to continuing to develop WebVTT to its full potential for all kinds of media-time-aligned content, with new kinds already discussed at FOMS. Let’s enable both groups to achieve their goals. As a consequence, we will allow the two formats to excel where they do: TTML as an interchange format and WebVTT as a browser rendering format.

  • CD-R Read Speed Experiments

    21 May 2011, by Multimedia Mike — Science Projects, Sega Dreamcast

    I want to know how fast I can really read data from a CD-R. Pursuant to my previous musings on this subject, I was informed that it is inadequate to profile reading just any file from a CD-R since data might be read faster or slower depending on whether the data is closer to the inside or the outside of the disc.

    Conclusion / Executive Summary
    It is 100% true that reading data from the outside of a CD-R is faster than reading data from the inside. Read on if you care to know the details of how I arrived at this conclusion, and to find out just how much speed advantage there is to reading from the outside rather than the inside.

    Science Project Outline

    • Create some sample CD-Rs with various properties
    • Get a variety of optical drives
    • Write a custom program that profiles the read speed

    Creating The Test Media
    It’s my understanding that not all CD-Rs are created equal. Fortunately, I have 3 spindles of media handy: some plain-looking Memorex discs, some rather flamboyant Maxell discs, and those 80mm TDK discs.



    My approach for burning is to create a single file to be burned into a standard ISO-9660 filesystem. The size of the file will be the advertised length of the CD-R minus 1 megabyte for overhead: 699 MB for the 120mm discs, 209 MB for the 80mm disc. The file will contain a repeating sequence of 0..0xFF bytes.
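
    (The original post doesn’t show how that file was generated; the following is my own minimal C sketch of one way to do it, with the output filename and size in MB taken from the command line.)

    /* fill a file with the repeating 0x00..0xFF pattern used as the burn payload */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
      FILE *out;
      unsigned char block[256];
      long i, megabytes;

      if (argc < 3)
      {
        printf("usage: %s <filename> <size in MB>\n", argv[0]);
        return 1;
      }
      megabytes = atol(argv[2]);

      /* one block holds the full 0x00..0xFF sequence */
      for (i = 0; i < 256; i++)
        block[i] = (unsigned char)i;

      out = fopen(argv[1], "wb");
      if (!out)
      {
        printf("could not open %s\n", argv[1]);
        return 1;
      }

      /* 1 MB = 1048576 bytes = 4096 blocks of 256 bytes */
      for (i = 0; i < megabytes * 4096; i++)
        fwrite(block, 1, sizeof(block), out);

      fclose(out);
      return 0;
    }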

    Profiling
    I don’t want to leave this to the vagaries of any filesystem handling layer, so I will conduct this experiment at the sector level. Profiling program outline:

    • Read the CD-ROM TOC and get the number of sectors that comprise the data track
    • Profile reading the first 20 MB of sectors
    • Profile reading 20 MB of sectors in the middle of the track
    • Profile reading the last 20 MB of sectors

    Unfortunately, I couldn’t figure out raw sector reading on modern Linux incarnations (which is annoying since I remember it being pretty straightforward years ago). So I left it to the filesystem after all. New algorithm:

    • Open the single, large file on the CD-R and query the file length
    • Profile reading the first 20 MB of data, 512 kbytes at a time
    • Profile reading 20 MB of sectors in the middle of the track (starting from filesize / 2 - 10 MB), 512 kbytes at a time
    • Profile reading the last 20 MB of sectors (starting from filesize - 20MB), 512 kbytes at a time

    Empirical Data
    I tested the program in Linux using an LG Slim external multi-drive (seen at the top of the pile in this post) and one of my Sega Dreamcast units. I gathered the median value of 3 runs for each area (inner, middle, and outer). I also conducted a buffer flush in between Linux runs (as root: 'sync; echo 3 > /proc/sys/vm/drop_caches').

    LG Slim external multi-drive (reading from inner, middle, and outer areas in kbytes/sec):

    • TDK-80mm: 721, 897, 1048
    • Memorex-120mm: 1601, 2805, 3623
    • Maxell-120mm: 1660, 2806, 3624

    Since 1X for CD-ROM data corresponds to roughly 150 kbytes/sec, the 120mm discs range from about 10.5X at the inner edge all the way up to a full 24X at the outer edge on this drive. For whatever reason, the 80mm disc fares a bit worse — even at the inner track — with a range of 4.8X - 7X.

    Sega Dreamcast (reading from inner, middle, and outer areas in kbytes/sec):

    • TDK-80mm: 502, 632, 749
    • Memorex-120mm: 499, 889, 1143
    • Maxell-120mm: 500, 890, 1156

    It’s interesting that the 80mm disc performed comparably to the 120mm discs in the Dreamcast, in contrast to the LG Slim drive. Also, the results are consistent with my previous profiling experiments, which largely only touched the inner area. The read speeds range from 3.3X - 7.7X. The middle of a 120mm disc reads at about 6X.

    Implications
    A few thoughts regarding these results:

    • Since the very definition of 1X is the minimum speed necessary to stream data from an audio CD, original 1X CD-ROM drives would presumably have needed to be capable of reading at 1X from the inner area. I wonder what the max read speed at the outer edges was? It’s unlikely I would be able to get a 1X drive working easily in this day and age since the earliest CD-ROM drives required custom controllers.
    • I think 24X is the max rated read speed for CD-Rs, at least for this drive. This implies that the marketing literature only cites the best possible numbers. I guess this is no surprise, similar to how monitors and TVs have always been measured by their diagonal dimension.
    • Given this data, how do you engineer an ISO-9660 filesystem image so that the timing-sensitive multimedia files live on the outermost track? In the Dreamcast case, if you can guarantee your FMV files will live somewhere between the middle and the end of the disc, you should be able to count on a bitrate of at least 900 kbytes/sec.

    Source Code
    Here is the program I wrote for profiling. Note that the filename is hardcoded (#define FILENAME). Compiling for Linux is a simple 'gcc -Wall profile-cdr.c -o profile-cdr'. Compiling for Dreamcast is performed in the standard KallistiOS manner (people skilled in the art already know what they need to know); the only variation is to compile with the '-D_arch_dreamcast' flag, which the default KOS environment adds anyway.

    C:

    #ifdef _arch_dreamcast
      #include <kos.h>

      /* map I/O functions to their KOS equivalents */
      #define open fs_open
      #define lseek fs_seek
      #define read fs_read
      #define close fs_close

      #define FILENAME "/cd/bigfile"
    #else
      #include <stdio.h>
      #include <sys/types.h>
      #include <sys/stat.h>
      #include <sys/time.h>
      #include <fcntl.h>
      #include <unistd.h>

      #define FILENAME "/media/Full disc/bigfile"
    #endif

    /* Get a current absolute millisecond count; it doesn't have to be in
     * reference to anything special. */
    unsigned int get_current_milliseconds()
    {
    #ifdef _arch_dreamcast
      return timer_ms_gettime64();
    #else
      struct timeval tv;
      gettimeofday(&tv, NULL);
      return tv.tv_sec * 1000 + tv.tv_usec / 1000;
    #endif
    }

    #define READ_SIZE (20 * 1024 * 1024)
    #define READ_BUFFER_SIZE (512 * 1024)

    int main()
    {
      int i, j;
      int fd;
      char read_buffer[READ_BUFFER_SIZE];
      off_t filesize;
      unsigned int start_time, end_time;

      fd = open(FILENAME, O_RDONLY);
      if (fd == -1)
      {
        printf("could not open %s\n", FILENAME);
        return 1;
      }
      filesize = lseek(fd, 0, SEEK_END);

      for (i = 0; i < 3; i++)
      {
        if (i == 0)
        {
          printf("reading inner 20 MB...\n");
          lseek(fd, 0, SEEK_SET);
        }
        else if (i == 1)
        {
          printf("reading middle 20 MB...\n");
          lseek(fd, (filesize / 2) - (READ_SIZE / 2), SEEK_SET);
        }
        else
        {
          printf("reading outer 20 MB...\n");
          lseek(fd, filesize - READ_SIZE, SEEK_SET);
        }
        /* read 20 MB; 40 chunks of 1/2 MB */
        start_time = get_current_milliseconds();
        for (j = 0; j < (READ_SIZE / READ_BUFFER_SIZE); j++)
          if (read(fd, read_buffer, READ_BUFFER_SIZE) != READ_BUFFER_SIZE)
          {
            printf("read error\n");
            break;
          }
        end_time = get_current_milliseconds();
        printf("%d - %d = %d ms => %d kbytes/sec\n",
          end_time, start_time, end_time - start_time,
          READ_SIZE / (end_time - start_time));
      }

      close(fd);

      return 0;
    }
  • How (and Why) to Run a Web Accessibility Audit in 2024

    7 May 2024, by Erin

    When most businesses design their websites, they primarily think about aesthetics, not accessibility. However, not everyone who visits your website has the same abilities or access needs. Eight percent of the US population has visual impairments.

    The last thing you want is to alienate website visitors with a bad experience because your site isn’t up to accessibility standards. (And with growing international regulation, risk fines or lawsuits as a result.)

    A web accessibility audit can help you identify and fix any issues for users with impaired vision, hearing or other physical disabilities. In this article, we’ll cover how to conduct such an audit efficiently for your website in 2024.

    What is a web accessibility audit?

    A web accessibility audit is a way to evaluate the usability of your website for users with visual, auditory or physical impairments, as well as cognitive disabilities or neurological issues. The goal is to figure out how accessible your website is to each of these affected groups and solve any issues that come up.

    To complete an audit, you use digital tools and various manual accessibility testing processes to ensure your site meets modern web accessibility standards.

    Why is a web accessibility audit a must in 2024?

    For far too long, many businesses have not considered the experiences of those with disabilities. The growing frustrations of affected internet users have led to a new focus on web accessibility laws and enforcement.

    Lawsuits related to the ADA (Americans with Disabilities Act) reached all-time highs in 2023 — over 4,500 digital-related lawsuits were filed. The EU has also drawn up the European Accessibility Act (EAA), which goes into effect in June 2025.

    But at the end of the day, it’s not about accessibility legislation. It’s about doing right by people.

    Illustration of a sight-impaired person using text-to-speech to browse a website on a smartphone

    This video by voice actor, YouTuber and surfer Pete Gustin demonstrates why accessibility measures are so important. If buttons, navigation and content sections aren’t properly labelled, sight-impaired people who rely on text-to-speech to browse the web can’t comfortably interact with your site.

    And you’re worse off for it. You can lose some of your best customers and advocates this way. 

    With stronger enforcement of accessibility regulations in the US and new regulations coming into effect in the EU in 2025, the time to act is now. It’s not enough to “keep accessibility in mind” — you must take concrete steps to improve it.

    Who should lead a web accessibility audit?

    Ideally, you want to hire a third-party web accessibility expert to lead the audit. They can guide you through multiple stages of manual accessibility testing to ensure your site meets regulations and user needs. 

    Experienced accessibility auditors are familiar with common pitfalls and can help you avoid them. They ensure you meet the legal requirements with proper solutions, not quick fixes.

    If this isn’t an option, find someone with relevant experience within your company. And involve someone with “skin in the game” in the process. Hire someone with visual impairments to usability test your site. Don’t just do automated tests or “put yourself in their shoes.” Make sure the affected users can use your site without issues.

    Automated vs. manual audits and the danger of shortcuts

    While there are automated audits, they only check for the bare minimum:

    • Do your images have alt tags? (They don’t check whether the alt text is descriptive or just SEO junk.)
    • Are clickable buttons identified with text for visually impaired users?
    • Is your text size adjustable?
    • Are your background and foreground colours accessible for colour-blind users? Is there a sufficient contrast ratio? (A worked contrast-ratio check follows below.)
    Illustration of the results of an automated accessibility test
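
    That last contrast-ratio question is one of the few checks that is purely mechanical. As a minimal sketch (my own illustration, not taken from any particular checker), the WCAG 2.x relative-luminance and contrast-ratio formulas look like this in C:

    #include <math.h>
    #include <stdio.h>

    /* convert an 8-bit sRGB channel to its linearised value (WCAG 2.x definition) */
    static double srgb_to_linear(int c)
    {
      double s = c / 255.0;
      return (s <= 0.03928) ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4);
    }

    /* relative luminance of an sRGB colour */
    static double relative_luminance(int r, int g, int b)
    {
      return 0.2126 * srgb_to_linear(r) +
             0.7152 * srgb_to_linear(g) +
             0.0722 * srgb_to_linear(b);
    }

    /* WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05) */
    static double contrast_ratio(double l1, double l2)
    {
      double lighter = l1 > l2 ? l1 : l2;
      double darker  = l1 > l2 ? l2 : l1;
      return (lighter + 0.05) / (darker + 0.05);
    }

    int main(void)
    {
      /* example: mid-grey text (#777777) on a white background */
      double text = relative_luminance(0x77, 0x77, 0x77);
      double bg = relative_luminance(0xFF, 0xFF, 0xFF);
      printf("contrast ratio: %.2f:1\n", contrast_ratio(text, bg));
      return 0;
    }

    WCAG requires a ratio of at least 4.5:1 for normal-size text (3:1 for large text) at level AA; the grey-on-white example above falls just short of that.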

    They don’t dive into the user journey (and typically can’t access login-locked parts of your site). They can be a good starting point, but it’s a bad idea to rely completely on automated audits.

    They’ll miss more complex issues like:

    • Dynamic content and animated elements or videos that could put people with epilepsy at risk of seizures
    • A navigational flow that is unnecessarily challenging for users with impairments
    • Video elements without proper captions

    So, don’t rely too much on automated tests and audits. Many lawsuits for ADA infractions are against companies that think they’ve already solved the problem. For example, 30% of 2023 lawsuits were against sites that used accessibility overlays.

    Key elements of the Web Content Accessibility Guidelines (WCAG)

    The international standard for web accessibility is the Web Content Accessibility Guidelines (WCAG). The most recent version, WCAG 2.2, adds new requirements for visual elements and focus, among other updates.

    Here’s a quick overview of the key priorities of WCAG:

    Diagram of core WCAG considerations like text scalability, colour choices, accessible navigation, and more

    Perceivable: Any user can read or listen to your site’s content

    The first priority is for any user to be able to perceive the actual content on your site. To be compliant, you need to make these adjustments and more:

    • Use text that scales with browser settings.
    • Avoid relying on colour alone to communicate information.
    • Ensure visual elements are explained in text.
    • Offer audio alternatives for things like CAPTCHA.
    • Name form fields and interactive elements properly (see the markup sketch after this list).
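
    As a minimal markup sketch (my own illustration; the file names and labels are hypothetical), “visual elements explained in text” and “properly named form fields” could look like this:

    <!-- a visual element whose meaning is also conveyed in text -->
    <img src="q2-revenue-chart.png" alt="Bar chart: Q2 revenue grew 12% over Q1">

    <!-- a form field with a programmatically associated, descriptive name -->
    <label for="email">Email address</label>
    <input id="email" type="email" name="email" autocomplete="email">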

    Operable: Any user can navigate the site and complete tasks without issue

    The second priority is for users to navigate your website and complete tasks. Here are some of the main considerations for this section:

    • Navigation is possible through keyboard and text-to-speech interfaces.
    • You offer navigation tools to bypass repeated blocks of content (see the skip-link sketch after this list).
    • Buttons are properly titled and named.
    • You give impaired users enough time to finish processes without timing out.
    • You allow users to turn off unnecessary animations (and ensure none include three flashes or more within one second).
    • Links have a clear purpose from their link text (and context).
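
    A common way to meet the “bypass repeated blocks of content” item is a skip link placed before the navigation. A minimal sketch (illustrative markup, not from the original article):

    <a class="skip-link" href="#main-content">Skip to main content</a>
    <nav>…repeated site navigation…</nav>
    <main id="main-content">
      …page content…
    </main>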

    Understandable: Any user can read and understand the content

    The third priority is making the content understandable. You need to communicate as simply and as clearly as possible. Here are a few key points:

    • Software can determine the default language of each page (see the markup note after this list).
    • You use a consistent method to explain jargon or difficult terms.
    • You introduce the meaning of unfamiliar abbreviations and acronyms.
    • You offer tools to help users double-check and correct input.
    • The reading grade is not higher than grade 9. If it is, you must offer an alternative text with a lower grade.
    • Use consistent and predictable formatting and navigation.
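
    The first and third items above map to very small bits of markup; a minimal, illustrative sketch:

    <html lang="en">
    …
    <p>Your site should meet the
      <abbr title="Web Content Accessibility Guidelines">WCAG</abbr> criteria.</p>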

    This intro to accessibility guidelines should help you see the wide range of potential accessibility issues. Accessibility is not just about screen readers — it’s about ensuring a good user experience for users with a wide range of disabilities.

    Note: If you’re not hiring a third-party expert for the manual accessibility audit, this introduction isn’t enough. You need to familiarise yourself with all 50 success criteria in WCAG 2.2.

    How to do your first web accessibility audit

    Ready to find and fix the accessibility issues across your website? Follow the steps outlined below to do a successful accessibility audit.

    Start with an automated accessibility test

    To point you in the right direction, start with a digital accessibility checker. There are many free options, including:

    • Accessibility Checker
    • Silktide accessibility checker
    • AAArdvark

    When choosing a tool, check that it’s up to date with the newest accessibility guidelines. Many accessibility evaluation tools are still based on WCAG 2.1 rather than WCAG 2.2.

    The tool will give you a basic evaluation of the accessibility level of your site. A free report can quickly identify common issues with navigation, labelling, colour choices and more. 

    But this is only good as a starting point. Remember that even paid versions of these testing tools are limited and cannot replace a manual audit.

    Look for common issues

    The next step is to manually look for common issues that impact your site’s level of accessibility:

    • Non-descriptive alt text
    • Colour combinations (and lack of ability to change background and foreground colours)
    • Unscalable text
    • Different site content sections that are not properly labelled

    The software you use to create your site can lead to many of these issues. Is your content management system (CMS) compliant with ADA or WCAG? If not, you may want to move to a compliant CMS before continuing the audit.

    Pinpoint customer journeys and test them for accessibility 

    After you’ve fixed common issues, it’s essential to put the actual customer journey to the test. Explore your most important journeys with behavioural analytics tools like session recordings and funnel analysis.

    Analysing funnel reports lets you quickly identify each page that usually contributes to a sale. You will also have an overview of the most popular funnels to evaluate for accessibility.

    If your current web analytics platform doesn’t offer behavioural reports like these, Matomo can help. Our privacy-friendly web analytics solution includes funnel reports, session recordings, A/B testing, form analytics, heatmaps and more.

    If you don’t have the budget to test every page individually, this is the perfect place to start. You want to ensure that users with disabilities have no issues completing the main tasks on your site. 

    Don’t focus solely on your web pages 

    Accessibility barriers can also exist outside of your standard web pages. So ensure that other file formats like PDFs and videos are also accessible. 

    Remember that downloadable materials are also part of your digital experience. Always consider the needs of individuals with disabilities when accessing things like case studies or video tutorials. 

    Highlight high-priority issues in a detailed report

    To complete the audit, you need to summarise and highlight high-priority issues. In a larger company, this will be in the form of a report. The W3C’s Web Accessibility Initiative offers a free accessibility report template and an online tool to generate a report.

    For smaller teams, it may make sense to input issues directly into the product backlog or a task list. Then, you can tackle the issues, starting with high-priority pages identified earlier in this process.

    Avoid quick fixes and focus on sustainable improvement

    As mentioned, AI-powered overlay solutions aren’t compliant and put you at risk for lawsuits. It’s not enough to install a quick accessibility tool and pat yourself on the back.

    And it’s not just about accessibility compliance. These solutions provide a disjointed experience that alienates potential users. 

    The point of a digital accessibility audit is to identify issues and provide a better experience to all your users. So don’t try to cut corners. Do the work required to implement solutions that work seamlessly for everyone. Invest in a long-term accessibility remediation process.

    Deliver a frictionless experience while gaining insight into your users

    An accessibility audit is crucial to ensure an inclusive experience — that a wide variety of users can read and interact with your site.

    But what about the basic usability of your website? Are you sure the experience is without friction? Matomo’s behavioural analytics tools can show how users interact with your website.

    For example, heatmaps can show you where users are clicking — which can help you identify a pattern, like many users mistaking a visual element for a button.

    Plus, our privacy-friendly web analytics are compliant with GDPR, CCPA and other data privacy regulations. That helps protect you against privacy-related lawsuits, just as an accessibility audit protects you against ADA lawsuits.

    And it never hurts that your users know you respect their privacy. Try Matomo free for 21 days. No credit card required.