Advanced search

Media (0)

Word: - Tags -/performance

No media matching your criteria is available on this site.

Other articles (62)

  • Updating from version 0.1 to 0.3

    24 June 2013

    An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What's new?
    Software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes on your MediaSPIP site, or news about your projects, using the news section.
    In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news creation form.
    News creation form: for a news-type document, the default fields are: publication date (customise the publication date) (...)

On other sites (7816)

  • Introducing Crash Analytics for Matomo

    30 August 2023, by Erin — Community, Plugins

    Bugs and development go hand in hand. As code matures, it contends with new browser iterations, clashes with ad blockers and other software quirks, resulting in the inevitable emergence of bugs. In fact, a staggering 13% of all pageviews come with lurking JavaScript errors.

    Monitoring for crashes becomes an unrelenting task. Amidst this never-ending effort to remove bugs, a SurveyMonkey study unveils a shared reality: a resounding 66% of individuals have encountered bug-ridden websites.

    These bugs lead to problems like malfunctioning shopping carts, glitchy checkout procedures and contact forms that just won’t cooperate. But they’re not just minor annoyances – they pose a real danger to your conversion rates and revenue.

    According to a study, 58% of visitors are inclined to abandon purchases as a result of bugs, while an astonishing 75% are driven to completely abandon websites due to these frustrating experiences.

    Imagine a website earning approximately 25,000 EUR per month. Now, factor in errors occurring in 13% of all pageviews. The result? A potential monthly loss of 1,885 EUR.
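
    To make the arithmetic explicit, the figure appears to combine the 13% error rate with the 58% purchase-abandonment figure quoted above; a minimal sketch of that assumed calculation in Python:

    # Assumed reconstruction of the loss estimate quoted above; the article
    # does not spell out the exact formula, so treat this as illustrative.
    monthly_revenue = 25_000   # EUR earned per month
    error_rate = 0.13          # share of pageviews that hit a JavaScript error
    abandon_rate = 0.58        # share of visitors abandoning a purchase after a bug

    estimated_loss = monthly_revenue * error_rate * abandon_rate
    print(f"Estimated monthly loss: {estimated_loss:,.0f} EUR")  # about 1,885 EUR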

    Meet Crash Analytics

    Driven by our vision to create an empowering analytics product, we’re excited to introduce Crash Analytics, an innovative plugin for Matomo On-Premise that automatically tracks bugs on your website.

    Crash Analytics for Matomo Evolution Graph
    View crash reports by evolution over time

    By offering insights into the precise bug location and the user’s interactions that triggered it, along with details about their device type, browser and more, Crash Analytics empowers you to swiftly address crashes, leading to an improved user experience, higher conversion rates and revenue growth.

    Soon, Crash Analytics will become available to Matomo Cloud users as well, so stay tuned for further updates and announcements.

    Say goodbye to lost revenue – never miss a bug again

    Even if you put your website through the toughest tests, it’s hard to predict every little hiccup that can pop up across different browsers, setups and situations. Factors such as ad blockers, varying internet speeds for visitors and browser updates can add an extra layer of complexity.

    When these crashes happen, you want to know immediately. However, according to a study, only 29% of surveyed respondents would report the existence of a site bug to the website operator. Bugs that go unnoticed can really hurt your bottom line and conversion rates, causing you to lose out on revenue and leaving your users frustrated and disappointed.

    Crash detail report in Crash Analytics for Matomo
    Detailed crash report

    Crash Analytics is here to bridge this gap. Armed with scheduled reporting (via email or texts) and automated alert functionalities, you gain the power to instantly detect bugs as they occur on your site. This proactive approach ensures that even the subtlest of issues are brought to your attention promptly. 

    With automated reports and alerts, you can also opt to receive notifications when crashes increase or ignore specific crashes that you deem insignificant. This keeps you in the loop with only the issues that truly matter, helping you cut out the noise and take immediate action.

    Forward crash data

    Easily forward crash data to developers and synchronise the efforts of technical teams and marketing experts. Track emerging, disappearing and recurring errors, ensuring that crash data is efficiently relayed to developers to prioritise fixes that matter.

    Emerging, disappearing and recurring crashes in Crash Analytics for Matomo
    Track emerging, disappearing and recurring bugs

    Plus, your finger is always on the pulse with real-time reports that offer a live view of crashes happening at the moment, an especially helpful feature after deploying changes. Use annotations to mark deploys and correlate them with crash data, enabling you to quickly identify if a new bug is linked to recent updates or modifications.

    Crash data in real time

    And with our mobile app, you can effortlessly stay connected to your website’s performance, conveniently accessing crash information anytime and anywhere. This ensures you’re in complete control of your site’s health, even when you’re on the move.

    Streamline bug resolution with combined web and crash analytics

    Crash Analytics for Matomo doesn’t just stop at pinpointing bug locations; it goes a step further by providing you with a holistic perspective of user interactions. Seamlessly combining Matomo’s traditional and behavioural web analytics features—like segments, session recordings and visitor logs—with crash data, this integrated approach unveils a wealth of insights so you can quickly resolve bugs.

    For instance, let’s say a user encounters a bug while attempting to complete a purchase on your e-commerce website. Crash Analytics reveals the exact point of failure, but to truly grasp the situation, you delve into the session recordings. These recordings offer a front-row seat to the user’s journey—every click and interaction that led to the bug. Session recordings are especially helpful when you are struggling to reproduce an issue.

    Visits log combined with crash data in Matomo
    Visits log overlaid with crash data

    Additionally, the combination of visitor logs with crash data offers a comprehensive timeline of a user’s engagement. This helps you understand their activity leading up to the bug, such as pages visited, actions taken and devices used. Armed with these multifaceted insights, you can confidently pinpoint the root causes and address the crash immediately.

    With segments, you have the ability to dissect the data and compare experiences among distinct user groups. For example, you can compare mobile visitors to desktop visitors to determine if the issue is isolated or widespread and what impact the issue is having on the user experience of different user groups. 

    The combination of crash data with Matomo’s comprehensive web analytics equips you with the tools needed to elevate user experiences and ultimately drive revenue growth.

    Start in seconds, shape as needed: Your path to a 100% reliable website

    Crash Analytics makes the path to a reliable website simple. You don’t have to deal with intricate setups—crash detection starts without any configuration. 

    Plus, Crash Analytics excels in cross-stack proficiency, seamlessly extending its capabilities beyond automatically tracking JavaScript errors to covering server-side crashes as well, whether they occur in PHP, Android, iOS, Java or other frameworks. This versatile approach ensures that Crash Analytics comprehensively supports your website’s health and performance across various technological landscapes.

    Elevate your website with Crash Analytics

    Experience the seamless convergence of bug tracking and web analytics, allowing you to delve into user interactions, session recordings and visitor logs. With the flexibility of customising real-time alerts and scheduled reports, alongside cross-stack proficiency, Crash Analytics becomes your trusted ally in enhancing your website’s reliability and user satisfaction to increase conversions and drive revenue growth. Equip yourself to swiftly address issues and create a website where user experiences take precedence.

    Start your 30-day free trial of our Crash Analytics plugin today, and stay tuned for its availability on Matomo Cloud.

  • Subtitling Sierra VMD Files

    1 June 2016, by Multimedia Mike — Game Hacking

    I was contacted by a game translation hobbyist from Spain (henceforth known as The Translator). He had set his sights on Sierra’s 7-CD Phantasmagoria. This mammoth game was driven by a lot of FMV files and animations that have speech. These require language translation in the form of video subtitling. He’s lucky that he found possibly the one person on the whole internet who has just the right combination of skill, time, and interest to pull this off. And why would I care about helping? I guess I share a certain camaraderie with game hackers. Don’t act so surprised. You know what kind of stuff I like to work on.

    The FMV format used in this game is VMD, which makes an appearance in numerous Sierra titles. FFmpeg already supports decoding this format. FFmpeg also supports subtitling video. So, ideally, all that’s necessary to support this goal is to add a muxer for the VMD format which can encode raw video and audio, which the format supports. Implement video compression as extra credit.

    The pipeline that I envisioned looks like this:


    VMD Subtitling Process


    “Trivial!” I surmised. I just never learn, do I?

    The Plan
    So here’s my initial pitch, outlining the work I estimated that I would need to do towards the stated goal:

    1. Create a new file muxer that produces a syntactically valid VMD file with bogus video and audio data. Make sure it works with both FFmpeg’s playback system and the proper Phantasmagoria engine.
    2. Create a new video encoder that essentially operates in pass-through mode while correctly building a palette.
    3. Create a new basic encoder for the video frames.

    A big unknown for me was exactly how subtitle handling operates in FFmpeg. Thanks to this project, I now know. I was concerned because I was pretty sure that font rendering entails anti-aliasing which bodes poorly for keeping the palette count under 256 unique colors.

    Computer Science Puzzle
    When pondering how to process the palette, I was excited for the opportunity to exercise actual computer science. FFmpeg converts frames from paletted frames to full RGB frames. Then it needs to convert them back to paletted frames. I had a vague recollection of solving this problem once before when I was experimenting with a new paletted video codec. I seem to recall that I did the palette conversion in a very naive manner. I just used a static 256-element array and processed each RGB pixel of the frame, seeing if the value already occurred in the table (O(n) lookup) and adding it otherwise.

    There are more efficient algorithms, however, such as hash tables and trees. Somewhere along the line, FFmpeg helpfully acquired a rarely-used tree data structure, which was perfect for this project.
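
    As a rough illustration of the difference (not FFmpeg's actual tree-based code), here is a minimal Python sketch: a hash keyed on the RGB value makes each lookup effectively constant time, instead of scanning the whole palette for every pixel.

    # Hypothetical sketch of palette accumulation; FFmpeg's tree-based
    # implementation differs, this just shows the lookup idea.
    def build_palette(rgb_pixels, max_colors=256):
        palette = []          # index -> (r, g, b)
        index_of = {}         # (r, g, b) -> index, O(1) average lookup
        indexed_frame = []
        for pixel in rgb_pixels:
            idx = index_of.get(pixel)
            if idx is None:
                if len(palette) >= max_colors:
                    raise ValueError("more than 256 unique colors in frame")
                idx = len(palette)
                palette.append(pixel)
                index_of[pixel] = idx
            indexed_frame.append(idx)
        return palette, indexed_frame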

    So I was pretty pleased with this optimization. Too bad this wouldn’t survive to the end of the effort.

    Another palette-related challenge was the fact that a group of pictures would be accumulating a new palette but that palette needed to be recorded before the group. Thus, the muxer needed to have extra logic to rewind the file when the video encoder transmitted a palette change.
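
    A generic sketch of that rewind pattern, assuming a seekable output file (this is not the actual muxer code): reserve space for the palette, keep writing frames, then seek back once the palette is known.

    # Reserve a placeholder for the palette, write the group of frames,
    # then seek back and fill the palette in once the encoder reports it.
    def write_group(outfile, palette_bytes, frame_chunks, palette_size=768):
        palette_offset = outfile.tell()
        outfile.write(b"\x00" * palette_size)   # placeholder for the palette
        for chunk in frame_chunks:
            outfile.write(chunk)
        end_offset = outfile.tell()
        outfile.seek(palette_offset)
        outfile.write(palette_bytes)            # overwrite the placeholder
        outfile.seek(end_offset)                # resume writing at the end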

    Video Compression
    VMD has a few methods in its compression toolbox. It can use interframe differencing, it has some RLE, or it can code a frame raw. It can also use a custom LZ-like format on top of these. For early prototypes, I elected to leave each frame coded raw. After the concept was proved, I implemented the frame differencing.


    VMD frame #1

    VMD frame #2

    VMD frame difference
    Top frame compared with the middle frame yields the bottom frame: red pixels indicate changes

    Encoding only those red dots in between vast runs of unchanged pixels yielded a vast measurable improvement. The next step was to try wiring up FFmpeg’s existing LZ compression facilities to the encoder. This turned out to be implausible since VMD’s LZ variant has nothing to do with anything FFmpeg already provides. Fortunately, the LZ piece is not absolutely required and the frame differencing + RLE provides plenty of compression.
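
    A minimal sketch of the differencing idea, assuming flattened 8-bit paletted frames of equal size; VMD's actual on-disk RLE/LZ layout is not reproduced here.

    # Illustrative interframe differencing + run-length coding on paletted frames.
    # "skip" runs mark unchanged pixels, "copy" runs carry the changed pixel values.
    def diff_rle(prev_frame, cur_frame):
        assert len(prev_frame) == len(cur_frame)
        ops = []
        i, n = 0, len(cur_frame)
        while i < n:
            j = i
            if cur_frame[i] == prev_frame[i]:
                while j < n and cur_frame[j] == prev_frame[j]:
                    j += 1
                ops.append(("skip", j - i))
            else:
                while j < n and cur_frame[j] != prev_frame[j]:
                    j += 1
                ops.append(("copy", bytes(cur_frame[i:j])))
            i = j
        return ops

    # Example: only the middle byte changed between the two frames.
    ops = diff_rle(b"\x01\x01\x01", b"\x01\x02\x01")
    # ops == [("skip", 1), ("copy", b"\x02"), ("skip", 1)]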

    Subtitling
    I’ve never done anything, multimedia programming-wise, concerning subtitles. I guess all the entertainment I care about has always been in my native tongue. What a good excuse to program outside of my comfort zone!

    First, I needed to know how to access FFmpeg’s subtitling facilities. Fortunately, The Translator did the legwork on this matter so I didn’t have to figure it out.

    However, I intuitively had misgivings about this phase. I had heard that the subtitling process performs anti-aliasing. That means that the image would need to be promoted to a higher colorspace for this phase and that the anti-aliasing process would likely push the color count way past 256. Some quick tests revealed this to be the case, as the running color count would leap by several hundred colors as soon as the palette accounting algorithm encountered a subtitle.

    So I dug into the subtitle subsystem. I discovered that the subtitle library operates by creating a linked list of subtitle bitmaps that the client app must render. The bitmaps are composed of 8-bit alpha transparency values that must be composited onto the target frame (i.e., 0 = transparent, 255 = 100% opaque). For example, the letter ‘H’:

                                      (with 00s removed)
    13 F8 41 00 00 00 00 68 E4  |  13 F8 41             68 E4    
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF DC D0 D0 D0 D0 E4 EC  |  14 FF DC D0 D0 D0 D0 E4 EC
    14 FF 7E 50 50 50 50 9A EC  |  14 FF 7E 50 50 50 50 9A EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    14 FF 44 00 00 00 00 6C EC  |  14 FF 44             6C EC
    11 E0 3B 00 00 00 00 5E CE  |  11 E0 3B             5E CE
    

    To get around the color explosion problem, I chose a threshold value and quantized values above and below to 255 and 0, respectively. Further, the process chooses an appropriate color from the existing palette rather than introducing any new colors.
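
    A rough sketch of that quantization step, assuming the subtitle text color has already been mapped to an existing palette index (the real code lives in the author's FFmpeg fork and subtitle-vmd.c):

    # Illustrative alpha thresholding: anything at or above the threshold becomes
    # fully opaque subtitle text, anything below leaves the frame untouched.
    ALPHA_THRESHOLD = 128

    def composite_subtitle(frame, sub_alpha, sub_x, sub_y, sub_w, sub_h,
                           frame_w, text_palette_index):
        # frame: flattened paletted frame (mutable, width frame_w)
        # sub_alpha: flattened 8-bit alpha bitmap of the subtitle (width sub_w)
        for row in range(sub_h):
            for col in range(sub_w):
                if sub_alpha[row * sub_w + col] >= ALPHA_THRESHOLD:
                    frame[(sub_y + row) * frame_w + (sub_x + col)] = text_palette_index
        return frame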

    Muxing Matters
    In order to force VMD into a general purpose media framework, a lot of special information needs to be passed around. Like many paletted codecs, the palette needs to be transmitted from the file demuxer to the video decoder via some side channel. For re-encoding, this also implies that the palette needs to make the trip from the video encoder to the file muxer. As if this wasn’t enough, individual VMD frames have even more data that needs to be ferried between the muxer and codec levels, including frame change boundaries. FFmpeg provides methods to do these things, but I could not always rely on the systems to relay the data in all cases. I was probably doing something wrong; I accept that. Instead, I just packed all the information at the front of an encoded frame and split it apart in the muxer.
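
    A hypothetical illustration of that workaround; the names and byte layout here are invented for the example, and the real packing format is simply whatever the encoder and muxer agree on.

    import struct

    # Hypothetical packing: prepend the palette and frame metadata to the
    # encoded frame so the muxer can split them back out again.
    def pack_frame(palette_rgb, change_box, encoded_frame):
        # palette_rgb: 256 * 3 bytes; change_box: (left, top, right, bottom)
        header = struct.pack("<4H", *change_box)
        return bytes(palette_rgb) + header + encoded_frame

    def unpack_frame(packed):
        palette_rgb = packed[:768]
        change_box = struct.unpack("<4H", packed[768:776])
        encoded_frame = packed[776:]
        return palette_rgb, change_box, encoded_frame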

    I could not quite figure out how to get the audio and video muxed correctly. As a result, neither FFmpeg nor the Phantasmagoria engine could replay the files correctly.

    Plan B
    Since I was having so much trouble creating an entirely new VMD file, likely due to numerous unknown bits of the file format, I thought of another angle : re-use the existing VMD file. For this approach, I kept the video encoder and file muxer that I created in the initial phase, but modified the file muxer to emit a special intermediate file. Then, I created a Python tool to repackage the original VMD file using compressed video data in the intermediate file.

    For this phase, I also implemented a command line switch for FFmpeg to disable subtitle blending, to make the feature feel like less of an unofficial hack, as though this nonsense would ever have a chance of being incorporated upstream.

    At this point, I was seeing some success with the complete, albeit roundabout, subtitling process. I constructed a subtitle file using “Spanish I Learned From Mexican Telenovelas” and the frames turned out fairly readable:


    Le puso los cuernos a él

    “she cheated on him”


    es un desgraciado

    “he’s a scumbag” … these random subtitles could fit surprisingly well!


    The few files that I tested appeared to work fine. But then I handed off my work to The Translator and he immediately found a bunch of problems. According to my notes, the problems mostly took the form of flashing, solid color frames. Further, I found tiny, mostly imperceptible flaws in my RLE compressor, usually only detectable by running strict comparison tools, but I wasn’t satisfied.

    At this point, I think I attempted to just encode the entire palette at the front of each frame, as allowed by the format, but that did not seem to fix any problems. My notes are not completely clear on this matter (likely because I was still trying to figure out the exact problem), but I think it had to do with FFmpeg inserting extra video frames in order to even out gaps in the video framerate.

    Sigh, Plan C
    At this point, I was getting tired of trying to force FFmpeg to do this. So I decided to minimize its involvement using lessons learned up to this point.

    The next pitch:

    1. Create a new C program that can open an existing VMD file and output an identical VMD file. I know this sounds easy, but the specific method of copying entails interpreting individual parts of the file and writing those individual parts to the new file. This is in preparation for…
    2. Import the VMD video decoder functions directly into the program to decode the individual video frames and re-encode them, replacing the video frames as the file is rewritten.
    3. Wire up the subtitle system. During the adventure to disable subtitle blending, I accidentally learned enough about interfacing to the subtitle library to just invoke it directly.
    4. Rewrite the RLE method so that it is 100% correct.

    Off to work I went. That part about lifting the existing VMD decoder functions out of their libavcodec nest turned out to not be that straightforward. As an alternative, I modified the decoder to dump the raw frames to an intermediate file. In doing so, I think I was able to avoid the issue of the duplicated frames that plagued the previous efforts.

    Also, remember how I was really pleased with the palette conversion technique in which I was able to leverage computer science big-O theory? By this stage, I had no reason to convert the paletted video to RGB in the first place; all of the decoding, subtitling and re-encoding operates in the paletted colorspace.

    This approach seemed to work pretty well. The final program is subtitle-vmd.c. The process is still a little weird. The modifications in my own FFmpeg fork are necessary to create an intermediate file that the new C tool can operate with.

    Next Steps
    The Translator has found some assorted bugs and corner cases that still need to be ironed out. Further, for extra credit, I need to find the change windows for each frame to improve compression just a little more. I don’t think I will be trying for LZ compression, though.

    However, almost as soon as I had this whole system working, The Translator informed me that there is another, different movie format in play in the Phantasmagoria engine called ROBOT, with an extension of RBT. Fortunately, enough of the algorithms have been reverse engineered and re-implemented in ScummVM that I was able to sort out enough details for another subtitling project. That will be the subject of a future post.

    See Also:

  • The use cases for a <main> element in HTML

    1 January 2014, by silvia

    The W3C HTML WG and the WHATWG are currently discussing the introduction of a <main> element into HTML.

    The <main> element has been proposed by Steve Faulkner and is specified in a draft extension spec which is about to be accepted as a FPWD (first public working draft) by the W3C HTML WG. This implies that the W3C HTML WG will be looking for implementations and for feedback by implementers on this spec.

    I am supportive of the introduction of a <main> element into HTML. However, I believe that the current spec and use case list don’t make a good enough case for its introduction. Here are my thoughts.

    Main use case : accessibility

    In my opinion, the main use case for the introduction of <main> is accessibility.

    Like any other users, when blind users want to perceive a Web page/application, they need to have a quick means of grasping the content of a page. Since they cannot visually scan the layout and thus determine where the main content is, they use accessibility technology (AT) to find what is known as “landmarks”.

    “Landmarks” tell the user what semantic content is on a page: a header (such as a banner), a search box, a navigation menu, some asides (also called complementary content), a footer… and the most important part: the main content of the page. It is this main content that a blind user most often wants to skip to directly.

    In the days of HTML4, a hidden “skip to content” link at the beginning of the Web page was used as a means to help blind users access the main content.

    In the days of ARIA, the ARIA @role=main attribute enables authors to avoid a hidden link and instead mark the element where the main content begins, allowing direct access to the main content. This attribute is supported by AT – in particular screen readers – by making it part of the landmarks that AT can directly skip to.

    Both the hidden link and the ARIA @role=main approaches are, however, band-aids: they are being used by those of us who make “finished” Web pages accessible by adding specific extra markup.

    A world where ARIA is not necessary and where accessibility developers would be out of a job because the normal markup that everyone writes already creates accessible Web sites/applications would be much preferable over the current world of band-aids.

    Therefore, to me, the primary use case for a <main> element is to achieve exactly this better world and not require specialized markup to tell a user (or a tool) where the main content on a page starts.

    An immediate effect would be that pages that have a <main> element will expose a “main” landmark to blind and vision-impaired users that will enable them to directly access that main content on the page without having to wade through other text on the page. Without a <main> element, this functionality can currently only be provided using heuristics to skip other semantic and structural elements and is for this reason not typically implemented in AT.

    Other use cases

    The <main> element is a semantic element not unlike other new semantic elements such as <header>, <footer>, <aside>, <article>, <nav>, or <section>. Thus, it can also serve other uses where the main content on a Web page/Web application needs to be identified.

    Data mining

    For data mining of Web content, the identification of the main content is one of the key challenges. Many scholarly articles have been published on this topic. This stackoverflow article references and suggests a multitude of approaches, but the accepted answer says “there’s no way to do this that’s guaranteed to work”. This is because Web pages are inherently complex and many <div>, <p>, <iframe> and other elements are used to provide markup for styling, notifications, ads, analytics and other use cases that are necessary to make a Web page complete, but don’t contribute to what a user consumes as semantically rich content. A <main> element will allow authors to pro-actively direct data mining tools to the main content.

    Search engines

    One particularly important class of “data mining” tools is search engines. They, too, have a hard time identifying which sections of a Web page are more important than others and employ many heuristics to do so, see e.g. this ACM article. Yet, they still disappoint with poor results, pointing to findings of keywords in barely relevant sections of a page rather than ranking Web pages higher where the keywords turn up in the main content area. A <main> element would be able to help search engines give text in main content areas a higher weight and prefer them over other areas of the Web page. It would be able to rank different Web pages depending on where on the page the search words are found. The <main> element will be an additional hint that search engines will digest.

    Visual focus

    On small devices, the display of Web pages designed for Desktop often causes confusion as to where the main content can be found and read, in particular when the text ends up being too small to be readable. It would be nice if browsers on small devices had a functionality (maybe a default setting) where Web pages would start being displayed as zoomed in on the main content. This could alleviate some of the headaches of responsive Web design, where the recommendation is to show high priority content as the first content. Right now this problem is addressed through stylesheets that re-layout the page differently depending on device, but again this is a band-aid solution. Explicit semantic markup of the main content can solve this problem more elegantly.

    Styling

    Finally, naturally, <main> would also be used to style the main content differently from the rest. You can e.g. replace a semantically meaningless <div id=”main”> with a semantically meaningful <main> where their position is identical. My analysis below shows that this is not always the case, since oftentimes <div id=”main”> is used to group everything together that is not the header – in particular where there are multiple columns. Thus, the ease of styling a <main> element is only a positive side effect and not actually a real use case. It does make it easier, however, to adapt the style of the main content e.g. with media queries.

    Proposed alternative solutions

    It has been proposed that existing markup serves to satisfy the use cases that <main> has been proposed for. Let’s analyse these on some of the most popular Web sites. First let’s list the proposed algorithms.

    Proposed solution No 1 : Scooby-Doo

    On Sat, Nov 17, 2012 at 11:01 AM, Ian Hickson <ian@hixie.ch> wrote:
    | The main content is whatever content isn’t
    | marked up as not being main content (anything not marked up with <header>,
    | <aside>, <nav>, etc).
    

    This implies that the first element that is not a <header>, <aside>, <nav>, or <footer> will be the element that we want to give to a blind user as the location where they should start reading. The algorithm is implemented in https://gist.github.com/4032962.
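
    The linked gist is not reproduced here, but a rough Python sketch of the heuristic, assuming a simplified DOM node with .tag and .children attributes, might look like this:

    from collections import namedtuple

    # Simplified stand-in for a DOM element: a tag name plus child elements.
    Element = namedtuple("Element", ["tag", "children"])

    # Tags that the "Scooby-Doo" heuristic treats as "not main content".
    NON_MAIN_TAGS = {"header", "aside", "nav", "footer"}

    def find_main(body):
        """Return the first child of <body> not marked up as non-main content."""
        for child in body.children:
            if child.tag in NON_MAIN_TAGS:
                continue
            return child
        return None

    # Example: <body><header/><nav/><div/><footer/></body>  ->  the <div>
    body = Element("body", [Element("header", []), Element("nav", []),
                            Element("div", []), Element("footer", [])])
    assert find_main(body).tag == "div"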

    Proposed solution No 2 : First article element

    On Sat, Nov 17, 2012 at 8:01 AM, Ian Hickson wrote:
    | On Thu, 15 Nov 2012, Ian Yang wrote:
    | >
    | > That’s a good idea. We really need an element to wrap all the <p>s,
    | > <ul>s, <ol>s, <figure>s, <table>s ... etc of a blog post.
    |
    | That’s called <article>.
    

    This approach identifies the first <article> element on the page as containing the main content. Here’s the algorithm for this approach.

    Proposed solution No 3 : An example heuristic approach

    The readability plugin has been developed to make Web pages readable by essentially removing all the non-main content from a page. An early source of readability is available. This demonstrates what a heuristic approach can achieve.

    Analysing alternative solutions

    Comparison

    I’ve picked 4 typical Websites (top on Alexa) to analyse how these three different approaches fare. Ideally, I’d like to simply apply the above three scripts and compare pictures. However, since the semantic HTML5 elements <header>, <aside>, <nav>, and <footer> are not actually used by any of these Web sites, I don’t actually have this choice.

    So, instead, I decided to make some assumptions of where these semantic elements would be used and what the outcome of applying the first two algorithms would be. I can then compare it to the third, which is a product so we can take screenshots.

    Google.com

    http://google.com – search for “Scooby Doo”.

    The search results page would likely be built with:

    • a <nav> menu for the Google bar
    • a <header> for the search bar
    • another <header> for the login section
    • another <nav> menu for the search types
    • a <div> to contain the rest of the page
    • a <div> for the app bar with the search number
    • a few <aside>s for the left and right column
    • a set of <article>s for the search results

    “Scooby Doo” would find the first element after the headers as the “main content”. This is the element before the app bar in this case. Interestingly, there is a <div @id=main> already in the current Google results page, which “Scooby Doo” would likely also pick. However, there are a nav bar and two asides in this div, which clearly should not be part of the “main content”. Google actually placed a @role=main on a different element, namely the one that encapsulates all the search results.

    “First Article” would find the first search result as the “main content”. While not quite the same as what Google intended – namely all search results – it is close enough to be useful.

    The “readability” result is interesting, since it is not able to identify the main text on the page. It is actually aware of this problem and shows a warning before displaying this page:

    Readability of google.com

    Facebook.com

    https://facebook.com

    A user page would likely be built with:

    • a <header> bar for the search and login bar
    • a <div> to contain the rest of the page
    • an <aside> for the left column
    • a <div> to contain the center and right column
    • an <aside> for the right column
    • a <header> to contain the center column “megaphone”
    • a <div> for the status posting
    • a set of <article>s for the home stream

    “Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains all three columns. It’s actually a <div @id=content> already in the current Facebook user page, which “Scooby Doo” would likely also pick. However, Facebook selected a different element to place the @role=main: the center column.

    “First Article” would find the first news item in the home stream. This is clearly not what Facebook intended, since they placed the @role=main on the center column, above the first blog post’s title. “First Article” would miss that title and the status posting.

    The “readability” result again disappoints but warns that it failed:

    YouTube.com

    http://youtube.com

    A video page would likely be built with:

    • a <header> bar for the search and login bar
    • a <nav> for the menu
    • a <div> to contain the rest of the page
    • a <header> for the video title and channel links
    • a <div> to contain the video with controls
    • a <div> to contain the center and right column
    • an <aside> for the right column with an <article> per related video
    • an <aside> for the information below the video
    • an <article> per comment below the video

    “Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains the rest of the page. It’s actually a <div @id=content> already in the current YouTube video page, which “Scooby Doo” would likely also pick. However, YouTube’s related videos and comments are unlikely to be what the user would regard as “main content” – it’s the video they are after, which generously has a <div id=watch-player>.

    “First Article” would find the first related video or comment in the home stream. This is clearly not what YouTube intends.

    The “readability” result is not quite as unusable, but still very bare:

    Wikipedia.com

    http://wikipedia.com (“Overscan” page)

    A Wikipedia page would likely be built with:

    • a <header> bar for the search, login and menu items
    • a <div> to contain the rest of the page
    • an <article> with title and lots of text
    • an <aside> with the table of contents
    • several <aside>s for the left column

    Good news: “Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains the rest of the page. It’s actually a <div id=”content” role=”main”> element on Wikipedia, which “Scooby Doo” would likely also pick.

    “First Article” would find the title and text of the main element on the page, but it would also include an <aside>.

    The “readability” result is also in agreement.

    Results

    In the following table we have summarised the results for the experiments:

    Site           Scooby-Doo   First article   Readability
    Google.com     FAIL         SUCCESS         FAIL
    Facebook.com   FAIL         FAIL            FAIL
    YouTube.com    FAIL         FAIL            FAIL
    Wikipedia.com  SUCCESS      SUCCESS         SUCCESS

    Clearly, Wikipedia is the prime example of a site where even the simple approaches find it easy to determine the main content on the page. WordPress blogs are similarly successful. Almost any other site, including news sites, social networks and search engine sites, is pretty hopeless with the proposed approaches, because there are too many elements that are used for layout or other purposes (notifications, hidden areas), such that the pre-determined list of available semantic elements simply doesn't suffice to mark up a Web page/application completely.

    Conclusion

    It seems that in general it is impossible to determine which element(s) on a Web page should be the “main” piece of content that accessibility tools jump to when requested, that a search engine should put their focus on, or that should be highlighted to a general user to read. It would be very useful if the author of the Web page would provide a hint through a <main> element where that main content is to be found.

    I think that the <main> element becomes particularly useful when combined with a default keyboard shortcut in browsers as proposed by Steve: we may actually find that non-accessibility users will also start making use of this shortcut, e.g. to get to videos on YouTube pages directly without having to tab over search boxes and other interactive elements, etc. Worthwhile markup indeed.