
Media (1)
- Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
Other articles (67)
- Participate in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
To do so, the SPIP translation interface is used, where all of MediaSPIP's language modules are available. You simply need to sign up to the translators' mailing list to ask for more information.
At present, MediaSPIP is only available in French and (...)
- Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it is activated, a preconfiguration is automatically put in place by MediaSPIP init, making the new functionality operational without further work. It is therefore not necessary to go through a configuration step for this.
- Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP site to find out.
On other sites (13356)
- Documentation articles to write or update for the next version (0.2)
16 November 2012
A lot of changes will need to be made to the documentation for the release of the next version.
New plugins need to be documented:
- embed_doc
The documentation of existing plugins needs to be updated following improvements:
- inscription3;
- mediaspip_player;
MediaSPIP will be compatible with more plugins; they need to be listed, and the interactions between MediaSPIP and them explained:
- Interactions with social networks:
- microblog;
- facebook;
- oembed;
There are many others (add them in a comment if you spot any).
The use cases for a <main> element in HTML
27 November 2012, by silvia
The W3C HTML WG and the WHATWG are currently discussing the introduction of a <main> element into HTML.
The <main> element has been proposed by Steve Faulkner and is specified in a draft extension spec which is about to be accepted as a FPWD (first public working draft) by the W3C HTML WG. This implies that the W3C HTML WG will be looking for implementations and for feedback by implementers on this spec.
I am supportive of the introduction of a <main> element into HTML. However, I believe that the current spec and use case list don’t make a good enough case for its introduction. Here are my thoughts.
Main use case: accessibility
In my opinion, the main use case for the introduction of <main> is accessibility.
Like any other user, a blind user who wants to perceive a Web page/application needs a quick means of grasping the content of the page. Since blind users cannot visually scan the layout and thus determine where the main content is, they use accessibility technology (AT) to find what are known as “landmarks”.
“Landmarks” tell the user what semantic content is on a page: a header (such as a banner), a search box, a navigation menu, some asides (also called complementary content), a footer, … and the most important part: the main content of the page. It is this main content that a blind user most often wants to skip to directly.
In the days of HTML4, a hidden “skip to content” link at the beginning of the Web page was used as a means to help blind users access the main content.
In the days of ARIA, the ARIA @role=main attribute enables authors to avoid a hidden link and instead mark the element where the main content begins, allowing direct access to it. This attribute is supported by AT – in particular screen readers – by making it part of the landmarks that AT can skip to directly.
Both the hidden link and the ARIA @role=main approaches are, however, band-aids: they are used by those of us who make “finished” Web pages accessible by adding specific extra markup.
A world where ARIA is not necessary, and where accessibility developers would be out of a job because the normal markup that everyone writes already creates accessible Web sites and applications, would be much preferable to the current world of band-aids.
Therefore, to me, the primary use case for a <main> element is to achieve exactly this better world and not require specialized markup to tell a user (or a tool) where the main content on a page starts.
An immediate effect would be that pages that have a <main> element will expose a “main” landmark to blind and vision-impaired users that will enable them to directly access that main content on the page without having to wade through other text on the page. Without a <main> element, this functionality can currently only be provided using heuristics to skip other semantic and structural elements and is for this reason not typically implemented in AT.
Other use cases
The <main> element is a semantic element not unlike other new semantic elements such as <header>, <footer>, <aside>, <article>, <nav>, or <section>. Thus, it can also serve other uses where the main content on a Web page/Web application needs to be identified.
Data mining
For data mining of Web content, the identification of the main content is one of the key challenges. Many scholarly articles have been published on this topic. This stackoverflow article references and suggests a multitude of approaches, but the accepted answer says “there’s no way to do this that’s guaranteed to work”. This is because Web pages are inherently complex and many <div>, <p>, <iframe> and other elements are used to provide markup for styling, notifications, ads, analytics and other use cases that are necessary to make a Web page complete, but don’t contribute to what a user consumes as semantically rich content. A <main> element will allow authors to pro-actively direct data mining tools to the main content.
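Such a tool could, for example, check for an explicit <main> (or ARIA @role=main) first and only fall back to guesswork when neither is present. The following is a minimal browser-side sketch of that idea (not from the original article; the fallback heuristic is deliberately crude):

// Sketch: prefer the author-provided signal, fall back to a crude heuristic.
function extractMainContent(doc) {
  // Explicit signal from the author: <main> or role=main.
  const explicit = doc.querySelector('main, [role="main"]');
  if (explicit) return explicit.textContent.trim();

  // Fallback: pick the container with the most direct paragraph text,
  // a crude stand-in for the "multitude of approaches" mentioned above.
  let best = doc.body, bestLength = 0;
  for (const el of doc.querySelectorAll('div, section, article')) {
    const length = Array.from(el.querySelectorAll(':scope > p'))
      .reduce((sum, p) => sum + p.textContent.length, 0);
    if (length > bestLength) { best = el; bestLength = length; }
  }
  return best.textContent.trim();
}

Usage would simply be extractMainContent(document) on the page in question.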
Search engines
One particularly important “data mining” tool is the search engine. Search engines, too, have a hard time identifying which sections of a Web page are more important than others and employ many heuristics to do so, see e.g. this ACM article. Yet they still disappoint with poor results that point to keyword matches in barely relevant sections of a page, rather than ranking Web pages higher where the keywords turn up in the main content area. A <main> element would help search engines give text in main content areas a higher weight and prefer them over other areas of the Web page. Search engines would also be able to rank pages differently depending on where on the page the search words are found. The <main> element will be an additional hint that search engines will digest.
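As an illustration only (no real search engine works like this), a toy indexer could give terms that appear inside <main> extra weight relative to the rest of the page:

// Toy term-weighting sketch: terms inside <main> count extra.
function termWeights(doc, boost = 3) {
  const weights = new Map();
  const add = (text, weight) => {
    for (const term of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      weights.set(term, (weights.get(term) || 0) + weight);
    }
  };
  add(doc.body.textContent, 1);               // every term counts once
  const main = doc.querySelector('main');
  if (main) add(main.textContent, boost - 1); // main-content terms count "boost" times
  return weights;
}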
Visual focus
On small devices, the display of Web pages designed for Desktop often causes confusion as to where the main content can be found and read, in particular when the text ends up being too small to be readable. It would be nice if browsers on small devices had a functionality (maybe a default setting) where Web pages would start being displayed as zoomed in on the main content. This could alleviate some of the headaches of responsive Web design, where the recommendation is to show high priority content as the first content. Right now this problem is addressed through stylesheets that re-layout the page differently depending on device, but again this is a band-aid solution. Explicit semantic markup of the main content can solve this problem more elegantly.
Styling
Finally, naturally, <main> would also be used to style the main content differently from the rest. You can, e.g., replace a semantically meaningless <div id="main"> with a semantically meaningful <main> where their position is identical. My analysis below shows that this is not always the case, since oftentimes <div id="main"> is used to group everything together that is not the header – in particular where there are multiple columns. Thus, the ease of styling a <main> element is only a positive side effect and not actually a real use case. It does make it easier, however, to adapt the style of the main content, e.g. with media queries.
Proposed alternative solutions
It has been proposed that existing markup serves to satisfy the use cases that <main> has been proposed for. Let's analyse these on some of the most popular Web sites. First, let's list the proposed algorithms.
Proposed solution No 1: Scooby-Doo
On Sat, Nov 17, 2012 at 11:01 AM, Ian Hickson <ian@hixie.ch> wrote: "The main content is whatever content isn't marked up as not being main content (anything not marked up with <header>, <aside>, <nav>, etc)."
This implies that the first element that is not a <header>, <aside>, <nav>, or <footer> will be the element that we want to give to a blind user as the location where they should start reading. The algorithm is implemented in https://gist.github.com/4032962.
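For illustration, a minimal reading of that rule (a sketch of the rule as stated, not the linked gist) could look like this:

// "Scooby-Doo" sketch: the main content is the first child of <body> that is
// not explicitly marked up as header, aside, nav or footer.
function scoobyDooMain(doc) {
  const notMain = new Set(['HEADER', 'ASIDE', 'NAV', 'FOOTER']);
  for (const el of doc.body.children) {
    if (!notMain.has(el.tagName)) return el;
  }
  return null; // nothing left that could be the main content
}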
Proposed solution No 2: First article element
On Sat, Nov 17, 2012 at 8:01 AM, Ian Hickson <ian@hixie.ch> wrote, replying to Ian Yang's suggestion of Thu, 15 Nov 2012 ("That's a good idea. We really need an element to wrap all the <p>s, <ul>s, <ol>s, <figure>s, <table>s ... etc of a blog post."): "That's called <article>."
This approach identifies the first <article> element on the page as containing the main content. Here’s the algorithm for this approach.
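It essentially reduces to a single query; a minimal sketch (not the referenced implementation) would be:

// "First article" sketch: the main content is simply the first <article>.
function firstArticleMain(doc) {
  return doc.querySelector('article'); // null if the page uses no <article> at all
}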
Proposed solution No 3: An example heuristic approach
The readability plugin has been developed to make Web pages readable by essentially removing all the non-main content from a page. An early source of readability is available. This demonstrates what a heuristic approach can achieve.
Analysing alternative solutions
Comparison
I've picked four typical Web sites (top-ranked on Alexa) to analyse how these three different approaches fare. Ideally, I'd like to simply apply the above three scripts and compare pictures. However, since the semantic HTML5 elements <header>, <aside>, <nav>, and <footer> are not actually used by any of these Web sites, I don't actually have this choice.
So, instead, I decided to make some assumptions about where these semantic elements would be used and what the outcome of applying the first two algorithms would be. I can then compare it to the third, which is an existing product, so we can take screenshots.
Google.com
http://google.com – search for “Scooby Doo”.
The search results page would likely be built with:
- a <nav> menu for the Google bar
- a <header> for the search bar
- another <header> for the login section
- another <nav> menu for the search types
- a <div> to contain the rest of the page
- a <div> for the app bar with the search number
- a few <aside>s for the left and right column
- a set of <article>s for the search results
“Scooby Doo” would find the first element after the headers as the “main content”. This is the element before the app bar in this case. Interestingly, there is a <div @id=main> already in the current Google results page, which “Scooby Doo” would likely also pick. However, there are a nav bar and two asides in this div, which clearly should not be part of the “main content”. Google actually placed a @role=main on a different element, namely the one that encapsulates all the search results.
“First Article” would find the first search result as the “main content”. While not quite the same as what Google intended – namely all search results – it is close enough to be useful.
The “readability” result is interesting, since it is not able to identify the main text on the page. It is actually aware of this problem and displays a warning before rendering the page.
Facebook.com
A user page would likely be built with:
- a <header> bar for the search and login bar
- a <div> to contain the rest of the page
- an <aside> for the left column
- a <div> to contain the center and right column
- an <aside> for the right column
- a <header> to contain the center column “megaphone”
- a <div> for the status posting
- a set of <article>s for the home stream
“Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains all three columns. It's actually a <div @id=content> already in the current Facebook user page, which “Scooby Doo” would likely also pick. However, Facebook selected a different element to place the @role=main: the center column.
“First Article” would find the first news item in the home stream. This is clearly not what Facebook intended, since they placed the @role=main on the center column, above the first blog post's title. “First Article” would miss that title and the status posting.
The “readability” result again disappoints, but it warns that it failed.
YouTube.com
A video page would likely be built with:
- a <header> bar for the search and login bar
- a <nav> for the menu
- a <div> to contain the rest of the page
- a <header> for the video title and channel links
- a <div> to contain the video with controls
- a <div> to contain the center and right column
- an <aside> for the right column with an <article> per related video
- an <aside> for the information below the video
- an <article> per comment below the video
“Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains the rest of the page. It's actually a <div @id=content> already in the current YouTube video page, which “Scooby Doo” would likely also pick. However, YouTube's related videos and comments are unlikely to be what the user would regard as “main content” – it's the video they are after, which generously has a <div id=watch-player>.
“First Article” would find the first related video or comment in the home stream. This is clearly not what YouTube intends.
The “readability” result is not quite as unusable, but it is still very bare.
Wikipedia.com
http://wikipedia.com (“Overscan” page)
A Wikipedia page would likely be built with:
- a <header> bar for the search, login and menu items
- a <div> to contain the rest of the page
- an <article> with title and lots of text
- inside the <article>, an <aside> with the table of contents
- several <aside>s for the left column
Good news: “Scooby Doo” would find the first element after the headers as the “main content”. This is the element that contains the rest of the page. It's actually a <div id="content" role="main"> element on Wikipedia, which “Scooby Doo” would likely also pick.
“First Article” would find the title and text of the main element on the page, but it would also include an <aside>.
The “readability” result is also in agreement.
Results
In the following table we have summarised the results of the experiments:

Site            Scooby-Doo   First article   Readability
Google.com      FAIL         SUCCESS         FAIL
Facebook.com    FAIL         FAIL            FAIL
YouTube.com     FAIL         FAIL            FAIL
Wikipedia.com   SUCCESS      SUCCESS         SUCCESS

Clearly, Wikipedia is the prime example of a site where even the simple approaches find it easy to determine the main content on the page. WordPress blogs are similarly successful. Almost any other site, including news sites, social networks and search engine sites, is pretty hopeless with the proposed approaches, because there are too many elements that are used for layout or other purposes (notifications, hidden areas), such that the pre-determined list of available semantic elements simply doesn't suffice to mark up a Web page/application completely.
Conclusion
It seems that in general it is impossible to determine which element(s) on a Web page should be the “main” piece of content that accessibility tools jump to when requested, that a search engine should put their focus on, or that should be highlighted to a general user to read. It would be very useful if the author of the Web page would provide a hint through a <main> element where that main content is to be found.
I think that the <main> element becomes particularly useful when combined with a default keyboard shortcut in browsers, as proposed by Steve: we may actually find that non-accessibility users will also start making use of this shortcut, e.g. to get to videos on YouTube pages directly without having to tab over search boxes and other interactive elements, etc. Worthwhile markup indeed.
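As a rough illustration of that idea (the key binding and the fallback to @role=main are invented for this sketch, not part of Steve's proposal), a page script could do something like:

// Hypothetical shortcut: pressing "m" outside a form field jumps to the main content.
document.addEventListener('keydown', (event) => {
  if (event.key !== 'm' || event.altKey || event.ctrlKey || event.metaKey) return;
  if (event.target instanceof Element &&
      event.target.closest('input, textarea, select, [contenteditable]')) return;
  const main = document.querySelector('main, [role="main"]');
  if (!main) return;
  main.setAttribute('tabindex', '-1'); // make the element focusable from script
  main.focus();
  main.scrollIntoView();
});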
Hung out to dry
31 May 2013, by Mans — Law and liberty
Outrage was the general reaction when Google recently announced their dropping of XMPP server-to-server federation from Hangouts, as the search giant's revamped instant messaging platform is henceforth to be known. This outrage is, however, largely unjustified; Google's decision is merely a rational response to issues of a more fundamental nature. To see why, we need to step back and look at the broader instant messaging landscape.
A brief history of IM
The term instant messaging (IM) gained popularity in the mid-1990s along with the rise of chat clients such as ICQ, AOL Instant Messenger, and later MSN Messenger. These all had one thing in common: they were closed systems. Although global in the sense of allowing access from anywhere on the Internet, communication was possible only within each network, and only using the officially sanctioned client software. Contrast this with email, where users are free to choose any service provider as well as client software, with inter-server communication over open protocols delivering messages to their proper destinations.
The email picture has, however, not always been so rosy. During the 1970s and 80s a multitude of incompatible email systems (e.g. UUCP and X.400) were in more or less widespread use on various networks. As these networks gave way to the ARPANET/Internet, so did their mail systems to the SMTP email we all use today. A similar consolidation has yet to occur in the area of instant messaging.
Over the years, a few efforts towards cross-domain instant messaging have been undertaken. One early example is the Zephyr system, created as part of Project Athena at MIT in the late 1980s. While it never saw significant uptake, it is still in use at a few universities. A more successful story is that of XMPP. Conceived under the name Jabber in the late 1990s, XMPP is an open standard specified in a set of IETF RFCs. In addition to being open, a distinguishing feature of XMPP compared to other contemporary IM systems is its decentralised nature, with server-to-server connections allowing communication between users with accounts on different systems. Just like email.
The social network
A more recent emergence on the Internet is the social network. Although not the first of its kind, Facebook was the first to achieve its level of penetration, both geographically and across social groups. A range of messaging options, including email-style as well as instant messaging (chat), are available, all within the same web interface. What it does not allow is communication outside the Facebook network. Other social networks operate in the same spirit.
The popularity of social networks, to the extent that they for many constitute the primary means of communication, has in a sense brought back the fragmented networks of the 1980s. Even though they share infrastructure, up to and including the browser application, the social networks create walled-off regions of the Internet between which little or no exchange is possible.
The house that Google built
In 2005, Google launched Talk, an XMPP-based instant messaging service allowing users to connect using either Google’s official client application or any third-party XMPP client. Soon after, server-to-server federation was activated, enabling anyone with a Google account to exchange instant messages with users of any other federated XMPP service. An in-browser chat interface was also added to Gmail.
It was arguably only with the 2011 introduction of Google+ that Google, despite its previous endeavours with Orkut and Buzz, had a viable contender in the social networking space. Since its inception, Google+ has gone through a number of changes where features have been added or reworked. Instant messaging within Google+ was until recently available only in mobile clients. On the desktop, the sole messaging option was Hangouts which, although featuring text chat, cannot be considered instant messaging in the usual sense.
With a sprawling collection of messaging systems (Talk, Google+ Messenger, Hangouts), some action to consolidate them was a logical step. What we got was a unification under the Hangouts name. A redesigned Google+ now sports in-browser instant messaging similar to the Talk interface already present in Gmail. At the same time, the standalone desktop Talk client is discontinued, as is the Messenger feature in mobile Google+. Altogether, the changes make for a much less confusing user experience.
The sky is falling down
Along with the changes to the messaging platform, one announcement stoked anger on the Internet: Google's intent to discontinue XMPP federation (as of this writing, it is still operational). Google, the (self-described) champion of openness on the Internet, was seen to be closing its doors to the outside world. The effects of the change are, however, not quite so earth-shattering. Of the other major messaging networks to offer XMPP at all (Facebook, Skype, and the defunct Microsoft Messenger), none support federation; a Google user has never been able to chat with a Facebook user.
XMPP federation appears to be in use mainly by non-profit organisations or individuals running their own servers. The number of users on these systems is hard to assess, though it seems fair to assume it is dwarfed by the hundreds of millions using Google or Facebook. As such, the overall impact of cutting off communication with the federated servers is relatively minor, albeit annoying for those affected.
A fragmented world
Rather than chastising Google for making a low-impact, presumably well-founded business decision, we should be asking ourselves why instant messaging is still so fragmented in the first place, whereas email is not. The answer can be found by examining the nature of the entities providing these services.
Ever since the commercialisation of the Internet started in the 1990s, email has been largely seen as being part of the Internet. Access to email was a major selling point for Internet service providers; indeed, many still use the email facilities of their ISP. Instant messaging, by contrast, has never come as part of the basic offering, rather being a third-party service running on top of the Internet.
Users wishing to engage in instant messaging have always had to seek out and sign up with a provider of such a service. As the IM networks were isolated, most would choose whichever service their friends were already using, and a small number of networks, each with a sustainable number of users, came to dominate. In the early days, dedicated IM services such as ICQ were popular. Today, social networks have taken their place with Facebook currently in the dominant position. With the new Hangouts, Google offers its users the service they want in the way they have come to expect.
Follow the money
We now have all the pieces necessary to see why inter-domain instant messaging has never taken off, and the answer is simple: the major players have no commercial incentive to open access to their IM networks. In fact, they have good reason to keep the networks closed. Ensuring that a person leaving the network loses contact with his or her friends increases user retention by raising the cost of switching to another service. Monetising users is also better facilitated if they are forced to remain on, say, Facebook's web pages while using its services rather than accessing them indirectly, perhaps even through a competing (Google, say) frontend. The users do not generally care much, since all their friends are already on the same network as themselves.
While Google Talk was a standalone service, only loosely coupled to other Google products, these aspects were of lesser importance. After all, Google still had access to all the messages passing through the system and could analyse them for advert targeting purposes. Now that messaging is an integrated part of Google+, and thus serves as a direct competitor to the likes of Facebook, the situation has changed. All the reasons for Facebook not to open its network now apply equally to Google as well.