
Other articles (96)
-
MediaSPIP Core: Configuration
9 November 2010
By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page specific to the general configuration of the templates; a page specific to the configuration of the site's home page; a page specific to the configuration of the sections;
It also provides an additional page, which only appears when certain plugins are enabled, allowing their display and specific features to be controlled (...) -
Common problems
10 March 2010
PHP and safe_mode enabled
One of the main sources of problems lies in the PHP configuration, in particular the activation of safe_mode.
The solution would be either to disable safe_mode or to place the script in a directory accessible to Apache for the site. -
A selection of projects using MediaSPIP
29 April 2011
The examples listed below are representative of specific uses of MediaSPIP in certain projects.
Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
MediaSPIP farm @ Infini
The Infini association runs activities around public reception, an Internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique role in this area (...)
On other sites (6122)
-
WebVTT as a W3C Recommendation
2 December 2013, by silvia
Three weeks ago I attended TPAC, the annual meeting of W3C Working Groups. One of the meetings was of the Timed Text Working Group (TT-WG), which has been specifying TTML, the Timed Text Markup Language. It is now proposed that WebVTT also be standardised through the same Working Group.
How did that happen, you may ask, in particular since WebVTT and TTML have in the past been portrayed as rival caption formats? How will the WebVTT spec that is currently under development in the Text Track Community Group (TT-CG) move through a Working Group process?
I’ll explain first why there is a need for WebVTT to become a W3C Recommendation, and then how this is proposed to be part of the Timed Text Working Group deliverables, and finally how I can see this working between the TT-CG and the TT-WG.
Advantages of a W3C Recommendation
TTML is an XML-based markup format for captions developed during the time that XML was all the hotness. It has become a W3C standard (a so-called “Recommendation”) despite not having been implemented in any browsers (if you ask me: that’s actually a flaw of the W3C standardisation process: it requires only two interoperable implementations of any kind – and that could be anyone’s JavaScript library or Flash demonstrator – it doesn’t actually require browser implementations. But I digress…). To be fair, a subpart of TTML is by now implemented in Internet Explorer, but all the other major browsers have thus far rejected proposals to implement it.
Because of its Recommendation status, TTML has become the basis for several other caption standards that other SDOs have picked up: the SMPTE’s SMPTE-TT format, the EBU’s EBU-TT format, and the DASH Industry Forum’s use of SMPTE-TT. SMPTE-TT has also become the “safe harbour” format for the US legislation on captioning as decided by the FCC. (Note that the FCC requirements for captions on the Web are actually based on a list of features rather than requiring a specific format. But that will be the topic of a different blog post…)
WebVTT is much younger than TTML. TTML was developed as an interchange format among caption authoring systems. WebVTT was built for rendering in Web browsers and with HTML5 in mind. It meets the requirements of the <track> element and supports more than just captions/subtitles. WebVTT is popular with browser developers and has already been implemented in all major browsers (Firefox Nightly is the last to implement it – all others have support already released).
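To make that relationship concrete, here is a minimal sketch (TypeScript, in a browser) of how a WebVTT file is wired into HTML5 through the <track> element and read back through the TextTrack API. It is purely illustrative: the file names "video.mp4" and "captions.vtt" are placeholders and not taken from the original post.

// Sketch: attaching a WebVTT caption file to a <video> via <track> and
// inspecting the parsed cues through the TextTrack API.
// "captions.vtt" is a placeholder; such a file might simply contain:
//
//   WEBVTT
//
//   00:00:01.000 --> 00:00:04.000
//   Hello, world.
//
const video = document.createElement("video");
video.src = "video.mp4";          // placeholder media file
video.controls = true;

const track = document.createElement("track");
track.kind = "captions";          // <track> also supports subtitles, chapters, metadata
track.label = "English";
track.srclang = "en";
track.src = "captions.vtt";       // the WebVTT file
track.default = true;
video.appendChild(track);
document.body.appendChild(video);

// Once the browser has fetched and parsed the file, the cues are exposed
// as TextTrackCue objects on the associated TextTrack.
track.addEventListener("load", () => {
  const textTrack = track.track;
  textTrack.mode = "showing";     // render the captions over the video
  for (const cue of Array.from(textTrack.cues ?? [])) {
    console.log(cue.startTime, cue.endTime);
  }
});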
As we can see and as has been proven by the HTML spec and multiple other specs: browsers don’t wait for specifications to have W3C Recommendation status before they implement them. Nor do they really care about the status of a spec – what they care about is whether a spec makes sense for the Web developer and user communities and whether it fits in the Web platform. WebVTT has obviously achieved this status, even with an evolving spec. (Note that the spec tries very hard not to break backwards compatibility, thus all past implementations will at least be compatible with the more basic features of the spec.)
Given that Web browsers don’t need WebVTT to become a W3C standard, why then should we spend effort in moving the spec through the W3C process to become a W3C Recommendation?
The modern Web is now much bigger than just Web browsers. Web specifications are being used in all kinds of devices including TV set-top boxes, phone and tablet apps, and even unexpected devices such as white goods. Videos are increasingly omnipresent, thus exposing deaf and hard-of-hearing users to ever-growing challenges in interacting with content on diverse devices. Some of these devices will not use auto-updating software but fixed versions, so they can't easily adapt to new features. Thus, caption producers (both commercial and community) need to be able to author captions (and other video accessibility content as defined by the HTML5 specification) in a format that such devices can rely on.
Understandably, device vendors in this space have a need to build their technology on standardised specifications. SDOs for such device technologies like to reference fixed specifications so the feature set is not continually updating. To reference WebVTT, they could use a snapshot of the specification at any time and reference that, but that’s not how SDOs work. They prefer referencing an officially sanctioned and tested version of a specification – for a W3C specification that means creating a W3C Recommendation of the WebVTT spec.
Taking WebVTT on a W3C recommendation track is actually advantageous for browsers, too, because a test suite will have to be developed that proves that features are implemented in an interoperable manner. In summary, I can see the advantages and personally support the effort to take WebVTT through to a W3C Recommendation.
Choice of Working Group
AFAIK this is the first time that a specification developed in a Community Group is being moved into the recommendation track. This is something that was expected when the W3C created CGs, but not something that has an established process yet.
The first question of course is which WG would take it through to Recommendation? Would we create a new Working Group or find an existing one to move the specification through? Since WGs involve a lot of overhead, the preference was to add WebVTT to the charter of an existing WG. The two obvious candidates were the HTML WG and the TT-WG – the former because it’s where WebVTT originated and the latter because it’s the closest thematically.
Adding a deliverable to a WG is a major undertaking. The TT-WG is currently in the process of re-chartering and thus a suggestion was made to add WebVTT to the milestones of this WG. TBH that was not my first choice. Since I’m already an editor in the HTML WG and WebVTT is very closely related to HTML and can be tested extensively as part of HTML, I preferred the HTML WG. However, adding WebVTT to the TT-WG has some advantages, too.
Since TTML is an exchange format, lots of captions that will be created (at least professionally) will be in TTML and TTML-related formats. It makes sense to create a mapping from TTML to WebVTT for rendering in browsers. The expertise of both TTML and WebVTT experts is required to develop a good mapping – as has been shown when we developed the mapping from CEA608/708 to WebVTT. Also, captioning experts are already in the TT-WG, so it helps to get a second set of eyes onto WebVTT.
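As a rough illustration of what such a mapping involves, here is a sketch under heavy assumptions: it handles only <p> cues whose begin/end attributes are already in WebVTT-style clock time, and it ignores styling, regions, namespaced attributes and offset-time expressions, which is exactly where the expert work discussed above comes in.

// Illustrative sketch only, not the mapping the groups would standardise.
// Assumes each <p> carries begin/end in HH:MM:SS.mmm clock time.
function ttmlToWebVTT(ttml: string): string {
  const doc = new DOMParser().parseFromString(ttml, "application/xml");
  const lines: string[] = ["WEBVTT", ""];
  doc.querySelectorAll("p").forEach((p) => {
    const begin = p.getAttribute("begin");
    const end = p.getAttribute("end");
    if (!begin || !end) return;              // skip cues without explicit timing
    lines.push(`${begin} --> ${end}`);
    lines.push(p.textContent?.trim() ?? "");
    lines.push("");                          // a blank line terminates each cue
  });
  return lines.join("\n");
}

// Example:
const sample = `<tt xmlns="http://www.w3.org/ns/ttml"><body><div>
  <p begin="00:00:01.000" end="00:00:03.000">Hello</p>
</div></body></tt>`;
console.log(ttmlToWebVTT(sample));
// WEBVTT
//
// 00:00:01.000 --> 00:00:03.000
// Hello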
A disadvantage of moving a specification out of a CG into a WG is, however, that you potentially lose a lot of the expertise that is already involved in the development of the spec. People don’t easily re-subscribe to additional mailing lists or want the additional complexity of involving another community (see e.g. this email).
So, a good process needs to be developed to allow everyone to contribute to the spec in the best way possible without requiring duplicate work. How can we do that?
The forthcoming process
At TPAC the TT-WG discussed for several hours what the next steps are in taking WebVTT through the TT-WG to recommendation status (agenda with slides). I won’t bore you with the different views – if you are keen, you can read the minutes.
What I came away with is the following process:
- Fix a few more bugs in the CG until we’re happy with the feature set in the CG. This should match the feature set that we realistically expect devices to implement for a first version of the WebVTT spec.
- Make an FSA (Final Specification Agreement) in the CG to create a stable reference and a clean IPR position.
- Assuming that the TT-WG’s charter has been approved with WebVTT as a milestone, we would next bring the FSA specification into the TT-WG as FPWD (First Public Working Draft) and immediately do a Last Call which effectively freezes the feature set (this is possible because there has already been wide community review of the WebVTT spec); in parallel, the CG can continue to develop the next version of the WebVTT spec with new features (just like it is happening with the HTML5 and HTML5.1 specifications).
- Develop a test suite and address any issues in the Last Call document (of course, also fix these issues in the CG version of the spec).
- As per W3C process, substantive and minor changes to Last Call documents have to be reported and raised issues addressed before the spec can progress to the next level: Candidate Recommendation status.
- For the next step – Proposed Recommendation status – an implementation report is necessary, and thus the test suite needs to be finalized for the given feature set. The feature set may also be reduced at this stage to just the ones implemented interoperably, leaving any other features for the next version of the spec.
- The final step is Recommendation status, which simply requires sufficient support and endorsement by W3C members.
The first version of the WebVTT spec naturally has a focus on captioning (and subtitling), since this has been the dominant use case that we have focused on thus far, and it’s the part of WebVTT that is the most interoperably implemented in browsers. It’s my expectation that the next version of WebVTT will have a lot more features related to audio descriptions, chapters and metadata. Thus, this seems a good time for a first version feature freeze.
There are still several obstacles towards progressing WebVTT as a milestone of the TT-WG. Apart from the need to get buy-in from the TT-WG, the TT-CG, and the AC (Advisory Committee, which has to approve the new charter), we’re also looking at the license of the specification document.
The CG specification has an open license that allows creating derivative work as long as there is attribution, while the W3C document license for documents on the recommendation track does not allow the creation of derivative work unless given explicit exceptions. This is an issue that is currently being discussed in the W3C with a proposal for a CC-BY license on the Recommendation track. However, my view is that it’s probably ok to use the different document licenses: the TT-WG will work on WebVTT 1.0 and give it a W3C document license, while the CG starts working on the next WebVTT version under the open CG license. It probably actually makes sense to have a less open license on a frozen spec.
Making the best of a complicated world
WebVTT is now proposed as part of the recharter of the TT-WG. I have no idea how complicated the process will become to achieve a W3C WebVTT 1.0 Recommendation, but I am hoping that what is outlined above will be workable in such a way that all of us get to focus on progressing the technology.
At TPAC I got the impression that the TT-WG is committed to progressing WebVTT to Recommendation status. I know that the TT-CG is committed to continue developing WebVTT to its full potential for all kinds of media-time aligned content with new kinds already discussed at FOMS. Let’s enable both groups to achieve their goals. As a consequence, we will allow the two formats to excel where they do: TTML as an interchange format and WebVTT as a browser rendering format.
-
Hung out to dry
31 May 2013, by Mans — Law and liberty
Outrage was the general reaction when Google recently announced their dropping of XMPP server-to-server federation from Hangouts, as the search giant’s revamped instant messaging platform is henceforth to be known. This outrage is, however, largely unjustified; Google’s decision is merely a rational response to issues of a more fundamental nature. To see why, we need to step back and look at the broader instant messaging landscape.
A brief history of IM
The term instant messaging (IM) gained popularity in the mid-1990s along with the rise of chat clients such as ICQ, AOL Instant Messenger, and later MSN Messenger. These all had one thing in common: they were closed systems. Although global in the sense of allowing access from anywhere on the Internet, communication was possible only within each network, and only using the officially sanctioned client software. Contrast this with email, where users are free to choose any service provider as well as client software, with inter-server communication over open protocols delivering messages to their proper destinations.
The email picture has, however, not always been so rosy. During the 1970s and 80s a multitude of incompatible email systems (e.g. UUCP and X.400) were in more or less widespread use on various networks. As these networks gave way to the ARPANET/Internet, so did their mail systems to the SMTP email we all use today. A similar consolidation has yet to occur in the area of instant messaging.
Over the years, a few efforts towards cross-domain instant messaging have been undertaken. One early example is the Zephyr system created as part of Project Athena at MIT in the late 1980s. While it never saw significant uptake, it is still in use at a few universities. A more successful story is that of XMPP. Conceived under the name Jabber in the late 1990s, XMPP is an open standard specified in a set of IETF RFCs. In addition to being open, a distinguishing feature of XMPP compared to other contemporary IM systems is its decentralised nature, with server-to-server connections allowing communication between users with accounts on different systems. Just like email.
The social network
A more recent emergence on the Internet is the social network. Although not the first of its kind, Facebook was the first to achieve its level of penetration, both geographically and across social groups. A range of messaging options, including email-style as well as instant messaging (chat), are available, all within the same web interface. What it does not allow is communication outside the Facebook network. Other social networks operate in the same spirit.
The popularity of social networks, to the extent that they for many constitute the primary means of communication, has in a sense brought back the fragmented networks of the 1980s. Even though they share infrastructure, up to and including the browser application, the social networks create walled-off regions of the Internet between which little or no exchange is possible.
The house that Google built
In 2005, Google launched Talk, an XMPP-based instant messaging service allowing users to connect using either Google’s official client application or any third-party XMPP client. Soon after, server-to-server federation was activated, enabling anyone with a Google account to exchange instant messages with users of any other federated XMPP service. An in-browser chat interface was also added to Gmail.
It was arguably only with the 2011 introduction of Google+ that Google, despite its previous endeavours with Orkut and Buzz, had a viable contender in the social networking space. Since its inception, Google+ has gone through a number of changes where features have been added or reworked. Instant messaging within Google+ was until recently available only in mobile clients. On the desktop, the sole messaging option was Hangouts which, although featuring text chat, cannot be considered instant messaging in the usual sense.
With a sprawling collection of messaging systems (Talk, Google+ Messenger, Hangouts), some action to consolidate them was a logical step. What we got was a unification under the Hangouts name. A redesigned Google+ now sports in-browser instant messaging similar to the Talk interface already present in Gmail. At the same time, the standalone desktop Talk client is discontinued, as is the Messenger feature in mobile Google+. Altogether, the changes make for a much less confusing user experience.
The sky is falling down
Along with the changes to the messaging platform, one announcement stoked anger on the Internet: Google’s intent to discontinue XMPP federation (as of this writing, it is still operational). Google, the (self-described) champions of openness on the Internet, were seen to be closing their doors to the outside world. The effects of the change are, however, not quite so earth-shattering. Of the other major messaging networks to offer XMPP at all (Facebook, Skype, and the defunct Microsoft Messenger), none support federation; a Google user has never been able to chat with a Facebook user.
XMPP federation appears to be in use mainly by non-profit organisations or individuals running their own servers. The number of users on these systems is hard to assess, though it seems fair to assume it is dwarfed by the hundreds of millions using Google or Facebook. As such, the overall impact of cutting off communication with the federated servers is relatively minor, albeit annoying for those affected.
A fragmented world
Rather than chastising Google for making a low-impact, presumably well-founded business decision, we should be asking ourselves why instant messaging is still so fragmented in the first place, whereas email is not. The answer can be found by examining the nature of the entities providing these services.
Ever since the commercialisation of the Internet started in the 1990s, email has been largely seen as being part of the Internet. Access to email was a major selling point for Internet service providers; indeed, many still use the email facilities of their ISP. Instant messaging, by contrast, has never come as part of the basic offering, rather being a third-party service running on top of the Internet.
Users wishing to engage in instant messaging have always had to seek out and sign up with a provider of such a service. As the IM networks were isolated, most would choose whichever service their friends were already using, and a small number of networks, each with a sustainable number of users, came to dominate. In the early days, dedicated IM services such as ICQ were popular. Today, social networks have taken their place with Facebook currently in the dominant position. With the new Hangouts, Google offers its users the service they want in the way they have come to expect.
Follow the money
We now have all the pieces necessary to see why inter-domain instant messaging has never taken off, and the answer is simple: the major players have no commercial incentive to open access to their IM networks. In fact, they have good reason to keep the networks closed. Ensuring that a person leaving the network loses contact with his or her friends increases user retention by raising the cost of switching to another service. Monetising users is also better facilitated if they are forced to remain on, say, Facebook’s web pages while using its services rather than accessing them indirectly, perhaps even through a competing (Google, say) frontend. The users do not generally care much, since all their friends are already on the same network as themselves.
While Google Talk was a standalone service, only loosely coupled to other Google products, these aspects were of lesser importance. After all, Google still had access to all the messages passing through the system and could analyse them for advert targeting purposes. Now that messaging is an integrated part of Google+, and thus serves as a direct competitor to the likes of Facebook, the situation has changed. All the reasons for Facebook not to open its network now apply to Google as well.
-
I am trying to recode my m4v video files with DRM to regular mp4 so I can play on Android [closed]
5 November 2012, by user1708282
I have several video files from iTunes that are .m4v with normal Apple DRM.
As my PC (Windows Vista) is authorised to play these videos, I can play them either through iTunes or with VLC.
I know it is possible to do this with commercial software, most of which seems to play the video (on an authorised computer) and then re-encode the output stream. This is the approach I would like to take.
My problem is that I can't get ffmpeg or MEncoder to play the DRM file... but surely it must be possible if VLC can play the file?
The goal is to ultimately play the new file on an Android device, but the re-coding will be done (I assume) on a PC.