
Other articles (11)
-
Emballe médias: what is it for?
4 February 2011 — This plugin is designed to manage sites that publish documents of all types.
It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a so-called "media" article;
-
Possible deployments
31 January 2010 — Two types of deployment can be envisaged, depending on two aspects: the intended installation method (standalone or as a farm); the expected number of daily encodings and the expected traffic;
Video encoding is a heavy process that consumes enormous amounts of system resources (CPU and RAM), and all of this needs to be taken into consideration. This system is therefore only feasible on one or more dedicated servers.
Single-server version
The single-server version consists of using only one (...)
-
Managing object creation and editing rights
8 February 2011 — By default, many features are restricted to administrators, but they remain independently configurable so that the minimum status required to use them can be changed, in particular: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;
On other sites (3408)
-
WebVTT as a W3C Recommendation
2 December 2013, by silvia — Three weeks ago I attended TPAC, the annual meeting of W3C Working Groups. One of the meetings was that of the Timed Text Working Group (TT-WG), which has been specifying TTML, the Timed Text Markup Language. It is now proposed that WebVTT also be standardised through the same Working Group.
How did that happen, you may ask, in particular since WebVTT and TTML have in the past been portrayed as rival caption formats? How will the WebVTT spec that is currently under development in the Text Track Community Group (TT-CG) move through a Working Group process?
I’ll explain first why there is a need for WebVTT to become a W3C Recommendation, and then how this is proposed to be part of the Timed Text Working Group deliverables, and finally how I can see this working between the TT-CG and the TT-WG.
Advantages of a W3C Recommendation
TTML is an XML-based markup format for captions, developed during the time that XML was all the hotness. It has become a W3C standard (a so-called "Recommendation") despite not having been implemented in any browsers (if you ask me: that's actually a flaw of the W3C standardisation process: it requires only two interoperable implementations of any kind – and that could be anyone's JavaScript library or Flash demonstrator – it doesn't actually require browser implementations. But I digress…). To be fair, a subpart of TTML is by now implemented in Internet Explorer, but all the other major browsers have thus far rejected proposals to implement it.
Because of its Recommendation status, TTML has become the basis for several other caption standards that other SDOs have picked: the SMPTE's SMPTE-TT format, the EBU's EBU-TT format, and the DASH Industry Forum's use of SMPTE-TT. SMPTE-TT has also become the "safe harbour" format for the US legislation on captioning as decided by the FCC. (Note that the FCC requirements for captions on the Web are actually based on a list of features rather than requiring a specific format. But that will be the topic of a different blog post…)
WebVTT is much younger than TTML. TTML was developed as an interchange format among caption authoring systems. WebVTT was built for rendering in Web browsers and with HTML5 in mind. It meets the requirements of the <track> element and supports more than just captions/subtitles. WebVTT is popular with browser developers and has already been implemented in all major browsers (Firefox Nightly is the last to implement it – all others have already released support).
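For a concrete picture, here is a minimal, hand-written example of the kind of file the <track> element consumes (the file names and cue text are invented for illustration):

WEBVTT

00:00:01.000 --> 00:00:04.000
Hello, this is the first caption.

00:00:05.000 --> 00:00:08.500
Cues can carry <i>inline markup</i>, too.

And the HTML5 markup that hooks it up to a video:

<video src="talk.mp4" controls>
<track kind="captions" src="talk.vtt" srclang="en" label="English" default>
</video>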
As we can see, and as has been proven by the HTML spec and multiple other specs: browsers don't wait for specifications to have W3C Recommendation status before they implement them. Nor do they really care about the status of a spec – what they care about is whether a spec makes sense for the Web developer and user communities and whether it fits in the Web platform. WebVTT has obviously achieved this status, even with an evolving spec. (Note that the spec tries very hard not to break backwards compatibility, thus all past implementations will at least be compatible with the more basic features of the spec.)
Given that Web browsers don't need WebVTT to become a W3C standard, why then should we spend effort in moving the spec through the W3C process to become a W3C Recommendation?
The modern Web is now much bigger than just Web browsers. Web specifications are being used in all kinds of devices, including TV set-top boxes, phone and tablet apps, and even unexpected devices such as white goods. Videos are increasingly omnipresent, thus exposing deaf and hard-of-hearing users to ever-growing challenges in interacting with content on diverse devices. Some of these devices will not use auto-updating software but fixed versions, so they can't easily adapt to new features. Thus, caption producers (both commercial and community) need to be able to author captions (and other video accessibility content as defined by the HTML5 specification) against a feature set that such fixed-version devices can rely on.
Understandably, device vendors in this space have a need to build their technology on standardised specifications. SDOs for such device technologies like to reference fixed specifications so the feature set is not continually updating. To reference WebVTT, they could use a snapshot of the specification at any time and reference that, but that’s not how SDOs work. They prefer referencing an officially sanctioned and tested version of a specification – for a W3C specification that means creating a W3C Recommendation of the WebVTT spec.
Taking WebVTT onto a W3C Recommendation track is actually advantageous for browsers, too, because a test suite will have to be developed that proves that features are implemented in an interoperable manner. In summary, I can see the advantages and personally support the effort to take WebVTT through to a W3C Recommendation.
Choice of Working Group
AFAIK this is the first time that a specification developed in a Community Group is being moved onto the recommendation track. This is something that was expected when the W3C created CGs, but it does not yet have an established process.
The first question of course is: which WG would take it through to Recommendation? Would we create a new Working Group or find an existing one to move the specification through? Since WGs involve a lot of overhead, the preference was to add WebVTT to the charter of an existing WG. The two obvious candidates were the HTML WG and the TT-WG – the first because it's where WebVTT originated and the latter because it's the closest thematically.
Adding a deliverable to a WG is a major undertaking. The TT-WG is currently in the process of re-chartering and thus a suggestion was made to add WebVTT to the milestones of this WG. TBH that was not my first choice. Since I’m already an editor in the HTML WG and WebVTT is very closely related to HTML and can be tested extensively as part of HTML, I preferred the HTML WG. However, adding WebVTT to the TT-WG has some advantages, too.
Since TTML is an exchange format, lots of captions that will be created (at least professionally) will be in TTML and TTML-related formats. It makes sense to create a mapping from TTML to WebVTT for rendering in browsers. The expertise of both TTML and WebVTT experts is required to develop a good mapping – as was shown when we developed the mapping from CEA608/708 to WebVTT. Also, captioning experts are already in the TT-WG, so it helps to get a second set of eyes onto WebVTT.
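To make the mapping idea concrete, here is the same invented cue expressed in both formats (a hand-written sketch, not the output of any actual converter):

TTML:

<tt xmlns="http://www.w3.org/ns/ttml">
<body>
<div>
<p begin="00:00:01.000" end="00:00:04.000">Hello world</p>
</div>
</body>
</tt>

WebVTT:

WEBVTT

00:00:01.000 --> 00:00:04.000
Hello world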
A disadvantage of moving a specification out of a CG into a WG is, however, that you potentially lose a lot of the expertise that is already involved in the development of the spec. People don’t easily re-subscribe to additional mailing lists or want the additional complexity of involving another community (see e.g. this email).
So, a good process needs to be developed to allow everyone to contribute to the spec in the best way possible without requiring duplicate work. How can we do that?
The forthcoming process
At TPAC the TT-WG discussed for several hours what the next steps are in taking WebVTT through the TT-WG to recommendation status (agenda with slides). I won’t bore you with the different views – if you are keen, you can read the minutes.
What I came away with is the following process:
- Fix a few more bugs in the CG until we’re happy with the feature set in the CG. This should match the feature set that we realistically expect devices to implement for a first version of the WebVTT spec.
- Make a FSA (Final Specification Agreement) in the CG to create a stable reference and a clean IPR position.
- Assuming that the TT-WG's charter has been approved with WebVTT as a milestone, we would next bring the FSA specification into the TT-WG as FPWD (First Public Working Draft) and immediately do a Last Call, which effectively freezes the feature set (this is possible because there has already been wide community review of the WebVTT spec); in parallel, the CG can continue to develop the next version of the WebVTT spec with new features (just as is happening with the HTML5 and HTML5.1 specifications).
- Develop a test suite and address any issues in the Last Call document (of course, also fix these issues in the CG version of the spec).
- As per W3C process, substantive and minor changes to Last Call documents have to be reported and raised issues addressed before the spec can progress to the next level: Candidate Recommendation status.
- For the next step – Proposed Recommendation status – an implementation report is necessary, and thus the test suite needs to be finalized for the given feature set. The feature set may also be reduced at this stage to just the ones implemented interoperably, leaving any other features for the next version of the spec.
- The final step is Recommendation status, which simply requires sufficient support and endorsement by W3C members.
The first version of the WebVTT spec naturally has a focus on captioning (and subtitling), since this has been the dominant use case that we have focused on thus far and it's the most compatibly implemented feature set of WebVTT in browsers. It's my expectation that the next version of WebVTT will have a lot more features related to audio descriptions, chapters and metadata. Thus, this seems a good time for a first-version feature freeze.
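Chapters, for instance, already have a natural expression in WebVTT via the <track> kind; in this invented sketch, each cue names a chapter:

<track kind="chapters" src="chapters.vtt" srclang="en">

with chapters.vtt containing:

WEBVTT

00:00:00.000 --> 00:04:59.999
Introduction

00:05:00.000 --> 00:12:00.000
The main argument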
There are still several obstacles to progressing WebVTT as a milestone of the TT-WG. Apart from the need to get buy-in from the TT-WG, the TT-CG, and the AC (Advisory Committee, who have to approve the new charter), we're also looking at the license of the specification document.
The CG specification has an open license that allows creating derivative work as long as there is attribution, while the W3C document license for documents on the recommendation track does not allow the creation of derivative work unless given explicit exceptions. This is an issue that is currently being discussed in the W3C with a proposal for a CC-BY license on the Recommendation track. However, my view is that it's probably OK to use the different document licenses: the TT-WG will work on WebVTT 1.0 and give it a W3C document license, while the CG starts working on the next WebVTT version under the open CG license. It probably actually makes sense to have a less open license on a frozen spec.
Making the best of a complicated world
WebVTT is now proposed as part of the recharter of the TT-WG. I have no idea how complicated the process will become to achieve a W3C WebVTT 1.0 Recommendation, but I am hoping that what is outlined above will be workable in such a way that all of us get to focus on progressing the technology.
At TPAC I got the impression that the TT-WG is committed to progressing WebVTT to Recommendation status. I know that the TT-CG is committed to continue developing WebVTT to its full potential for all kinds of media-time aligned content with new kinds already discussed at FOMS. Let’s enable both groups to achieve their goals. As a consequence, we will allow the two formats to excel where they do : TTML as an interchange format and WebVTT as a browser rendering format.
-
Our latest improvement to QA: Screenshot Testing
2 October 2013, by benaka — Development
Introduction to QA in Piwik
Like any good piece of software, Piwik comes with a comprehensive QA suite that includes unit and integration tests. The unit tests make sure the core components of Piwik work properly. The integration tests make sure Piwik's tracking, report aggregation, and APIs work properly.
To complete our QA suite, we've recently added a new type of test: screenshot tests, which we use to make sure Piwik's controller and JavaScript code work properly.
This blog post will explain how they work and describe our experiences setting them up; we hope to show you an example of innovative QA practices in an active open source project.
Screenshot Tests
As the name implies, our screenshot tests (1) first capture a screenshot of a URL, then (2) compare the result with an expected image. This lets us test the code in Piwik’s controllers and Piwik’s JavaScript simply by specifying a URL.
Contrast this with conventional UI tests that test for page content changes. Such tests require writing large amounts of test code that, at most, checks for changes in HTML. Our tests, on the other hand, can show regressions in CSS and JavaScript rendering logic with a bare minimum of testing code.
Capturing Screenshots
Screenshots are captured using a third-party tool. We tried several tools before settling on PhantomJS. PhantomJS executes a JavaScript file in an environment that allows it to create WebKit-powered web views. When capturing a screenshot, we supply PhantomJS with a script that:
- opens a web page view,
- loads a URL,
- waits for all AJAX requests to complete,
- waits for all images to load,
- waits for all JavaScript to run.
Then it renders the completed page to a PNG file.
- To see how we use PhantomJS, see capture.js.
- To see how we wait for AJAX requests to complete and images to load, see override.js.
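As a rough sketch of the overall flow (a simplified stand-in for capture.js, not the real script; the URL, the _pendingAjax/_pendingImages counters and the polling interval are illustrative assumptions):

var page = require('webpage').create();
page.viewportSize = { width: 1350, height: 768 }; // a fixed size keeps screenshots reproducible

page.open('http://piwik.example/index.php?module=CoreHome', function (status) {
    if (status !== 'success') {
        phantom.exit(1);
    }
    // Poll until the page reports no pending AJAX requests or loading images;
    // the real override.js instruments XMLHttpRequest and image loading to expose this.
    var poll = setInterval(function () {
        var ready = page.evaluate(function () {
            return window._pendingAjax === 0 && window._pendingImages === 0;
        });
        if (ready) {
            clearInterval(poll);
            page.render('screenshot.png'); // write the finished page to a PNG file
            phantom.exit(0);
        }
    }, 100);
});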
Comparing Screenshots
Once a screenshot is generated, we test for UI regressions by comparing it with an expected image. There is no fuzzy matching involved; we just check that the images consist of the same bytes.
If a screenshot test fails, we use ImageMagick's compare command-line tool to generate an image diff.
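The invocation is the tool's standard three-argument form (the file names here are illustrative):

compare expected-screenshot.png processed-screenshot.png diff.png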
In one such example, a change caused the search box to be hidden in the datatable, which resulted in the whole Data Table report being shifted up a few pixels. The differences are shown in red, which gives the developers rapid feedback about what changed in the last commit.
Screenshot Tests on Travis
We experienced trouble generating identical screenshots on different machines, so our tests were not initially automated by Travis. Once we overcame this hurdle, we created a new GitHub repo to store our UI tests and screenshots and then enabled the Travis build for it. We also made sure that every time a commit is pushed to the Piwik repo, our Travis build pushes a commit to the UI test repo to run the UI tests.
We decided to create a new repository so the main repository wouldn't be burdened with the large screenshot files (which git would not handle very well). We also made sure the Travis build would upload all the generated screenshots to a server so debugging failures would be easier.
Problems we experienced
Getting generated screenshots to render identically on separate machines was quite a challenge. It took months to figure out how to get it right. Here's what we learned:
Fonts will render identically on different machines, but different machines can pick the wrong fonts. When we first tried getting these tests to run on Travis, we noticed small differences in the way fonts were rendered on different machines. We thought this was an insurmountable problem caused by the libraries installed on those machines. It turns out the machines were just picking the wrong fonts. After installing certain fonts during our Travis build, everything started working.
Different versions of GD can generate slightly different images. GD is used in Piwik to, among other things, generate sparkline images. Different versions of GD will result in slightly different images. They look the same to the naked eye, but some pixels will have slightly different colors. This is, unfortunately, a problem we couldn’t solve. We couldn’t make sure that everyone who runs the tests uses the same version of GD, so instead we disabled sparklines for UI testing.
What we learned about existing screenshot capturing tools
We tried several screenshot capturing tools before finding one that worked adequately. Here's what we learned about them:
-
CutyCapt: the first screenshot capturing tool we tried. CutyCapt is a C++ program that uses QtWebKit to load and take a screenshot of a page. It can't be used to capture multiple screenshots in one run, and it can't be made to wait for all AJAX/images/JavaScript to complete or load (at least not currently).
-
PhantomJS: the solution we eventually chose. PhantomJS is a headless scriptable browser that currently uses WebKit as its rendering engine.
For the most part, PhantomJS is the best solution we found. It reliably renders screenshots, allows JavaScript to be injected into pages it loads, and since it essentially just runs JavaScript code that you provide, it can be made to do whatever you want.
-
SlimerJS: a clone of PhantomJS that uses Gecko as the rendering engine. It is meant to function similarly to PhantomJS. Unfortunately, due to some limitations hard-coded in Mozilla's software, we couldn't use it.
For one, SlimerJS is not headless; there is, apparently, no way to achieve that when embedding Mozilla. You can run it through xvfb, but the fact that it has to create a window means some odd things can happen. When using SlimerJS, we would sometimes end up with images in which tooltips were displayed as if the mouse were hovering over an element. This inconsistency meant we couldn't use it for our tests.
One tool we didn't try was Selenium WebDriver. Although Selenium is traditionally used to create tests that check for HTML content, it can be used to generate screenshots. (Note: PhantomJS supports using a remote WebDriver.)
Our Future Plans for Screenshot Testing
At the moment we render a couple dozen screenshots. We test how our PHP code, JavaScript code and CSS make Piwik's UI look, but we don't test how it behaves. This is our next step.
We want to create Screenshot Unit Tests for each UI control Piwik uses (for example, the Data Table View or the Site Selector). These tests would use the Widgetize plugin to load a control by itself, then execute JavaScript that simulates events and user behavior, and finally take a screenshot. This way we can test how our code handles clicks and hovers and all sorts of other behavior.
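A hedged sketch of what such a test could look like, again as a PhantomJS-style script (the Widgetize URL pattern follows Piwik's convention, but the module, action and selector are illustrative assumptions):

// Load a single UI control through the Widgetize plugin, simulate a user
// interaction, then screenshot the result.
var page = require('webpage').create();

var widgetUrl = 'http://piwik.example/index.php?module=Widgetize&action=iframe'
              + '&moduleToWidgetize=UserCountryMap&actionToWidgetize=visitorMap';

page.open(widgetUrl, function () {
    // Simulate a click on the control (the selector is an assumption).
    page.evaluate(function () {
        document.querySelector('.widget').click();
    });
    // Give the control's JavaScript time to react before capturing.
    setTimeout(function () {
        page.render('widget-after-click.png');
        phantom.exit(0);
    }, 500);
});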
Screenshot tests will make Piwik more stable and keep us agile, able to release early and often. Thank you for your support and for spreading the word about Piwik!
-
Render YUV video in OpenGL of ffmpeg using CVPixelBufferRef and Shaders
4 September 2012, by resident_ — I'm rendering YUV frames from ffmpeg with the iOS 5.0 method "CVOpenGLESTextureCacheCreateTextureFromImage".
I'm using like the apple example GLCameraRipple
My result on the iPhone screen is this: [screenshot: iPhone screen]
I need to know what I'm doing wrong.
I've included part of my code to help find the errors.
ffmpeg frame configuration:
ctx->p_sws_ctx = sws_getContext(ctx->p_video_ctx->width,
                                ctx->p_video_ctx->height,
                                ctx->p_video_ctx->pix_fmt,
                                ctx->p_video_ctx->width,
                                ctx->p_video_ctx->height,
                                PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);

// Framebuffer for the converted (YUV420P) data
ctx->p_frame_buffer = malloc(avpicture_get_size(PIX_FMT_YUV420P,
                                                ctx->p_video_ctx->width,
                                                ctx->p_video_ctx->height));

avpicture_fill((AVPicture*)ctx->p_picture_rgb, ctx->p_frame_buffer, PIX_FMT_YUV420P,
               ctx->p_video_ctx->width,
               ctx->p_video_ctx->height);

My render method:
if (NULL == videoTextureCache) {
NSLog(@"displayPixelBuffer error");
return;
}
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, mTexW, mTexH, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, buffer, mFrameW * 3, NULL, 0, NULL, &pixelBuffer);
CVReturn err;
// Y-plane
glActiveTexture(GL_TEXTURE0);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RED_EXT,
mTexW,
mTexH,
GL_RED_EXT,
GL_UNSIGNED_BYTE,
0,
&_lumaTexture);
if (err)
{
NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// UV-plane
glActiveTexture(GL_TEXTURE1);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RG_EXT,
mTexW/2,
mTexH/2,
GL_RG_EXT,
GL_UNSIGNED_BYTE,
1,
&_chromaTexture);
if (err)
{
NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
// Set the view port to the entire view
glViewport(0, 0, backingWidth, backingHeight);
static const GLfloat squareVertices[] = {
1.0f, 1.0f,
-1.0f, 1.0f,
1.0f, -1.0f,
-1.0f, -1.0f,
};
GLfloat textureVertices[] = {
1, 1,
1, 0,
0, 1,
0, 0,
};
// Draw the texture on the screen with OpenGL ES 2
[self renderWithSquareVertices:squareVertices textureVertices:textureVertices];
// Flush the CVOpenGLESTexture cache and release the texture
CVOpenGLESTextureCacheFlush(videoTextureCache, 0);
CVPixelBufferRelease(pixelBuffer);
[moviePlayerDelegate bufferDone];

RenderWithSquareVertices method:
- (void)renderWithSquareVertices:(const GLfloat*)squareVertices textureVertices:(const GLfloat*)textureVertices
{
// Use shader program.
glUseProgram(shader.program);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
}
My fragment shader:
uniform sampler2D SamplerY;
uniform sampler2D SamplerUV;
varying highp vec2 _texcoord;
void main()
{
mediump vec3 yuv;
lowp vec3 rgb;
yuv.x = texture2D(SamplerY, _texcoord).r;
yuv.yz = texture2D(SamplerUV, _texcoord).rg - vec2(0.5, 0.5);
// BT.601, which is the standard for SDTV is provided as a reference
/* rgb = mat3( 1, 1, 1,
0, -.34413, 1.772,
1.402, -.71414, 0) * yuv;*/
// Using BT.709 which is the standard for HDTV
rgb = mat3( 1, 1, 1,
0, -.18732, 1.8556,
1.57481, -.46813, 0) * yuv;
gl_FragColor = vec4(rgb, 1);
}

Many thanks.