Advanced search

Media (1)

Word: - Tags -/censure

Other articles (21)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    It was officially released on June 21, 2013, as announced here.
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)

  • Other interesting software

    13 April 2011, by

    We don’t claim to be the only ones doing what we do, and certainly don’t claim to be the best at it. We simply try to do it well and to keep getting better.
    The following list presents software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to resemble.
    We don’t know them and haven’t tried them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

  • Organizing by category

    17 May 2013, by

    In MediaSPIP, a section has two names: category and section ("rubrique").
    The various documents stored in MediaSPIP can be filed into different categories. You can create a category by clicking on "publish a category" in the publish menu at the top right (after logging in). A category can itself be filed inside another category, so you can build a tree of categories.
    The next time you publish a document, the newly created category will be offered (...)

On other sites (4669)

  • Our latest improvement to QA : Screenshot Testing

    2 October 2013, by benaka — Development

    Introduction to QA in Piwik

    Like any piece of good software, Piwik comes with a comprehensive QA suite that includes unit and integration tests. The unit tests make sure core components of Piwik work properly. The integration tests make sure Piwik’s tracking, report aggregation, and APIs work properly.

    To complete our QA suite, we’ve recently added a new type of test: screenshot tests, which we use to make sure Piwik’s controller and JavaScript code work properly.

    This blog post will explain how they work and describe our experiences setting them up; we hope to show you an example of innovative QA practices in an active open source project.

    Screenshot Tests

    As the name implies, our screenshot tests (1) first capture a screenshot of a URL, then (2) compare the result with an expected image. This lets us test the code in Piwik’s controllers and Piwik’s JavaScript simply by specifying a URL.

    Contrast this with conventional UI tests that test for page content changes. Such tests require writing large amounts of test code that, at most, checks for changes in HTML. Our tests, on the other hand, can reveal regressions in CSS and JavaScript rendering logic with a bare minimum of testing code.

    Capturing Screenshots

    Screenshots are captured using a third-party tool. We tried several tools before settling on PhantomJS. PhantomJS executes a JavaScript file in an environment that lets it create WebKit-powered web views. When capturing a screenshot, we supply PhantomJS with a script that:

    • opens a web page view,
    • loads a URL,
    • waits for all AJAX requests to complete,
    • waits for all images to load,
    • waits for all JavaScript to run.

    Then it renders the completed page to a PNG file.

    • To see how we use PhantomJS, see capture.js.
    • To see how we wait for AJAX requests to complete and images to load, see override.js.

    Comparing Screenshots

    Once a screenshot is generated, we test for UI regressions by comparing it with an expected image. There is no fuzzy matching involved; we simply check that the two images consist of exactly the same bytes.
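
    Conceptually, the check is nothing more than a byte-for-byte comparison of the two image files. Here is a minimal C sketch of that idea (purely illustrative; the file names are hypothetical, and Piwik’s actual test code is written in PHP):

     #include <stdio.h>
     #include <stdlib.h>

     /* Return 1 if the two files contain exactly the same bytes, else 0. */
     static int files_identical(const char *expected, const char *actual)
     {
         FILE *a = fopen(expected, "rb");
         FILE *b = fopen(actual, "rb");
         int same = (a != NULL && b != NULL);

         while (same) {
             int ca = fgetc(a);
             int cb = fgetc(b);
             if (ca != cb)
                 same = 0; /* mismatched byte, or one file ended early */
             if (ca == EOF || cb == EOF)
                 break;    /* stop at the end of either file */
         }

         if (a) fclose(a);
         if (b) fclose(b);
         return same;
     }

     int main(void)
     {
         /* Hypothetical file names, for illustration only. */
         if (!files_identical("expected_screenshot.png", "processed_screenshot.png")) {
             fprintf(stderr, "UI test failed: screenshots differ\n");
             return EXIT_FAILURE;
         }
         return EXIT_SUCCESS;
     }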

    If a screenshot test fails, we use ImageMagick’s compare command-line tool to generate an image diff:

    (Image: pixel-by-pixel comparison of QA test screenshots, with the differences highlighted.)

    In the example above, a change caused the search box to be hidden in the data table, which shifted the whole data table report up a few pixels. The differences appear in red, which gives developers rapid feedback on what changed in the last commit.

    Screenshot Tests on Travis

    We had trouble generating identical screenshots on different machines, so our tests were not initially automated on Travis. Once we overcame this hurdle, we created a new GitHub repo to store our UI tests and screenshots and enabled the Travis build for it. We also made sure that every time a commit is pushed to the Piwik repo, our Travis build pushes a commit to the UI test repo to run the UI tests.

    We decided to create a new repository so the main repository wouldn’t be burdened with the large screenshot files (which git does not handle very well). We also made sure the Travis build uploads all generated screenshots to a server, to make debugging failures easier.

    Problems we experienced

    Getting screenshots to render identically on separate machines was quite a challenge; it took months to figure out how to get it right. Here’s what we learned:

    Fonts will render identically on different machines, but different machines can pick the wrong fonts. When we first tried getting these tests to run on Travis, we noticed small differences in the way fonts were rendered on different machines. We thought this was an insurmountable problem caused by the libraries installed on those machines. It turns out the machines were just picking the wrong fonts. After installing certain fonts during our Travis build, everything started working.

    Different versions of GD can generate slightly different images. GD is used in Piwik to, among other things, generate sparkline images. Different versions of GD will result in slightly different images. They look the same to the naked eye, but some pixels will have slightly different colors. This is, unfortunately, a problem we couldn’t solve. We couldn’t make sure that everyone who runs the tests uses the same version of GD, so instead we disabled sparklines for UI testing.

    What we learned about existing screenshot capturing tools

    We tried several screenshot-capturing tools before finding one that worked adequately. Here’s what we learned about them:

    • CutyCapt This is the first screenshot-capturing tool we tried. CutyCapt is a C++ program that uses QtWebKit to load and take a screenshot of a page. It can’t capture multiple screenshots in one run, and it can’t wait for all AJAX/images/JavaScript to complete or load (at least not currently).

    • PhantomJS This is the solution we eventually chose. PhantomJS is a headless scriptable browser that currently uses WebKit as its rendering engine.

      For the most part, PhantomJS is the best solution we found. It reliably renders screenshots, allows JavaScript to be injected into pages it loads, and since it essentially just runs JavaScript code that you provide, it can be made to do whatever you want.

    • SlimerJS SlimerJS is a clone of PhantomJS that uses Gecko as the rendering engine. It is meant to function similarly to PhantomJS. Unfortunately, due to some limitations hard-coded in Mozilla’s software, we couldn’t use it.

      For one, SlimerJS is not headless; apparently there is no way to achieve that when embedding Mozilla. You can run it through xvfb, but the fact that it has to create a window means odd things can happen. When using SlimerJS, we would sometimes end up with images where tooltips were displayed as if the mouse were hovering over an element. This inconsistency meant we couldn’t use it for our tests.

    One tool we didn’t try was Selenium Webdriver. Although Selenium is traditionally used to create tests that check for HTML content, it can be used to generate screenshots. (Note: PhantomJS supports using a remote WebDriver.)

    Our Future Plans for Screenshot Testing

    At the moment we render a couple dozen screenshots. We test how our PHP code, JavaScript code, and CSS make Piwik’s UI look, but we don’t test how it behaves. That is our next step.

    We want to create Screenshot Unit Tests for each UI control Piwik uses (for example, the Data Table View or the Site Selector). These tests would use the Widgetize plugin to load a control by itself, then execute JavaScript that simulates events and user behavior, and finally take a screenshot. This way we can test how our code handles clicks and hovers and all sorts of other behavior.

    Screenshot tests will make Piwik more stable and keep us agile, able to release early and often. Thank you for your support and for spreading the word about Piwik!

  • Converting RGB to YUV, + ffmpeg

    10 July 2012, by TheSHEEEP

    I am trying the following to record a live video from my Flash/AIR application:

    1. I take a "screenshot" (BitmapData from stage) each frame.
    2. I convert each pixel to YUV format like this (V2):

         var file:File = new File(_appUrl + "/creation/output.raw");
         var fs:FileStream = new FileStream();
         fs.open(file, FileMode.WRITE);
         var finalY:ByteArray = new ByteArray();
         var finalU:ByteArray = new ByteArray();
         var finalV:ByteArray = new ByteArray();
         var rect:Rectangle = new Rectangle(0, 0, 600, 700);
         var pixels:ByteArray;
         var pixel:uint;
         var r:uint;
         var g:uint;
         var b:uint;
         _screenBuffer = _screenBuffer.reverse();
         while (_screenBuffer.length > 0)
         {
             pixels = BitmapData(_screenBuffer.pop()).getPixels(rect);
             pixels.position = 0;
             // Convert and save each pixel
             for (var x:int = 0; x < 600; x++)
             {
                 for (var y:int = 0; y < 700; y++)
                 {
                     // Convert to yuv
                     pixel = pixels.readUnsignedInt();
                     r = pixel >> 16 & 0xff;
                     g = pixel >> 8 & 0xff;
                     b = pixel & 0xff;
                     // Y' is written for each pixel
                     finalY.writeByte(0.257 * r + 0.504 * g + 0.098 * b + 128);
                     // U and V are written once per 2x2 pixel block
                     if (x % 2 == 0 && y % 2 == 0)
                     {
                         finalU.writeByte(-0.148 * r - 0.291 * g + 0.439 * b + 128);
                         finalV.writeByte(0.439 * r - 0.368 * g - 0.071 * b + 128);
                     }
                 }
             }
         }
         // Write the converted bytes to the file
         finalY.position = 0;
         finalU.position = 0;
         finalV.position = 0;
         fs.writeBytes(finalY, 0, finalY.length);
         fs.writeBytes(finalU, 0, finalU.length);
         fs.writeBytes(finalV, 0, finalV.length);
         fs.close();
    3. I use the following ffmpeg command line to convert the raw file to a video:

      ffmpeg -r 30 -pix_fmt yuv420p -s 600x700 -vcodec rawvideo -f rawvideo -i output.raw -y test.mp4

    A video is created, but it is simply a mess, barely resembling what was recorded. I know that the capturing process works, as I have tried the same BitmapData "screenshots" with the SimpleFlvWriter.

    So either something is wrong with my conversion or with the ffmpeg command line, but I have no idea which.
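
    For reference: ffmpeg's rawvideo demuxer with -pix_fmt yuv420p expects each frame to be stored as a complete Y plane (width x height bytes) immediately followed by that frame's subsampled U and V planes ((width/2) x (height/2) bytes each), with successive frames simply concatenated, rather than all Y bytes for every frame first and all U/V bytes at the end of the file. Below is a minimal, illustrative C sketch of writing one such frame from row-major 24-bit RGB data, using the same BT.601 "studio swing" coefficients as the code above (note that the reference Y' formula adds 16, not 128):

      #include <stdint.h>
      #include <stdio.h>

      #define W 600
      #define H 700

      /* Write one yuv420p frame from row-major RGB24 input (3 bytes per
       * pixel). Per-frame layout: W*H Y' bytes, then (W/2)*(H/2) U bytes,
       * then (W/2)*(H/2) V bytes; frames are simply concatenated. */
      static void write_yuv420p_frame(FILE *out, const uint8_t *rgb)
      {
          static uint8_t y_plane[W * H];
          static uint8_t u_plane[(W / 2) * (H / 2)];
          static uint8_t v_plane[(W / 2) * (H / 2)];

          for (int row = 0; row < H; row++) {
              for (int col = 0; col < W; col++) {
                  const uint8_t *p = rgb + (row * W + col) * 3;
                  uint8_t r = p[0], g = p[1], b = p[2];

                  /* BT.601 "studio swing": Y' gets +16; U and V get +128. */
                  y_plane[row * W + col] =
                      (uint8_t)(16 + 0.257 * r + 0.504 * g + 0.098 * b);

                  /* One U and one V sample per 2x2 block of pixels. */
                  if (row % 2 == 0 && col % 2 == 0) {
                      int i = (row / 2) * (W / 2) + (col / 2);
                      u_plane[i] = (uint8_t)(128 - 0.148 * r - 0.291 * g + 0.439 * b);
                      v_plane[i] = (uint8_t)(128 + 0.439 * r - 0.368 * g - 0.071 * b);
                  }
              }
          }

          /* Y, then U, then V -- per frame, not grouped across the whole file. */
          fwrite(y_plane, 1, sizeof y_plane, out);
          fwrite(u_plane, 1, sizeof u_plane, out);
          fwrite(v_plane, 1, sizeof v_plane, out);
      }

    With frames laid out this way, the ffmpeg command above should produce a watchable video; a proper converter would also average each 2x2 block for U and V instead of sampling its top-left pixel, as this sketch does.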

    This is what ffmpeg outputs when creating the video; maybe it can help someone:

    libavutil      51. 39.100 / 51. 39.100
    libavcodec     54.  3.101 / 54.  3.101
    libavformat    54.  1.100 / 54.  1.100
    libavdevice    53.  4.100 / 53.  4.100
    libavfilter     2. 62.101 /  2. 62.101
    libswscale      2.  1.100 /  2.  1.100
    libswresample   0.  7.100 /  0.  7.100
    libpostproc    52.  0.100 / 52.  0.100
    [rawvideo @ 01D39FC0] Estimating duration from bitrate, this may be inaccurate
    Input #0, rawvideo, from 'output.raw':
     Duration: N/A, start: 0.000000, bitrate: N/A
       Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 600x700, 30 tbr, 30 tbn, 30 tbc
    [buffer @ 01D3FEC0] w:600 h:700 pixfmt:yuv420p tb:1/1000000 sar:0/1 sws_param:
    [libx264 @ 0375DB80] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE 4.2 AVX
    [libx264 @ 0375DB80] profile High, level 3.1
    [libx264 @ 0375DB80] 264 - core 120 r2146 bcd41db - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'test.mp4':
     Metadata:
       encoder         : Lavf54.1.100
       Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuv420p, 600x700, q=-1--1, 30 tbn, 30 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo -> libx264)
    Press [q] to stop, [?] for help
    Truncating packet of size 630000 to 1
    frame=   48 fps=  0 q=-1.0 Lsize=     157kB time=00:00:01.53 bitrate= 837.3kbits/s
    video:156kB audio:0kB global headers:0kB muxing overhead 0.687626%
    [libx264 @ 0375DB80] frame I:3     Avg QP:23.15  size: 23480
    [libx264 @ 0375DB80] frame P:38    Avg QP:28.80  size:  2169
    [libx264 @ 0375DB80] frame B:7     Avg QP:29.61  size:   833
    [libx264 @ 0375DB80] consecutive B-frames: 79.2%  4.2%  0.0% 16.7%
    [libx264 @ 0375DB80] mb I  I16..4: 41.4%  6.2% 52.4%
    [libx264 @ 0375DB80] mb P  I16..4: 10.6%  3.3%  0.9%  P16..4: 68.4%  1.3%  1.2% 0.0%  0.0%    skip:14.2%
    [libx264 @ 0375DB80] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8: 13.3%  2.2%  0.7% direct: 1.9%  skip:81.9%  L0:51.6% L1:47.4% BI: 1.0%
    [libx264 @ 0375DB80] 8x8 transform intra:16.7% inter:31.2%
    [libx264 @ 0375DB80] coded y,uvDC,uvAC intra: 14.7% 25.5% 22.3% inter: 1.0% 4.1% 3.4%
    [libx264 @ 0375DB80] i16 v,h,dc,p: 87% 11%  2%  0%
    [libx264 @ 0375DB80] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  3% 18% 75%  1%  0%  1%  1% 0%  0%
    [libx264 @ 0375DB80] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu:  6% 74% 12%  1%  1%  1%  2% 1%  2%
    [libx264 @ 0375DB80] i8c dc,h,v,p: 51% 45%  4%  1%
    [libx264 @ 0375DB80] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0375DB80] ref P L0:  4.6%  0.4% 94.6%  0.3%
    [libx264 @ 0375DB80] ref B L0: 96.0%  4.0%
    [libx264 @ 0375DB80] ref B L1: 96.5%  3.5%
    [libx264 @ 0375DB80] kb/s:793.39

    I'm not really a codec expert (just starting ;)), so I don't know what to make of most of that.

    Here is a zip that contains one of the frames and the video output. What should be visible is a green smiling pear, without any artifacts. Remember, the size is 600x700 and the format is yuv420p. Such raw image files are best viewed with IrfanView, IMO. Don't mind the noise; it's from pushing against my microphone ;)

  • android ffmpeg opengl es render movie

    18 janvier 2013, par broschb

    I am trying to render video via the NDK, to add some features that just aren't supported in the SDK. I am using FFmpeg to decode the video; I can compile it via the NDK, and used this as a starting point. I have modified that example: instead of using glDrawTexiOES to draw the texture, I have set up some vertices and am rendering the texture on top of them (the OpenGL ES way of rendering a quad).

    Below is what I am doing to render, but creating the texture with glTexImage2D is slow. I want to know if there is any way to speed this up, or give the appearance of speeding it up, such as setting up some textures in the background and rendering pre-setup textures. Or is there any other way to draw video frames to the screen faster on Android? Currently I can only get about 12 fps.

    glClear(GL_COLOR_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glBindTexture(GL_TEXTURE_2D, textureConverted);

     // this is slow
     glTexImage2D(GL_TEXTURE_2D,    /* target */
                  0,                /* level */
                  GL_RGBA,          /* internal format */
                  textureWidth,     /* width */
                  textureHeight,    /* height */
                  0,                /* border */
                  GL_RGBA,          /* format */
                  GL_UNSIGNED_BYTE, /* type */
                  pFrameConverted->data[0]);

    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    EDIT
    I changed my code to initialize the texture with glTexImage2D only once and to update it with glTexSubImage2D, but it didn't improve the framerate much.
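
    (For reference, that allocate-once / update-per-frame pattern looks roughly like the sketch below, reusing the names from the code above; glTexSubImage2D overwrites existing texture storage instead of reallocating it.)

     /* One-time setup: allocate texture storage without uploading pixels. */
     glBindTexture(GL_TEXTURE_2D, textureConverted);
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight,
                  0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

     /* Per frame: update the existing storage in place. */
     glBindTexture(GL_TEXTURE_2D, textureConverted);
     glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight,
                     GL_RGBA, GL_UNSIGNED_BYTE, pFrameConverted->data[0]);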

    I then modified the code to fill a native Bitmap object on the NDK side. With this approach, a background thread processes the next frames and populates the bitmap object natively. I think this has potential, but I need to speed up the conversion of the AVFrame object from FFmpeg into a native bitmap. Below is what I currently use to convert, a brute-force approach. Is there any way to speed up or optimize this conversion?

     static void fill_bitmap(AndroidBitmapInfo *info, void *pixels, AVFrame *pFrame)
     {
         uint8_t *frameLine;
         int yy;

         for (yy = 0; yy < info->height; yy++) {
             uint8_t *line = (uint8_t *)pixels;
             frameLine = (uint8_t *)pFrame->data[0] + (yy * pFrame->linesize[0]);

             int xx;
             for (xx = 0; xx < info->width; xx++) {
                 int out_offset = xx * 4;
                 int in_offset = xx * 3;

                 line[out_offset] = frameLine[in_offset];
                 line[out_offset + 1] = frameLine[in_offset + 1];
                 line[out_offset + 2] = frameLine[in_offset + 2];
                 line[out_offset + 3] = 0;
             }
             pixels = (char *)pixels + info->stride;
         }
     }
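
    One avenue that may be worth trying for the conversion itself: FFmpeg's own libswscale performs exactly this kind of pixel-format conversion in heavily optimized C/assembly and can write straight into the bitmap's memory. A sketch, assuming pFrame holds RGB24 data as in the loop above (on older FFmpeg builds the constants are spelled PIX_FMT_RGB24 / PIX_FMT_RGBA instead of AV_PIX_FMT_*):

     #include <stdint.h>
     #include <android/bitmap.h>
     #include <libavcodec/avcodec.h>
     #include <libswscale/swscale.h>

     static void fill_bitmap_swscale(AndroidBitmapInfo *info, void *pixels,
                                     AVFrame *pFrame, int srcW, int srcH)
     {
         static struct SwsContext *ctx = NULL;

         /* Create (or reuse) a converter from RGB24 to RGBA at bitmap size. */
         ctx = sws_getCachedContext(ctx,
                                    srcW, srcH, AV_PIX_FMT_RGB24,
                                    info->width, info->height, AV_PIX_FMT_RGBA,
                                    SWS_FAST_BILINEAR, NULL, NULL, NULL);

         uint8_t *dst_data[4] = { (uint8_t *)pixels, NULL, NULL, NULL };
         int dst_linesize[4] = { (int)info->stride, 0, 0, 0 };

         /* Convert the whole frame directly into the bitmap's pixel buffer. */
         sws_scale(ctx, (const uint8_t *const *)pFrame->data, pFrame->linesize,
                   0, srcH, dst_data, dst_linesize);
     }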