
Media (91)

Other articles (67)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can, of course, add your own using the form at the bottom of the page.

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, in particular: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site and around MediaSPIP in general, aims to avoid reference to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (8783)

  • How to write UI tests for your plugin – Introducing the Piwik Platform

    18 February 2015, by Thomas Steur — Development

    This is the next post in our blog series where we introduce the capabilities of the Piwik platform (our previous post was How to write unit tests for your plugin). This time you’ll learn how to write UI tests in Piwik. For this tutorial you will need basic knowledge of JavaScript and the Piwik platform.

    What is a UI test?

    Some might know a UI test under the term ‘CSS test’ or ‘screenshot test’. When we speak of UI tests, we mean automated tests that capture a screenshot of a URL and then compare the result with an expected image. If the images are not exactly the same, the test will fail. For more information, read our blog post about UI Testing.

    What is a UI test good for?

    We use them to test our PHP controllers, Twig templates, CSS, and, indirectly, our JavaScript. We usually do not write unit or integration tests for our controllers. For example, we use UI tests to ensure that the installation, login and update processes work as expected. We also have tests for most pages, reports, settings, etc. This increases the quality of our product and saves us a lot of time, as such tests are easy to write and maintain. All UI tests are executed on Travis after each commit and compared with our expected screenshots.

    Getting started

    In this post, we assume that you have already installed Piwik 2.11.0 or later via git, set up your development environment and created a plugin. If not, visit the Piwik Developer Zone, where you’ll find the tutorial Setting up Piwik and other guides that help you develop a plugin.

    Next, you need to install the packages required to execute UI tests.

    Let’s create a UI test

    We start by using the Piwik Console to create a new UI test:

    ./console generate:test --testtype ui

    The command will ask you to enter the name of the plugin the created test should belong to. I will use the plugin name “Widgetize”. Next, it will ask you for the name of the test. Here you usually enter the name of the page or report you want to test; I will use the name “WidgetizePage” in this example. There should now be a file plugins/Widgetize/tests/UI/WidgetizePage_spec.js which already contains an example to get you started easily:

    describe("WidgetizePage", function () {
       var generalParams = 'idSite=1&period=day&date=2010-01-03';

       it('should load a simple page by its module and action', function (done) {
           var screenshotName = 'simplePage';
           // will save image in "processed-ui-screenshots/WidgetizePageTest_simplePage.png"

           expect.screenshot(screenshotName).to.be.capture(function (page) {
               var urlToTest = "?" + generalParams + "&module=Widgetize&action=index";
               page.load(urlToTest);
           }, done);
       });
    });

    What is happening here?

    This example declares a new set of specs by calling the method describe(name, callback) and, within that, a new spec by calling the method it(description, func). Within the spec we load a URL and, once it has loaded, capture a screenshot of the whole page. The captured screenshot will be saved under the defined screenshotName. You might have noticed we write our UI tests in BDD style.

    Capturing only a part of the page

    It is good practice not to always capture the full page. For example, many pages contain a menu, and if you change that menu, all your screenshot tests would fail. To avoid this, you would instead have a separate test for the menu. To capture only part of the page, simply specify a jQuery selector and call the method captureSelector instead of capture:

    var contentSelector = '#selector1, .selector2 .selector3';
    // Only the content of both selectors will be visible in the captured screenshot
    expect.screenshot('page_partial').to.be.captureSelector(contentSelector, function (page) {
       page.load(urlToTest);
    }, done);

    Hiding content

    There is a known issue with sparklines that can make tests fail randomly. Version numbers or dates that change from time to time can also make tests fail without there actually being an error. To avoid this, you can hide such elements in the captured screenshot via CSS, as we add a CSS class called uiTest to the HTML element while tests are running.

    .uiTest .version { visibility:hidden }
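
    If a stylesheet rule is not enough, for instance when the changing content is injected dynamically, the test itself can hide it right before the capture. The sketch below is only an illustration: the page.evaluate() hook and the .version selector are assumptions and may differ from the helpers actually available in your Piwik version.

    expect.screenshot('simplePage_noVersion').to.be.capture(function (page) {
       page.load(urlToTest);
       // assumption: evaluate() runs the given function inside the loaded page
       page.evaluate(function () {
           $('.version').hide(); // hide content that changes between test runs
       });
    }, done);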

    Running a test

    To run the previously generated tests, we will use the command tests:run-ui:

    ./console tests:run-ui WidgetizePage

    After running the tests for the first time, you will notice a new folder plugins/PLUGINNAME/tests/UI/processed-ui-screenshots in your plugin. If everything worked, there will be an image for every captured screenshot. If you’re happy with the result, copy the files over to the expected-ui-screenshots folder; otherwise adjust your test until you get the result you want. From now on, the newly captured screenshots will be compared with the expected images whenever you execute the tests.

    Fixing a test

    At some point your UI test will fail, for example due to expected CSS changes. To fix a test, all you have to do is copy the captured screenshot from the folder processed-ui-screenshots to the folder expected-ui-screenshots.

    Executing the UI tests on Travis

    In case you have not generated a .travis.yml file for your plugin yet, you can do so by executing the following command:

    ./console generate:travis-yml --plugin PLUGINNAME

    Next you have to activate Travis for your repository.

    Advanced features

    Isn’t it easy to create a UI test? We never even created a file! Of course you can accomplish even more if you want. For example, you can specify a fixture to be inserted before running the tests, which is useful when your plugin requires custom data. You can also control the browser as if it were a human: clicking, moving the mouse, typing text, etc. If you want to discover more features, have a look at our existing test cases.
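
    As a rough sketch of those advanced features, a spec combining a fixture with user-like interactions could look like the following. The fixture class name, the selectors and the interaction helpers (page.click(), page.sendKeys()) are assumptions modelled on existing Piwik UI tests, not an exact API reference; check the existing test cases for the definitive method names.

    describe("WidgetizePageInteractions", function () {
       // assumption: declaring a fixture class makes Piwik set up test data before the run
       this.fixture = "Piwik\\Plugins\\Widgetize\\tests\\Fixtures\\WidgetizeFixture";

       var generalParams = 'idSite=1&period=day&date=2010-01-03';

       it('should open a widget after interacting with the page', function (done) {
           expect.screenshot('widget_opened').to.be.captureSelector('#content', function (page) {
               page.load("?" + generalParams + "&module=Widgetize&action=index");
               // assumption: these helpers drive the browser like a user would
               page.click('.widgetpreview-categorylist > li:first');
               page.sendKeys('#search', 'Visits over time');
           }, done);
       });
    });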

    If you have any feedback regarding our APIs or our guides in the Developer Zone feel free to send it to us.

  • Rendering YUV420P ffmpeg decoded images on QT with OpenGL, only see black screen

    17 February 2019, by Lucas Zanella

    I’ve found this Qt OpenGL widget which should render a YUV420P image on screen. I’m feeding an ffmpeg-decoded buffer into its paintGL() function, but I see nothing: no noise, no correct image, only a black screen. I’m trying to understand why.

    I want to rule out other things being wrong, but first I need to be sure that this code will produce anything at all. I printed (std::cout) some bytes from the ffmpeg output just to see whether they were arriving, and they were, so I should at least see some noise.

    Can you see anything wrong with my code that would prevent it from rendering images on screen?

    This is the widget that should output the image:

    #include "XVideoWidget.h"
    #include <qdebug>
    #include <qtimer>
    #include <iostream>
    //自动加双引号
    #define GET_STR(x) #x
    #define A_VER 3
    #define T_VER 4

    // vertex shader
    const char *vString = GET_STR(
       attribute vec4 vertexIn;
       attribute vec2 textureIn;
       varying vec2 textureOut;
       void main(void)
       {
           gl_Position = vertexIn;
           textureOut = textureIn;
       }
    );


    // fragment shader
    const char *tString = GET_STR(
       varying vec2 textureOut;
       uniform sampler2D tex_y;
       uniform sampler2D tex_u;
       uniform sampler2D tex_v;
       void main(void)
       {
           vec3 yuv;
           vec3 rgb;
           yuv.x = texture2D(tex_y, textureOut).r;
           yuv.y = texture2D(tex_u, textureOut).r - 0.5;
           yuv.z = texture2D(tex_v, textureOut).r - 0.5;
           rgb = mat3(1.0, 1.0, 1.0,
               0.0, -0.39465, 2.03211,
               1.13983, -0.58060, 0.0) * yuv;
           gl_FragColor = vec4(rgb, 1.0);
       }

    );



    // prepare YUV data
    // ffmpeg -i v1080.mp4 -t 10 -s 240x128 -pix_fmt yuv420p  out240x128.yuv
    XVideoWidget::XVideoWidget(QWidget * parent)
    {
      // setWindowFlags (Qt::WindowFullscreenButtonHint);
     //  showFullScreen();

    }

    XVideoWidget::~XVideoWidget()
    {
    }

    // initialize OpenGL
    void XVideoWidget::initializeGL()
    {
       //qDebug() << "initializeGL";
       std::cout << "initializing gl" << std::endl;
       // initialize the OpenGL functions (inherited from QOpenGLFunctions)
       initializeOpenGLFunctions();

       this->m_F  = QOpenGLContext::currentContext()->functions();

       // load the vertex and fragment shader sources into the program
       // fragment (pixel) shader
       std::cout << program.addShaderFromSourceCode(QOpenGLShader::Fragment, tString) << std::endl;
       // vertex shader
       std::cout << program.addShaderFromSourceCode(QOpenGLShader::Vertex, vString) << std::endl;

       // bind the vertex coordinate attribute
       program.bindAttributeLocation("vertexIn",A_VER);

       // bind the texture coordinate attribute
       program.bindAttributeLocation("textureIn",T_VER);

       // link (compile) the shaders
       std::cout << "program.link() = " << program.link() << std::endl;

       std::cout << "program.bind() = " << program.bind() << std::endl;

       // pass the vertex and texture coordinates
       // vertices
       static const GLfloat ver[] = {
           -1.0f,-1.0f,
           1.0f,-1.0f,
           -1.0f, 1.0f,
           1.0f,1.0f
       };

       // texture coordinates
       static const GLfloat tex[] = {
           0.0f, 1.0f,
           1.0f, 1.0f,
           0.0f, 0.0f,
           1.0f, 0.0f
       };

       // vertices
       glVertexAttribPointer(A_VER, 2, GL_FLOAT, 0, 0, ver);
       glEnableVertexAttribArray(A_VER);

       // texture coordinates
       glVertexAttribPointer(T_VER, 2, GL_FLOAT, 0, 0, tex);
       glEnableVertexAttribArray(T_VER);

       //glUseProgram(&program);
       // get the texture uniform locations from the shader
       unis[0] = program.uniformLocation("tex_y");
       unis[1] = program.uniformLocation("tex_u");
       unis[2] = program.uniformLocation("tex_v");

       // create the textures
       glGenTextures(3, texs);

       //Y
       glBindTexture(GL_TEXTURE_2D, texs[0]);
       // magnification filter, linear interpolation (GL_NEAREST is faster but very blocky)
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
       // allocate texture storage on the GPU
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, 0);

       //U
       glBindTexture(GL_TEXTURE_2D, texs[1]);
       // magnification filter, linear interpolation
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
       // allocate texture storage on the GPU
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width/2, height / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);

       //V
       glBindTexture(GL_TEXTURE_2D, texs[2]);
       // magnification filter, linear interpolation
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
       // allocate texture storage on the GPU
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width / 2, height / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);

       // allocate CPU-side buffers for the texture data
       datas[0] = new unsigned char[width*height];     //Y
       datas[1] = new unsigned char[width*height/4];   //U
       datas[2] = new unsigned char[width*height/4];   //V
    }

    // refresh the display
    void XVideoWidget::paintGL(unsigned char**data)
    //void QFFmpegGLWidget::updateData(unsigned char**data)
    {
       std::cout &lt;&lt; "painting!" &lt;&lt; std::endl;
       memcpy(datas[0], data[0], width*height);
       memcpy(datas[1], data[1], width*height/4);
       memcpy(datas[2], data[2], width*height/4);

       glActiveTexture(GL_TEXTURE0);
       glBindTexture(GL_TEXTURE_2D, texs[0]); // bind texture unit 0 to the Y texture
       // update the texture contents (copy from memory)
       glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, datas[0]);
       // associate with the shader uniform
       glUniform1i(unis[0], 0);


       glActiveTexture(GL_TEXTURE0+1);
       glBindTexture(GL_TEXTURE_2D, texs[1]); // bind texture unit 1 to the U texture
       // update the texture contents (copy from memory)
       glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width/2, height / 2, GL_RED, GL_UNSIGNED_BYTE, datas[1]);
       // associate with the shader uniform
       glUniform1i(unis[1],1);


       glActiveTexture(GL_TEXTURE0+2);
       glBindTexture(GL_TEXTURE_2D, texs[2]); // bind texture unit 2 to the V texture
       // update the texture contents (copy from memory)
       glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2, GL_RED, GL_UNSIGNED_BYTE, datas[2]);
       // associate with the shader uniform
       glUniform1i(unis[2], 2);

       glDrawArrays(GL_TRIANGLE_STRIP,0,4);
       qDebug() &lt;&lt; "paintGL";
    }


    // window resize
    void XVideoWidget::resizeGL(int width, int height)
    {
       m_F->glViewport(0, 0, width, height);

       qDebug() &lt;&lt; "resizeGL "&lt;code></iostream></qtimer></qdebug>

    Here’s a bit of code from my MainWindow:

    MainWindow::MainWindow(QWidget *parent):
       QMainWindow(parent)
       {
           FfmpegDecoder* ffmpegDecoder = new FfmpegDecoder();
           if(!ffmpegDecoder->Init()) {
               std::cout &lt;&lt; "problem with ffmpeg decoder init"  &lt;&lt; std::endl;
           } else {
               std::cout &lt;&lt; "fmmpeg decoder initiated"  &lt;&lt; std::endl;
           }
           XVideoWidget * xVideoWidget = new XVideoWidget(parent);
           ffmpegDecoder->setOpenGLWidget(xVideoWidget);

           mediaStream = new MediaStream(uri, ffmpegDecoder, videoConsumer);//= new MediaStream(uri, ffmpegDecoder, videoConsumer);
           //...
       }
       void MainWindow::run()
       {
           mediaStream->receiveFrame();
       }

    My main.cpp makes sure my window’s run() method runs in a background thread.

       MainWindow w;
       w.setFixedSize(1280,720);
       w.show();
       boost::thread mediaThread(&MainWindow::run, &w);
       std::cout << "mediaThread running" << std::endl;

    If someone wants to view the entire code, please feel free to visit the commit I just made: https://github.com/lucaszanella/orwell/tree/bbd74e42bd42df685bacc5d51cacbee3a178689f