
Media (91)

Other articles (48)

  • Customize by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Showcase changes to your MédiaSPIP, or news about your projects, using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: for a document of type "news item", the default fields are: publication date (customize the publication date) (...)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each remains individually configurable by changing the minimum status required to use it, notably: writing content on the site, adjustable in the form template settings; adding notes to articles; adding captions and annotations to images;

On other sites (5836)

  • Video Overlay not returning to black once video stopped

    24 September 2020, by Matt Nelson

    I am combining multiple videos from a WebRTC call, aligning the "user-terminal" videos to the left and the "user-visitor" videos to the right. As the visitor feed can start and stop, there are multiple visitor videos, so I'm offsetting them by their timestamps.
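    (The adelay values in the command below come straight from the microsecond timestamps in the filenames: the first visitor file begins 1600953592694430 - 1600953586531366 = 6163064 µs ≈ 6163 ms after the terminal file, matching adelay=6163, and likewise for the others.)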

    


    This is working, however crazy it looks!

    


    The one last issue I have is that when the first visitor video stops on the right, it shows the last frame of that video until the next video starts on the left. Can I have it return to the black background?

    


    Here is the command passed to ffmpeg:

    


    ffmpeg -y \
    -i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-terminal-1600953586531366-audio.mjr.opus \
    -i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953592694430-audio.mjr.opus \
    -i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953609873223-audio.mjr.opus \
    -i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953628668227-audio.mjr.opus \
    -i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953663905342-audio.mjr.opus \
    -i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-MasterTerminal-52350116-1600953681107272-audio.mjr.opus \
    -i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953697832165-audio.mjr.opus \
    -i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-MasterTerminal-52350116-1600953723320364-audio.mjr.opus \
    -i /recordings/process/5f6c9c3/videoroom-5f6c9c3-user-visitor-1600953725307043-audio.mjr.opus \
    -filter_complex "[1:a]adelay=6163|6163[1adelay];[2:a]adelay=23341|23341[2adelay];[3:a]adelay=42136|42136[3adelay];[4:a]adelay=77373|77373[4adelay];[5:a]adelay=94575|94575[5adelay];[6:a]adelay=111300|111300[6adelay];[7:a]adelay=136788|136788[7adelay];[8:a]adelay=138775|138775[8adelay];[0:a][1adelay][2adelay][3adelay][4adelay][5adelay][6adelay][7adelay][8adelay]amix=inputs=9:duration=longest[a]" \
    -map "[a]" -ac 2 /recordings/process/5f6c9c3/5f6c9c3.mp3
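
    An aside, not from the original post: the pasted command only covers the audio mix, but the symptom described above matches the default behaviour of ffmpeg's overlay video filter, which keeps repeating the last frame of an overlaid input after that input ends (eof_action=repeat). Setting eof_action=pass drops the overlay at end-of-stream so the background shows through again. A minimal sketch with hypothetical input names:

    # black.mp4 (background canvas) and visitor1.mp4 (right-hand feed) are
    # hypothetical stand-ins for the real inputs. eof_action=pass removes the
    # overlay once visitor1.mp4 ends, revealing the black background again.
    ffmpeg -i black.mp4 -i visitor1.mp4 \
      -filter_complex "[0:v][1:v]overlay=x=W/2:y=0:eof_action=pass[v]" \
      -map "[v]" out.mp4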


    


  • Rendering YUV420P ffmpeg decoded images on QT with OpenGL, only see black screen

    17 February 2019, by Lucas Zanella

    I've found this Qt OpenGL widget which should render a YUV420P image on screen. I'm feeding an ffmpeg-decoded buffer into its paintGL() function, but I see nothing: no noise, no correct image, only a black screen. I'm trying to understand why.

    I want to rule out other possible causes, but first I need to be sure my code can produce anything at all. I printed some bytes from the ffmpeg buffer with std::cout just to check they were arriving, and they were. So I should at least see some noise.

    Can you see anything wrong with my code that would prevent it from rendering images on screen?

    This is the widget that should output the image:

    #include "XVideoWidget.h"
    #include <qdebug>
    #include <qtimer>
    #include <iostream>
    //自动加双引号
    #define GET_STR(x) #x
    #define A_VER 3
    #define T_VER 4

    //顶点shader
    const char *vString = GET_STR(
       attribute vec4 vertexIn;
       attribute vec2 textureIn;
       varying vec2 textureOut;
       void main(void)
       {
           gl_Position = vertexIn;
           textureOut = textureIn;
       }
    );


    // fragment shader
    const char *tString = GET_STR(
       varying vec2 textureOut;
       uniform sampler2D tex_y;
       uniform sampler2D tex_u;
       uniform sampler2D tex_v;
       void main(void)
       {
           vec3 yuv;
           vec3 rgb;
           yuv.x = texture2D(tex_y, textureOut).r;
           yuv.y = texture2D(tex_u, textureOut).r - 0.5;
           yuv.z = texture2D(tex_v, textureOut).r - 0.5;
           rgb = mat3(1.0, 1.0, 1.0,
               0.0, -0.39465, 2.03211,
               1.13983, -0.58060, 0.0) * yuv;
           gl_FragColor = vec4(rgb, 1.0);
       }

    );



    // prepare the YUV data
    // ffmpeg -i v1080.mp4 -t 10 -s 240x128 -pix_fmt yuv420p  out240x128.yuv
    XVideoWidget::XVideoWidget(QWidget * parent)
    {
      // setWindowFlags (Qt::WindowFullscreenButtonHint);
     //  showFullScreen();

    }

    XVideoWidget::~XVideoWidget()
    {
    }

    // initialize OpenGL
    void XVideoWidget::initializeGL()
    {
       //qDebug() << "initializeGL";
       std::cout << "initializing gl" << std::endl;
       // initialize the OpenGL functions (inherited from QOpenGLFunctions)
       initializeOpenGLFunctions();

       this->m_F  = QOpenGLContext::currentContext()->functions();

       // load the vertex and fragment shader sources into the program
       // fragment (pixel) shader
       std::cout << program.addShaderFromSourceCode(QOpenGLShader::Fragment, tString) << std::endl;
       // vertex shader
       std::cout << program.addShaderFromSourceCode(QOpenGLShader::Vertex, vString) << std::endl;

       // bind the vertex coordinate attribute
       program.bindAttributeLocation("vertexIn",A_VER);

       // bind the texture coordinate attribute
       program.bindAttributeLocation("textureIn",T_VER);

       // link (compile) the shaders
       std::cout << "program.link() = " << program.link() << std::endl;

       std::cout << "program.bind() = " << program.bind() << std::endl;

       // pass the vertex and texture coordinates
       // vertices
       static const GLfloat ver[] = {
           -1.0f,-1.0f,
           1.0f,-1.0f,
           -1.0f, 1.0f,
           1.0f,1.0f
       };

       // texture coordinates
       static const GLfloat tex[] = {
           0.0f, 1.0f,
           1.0f, 1.0f,
           0.0f, 0.0f,
           1.0f, 0.0f
       };

       // vertices
       glVertexAttribPointer(A_VER, 2, GL_FLOAT, 0, 0, ver);
       glEnableVertexAttribArray(A_VER);

       // texture coordinates
       glVertexAttribPointer(T_VER, 2, GL_FLOAT, 0, 0, tex);
       glEnableVertexAttribArray(T_VER);

       //glUseProgram(&program);
       // get the texture uniform locations from the shader
       unis[0] = program.uniformLocation("tex_y");
       unis[1] = program.uniformLocation("tex_u");
       unis[2] = program.uniformLocation("tex_v");

       // create the textures
       glGenTextures(3, texs);

       //Y
       glBindTexture(GL_TEXTURE_2D, texs[0]);
       // magnification filter: linear interpolation (GL_NEAREST is faster but very blocky)
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
       // allocate texture storage on the GPU
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, 0);

       //U
       glBindTexture(GL_TEXTURE_2D, texs[1]);
       // magnification filter: linear interpolation
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
       // allocate texture storage on the GPU
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width/2, height / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);

       //V
       glBindTexture(GL_TEXTURE_2D, texs[2]);
       // magnification filter: linear interpolation
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
       // allocate texture storage on the GPU
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width / 2, height / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);

       /// allocate host-side memory for the texture planes
       datas[0] = new unsigned char[width*height];     //Y
       datas[1] = new unsigned char[width*height/4];   //U
       datas[2] = new unsigned char[width*height/4];   //V
    }

    // refresh the display
    void XVideoWidget::paintGL(unsigned char**data)
    //void QFFmpegGLWidget::updateData(unsigned char**data)
    {
       std::cout << "painting!" << std::endl;
       memcpy(datas[0], data[0], width*height);
       memcpy(datas[1], data[1], width*height/4);
       memcpy(datas[2], data[2], width*height/4);

       glActiveTexture(GL_TEXTURE0);
       glBindTexture(GL_TEXTURE_2D, texs[0]); // texture unit 0 is bound to the Y texture
       // update the texture contents (copy from memory)
       glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, datas[0]);
       // associate it with the shader uniform
       glUniform1i(unis[0], 0);


       glActiveTexture(GL_TEXTURE0+1);
       glBindTexture(GL_TEXTURE_2D, texs[1]); // texture unit 1 is bound to the U texture
       // update the texture contents (copy from memory)
       glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width/2, height / 2, GL_RED, GL_UNSIGNED_BYTE, datas[1]);
       // associate it with the shader uniform
       glUniform1i(unis[1],1);


       glActiveTexture(GL_TEXTURE0+2);
       glBindTexture(GL_TEXTURE_2D, texs[2]); // texture unit 2 is bound to the V texture
       // update the texture contents (copy from memory)
       glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2, GL_RED, GL_UNSIGNED_BYTE, datas[2]);
       // associate it with the shader uniform
       glUniform1i(unis[2], 2);

       glDrawArrays(GL_TRIANGLE_STRIP,0,4);
       qDebug() << "paintGL";
    }


    // window resize
    void XVideoWidget::resizeGL(int width, int height)
    {
       m_F->glViewport(0, 0, width, height);

       qDebug() << "resizeGL " << width << "x" << height;
    }

    Here's a bit of code from my MainWindow:

    MainWindow::MainWindow(QWidget *parent):
       QMainWindow(parent)
       {
           FfmpegDecoder* ffmpegDecoder = new FfmpegDecoder();
           if(!ffmpegDecoder->Init()) {
               std::cout &lt;&lt; "problem with ffmpeg decoder init"  &lt;&lt; std::endl;
           } else {
               std::cout &lt;&lt; "fmmpeg decoder initiated"  &lt;&lt; std::endl;
           }
           XVideoWidget * xVideoWidget = new XVideoWidget(parent);
           ffmpegDecoder->setOpenGLWidget(xVideoWidget);

           mediaStream = new MediaStream(uri, ffmpegDecoder, videoConsumer);//= new MediaStream(uri, ffmpegDecoder, videoConsumer);
           //...
       }
       void MainWindow::run()
       {
           mediaStream->receiveFrame();
       }

    My main.cpp makes sure my window's run() method runs in the background:

       MainWindow w;
       w.setFixedSize(1280,720);
       w.show();
   boost::thread mediaThread(&MainWindow::run, &w);
   std::cout << "mediaThread running" << std::endl;

    If someone wants to view the entire code, please feel free to visit the commit I just did: https://github.com/lucaszanella/orwell/tree/bbd74e42bd42df685bacc5d51cacbee3a178689f
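
    An aside, not from the original post: Qt only ever calls the parameterless QOpenGLWidget::paintGL() override, and only on the GUI thread, so a paintGL(unsigned char**) overload like the one above is never invoked by Qt's paint pipeline; likewise, driving GL or widget calls directly from the boost::thread is undefined in Qt. A minimal sketch of the usual pattern, where setFrame() is a hypothetical helper, not from the post:

    // Sketch under the assumptions above. The decoder thread stores the planes,
    // then schedules a repaint that runs on the GUI thread.
    void XVideoWidget::setFrame(unsigned char **data)   // hypothetical helper
    {
       memcpy(datas[0], data[0], width * height);       // Y plane
       memcpy(datas[1], data[1], width * height / 4);   // U plane
       memcpy(datas[2], data[2], width * height / 4);   // V plane
       // queue update() on the GUI thread; Qt will then call paintGL()
       QMetaObject::invokeMethod(this, "update", Qt::QueuedConnection);
    }

    // The parameterless override is the one Qt actually calls.
    void XVideoWidget::paintGL()
    {
       glActiveTexture(GL_TEXTURE0);
       glBindTexture(GL_TEXTURE_2D, texs[0]);
       glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, datas[0]);
       glUniform1i(unis[0], 0);
       // ...repeat for the U and V planes, as in the original paintGL above...
       glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }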

  • convert camera-stream to a MJPEG + RTSP stream

    15 July 2017, by manman

    I am trying to convert a video stream from a Ubiquiti camera to an RTSP stream encoded as MJPEG. I tried it with ffserver, but it didn't work out. Up front: a Windows solution would be more suitable in this case, so if anyone knows good Windows software for this, please tell me.

    Now to my setup:
    For testing purposes, I used an Ubuntu desktop VM and installed the ffmpeg package, including ffserver, via apt-get install ffmpeg.
    Afterwards I used the preconfigured feed (feed1.ffm) to send the data to ffserver with ffmpeg:

    ffmpeg -i rtsp://[Camera-Url] -strict -2 http://localhost:8090/feed1.ffm

    and configured a new Stream in ffserver.conf

    <Stream jpgvideo.sav>
     Format rtsp
     Feed feed1.ffm
     VideoCodec mjpeg
     VideoFrameRate 5
     VideoIntraOnly
     VideoSize 352x240
     NoAudio
    </Stream>

    I then tested the stream in VLC with the following URLs, but none of them worked:

    rtsp://127.0.0.1/jpgvideo.sav
    rtsp://127.0.0.1:5454/jpgvideo.sav
    rtsp://127.0.0.1:5454/jpgvideo.sav.rtsp
    rtsp://127.0.0.1:5454/jpgvideo.rtsp

    Does someone know why? What am I missing here?
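
    An aside, not from the original post, and a hedged observation based on ffserver's documentation: RTSP-served streams were declared with Format rtp (RTSP is only the control protocol; the media itself goes out as RTP), and the RTSP listener had to be enabled with an RTSPPort entry in the global section. A sketch of the relevant pieces, assuming port 5454 as in the URLs above:

    RTSPPort 5454

    <Stream jpgvideo.sav>
     Format rtp
     Feed feed1.ffm
     VideoCodec mjpeg
     VideoFrameRate 5
     VideoIntraOnly
     VideoSize 352x240
     NoAudio
    </Stream>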


    Note that another, non-RTSP stream works just fine:

    <Stream test.mjpg>
     Feed feed1.ffm
     Format mpjpeg
     VideoFrameRate 5
     VideoIntraOnly
     VideoSize 352x240
     NoAudio
    </Stream>

    URL:

    http://127.0.0.1:8090/test.mjpg

    In case anybody wonders:
    I am trying to get a video stream onto a Cisco SPA525G2 IP phone. This is officially supported only for Cisco cameras, but according to this link it should also be possible if the stream is Cisco-camera-like (RTSP + MJPEG, 5 fps).
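
    An aside beyond the original post: ffserver was dropped from FFmpeg in 2018, so on current installs a common substitute is to push the transcoded stream to a standalone RTSP server such as MediaMTX (formerly rtsp-simple-server). A sketch, assuming such a server is already listening on localhost:8554:

    # Re-encode the camera feed to 5 fps, 352x240 MJPEG and publish it over RTSP.
    # The server address and path are assumptions, not from the original post.
    ffmpeg -i rtsp://[Camera-Url] -an -c:v mjpeg -q:v 5 -r 5 -s 352x240 \
      -f rtsp rtsp://localhost:8554/jpgvideo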