
Media (3)
- Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
- Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
- Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (67)
- Emballe médias: what is it for?
4 February 2011
This plugin manages sites for publishing uploaded documents of all types.
It creates "media" items, meaning that a "media" is a SPIP article created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article; (...)
- The plugin: Gestion de la mutualisation
2 March 2010
The Gestion de mutualisation plugin manages the different MediaSPIP channels from a master site. Its purpose is to provide a pure SPIP solution to replace the previous one.
Basic installation
Install the SPIP files on the server.
Then add the "mutualisation" plugin at the root of the site, as described here.
Customise the central mes_options.php file as you wish. As an example, here is the one from the mediaspip.net platform:
<?php (...)
- Managing the farm
2 March 2010
The farm as a whole is managed by "super admins".
Certain settings can be adjusted to balance the needs of the different channels.
First of all, it relies on the "Gestion de mutualisation" plugin
On other sites (8251)
- Real time livestreaming - RPI
24 April 2022, by Victor
I work at a telehealth company. We use connected medical devices to provide the doctor with real-time information from this equipment; the equipment is operated by a trained health professional.


These devices produce video and audio. Right now we use them over PeerJS (a peer-to-peer connection), but we are trying to move away from that and use an RPI whose only job is to stream the data (audio and video).


Because the equipment is meant to be used under a doctor's instructions, the doctor needs to receive the data in real time.


But the trained health professional also needs to see what they are doing, so we need a local feed from the equipment.


How do we capture audio and video


We are using ffmpeg with a Go client that manages the ffmpeg processes and streams them to an SRS server.
This works, but we see a 2-3 second delay when streaming the data (RTMP out of ffmpeg, FLV playback on the front end).


ffmpeg settings:


("ffmpeg", "-f", "v4l2", `-i`, "*/video0", "-f", "flv", "-vcodec", "libx264", "-x264opts", "keyint=15", "-preset", "ultrafast", "-tune", "zerolatency", "-fflags", "nobuffer", "-b:a", "160k", "-threads", "0", "-g", "0", "rtmp://srs-url")



My questions


- Is there a way for this setup to achieve low latency (< 1 sec), for both the nurse and the doctor?
- Is the way I want to achieve this good? Is there a better way?


Flow schema: data exchange and use case flow (diagrams not reproduced here).
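
For the latency question above, most of the 2-3 seconds typically comes from the GOP length and from RTMP/FLV buffering on the player side rather than from the capture itself. Below is a minimal, hypothetical C++ sketch of a capture launcher with flags aimed at reducing encoder and muxer buffering; the device path, the SRS URL and the exact flag values are assumptions rather than settings taken from the post (the Go client could pass the same flags), and getting reliably under one second usually also means replacing the FLV playback leg with something like WebRTC, which recent SRS releases support.

// low_latency_capture.cpp -- hypothetical sketch, not the poster's Go client.
// Spawns ffmpeg with low-latency-oriented flags; device path and SRS URL are placeholders.
#include <cstdio>
#include <cstdlib>
#include <string>

int main() {
    const std::string cmd =
        "ffmpeg -f v4l2 -i /dev/video0 "                      // capture the device (placeholder path)
        "-c:v libx264 -preset ultrafast -tune zerolatency "   // fastest x264 path, no lookahead
        "-g 15 -bf 0 "                                        // short GOP, no B-frames: players can join faster
        "-fflags nobuffer -flags low_delay "                  // trim buffering inside ffmpeg
        "-f flv rtmp://srs-url/live/stream";                  // RTMP ingest into SRS (placeholder URL)

    // popen runs the command through the shell; ffmpeg's stderr stays on the terminal,
    // so encoder speed and buffering warnings remain visible while streaming.
    FILE *proc = popen(cmd.c_str(), "r");
    if (!proc) {
        std::perror("popen");
        return EXIT_FAILURE;
    }
    return pclose(proc);   // wait for ffmpeg to exit
}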


- Firefox does not support mkv format videos, need to convert video to mp4
21 May 2021, by sambhav jain
I am making a screen recorder app that records your screen and uploads the video to an s3 bucket. The video is in mkv format; mkv is supported by Chrome and Edge but not by Firefox, which supports the mp4 format. So how do I convert the video to mp4 for Firefox? The videofile is in mkv and needs to be converted to mp4.


handleUpload() {
  for (let i = 0; i < this.files.length; i++) {
    this.awsUploadService
      .getSignedUrlS3(this.files[i]["name"], this.files[i]["type"])
      .subscribe(
        ({ url, keyFile }) => {
          this.awsUploadService
            .uploadfileAWSS3(url, this.files[i]["type"], this.files[i])
            .subscribe(
              (data) => {
                if (data["type"] === 1) {
                  this.files[i]["progress"] =
                    (data["loaded"] / data["total"]) * 100;
                }
                if (data["type"] === 4) {
                  this.files[i]["isUploadCompleted"] = true;
                  this.files[i][
                    "uploadLocation"
                  ] = `https://${environment.S3_BUCKET_NAME}.s3.${environment.S3_Region}.amazonaws.com/${keyFile}`;
                }
              },
              (error) => {
                this.files[i].isUploadCompleted = false;
                throw error;
              }
            );
        },
        (error) => {
          this.files[i].isUploadCompleted = false;
          throw error;
        }
      );
  }
}
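
No conversion step appears in the question, so here is a hedged sketch of what a backend worker could do with the uploaded file: if the MKV produced by the screen recorder already carries H.264 video and AAC audio, ffmpeg can remux it into MP4 without re-encoding; otherwise it has to transcode. The file names and the existence of such a server-side step are assumptions for illustration only.

// convert_to_mp4.cpp -- hypothetical post-upload conversion step, not part of the Angular app.
#include <cstdlib>
#include <string>

// Try a lossless remux first (fast, no quality change); fall back to a full
// re-encode when the copy fails because the MKV holds codecs MP4 cannot carry.
bool convertToMp4(const std::string &in, const std::string &out) {
    const std::string remux =
        "ffmpeg -y -i " + in + " -c copy -movflags +faststart " + out;
    if (std::system(remux.c_str()) == 0)
        return true;

    const std::string transcode =
        "ffmpeg -y -i " + in + " -c:v libx264 -c:a aac -movflags +faststart " + out;
    return std::system(transcode.c_str()) == 0;
}

int main() {
    return convertToMp4("recording.mkv", "recording.mp4") ? 0 : 1;   // placeholder file names
}

An alternative that avoids the conversion entirely is to record WebM in the browser (MediaRecorder's default container in Chrome and Firefox), since Firefox plays WebM natively.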



- Rendering YUV420P ffmpeg decoded images on QT with OpenGL, only see black screen
17 February 2019, by Lucas Zanella
I've found this Qt OpenGL widget which should render a YUV420P image on screen. I'm feeding an ffmpeg-decoded buffer into its paintGL() function, but I see nothing: neither noise nor correct images, only a black screen, and I'm trying to understand why. I want to rule out other problems first, but I need to be sure that my code can produce anything at all. I printed some bytes from ffmpeg via std::cout just to check that they were arriving, and they were, so I should at least see some noise. Can you see anything wrong with my code that would prevent it from rendering images on screen?
This is the widget that should output the image:
#include "XVideoWidget.h"
#include <QDebug>
#include <QTimer>
#include <iostream>
// automatically adds the double quotes around the argument
#define GET_STR(x) #x
#define A_VER 3
#define T_VER 4
// vertex shader
const char *vString = GET_STR(
attribute vec4 vertexIn;
attribute vec2 textureIn;
varying vec2 textureOut;
void main(void)
{
gl_Position = vertexIn;
textureOut = textureIn;
}
);
// fragment shader
const char *tString = GET_STR(
varying vec2 textureOut;
uniform sampler2D tex_y;
uniform sampler2D tex_u;
uniform sampler2D tex_v;
void main(void)
{
vec3 yuv;
vec3 rgb;
yuv.x = texture2D(tex_y, textureOut).r;
yuv.y = texture2D(tex_u, textureOut).r - 0.5;
yuv.z = texture2D(tex_v, textureOut).r - 0.5;
rgb = mat3(1.0, 1.0, 1.0,
0.0, -0.39465, 2.03211,
1.13983, -0.58060, 0.0) * yuv;
gl_FragColor = vec4(rgb, 1.0);
}
);
// prepare the YUV data
// ffmpeg -i v1080.mp4 -t 10 -s 240x128 -pix_fmt yuv420p out240x128.yuv
XVideoWidget::XVideoWidget(QWidget * parent)
{
// setWindowFlags (Qt::WindowFullscreenButtonHint);
// showFullScreen();
}
XVideoWidget::~XVideoWidget()
{
}
// initialize OpenGL
void XVideoWidget::initializeGL()
{
//qDebug() << "initializeGL";
std::cout << "initializing gl" << std::endl;
// initialize the OpenGL functions (inherited from QOpenGLFunctions)
initializeOpenGLFunctions();
this->m_F = QOpenGLContext::currentContext()->functions();
// load the shader (vertex and fragment) sources into the program
// fragment (pixel) shader
std::cout << program.addShaderFromSourceCode(QOpenGLShader::Fragment, tString) << std::endl;
// vertex shader
std::cout << program.addShaderFromSourceCode(QOpenGLShader::Vertex, vString) << std::endl;
// set the vertex coordinate attribute location
program.bindAttributeLocation("vertexIn",A_VER);
// set the texture coordinate attribute location
program.bindAttributeLocation("textureIn",T_VER);
// compile and link the shaders
std::cout << "program.link() = " << program.link() << std::endl;
std::cout << "program.bind() = " << program.bind() << std::endl;
// pass the vertex and texture coordinates
// vertices
static const GLfloat ver[] = {
-1.0f,-1.0f,
1.0f,-1.0f,
-1.0f, 1.0f,
1.0f,1.0f
};
// texture coordinates
static const GLfloat tex[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f
};
// vertices
glVertexAttribPointer(A_VER, 2, GL_FLOAT, 0, 0, ver);
glEnableVertexAttribArray(A_VER);
// texture coordinates
glVertexAttribPointer(T_VER, 2, GL_FLOAT, 0, 0, tex);
glEnableVertexAttribArray(T_VER);
//glUseProgram(&program);
// get the texture uniform locations from the shader
unis[0] = program.uniformLocation("tex_y");
unis[1] = program.uniformLocation("tex_u");
unis[2] = program.uniformLocation("tex_v");
// create the textures
glGenTextures(3, texs);
//Y
glBindTexture(GL_TEXTURE_2D, texs[0]);
// magnification filter: linear interpolation (GL_NEAREST is faster but very blocky)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// allocate texture storage on the GPU
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
//U
glBindTexture(GL_TEXTURE_2D, texs[1]);
// magnification filter: linear interpolation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// allocate texture storage on the GPU
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width/2, height / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
//V
glBindTexture(GL_TEXTURE_2D, texs[2]);
// magnification filter: linear interpolation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// allocate texture storage on the GPU
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width / 2, height / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
// allocate CPU-side memory for the planes
datas[0] = new unsigned char[width*height]; //Y
datas[1] = new unsigned char[width*height/4]; //U
datas[2] = new unsigned char[width*height/4]; //V
}
// refresh the display
void XVideoWidget::paintGL(unsigned char**data)
//void QFFmpegGLWidget::updateData(unsigned char**data)
{
std::cout << "painting!" << std::endl;
memcpy(datas[0], data[0], width*height);
memcpy(datas[1], data[1], width*height/4);
memcpy(datas[2], data[2], width*height/4);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texs[0]); // bind texture unit 0 to the Y texture
// update the texture contents (copy from memory)
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, datas[0]);
// associate with the shader uniform variable
glUniform1i(unis[0], 0);
glActiveTexture(GL_TEXTURE0+1);
glBindTexture(GL_TEXTURE_2D, texs[1]); // bind texture unit 1 to the U texture
// update the texture contents (copy from memory)
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width/2, height / 2, GL_RED, GL_UNSIGNED_BYTE, datas[1]);
// associate with the shader uniform variable
glUniform1i(unis[1],1);
glActiveTexture(GL_TEXTURE0+2);
glBindTexture(GL_TEXTURE_2D, texs[2]); // bind texture unit 2 to the V texture
// update the texture contents (copy from memory)
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2, GL_RED, GL_UNSIGNED_BYTE, datas[2]);
// associate with the shader uniform variable
glUniform1i(unis[2], 2);
glDrawArrays(GL_TRIANGLE_STRIP,0,4);
qDebug() << "paintGL";
}
// window size changed
void XVideoWidget::resizeGL(int width, int height)
{
m_F->glViewport(0, 0, width, height);
qDebug() << "resizeGL " << width << " " << height;
}
Here's a bit of code from my MainWindow:
MainWindow::MainWindow(QWidget *parent):
QMainWindow(parent)
{
FfmpegDecoder* ffmpegDecoder = new FfmpegDecoder();
if(!ffmpegDecoder->Init()) {
std::cout << "problem with ffmpeg decoder init" << std::endl;
} else {
std::cout << "ffmpeg decoder initiated" << std::endl;
}
XVideoWidget * xVideoWidget = new XVideoWidget(parent);
ffmpegDecoder->setOpenGLWidget(xVideoWidget);
mediaStream = new MediaStream(uri, ffmpegDecoder, videoConsumer);
//...
}
void MainWindow::run()
{
mediaStream->receiveFrame();
}
My main.cpp makes sure my window's run() method runs in the background:
MainWindow w;
w.setFixedSize(1280,720);
w.show();
boost::thread mediaThread(&MainWindow::run, &w);
std::cout << "mediaThread running" << std::endl;
If someone wants to view the entire code, please feel free to visit the commit I just pushed: https://github.com/lucaszanella/orwell/tree/bbd74e42bd42df685bacc5d51cacbee3a178689f
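
A note on the widget above, offered as a guess rather than a definitive diagnosis: Qt only calls the parameterless QOpenGLWidget::paintGL(), on the GUI thread and with the GL context current, so a paintGL(unsigned char **) overload is just another member function that Qt never invokes, and calling it directly from the boost::thread would issue GL calls without a current context. The width and height members also never appear to be set before initializeGL() sizes the textures and the plane buffers. A minimal sketch of the usual pattern follows; it mirrors the post's member names, but updateFrame() and the fixed 240x128 size are hypothetical.

// xvideowidget_sketch.h -- minimal override pattern for a YUV420P widget (sketch only).
#include <QOpenGLWidget>
#include <QOpenGLFunctions>
#include <cstring>

class XVideoWidgetSketch : public QOpenGLWidget, protected QOpenGLFunctions {
public:
    explicit XVideoWidgetSketch(QWidget *parent = nullptr) : QOpenGLWidget(parent) {
        datas[0] = new unsigned char[width * height];       // Y plane
        datas[1] = new unsigned char[width * height / 4];   // U plane
        datas[2] = new unsigned char[width * height / 4];   // V plane
    }
    ~XVideoWidgetSketch() override { for (auto *p : datas) delete[] p; }

    // Called with a decoded frame: copy the planes, then ask Qt to schedule a repaint.
    // If the decoder runs in another thread, invoke this through a queued signal/slot
    // so that update() is called on the GUI thread.
    void updateFrame(unsigned char **data) {
        std::memcpy(datas[0], data[0], width * height);
        std::memcpy(datas[1], data[1], width * height / 4);
        std::memcpy(datas[2], data[2], width * height / 4);
        update();                           // Qt will later call paintGL() with the context current
    }

protected:
    void initializeGL() override {
        initializeOpenGLFunctions();
        // compile the shaders and create the three GL_RED textures, as in the post,
        // using the same width/height that sized the buffers above
    }
    void paintGL() override {
        // upload datas[0..2] with glTexSubImage2D and draw the quad, as in the post;
        // this is the only paint entry point Qt ever calls
    }

private:
    unsigned char *datas[3] = {};
    int width = 240, height = 128;          // must match the decoded frame size
};

The point of the pattern is that all GL work stays inside initializeGL()/paintGL(), which Qt runs with the context bound, while the decoder thread only copies bytes and requests a repaint.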