
Advanced search
Media (91)
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#1 The Wires
11 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
ED-ME-5 1-DVD
11 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (98)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
-
The videos
21 April 2011, by
As with "audio" documents, Mediaspip displays videos wherever possible using the HTML5 video tag.
One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name one) and that each browser natively supports only certain video formats.
Its main advantage is the native handling of video in browsers, which removes the need for Flash and (...)
-
Customising by adding your logo, banner or background image
5 September 2013, by
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
On other sites (6016)
-
FFMpeg - Split Window RTMP - Delay on Second Stream
22 February 2016, by Nick Smit
I'm trying to combine two live RTMP sources into one split-screen output with combined audio. The output is then sent on to a receiving RTMP server.
Using the following command, which uses the same RTMP input for both feeds, I've managed to get the above working; however, the input on the left is delayed by about 2 seconds from the one on the right.
ffmpeg -re -i rtmp://myserver.tld/live/stream_key -re -i rtmp://myserver.tld/live/stream_key \
-filter_complex "\
nullsrc=size=1152x720 [base];\
[0:v] crop=576:720 [upperleft];\
[1:v] crop=576:720 [upperright];\
[base][upperleft] overlay=shortest=1 [tmp1];\
[tmp1][upperright] overlay=shortest=1:x=576;\
[0:a][1:a]amix \
" -c:a libfdk_aac -ar 44100 -threads 32 -c:v libx264 -g 50 -preset ultrafast -tune zerolatency -f flv rtmp://myserver.tld/live/new_stream_keyOutput :
ffmpeg version N-76137-gb0bb1dc Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04)
configuration: --prefix=/home/ubuntu/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/ubuntu/ffmpeg_build/include --extra-ldflags=-L/home/ubuntu/ffmpeg_build/lib --bindir=/home/ubuntu/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
libavutil 55. 4.100 / 55. 4.100
libavcodec 57. 7.100 / 57. 7.100
libavformat 57. 8.102 / 57. 8.102
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 12.100 / 6. 12.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.100 / 2. 0.100
libpostproc 54. 0.100 / 54. 0.100
[flv @ 0x3a0e940] video stream discovered after head already parsed
Input #0, flv, from 'rtmp://myserver.tld/live/stream_key':
Metadata:
Server : NGINX RTMP (github.com/arut/nginx-rtmp-module)
displayWidth : 1152
displayHeight : 720
fps : 29
profile :
level :
Duration: 00:00:00.00, start: 5.717000, bitrate: N/A
Stream #0:0: Audio: aac (LC), 48000 Hz, stereo, fltp, 163 kb/s
Stream #0:1: Video: h264 (High), yuv420p, 1152x720, 30.30 fps, 29.97 tbr, 1k tbn, 59.94 tbc
[flv @ 0x3a49e00] video stream discovered after head already parsed
Input #1, flv, from 'rtmp://myserver.tld/live/stream_key':
Metadata:
Server : NGINX RTMP (github.com/arut/nginx-rtmp-module)
displayWidth : 1152
displayHeight : 720
fps : 29
profile :
level :
Duration: 00:00:00.00, start: 9.685000, bitrate: N/A
Stream #1:0: Audio: aac (LC), 48000 Hz, stereo, fltp, 163 kb/s
Stream #1:1: Video: h264 (High), yuv420p, 1152x720, 30.30 fps, 29.97 tbr, 1k tbn, 59.94 tbc
[libx264 @ 0x3a9cd60] Application has requested 32 threads. Using a thread count greater than 16 is not recommended.
[libx264 @ 0x3a9cd60] using SAR=1/1
[libx264 @ 0x3a9cd60] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x3a9cd60] profile Constrained Baseline, level 3.1
[libx264 @ 0x3a9cd60] 264 - core 142 r2389 956c8d8 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=11 lookahead_threads=11 sliced_threads=1 slices=11 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=50 keyint_min=5 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
Output #0, flv, to 'rtmp://myserver.tld/live/new_stream_key':
Metadata:
Server : NGINX RTMP (github.com/arut/nginx-rtmp-module)
displayWidth : 1152
displayHeight : 720
fps : 29
profile :
level :
encoder : Lavf57.8.102
Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuv420p, 1152x720 [SAR 1:1 DAR 8:5], q=-1--1, 25 fps, 1k tbn, 25 tbc (default)
Metadata:
encoder : Lavc57.7.100 libx264
Stream #0:1: Audio: aac (libfdk_aac) ([10][0][0][0] / 0x000A), 44100 Hz, stereo, s16, 128 kb/s (default)
Metadata:
encoder : Lavc57.7.100 libfdk_aac
Stream mapping:
Stream #0:0 (aac) -> amix:input0
Stream #0:1 (h264) -> crop
Stream #1:0 (aac) -> amix:input1
Stream #1:1 (h264) -> crop
overlay -> Stream #0:0 (libx264)
amix -> Stream #0:1 (libfdk_aac)
Press [q] to stop, [?] for help
[flv @ 0x3a0e940] Thread message queue blocking; consider raising the thread_queue_size option (current value: 512)
frame= 81 fps= 20 q=15.0 size= 674kB time=00:00:03.24 bitrate=1703.3kbits/s
frame= 102 fps= 22 q=22.0 size= 945kB time=00:00:04.08 bitrate=1896.4kbits/s
Is there any way to force FFMpeg to read both RTMP inputs at the same time?
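Not from the original post, just a hedged sketch: two things commonly tried for this kind of input skew are raising the input thread queue size (which the log above already hints at) and resetting each input's timestamps inside the filtergraph with setpts/asetpts so both feeds start from zero. The URLs, geometry and encoder options below are copied from the question; whether this actually removes the two-second offset depends on how far apart the two RTMP sessions start.
ffmpeg -thread_queue_size 1024 -re -i rtmp://myserver.tld/live/stream_key \
 -thread_queue_size 1024 -re -i rtmp://myserver.tld/live/stream_key \
 -filter_complex "\
 nullsrc=size=1152x720 [base];\
 [0:v] setpts=PTS-STARTPTS, crop=576:720 [upperleft];\
 [1:v] setpts=PTS-STARTPTS, crop=576:720 [upperright];\
 [base][upperleft] overlay=shortest=1 [tmp1];\
 [tmp1][upperright] overlay=shortest=1:x=576;\
 [0:a] asetpts=PTS-STARTPTS [a0];\
 [1:a] asetpts=PTS-STARTPTS [a1];\
 [a0][a1] amix \
 " -c:a libfdk_aac -ar 44100 -threads 32 -c:v libx264 -g 50 -preset ultrafast -tune zerolatency -f flv rtmp://myserver.tld/live/new_stream_key
If the offset turns out to be a fixed, known value, putting -itsoffset on the earlier input is another commonly used workaround.
-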
Video rendering in OpenGL on Qt5
28 January 2016, by Bobnono
I'm trying to render my decoded frames from FFMPEG to a QOpenGLWindow.
FFMPEG gives me an NV12 AVFrame, or YUV420p, or RGB. I chose the simplest, RGB.
I created a c_gl_video_yuv class inheriting from QOpenGLWindow and QOpenGLFunctions_3_3.
I want to use a shader to draw my rectangle and texture it with the video frame (for YUV I want the shader to convert it to RGB and apply the texture).
My c_gl_video_yuv class is defined as below:
class c_gl_video_yuv : public QOpenGLWindow, protected QOpenGLFunctions_3_3_Core
{
public:
c_gl_video_yuv();
~c_gl_video_yuv();
---
void update_texture(AVFrame *frame, int w, int h);
protected:
void initializeGL();
void paintGL();
void resizeGL(int width, int height);
void paintEvent(QPaintEvent *);
private:
---
GLuint textures[2];
---
// Shader program
QOpenGLShaderProgram *m_program;
GLint locVertices;
GLint locTexcoord;
};
I initialise the OpenGL:
void c_gl_video_yuv::initializeGL()
{
// Init shader program
initializeOpenGLFunctions();
glGenTextures(2, textures);
/* Apply some filter on the texture */
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
qDebug() << "c_gl_video_yuv::initializeGL() initialise shader"<< endl;
m_program = new QOpenGLShaderProgram(this);
m_program->addShaderFromSourceFile(QOpenGLShader::Vertex, ":/shader/Ressources/vertex_yuv.glsl");
m_program->addShaderFromSourceFile(QOpenGLShader::Fragment, ":/shader/Ressources/rgb_to_rgb_shader.glsl");
m_program->link();
// /* Grab location of shader attributes. */
locVertices = m_program->attributeLocation("position");
locTexcoord = m_program->attributeLocation("texpos");
/* Enable vertex arrays to push the data. */
glEnableVertexAttribArray(locVertices);
glEnableVertexAttribArray(locTexcoord);
/* set data in the arrays. */
glVertexAttribPointer(locVertices, 2, GL_FLOAT, GL_FALSE, 0,
&vertices[0][0]);
glVertexAttribPointer(locTexcoord, 2, GL_FLOAT, GL_FALSE, 0,
&texcoords[0][0]);
// GL options
glEnable(GL_DEPTH_TEST);
}
And I render:
void c_gl_video_yuv::paintGL()
{
qDebug() << "paintGL() set viewport "<* Clear background. */
glClearColor(0.5f,0.5f,0.5f,1.0f);
glClear(GL_COLOR_BUFFER_BIT);
if(first_frame)
{
qDebug() << "paintGL() Bind shader" << endl;
m_program->bind();
/* Get Ytex attribute to associate to TEXTURE0 */
m_program->bindAttributeLocation("Ytex",0);
m_program->bindAttributeLocation("UVtex",1);
qDebug() << "paintGL() Bind texture" << endl;
if(!is_init)
{
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, frame_width, frame_height, 0, GL_RGB, GL_UNSIGNED_BYTE, frame_yuv->data[0] );
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, frame_width, frame_height, 0, GL_RGB, GL_UNSIGNED_BYTE, frame_yuv->data[0] );
is_init = true;
}
else
{
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, frame_width, frame_height, GL_RGB, GL_UNSIGNED_BYTE, frame_yuv->data[0]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, frame_width, frame_height, GL_RGB, GL_UNSIGNED_BYTE, frame_yuv->data[0]);
}
glVertexAttribPointer(locVertices, 2, GL_FLOAT, GL_FALSE, 0, vertices);
glVertexAttribPointer(locTexcoord, 2, GL_FLOAT, GL_FALSE, 0, texcoords);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(0);
m_program->release();
}
}
The vertex shader is:
#version 330 core
attribute vec3 position;
attribute vec2 texpos;
varying vec2 opos;
void main(void)
{
opos = texpos;
gl_Position = vec4(position, 1.0);
}
And the fragment shader is:
#version 330 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D YImage;
uniform sampler2D UVImage;
const vec3 r_c = vec3(1.164383, 0.000000, 1.596027);
const vec3 g_c = vec3(1.164383, -0.391762, -0.812968);
const vec3 b_c = vec3(1.164383, 2.017232, 0.000000);
const vec3 offset = vec3(-0.0625, -0.5, -0.5);
void main()
{
float y_val = texture(YImage, TexCoords).r;
float u_val = texture(UVImage, TexCoords).r;
float v_val = texture(UVImage, TexCoords).g;
vec3 yuv = vec3(y_val, u_val, v_val);
yuv += offset;
color.r = dot(yuv, r_c);
color.g = dot(yuv, g_c);
color.b = dot(yuv, b_c);
color.a = 1.0;
};
(For an RGB frame I replace the vec3(1.164383, 0.000000, 1.596027) with vec3(1.0, 1.000000, 1.0).)
So before I receive a frame it renders nothing, just a grey window - normal.
After I receive a frame, the textures are uploaded and the shaders normally create the player. But nothing appears, not even a black rectangle - just plain grey.
What is wrong?
Is it not the right way to upload textures, or are my vertices not created?
Of course I declared my vertices and texture coordinates:
const GLfloat vertices[][2] = {
{-1.f, -1.f},
{1.f, -1.f},
{-1.f, 1.f},
{1.f, 1.f}
};
const GLfloat texcoords[][2] = {
{0.0f, 1.0f},
{1.0f, 1.0f},
{0.0f, 0.0f},
{1.0f, 0.0f}
};
I am very new to OpenGL so it is quite fuzzy in my head, but I thought it would not be really hard to draw a rectangle with a streaming texture.
Maybe I should use a VBO or FBO, but I still don't really understand these.
If someone can help me, I will appreciate it!
Thanks!
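As a point of reference only (a minimal sketch, not taken from the post above, assuming the plain RGB path with a single texture on unit 0): a GLSL 330 core pair for a textured quad would normally replace attribute/varying with in/out and use the same name for the vertex output and the fragment input:
// Vertex shader (sketch)
#version 330 core
in vec2 position;
in vec2 texpos;
out vec2 TexCoords; // must match the fragment shader's input name
void main(void)
{
 TexCoords = texpos;
 gl_Position = vec4(position, 0.0, 1.0);
}
// Fragment shader (sketch)
#version 330 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D YImage; // set once from C++, e.g. m_program->setUniformValue("YImage", 0)
void main()
{
 color = vec4(texture(YImage, TexCoords).rgb, 1.0);
}
-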
How can I combine two separate scripts being piped together to make one script instead of two?
27 March 2016, by user556068
For the past couple of hours I've been banging my head against the wall trying to figure out something I thought would be simple. Maybe it is, but it's beyond me at the moment. So I now have two scripts. Originally they were part of the same script, but I could never make it work how it should. So the first part uses curl to download a file from a site. Then, using grep and sed, it filters out the text I need, which is then put into a plain text file as a long list of website urls, one per line. The last part of the 1st script calls on youtube-dl to read the batch file in order to obtain the web addresses where the actual content is located. I hope that makes sense. youtube-dl reads the batch file and outputs a new list of urls into the terminal. This second list is not saved to file because it doesn't need to be. These urls change from day to day or hour to hour. Using the read command, these urls are then passed to ffmpeg using a predetermined set of arguments for the input and output. Ffmpeg is executed on every url it receives and runs quietly in the background.
The first paragraph describes script1.sh and paragraph 2 obviously describes script2.sh. When I pipe them together like script1.sh | script2.sh it works better than I ever thought possible. Maybe I'm nitpicking at this point, but the idea is to have 1 unified script. For the moment I have simplified it by adding an alias to my .bash_profile.
Here are the last two commands of script1:
sed 's/\"\,/\//g' > "$HOME/file2.txt";
cat $HOME/file2.txt | youtube-dl --ignore-config -iga -
The trailing - allows youtube-dl to read from stdin.
The second part of the script, what I'm calling script2 at this point, begins with:
while read -r input
do
ffmpeg [arg] [input] [arg2] [output]
What am I not seeing that is causing the script to hang when the two halves are combined, yet it works perfectly if one is piped into the other?
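For reference, a rough sketch (not the poster's actual code) of how the two halves are often merged into one file: the curl/grep/sed stage, the $SOURCE_URL and 'pattern' placeholders, and the bracketed ffmpeg arguments are invented for illustration; only file2.txt and the youtube-dl call echo the question, and -i "$input" stands in for the question's [input] placeholder. The detail that usually matters in a combined version is that ffmpeg, inside a while read loop, will read from the same stdin that feeds the loop; giving it -nostdin (or redirecting its stdin from /dev/null) is the standard guard against the loop appearing to hang after the first url.
#!/bin/bash
# First half: fetch the page and boil it down to one url per line (placeholder pipeline).
curl -s "$SOURCE_URL" | grep 'pattern' | sed 's/\"\,/\//g' > "$HOME/file2.txt"

# Second half: let youtube-dl resolve the real media urls and hand each one to ffmpeg.
youtube-dl --ignore-config -iga - < "$HOME/file2.txt" |
while read -r input
do
    # -nostdin stops ffmpeg from swallowing the remaining urls on stdin
    ffmpeg -nostdin [arg] -i "$input" [arg2] [output]
done
If ffmpeg is additionally sent to the background with &, as the description of script2 suggests, the same stdin problem applies, so -nostdin (or </dev/null) stays relevant.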