
Advanced search
Media (91)
-
Head down (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (95)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature is immediately operational; no separate configuration step is required.
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
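As an illustration of the fallback pattern described above (a minimal sketch, not taken from the MediaSPIP sources; file paths and the player location are hypothetical), the markup typically nests the Flash object inside the HTML5 tag so that older browsers fall through to it:

<video controls width="640" height="360">
  <source src="/media/clip.mp4" type="video/mp4">
  <source src="/media/clip.ogv" type="video/ogg">
  <!-- Browsers without HTML5 video ignore the sources above and render this instead -->
  <object data="/player/flowplayer.swf" type="application/x-shockwave-flash"
          width="640" height="360">
    <param name="movie" value="/player/flowplayer.swf">
    <param name="flashvars" value='config={"clip":"/media/clip.mp4"}'>
  </object>
</video>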
On other sites (5330)
-
FFMpeg - Split Window RTMP - Delay on Second Stream
22 February 2016, by Nick Smit
I'm trying to combine two live RTMP sources into one split-screen output with combined audio. The output is then sent on to a receiving RTMP server.
Using the following command, which uses the same RTMP input for both feeds, I've managed to get the above working; however, the input on the left is delayed by about 2 seconds from the one on the right.

ffmpeg -re -i rtmp://myserver.tld/live/stream_key -re -i rtmp://myserver.tld/live/stream_key \
-filter_complex "\
nullsrc=size=1152x720 [base];\
[0:v] crop=576:720 [upperleft];\
[1:v] crop=576:720 [upperright];\
[base][upperleft] overlay=shortest=1 [tmp1];\
[tmp1][upperright] overlay=shortest=1:x=576;\
[0:a][1:a]amix \
" -c:a libfdk_aac -ar 44100 -threads 32 -c:v libx264 -g 50 -preset ultrafast -tune zerolatency -f flv rtmp://myserver.tld/live/new_stream_keyOutput :
ffmpeg version N-76137-gb0bb1dc Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04)
configuration: --prefix=/home/ubuntu/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/ubuntu/ffmpeg_build/include --extra-ldflags=-L/home/ubuntu/ffmpeg_build/lib --bindir=/home/ubuntu/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
libavutil 55. 4.100 / 55. 4.100
libavcodec 57. 7.100 / 57. 7.100
libavformat 57. 8.102 / 57. 8.102
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 12.100 / 6. 12.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.100 / 2. 0.100
libpostproc 54. 0.100 / 54. 0.100
[flv @ 0x3a0e940] video stream discovered after head already parsed
Input #0, flv, from 'rtmp://myserver.tld/live/stream_key':
Metadata:
Server : NGINX RTMP (github.com/arut/nginx-rtmp-module)
displayWidth : 1152
displayHeight : 720
fps : 29
profile :
level :
Duration: 00:00:00.00, start: 5.717000, bitrate: N/A
Stream #0:0: Audio: aac (LC), 48000 Hz, stereo, fltp, 163 kb/s
Stream #0:1: Video: h264 (High), yuv420p, 1152x720, 30.30 fps, 29.97 tbr, 1k tbn, 59.94 tbc
[flv @ 0x3a49e00] video stream discovered after head already parsed
Input #1, flv, from 'rtmp://myserver.tld/live/stream_key':
Metadata:
Server : NGINX RTMP (github.com/arut/nginx-rtmp-module)
displayWidth : 1152
displayHeight : 720
fps : 29
profile :
level :
Duration: 00:00:00.00, start: 9.685000, bitrate: N/A
Stream #1:0: Audio: aac (LC), 48000 Hz, stereo, fltp, 163 kb/s
Stream #1:1: Video: h264 (High), yuv420p, 1152x720, 30.30 fps, 29.97 tbr, 1k tbn, 59.94 tbc
[libx264 @ 0x3a9cd60] Application has requested 32 threads. Using a thread count greater than 16 is not recommended.
[libx264 @ 0x3a9cd60] using SAR=1/1
[libx264 @ 0x3a9cd60] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x3a9cd60] profile Constrained Baseline, level 3.1
[libx264 @ 0x3a9cd60] 264 - core 142 r2389 956c8d8 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=11 lookahead_threads=11 sliced_threads=1 slices=11 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=50 keyint_min=5 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
Output #0, flv, to 'rtmp://myserver.tld/live/new_stream_key':
Metadata:
Server : NGINX RTMP (github.com/arut/nginx-rtmp-module)
displayWidth : 1152
displayHeight : 720
fps : 29
profile :
level :
encoder : Lavf57.8.102
Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuv420p, 1152x720 [SAR 1:1 DAR 8:5], q=-1--1, 25 fps, 1k tbn, 25 tbc (default)
Metadata:
encoder : Lavc57.7.100 libx264
Stream #0:1: Audio: aac (libfdk_aac) ([10][0][0][0] / 0x000A), 44100 Hz, stereo, s16, 128 kb/s (default)
Metadata:
encoder : Lavc57.7.100 libfdk_aac
Stream mapping:
Stream #0:0 (aac) -> amix:input0
Stream #0:1 (h264) -> crop
Stream #1:0 (aac) -> amix:input1
Stream #1:1 (h264) -> crop
overlay -> Stream #0:0 (libx264)
amix -> Stream #0:1 (libfdk_aac)
Press [q] to stop, [?] for help
[flv @ 0x3a0e940] Thread message queue blocking; consider raising the thread_queue_size option (current value: 512)
frame=   81 fps= 20 q=15.0 size=     674kB time=00:00:03.24 bitrate=1703.3kbits/s
frame=  102 fps= 22 q=22.0 size=     945kB time=00:00:04.08 bitrate=1896.4kbits/s

Is there any way to force FFMpeg to read both RTMP inputs at the same time?
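One workaround worth sketching here (not from the original post; the 2-second offset is only an assumption based on the delay described above): shift the timestamps of the earlier feed with -itsoffset, and raise thread_queue_size as the warning in the log suggests, so neither input blocks while the other connects.

# Sketch: -itsoffset applies to the input that follows it; delaying the
# right-hand feed by ~2 s should line it up with the left-hand one.
# thread_queue_size is raised per the warning in the log above.
# The filter graph ("...") is the same one used in the original command.
ffmpeg -thread_queue_size 1024 -re -i rtmp://myserver.tld/live/stream_key \
       -thread_queue_size 1024 -itsoffset 2 -re -i rtmp://myserver.tld/live/stream_key \
       -filter_complex "..." \
       -c:a libfdk_aac -c:v libx264 -preset ultrafast -tune zerolatency \
       -f flv rtmp://myserver.tld/live/new_stream_key
-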
avcodec: Remove libaacplus
24 January 2016, by Timothy Gu
avcodec: Remove libaacplus

TODO: bump minor

It's inferior in quality to fdk-aac and has an arguably more problematic
license.

As early as 2012, a HydrogenAudio user reported:

> It has however one huge advantage: much better quality at low bitrates than
> faac and libaacplus.

I myself have made a few spectrograms for a comparison of the two
encoders as well. The FDK output is consistently better than the
libaacplus one, at all bitrates I tested.

The libaacplus license is 3GPP + LGPLv2. The 3GPP copyright notice is completely
proprietary, as follows:

> No part may be reproduced except as authorized by written permission.
>
> The copyright and the foregoing restriction extend to reproduction in
> all media.
>
> © 2008, 3GPP Organizational Partners (ARIB, ATIS, CCSA, ETSI, TTA, TTC).
>
> All rights reserved.

(The latest 26410-d00 zip from 3GPP has the same notice, but the copyright
year is changed to 2015.)

The copyright part of the FDK AAC license (section 2) is a copyleft
license that permits redistribution under certain conditions (and
therefore the LGPL + libfdk-aac combination is not prohibited by
configure):

> Redistribution and use in source and binary forms, with or without
> modification, are permitted without payment of copyright license fees
> provided that you satisfy the following conditions:
>
> You must retain the complete text of this software license in
> redistributions of the FDK AAC Codec or your modifications thereto in
> source code form.
>
> You must retain the complete text of this software license in the
> documentation and/or other materials provided with redistributions of
> the FDK AAC Codec or your modifications thereto in binary form.
>
> You must make available free of charge copies of the complete source
> code of the FDK AAC Codec and your modifications thereto to recipients
> of copies in binary form.
>
> The name of Fraunhofer may not be used to endorse or promote products
> derived from this library without prior written permission.
>
> You may not charge copyright license fees for anyone to use, copy or
> distribute the FDK AAC Codec software or your modifications thereto.
>
> Your modified versions of the FDK AAC Codec must carry prominent
> notices stating that you changed the software and the date of any
> change. For modified versions of the FDK AAC Codec, the term
> "Fraunhofer FDK AAC Codec Library for Android" must be replaced by the
> term "Third-Party Modified Version of the Fraunhofer FDK AAC Codec
> Library for Android."
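For context (a hedged sketch, not part of the commit): the migration path implied here is the libfdk_aac encoder, whose HE-AAC v2 profile covers libaacplus's low-bitrate use case. Assuming an FFmpeg build configured with --enable-libfdk-aac --enable-nonfree, as in the log earlier on this page, an equivalent encode might look like:

# Sketch: HE-AAC v2 at a low bitrate via libfdk_aac, replacing libaacplus.
# File names are placeholders.
ffmpeg -i input.wav -c:a libfdk_aac -profile:a aac_he_v2 -b:a 32k output.m4a
-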
Video rendering in OpenGL on Qt5
28 January 2016, by Bobnono
I'm trying to render my decoded frames from FFMPEG to a QOpenGLWindow.
FFMPEG gives me NV12 AVFrames, YUV420p, or RGB; I chose the simplest, RGB.

I created a c_gl_video_yuv class inheriting from QOpenGLWindow and QOpenGLFunctions_3_3.
I want to use shaders to draw my rectangle and texture it with the video frame (for YUV I want the shader to convert it to RGB and apply the texture).

My c_gl_video_yuv class is defined as below:
class c_gl_video_yuv : public QOpenGLWindow, protected QOpenGLFunctions_3_3_Core
{
public:
c_gl_video_yuv();
~c_gl_video_yuv();
---
void update_texture(AVFrame *frame, int w, int h);
protected:
void initializeGL();
void paintGL();
void resizeGL(int width, int height);
void paintEvent(QPaintEvent *);
private:
---
GLuint textures[2];
---
// Shader program
QOpenGLShaderProgram *m_program;
GLint locVertices;
GLint locTexcoord;
};

I initialise OpenGL:
void c_gl_video_yuv::initializeGL()
{
// Init shader program
initializeOpenGLFunctions();
glGenTextures(2, textures);
/* Apply some filter on the texture */
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
qDebug() << "c_gl_video_yuv::initializeGL() initialise shader"<< endl;
m_program = new QOpenGLShaderProgram(this);
m_program->addShaderFromSourceFile(QOpenGLShader::Vertex, ":/shader/Ressources/vertex_yuv.glsl");
m_program->addShaderFromSourceFile(QOpenGLShader::Fragment, ":/shader/Ressources/rgb_to_rgb_shader .glsl");
m_program->link();
// /* Grab location of shader attributes. */
locVertices = m_program->attributeLocation("position");
locTexcoord = m_program->attributeLocation("texpos");
/* Enable vertex arrays to push the data. */
glEnableVertexAttribArray(locVertices);
glEnableVertexAttribArray(locTexcoord);
/* set data in the arrays. */
glVertexAttribPointer(locVertices, 2, GL_FLOAT, GL_FALSE, 0,
&vertices[0][0]);
glVertexAttribPointer(locTexcoord, 2, GL_FLOAT, GL_FALSE, 0,
&texcoords[0][0]);
// GL options
glEnable(GL_DEPTH_TEST);
}

And I render:
void c_gl_video_yuv::paintGL()
{
qDebug() << "paintGL() set viewport "<* Clear background. */
glClearColor(0.5f,0.5f,0.5f,1.0f);
glClear(GL_COLOR_BUFFER_BIT);
if(first_frame)
{
qDebug() << "paintGL() Bind shader" << endl;
m_program->bind();
/* Get Ytex attribute to associate to TEXTURE0 */
m_program->bindAttributeLocation("Ytex",0);
m_program->bindAttributeLocation("UVtex",1);
qDebug() << "paintGL() Bind texture" << endl;
if(!is_init)
{
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, frame_width, frame_height, 0, GL_RGB, GL_UNSIGNED_BYTE, frame_yuv->data[0] );
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, frame_width, frame_height, 0, GL_RGB, GL_UNSIGNED_BYTE, frame_yuv->data[0] );
is_init = true;
}
else
{
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, frame_width, frame_height, GL_RGB, GL_UNSIGNED_BYTE, frame_yuv->data[0]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, frame_width, frame_height, GL_RGB, GL_UNSIGNED_BYTE, frame_yuv->data[0]);
}
glVertexAttribPointer(locVertices, 2, GL_FLOAT, GL_FALSE, 0, vertices);
glVertexAttribPointer(locTexcoord, 2, GL_FLOAT, GL_FALSE, 0, texcoords);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(0);
m_program->release();
}
}

The vertex shader is:
#version 330 core
attribute vec3 position;
attribute vec2 texpos;
varying vec2 opos;
void main(void)
{
opos = texpos;
gl_Position = vec4(position, 1.0);
}

And the fragment shader is:
#version 330 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D YImage;
uniform sampler2D UVImage;
const vec3 r_c = vec3(1.164383, 0.000000, 1.596027);
const vec3 g_c = vec3(1.164383, -0.391762, -0.812968);
const vec3 b_c = vec3(1.164383, 2.017232, 0.000000);
const vec3 offset = vec3(-0.0625, -0.5, -0.5);
void main()
{
float y_val = texture(YImage, TexCoords).r;
float u_val = texture(UVImage, TexCoords).r;
float v_val = texture(UVImage, TexCoords).g;
vec3 yuv = vec3(y_val, u_val, v_val);
yuv += offset;
color.r = dot(yuv, r_c);
color.g = dot(yuv, g_c);
color.b = dot(yuv, b_c);
color.a = 1.0;
};

(For an RGB frame I replace the vec3(1.164383, 0.000000, 1.596027) with vec3(1.0, 1.0, 1.0).)
So before I receive a frame it renders nothing, just a grey window, which is normal.
After I receive a frame, the textures are uploaded and the shaders should normally drive the player. But nothing appears, not even a black rectangle; just plain grey.

What is wrong?
Is this not the right way to upload textures, or are my vertices not created?

Of course I declared my vertices and texture coordinates:

const GLfloat vertices[][2] = {
{-1.f, -1.f},
{1.f, -1.f},
{-1.f, 1.f},
{1.f, 1.f}
};
const GLfloat texcoords[][2] = {
{0.0f, 1.0f},
{1.0f, 1.0f},
{0.0f, 0.0f},
{1.0f, 0.0f}
};

I am very new to OpenGL so it is quite fuzzy in my head, but I thought it would not be too hard to draw a rectangle with a streaming texture.
Maybe I should use a VBO or FBO, but I still don't really understand these.

If someone can help me, I would appreciate it!
Thanks
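A few plausible culprits, offered as a hedged sketch rather than a confirmed diagnosis: the vertex shader writes a varying named opos while the fragment shader reads TexCoords, so the shader interface does not match; attribute/varying are not valid under #version 330 core (they must be in/out); the sampler uniforms YImage/UVImage are never assigned texture units (bindAttributeLocation does not do this, setUniformValue does); and the stray space in ":/shader/Ressources/rgb_to_rgb_shader .glsl" would make that shader fail to load. Under those assumptions, a corrected minimal shader pair and uniform setup might look like:

// Vertex shader: in/out instead of attribute/varying; the output name must
// match the fragment shader's input. position is vec2 to match the
// 2-component arrays passed to glVertexAttribPointer.
#version 330 core
in vec2 position;
in vec2 texpos;
out vec2 TexCoords;
void main(void)
{
    TexCoords = texpos;
    gl_Position = vec4(position, 0.0, 1.0);
}

// Fragment shader: reads the matching TexCoords input.
#version 330 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D YImage;
uniform sampler2D UVImage;
void main()
{
    color = vec4(texture(YImage, TexCoords).rgb, 1.0); // RGB passthrough case
}

// C++ side, after m_program->bind(): point each sampler at its texture unit.
m_program->setUniformValue("YImage", 0);   // GL_TEXTURE0
m_program->setUniformValue("UVImage", 1);  // GL_TEXTURE1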