
Media (1)
-
Sintel MP4 Surround 5.1 Full
13 May 2011, by
Updated: February 2012
Language: English
Type: Video
Other articles (98)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 Beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
After it is activated, a preconfiguration is set up automatically by MediaSPIP init, allowing the new feature to work out of the box. It is therefore not necessary to go through a configuration step for this. -
Customizing by adding a logo, a banner or a background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
On other sites (9892)
-
slow avcodec_decode_video2, ffmpeg on android
5 January 2014, by John Simpson
I am developing a player on Android using ffmpeg. However, I found that avcodec_decode_video2 is very slow: sometimes it takes 0.1 or even 0.2 seconds to decode one frame of a video with 1920×1080 resolution.
How can I improve the speed of avcodec_decode_video2()?
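(This is not from the question itself, but a common first check: FFmpeg's decoder runs single-threaded unless threading is requested, either by setting AVCodecContext.thread_count before avcodec_open2() in the C API, or by passing -threads on the command line. A minimal decode-only benchmark sketch, where input.mp4 is a placeholder for your own file:)

```shell
# Decode-only benchmark: -threads 0 lets FFmpeg pick one thread per core.
# input.mp4 is a placeholder; -f null - discards the decoded frames, so the
# reported speed reflects decoding alone.
ffmpeg -threads 0 -i input.mp4 -f null -
```

On multi-core Android devices, frame-level threading often makes the largest difference for H.264 content; comparing the benchmark with and without -threads 0 shows whether threading is the bottleneck.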
-
OpenGL ES 2.0 on Android: YUV to RGB and rendering with FFmpeg
14 October 2013, by 101110101100111111101101
My renderer dies one or two frames after the video starts showing.
FATAL ERROR 11: blabla... (it occurs exactly in glDrawElements, in the Y-plane pass)
I think the problem is 'glPixelStorei', or the choice between 'GL_RGB' and 'GL_LUMINANCE', but I don't get it.
My rendering pipeline:
-
Decode data received from the network (SDK getting -> NDK decoding), then enqueue it.
-
Another thread dequeues (synchronized, of course) and gets ready to set up OpenGL ES 2.0 (SDK).
-
When the onDrawFrame, onSurfaceCreated and onSurfaceChanged methods are called, they drop down into the NDK. (My renderer source in the NDK is attached below.)
-
Render.
As you know, the fragment shader is used for the conversion.
My data is YUV 420p (pix_fmt_YUV420p, 12 bits per pixel). Here is my entire source.
I have no prior knowledge of OpenGL ES; this is my first time.
Please let me know how I can improve performance, and which parameters I should use in 'glTexImage2D', 'glTexSubImage2D' and 'glRenderbufferStorage': GL_LUMINANCE? GL_RGBA? GL_RGB? (GL_LUMINANCE is in use now.)
void Renderer::set_draw_frame(JNIEnv* jenv, jbyteArray yData, jbyteArray uData, jbyteArray vData)
{
    // Free the previous frame's plane buffers.
    for (int i = 0; i < 3; i++) {
        if (yuv_data_[i] != NULL) {
            free(yuv_data_[i]);
            yuv_data_[i] = NULL;
        }
    }

    int YSIZE = -1;
    int USIZE = -1;
    int VSIZE = -1;

    if (yData != NULL) {
        YSIZE = (int)jenv->GetArrayLength(yData);
        LOG_DEBUG("YSIZE : %d", YSIZE);
        yuv_data_[0] = (unsigned char*)malloc(sizeof(unsigned char) * YSIZE);
        memset(yuv_data_[0], 0, YSIZE);
        jenv->GetByteArrayRegion(yData, 0, YSIZE, (jbyte*)yuv_data_[0]);
    } else {
        // yData == NULL: fall back to the stream's luma plane size.
        // (The original code called GetArrayLength(yData) here, which
        // cannot work on a NULL array.)
        YSIZE = stream_yuv_width_[0] * stream_yuv_height_[0];
        yuv_data_[0] = (unsigned char*)malloc(sizeof(unsigned char) * YSIZE);
        memset(yuv_data_[0], 1, YSIZE);
    }

    if (uData != NULL) {
        USIZE = (int)jenv->GetArrayLength(uData);
        LOG_DEBUG("USIZE : %d", USIZE);
        yuv_data_[1] = (unsigned char*)malloc(sizeof(unsigned char) * USIZE);
        memset(yuv_data_[1], 0, USIZE);
        jenv->GetByteArrayRegion(uData, 0, USIZE, (jbyte*)yuv_data_[1]);
    } else {
        USIZE = YSIZE / 4; // 4:2:0 chroma plane is a quarter of the luma plane
        yuv_data_[1] = (unsigned char*)malloc(sizeof(unsigned char) * USIZE);
        memset(yuv_data_[1], 1, USIZE);
    }

    if (vData != NULL) {
        VSIZE = (int)jenv->GetArrayLength(vData);
        LOG_DEBUG("VSIZE : %d", VSIZE);
        yuv_data_[2] = (unsigned char*)malloc(sizeof(unsigned char) * VSIZE);
        memset(yuv_data_[2], 0, VSIZE);
        jenv->GetByteArrayRegion(vData, 0, VSIZE, (jbyte*)yuv_data_[2]);
    } else {
        VSIZE = YSIZE / 4;
        yuv_data_[2] = (unsigned char*)malloc(sizeof(unsigned char) * VSIZE);
        memset(yuv_data_[2], 1, VSIZE);
    }

    glClearColor(1.0F, 1.0F, 1.0F, 1.0F);
    check_gl_error("glClearColor");
    glClear(GL_COLOR_BUFFER_BIT);
    check_gl_error("glClear");
}
void Renderer::draw_frame()
{
    // Bind the FBO created earlier
    glBindFramebuffer(GL_FRAMEBUFFER, frame_buffer_object_);
    check_gl_error("glBindFramebuffer");
    // Add the program to the OpenGL environment
    glUseProgram(program_object_);
    check_gl_error("glUseProgram");

    for (int i = 0; i < 3; i++) {
        // Bind one texture per plane (Y, U, V)
        glActiveTexture(GL_TEXTURE0 + i);
        check_gl_error("glActiveTexture");
        glBindTexture(GL_TEXTURE_2D, yuv_texture_id_[i]);
        check_gl_error("glBindTexture");
        glUniform1i(yuv_texture_object_[i], i);
        check_gl_error("glUniform1i");
        // Each plane holds one byte per pixel, so upload it as GL_LUMINANCE.
        // (The original code passed GL_RGBA here, which makes GL read four
        // bytes per pixel and run past the end of the buffer.)
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, stream_yuv_width_[i], stream_yuv_height_[i], GL_LUMINANCE, GL_UNSIGNED_BYTE, yuv_data_[i]);
        check_gl_error("glTexSubImage2D");
    }

    // Load vertex information
    glVertexAttribPointer(position_object_, 2, GL_FLOAT, GL_FALSE, kStride, kVertexInformation);
    check_gl_error("glVertexAttribPointer");
    // Load texture-coordinate information
    glVertexAttribPointer(texture_position_object_, 2, GL_SHORT, GL_FALSE, kStride, kTextureCoordinateInformation);
    check_gl_error("glVertexAttribPointer");
    glEnableVertexAttribArray(position_object_);
    check_gl_error("glEnableVertexAttribArray");
    glEnableVertexAttribArray(texture_position_object_);
    check_gl_error("glEnableVertexAttribArray");

    // Back to the window buffer
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    check_gl_error("glBindFramebuffer");
    // Draw the quad
    glDrawElements(GL_TRIANGLE_STRIP, 6, GL_UNSIGNED_SHORT, kIndicesInformation);
    check_gl_error("glDrawElements");
}
void Renderer::setup_render_to_texture()
{
    glGenFramebuffers(1, &frame_buffer_object_);
    check_gl_error("glGenFramebuffers");
    glBindFramebuffer(GL_FRAMEBUFFER, frame_buffer_object_);
    check_gl_error("glBindFramebuffer");
    glGenRenderbuffers(1, &render_buffer_object_);
    check_gl_error("glGenRenderbuffers");
    glBindRenderbuffer(GL_RENDERBUFFER, render_buffer_object_);
    check_gl_error("glBindRenderbuffer");
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, stream_yuv_width_[0], stream_yuv_height_[0]);
    check_gl_error("glRenderbufferStorage");
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buffer_object_);
    check_gl_error("glFramebufferRenderbuffer");
    // Note: OpenGL ES 2.0 has a single color attachment point, so each of
    // these calls replaces the previous GL_COLOR_ATTACHMENT0 attachment;
    // only the last texture (not the renderbuffer above) stays attached.
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, yuv_texture_id_[0], 0);
    check_gl_error("glFramebufferTexture2D");
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, yuv_texture_id_[1], 0);
    check_gl_error("glFramebufferTexture2D");
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, yuv_texture_id_[2], 0);
    check_gl_error("glFramebufferTexture2D");
    // Check completeness while the FBO is still bound. (The original code
    // unbound the FBO first, so it was checking the default framebuffer.)
    GLint status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        print_log("renderer.cpp", "setup_graphics", "FBO setting fault.", LOGERROR);
        LOG_ERROR("%d\n", status);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    check_gl_error("glBindFramebuffer");
}
void Renderer::setup_yuv_texture()
{
    // Use tightly packed data (one byte per texel, no row padding)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    check_gl_error("glPixelStorei");
    for (int i = 0; i < 3; i++) {
        if (yuv_texture_id_[i]) {
            glDeleteTextures(1, &yuv_texture_id_[i]);
            check_gl_error("glDeleteTextures");
        }
        glActiveTexture(GL_TEXTURE0 + i);
        check_gl_error("glActiveTexture");
        // Generate the texture object
        glGenTextures(1, &yuv_texture_id_[i]);
        check_gl_error("glGenTextures");
        glBindTexture(GL_TEXTURE_2D, yuv_texture_id_[i]);
        check_gl_error("glBindTexture");
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        check_gl_error("glTexParameteri");
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        check_gl_error("glTexParameteri");
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        check_gl_error("glTexParameterf");
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        check_gl_error("glTexParameterf");
        // (glEnable(GL_TEXTURE_2D) removed: it is not a valid call in
        // OpenGL ES 2.0 and raises GL_INVALID_ENUM.)
        // Each texture holds one single-channel plane, so allocate it as
        // GL_LUMINANCE rather than GL_RGBA; the fragment shader reads .r.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, maximum_yuv_width_[i], maximum_yuv_height_[i], 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
        check_gl_error("glTexImage2D");
    }
}
void Renderer::setup_graphics()
{
    print_gl_string("Version", GL_VERSION);
    print_gl_string("Vendor", GL_VENDOR);
    print_gl_string("Renderer", GL_RENDERER);
    print_gl_string("Extensions", GL_EXTENSIONS);

    program_object_ = create_program(kVertexShader, kFragmentShader);
    if (!program_object_) {
        print_log("renderer.cpp", "setup_graphics", "Could not create program.", LOGERROR);
        return;
    }

    position_object_ = glGetAttribLocation(program_object_, "vPosition");
    check_gl_error("glGetAttribLocation");
    texture_position_object_ = glGetAttribLocation(program_object_, "vTexCoord");
    check_gl_error("glGetAttribLocation");
    yuv_texture_object_[0] = glGetUniformLocation(program_object_, "yTexture");
    check_gl_error("glGetUniformLocation");
    yuv_texture_object_[1] = glGetUniformLocation(program_object_, "uTexture");
    check_gl_error("glGetUniformLocation");
    yuv_texture_object_[2] = glGetUniformLocation(program_object_, "vTexture");
    check_gl_error("glGetUniformLocation");

    setup_yuv_texture();
    setup_render_to_texture();

    glViewport(0, 0, stream_yuv_width_[0], stream_yuv_height_[0]); // e.g. 736x480 or 1920x1080
    check_gl_error("glViewport");
}
GLuint Renderer::create_program(const char* vertex_source, const char* fragment_source)
{
    GLuint vertexShader = load_shader(GL_VERTEX_SHADER, vertex_source);
    if (!vertexShader) {
        return 0;
    }
    GLuint pixelShader = load_shader(GL_FRAGMENT_SHADER, fragment_source);
    if (!pixelShader) {
        return 0;
    }
    GLuint program = glCreateProgram();
    if (program) {
        glAttachShader(program, vertexShader);
        check_gl_error("glAttachShader");
        glAttachShader(program, pixelShader);
        check_gl_error("glAttachShader");
        glLinkProgram(program);
        /* Get the link status */
        GLint linkStatus = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);
        if (linkStatus != GL_TRUE) {
            GLint bufLength = 0;
            glGetProgramiv(program, GL_INFO_LOG_LENGTH, &bufLength);
            if (bufLength) {
                char* buf = (char*) malloc(bufLength);
                if (buf) {
                    glGetProgramInfoLog(program, bufLength, NULL, buf);
                    print_log("renderer.cpp", "create_program", "Could not link program.", LOGERROR);
                    LOG_ERROR("%s\n", buf);
                    free(buf);
                }
            }
            glDeleteProgram(program);
            program = 0;
        }
    }
    return program;
}
GLuint Renderer::load_shader(GLenum shaderType, const char* pSource)
{
    GLuint shader = glCreateShader(shaderType);
    if (shader) {
        glShaderSource(shader, 1, &pSource, NULL);
        glCompileShader(shader);
        /* Get the compile status */
        GLint compiled = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        if (!compiled) {
            GLint infoLen = 0;
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
            if (infoLen) {
                char* buf = (char*) malloc(infoLen);
                if (buf) {
                    glGetShaderInfoLog(shader, infoLen, NULL, buf);
                    print_log("renderer.cpp", "load_shader", "Could not compile shader.", LOGERROR);
                    LOG_ERROR("%d :: %s\n", shaderType, buf);
                    free(buf);
                }
                glDeleteShader(shader);
                shader = 0;
            }
        }
    }
    return shader;
}
void Renderer::onDrawFrame(JNIEnv* jenv, jbyteArray yData, jbyteArray uData, jbyteArray vData)
{
    set_draw_frame(jenv, yData, uData, vData);
    draw_frame();
}
void Renderer::setSize(int stream_width, int stream_height)
{
    // YUV 420p: the U and V planes are half the luma size in each dimension
    stream_yuv_width_[0] = stream_width;
    stream_yuv_width_[1] = stream_width / 2;
    stream_yuv_width_[2] = stream_width / 2;
    stream_yuv_height_[0] = stream_height;
    stream_yuv_height_[1] = stream_height / 2;
    stream_yuv_height_[2] = stream_height / 2;
}
void Renderer::onSurfaceChanged(int width, int height)
{
    mobile_yuv_width_[0] = width;
    mobile_yuv_width_[1] = width / 2;
    mobile_yuv_width_[2] = width / 2;
    mobile_yuv_height_[0] = height;
    mobile_yuv_height_[1] = height / 2;
    mobile_yuv_height_[2] = height / 2;

    maximum_yuv_width_[0] = 1920;
    maximum_yuv_width_[1] = 1920 / 2;
    maximum_yuv_width_[2] = 1920 / 2;
    maximum_yuv_height_[0] = 1080;
    maximum_yuv_height_[1] = 1080 / 2;
    maximum_yuv_height_[2] = 1080 / 2;

    // If the stream size is not set, default to D1
    //if (stream_yuv_width_[0] == 0) {
    stream_yuv_width_[0] = 736;
    stream_yuv_width_[1] = 736 / 2;
    stream_yuv_width_[2] = 736 / 2;
    stream_yuv_height_[0] = 480;
    stream_yuv_height_[1] = 480 / 2;
    stream_yuv_height_[2] = 480 / 2;
    //}

    setup_graphics();
}
Here are my fragment and vertex shader sources and coordinates:
static const char kVertexShader[] =
    "attribute vec4 vPosition;  \n"
    "attribute vec2 vTexCoord;  \n"
    "varying vec2 v_vTexCoord;  \n"
    "void main() {              \n"
    "  gl_Position = vPosition; \n"
    "  v_vTexCoord = vTexCoord; \n"
    "}                          \n";
static const char kFragmentShader[] =
    "precision mediump float;                              \n"
    "varying vec2 v_vTexCoord;                             \n"
    "uniform sampler2D yTexture;                           \n"
    "uniform sampler2D uTexture;                           \n"
    "uniform sampler2D vTexture;                           \n"
    "void main() {                                         \n"
    "  float y = texture2D(yTexture, v_vTexCoord).r;       \n"
    "  float u = texture2D(uTexture, v_vTexCoord).r - 0.5; \n"
    "  float v = texture2D(vTexture, v_vTexCoord).r - 0.5; \n"
    "  float r = y + 1.13983 * v;                          \n"
    "  float g = y - 0.39465 * u - 0.58060 * v;            \n"
    "  float b = y + 2.03211 * u;                          \n"
    "  gl_FragColor = vec4(r, g, b, 1.0);                  \n"
    "}                                                     \n";
static const GLfloat kVertexInformation[] =
{
    -1.0f,  1.0f, // Vertex 0: top left
    -1.0f, -1.0f, // Vertex 1: bottom left
     1.0f, -1.0f, // Vertex 2: bottom right
     1.0f,  1.0f  // Vertex 3: top right
};
static const GLshort kTextureCoordinateInformation[] =
{
    0, 0, // TexCoord 0: top left
    0, 1, // TexCoord 1: bottom left
    1, 1, // TexCoord 2: bottom right
    1, 0  // TexCoord 3: top right
};
static const GLuint kStride = 0; // tightly packed: COORDS_PER_VERTEX * 4
static const GLshort kIndicesInformation[] =
{
    0, 1, 2,
    0, 2, 3
};
-
-
FFMPEG av_interleaved_write_frame() : Operation not permitted
20 October 2014, by camslaz
OK, I am receiving an 'av_interleaved_write_frame(): Operation not permitted' error while trying to encode an MOV file. First I need to outline the conditions behind it.
I am encoding 12 different files of different resolution sizes and format types via a PHP script that runs on cron. Basically it grabs a 250 MB HD MOV file and encodes it in 4 different frame sizes as MOV, MP4 and WMV file types.
Now the script takes over 10 minutes to run and encode all of the files for the 250 MB input file. I am outputting the processing times, and as soon as the script hits 10 minutes FFMPEG crashes and returns "av_interleaved_write_frame(): Operation not permitted" for the file currently being encoded and for all other remaining files yet to be encoded.
If the input video is 150 MB, the script runs for under 10 minutes in total, so it encodes all of the videos fine. Additionally, if I run the FFMPEG command individually on the file that fails for the 250 MB input, it encodes with no issues.
From researching the error "av_interleaved_write_frame()", it seems to be related to the timestamps of the input file. That said, it doesn't appear to be the case in my instance, because I can encode the file with no problem if I do it individually.
Example ffmpeg command:
ffmpeg -i GVowbt3vsrXL.mov -s 1920x1080 -sameq -vf "unsharp" -y GVowbt3vsrXL_4.wmv
Error output on the failed file at 10 minutes. Remember, there is no issue with the command if I run it by itself; it only fails when the script hits 10 minutes.
'output' =>
array (
0 => 'FFmpeg version SVN-r24545, Copyright (c) 2000-2010 the FFmpeg developers',
1 => ' built on Aug 20 2010 23:32:02 with gcc 4.1.2 20080704 (Red Hat 4.1.2-48)',
2 => ' configuration: --enable-shared --enable-gpl --enable-pthreads --enable-nonfree --cpu=opteron --extra-cflags=\'-O3 -march=opteron -mtune=opteron\' --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-avfilter --enable-filter=movie --enable-avfilter-lavf --enable-swscale',
3 => ' libavutil 50.23. 0 / 50.23. 0',
4 => ' libavcore 0. 1. 0 / 0. 1. 0',
5 => ' libavcodec 52.84. 1 / 52.84. 1',
6 => ' libavformat 52.77. 0 / 52.77. 0',
7 => ' libavdevice 52. 2. 0 / 52. 2. 0',
8 => ' libavfilter 1.26. 1 / 1.26. 1',
9 => ' libswscale 0.11. 0 / 0.11. 0',
10 => 'Input #0, mov,mp4,m4a,3gp,3g2,mj2, from \'/home/hdfootage/public_html/process/VideoEncode/_tmpfiles/GVowbt3vsrXL/GVowbt3vsrXL.mov\':',
11 => ' Metadata:',
12 => ' major_brand : qt',
13 => ' minor_version : 537199360',
14 => ' compatible_brands: qt',
15 => ' Duration: 00:00:20.00, start: 0.000000, bitrate: 110802 kb/s',
16 => ' Stream #0.0(eng): Video: mjpeg, yuvj422p, 1920x1080 [PAR 72:72 DAR 16:9], 109386 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc',
17 => ' Stream #0.1(eng): Audio: pcm_s16be, 44100 Hz, 2 channels, s16, 1411 kb/s',
18 => '[buffer @ 0xdcd0e0] w:1920 h:1080 pixfmt:yuvj422p',
19 => '[unsharp @ 0xe00280] auto-inserting filter \'auto-inserted scaler 0\' between the filter \'src\' and the filter \'Filter 0 unsharp\'',
20 => '[scale @ 0xe005b0] w:1920 h:1080 fmt:yuvj422p -> w:1920 h:1080 fmt:yuv420p flags:0xa0000004',
21 => '[unsharp @ 0xe00280] effect:sharpen type:luma msize_x:5 msize_y:5 amount:1.00',
22 => '[unsharp @ 0xe00280] effect:none type:chroma msize_x:0 msize_y:0 amount:0.00',
23 => 'Output #0, asf, to \'/home/hdfootage/public_html/process/VideoEncode/_tmpfiles/GVowbt3vsrXL/GVowbt3vsrXL_4.wmv\':',
24 => ' Metadata:',
25 => ' WM/EncodingSettings: Lavf52.77.0',
26 => ' Stream #0.0(eng): Video: msmpeg4, yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 1k tbn, 25 tbc',
27 => ' Stream #0.1(eng): Audio: libmp3lame, 44100 Hz, 2 channels, s16, 64 kb/s',
28 => 'Stream mapping:',
29 => ' Stream #0.0 -> #0.0',
30 => ' Stream #0.1 -> #0.1',
31 => 'Press [q] to stop encoding',
32 => '[msmpeg4 @ 0xdccb50] warning, clipping 1 dct coefficients to -127..127',
Then it errors:
frame= 75 fps= 5 q=1.0 size= 12704kB time=2.90 bitrate=35886.0kbits av_interleaved_write_frame(): Operation not permitted',
)
Has anybody encountered this sort of problem before? It seems to be something to do with the timestamps, but only because the script runs for a period longer than 10 minutes. It may be related to the PHP/Apache config, but I don't know whether it is FFMPEG or the server config I need to be looking at.
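(Not part of the question, but following up on its own PHP/Apache hypothesis: the cutoff it describes is exactly 600 seconds, a value commonly configured for PHP's max_execution_time. A quick diagnostic sketch, assuming the cron job invokes a `php` CLI binary:)

```shell
# Print the effective execution-time limit for the PHP binary cron uses.
# The CLI SAPI defaults max_execution_time to 0 (unlimited), but a custom
# php.ini, or running the script through a web SAPI, may set it to 600.
php -i | grep -i max_execution_time
```

If it reports 600, raising the limit in php.ini, or calling set_time_limit(0) at the top of the script, would confirm or rule out this cause; when PHP kills the script, the child ffmpeg process loses its output pipe, which can surface as exactly this write error.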