
Media (91)
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#1 The Wires
11 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
ED-ME-5 1-DVD
11 October 2011, by
Updated: October 2011
Language: English
Type: Audio
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (51)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Authorizations overridden by plugins
27 April 2010, by
Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
On other sites (6514)
-
YUV decoded by FFmpeg and drawn with OpenGL is not clear
3 June 2016, by S.Jin
I’m writing a movie player that uses FFmpeg and OpenGL ES. The movie decodes successfully, but when I use the AVFrame as a texture to draw on screen, the result is fuzzy. I don’t know where my code is wrong. If I convert the AVFrame from YUV to an RGB image, it is clear.
Does anyone know why drawing with the YUV data as textures is not clear?
My render code:
#import "SJGLView.h"
#import <GLKit/GLKit.h>
#import "SJDecoder.h"
#include "libavutil/pixfmt.h"
// MARK: - C Function
static void sj_logShaderError(GLuint shader) {
GLint info_len = 0;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &info_len);
if (info_len == 0) NSLog(@"Empty info");
else {
GLchar *log = (GLchar *)malloc(info_len);
glGetShaderInfoLog(shader, info_len, &info_len, log);
NSLog(@"Shader compile log: %s", log);
}
}
static void sj_logProgramError(GLuint program) {
int info_length;
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &info_length);
if (info_length) {
GLchar *log = (GLchar *)malloc(info_length);
glGetProgramInfoLog(program, info_length, &info_length, log);
NSLog(@"Program link log: %s", log);
}
}
GLuint sj_loadShader(GLenum shader_type, const char* shader_source) {
GLuint shader = glCreateShader(shader_type);
glShaderSource(shader, 1, &shader_source, NULL);
glCompileShader(shader);
GLint compile_status = 0;
glGetShaderiv(shader, GL_COMPILE_STATUS, &compile_status);
if (!compile_status) goto fail;
return shader;
fail:
if (shader) {
sj_logShaderError(shader);
glDeleteShader(shader);
}
return 0;
}
void loadOrtho(float *matrix, float left, float right, float bottom, float top, float near, float far) {
float r_l = right - left;
float t_b = top - bottom;
float f_n = far - near;
// The translation terms of a standard orthographic projection are negated.
float tx = -(right + left)/(right - left);
float ty = -(top + bottom)/(top - bottom);
float tz = -(far + near)/(far - near);
matrix[0] = 2.0f / r_l;
matrix[1] = 0.0f;
matrix[2] = 0.0f;
matrix[3] = 0.0f;
matrix[4] = 0.0f;
matrix[5] = 2.0f / t_b;
matrix[6] = 0.0f;
matrix[7] = 0.0f;
matrix[8] = 0.0f;
matrix[9] = 0.0f;
matrix[10] = -2.0f / f_n;
matrix[11] = 0.0f;
matrix[12] = tx;
matrix[13] = ty;
matrix[14] = tz;
matrix[15] = 1.0f;
}
// BT.709, standard for HDTV
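// Laid out column-major (each row below is one column), since
// glUniformMatrix3fv is called with transpose = GL_FALSE.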
static const GLfloat g_bt709[] = {
1.164, 1.164, 1.164,
0.0, -0.213, 2.112,
1.793, -0.533, 0.0,
};
const GLfloat *getColorMatrix_bt709() {
return g_bt709;
}
enum {
ATTRIBUTE_VERTEX,
ATTRIBUTE_TEXCOORD,
};
@implementation SJGLView {
EAGLContext *_context;
GLuint _framebuffer;
GLuint _renderbuffer;
GLint _backingWidth;
GLint _backingHeight;
GLfloat _vertices[8];
GLuint _program;
GLuint _av4Position;
GLuint _av2Texcoord;
GLuint _um4Mvp;
GLfloat _texcoords[8];
GLuint _us2Sampler[3];
GLuint _um3ColorConversion;
GLuint _textures[3];
SJDecoder *_decoder;
}
+ (Class)layerClass {
return [CAEAGLLayer class];
}
- (instancetype)initWithFrame:(CGRect)frame decoder:(SJDecoder *)decoder {
self = [super initWithFrame:frame];
if (self) {
_decoder = decoder;
[self setupGL];
}
return self;
}
- (void)layoutSubviews {
glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &_backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &_backingHeight);
[self updateVertices];
[self render: nil];
}
- (void)setContentMode:(UIViewContentMode)contentMode
{
[super setContentMode:contentMode];
[self updateVertices];
[self render:nil];
}
- (void)setupGL {
_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
NSAssert(_context != nil, @"Failed to init EAGLContext");
CAEAGLLayer *eaglLayer= (CAEAGLLayer *)self.layer;
eaglLayer.opaque = YES;
eaglLayer.drawableProperties = @{
kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
};
[EAGLContext setCurrentContext:_context];
if ([self setupEAGLContext]) {
NSLog(@"Success to setup EAGLContext");
if ([self loadShaders]) {
NSLog(@"Success to load shader");
_us2Sampler[0] = glGetUniformLocation(_program, "us2_SamplerX");
_us2Sampler[1] = glGetUniformLocation(_program, "us2_SamplerY");
_us2Sampler[2] = glGetUniformLocation(_program, "us2_SamplerZ");
_um3ColorConversion = glGetUniformLocation(_program, "um3_ColorConversion");
}
}
}
- (BOOL)setupEAGLContext {
glGenFramebuffers(1, &_framebuffer);
glGenRenderbuffers(1, &_renderbuffer);
glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &_backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &_backingHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _renderbuffer);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
NSLog(@"Failed to make complete framebuffer object: %x", status);
return NO;
}
GLenum glError = glGetError();
if (glError != GL_NO_ERROR) {
NSLog(@"Failed to setup EAGLContext: %x", glError);
return NO;
}
return YES;
}
- (BOOL)loadShaders {
NSString *vertexPath = [[NSBundle mainBundle] pathForResource:@"vertex" ofType:@"vsh"];
const char *vertexString = [[NSString stringWithContentsOfFile:vertexPath encoding:NSUTF8StringEncoding error:nil] UTF8String];
NSString *fragmentPath = _decoder.format == SJVideoFrameFormatYUV ? [[NSBundle mainBundle] pathForResource:@"yuv420p" ofType:@"fsh"] :
[[NSBundle mainBundle] pathForResource:@"rgb" ofType:@"fsh"];
const char *fragmentString = [[NSString stringWithContentsOfFile:fragmentPath encoding:NSUTF8StringEncoding error:nil] UTF8String];
GLuint vertexShader = sj_loadShader(GL_VERTEX_SHADER, vertexString);
GLuint fragmentShader = sj_loadShader(GL_FRAGMENT_SHADER, fragmentString);
_program = glCreateProgram();
glAttachShader(_program, vertexShader);
glAttachShader(_program, fragmentShader);
glLinkProgram(_program);
GLint link_status = GL_FALSE;
glGetProgramiv(_program, GL_LINK_STATUS, &link_status);
if(!link_status) goto fail;
_av4Position = glGetAttribLocation(_program, "av4_Position");
_av2Texcoord = glGetAttribLocation(_program, "av2_Texcoord");
_um4Mvp = glGetUniformLocation(_program, "um4_ModelViewProjection");
return YES;
fail:
sj_logProgramError(_program);
glDeleteShader(vertexShader);
glDeleteShader(fragmentShader);
glDeleteProgram(_program);
return NO;
}
- (void)useRenderer {
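// Tightly pack pixel rows (alignment = 1) so planes whose width is not a multiple of 4 upload correctly.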
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glUseProgram(_program);
if (0 == _textures[0]) glGenTextures(3, _textures);
for (int i = 0; i < 3; i++) {
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, _textures[i]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glUniform1i(_us2Sampler[i], i);
}
glUniformMatrix3fv(_um3ColorConversion, 1, GL_FALSE, getColorMatrix_bt709());
}
- (void)uploadTexture:(SJVideoFrame *)frame {
if (frame.format == SJVideoFrameFormatYUV) {
SJVideoYUVFrame *yuvFrame = (SJVideoYUVFrame *)frame;
const GLubyte *pixel[3] = { yuvFrame.luma.bytes, yuvFrame.chromaB.bytes, yuvFrame.chromaR.bytes };
const GLsizei widths[3] = { yuvFrame.width, yuvFrame.width/2, yuvFrame.width/2 };
const GLsizei heights[3] = { yuvFrame.height, yuvFrame.height/2, yuvFrame.height/2 };
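// In 4:2:0 video the chroma planes are half the luma plane's size in each dimension.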
for (int i = 0; i < 3; i++) {
glBindTexture(GL_TEXTURE_2D, _textures[i]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, widths[i], heights[i], 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, pixel[i]);
}
}
}
- (void)render:(SJVideoFrame *)frame {
[EAGLContext setCurrentContext:_context];
glUseProgram(_program);
[self useRenderer];
GLfloat modelviewProj[16];
loadOrtho(modelviewProj, -1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
glUniformMatrix4fv(_um4Mvp, 1, GL_FALSE, modelviewProj);
[self updateVertices];
[self updateTexcoords];
glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
glViewport(0, 0, _backingWidth, _backingHeight);
[self uploadTexture:frame];
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);
[_context presentRenderbuffer:GL_RENDERBUFFER];
}
- (void)updateVertices {
[self resetVertices];
BOOL fit = (self.contentMode == UIViewContentModeScaleAspectFit);
float width = _decoder.frameWidth;
float height = _decoder.frameHeight;
const float dW = (float)_backingWidth / width;
const float dH = (float)_backingHeight / height;
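// Aspect-fit picks the smaller scale so the whole frame is visible; otherwise fill and crop.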
float dd = fit ? MIN(dH, dW) : MAX(dH, dW);
float nW = (width * dd / (float)_backingWidth);
float nH = (height * dd / (float)_backingHeight);
_vertices[0] = -nW;
_vertices[1] = -nH;
_vertices[2] = nW;
_vertices[3] = -nH;
_vertices[4] = -nW;
_vertices[5] = nH;
_vertices[6] = nW;
_vertices[7] = nH;
glVertexAttribPointer(_av4Position, 2, GL_FLOAT, GL_FALSE, 0, _vertices);
glEnableVertexAttribArray(_av4Position);
}
- (void)resetVertices {
_vertices[0] = -1.0f;
_vertices[1] = -1.0f;
_vertices[2] = 1.0f;
_vertices[3] = -1.0f;
_vertices[4] = -1.0f;
_vertices[5] = 1.0f;
_vertices[6] = 1.0f;
_vertices[7] = 1.0f;
}
- (void)updateTexcoords {
[self resetTexcoords];
glVertexAttribPointer(_av2Texcoord, 2, GL_FLOAT, GL_FALSE, 0, _texcoords);
glEnableVertexAttribArray(_av2Texcoord);
}
- (void)resetTexcoords {
_texcoords[0] = 0.0f;
_texcoords[1] = 1.0f;
_texcoords[2] = 1.0f;
_texcoords[3] = 1.0f;
_texcoords[4] = 0.0f;
_texcoords[5] = 0.0f;
_texcoords[6] = 1.0f;
_texcoords[7] = 0.0f;
}
The .fsh file:
precision highp float;
varying highp vec2 vv2_Texcoord;
uniform mat3 um3_ColorConversion;
uniform lowp sampler2D us2_SamplerX;
uniform lowp sampler2D us2_SamplerY;
uniform lowp sampler2D us2_SamplerZ;
void main() {
mediump vec3 yuv;
lowp vec3 rgb;
yuv.x = (texture2D(us2_SamplerX, vv2_Texcoord).r - (16.0/255.0));
yuv.y = (texture2D(us2_SamplerY, vv2_Texcoord).r - 0.5);
yuv.z = (texture2D(us2_SamplerZ, vv2_Texcoord).r - 0.5);
rgb = um3_ColorConversion * yuv;
gl_FragColor = vec4(rgb, 1.0);
}
The .vsh file:
precision highp float;
varying highp vec2 vv2_Texcoord;
uniform lowp sampler2D us2_SamplerX;
void main() {
gl_FragColor = vec4(texture2D(us2_SamplerX, vv2_Texcoord).rgb, 1.0);
}
Result image: (screenshot)
The RGB image: (screenshot)
-
Subtitling Sierra RBT Files
2 June 2016, by Multimedia Mike — Game Hacking
This is part 2 of the adventure started in my Subtitling Sierra VMD Files post. After I completed the VMD subtitling, The Translator discovered a wealth of animation files in a format called RBT (this apparently stands for “Robot”, but I think “Ribbit” format could be more fun). What are we going to do? We had come so far by solving the VMD subtitling problem for Phantasmagoria. It would be a shame if the effort ground to a halt due to this.
Fortunately, the folks behind the ScummVM project already figured out enough of the format to be able to decode the RBT files in Phantasmagoria.
In the end, I was successful in creating a completely standalone tool that can take a Robot file and a subtitle file and create a new Robot file with subtitles. The source code is here (subtitle-rbt.c). Here’s what the final result looks like:
“What’s in the refrigerator?” I should note at this juncture that I am not sure whether this particular Robot file even has sound or dialogue, since I was conducting these experiments on a computer with non-working audio.
The RBT Format
I have created a new MultimediaWiki page describing the Robot Animation format based on the ScummVM source code. I have not worked with a format quite like this before. These are paletted animations consisting of a sequence of independent frames that are designed to be overlaid on top of a static background. Because of these characteristics, each frame encodes its own unique dimensions and origin coordinate. While the Phantasmagoria VMD files are usually 288×144 (usually double-sized for the benefit of a 640×400 Super VGA canvas), these frames are meant to be plotted on a game field that is roughly 576×288 (288×144 double-sized).
For example, 2 minimalist animation frames from a desk investigation Robot file:
100×147
101×149
As for compression, my first impression was that the algorithm was the same as VMD’s. This is wrong. It evidently uses an unmodified version of a standard algorithm called Lempel-Ziv-Stac (LZS). LZS shows up in several RFCs and was apparently used in MS-DOS’s transparent disk compression scheme.
Approach
Thankfully, many of the lessons I learned from the previous project are applicable to this project, including: subtitle library interfacing, subtitling in the paletted colorspace, and replacing encoded frames in the original file instead of trying to create a new file.
Here is the pitch for this project:
- Create a C program that can traverse through an input file, piece by piece, and generate an output file. The result of this should be a bitwise identical file.
- Adapt the LZS decompression algorithm from ScummVM into the new tool. Make the tool dump raw Portable NetMap (PNM) files of varying dimensions and ensure that they look correct (see the PNM sketch after this list).
- Compress using LZS.
- Stretch the frames and draw subtitles.
- More compression. Find the minimum window for each frame.
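To make the PNM step concrete, here is a minimal sketch of such a dumper. It is my own illustration, not code from the actual tool: the function name, the 8-bit paletted frame layout, and the 256-entry RGB palette are all assumptions.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper: dump one decoded, paletted frame as a binary
 * PPM (P6). Assumes 8-bit pixels indexing a 256-entry RGB palette. */
static int dump_frame_pnm(const char *path, const uint8_t *pixels,
                          int width, int height,
                          const uint8_t palette[256][3])
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    fprintf(f, "P6\n%d %d\n255\n", width, height);
    for (int i = 0; i < width * height; i++)
        fwrite(palette[pixels[i]], 1, 3, f);  /* palette lookup -> RGB */
    fclose(f);
    return 0;
}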
Compression
Normally, my first goal is to decompress the video and store the data in a raw form. However, this turned out to be impractical. While the format does support both compressed and uncompressed frames (even though ScummVM indicates that the uncompressed path is as yet unexercised), the goal of this project requires making the frames so large that they overflow certain parameters of the file.
A Robot file has a sequence of frames and 2 tables describing the size of each frame. One table describes the entire frame size (audio + video) while the second table describes just the video frame size. Since these tables only use 16 bits to specify a size, the maximum frame size is 65,536 bytes. Leaving space for the audio portion of the frame, this leaves a per-frame byte budget of only about 63,000 bytes for the video. Expanding the frame to 576×288 (165,888 pixels) would overflow this limit.
Anyway, the upshot is that I needed to compress the data up front.
Fortunately, the LZS compressor is pretty straightforward, at least if you have experience writing VLC-oriented codecs. While the algorithm revolves around back references, my approach was to essentially write an RLE encoder. My compressor searches for runs of data (plentiful once I started stretching the frame for subtitling purposes). When a run of n >= 3 of the same pixel is found, it encodes the pixel by itself, then stores a back reference of offset -1 and length (n-1). It took a little while to iron out a few problems, but I eventually got it to work perfectly.
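In sketch form, that pass looks like the following. This is my own illustration, not the tool’s code: the emit callbacks are placeholders for the real bitstream writer, and an offset of 1 here means “one byte back”, i.e. the -1 offset described above.

#include <stdint.h>

/* Run-length pass expressed as LZS back references: a run of n >= 3
 * identical pixels becomes one literal plus a back reference with
 * offset 1 (the previous byte) and length n-1. */
static void rle_encode_as_lzs(const uint8_t *pix, int count,
                              void (*emit_literal)(uint8_t byte),
                              void (*emit_backref)(int offset, int length))
{
    int i = 0;
    while (i < count) {
        int n = 1;
        while (i + n < count && pix[i + n] == pix[i])
            n++;                     /* measure the run */
        if (n >= 3) {
            emit_literal(pix[i]);    /* encode the pixel by itself */
            emit_backref(1, n - 1);  /* then repeat it n-1 times */
        } else {
            for (int j = 0; j < n; j++)
                emit_literal(pix[i + j]);
        }
        i += n;
    }
}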
I have to say, however, that the format is a little bit weird in how it codes very large numbers. The length encoding is somewhat Golomb-like, i.e., smaller values are encoded with fewer bits. However, when it gets to large numbers, it starts encoding counts of 15 as blocks of 1111. For example, 24 is bigger than 7. Thus, emit 1111 into the bitstream and subtract 8 from 24 -> 16. Still bigger than 15, so stuff another 1111 into the bitstream and subtract 15. Now we’re at 1, so stuff 0001. So 24 is 1111 1111 0001. 12 bits is not too horrible. But the total number of bytes needed grows linearly with the value, at roughly (value / 30) bytes. So a value of 300 takes around 10 bytes (80 bits) to encode.
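Here is that length encoding in sketch form, with a hypothetical bit-writer standing in for the tool’s real one; the buffer size and struct layout are my own assumptions.

#include <stdint.h>

/* Minimal MSB-first bit writer (illustrative stand-in). Caller must
 * zero-initialize it: BitWriter bw = { {0}, 0, 7 }; */
typedef struct {
    uint8_t buf[1024];
    int byte_pos;   /* index of the byte being filled */
    int bit_pos;    /* next bit within that byte, 7 down to 0 */
} BitWriter;

static void put_bits(BitWriter *bw, uint32_t value, int count)
{
    while (count--) {
        if (value & (1u << count))
            bw->buf[bw->byte_pos] |= 1u << bw->bit_pos;
        if (--bw->bit_pos < 0) { bw->bit_pos = 7; bw->byte_pos++; }
    }
}

/* LZS-style length code as described above: short codes for small
 * lengths; for length >= 8, a first 1111 nibble worth 8, then one
 * 1111 nibble per further 15, then a final remainder nibble.
 * E.g., 24 -> 1111 1111 0001. */
static void put_length(BitWriter *bw, int len)
{
    if (len <= 4) {                     /* 2 -> 00, 3 -> 01, 4 -> 10 */
        put_bits(bw, len - 2, 2);
    } else if (len <= 7) {              /* 5 -> 1100, 6 -> 1101, 7 -> 1110 */
        put_bits(bw, 0xC + (len - 5), 4);
    } else {
        put_bits(bw, 0xF, 4);           /* first 1111 covers a count of 8 */
        len -= 8;
        for (; len >= 15; len -= 15)    /* each further 1111 covers 15 */
            put_bits(bw, 0xF, 4);
        put_bits(bw, len, 4);           /* remainder, e.g. 0001 for 24 */
    }
}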
Palette Slices
As in the VMD subtitling project, I took the subtitle color offered in the subtitle spec file as a suggestion and used Euclidean distance to match it to the closest available color in the palette. One problem, however, is that the palette is a lot smaller in these animations. According to my notes, for the set of animations I scanned, only about 80 colors were specified, starting at palette index 55. I hypothesize that different slices of the palette are reserved for different uses, e.g., animation, background, and user interface. Thus, there is a smaller pool of colors to draw upon for subtitling purposes.
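The matching itself is a straightforward squared-distance scan. A sketch of my own, where the slice bounds merely echo the numbers noted above:

#include <stdint.h>

/* Find the palette entry closest to (r, g, b) by squared Euclidean
 * distance, restricted to the usable slice of the palette. */
static int closest_color(const uint8_t pal[][3], int first, int last,
                         int r, int g, int b)
{
    int best = first;
    long best_dist = -1;
    for (int i = first; i <= last; i++) {
        long dr = pal[i][0] - r;
        long dg = pal[i][1] - g;
        long db = pal[i][2] - b;
        long dist = dr * dr + dg * dg + db * db;
        if (best_dist < 0 || dist < best_dist) {
            best_dist = dist;
            best = i;
        }
    }
    return best;
}

/* e.g. closest_color(palette, 55, 55 + 79, 255, 255, 255) for white */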
Scaling
One bit of residual weirdness in this format is the presence of a per-frame scale factor. While most frames set this to 100 (100% scale), I have observed 70%, 80%, and 90%. ScummVM is a bit unsure about how to handle these, so I am as well. However, I eventually realized I didn’t really need to care, at least not when decoding and re-encoding the frame: just preserve the scale factor. I intend to modify the tool further to take the scale factor into account when creating the subtitle.
The Final Resolution
Right around the time that I was composing this post, The Translator emailed me and notified me that he had found a better way to subtitle the Robot files by modifying the scripts, rendering my entire approach moot. The result is much cleaner:
Turns out that the engine supported subtitles all along
It’s a good thing that I enjoyed the challenge or I might be annoyed at this point.
See Also
- Subtitling Sierra VMD Files: My effort to subtitle the main FMV files found in Sierra games.
-
How to set header metadata on an encoded video?
11 June 2013, by Jona
I’m encoding some images into an H.264 video inside an MP4 container. I’m essentially using the FFmpeg example muxing.c. The thing is, I’m trying to set some metadata on the MP4 container, such as artist, title, etc.
I thought using the following would work, but it didn’t:
AVDictionary *opts = NULL;
av_dict_set(&opts, "title", "Super Lucky Dude", 0);
av_dict_set(&opts, "author", "Jacky Chan", 0);
av_dict_set(&opts, "album", "Chinese Movie", 0);
av_dict_set(&opts, "year", "05/10/2013", 0);
av_dict_set(&opts, "comment", "This video was created using example app.", 0);
av_dict_set(&opts, "genre", "Action", 0);
// Write the stream header, if any.
ret = avformat_write_header(oc, &opts);
After the whole video is created, I don’t see any metadata written to the video file. Any pointers on how to actually do this properly?
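For what it’s worth, the libavformat muxers read container-level tags from the AVFormatContext’s own metadata dictionary rather than from the avformat_write_header() options. A sketch of that commonly used approach, assuming the same oc context as above:

// Attach the tags to the format context itself before writing the header.
av_dict_set(&oc->metadata, "title",   "Super Lucky Dude", 0);
av_dict_set(&oc->metadata, "artist",  "Jacky Chan", 0);
av_dict_set(&oc->metadata, "album",   "Chinese Movie", 0);
av_dict_set(&oc->metadata, "comment", "This video was created using example app.", 0);
av_dict_set(&oc->metadata, "genre",   "Action", 0);
// Then write the header without passing the tags as options.
ret = avformat_write_header(oc, NULL);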