
Other articles (49)
-
Videos
21 April 2011
Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 <video> tag.
One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name one) and that each browser natively handles only certain video formats.
Its main advantage, on the other hand, is that video playback is handled natively by the browser, which removes the need for Flash and (...) -
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Support for all types of media
10 April 2011
Unlike many programs and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
On other sites (3944)
-
Render YUV video in OpenGL of ffmpeg using CVPixelBufferRef and Shaders
4 September 2012, by resident_
I'm rendering YUV frames from ffmpeg with the iOS 5.0 method "CVOpenGLESTextureCacheCreateTextureFromImage".
I'm following the Apple example GLCameraRipple.
My result on the iPhone screen is this: iPhone Screen
I need to know what I'm doing wrong.
I'm posting part of my code to help find the errors.
The ffmpeg frame configuration:
ctx->p_sws_ctx = sws_getContext(ctx->p_video_ctx->width,
                                ctx->p_video_ctx->height,
                                ctx->p_video_ctx->pix_fmt,
                                ctx->p_video_ctx->width,
                                ctx->p_video_ctx->height,
                                PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);

// Buffer for the converted frame data (allocated for PIX_FMT_YUV420P, despite the RGB naming)
ctx->p_frame_buffer = malloc(avpicture_get_size(PIX_FMT_YUV420P,
                                                ctx->p_video_ctx->width,
                                                ctx->p_video_ctx->height));

avpicture_fill((AVPicture *)ctx->p_picture_rgb, ctx->p_frame_buffer, PIX_FMT_YUV420P,
               ctx->p_video_ctx->width,
               ctx->p_video_ctx->height);
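The post sets up the swscale context but never shows the per-frame conversion, so presumably something like the helper below runs for every decoded frame before the render method that follows. This is only a sketch; the helper and its parameter names are mine, not from the original code.

#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

// Hypothetical helper (not in the original post): convert one decoded frame into
// the YUV420P picture prepared by avpicture_fill() above.
static void convert_decoded_frame(struct SwsContext *sws_ctx,
                                  const AVFrame *decoded,   // frame produced by the decoder
                                  AVPicture *dst,           // e.g. (AVPicture *)ctx->p_picture_rgb
                                  int src_height)           // e.g. ctx->p_video_ctx->height
{
    sws_scale(sws_ctx,
              (const uint8_t * const *)decoded->data,
              decoded->linesize,
              0,
              src_height,
              dst->data,
              dst->linesize);
}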
My render method:

if (NULL == videoTextureCache) {
NSLog(@"displayPixelBuffer error");
return;
}
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, mTexW, mTexH, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, buffer, mFrameW * 3, NULL, 0, NULL, &pixelBuffer);
CVReturn err;
// Y-plane
glActiveTexture(GL_TEXTURE0);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RED_EXT,
mTexW,
mTexH,
GL_RED_EXT,
GL_UNSIGNED_BYTE,
0,
&_lumaTexture);
if (err)
{
NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// UV-plane
glActiveTexture(GL_TEXTURE1);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RG_EXT,
mTexW/2,
mTexH/2,
GL_RG_EXT,
GL_UNSIGNED_BYTE,
1,
&_chromaTexture);
if (err)
{
NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
// Set the view port to the entire view
glViewport(0, 0, backingWidth, backingHeight);
static const GLfloat squareVertices[] = {
1.0f, 1.0f,
-1.0f, 1.0f,
1.0f, -1.0f,
-1.0f, -1.0f,
};
GLfloat textureVertices[] = {
1, 1,
1, 0,
0, 1,
0, 0,
};
// Draw the texture on the screen with OpenGL ES 2
[self renderWithSquareVertices:squareVertices textureVertices:textureVertices];
// Flush the CVOpenGLESTexture cache and release the texture
CVOpenGLESTextureCacheFlush(videoTextureCache, 0);
CVPixelBufferRelease(pixelBuffer);
[moviePlayerDelegate bufferDone];

The renderWithSquareVertices method:
- (void)renderWithSquareVertices:(const GLfloat*)squareVertices textureVertices:(const GLfloat*)textureVertices
{
// Use shader program.
glUseProgram(shader.program);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
}
My fragment shader:
uniform sampler2D SamplerY;
uniform sampler2D SamplerUV;
varying highp vec2 _texcoord;
void main()
{
mediump vec3 yuv;
lowp vec3 rgb;
yuv.x = texture2D(SamplerY, _texcoord).r;
yuv.yz = texture2D(SamplerUV, _texcoord).rg - vec2(0.5, 0.5);
// BT.601, which is the standard for SDTV, is provided as a reference
/* rgb = mat3( 1, 1, 1,
0, -.34413, 1.772,
1.402, -.71414, 0) * yuv;*/
// Using BT.709 which is the standard for HDTV
rgb = mat3( 1, 1, 1,
0, -.18732, 1.8556,
1.57481, -.46813, 0) * yuv;
gl_FragColor = vec4(rgb, 1);
}
Thanks very much!
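One thing worth flagging, and quite possibly the cause of the broken output: swscale is asked for PIX_FMT_YUV420P, which is three separate planes, while the pixel buffer is created as kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, which expects two planes (Y plus interleaved CbCr), and mFrameW * 3 is not a plausible bytes-per-row for either layout. Below is a minimal sketch of one way to line the two up: convert to NV12 on the FFmpeg side (PIX_FMT_NV12 as the swscale destination instead of PIX_FMT_YUV420P) and hand the two planes to Core Video with CVPixelBufferCreateWithPlanarBytes. The helper name and the assumption that the AVPicture holds NV12 data are mine, not from the original post.

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>
#include <libavcodec/avcodec.h>

// Hypothetical helper: wrap an NV12 AVPicture in a CVPixelBuffer whose plane layout
// matches kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange (plane 0 = Y, plane 1 = CbCr).
static CVPixelBufferRef createPixelBufferFromNV12(const AVPicture *picture, int width, int height)
{
    void  *planeBaseAddresses[2] = { picture->data[0], picture->data[1] };
    size_t planeWidths[2]        = { (size_t)width,  (size_t)(width / 2) };   // chroma plane is half size
    size_t planeHeights[2]       = { (size_t)height, (size_t)(height / 2) };
    size_t planeBytesPerRow[2]   = { (size_t)picture->linesize[0], (size_t)picture->linesize[1] };

    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn err = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                                                      width, height,
                                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                                      NULL, 0,          // no contiguous data block
                                                      2,                // two planes: Y and CbCr
                                                      planeBaseAddresses,
                                                      planeWidths,
                                                      planeHeights,
                                                      planeBytesPerRow,
                                                      NULL, NULL,       // no release callback in this sketch
                                                      NULL,
                                                      &pixelBuffer);
    if (err != kCVReturnSuccess) {
        NSLog(@"CVPixelBufferCreateWithPlanarBytes failed: %d", (int)err);
        return NULL;
    }
    return pixelBuffer;
}

With a buffer built this way, the two texture-cache calls in the question (GL_RED_EXT for plane 0, GL_RG_EXT for plane 1) would be sampling planes whose layout actually matches what the pixel buffer advertises.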
-
Is it possible to play the h264/mpegts format in iOS?
11 October 2012, by jAckOdE
The server transcodes the video to h264/aac and sends the video data to the client simultaneously. To play and save the video data at the same time, I use mpegts as the container format, but the problem is that the iOS MediaPlayer cannot play an mpegts file.
Google says that I could use an iOS build of ffmpeg for the task, but that seems like overkill. Is there any other way to play mpegts on the iPhone using just the iOS SDK?
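One route sometimes suggested for this kind of setup is HTTP Live Streaming: the stock player classes can play an .m3u8 playlist whose segments are MPEG-TS, so if the server can expose its transcoded output as HLS, no ffmpeg build is needed on the device. A minimal sketch with the 2012-era SDK, using a hypothetical playlist URL and an assumed player property; saving the stream at the same time would still have to be handled separately.

#import <UIKit/UIKit.h>
#import <MediaPlayer/MediaPlayer.h>

// Sketch only: play an HLS playlist (MPEG-TS segments) with the stock player,
// from inside some view controller. The URL and the 'player' property are hypothetical.
- (void)playStream
{
    NSURL *streamURL = [NSURL URLWithString:@"http://example.com/live/stream.m3u8"];
    self.player = [[MPMoviePlayerController alloc] initWithContentURL:streamURL];
    self.player.view.frame = self.view.bounds;
    [self.view addSubview:self.player.view];
    [self.player play];
}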
-
Getting accurate time from FFMPeg with Objective C (Audio Queue Services)
2 April 2012, by Winston
My iPhone app plays an audio file using FFmpeg.
I'm getting the elapsed time (to show to the user) from the playing audio, in minutes and seconds after converting from the microseconds given by FFmpeg, like so:

AudioTimeStamp currentTimeStamp;
AudioQueueGetCurrentTime(audioQueue, NULL, &currentTimeStamp, NULL);
getFFMPEGtime = currentTimeStamp.mSampleTime / self.basicAudioDescription.mSampleRate;

self.currentAudioTime = [NSString stringWithFormat:@"%02d:%02d",
                         (int)getFFMPEGtime / (int)60000000,
                         (int)((getFFMPEGtime % 60000000) / 1000000)];

Everything works fine, but when I scrub back or forward to play another portion of the song, the elapsed time goes back to zero, no matter what the current position is. The timer always zeroes out.
I know I'm supposed to do some math to keep track of the old time and the new time, maybe by constructing another clock, perhaps by implementing another callback function, etc. I'm not sure which way to go.
My questions are:
1) What's the best approach to keeping track of the elapsed time when going back/forward in a song, so that the clock doesn't always go back to zero?
2) Should I look more deeply into FFmpeg functions, or should I stick with Objective-C and Cocoa Touch to solve this problem?
Please, I need some advice/ideas from experienced programmers. I'm stuck. Thanks in advance!
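For the first question, a common pattern is to keep a seek offset alongside the queue clock: in a setup like the one described, the sample time reported by AudioQueueGetCurrentTime restarts from zero when the queue is stopped and restarted for a scrub, so the displayed time becomes the seek target plus the time played since the restart. A minimal sketch with made-up names (seekOffsetSeconds is not from the original post; audioQueue and basicAudioDescription are assumed to be properties of the player class):

#import <AudioToolbox/AudioToolbox.h>

// Sketch only: remember where the user scrubbed to, and add the queue's running time on top.

// Called when the user scrubs; 'seconds' is the position being seeked to.
- (void)didSeekToTime:(double)seconds
{
    self.seekOffsetSeconds = seconds;   // the queue's sample clock restarts from 0 after the seek
}

// Elapsed time to display, in seconds.
- (double)currentPlaybackTime
{
    AudioTimeStamp timeStamp;
    OSStatus err = AudioQueueGetCurrentTime(self.audioQueue, NULL, &timeStamp, NULL);
    if (err != noErr) {
        return self.seekOffsetSeconds;  // queue not running yet
    }
    double playedSinceRestart = timeStamp.mSampleTime / self.basicAudioDescription.mSampleRate;
    return self.seekOffsetSeconds + playedSinceRestart;
}

Since the value returned here is already in seconds, formatting it as minutes and seconds is just (int)total / 60 and (int)total % 60 rather than the microsecond divisions in the snippet above.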