
Other articles (91)
-
Emballe médias: what is it for?
4 February 2011 — This plugin is designed to manage sites that publish documents of all types.
It creates "media", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image, or text; only one document can be linked to a given "media" article;
-
Submit bugs and patches
13 April 2011 — Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
From upload to the final video [standalone version]
31 January 2010 — The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (3693)
-
Using FFmpeg in Microsoft Visual Studio (2008)
1 June 2013, by miyangil — How can I use FFmpeg functions with Microsoft Visual Studio? I know I have to use MSYS and MinGW, but is there any document showing the steps?
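For reference, FFmpeg built under MSYS/MinGW can be consumed from a Visual Studio project by linking the generated import libraries and, when compiling as C++, wrapping the headers in extern "C". Below is a minimal C sketch of calling libavformat using the API of that era; the build command is illustrative and assumes a shared FFmpeg build on the library path:

/* probe.c — minimal libavformat smoke test.
   MinGW build (illustrative): gcc probe.c -o probe.exe -lavformat -lavcodec -lavutil
   From MSVC, link the import libraries produced by the MinGW build instead. */
#include <stdio.h>
#include <libavformat/avformat.h>

int main(int argc, char *argv[])
{
    AVFormatContext *fmt = NULL;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <media file>\n", argv[0]);
        return 1;
    }
    av_register_all(); /* required on FFmpeg releases of that era */
    if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0) {
        fprintf(stderr, "could not open %s\n", argv[1]);
        return 1;
    }
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        fprintf(stderr, "could not read stream info\n");
        avformat_close_input(&fmt);
        return 1;
    }
    av_dump_format(fmt, 0, argv[1], 0); /* prints streams, codecs, duration */
    avformat_close_input(&fmt);
    return 0;
}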
-
Calling the FFmpeg API from Oracle
1 May 2012, by TenG — I have installed the ffmpeg and ffmpeg-devel packages on Linux.
Oracle 11g is installed and running.
The database stores media files, and for better streaming we need to convert them to AVI format.
For ease of integration, we would like to do this conversion in the database.
Now, the simplest option is to write a wrapper for the ffmpeg command-line utility and enable a PL/SQL procedure to call it.
However, this would require the following steps:
1. Read the video BLOB
2. Write it to an OS file
3. Call the ffmpeg wrapper, giving it the file name from (2) and an output file name
4. Load the output file from (3) into a BLOB in PL/SQL
I would like, if possible, to write a C routine (using Oracle's external library feature) that accepts the input as a BLOB (OCILobLocator), calls the appropriate libavformat functions on the LOB, and writes the result to another LOB (again an OCILobLocator), which the PL/SQL layer then uses as the AVI file.
Another advantage of this is that it avoids the undesirable impact of issuing an OS command from within Oracle.
The problem I have is that the examples given for ffmpeg show the processing of data from files, whereas I need the libraries to process the LOBs.
The alternative is to see whether Oracle's ORDVideo data type can do this kind of conversion using setformat and process.
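On the file-versus-LOB point raised above: libavformat does not have to read from a file path; it can pull data through caller-supplied callbacks via avio_alloc_context. A minimal read-side sketch in C follows; the MemBuf type and open_from_memory helper are illustrative stand-ins for however the LOB bytes are exposed (for example, fetched beforehand with OCILobRead2), not part of OCI or FFmpeg:

#include <string.h>
#include <libavformat/avformat.h>

/* Hypothetical in-memory source standing in for LOB access. */
typedef struct {
    const uint8_t *data;   /* LOB contents, e.g. read out via OCILobRead2 */
    size_t size;
    size_t pos;
} MemBuf;

/* Read callback: hand libavformat up to buf_size bytes from the buffer. */
static int mem_read(void *opaque, uint8_t *buf, int buf_size)
{
    MemBuf *m = opaque;
    size_t left = m->size - m->pos;
    size_t n = (size_t)buf_size < left ? (size_t)buf_size : left;
    if (n == 0)
        return AVERROR_EOF;
    memcpy(buf, m->data + m->pos, n);
    m->pos += n;
    return (int)n;
}

/* Seek callback so demuxers that need random access still work. */
static int64_t mem_seek(void *opaque, int64_t offset, int whence)
{
    MemBuf *m = opaque;
    switch (whence) {
    case SEEK_SET: m->pos = (size_t)offset; break;
    case SEEK_CUR: m->pos += (size_t)offset; break;
    case SEEK_END: m->pos = m->size + (size_t)offset; break;
    case AVSEEK_SIZE: return (int64_t)m->size;  /* total-size query */
    default: return -1;
    }
    return (int64_t)m->pos;
}

/* Open a demuxer over the in-memory data instead of a file path. */
static AVFormatContext *open_from_memory(MemBuf *m)
{
    unsigned char *iobuf = av_malloc(4096);  /* owned by the AVIOContext */
    AVIOContext *pb = avio_alloc_context(iobuf, 4096, 0 /* read-only */,
                                         m, mem_read, NULL, mem_seek);
    AVFormatContext *fmt = avformat_alloc_context();
    fmt->pb = pb;
    return avformat_open_input(&fmt, NULL, NULL, NULL) < 0 ? NULL : fmt;
}

The write side is symmetric: an avio_alloc_context with write_flag set to 1 and a write callback lets the AVI muxer deliver its output to the callback, from which it can be appended to the destination LOB, so no OS files or shell commands are needed.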
-
How can you pass YUV frames from FFmpeg to OpenGL ES?
24 May 2012, by TheRock — Has anybody tried to use FFmpeg to decode a video frame and then display it in OpenGL ES on iOS 5.0?
I tried to modify the GLCameraRipple example from Apple, but I always get a -6683 error from CVOpenGLESTextureCacheCreateTextureFromImage().
Here is my decode code:
...
convertCtx = sws_getContext(codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
                            codecCtx->width, codecCtx->height, PIX_FMT_NV12,
                            SWS_FAST_BILINEAR, NULL, NULL, NULL);
srcFrame = avcodec_alloc_frame();
dstFrame = avcodec_alloc_frame();
width = codecCtx->width;
height = codecCtx->height;
outputBufLength = avpicture_get_size(PIX_FMT_NV12, width, height);
outputBuf = malloc(outputBufLength);
avpicture_fill((AVPicture *)dstFrame, outputBuf, PIX_FMT_NV12, width, height);
...
avcodec_decode_video2(codecCtx, srcFrame, &gotFrame, pkt);
...
sws_scale(convertCtx,
          (const uint8_t **)srcFrame->data, srcFrame->linesize,
          0, codecCtx->height,
          dstFrame->data, dstFrame->linesize);

Here is my code for display:
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, [videoDecoder width], [videoDecoder height],
                             kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                             dstFrame->data[0], dstFrame->linesize[0], 0, 0, 0,
                             &pixelBuffer);
...
CVReturn err;
int textureWidth = CVPixelBufferGetWidth(pixelBuffer);
int textureHeight = CVPixelBufferGetHeight(pixelBuffer);
if (!videoTextureCache)
{
    NSLog(@"No video Texture cache");
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// Y-plane
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RED_EXT,
textureWidth,
textureHeight,
GL_RED_EXT,
GL_UNSIGNED_BYTE,
0,
&lumaTexture);
if (err)
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture), CVOpenGLESTextureGetName(lumaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// UV-plane
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RG_EXT,
textureWidth / 2,
textureHeight / 2,
GL_RG_EXT,
GL_UNSIGNED_BYTE,
1,
&chromaTexture);
if (err)
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(chromaTexture), CVOpenGLESTextureGetName(chromaTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

I know that the code is not complete, but it should be enough to understand my problem.
Could anybody please help me or show me a working example of this approach?
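For context, -6683 is kCVReturnPixelBufferNotOpenGLCompatible, and a frequent cause is that CVPixelBufferCreateWithBytes does not produce an IOSurface-backed buffer, which the texture cache requires. Below is a sketch of the usual workaround: create the buffer with CVPixelBufferCreate plus kCVPixelBufferIOSurfacePropertiesKey and copy the NV12 planes in. width, height, and dstFrame are the same variables as in the question; ARC is assumed for the __bridge cast.

// Build an IOSurface-backed NV12 CVPixelBuffer from the sws_scale output
// so CVOpenGLESTextureCacheCreateTextureFromImage will accept it.
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                    (__bridge CFDictionaryRef)attrs, &pixelBuffer);

CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Copy the Y plane row by row; CoreVideo's stride may differ from FFmpeg's.
uint8_t *dstY   = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t  strideY = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
for (int row = 0; row < height; row++)
    memcpy(dstY + row * strideY,
           dstFrame->data[0] + row * dstFrame->linesize[0], width);

// Copy the interleaved CbCr plane: half the rows, width bytes per row in NV12.
uint8_t *dstUV   = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t  strideUV = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
for (int row = 0; row < height / 2; row++)
    memcpy(dstUV + row * strideUV,
           dstFrame->data[1] + row * dstFrame->linesize[1], width);

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// pixelBuffer can now go to the two texture-cache calls unchanged;
// release it with CVPixelBufferRelease once the textures have been created.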