
Other articles (59)
-
Encoding and processing into web-friendly formats
13 April 2011. MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and in MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors can modify their own information on the authors page -
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (9276)
-
FFmpeg does not decode h264 stream
5 July 2012, by HAPPY_TIGER
I am trying to decode an h264 stream from an rtsp server and render it on iPhone.
I found some libraries and read some articles about it.
The libraries are from dropCam for iPhone, called RTSPClient and DecoderWrapper.
But I cannot decode the frame data with DecoderWrapper, which uses ffmpeg.
Here is my code.
VideoViewer.m
- (void)didReceiveFrame:(NSData*)frameData presentationTime:(NSDate*)presentationTime
{
    [VideoDecoder staticInitialize];
    mConverter = [[VideoDecoder alloc] initWithCodec:kVCT_H264 colorSpace:kVCS_RGBA32 width:320 height:240 privateData:nil];
    [mConverter decodeFrame:frameData];

    if ([mConverter isFrameReady]) {
        UIImage *imageData = [mConverter getDecodedFrame];
        if (imageData) {
            [mVideoView setImage:imageData];
            NSLog(@"decoded!");
        }
    }
}
VideoDecoder.m
- (id)initWithCodec:(enum VideoCodecType)codecType
         colorSpace:(enum VideoColorSpace)colorSpace
              width:(int)width
             height:(int)height
        privateData:(NSData*)privateData {
    if (self = [super init]) {
        codec = avcodec_find_decoder(CODEC_ID_H264);
        codecCtx = avcodec_alloc_context();

        // Note: for H.264 RTSP streams, the width and height are usually not
        // specified (width and height are 0). These fields become filled in
        // once the first frame is decoded and the SPS is processed.
        codecCtx->width = width;
        codecCtx->height = height;

        codecCtx->extradata = av_malloc([privateData length]);
        codecCtx->extradata_size = [privateData length];
        [privateData getBytes:codecCtx->extradata length:codecCtx->extradata_size];
        codecCtx->pix_fmt = PIX_FMT_RGBA;
#ifdef SHOW_DEBUG_MV
        codecCtx->debug_mv = 0xFF;
#endif
        srcFrame = avcodec_alloc_frame();
        dstFrame = avcodec_alloc_frame();

        int res = avcodec_open(codecCtx, codec);
        if (res < 0) {
            NSLog(@"Failed to initialize decoder");
        }
    }
    return self;
}
- (void)decodeFrame:(NSData*)frameData {
    AVPacket packet = {0};
    packet.data = (uint8_t*)[frameData bytes];
    packet.size = [frameData length];

    int frameFinished = 0;
    NSLog(@"Packet size===>%d", packet.size);

    // Is this a packet from the video stream?
    if (packet.stream_index == 0) {
        int res = avcodec_decode_video2(codecCtx, srcFrame, &frameFinished, &packet);
        NSLog(@"Res value===>%d", res);
        NSLog(@"frame data===>%d", (int)srcFrame->data);
        if (res < 0) {
            NSLog(@"Failed to decode frame");
        }
    } else {
        NSLog(@"No video stream found");
    }

    // Need to delay initializing the output buffers because we don't know
    // the dimensions until we decode the first frame.
    if (!outputInit) {
        if (codecCtx->width > 0 && codecCtx->height > 0) {
#ifdef _DEBUG
            NSLog(@"Initializing decoder with frame size of: %dx%d", codecCtx->width, codecCtx->height);
#endif
            outputBufLen = avpicture_get_size(PIX_FMT_RGBA, codecCtx->width, codecCtx->height);
            outputBuf = av_malloc(outputBufLen);
            avpicture_fill((AVPicture*)dstFrame, outputBuf, PIX_FMT_RGBA, codecCtx->width, codecCtx->height);
            convertCtx = sws_getContext(codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
                                        codecCtx->width, codecCtx->height, PIX_FMT_RGBA,
                                        SWS_FAST_BILINEAR, NULL, NULL, NULL);
            outputInit = YES;
            frameFinished = 1;
        } else {
            NSLog(@"Could not get video output dimensions");
        }
    }

    if (frameFinished)
        frameReady = YES;
}

The console shows the following:
2011-05-16 20:16:04.223 RTSPTest1[41226:207] Packet size===>359
[h264 @ 0x5815c00] no frame!
2011-05-16 20:16:04.223 RTSPTest1[41226:207] Res value===>-1
2011-05-16 20:16:04.224 RTSPTest1[41226:207] frame data===>101791200
2011-05-16 20:16:04.224 RTSPTest1[41226:207] Failed to decode frame
2011-05-16 20:16:04.225 RTSPTest1[41226:207] decoded!
2011-05-16 20:16:04.226 RTSPTest1[41226:207] Packet size===>424
[h264 @ 0x5017c00] no frame!
2011-05-16 20:16:04.226 RTSPTest1[41226:207] Res value===>-1
2011-05-16 20:16:04.227 RTSPTest1[41226:207] frame data===>81002704
2011-05-16 20:16:04.227 RTSPTest1[41226:207] Failed to decode frame
2011-05-16 20:16:04.228 RTSPTest1[41226:207] decoded!
2011-05-16 20:16:04.229 RTSPTest1[41226:207] Packet size===>424
[h264 @ 0x581d000] no frame!
2011-05-16 20:16:04.229 RTSPTest1[41226:207] Res value===>-1
2011-05-16 20:16:04.230 RTSPTest1[41226:207] frame data===>101791616
2011-05-16 20:16:04.230 RTSPTest1[41226:207] Failed to decode frame
2011-05-16 20:16:04.231 RTSPTest1[41226:207] decoded!
. . . . .
But the simulator shows nothing.
What's wrong with my code?
Help me solve this problem.
Thanks for your answers.
-
swscale/arm: re-enable neon rgbx to nv12 routines
22 February 2016, by Xiaolei Yu
Commit '842b8f4ba2e79b9c004a67f6fdb3d5c5d05805d3' fixed the clang/iPhone
build but failed on some versions of cygwin. It has now been verified
to work on both platforms.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
-
Respect video rotations
2 May 2011. Videos shot with mobile devices (notably the Apple iPhone) carry a Rotation metadata field that Mediainfo can retrieve.
It would therefore be better to swap the heights/widths at that point and encode with the rotation applied ... perhaps.
cf: