
Other articles (55)
-
Participate in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
Currently MediaSPIP is only available in French and (...)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
images: png, gif, jpg, bmp and more
audio: MP3, Ogg, Wav and more
video: AVI, MP4, OGV, mpg, mov, wmv and more
text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Accepted formats
28 January 2010
The following commands give information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats used:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...)
On other sites (10272)
-
Webcam stream with FFMpeg on iPhone
6 December 2011, by Saphrosit
I'm trying to send and show a webcam stream from a Linux server to an iPhone app. I don't know if it's the best solution, but I downloaded and installed FFMpeg on the Linux server (following, for those who want to know, this tutorial).
FFMpeg is working fine. After a lot of wandering, I managed to send a stream to the client by launching
ffmpeg -s 320x240 -f video4linux2 -i /dev/video0 -f mpegts -vcodec libx264 udp://192.168.1.34:1234
where 192.168.1.34 is the address of the client. Actually the client is a Mac, but it is supposed to be an iPhone. I know the stream is sent and received correctly (tested in different ways).
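As a side note, on a desktop receiver such a stream can usually be checked with FFmpeg's bundled player, assuming the build includes it; run it on the machine the server streams to:
ffplay udp://0.0.0.0:1234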
However I didn't managed to watch the stream directly on the iPhone.
I thought of different (possible) solutions:
-
First solution: store the incoming data in an NSMutableData object. Then, when the stream ends, save it and play it using an MPMoviePlayerController. Here's the code:
[video writeToFile:@"videoStream.m4v" atomically:YES];
NSURL *url = [NSURL fileURLWithPath:@"videoStream.m4v"];
MPMoviePlayerController *videoController = [[MPMoviePlayerController alloc] initWithContentURL:url];
[videoController.view setFrame:CGRectMake(100, 100, 150, 150)];
[self.view addSubview:videoController.view];
[videoController play];
The problem with this solution is that nothing is played (I only see a black square), even though the video is saved correctly (I can play it directly from my disk using VLC). Besides, it's not such a great idea. It's just to make things work.
-
Second solution: use CMSampleBufferRef to store the incoming video. Many more problems come with this solution: first of all, there's no CoreMedia.framework in my system. Besides, I don't quite get what this class represents and what I should do to make it work: I mean, if I start (somehow) filling this "SampleBuffer" with the bytes I receive from the UDP connection, will it automatically call the CMSampleBufferMakeDataReadyCallback function I set during creation? If yes, when? When a single frame is completed or when the whole stream is received? (See the sketch after this list.)
-
Third solution: use the AVFoundation framework (this isn't actually available on my Mac either). I did not understand whether it's actually possible to start recording from a remote source, or even from an NSMutableData, a char* or something like that. In the AVFoundation Programming Guide I didn't find any reference saying whether it's possible or not.
I don't know which of these solutions is best for my purpose. Any suggestion would be appreciated.
Besides, there's also another problem: I didn't use any segmenter program to send the video. Now, if I'm not mistaken, a segmenter splits the source video into smaller/shorter videos that are easier to send. If that's right, then maybe it's not strictly necessary to make things work (it may be added later). However, since the server runs Linux, I cannot use Apple's mediastreamsegmenter. Can someone suggest an open-source segmenter to use together with FFMpeg?
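As an aside on the segmenting part: recent FFmpeg builds ship a built-in segment muxer, so assuming a build that includes it, a command along these lines (file names illustrative) cuts an MPEG-TS stream into chunks plus an .m3u8 playlist:
ffmpeg -i input.ts -vcodec copy -acodec copy -f segment -segment_time 10 -segment_list stream.m3u8 stream%03d.ts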
UPDATE: I edited my question, adding more information on what I have done so far and what my doubts are.
-
iPhone camera shooting video using AVCaptureSession, and converting the CMSampleBufferRef to h.264 format with ffmpeg is the issue. Please advise
4 January 2012, by isaiah
My goal is h.264/AAC, mpeg2-ts streaming to a server from an iPhone device.
Currently my source compiles successfully with FFmpeg+libx264. I know about the GNU license. I want a demo program.
I want to know:
1. Is the CMSampleBufferRef to AVPicture data conversion succeeding?
avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);
pFrame's linesize and data are not null, but its pts is -9233123123, and the same goes for outpic. Because of this I have to guess at the 'non-strictly-monotonic PTS' message.
2. This log repeats:
encoding frame (size= 0)
encoding frame = ""
'avcodec_encode_video' returning 0 means success, but it always returns 0. I don't know what to do...
2011-06-01 15:15:14.199 AVCam[1993:7303] pFrame = avcodec_alloc_frame();
2011-06-01 15:15:14.207 AVCam[1993:7303] avpicture_fill = 1228800
Video encoding
2011-06-01 15:15:14.215 AVCam[1993:7303] codec = 5841844
[libx264 @ 0x1441e00] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x1441e00] profile Constrained Baseline, level 2.0
[libx264 @ 0x1441e00] non-strictly-monotonic PTS
encoding frame (size= 0)
encoding frame
[libx264 @ 0x1441e00] final ratefactor: 26.74
3. I have to guess that the 'non-strictly-monotonic PTS' message is the cause of all the problems. What is this 'non-strictly-monotonic PTS'? This is the source:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!CMSampleBufferDataIsReady(sampleBuffer))
    {
        NSLog(@"sample buffer is not ready. Skipping sample");
        return;
    }

    if ([isRecordingNow isEqualToString:@"YES"])
    {
        lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        if (videoWriter.status != AVAssetWriterStatusWriting)
        {
            [videoWriter startWriting];
            [videoWriter startSessionAtSourceTime:lastSampleTime];
        }

        if (captureOutput == videooutput)
        {
            [self newVideoSample:sampleBuffer];

            CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);

            // access the data
            int width = CVPixelBufferGetWidth(pixelBuffer);
            int height = CVPixelBufferGetHeight(pixelBuffer);
            unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

            AVFrame *pFrame;
            pFrame = avcodec_alloc_frame();
            pFrame->quality = 0;
            NSLog(@"pFrame = avcodec_alloc_frame(); ");

            // int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
            // int bytesSize = height * bytesPerRow;
            // unsigned char *pixel = (unsigned char *)malloc(bytesSize);
            // unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
            // memcpy(pixel, rowBase, bytesSize);

            int avpicture_fillNum = avpicture_fill((AVPicture *)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height); // PIX_FMT_RGB32 // PIX_FMT_RGB8
            // NSLog(@"rawPixelBase = %i , rawPixelBase -s = %s", rawPixelBase, rawPixelBase);
            NSLog(@"avpicture_fill = %i", avpicture_fillNum);
            // NSLog(@"width = %i, height = %i", width, height);

            // Do something with the raw pixels here
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

            // avcodec_init();
            // avdevice_register_all();
            av_register_all();

            AVCodec *codec;
            AVCodecContext *c = NULL;
            int out_size, size, outbuf_size;
            // FILE *f;
            uint8_t *outbuf;

            printf("Video encoding\n");

            /* find the mpeg video encoder */
            codec = avcodec_find_encoder(CODEC_ID_H264); // avcodec_find_encoder_by_name("libx264");
            NSLog(@"codec = %i", codec);
            if (!codec)
            {
                fprintf(stderr, "codec not found\n");
                exit(1);
            }

            c = avcodec_alloc_context();

            /* put sample parameters */
            c->bit_rate = 400000;
            c->bit_rate_tolerance = 10;
            c->me_method = 2;
            /* resolution must be a multiple of two */
            c->width = 352;  // width;
            c->height = 288; // height;
            /* frames per second */
            c->time_base = (AVRational){1, 25};
            c->gop_size = 10; /* emit one intra frame every ten frames */
            // c->max_b_frames = 1;
            c->pix_fmt = PIX_FMT_YUV420P;
            c->me_range = 16;
            c->max_qdiff = 4;
            c->qmin = 10;
            c->qmax = 51;
            c->qcompress = 0.6f;

            /* open it */
            if (avcodec_open(c, codec) < 0)
            {
                fprintf(stderr, "could not open codec\n");
                exit(1);
            }

            /* alloc image and output buffer */
            outbuf_size = 100000;
            outbuf = malloc(outbuf_size);
            size = c->width * c->height;

            AVFrame *outpic = avcodec_alloc_frame();
            int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

            // create buffer for the output image
            uint8_t *outbuffer = (uint8_t *)av_malloc(nbytes);

            fflush(stdout);

            // int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
            // uint8_t *buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
            //
            // // UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"10%d", i]];
            // CGImageRef newCgImage = [self imageFromSampleBuffer:sampleBuffer]; // [image CGImage];
            //
            // CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
            // CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
            // buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);
            //
            // avpicture_fill((AVPicture *)pFrame, buffer, PIX_FMT_RGB8, c->width, c->height);

            avpicture_fill((AVPicture *)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

            struct SwsContext *fooContext = sws_getContext(c->width, c->height,
                                                           PIX_FMT_RGB8,
                                                           c->width, c->height,
                                                           PIX_FMT_YUV420P,
                                                           SWS_FAST_BILINEAR, NULL, NULL, NULL);

            // perform the conversion
            sws_scale(fooContext, pFrame->data, pFrame->linesize, 0, c->height, outpic->data, outpic->linesize);
            // Here is where I try to convert to YUV

            /* encode the image */
            out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
            printf("encoding frame (size=%5d)\n", out_size);
            printf("encoding frame %s\n", outbuf);

            // fwrite(outbuf, 1, out_size, f);
            // free(buffer);
            // buffer = NULL;

            /* add sequence end code to have a real mpeg file */
            // outbuf[0] = 0x00;
            // outbuf[1] = 0x00;
            // outbuf[2] = 0x01;
            // outbuf[3] = 0xb7;
            // fwrite(outbuf, 1, 4, f);
            // fclose(f);

            free(outbuf);
            avcodec_close(c);
            av_free(c);
            av_free(pFrame);
            printf("\n");
        }
    }
}
-
FFMPEG: Videos converted from FLV to MP4 do not play on an iPod but work on an iPhone
3 June 2012, by Shakti Singh
I used the command below to convert videos from FLV and M4V to MP4.
ffmpeg -y -i video_1336406262.flv -vcodec libx264 -vpre slow -vpre ipod640 -b 250k -bt 50k -acodec libfaac -ac 2 -ar 48000 -ab 64k -s 480x320 video_1336406262.mp4
The videos converted from M4V to MP4 play very well on both the iPhone and the iPod, but the videos converted from FLV to MP4 do not work on the iPod, though they do on the iPhone.
In the video area of the HTML5 page, the iPod does not even show the play symbol.
Could someone help here?
I am using the same command to convert from both FLV and M4V to MP4.
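One way to narrow this down, assuming ffprobe ships with the FFmpeg build in use: run it on both outputs and compare the reported H.264 profile and level, since older iPods only decode Baseline Profile at fairly low levels, whereas newer iPhone models accept higher profiles:
ffprobe video_1336406262.mp4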
Thanks