
Other articles (45)
-
Diogene: creating custom masks for content editing forms
26 October 2010. Diogene is one of the SPIP plugins enabled by default (as an extension) when MediaSPIP is initialized.
What this plugin is for
Creating form masks
The Diogène plugin lets you create sector-specific form masks for the three SPIP objects: articles; sections (rubriques); sites
For a given sector it can thus define one form mask per object, adding or removing fields to make the form (...)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.
-
Contribute to translation
13 April 2011. You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into other languages, which lets it spread to new linguistic communities.
To do this, we use SPIP's translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
On other sites (5543)
-
YouTube's HD Video Streaming Server Technology?
30 September 2013, by bgentry. Lately I've been researching different methods for streaming MP4s to the browser. Flash Media Server is an obvious choice here (using CloudFront), and most solutions I've seen use the RTMP protocol.
However, I spent some time on YouTube with Firebug and Chrome debugger figuring out how their streaming worked and I discovered some interesting differences between some of their videos and quality rates.
My two sample videos are A and B. A is available up to 480p and B is available up to 1080p. For both videos, all rates up to 480p are served in an FLV container with H.264 video and AAC audio, over HTTP. What's interesting here is that if you have not yet downloaded (cached) the entire video, and you try to skip forward to an uncached part of the video, a new request will be made with a 'begin' parameter equal to the target offset in milliseconds. Example from Video A at 480p:
http://v11.lscache8.c.youtube.com/videoplayback?ip=0.0.0.0&sparams=id%2Cexpire%2Cip%2Cipbits%2Citag%2Calgorithm%2Cburst%2Cfactor%2Coc%3AU0dWTldQVF9FSkNNNl9PSlhJ&fexp=904806%2C902906%2C903711&algorithm=throttle-factor&itag=35&ipbits=0&burst=40&sver=3&expire=1279756800&key=yt1&signature=D2D704D63C242CF187CAA5B5D5BAFB8DFACAC5FF.39180C01559C976717B651A7EB1D0C6249231EB7&factor=1.25&id=8568eb3135971f6f&begin=111863
Response Headers:
Cache-Control:public,max-age=23472
Connection:close
Content-Length:14320637
Content-Type:video/x-flv
Date:Wed, 21 Jul 2010 17:23:48 GMT
Expires:Wed, 21 Jul 2010 23:55:00 GMT
Last-Modified:Wed, 19 May 2010 12:31:41 GMT
Server:gvs 1.0
X-Content-Type-Options:nosniff
The file returned by this URL is a fully valid FLV containing only the portion of the video after the requested offset.
I did the same kind of test on the higher resolution versions of Video B. At 720p and 1080p, YouTube will return a video in an MP4 container, also with H.264 video and AAC audio. What's impressive to me is that their server takes the same type of offset for an MP4 video (via the 'begin' parameter) and returns a valid, streamable MP4 (moov atom at the front of the file with correct offsets) that also only includes the requested portion of the video.
So, how does YouTube do this? How do they generate the FLV or MP4 container on the fly with the correct headers and only the desired segment of the requested video? I know this can be accomplished using FFMPEG to seek to the desired start point and the qt-faststart script to reposition the moov atom to the front of the stream, but it seems like this would be too slow to handle on-demand for millions of YouTube viewers.
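For reference, here is a rough sketch of the "seek, remux, move the moov atom" approach mentioned above, done with the libavformat API rather than the ffmpeg CLI plus qt-faststart. It is only an illustration of the idea: the file names and the millisecond offset are assumptions, and it uses a current libavformat API rather than the 2010-era one.

/* Sketch: copy an MP4 from a given offset into a new file whose moov
 * atom sits at the front. Illustrative only; not YouTube's method. */
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int remux_from_offset(const char *in, const char *out, int64_t offset_ms)
{
    AVFormatContext *ic = NULL, *oc = NULL;
    AVDictionary *opts = NULL;
    AVPacket *pkt = av_packet_alloc();
    int ret;

    if ((ret = avformat_open_input(&ic, in, NULL, NULL)) < 0)
        goto end;
    if ((ret = avformat_find_stream_info(ic, NULL)) < 0)
        goto end;

    avformat_alloc_output_context2(&oc, NULL, NULL, out);
    if (!oc) { ret = AVERROR(ENOMEM); goto end; }
    for (unsigned i = 0; i < ic->nb_streams; i++) {
        AVStream *ost = avformat_new_stream(oc, NULL);
        avcodec_parameters_copy(ost->codecpar, ic->streams[i]->codecpar);
        ost->codecpar->codec_tag = 0;   /* let the muxer pick the tag */
    }
    if ((ret = avio_open(&oc->pb, out, AVIO_FLAG_WRITE)) < 0)
        goto end;

    /* "+faststart" asks the mp4 muxer to move the moov atom to the front. */
    av_dict_set(&opts, "movflags", "+faststart", 0);
    if ((ret = avformat_write_header(oc, &opts)) < 0)
        goto end;

    /* Jump to the requested offset (milliseconds -> AV_TIME_BASE units). */
    av_seek_frame(ic, -1, offset_ms * (AV_TIME_BASE / 1000), AVSEEK_FLAG_BACKWARD);

    while (av_read_frame(ic, pkt) >= 0) {
        av_packet_rescale_ts(pkt, ic->streams[pkt->stream_index]->time_base,
                             oc->streams[pkt->stream_index]->time_base);
        av_interleaved_write_frame(oc, pkt);
        av_packet_unref(pkt);
    }
    ret = av_write_trailer(oc);

end:
    av_packet_free(&pkt);
    av_dict_free(&opts);
    if (oc && oc->pb) avio_closep(&oc->pb);
    avformat_free_context(oc);
    avformat_close_input(&ic);
    return ret;
}

Whether YouTube actually does anything like this per request, or relies on pre-built indexes, is exactly what the question is asking; this is just the brute-force version of the idea.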
Ideas?
Thanks in advance!
Footnote: I am not allowed to include more than 1 link at this point, so here is Video A's URL: http://www.youtube.com/watch?v=hWjrMTWXH28 "Video available up to 480p"
-
How to encode using FFmpeg on Android (using H263)
3 July 2012, by Kenny910. I am trying to follow the sample encoding code in the ffmpeg documentation, and I successfully built an application that encodes and generates an mp4 file, but I face the following problems:
1) I am using H263 for encoding, but I can only set the width and height of the AVCodecContext to 176x144; for other sizes (like 720x480 or 640x480) it fails (see the sketch after this list).
2) I can't play the output mp4 file with the default Android player. Doesn't it support H263 mp4 files? (P.S. I can play it with other players; see also the note after the code.)
3) Is there any sample code for re-encoding an existing video into a new one (that is, decoding the video and encoding it back with different quality settings; I would also like to modify the frame content)?
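Regarding point 1, the following is a guess at the cause rather than anything shown by the code: baseline H.263 only defines five fixed picture sizes, so 176x144 (QCIF) is accepted while 640x480 and 720x480 are not; the usual way to get other even dimensions is the H.263+ encoder (CODEC_ID_H263P). The helper below is hypothetical and only encodes that size table.

/* Baseline H.263 only defines these picture sizes; anything else needs
 * H.263+ (CODEC_ID_H263P in this FFmpeg generation). */
#include <stdbool.h>
#include <stddef.h>

static bool h263_size_supported(int width, int height)
{
    static const int sizes[][2] = {
        {  128,   96 },  /* SQCIF */
        {  176,  144 },  /* QCIF  */
        {  352,  288 },  /* CIF   */
        {  704,  576 },  /* 4CIF  */
        { 1408, 1152 },  /* 16CIF */
    };
    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
        if (sizes[i][0] == width && sizes[i][1] == height)
            return true;
    return false;
}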
Here is my code, thanks!
JNIEXPORT jint JNICALL Java_com_ffmpeg_encoder_FFEncoder_nativeEncoder(JNIEnv* env, jobject thiz, jstring filename){
LOGI("nativeEncoder()");
avcodec_register_all();
avcodec_init();
av_register_all();
AVCodec *codec;
AVCodecContext *codecCtx;
int i;
int out_size;
int size;
int x;
int y;
int output_buffer_size;
FILE *file;
AVFrame *picture;
uint8_t *output_buffer;
uint8_t *picture_buffer;
/* Manual Variables */
int l;
int fps = 30;
int videoLength = 5;
/* find the H263 video encoder */
codec = avcodec_find_encoder(CODEC_ID_H263);
if (!codec) {
LOGI("avcodec_find_encoder() run fail.");
}
codecCtx = avcodec_alloc_context();
picture = avcodec_alloc_frame();
/* put sample parameters */
codecCtx->bit_rate = 400000;
/* resolution must be a multiple of two */
codecCtx->width = 176;
codecCtx->height = 144;
/* frames per second */
codecCtx->time_base = (AVRational){1,fps};
codecCtx->pix_fmt = PIX_FMT_YUV420P;
codecCtx->codec_id = CODEC_ID_H263;
codecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
/* open it */
if (avcodec_open(codecCtx, codec) < 0) {
LOGI("avcodec_open() run fail.");
}
const char* mfileName = (*env)->GetStringUTFChars(env, filename, 0);
file = fopen(mfileName, "wb");
if (!file) {
LOGI("fopen() run fail.");
}
(*env)->ReleaseStringUTFChars(env, filename, mfileName);
/* alloc image and output buffer */
output_buffer_size = 100000;
output_buffer = malloc(output_buffer_size);
size = codecCtx->width * codecCtx->height;
picture_buffer = malloc((size * 3) / 2); /* size for YUV 420 */
picture->data[0] = picture_buffer;
picture->data[1] = picture->data[0] + size;
picture->data[2] = picture->data[1] + size / 4;
picture->linesize[0] = codecCtx->width;
picture->linesize[1] = codecCtx->width / 2;
picture->linesize[2] = codecCtx->width / 2;
for(l = 0; l < videoLength; l++) { //encode 1 second of video
for(i = 0; i < fps; i++) { //prepare a dummy image YCbCr
//Y
for(y = 0; y < codecCtx->height; y++) {
for(x = 0; x < codecCtx->width; x++) {
picture->data[0][y * picture->linesize[0] + x] = x + y + i * 3;
}
}
//Cb and Cr
for(y = 0; y < codecCtx->height / 2; y++) {
for(x = 0; x < codecCtx->width / 2; x++) {
picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
}
}
//encode the image
out_size = avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, picture);
fwrite(output_buffer, 1, out_size, file);
}
//get the delayed frames
for(; out_size; i++) {
out_size = avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, NULL);
fwrite(output_buffer, 1, out_size, file);
}
}
//add sequence end code to have a real mpeg file
output_buffer[0] = 0x00;
output_buffer[1] = 0x00;
output_buffer[2] = 0x01;
output_buffer[3] = 0xb7;
fwrite(output_buffer, 1, 4, file);
fclose(file);
free(picture_buffer);
free(output_buffer);
avcodec_close(codecCtx);
av_free(codecCtx);
av_free(picture);
LOGI("finish");
return 0;
}
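Regarding point 2, a likely culprit (again an assumption, not something the code proves): the loop above fwrite()s the raw H.263 bitstream into a file that merely has an .mp4 extension, so there is no real container for the stock Android player to parse. A minimal sketch of muxing the encoded packets into a 3GP container with libavformat follows; it uses a newer libavformat API than the 2012-era calls in the question, and the helper names and output path are illustrative.

/* Sketch: wrap encoded H.263 packets in a 3GP container via libavformat
 * instead of fwrite()'ing the raw bitstream. Helper names, the .3gp path
 * and the newer API are assumptions, not part of the original code. */
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/mathematics.h>

static AVFormatContext *muxer_open(const char *path, int w, int h, int fps)
{
    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, NULL, path); /* e.g. "out.3gp" */

    AVStream *st = avformat_new_stream(oc, NULL);
    st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codecpar->codec_id   = AV_CODEC_ID_H263;
    st->codecpar->width      = w;
    st->codecpar->height     = h;
    st->time_base            = (AVRational){1, fps};

    avio_open(&oc->pb, path, AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);
    return oc;
}

/* Call once per frame instead of fwrite(output_buffer, 1, out_size, file).
 * (Keyframe flags omitted for brevity.) */
static void muxer_write(AVFormatContext *oc, uint8_t *data, int size,
                        int64_t frame_index, AVRational enc_time_base)
{
    AVPacket *pkt = av_packet_alloc();
    pkt->stream_index = 0;
    pkt->data = data;
    pkt->size = size;
    pkt->pts  = pkt->dts = av_rescale_q(frame_index, enc_time_base,
                                        oc->streams[0]->time_base);
    av_interleaved_write_frame(oc, pkt);
    av_packet_free(&pkt);
}

static void muxer_close(AVFormatContext *oc)
{
    av_write_trailer(oc);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
}

If the container is the problem, this would fix playback in the default player; that is a guess, but it at least rules the container in or out.
-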
iPhone camera video capture using AVCaptureSession and ffmpeg: converting CMSampleBufferRef to h.264 format is the issue. Please advise
4 January 2012, by isaiah. My goal is h.264/AAC, mpeg2-ts streaming to a server from an iPhone device.
Currently my source, FFmpeg + libx264, compiles successfully. I know about the GNU license. I want a demo program.
I want to know the following:
1. Is the CMSampleBufferRef to AVPicture data conversion succeeding?
avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);
pFrame's linesize and data are not null, but pts is -9233123123. The same goes for outpic.
Because of this I have to guess at the 'non-strictly-monotonic PTS' message.
2. This log repeats:
encoding frame (size= 0)
encoding frame = "". 'avcodec_encode_video' returning 0 means success, but it is always 0. I don't know what to do...
2011-06-01 15:15:14.199 AVCam[1993:7303] pFrame = avcodec_alloc_frame();
2011-06-01 15:15:14.207 AVCam[1993:7303] avpicture_fill = 1228800
Video encoding
2011-06-01 15:15:14.215 AVCam[1993:7303] codec = 5841844
[libx264 @ 0x1441e00] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x1441e00] profile Constrained Baseline, level 2.0
[libx264 @ 0x1441e00] non-strictly-monotonic PTS
encoding frame (size= 0)
encoding frame
[libx264 @ 0x1441e00] final ratefactor: 26.74
3. I have to guess the 'non-strictly-monotonic PTS' message is the cause of all problems.
What is this 'non-strictly-monotonic PTS'? This is the source (a guess about the PTS issue follows it):
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
if( !CMSampleBufferDataIsReady(sampleBuffer) )
{
NSLog( @"sample buffer is not ready. Skipping sample" );
return;
}
if( [isRecordingNow isEqualToString:@"YES"] )
{
lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if( videoWriter.status != AVAssetWriterStatusWriting )
{
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:lastSampleTime];
}
if( captureOutput == videooutput )
{
[self newVideoSample:sampleBuffer];
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// access the data
int width = CVPixelBufferGetWidth(pixelBuffer);
int height = CVPixelBufferGetHeight(pixelBuffer);
unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
AVFrame *pFrame;
pFrame = avcodec_alloc_frame();
pFrame->quality = 0;
NSLog(@"pFrame = avcodec_alloc_frame(); ");
// int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
// int bytesSize = height * bytesPerRow ;
// unsigned char *pixel = (unsigned char*)malloc(bytesSize);
// unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
// memcpy (pixel, rowBase, bytesSize);
int avpicture_fillNum = avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);//PIX_FMT_RGB32//PIX_FMT_RGB8
//NSLog(@"rawPixelBase = %i , rawPixelBase -s = %s",rawPixelBase, rawPixelBase);
NSLog(@"avpicture_fill = %i",avpicture_fillNum);
//NSLog(@"width = %i,height = %i",width, height);
// Do something with the raw pixels here
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
//avcodec_init();
//avdevice_register_all();
av_register_all();
AVCodec *codec;
AVCodecContext *c= NULL;
int out_size, size, outbuf_size;
//FILE *f;
uint8_t *outbuf;
printf("Video encoding\n");
/* find the mpeg video encoder */
codec =avcodec_find_encoder(CODEC_ID_H264);//avcodec_find_encoder_by_name("libx264"); //avcodec_find_encoder(CODEC_ID_H264);//CODEC_ID_H264);
NSLog(@"codec = %i",codec);
if (!codec) {
fprintf(stderr, "codec not found\n");
exit(1);
}
c= avcodec_alloc_context();
/* put sample parameters */
c->bit_rate = 400000;
c->bit_rate_tolerance = 10;
c->me_method = 2;
/* resolution must be a multiple of two */
c->width = 352;//width;//352;
c->height = 288;//height;//288;
/* frames per second */
c->time_base= (AVRational){1,25};
c->gop_size = 10;//25; /* emit one intra frame every ten frames */
//c->max_b_frames=1;
c->pix_fmt = PIX_FMT_YUV420P;
c ->me_range = 16;
c ->max_qdiff = 4;
c ->qmin = 10;
c ->qmax = 51;
c ->qcompress = 0.6f;
/* open it */
if (avcodec_open(c, codec) < 0) {
fprintf(stderr, "could not open codec\n");
exit(1);
}
/* alloc image and output buffer */
outbuf_size = 100000;
outbuf = malloc(outbuf_size);
size = c->width * c->height;
AVFrame* outpic = avcodec_alloc_frame();
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
//create buffer for the output image
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);
#pragma mark -
fflush(stdout);
// int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
// uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
//
// //UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"10%d", i]];
// CGImageRef newCgImage = [self imageFromSampleBuffer:sampleBuffer];//[image CGImage];
//
// CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
// CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
// buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);
//
// avpicture_fill((AVPicture*)pFrame, buffer, PIX_FMT_RGB8, c->width, c->height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);
struct SwsContext* fooContext = sws_getContext(c->width, c->height,
PIX_FMT_RGB8,
c->width, c->height,
PIX_FMT_YUV420P,
SWS_FAST_BILINEAR, NULL, NULL, NULL);
//perform the conversion
sws_scale(fooContext, pFrame->data, pFrame->linesize, 0, c->height, outpic->data, outpic->linesize);
// Here is where I try to convert to YUV
/* encode the image */
out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
printf("encoding frame (size=%5d)\n", out_size);
printf("encoding frame %s\n", outbuf);
//fwrite(outbuf, 1, out_size, f);
// free(buffer);
// buffer = NULL;
/* add sequence end code to have a real mpeg file */
// outbuf[0] = 0x00;
// outbuf[1] = 0x00;
// outbuf[2] = 0x01;
// outbuf[3] = 0xb7;
//fwrite(outbuf, 1, 4, f);
//fclose(f);
free(outbuf);
avcodec_close(c);
av_free(c);
av_free(pFrame);
printf("\n");
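As a guess at the 'non-strictly-monotonic PTS' warning: the AVFrame handed to avcodec_encode_video above never gets a pts set, so libx264 sees the same (unset) timestamp on every frame. Below is a minimal sketch of the kind of timestamping that avoids the warning, using the same old-style API as the code above; the wrapper name and the static counter are illustrative, not part of the original code.

/* Sketch: give every frame a strictly increasing pts before encoding.
 * With c->time_base = 1/25 as configured above, counting frames is enough. */
#include <libavcodec/avcodec.h>

static int encode_one_frame(AVCodecContext *c, AVFrame *outpic,
                            uint8_t *outbuf, int outbuf_size)
{
    static int64_t frame_count = 0;

    outpic->pts = frame_count++;   /* strictly increasing, in time_base units */

    /* Same old-style call as in the question; returns the encoded size,
     * which stays 0 until the encoder's internal delay is filled. */
    return avcodec_encode_video(c, outbuf, outbuf_size, outpic);
}

That would also fit the repeated 'encoding frame (size= 0)' lines: with default settings libx264 has several frames of lookahead delay before it emits anything, and the delayed packets only come out when the encoder is flushed with a NULL frame at the end.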