
Media (1)
-
The Slip - Artworks
26 September 2011
Updated: September 2011
Language: English
Type: Text
Other articles (54)
-
Taking part in its translation
10 April 2011. You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
This is done through the SPIP translation interface, where all of MediaSPIP's language modules are available. You simply need to sign up to the translators' mailing list to request more information.
At present MediaSPIP is only available in French and (...) -
The accepted formats
28 January 2010. The following commands provide information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
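As a quick illustration (an editorial addition, not from the original article), the codec list can be piped through grep to check for one particular codec, assuming a Unix shell:
ffmpeg -codecs | grep -i h264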
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
As a first step, we (...) -
Adding notes and captions to images
7 February 2011. To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, editing and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (3916)
-
MPEG-TS Segments HTTP Live Streaming
5 June 2013, by user1069624
I'm trying to interleave MPEG-TS segments but failing. One set of segments was captured with the laptop's built-in camera, then encoded using FFmpeg with the following command:
ffmpeg -er 4 -y -f video4linux2 -s 640x480 -r 30 -i %s -isync -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 640x480 -vcodec libx264 -fflags +genpts -b 386k -coder 0 -me_range 16 -keyint_min 25 -i_qfactor 0.71 -bt 386k -maxrate 386k -bufsize 386k -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -aspect 640:480
And the other is an AVI file that was encoded using the following command:
ffmpeg -er 4 -y -f avi -s 640x480 -r 30 -i ./DSCF2021.AVI -vbsf dump_extra -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 640x480 -vcodec libx264 -fflags +genpts -b 386k -coder 0 -me_range 16 -keyint_min 25 -i_qfactor 0.71 -bt 386k -maxrate 386k -bufsize 386k -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -aspect 640:480
The output is then split into TS segments using an open-source segmenter.
If both sets come from the same source (both from the camera), everything works fine. In this case, however, the second set of segments freezes: time passes, but the picture does not move.
So I think it's an encoding problem. My question is: how should I change the ffmpeg command to make this work? By "interleave" I mean having a playlist with the first set of segments and another playlist with the other set, and having the client request one and then the other (HTTP Live Streaming).
The ffprobe output of one of the first set of segments:
Input #0, mpegts, from 'live1.ts':
Duration: 00:00:09.76, start: 1.400000, bitrate: 281 kb/s
Program 1 Service01
Metadata:
name : Service01
provider_name : FFmpeg
Stream #0.0[0x100]: Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 29.92 fps, 29.92 tbr, 90k tbn, 59.83 tbc
Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 111 kb/s
The ffprobe output of one of the second set of segments:
Input #0, mpegts, from 'ad1.ts':
Duration: 00:00:09.64, start: 1.400000, bitrate: 578 kb/s
Program 1 Service01
Metadata:
name : Service01
provider_name : FFmpeg
Stream #0.0[0x100]: Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 22 kb/s
Thank you,
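One observation worth adding (an editorial aside, not part of the original post): the two ffprobe outputs show different frame rates (29.92 fps vs 25 fps) and bitrates, and the HLS specification defines the EXT-X-DISCONTINUITY tag precisely for splicing segments whose encoding parameters change. A minimal media-playlist sketch using the segment names and durations from the ffprobe outputs above, with live2.ts as a hypothetical second camera segment:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:9.76,
live1.ts
#EXTINF:9.76,
live2.ts
#EXT-X-DISCONTINUITY
#EXTINF:9.64,
ad1.ts
#EXT-X-ENDLIST
Without the discontinuity tag, most players assume continuous timestamps and unchanged codec parameters across segments, which is consistent with the freeze described above.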
-
Why doesn't this FFmpeg code create a video from a series of images?
20 October 2011, by user551117
I have successfully compiled the FFmpeg library for use in an iOS application. I would like to use it to encode a video from a series of images, but I can't seem to make it work.
The following is the code that I am using to encode this video:
AVCodec *codec;
AVCodecContext *c= NULL;
int i, out_size, size, outbuf_size;
FILE *f;
AVFrame *picture;
uint8_t *outbuf;
printf("Video encoding\n");
/// find the mpeg video encoder
codec=avcodec_find_encoder(CODEC_ID_MPEG4);
//codec = avcodec_find_encoder(CODEC_ID_MPEG4);
if (!codec) {
fprintf(stderr, "codec not found\n");
exit(1);
}
c= avcodec_alloc_context();
picture= avcodec_alloc_frame();
// put sample parameters
c->bit_rate = 400000;
/// resolution must be a multiple of two
c->width = 320;
c->height = 480;
//frames per second
c->time_base= (AVRational){1,25};
c->gop_size = 10; /// emit one intra frame every ten frames
c->max_b_frames=1;
c->pix_fmt = PIX_FMT_YUV420P;
//open it
if (avcodec_open(c, codec) < 0) {
fprintf(stderr, "could not open codec\n");
exit(1);
}
f = fopen([[NSTemporaryDirectory() stringByAppendingPathComponent:filename] UTF8String], "w");
if (!f) {
fprintf(stderr, "could not open %s\n",[filename UTF8String]);
exit(1);
}
// alloc image and output buffer
outbuf_size = 100000;
outbuf = malloc(outbuf_size);
size = c->width * c->height;
#pragma mark -
AVFrame* outpic = avcodec_alloc_frame();
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
//create buffer for the output image
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);
#pragma mark -
for(i=1;i<48;i++) {
fflush(stdout);
int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"%d.png", i]];
CGImageRef newCgImage = [image CGImage];
CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);
avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGB24, c->width, c->height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);
struct SwsContext* fooContext = sws_getContext(c->width, c->height,
PIX_FMT_RGB24,
c->width, c->height,
PIX_FMT_YUV420P,
SWS_FAST_BILINEAR, NULL, NULL, NULL);
//perform the conversion
sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);
// Here is where I try to convert to YUV
// encode the image
out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
printf("encoding frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, out_size, f);
free(buffer);
buffer = NULL;
}
// get the delayed frames
for(; out_size; i++) {
fflush(stdout);
out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
printf("write frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, outbuf_size, f);
}
// add sequence end code to have a real mpeg file
outbuf[0] = 0x00;
outbuf[1] = 0x00;
outbuf[2] = 0x01;
outbuf[3] = 0xb7;
fwrite(outbuf, 1, 4, f);
fclose(f);
free(outbuf);
avcodec_close(c);
av_free(c);
av_free(picture);
printf("\n");What could be wrong with this code ?
-
c++, FFMPEG, H264, creating zero-delay stream
5 February 2015, by Mat
I'm trying to encode video (using the h264 codec at the moment, but other codecs would be fine too if better suited to my needs) such that the data needed for decoding is available directly after a frame (including the first frame) has been encoded (so I want only I- and P-frames, no B-frames).
How do I need to set up the AVCodecContext to get such a stream? So far, all my experimenting with the values has still resulted in avcodec_encode_video() returning 0 on the first frame.
// edit: this is currently my setup code for the AVCodecContext:
static AVStream* add_video_stream(AVFormatContext *oc, enum CodecID codec_id, int w, int h, int fps)
{
AVCodecContext *c;
AVStream *st;
AVCodec *codec;
/* find the video encoder */
codec = avcodec_find_encoder(codec_id);
if (!codec) {
fprintf(stderr, "codec not found\n");
exit(1);
}
st = avformat_new_stream(oc, codec);
if (!st) {
fprintf(stderr, "Could not alloc stream\n");
exit(1);
}
c = st->codec;
/* Put sample parameters. */
c->bit_rate = 400000;
/* Resolution must be a multiple of two. */
c->width = w;
c->height = h;
/* timebase: This is the fundamental unit of time (in seconds) in terms
* of which frame timestamps are represented. For fixed-fps content,
* timebase should be 1/framerate and timestamp increments should be
* identical to 1. */
c->time_base.den = fps;
c->time_base.num = 1;
c->gop_size = 12; /* emit one intra frame every twelve frames at most */
c->codec = codec;
c->codec_type = AVMEDIA_TYPE_VIDEO;
c->coder_type = FF_CODER_TYPE_VLC;
c->me_method = 7; //motion estimation algorithm
c->me_subpel_quality = 4;
c->delay = 0;
c->max_b_frames = 0;
c->thread_count = 1; // more than one threads seem to increase delay
c->refs = 3;
c->pix_fmt = PIX_FMT_YUV420P;
/* Some formats want stream headers to be separate. */
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
c->flags |= CODEC_FLAG_GLOBAL_HEADER;
return st;
}
But with this, avcodec_encode_video() buffers 13 frames before returning any bytes (after that, it returns bytes on every frame). If I set gop_size to 0, avcodec_encode_video() returns bytes only after the second frame has been passed to it. I need zero delay, though.
This person apparently succeeded (even with a larger GOP): http://mailman.videolan.org/pipermail/x264-devel/2009-May/005880.html but I don't see what he is doing differently.
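An editorial aside, not part of the original question: with libx264 the startup delay usually comes from the encoder's internal lookahead, and x264 ships a "zerolatency" tune that disables lookahead, B-frames and sliced threading in one go. A minimal sketch of how the setup above might be extended, passing the private options through the codec context before the encoder is opened (this assumes an FFmpeg build of that era with libx264 enabled):
#include <libavutil/opt.h>

/* c is the AVCodecContext from add_video_stream() above.
 * Ask x264 for zero-latency operation so a packet comes out
 * for the very first frame fed to avcodec_encode_video(). */
av_opt_set(c->priv_data, "preset", "ultrafast", 0);
av_opt_set(c->priv_data, "tune", "zerolatency", 0);
c->max_b_frames = 0;  /* no B-frames, hence no reordering delay */
c->rc_lookahead = 0;  /* no rate-control lookahead buffering */

With these settings the gop_size can stay at 12: GOP length controls keyframe frequency, not encoder latency.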