
Advanced search
Media (3)
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (23)
-
Customizing categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as a rubrique (section).
For a category-type document, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a media-type document, the fields not displayed by default are: Short description
It is also in this configuration section that you can specify the (...) -
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (5600)
-
Libav/ffmpeg api duplicates first frame of mp4 video and makes it second frame
22 January 2020, by Optic_Ray
I am making an MP4 video file from a couple of JPG image files of different sizes, present as sample<number>.jpg (sample1.jpg, sample2.jpg, etc.) in a folder. I modified the ffmpeg muxing.c example so that it creates an MP4 file from this set of JPG images (used as frames), and also modified it to create only a video stream (no audio stream). It is able to create an MP4 file with n video frames from n different JPG images, but the file it outputs for n=2 is not as expected. I have tried n=1 to n=7 so far. When n=2, the MP4 file is created, but I can only see the first JPG image as a frame in the entire video. It appears as if the first frame (the first JPG image) is duplicated and used as the second frame too, as I simply cannot find the second JPG image anywhere when I play the video. When I make a video with 2 frames from 2 different JPG files, I want the first frame to be the first JPG image and the second frame to be the second JPG image. How do I achieve that?
ffmpeg configuration:
ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
Hyper fast Audio and Video encoder

The number of frames in the video can be set via

#define STREAM_FRAME_RATE n // n = number of frames

Following is the entire code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libavutil/avassert.h>
#include <libavutil/channel_layout.h>
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>
#include <libavutil/imgutils.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#define STREAM_DURATION 1
#define STREAM_FRAME_RATE 2 /* 2 image/s */
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */
#define SCALE_FLAGS SWS_BICUBIC
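/* Editor's note: with STREAM_DURATION 1 (second) and STREAM_FRAME_RATE 2,
 * get_video_frame() below stops once next_pts reaches
 * STREAM_DURATION * STREAM_FRAME_RATE, so the output holds 2 frames;
 * in general the frame count is the product of these two defines. */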
// a wrapper around a single output AVStream
typedef struct OutputStream {
AVStream *st;
AVCodecContext *enc;
/* pts of the next frame that will be generated */
int64_t next_pts;
int samples_count;
AVFrame *frame;
AVFrame *tmp_frame;
float t, tincr, tincr2;
struct SwsContext *sws_ctx;
struct SwrContext *swr_ctx;
} OutputStream;
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
{
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
pkt->stream_index);
}
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
{
/* rescale output packet timestamp values from codec to stream timebase */
av_packet_rescale_ts(pkt, *time_base, st->time_base);
pkt->stream_index = st->index;
/* Write the compressed frame to the media file. */
log_packet(fmt_ctx, pkt);
return av_interleaved_write_frame(fmt_ctx, pkt);
}
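/* Editor's illustration of the rescale above: suppose the codec time_base
 * is 1/STREAM_FRAME_RATE = 1/2 and the muxer picked a (hypothetical)
 * stream time_base of 1/12800; a packet with pts = 1 in codec units
 * (i.e. 0.5 s) is rewritten to pts = 1 * 12800 / 2 = 6400 stream units. */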
/* Add an output stream. */
static void add_stream(OutputStream *ost, AVFormatContext *oc,
AVCodec **codec,
enum AVCodecID codec_id)
{
AVCodecContext *c;
int i;
/* find the encoder */
*codec = avcodec_find_encoder(codec_id);
if (!(*codec)) {
fprintf(stderr, "Could not find encoder for '%s'\n",
avcodec_get_name(codec_id));
exit(1);
}
ost->st = avformat_new_stream(oc, NULL);
if (!ost->st) {
fprintf(stderr, "Could not allocate stream\n");
exit(1);
}
ost->st->id = oc->nb_streams-1;
c = avcodec_alloc_context3(*codec);
if (!c) {
fprintf(stderr, "Could not alloc an encoding context\n");
exit(1);
}
ost->enc = c;
switch ((*codec)->type) {
case AVMEDIA_TYPE_AUDIO:
printf("######################2 audio add stream\n");
c->sample_fmt = (*codec)->sample_fmts ?
(*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
c->bit_rate = 64000;
c->sample_rate = 44100;
if ((*codec)->supported_samplerates) {
c->sample_rate = (*codec)->supported_samplerates[0];
for (i = 0; (*codec)->supported_samplerates[i]; i++) {
if ((*codec)->supported_samplerates[i] == 44100)
c->sample_rate = 44100;
}
}
c->channel_layout = AV_CH_LAYOUT_STEREO;
if ((*codec)->channel_layouts) {
c->channel_layout = (*codec)->channel_layouts[0];
for (i = 0; (*codec)->channel_layouts[i]; i++) {
if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
c->channel_layout = AV_CH_LAYOUT_STEREO;
}
}
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
ost->st->time_base = (AVRational){ 1, c->sample_rate };
break;
case AVMEDIA_TYPE_VIDEO:
printf("###################### video add stream\n");
c->codec_id = codec_id;
c->bit_rate = 400000;
/* Resolution must be a multiple of two. */
c->width = 352;
c->height = 288;
/* timebase: This is the fundamental unit of time (in seconds) in terms
* of which frame timestamps are represented. For fixed-fps content,
* timebase should be 1/framerate and timestamp increments should be
* identical to 1. */
ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
c->time_base = ost->st->time_base;
c->gop_size = 12; /* emit one intra frame every twelve frames at most */
c->pix_fmt = STREAM_PIX_FMT;
if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
/* just for testing, we also add B-frames */
c->max_b_frames = 2;//2
}
if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
/* Needed to avoid using macroblocks in which some coeffs overflow.
* This does not happen with normal video, it just happens here as
* the motion of the chroma plane does not match the luma plane. */
c->mb_decision = 2;//2
}
break;
default:
break;
}
/* Some formats want stream headers to be separate. */
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
/**************************************************************/
/* video output */
static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
{
AVFrame *picture;
int ret;
picture = av_frame_alloc();
if (!picture)
return NULL;
picture->format = pix_fmt;
picture->width = width;
picture->height = height;
/* allocate the buffers for the frame data */
ret = av_frame_get_buffer(picture, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate frame data.\n");
exit(1);
}
return picture;
}
static void open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
{
int ret;
AVCodecContext *c = ost->enc;
AVDictionary *opt = NULL;
av_dict_copy(&opt, opt_arg, 0);
printf("In open video\n");
/* open the codec */
ret = avcodec_open2(c, codec, &opt);
av_dict_free(&opt);
if (ret < 0) {
fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
exit(1);
}
/* allocate and init a re-usable frame */
ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
if (!ost->frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
/* If the output format is not YUV420P, then a temporary YUV420P
* picture is needed too. It is then converted to the required
* output format. */
ost->tmp_frame = NULL;
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
if (!ost->tmp_frame) {
fprintf(stderr, "Could not allocate temporary picture\n");
exit(1);
}
}
/* copy the stream parameters to the muxer */
ret = avcodec_parameters_from_context(ost->st->codecpar, c);
if (ret < 0) {
fprintf(stderr, "Could not copy the stream parameters\n");
exit(1);
}
}
int open_image(const char* imageFileName,int width,int height,AVFrame *pict)
{
AVFormatContext *pFormatContext = avformat_alloc_context();
if (!pFormatContext)
{
printf("ERROR could not allocate memory for Format Context");
return 0;
}
//av_register_all();
if (avformat_open_input(&pFormatContext, imageFileName, NULL, NULL) != 0)
{
printf("ERROR could not open the file");
return 0;
}
printf("format %s, duration %ld us, bit_rate %ld", pFormatContext->iformat->name, pFormatContext->duration, pFormatContext->bit_rate);
if (avformat_find_stream_info(pFormatContext, NULL) < 0)
{
printf("ERROR could not get the stream info");
return 0;
}
AVCodecParameters *pCodecParameters = pFormatContext->streams[0]->codecpar;
printf("%d",pFormatContext->nb_streams);
AVCodec *pCodec = avcodec_find_decoder(AV_CODEC_ID_MJPEG);
if (pCodec==NULL) {
printf("ERROR unsupported codecopenimage!");
printf("---------------------------------------11");
return 0;
}
AVCodecContext *pCodecCtx =avcodec_alloc_context3(pCodec);
if (!pCodecCtx)
{
printf("failed to allocated memory for AVCodecContext");
return 0;
}
pCodecCtx->width = pCodecParameters->width;
pCodecCtx->height = pCodecParameters->height;
pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
if(pCodecCtx->color_range==AVCOL_RANGE_JPEG) printf("\nAVCOL_RANGE_JPEG\n");
if (avcodec_parameters_to_context(pCodecCtx, pCodecParameters) < 0)
{
printf("failed to copy codec params to codec context");
return 0;
}
// Open codec
if(avcodec_open2(pCodecCtx, pCodec,NULL)<0)
{
printf("Could not open codec");
return 0;
}
AVFrame *pFrame = av_frame_alloc();
if (!pFrame)
{
printf("Can't allocate memory for AVFrame\n");
return 0;
}
AVPacket *packet= av_packet_alloc();
while (av_read_frame(pFormatContext, packet) >= 0)
{
if (packet->stream_index != 0) {
/* skip packets from other streams; unref before reading the next one,
 * otherwise this loop would spin forever on the same packet */
av_packet_unref(packet);
continue;
}
int ret = avcodec_send_packet(pCodecCtx, packet);
if (ret < 0)
{
printf("Error while sending a packet to the decoder: %s", av_err2str(ret));
return 0;
}
ret = avcodec_receive_frame(pCodecCtx, pFrame);
if (ret == 0) /* original read `ret = 0 || ...`, an assignment that made the test always true */
{
pFrame->quality = 1;
av_packet_unref(packet);
struct SwsContext *resize;
resize = sws_getContext(pCodecParameters->width, pCodecParameters->height,
AV_PIX_FMT_YUVJ420P,
width, height,
AV_PIX_FMT_YUV420P,
SCALE_FLAGS, NULL, NULL, NULL);
sws_scale(resize,(const uint8_t * const *)pFrame->data, pFrame->linesize,
0, pCodecParameters->height, pict->data, pict->linesize);
sws_freeContext(resize); /* free the scaler; it was leaked in the original */
av_frame_free(&pFrame); /* likewise the decoded frame */
av_packet_free(&packet);
avcodec_free_context(&pCodecCtx);
avformat_close_input(&pFormatContext);
return 1;
}
else {
printf("Error [%d] while decoding frame: %s\n", ret,
strerror(AVERROR(ret)));
return 0;
}
}
avcodec_free_context(&pCodecCtx);
avformat_close_input(&pFormatContext);
return 0;
}
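/* Editor's note: the MJPEG decoder normally outputs AV_PIX_FMT_YUVJ420P
 * (full-range JPEG colorspace), which is why sws_getContext() above uses
 * it as the source format while the encoder expects AV_PIX_FMT_YUV420P. */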
static void fill_yuv_image(AVFrame *pict, int frame_index,
int width, int height)
{
int ret;
/* when we pass a frame to the encoder, it may keep a reference to it
* internally;
* make sure we do not overwrite it here*/
ret = av_frame_make_writable(pict);
if (ret < 0)
exit(1);
char filename[256];
sprintf(filename, "sample%d.jpg", frame_index+1);
if(open_image(filename,width,height,pict)!=1) exit(1);
}
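/* Editor's note: get_video_frame() below passes next_pts as frame_index,
 * so with STREAM_FRAME_RATE 2 this opens sample1.jpg (next_pts 0) and
 * then sample2.jpg (next_pts 1). */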
static AVFrame *get_video_frame(OutputStream *ost)
{
AVCodecContext *c = ost->enc;
/* check if we want to generate more frames */
if (av_compare_ts(ost->next_pts, c->time_base,
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
return NULL;
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
/* as we only generate a YUV420P picture, we must convert it
* to the codec pixel format if needed */
if (!ost->sws_ctx) {
ost->sws_ctx = sws_getContext(c->width, c->height,
AV_PIX_FMT_YUV420P,
c->width, c->height,
c->pix_fmt,
SCALE_FLAGS, NULL, NULL, NULL);
if (!ost->sws_ctx) {
fprintf(stderr,
"Could not initialize the conversion context\n");
exit(1);
}
}
fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);
sws_scale(ost->sws_ctx,
(const uint8_t * const *)ost->tmp_frame->data, ost->tmp_frame->linesize,
0, c->height, ost->frame->data, ost->frame->linesize);
} else {
fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);
}
ost->frame->pts = ost->next_pts++;
return ost->frame;
}
/*
* encode one video frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
{
int ret;
AVCodecContext *c;
AVFrame *frame;
int got_packet = 0;
AVPacket pkt = { 0 };
c = ost->enc;
printf("In write_video_frame\n");
frame = get_video_frame(ost);
av_init_packet(&pkt);
/* encode the image */
ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
if (ret < 0) {
fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
exit(1);
}
if (got_packet) {
ret = write_frame(oc, &c->time_base, ost->st, &pkt);
} else {
ret = 0;
}
if (ret < 0) {
fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
exit(1);
}
return (frame || got_packet) ? 0 : 1;
}
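/* Editor's note: encoders that buffer frames internally (libx264
 * lookahead, or the B-frames enabled above for MPEG-2) may set
 * got_packet = 0 on early calls and emit those packets on later calls or
 * when frame == NULL (the flush path that get_video_frame() triggers
 * here by returning NULL once STREAM_DURATION is reached). */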
static void close_stream(AVFormatContext *oc, OutputStream *ost)
{
avcodec_free_context(&ost->enc);
av_frame_free(&ost->frame);
av_frame_free(&ost->tmp_frame);
sws_freeContext(ost->sws_ctx);
swr_free(&ost->swr_ctx);
}
/**************************************************************/
/* media file output */
int main(int argc, char **argv)
{
OutputStream video_st = { 0 };
const char *filename;
AVOutputFormat *fmt;
AVFormatContext *oc;
AVCodec *video_codec;
int ret;
int have_video = 0;
int encode_video = 0;
AVDictionary *opt = NULL;
int i;
/* Initialize libavcodec, and register all codecs and formats. */
av_register_all();
if (argc < 2) {
printf("usage: %s output_file\n"
"API example program to output a media file with libavformat.\n"
"This program generates a synthetic audio and video stream, encodes and\n"
"muxes them into a file named output_file.\n"
"The output format is automatically guessed according to the file extension.\n"
"Raw images can also be output by using '%%d' in the filename.\n"
"\n", argv[0]);
return 1;
}
filename = argv[1];
for (i = 2; i+1 < argc; i+=2) {
if (!strcmp(argv[i], "-flags") || !strcmp(argv[i], "-fflags"))
av_dict_set(&opt, argv[i]+1, argv[i+1], 0);
}
/* allocate the output media context */
avformat_alloc_output_context2(&oc, NULL, NULL, filename);
if (!oc) {
printf("Could not deduce output format from file extension: using MPEG.\n");
avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
}
if (!oc)
return 1;
fmt = oc->oformat;
/* Add the audio and video streams using the default format codecs
* and initialize the codecs. */
if (fmt->video_codec != AV_CODEC_ID_NONE) {
add_stream(&video_st, oc, &video_codec, fmt->video_codec);
have_video = 1;
encode_video = 1;
}
/* Now that all the parameters are set, we can open the audio and
* video codecs and allocate the necessary encode buffers. */
if (have_video)
open_video(oc, video_codec, &video_st, opt);
av_dump_format(oc, 0, filename, 1);
/* open the output file, if needed */
if (!(fmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Could not open '%s': %s\n", filename,
av_err2str(ret));
return 1;
}
}
/* Write the stream header, if any. */
ret = avformat_write_header(oc, &opt);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file: %s\n",
av_err2str(ret));
return 1;
}
while (encode_video) {
/* select the stream to encode */
printf("######################\n");
if (encode_video &&
(av_compare_ts(video_st.next_pts, video_st.enc->time_base, STREAM_DURATION, (AVRational){ 1, 1 }) <= 0)) {
encode_video = !write_video_frame(oc, &video_st);
}
}
av_write_trailer(oc);
/* Close each codec. */
if (have_video)
close_stream(oc, &video_st);
if (!(fmt->flags & AVFMT_NOFILE))
/* Close the output file. */
avio_closep(&oc->pb);
/* free the stream */
avformat_free_context(oc);
return 0;
}
-
libavcodec avcodec_open2 returns -22
28 July 2012, by buchtak
I am trying to learn how to encode video using the libavcodec library. I use the following initialization:
avcodec_register_all();
// This works fine.
AVCodec *avcodec = avcodec_find_encoder( CODEC_ID_H264 );
AVCodecContext *avctx = avcodec_alloc_context3( avcodec );
avctx->bit_rate = 400000;
avctx->width = 640;
avctx->height = 480;
avctx->time_base.den = 15;
avctx->time_base.num = 1;
avctx->gop_size = 10;
avctx->max_b_frames = 1;
avctx->pix_fmt = PIX_FMT_YUV420P;
av_opt_set( avctx->priv_data, "preset", "slow", 0 );
// ret should be zero, but it's negative
int ret = avcodec_open2( avctx, avcodec, NULL );

However, avcodec_open2(...) always returns a negative value. avcodec_find_encoder(...) works fine and the returned pointer is not NULL. I use Win7 x64 and the 64-bit Zeranoe FFmpeg build from http://ffmpeg.zeranoe.com/builds/ . According to the readme, the FFmpeg version is 2012-06-22 git-c17808c, built with --enable-libx264. I also tried CODEC_ID_MPEG1VIDEO and changed some of the initialization parameters, but no matter what I do, avcodec_open2(...) always returns -22. The decoding/encoding example provided with the Zeranoe build (the same one as http://ffmpeg.org/doxygen/trunk/api-example_8c-source.html) does not work either...
Understanding FFMPEG Video Encoding
20 June 2014, by SetSlapShot
Got this from the encoding example in ffmpeg. I can somewhat follow the author's example for audio encoding, but I find myself befuddled looking at the C code (I commented in block numbers to help me reference what I'm talking about)...
static void video_encode_example(const char *filename)
{
AVCodec *codec;
AVCodecContext *c= NULL;
int i, out_size, size, x, y, outbuf_size;
FILE *f;
AVFrame *picture;
uint8_t *outbuf, *picture_buf; //BLOCK ONE
printf("Video encoding\n");
/* find the mpeg1 video encoder */
codec = avcodec_find_encoder(CODEC_ID_MPEG1VIDEO);
if (!codec) {
fprintf(stderr, "codec not found\n");
exit(1); //BLOCK TWO
}
c= avcodec_alloc_context();
picture= avcodec_alloc_frame();
/* put sample parameters */
c->bit_rate = 400000;
/* resolution must be a multiple of two */
c->width = 352;
c->height = 288;
/* frames per second */
c->time_base= (AVRational){1,25};
c->gop_size = 10; /* emit one intra frame every ten frames */
c->max_b_frames=1;
c->pix_fmt = PIX_FMT_YUV420P; //BLOCK THREE
/* open it */
if (avcodec_open(c, codec) < 0) {
fprintf(stderr, "could not open codec\n");
exit(1);
}
f = fopen(filename, "wb");
if (!f) {
fprintf(stderr, "could not open %s\n", filename);
exit(1);
} //BLOCK FOUR
/* alloc image and output buffer */
outbuf_size = 100000;
outbuf = malloc(outbuf_size);
size = c->width * c->height;
picture_buf = malloc((size * 3) / 2); /* size for YUV 420 */
picture->data[0] = picture_buf;
picture->data[1] = picture->data[0] + size;
picture->data[2] = picture->data[1] + size / 4;
picture->linesize[0] = c->width;
picture->linesize[1] = c->width / 2;
picture->linesize[2] = c->width / 2; //BLOCK FIVE
/* encode 1 second of video */
for(i=0;i<25;i++) {
fflush(stdout);
/* prepare a dummy image */
/* Y */
for(y=0;y<c->height;y++) {
for(x=0;x<c->width;x++) {
picture->data[0][y * picture->linesize[0] + x] = x + y + i * 3;
}
} //BLOCK SIX
/* Cb and Cr */
for(y=0;y<c->height/2;y++) {
for(x=0;x<c->width/2;x++) {
picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
}
} //BLOCK SEVEN
/* encode the image */
out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture);
printf("encoding frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, out_size, f);
} //BLOCK EIGHT
/* get the delayed frames */
for(; out_size; i++) {
fflush(stdout);
out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
printf("write frame %3d (size=%5d)\n", i, out_size);
fwrite(outbuf, 1, out_size, f);
} //BLOCK NINE
/* add sequence end code to have a real mpeg file */
outbuf[0] = 0x00;
outbuf[1] = 0x00;
outbuf[2] = 0x01;
outbuf[3] = 0xb7;
fwrite(outbuf, 1, 4, f);
fclose(f);
free(picture_buf);
free(outbuf);
avcodec_close(c);
av_free(c);
av_free(picture);
} //BLOCK TEN

Here's what I can get from the author's code, block by block...
BLOCK ONE: Initializing variables and pointers. I couldn't find the AVFrame struct in the ffmpeg source code yet, so I don't know what it's referencing.
BLOCK TWO: Looks up a codec; if it is not found, exit.
BLOCK THREE: Sets sample video parameters. The only thing I don't really get is gop size; I read about intra frames and I still don't get what they are.
BLOCK FOUR: Opens the file for writing...
BLOCK FIVE: Here's where they really start losing me. Part of it is probably because I don't know exactly what AVFrame is, but why do they only use 3/2 of the image size? (See the note after this list.)
BLOCK SIX & SEVEN: I don't understand what they are trying to accomplish with this math.
BLOCK EIGHT: It looks like the avcodec function does all the work here; not concerned with that for the time being...
BLOCK NINE: Since it's outside the 25-frame for loop, I assume it gets the leftover frames?
BLOCK TEN: Close, free memory, etc...
I know this is a large block of code to be confused by; any input would be helpful. I got put in over my head at work. Thanks in advance, SO.
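Editor's note on the BLOCK FIVE question (a property of the YUV 4:2:0 layout this code fills, not something from the original post): the luma plane is stored at full resolution, while the Cb and Cr planes are subsampled by two in each direction, so the whole picture takes 1.5 bytes per pixel:

/* YUV 4:2:0 buffer layout behind the malloc in BLOCK FIVE */
int size   = c->width * c->height; /* Y plane: w*h bytes                */
int chroma = size / 4;             /* each of Cb, Cr: (w/2)*(h/2) bytes */
int total  = size + 2 * chroma;    /* = (size * 3) / 2, hence the malloc */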