
Other articles (70)
-
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" section of the site.
From there, in the navigation menu, you can access a "Gestion des langues" section that lets you enable support for new languages.
Each newly added language can still be disabled as long as no object has been created in that language. Once such an object exists, the language becomes greyed out in the configuration and (...) -
Accepted formats
28 January 2010
The following commands give information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Video formats accepted as input
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...) -
User profiles
12 April 2011
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu entry is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
The user can also edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)
On other sites (7790)
-
How to explain the given ffplay C code snippet?
20 July 2015, by Jerikc XIONG
The following code snippet is from ffplay:
static int decoder_decode_frame(Decoder *d, AVFrame *frame, AVSubtitle *sub) {
    int got_frame = 0;

    do {
        int ret = -1;

        if (d->queue->abort_request)
            return -1;

        if (!d->packet_pending || d->queue->serial != d->pkt_serial) {
            AVPacket pkt;
            do {
                if (d->queue->nb_packets == 0)
                    SDL_CondSignal(d->empty_queue_cond);
                if (packet_queue_get(d->queue, &pkt, 1, &d->pkt_serial) < 0)
                    return -1;
                if (pkt.data == flush_pkt.data) {
                    avcodec_flush_buffers(d->avctx);
                    d->finished = 0;
                    d->next_pts = d->start_pts;
                    d->next_pts_tb = d->start_pts_tb;
                }
            } while (pkt.data == flush_pkt.data || d->queue->serial != d->pkt_serial);
            av_free_packet(&d->pkt);
            d->pkt_temp = d->pkt = pkt;
            d->packet_pending = 1;
        }

        switch (d->avctx->codec_type) {
            case AVMEDIA_TYPE_VIDEO:
                ret = avcodec_decode_video2(d->avctx, frame, &got_frame, &d->pkt_temp);
                if (got_frame) {
                    if (decoder_reorder_pts == -1) {
                        frame->pts = av_frame_get_best_effort_timestamp(frame);
                    } else if (decoder_reorder_pts) {
                        frame->pts = frame->pkt_pts;
                    } else {
                        frame->pts = frame->pkt_dts;
                    }
                }
                break;
            case AVMEDIA_TYPE_AUDIO:
                ret = avcodec_decode_audio4(d->avctx, frame, &got_frame, &d->pkt_temp);
                if (got_frame) {
                    AVRational tb = (AVRational){1, frame->sample_rate};
                    if (frame->pts != AV_NOPTS_VALUE)
                        frame->pts = av_rescale_q(frame->pts, d->avctx->time_base, tb);
                    else if (frame->pkt_pts != AV_NOPTS_VALUE)
                        frame->pts = av_rescale_q(frame->pkt_pts, av_codec_get_pkt_timebase(d->avctx), tb);
                    else if (d->next_pts != AV_NOPTS_VALUE)
                        frame->pts = av_rescale_q(d->next_pts, d->next_pts_tb, tb);
                    if (frame->pts != AV_NOPTS_VALUE) {
                        d->next_pts = frame->pts + frame->nb_samples;
                        d->next_pts_tb = tb;
                    }
                }
                break;
            case AVMEDIA_TYPE_SUBTITLE:
                ret = avcodec_decode_subtitle2(d->avctx, sub, &got_frame, &d->pkt_temp);
                break;
        }

        if (ret < 0) {
            d->packet_pending = 0;
        } else {
            d->pkt_temp.dts =
            d->pkt_temp.pts = AV_NOPTS_VALUE;
            if (d->pkt_temp.data) {
                if (d->avctx->codec_type != AVMEDIA_TYPE_AUDIO)
                    ret = d->pkt_temp.size;
                d->pkt_temp.data += ret;
                d->pkt_temp.size -= ret;
                if (d->pkt_temp.size <= 0)
                    d->packet_pending = 0;
            } else {
                if (!got_frame) {
                    d->packet_pending = 0;
                    d->finished = d->pkt_serial; // FLAG
                }
            }
        }
    } while (!got_frame && !d->finished);

    return got_frame;
}

It's difficult for me to understand the following code:
d->finished = d->pkt_serial; // FLAG
Can anyone help me?
Thanks.
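
A hedged reading, for context: in ffplay every PacketQueue carries a serial number that is incremented whenever the queue is flushed (for example on a seek), and each packet remembers the serial it was queued with; d->pkt_serial is the serial of the packet currently being decoded. Writing d->pkt_serial into d->finished therefore records "the decoder has drained all input belonging to the current playback run". Below is a minimal sketch of how such a flag can be consumed; it is an illustration, not code taken from ffplay, and the function name decoder_drained is hypothetical.

/* Hedged sketch, not taken from ffplay: a "finished == current serial"
 * comparison. After a seek the queue's serial is incremented, so an old
 * d->finished value no longer matches and the stream is treated as active
 * again; only once the decoder has drained packets of the current serial
 * does this check report end of stream. */
static int decoder_drained(const Decoder *d)
{
    return d->finished == d->queue->serial && d->queue->nb_packets == 0;
}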
-
How do I set the framerate/FPS in FFmpeg code (C)?
2 June 2020, by Tobias v. Brevern
I am trying to encode single pictures into an .avi video. The goal is to have every picture displayed for a set amount of seconds to create a slide show. I tried my script with 10 pictures and a delay of 1/5 of a second, but the output file was not even half a second long (although it displayed every picture). To set the framerate I use the time_base option of the AVCodecContext:



ctx->time_base = (AVRational) {1, 5};



When I use the command
ffmpeg -framerate 1/3 -i img%03d.png -codec png output.avi
everything works fine and I get the file I want. I use the png codec because it was the only one I tried that is playable with Windows Media Player.


Am I missing anything here? Is there another option that has an impact on the framerate?



This is my code so far:



Note: I use a couple of self-made data structures and methods from other classes. They are the ones written in caps. They basically do what their names suggest but are necessary for my project. The input ARRAY contains the pictures that I want to encode.



#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <libavutil/error.h>

void PixmapsToAVI (ARRAY* arr, String outfile, double secs)
{
    if (arr != nil && outfile != "" && secs != 0) {
        AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_PNG);
        if (codec) {
            int width = -1;
            int height = -1;
            int ret = 0;

            AVCodecContext* ctx = NULL;
            ctx = avcodec_alloc_context3(codec);
            AVFrame* frame = av_frame_alloc();
            AVPacket* pkt = av_packet_alloc();

            FILE* file = fopen(outfile, "wb");

            ARRAYELEMENT* e;
            int count = 0;
            forall (e, *arr) {
                BITMAP bitmap (e->value, false);
                if (width < 0) {
                    width = bitmap.Width();
                    height = bitmap.Height();

                    ctx->width = width;
                    ctx->height = height;
                    ctx->time_base = (AVRational){1, 5};
                    ctx->framerate = (AVRational){5, 1};
                    ctx->pix_fmt = AV_PIX_FMT_RGB24;
                    ret = avcodec_open2(ctx, codec, NULL);

                    frame->width = width;
                    frame->height = height;
                    frame->format = ctx->pix_fmt;
                    av_opt_set(ctx->priv_data, "preset", "slow", 1);
                }
                ret = av_frame_get_buffer(frame, 1);
                frame->linesize[0] = width*3;

                bitmap.Convert32();
                byte* pixels = bitmap.PixelsRGB();

                // The two methods above convert the pixmap into the RGB structure we need.
                // They are not needed to get an output file but are needed to get one that makes sense.

                fflush(stdout);
                int writeable = av_frame_make_writable(frame);
                if (writeable >= 0) {
                    for (int i = 0; i < (height*width*3); i++) {
                        frame->data[0][i] = pixels[i];
                    }
                }
                ret = avcodec_send_frame(ctx, frame);
                while (ret >= 0) {
                    ret = avcodec_receive_packet(ctx, pkt);
                }
                count++;
                avcodec_receive_packet(ctx, pkt);
                fwrite(pkt->data, 1, pkt->size, file);
                fflush(stdout);
                av_packet_unref(pkt);
            }
            fclose(file);
            avcodec_free_context(&ctx);
            av_frame_free(&frame);
            av_packet_free(&pkt);
        }
    }
}
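
For comparison, here is a minimal sketch (not the asker's code; nb_pictures and the frame-filling step are placeholders) of how per-frame timing is usually expressed with the send/receive API: each AVFrame gets a monotonically increasing pts counted in ctx->time_base ticks, and every packet the encoder returns is drained before the next frame is sent.

/* Hedged sketch: with time_base = 1/5 (one tick = 0.2 s), pts = i makes
 * picture i start at i * 0.2 s, so each picture is shown for 0.2 s.
 * nb_pictures and the frame-filling step are placeholders. */
ctx->time_base = (AVRational){1, 5};
ctx->framerate = (AVRational){5, 1};

for (int i = 0; i < nb_pictures; i++) {
    /* ... fill frame->data[0] with picture i ... */
    frame->pts = i;                                  /* i-th tick of ctx->time_base */

    if (avcodec_send_frame(ctx, frame) < 0)
        break;
    while (avcodec_receive_packet(ctx, pkt) == 0) {  /* drain all pending packets */
        fwrite(pkt->data, 1, pkt->size, file);
        av_packet_unref(pkt);
    }
}

Note also that, as far as I understand, writing raw encoded packets with fwrite does not produce a valid .avi container; per-frame durations normally live in the container, so muxing through libavformat (avformat_write_header / av_interleaved_write_frame) is the usual route when the output is meant to be an AVI.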









-
How to let fluent-ffmpeg complete rendering before executing the next line of code?
12 April 2020, by wongz
The .forEach() loop cuts ffmpeg short, so it doesn't fully finish rendering any single video. How can I allow ffmpeg to finish rendering before the next loop iteration occurs?



const ffmpeg = require('fluent-ffmpeg');

let videos = ['vid1.mp4', 'vid2.mp4', 'vid3.mp4'];

videos.forEach((vid, i) => {
  ffmpeg(vid)
    .size('1280x720')
    .save(vid);
});