
Media (91)
-
Les Miserables
9 December 2019
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit il mes mails ?
4 February 2014
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013
Updated: October 2013
Language: French
Type: Image
Other articles (56)
-
Customizing by adding a logo, banner, or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
General document management
13 May 2011
MediaSPIP never modifies the original document that is uploaded.
For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original downloadable in case it cannot be displayed in a web browser; and extracting the original document's metadata to describe the file textually.
The tables below explain what MediaSPIP can do (...) -
Automatic backup of SPIP channels
1 April 2010
When setting up an open platform, it is important for hosts to have reasonably regular backups in order to recover from any problem that may arise.
This task relies on two SPIP plugins: Saveauto, which performs regular backups of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (documents, elements (...)
On other sites (12614)
-
Libav/ffmpeg API duplicates first frame of mp4 video and makes it second frame
22 January 2020, by Optic_Ray
I am making an mp4 video file from a number of jpg images of different sizes, named sample<number>.jpg (sample1.jpg, sample2.jpg, etc.) in a folder. I modified the ffmpeg muxing.c example so that it builds the mp4 from this set of jpg images (one image per frame) and creates only a video stream (no audio stream). It can create an mp4 with n video frames from n different jpg files, but the output for n=2 is not what I expect. I have tried n=1 through n=7 so far. With n=2 the mp4 is created, but I see only the first jpg image throughout the video. It looks as if the first frame (the first jpg image) is duplicated and used as the second frame too, since I never see the second jpg image when I play the video. When I make a video with 2 frames from 2 different jpg files, I want the first frame to be the first jpg image and the second frame to be the second jpg image. How do I achieve that?
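As a sanity check, the encoded frames can be dumped back out as images and the per-frame metadata inspected, independent of the player (standard ffmpeg/ffprobe usage; output.mp4 stands for the generated file):
ffmpeg -i output.mp4 dumped%d.jpg
ffprobe -select_streams v -show_frames output.mp4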
ffmpeg configuration:
ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
Hyper fast Audio and Video encoder
The number of frames in the video can be set via
#define STREAM_FRAME_RATE n //n = number of frames
(with STREAM_DURATION fixed at 1 second, frame generation stops once next_pts reaches STREAM_DURATION * STREAM_FRAME_RATE, so the file contains n frames). Following is the entire code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libavutil/avassert.h>
#include <libavutil/channel_layout.h>
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>
#include <libavutil/imgutils.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#define STREAM_DURATION 1
#define STREAM_FRAME_RATE 2 /* 2 image/s */
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */
#define SCALE_FLAGS SWS_BICUBIC
// a wrapper around a single output AVStream
typedef struct OutputStream {
AVStream *st;
AVCodecContext *enc;
/* pts of the next frame that will be generated */
int64_t next_pts;
int samples_count;
AVFrame *frame;
AVFrame *tmp_frame;
float t, tincr, tincr2;
struct SwsContext *sws_ctx;
struct SwrContext *swr_ctx;
} OutputStream;
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
{
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
pkt->stream_index);
}
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
{
/* rescale output packet timestamp values from codec to stream timebase */
av_packet_rescale_ts(pkt, *time_base, st->time_base);
pkt->stream_index = st->index;
/* Write the compressed frame to the media file. */
log_packet(fmt_ctx, pkt);
return av_interleaved_write_frame(fmt_ctx, pkt);
}
/* Add an output stream. */
static void add_stream(OutputStream *ost, AVFormatContext *oc,
AVCodec **codec,
enum AVCodecID codec_id)
{
AVCodecContext *c;
int i;
/* find the encoder */
*codec = avcodec_find_encoder(codec_id);
if (!(*codec)) {
fprintf(stderr, "Could not find encoder for '%s'\n",
avcodec_get_name(codec_id));
exit(1);
}
ost->st = avformat_new_stream(oc, NULL);
if (!ost->st) {
fprintf(stderr, "Could not allocate stream\n");
exit(1);
}
ost->st->id = oc->nb_streams-1;
c = avcodec_alloc_context3(*codec);
if (!c) {
fprintf(stderr, "Could not alloc an encoding context\n");
exit(1);
}
ost->enc = c;
switch ((*codec)->type) {
case AVMEDIA_TYPE_AUDIO:
printf("######################2 audio add stream\n");
c->sample_fmt = (*codec)->sample_fmts ?
(*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
c->bit_rate = 64000;
c->sample_rate = 44100;
if ((*codec)->supported_samplerates) {
c->sample_rate = (*codec)->supported_samplerates[0];
for (i = 0; (*codec)->supported_samplerates[i]; i++) {
if ((*codec)->supported_samplerates[i] == 44100)
c->sample_rate = 44100;
}
}
c->channel_layout = AV_CH_LAYOUT_STEREO;
if ((*codec)->channel_layouts) {
c->channel_layout = (*codec)->channel_layouts[0];
for (i = 0; (*codec)->channel_layouts[i]; i++) {
if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
c->channel_layout = AV_CH_LAYOUT_STEREO;
}
}
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
ost->st->time_base = (AVRational){ 1, c->sample_rate };
break;
case AVMEDIA_TYPE_VIDEO:
printf("###################### video add stream\n");
c->codec_id = codec_id;
c->bit_rate = 400000;
/* Resolution must be a multiple of two. */
c->width = 352;
c->height = 288;
/* timebase: This is the fundamental unit of time (in seconds) in terms
* of which frame timestamps are represented. For fixed-fps content,
* timebase should be 1/framerate and timestamp increments should be
* identical to 1. */
ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
c->time_base = ost->st->time_base;
c->gop_size = 12; /* emit one intra frame every twelve frames at most */
c->pix_fmt = STREAM_PIX_FMT;
if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
/* just for testing, we also add B-frames */
c->max_b_frames = 2;//2
}
if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
/* Needed to avoid using macroblocks in which some coeffs overflow.
* This does not happen with normal video, it just happens here as
* the motion of the chroma plane does not match the luma plane. */
c->mb_decision = 2;//2
}
break;
default:
break;
}
/* Some formats want stream headers to be separate. */
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
/**************************************************************/
/* video output */
static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
{
AVFrame *picture;
int ret;
picture = av_frame_alloc();
if (!picture)
return NULL;
picture->format = pix_fmt;
picture->width = width;
picture->height = height;
/* allocate the buffers for the frame data */
ret = av_frame_get_buffer(picture, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate frame data.\n");
exit(1);
}
return picture;
}
static void open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
{
int ret;
AVCodecContext *c = ost->enc;
AVDictionary *opt = NULL;
av_dict_copy(&opt, opt_arg, 0);
printf("In open video\n");
/* open the codec */
ret = avcodec_open2(c, codec, &opt);
av_dict_free(&opt);
if (ret < 0) {
fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
exit(1);
}
/* allocate and init a re-usable frame */
ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
if (!ost->frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
/* If the output format is not YUV420P, then a temporary YUV420P
* picture is needed too. It is then converted to the required
* output format. */
ost->tmp_frame = NULL;
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
if (!ost->tmp_frame) {
fprintf(stderr, "Could not allocate temporary picture\n");
exit(1);
}
}
/* copy the stream parameters to the muxer */
ret = avcodec_parameters_from_context(ost->st->codecpar, c);
if (ret < 0) {
fprintf(stderr, "Could not copy the stream parameters\n");
exit(1);
}
}
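/* Decode the first stream of imageFileName (a JPEG), scale it to
 * width x height, and write the YUV420P result into pict.
 * Returns 1 on success, 0 on failure. */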
int open_image(const char* imageFileName,int width,int height,AVFrame *pict)
{
AVFormatContext *pFormatContext = avformat_alloc_context();
if (!pFormatContext)
{
printf("ERROR could not allocate memory for Format Context");
return 0;
}
//av_register_all();
if (avformat_open_input(&pFormatContext, imageFileName, NULL, NULL) != 0)
{
printf("ERROR could not open the file");
return 0;
}
printf("format %s, duration %ld us, bit_rate %ld", pFormatContext->iformat->name, pFormatContext->duration, pFormatContext->bit_rate);
if (avformat_find_stream_info(pFormatContext, NULL) < 0)
{
printf("ERROR could not get the stream info");
return 0;
}
AVCodecParameters *pCodecParameters = pFormatContext->streams[0]->codecpar;
printf("%d",pFormatContext->nb_streams);
AVCodec *pCodec = avcodec_find_decoder(AV_CODEC_ID_MJPEG);
if (pCodec==NULL) {
printf("ERROR unsupported codecopenimage!");
printf("---------------------------------------11");
return 0;
}
AVCodecContext *pCodecCtx =avcodec_alloc_context3(pCodec);
if (!pCodecCtx)
{
printf("failed to allocated memory for AVCodecContext");
return 0;
}
if (avcodec_parameters_to_context(pCodecCtx, pCodecParameters) < 0)
{
printf("failed to copy codec params to codec context");
return 0;
}
/* the context is now filled from the stream, so width, height, pix_fmt
 * and color_range are valid from here on */
if(pCodecCtx->color_range==AVCOL_RANGE_JPEG) printf("\nAVCOL_RANGE_JPEG\n");
// Open codec
if(avcodec_open2(pCodecCtx, pCodec,NULL)<0)
{
printf("Could not open codec");
return 0;
}
AVFrame *pFrame = av_frame_alloc();
if (!pFrame)
{
printf("Can't allocate memory for AVFrame\n");
return 0;
}
AVPacket *packet= av_packet_alloc();
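/* read packets from the image file until one of them decodes to a frame */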
int temp=av_read_frame(pFormatContext, packet);
while (temp >= 0)
{
if(packet->stream_index != 0){
/* not the image stream: drop the packet and read the next one
 * ('continue' alone would loop forever without a new av_read_frame) */
av_packet_unref(packet);
temp = av_read_frame(pFormatContext, packet);
continue;}
int ret = avcodec_send_packet(pCodecCtx,packet);
if (ret < 0)
{
printf("Error while sending a packet to the decoder: %s", av_err2str(ret));
return 0;
}
ret = avcodec_receive_frame(pCodecCtx,pFrame);
if (ret == 0) /* a frame was successfully decoded */
{
pFrame->quality = 1;
av_packet_unref(packet);
struct SwsContext *resize;
resize = sws_getContext(pCodecParameters->width, pCodecParameters->height,
AV_PIX_FMT_YUVJ420P,
width, height,
AV_PIX_FMT_YUV420P,
SCALE_FLAGS, NULL, NULL, NULL);
sws_scale(resize,(const uint8_t * const *)pFrame->data, pFrame->linesize,
0, pCodecParameters->height, pict->data, pict->linesize);
sws_freeContext(resize);
av_frame_free(&pFrame);
av_packet_free(&packet);
avcodec_free_context(&pCodecCtx);
avformat_close_input(&pFormatContext);
return 1;
}
else {
printf("Error [%d] while decoding frame: %s\n", ret,
strerror(AVERROR(ret)));
return 0;
}
}
avcodec_free_context(&pCodecCtx);
avformat_close_input(&pFormatContext);
return 0;
}
static void fill_yuv_image(AVFrame *pict, int frame_index,
int width, int height)
{
int ret;
/* when we pass a frame to the encoder, it may keep a reference to it
* internally;
* make sure we do not overwrite it here*/
ret = av_frame_make_writable(pict);
if (ret < 0)
exit(1);
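/* frame_index comes from next_pts, so frame 0 loads sample1.jpg,
 * frame 1 loads sample2.jpg, and so on */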
char filename[256];
sprintf(filename, "sample%d.jpg", frame_index+1);
if(open_image(filename,width,height,pict)!=1) exit(1);
}
static AVFrame *get_video_frame(OutputStream *ost)
{
AVCodecContext *c = ost->enc;
/* check if we want to generate more frames */
if (av_compare_ts(ost->next_pts, c->time_base,
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
return NULL;
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
/* as we only generate a YUV420P picture, we must convert it
* to the codec pixel format if needed */
if (!ost->sws_ctx) {
ost->sws_ctx = sws_getContext(c->width, c->height,
AV_PIX_FMT_YUV420P,
c->width, c->height,
c->pix_fmt,
SCALE_FLAGS, NULL, NULL, NULL);
if (!ost->sws_ctx) {
fprintf(stderr,
"Could not initialize the conversion context\n");
exit(1);
}
}
fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);
sws_scale(ost->sws_ctx,
(const uint8_t * const *)ost->tmp_frame->data, ost->tmp_frame->linesize,
0, c->height, ost->frame->data, ost->frame->linesize);
} else {
fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);
}
ost->frame->pts = ost->next_pts++;
return ost->frame;
}
/*
* encode one video frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
{
int ret;
AVCodecContext *c;
AVFrame *frame;
int got_packet = 0;
AVPacket pkt = { 0 };
c = ost->enc;
printf("In write_video_frame\n");
frame = get_video_frame(ost);
av_init_packet(&pkt);
/* encode the image */
ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
if (ret < 0) {
fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
exit(1);
}
if (got_packet) {
ret = write_frame(oc, &c->time_base, ost->st, &pkt);
} else {
ret = 0;
}
if (ret < 0) {
fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
exit(1);
}
return (frame || got_packet) ? 0 : 1;
}
static void close_stream(AVFormatContext *oc, OutputStream *ost)
{
avcodec_free_context(&ost->enc);
av_frame_free(&ost->frame);
av_frame_free(&ost->tmp_frame);
sws_freeContext(ost->sws_ctx);
swr_free(&ost->swr_ctx);
}
/**************************************************************/
/* media file output */
int main(int argc, char **argv)
{
OutputStream video_st = { 0 };
const char *filename;
AVOutputFormat *fmt;
AVFormatContext *oc;
AVCodec *video_codec;
int ret;
int have_video = 0;
int encode_video = 0;
AVDictionary *opt = NULL;
int i;
/* Initialize libavcodec, and register all codecs and formats. */
av_register_all();
if (argc < 2) {
printf("usage: %s output_file\n"
"API example program to output a media file with libavformat.\n"
"This program generates a synthetic audio and video stream, encodes and\n"
"muxes them into a file named output_file.\n"
"The output format is automatically guessed according to the file extension.\n"
"Raw images can also be output by using '%%d' in the filename.\n"
"\n", argv[0]);
return 1;
}
filename = argv[1];
for (i = 2; i+1 < argc; i+=2) {
if (!strcmp(argv[i], "-flags") || !strcmp(argv[i], "-fflags"))
av_dict_set(&opt, argv[i]+1, argv[i+1], 0);
}
/* allocate the output media context */
avformat_alloc_output_context2(&oc, NULL, NULL, filename);
if (!oc) {
printf("Could not deduce output format from file extension: using MPEG.\n");
avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
}
if (!oc)
return 1;
fmt = oc->oformat;
/* Add the audio and video streams using the default format codecs
* and initialize the codecs. */
if (fmt->video_codec != AV_CODEC_ID_NONE) {
add_stream(&video_st, oc, &video_codec, fmt->video_codec);
have_video = 1;
encode_video = 1;
}
/* Now that all the parameters are set, we can open the audio and
* video codecs and allocate the necessary encode buffers. */
if (have_video)
open_video(oc, video_codec, &video_st, opt);
av_dump_format(oc, 0, filename, 1);
/* open the output file, if needed */
if (!(fmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Could not open '%s': %s\n", filename,
av_err2str(ret));
return 1;
}
}
/* Write the stream header, if any. */
ret = avformat_write_header(oc, &opt);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file: %s\n",
av_err2str(ret));
return 1;
}
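/* encode until write_video_frame() reports that the stream is finished,
 * i.e. get_video_frame() returned NULL and the encoder produced no packet */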
while (encode_video) {
/* select the stream to encode */
printf("######################\n");
if (encode_video &&
(av_compare_ts(video_st.next_pts, video_st.enc->time_base,STREAM_DURATION, (AVRational){ 1, 1 }) <= 0)) {
encode_video = !write_video_frame(oc, &video_st);
}
}
av_write_trailer(oc);
/* Close each codec. */
if (have_video)
close_stream(oc, &video_st);
if (!(fmt->flags & AVFMT_NOFILE))
/* Close the output file. */
avio_closep(&oc->pb);
/* free the stream */
avformat_free_context(oc);
return 0;
}
-
libavcodec/dnxhd: Enable 12-bit DNxHR support.
2 August 2016, by Steven Robertson
libavcodec/dnxhd: Enable 12-bit DNxHR support.
10- and 12-bit DNxHR use the same DC coefficient decoding process and VLC table, just with a different shift value. From SMPTE 2019-1:2016, 8.2.4 DC Coefficient Decoding: "For 8-bit video sampling, the maximum value of η=11 and for 10-/12-bit video sampling, the maximum value of η=13."
A sample file will be uploaded to show that with this patch, things decode correctly:
dnxhr_hqx_12bit_1080p_smpte_colorbars_davinci_resolve.mov
Signed-off-by: Steven Robertson <steven@strobe.cc>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
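A minimal illustrative sketch (not the actual libavcodec code; the names here are hypothetical) of the idea that one shared DC VLC table can serve several bit depths when only the maximum code length η is parameterized per depth:
#include <stdio.h>

/* Hypothetical illustration of the per-depth cap on the DC VLC code
 * length (eta) quoted above from SMPTE ST 2019-1:2016, 8.2.4: the shared
 * table is simply read up to a different maximum size per bit depth. */
static int dnxhr_dc_max_size(int bitdepth)
{
    return (bitdepth == 8) ? 11 : 13; /* 10- and 12-bit share eta = 13 */
}

int main(void)
{
    int depths[] = { 8, 10, 12 };
    for (int i = 0; i < 3; i++)
        printf("bitdepth %d -> max DC VLC size %d\n",
               depths[i], dnxhr_dc_max_size(depths[i]));
    return 0;
}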
Evolution #3692: Following MediaJS updates
28 June 2017, by Franck D
Hello, just to note that version 4.2.2 has just been released: https://github.com/mediaelement/mediaelement/blob/master/changelog.md
I have not run any tests. There are quite a few bug fixes, but also an update of the Facebook API, which moves from 2.6 to 2.9!
https://github.com/mediaelement/mediaelement/commit/8a5760dc6ca3fe5c32eb26f44142e579d718128a
The point of updating the lib is that it will keep working after 13 July 2018: https://developers.facebook.com/docs/apps/changelog
We do not update libs often in minor SPIP releases (except for security issues or major bugs), and we rarely publish "major" updates (with a bit of luck, 3.3 will be out by July 2019). That is why I think shipping the new lib would be a good idea; otherwise a feature will stop working as of next year :-(
Franck