
Media (2)
-
Granite de l'Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
Other articles (57)
-
The accepted formats
28 January 2010
The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...) -
Submit bugs and patches
13 April 2011
Unfortunately, no software is ever perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation as possible of the problem; if possible, the steps taken that led to the problem; a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
On other sites (5123)
-
ffmpeg : missing frames with mp4 encoding
6 July 2016, by Sierra
I'm currently developing a desktop app that generates videos from pictures (QImage to be more specific). I'm working with Qt 5.6 and the latest build of ffmpeg (build git-0a9e781 (2016-06-10)).
I encode several QImage to create an .mp4 video. I already have an output, but it seems that some frames are missing.
Here is my code. I tried to be as clear as possible, removing comments and error handling.
## INITIALIZATION
#####################################################################
AVOutputFormat * outputFormat = Q_NULLPTR;
AVFormatContext * formatContext = Q_NULLPTR;
// filePath: "C:/Users/.../qt_temp.Jv7868.mp4"
avformat_alloc_output_context2(&formatContext, NULL, NULL, filePath.data());
outputFormat = formatContext->oformat;
if (outputFormat->video_codec != AV_CODEC_ID_NONE) {
// Finding a registered encoder with a matching codec ID...
*codec = avcodec_find_encoder(outputFormat->video_codec);
// Adding a new stream to a media file...
stream = avformat_new_stream(formatContext, *codec);
stream->id = formatContext->nb_streams - 1;
AVCodecContext * codecContext = avcodec_alloc_context3(*codec);
switch ((*codec)->type) {
case AVMEDIA_TYPE_VIDEO:
codecContext->codec_id = outputFormat->video_codec;
codecContext->bit_rate = 400000;
codecContext->width = 1240;
codecContext->height = 874;
// Timebase: this is the fundamental unit of time (in seconds) in terms of which frame
// timestamps are represented. For fixed-fps content, timebase should be 1/framerate
// and timestamp increments should be identical to 1.
stream->time_base = (AVRational){1, 24};
codecContext->time_base = stream->time_base;
// Emit 1 intra frame every 12 frames at most
codecContext->gop_size = 12;
codecContext->pix_fmt = AV_PIX_FMT_YUV420P;
if (codecContext->codec_id == AV_CODEC_ID_H264) {
av_opt_set(codecContext->priv_data, "preset", "slow", 0);
}
break;
}
if (formatContext->oformat->flags & AVFMT_GLOBALHEADER) {
codecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
}
avcodec_open2(codecContext, *codec, NULL);
// Allocating and initializing a re-usable frames...
frame = allocPicture(codecContext->width, codecContext->height, codecContext->pix_fmt);
tmpFrame = allocPicture(codecContext->width, codecContext->height, AV_PIX_FMT_BGRA);
avcodec_parameters_from_context(stream->codecpar, codecContext);
av_dump_format(formatContext, 0, filePath.data(), 1);
if (!(outputFormat->flags & AVFMT_NOFILE)) {
avio_open(&formatContext->pb, filePath.data(), AVIO_FLAG_WRITE);
}
// Writing the stream header, if any...
avformat_write_header(formatContext, NULL);
## RECEIVING A NEW FRAME
#####################################################################
// New QImage received: QImage image
const qint32 width = image.width();
const qint32 height = image.height();
// When we pass a frame to the encoder, it may keep a reference to it internally;
// make sure we do not overwrite it here!
av_frame_make_writable(tmpFrame);
for (qint32 y = 0; y < height; y++) {
const uint8_t * scanline = image.scanLine(y);
for (qint32 x = 0; x < width * 4; x++) {
tmpFrame->data[0][y * tmpFrame->linesize[0] + x] = scanline[x];
}
}
// As we only generate a BGRA picture, we must convert it to the
// codec pixel format if needed.
if (!swsCtx) {
swsCtx = sws_getContext(width, height,
AV_PIX_FMT_BGRA,
codecContext->width, codecContext->height,
codecContext->pix_fmt,
swsFlags, NULL, NULL, NULL);
}
sws_scale(swsCtx,
(const uint8_t * const *)tmpFrame->data,
tmpFrame->linesize,
0,
codecContext->height,
frame->data,
frame->linesize);
...
// "frame" is the reusable AVFrame allocated during initialization
int gotPacket = 0;
av_init_packet(&packet);
// Packet data will be allocated by the encoder
packet.data = NULL;
packet.size = 0;
frame->pts = nextPts++; // nextPts starts at 0
avcodec_encode_video2(codecContext, &packet, frame, &gotPacket);
if (gotPacket) {
if (codecContext->coded_frame->key_frame) {
packet.flags |= AV_PKT_FLAG_KEY;
}
// Rescale output packet timestamp values from codec to stream timebase
av_packet_rescale_ts(&packet, codecContext->time_base, stream->time_base);
packet.stream_index = stream->index;
// Write the compressed frame to the media file.
av_interleaved_write_frame(formatContext, &packet);
av_packet_unref(&packet);
}
}
## FINISHING ENCODING
#####################################################################
// Retrieving delayed frames if any...
for (int gotOutput = 1; gotOutput;) {
avcodec_encode_video2(codecContext, &packet, NULL, &gotOutput);
if (gotOutput) {
// Rescale output packet timestamp values from codec to stream timebase
av_packet_rescale_ts(&packet, codecContext->time_base, stream->time_base);
packet.stream_index = stream->index;
// Write the compressed frame to the media file.
av_interleaved_write_frame(formatContext, &packet);
av_packet_unref(&packet);
}
}
av_write_trailer(formatContext);
avcodec_free_context(&codecContext);
av_frame_free(&frame);
av_frame_free(&tmpFrame);
sws_freeContext(swsCtx);
if (!(outputFormat->flags & AVFMT_NOFILE)) {
// Closing the output file...
avio_closep(&formatContext->pb);
}
avformat_free_context(formatContext);
Part of the last second is always cut (e.g. when I send 48 frames at 24 fps, media players show 1.9 seconds of video). I analyzed the video (48 frames, 24 fps) with ffmpeg on the command line, and I found something weird:
When I re-encode the video with ffmpeg (on the command line) to the same format, I get a more logical output:
From what I read in different threads, I think it is closely connected to the h264 codec, but I have no idea how to fix it. I'm not familiar with ffmpeg, so any kind of help would be highly appreciated. Thank you.
EDIT 06/07/2016
Digging a little deeper into the ffmpeg examples, I noticed these lines when closing the media file:
uint8_t endcode[] = { 0, 0, 1, 0xb7 };
...
/* add sequence end code to have a real mpeg file */
fwrite(endcode, 1, sizeof(endcode), f);
Could that sequence be linked to my problem? I'm trying to add it to my code but, for now, it corrupts the media file. Any idea how I can apply that line in my case?
-
Anomalie #3707 : bug dans la gestion des langues avec le multilinguisme : impossible de fixer la t...
30 June 2016, by Guillaume Fahrner
The patch requested on IRC:
--- prive/formulaires/traduire.php 2016-03-10 15:32:38.000000000 +0100
+++ /var/www/www.root-me.org/htdocs/prive/formulaires/traduire.php 2016-05-26 09:27:03.194473328 +0200
@@ -154,7 +154,8 @@
 if (!_request('annuler') and autoriser('changerlangue', $objet, $id_objet))
 // action/editer_xxx doit traiter la modif de changer_lang
 $res = formulaires_editer_objet_traiter($objet, $id_objet, 0, 0, $retour) ;
+
+ if (!_request('annuler') and autoriser('changertraduction', $objet, $id_objet))
 if ($id_trad = _request('id_trad') or _request('supprimer_trad'))
 $referencer_traduction = charger_fonction('referencer_traduction', 'action') ;
 $referencer_traduction($objet, $id_objet, intval($id_trad)) ; // 0 si supprimer_trad
@@ -166,7 +167,7 @@
 $_id_table_objet = id_table_objet($objet) ;
 if ($id_trad = sql_getfetsel('id_trad', $table_objet_sql, "$_id_table_objet=" . intval($id_objet)))
 $referencer_traduction = charger_fonction('referencer_traduction', 'action') ;
- $referencer_traduction($objet, $id_trad, $new_id_trad) ;
+ $res = $referencer_traduction($objet, $id_trad, $new_id_trad) ;
--- prive/formulaires/traduire.html 2016-03-10 15:32:38.000000000 +0100
+++ /var/www/www.root-me.org/htdocs/prive/formulaires/traduire.html 2016-05-26 09:31:39.834465138 +0200
@@ -24,7 +24,7 @@
 f.find('.boutons,.new_trad,.editer_id_trad').show('fast') ;
 f.find('#changer_lang').eq(0).focus() ; return false ;" ><:bouton_changer:> \([(#ENV_langue| ?[(#ENV_objet|objet_infotexte_langue_objet|_T)],<:info_traductions:>)]\)] ;
 [(#ENV_langue|oui)
+ [(#ENVeditable|oui)
 [(#ENV{_saisie_en_cours}"> ]]]
-
FFMPEG : Video file to YUV conversion by binary ffmpeg and by code C++ give different results
30 June 2016, by Anny G
Disclaimer: I have looked at the following question,
FFMPEG : RGB to YUV conversion by binary ffmpeg and by code C++ give different results
but it didn't help, and it is not applicable to me because I am not using SwsContext or anything similar.
Following the first few tutorials at http://dranger.com/ffmpeg/, I created a simple program that reads a video, decodes it and, once a frame is decoded, writes the raw yuv values to a file (no padding), using the data provided by AVFrame. To be more specific, I write out the arrays
AVFrame->data[0]
, AVFrame->data[1] and AVFrame->data[2]
to a file, i.e. I simply append the Y values, then the U values, then the V values. The file turns out to be in yuv422p format.
When I convert the same original video to raw yuv using the ffmpeg command-line tool (same version of ffmpeg), the two yuv files are the same size but differ in content.
FYI, I am able to play both of the yuv files using the yuv player, and they look identical as well.
Here is the exact command I run to convert the original video to a yuv video using the ffmpeg command-line tool:
~/bin/ffmpeg -i super-short-video.h264 -c:v rawvideo -pix_fmt yuv422p "super-short-video-yuv422p.yuv"
What causes this difference in bytes, and can it be fixed? Is there perhaps another way of converting the original video to yuv with the ffmpeg tool, maybe with different settings?
FFmpeg output when I convert to a yuv format:
ffmpeg version N-80002-g5afecff Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.1)
configuration: --prefix=/home/me/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/me/ffmpeg_build/include --extra-ldflags=-L/home/me/ffmpeg_build/lib --bindir=/home/me/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --extra-cflags=-pg --extra-ldflags=-pg --disable-stripping
libavutil 55. 24.100 / 55. 24.100
libavcodec 57. 42.100 / 57. 42.100
libavformat 57. 36.100 / 57. 36.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 45.100 / 6. 45.100
libswscale 4. 1.100 / 4. 1.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, h264, from 'super-short-video.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p, 1280x720, 25 fps, 25 tbr, 1200k tbn
[rawvideo @ 0x24f6fc0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
Output #0, rawvideo, to 'super-short-video-yuv422p.yuv':
Metadata:
encoder : Lavf57.36.100
Stream #0:0: Video: rawvideo (Y42B / 0x42323459), yuv422p, 1280x720, q=2-31, 200 kb/s, 25 fps, 25 tbn
Metadata:
encoder : Lavc57.42.100 rawvideo
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Press [q] to stop, [?] for help
frame= 50 fps=0.0 q=-0.0 Lsize= 90000kB time=00:00:02.00 bitrate=368640.0kbits/s speed=11.3x
video:90000kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%