
Other articles (109)
-
Customize by adding your logo, banner or background image
5 September 2013 — Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013 — Present the changes in your MédiaSPIP, or news about your projects, using the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news creation form.
News creation form: for a news-type document, the default fields are: publication date (customize the publication date) (...)
-
Publishing on MédiaSpip
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
On other sites (14237)
-
Why is ffmpeg faster than this minimal example?
23 July 2022, by Dave Ceddia — I want to read the audio out of a video file as fast as possible, using the libav libraries. It's all working fine, but it seems like it could be faster.


To get a performance baseline, I ran this ffmpeg command and timed it:


time ffmpeg -threads 1 -i file -map 0:a:0 -f null -



On a test file (a 2.5 GB, 2-hour .MOV with pcm_s16be audio) this comes out to about 1.35 seconds on my M1 MacBook Pro.


On the other hand, this minimal C code (based on FFmpeg's "Demuxing and decoding" example) is consistently around 0.3 seconds slower.


#include <stdlib.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

static int decode_packet(AVCodecContext *dec, const AVPacket *pkt, AVFrame *frame)
{
    int ret = 0;

    // submit the packet to the decoder
    ret = avcodec_send_packet(dec, pkt);

    // get all the available frames from the decoder
    while (ret >= 0) {
        ret = avcodec_receive_frame(dec, frame);
        av_frame_unref(frame);
    }

    return 0;
}

int main (int argc, char **argv)
{
    int ret = 0;
    AVFormatContext *fmt_ctx = NULL;
    AVCodecContext *dec_ctx = NULL;
    AVFrame *frame = NULL;
    AVPacket *pkt = NULL;

    if (argc != 3) {
        exit(1);
    }

    int stream_idx = atoi(argv[2]);

    /* open input file, and allocate format context */
    avformat_open_input(&fmt_ctx, argv[1], NULL, NULL);

    /* get the stream */
    AVStream *st = fmt_ctx->streams[stream_idx];

    /* find a decoder for the stream */
    AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);

    /* allocate a codec context for the decoder */
    dec_ctx = avcodec_alloc_context3(dec);

    /* copy codec parameters from input stream to output codec context */
    avcodec_parameters_to_context(dec_ctx, st->codecpar);

    /* init the decoder */
    avcodec_open2(dec_ctx, dec, NULL);

    /* allocate frame and packet structs */
    frame = av_frame_alloc();
    pkt = av_packet_alloc();

    /* read frames from the specified stream */
    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (pkt->stream_index == stream_idx)
            ret = decode_packet(dec_ctx, pkt, frame);

        av_packet_unref(pkt);
        if (ret < 0)
            break;
    }

    /* flush the decoders */
    decode_packet(dec_ctx, NULL, frame);

    return ret < 0;
}



I tried measuring parts of this program to see if it was spending a lot of time in the setup, but it's not – at least 1.5 seconds of the runtime is the loop where it's reading frames.
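For reference, this is roughly how the setup/loop split could be measured with a monotonic clock; a minimal sketch, where now_sec() is an illustrative helper rather than anything from the program above:

#include <stdio.h>
#include <time.h>

/* Monotonic timestamp in seconds (CLOCK_MONOTONIC exists on both Linux and macOS). */
static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Wrapped around the existing code in main():
 *
 *     double t0 = now_sec();
 *     ... avformat_open_input() / avcodec_open2() ...
 *     double t1 = now_sec();
 *     ... the av_read_frame() loop ...
 *     double t2 = now_sec();
 *     fprintf(stderr, "setup %.3fs, decode loop %.3fs\n", t1 - t0, t2 - t1);
 */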


So I took some flamegraph recordings (using cargo-flamegraph) and ran each a few times to make sure the timing was consistent. Profiling adds some overhead, since both runs were consistently slower than normal, but the 0.3-second delta is still there.


# 1.812 total
time sudo flamegraph ./minimal file 1

# 1.542 total
time sudo flamegraph ffmpeg -threads 1 -i file -map 0:a:0 -f null - 2>&1



Here are the flamegraphs stacked up, scaled so that the faster one is only 85% as wide as the slower one.




The interesting thing that stands out to me is how much time is spent on read in the minimal example vs. ffmpeg:



The time spent on lseek is also a lot longer in the minimal program: it's plainly visible in that flamegraph, but in the ffmpeg flamegraph, lseek is a single pixel wide.
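One quick check, since the gap shows up in read and lseek, would be to print the size of the I/O buffer libavformat is actually using; buffer_size is a public field of the AVIOContext that avformat_open_input sets up. A small sketch (print_avio_buffer_size is an illustrative helper, not part of the program above):

#include <stdio.h>
#include <libavformat/avformat.h>

/* Print the size of the read buffer attached to an opened AVFormatContext.
 * fmt_ctx->pb is the AVIOContext created by avformat_open_input(). */
static void print_avio_buffer_size(const AVFormatContext *fmt_ctx)
{
    if (fmt_ctx->pb)
        fprintf(stderr, "avio buffer size: %d bytes\n", fmt_ctx->pb->buffer_size);
}

Calling this right after avformat_open_input() in both a ffmpeg build and the minimal program would at least show whether the two start from the same buffer size.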

What's causing this discrepancy? Is ffmpeg actually doing less work than I think it is here? Is the minimal code doing something naive? Are there buffering or other I/O optimizations that ffmpeg has enabled?


How can I shave 0.3 seconds off the minimal example's runtime?
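If buffering does turn out to be the difference, one thing that could be tested is handing libavformat a custom AVIOContext with a larger read buffer and seeing whether the read/lseek time drops. A sketch only, not taken from ffmpeg itself: the 4 MiB size is an arbitrary guess, and read_cb, seek_cb and open_with_big_buffer are illustrative names that just wrap stdio.

#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavutil/mem.h>

/* Fill libavformat's buffer from a plain FILE*. */
static int read_cb(void *opaque, uint8_t *buf, int buf_size)
{
    FILE *f = opaque;
    size_t n = fread(buf, 1, buf_size, f);
    return n > 0 ? (int)n : AVERROR_EOF;
}

/* Seek callback so the demuxer can still jump around the file. */
static int64_t seek_cb(void *opaque, int64_t offset, int whence)
{
    FILE *f = opaque;
    if (whence == AVSEEK_SIZE) {            /* libavformat asking for the file size */
        long cur = ftell(f);
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, cur, SEEK_SET);
        return size;
    }
    if (fseek(f, (long)offset, whence) < 0)
        return -1;
    return ftell(f);
}

/* Open an AVFormatContext that reads through a 4 MiB custom I/O buffer. */
static AVFormatContext *open_with_big_buffer(const char *path)
{
    const int buf_size = 4 << 20;           /* 4 MiB: an arbitrary guess */
    FILE *f = fopen(path, "rb");
    if (!f)
        return NULL;

    uint8_t *buf = av_malloc(buf_size);
    AVIOContext *avio = avio_alloc_context(buf, buf_size, 0, f,
                                           read_cb, NULL, seek_cb);

    AVFormatContext *fmt_ctx = avformat_alloc_context();
    fmt_ctx->pb = avio;
    fmt_ctx->flags |= AVFMT_FLAG_CUSTOM_IO;

    if (avformat_open_input(&fmt_ctx, NULL, NULL, NULL) < 0)
        return NULL;
    return fmt_ctx;
}

In the minimal program this would replace the plain avformat_open_input(&fmt_ctx, argv[1], NULL, NULL) call; whether it actually closes the 0.3-second gap is exactly the open question.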


-
avcodec/dpx: fix check of minimal data size for unpadded content
19 October 2022, by Jerome Martinez
avcodec/dpx: fix check of minimal data size for unpadded content
The stride value is not relevant for unpadded content, and the total count of pixels (width x height) must be used instead of the rounding based on width only, then multiplied by height.
The unpadded_10bit value computation is moved earlier in the code in order to be able to use it when computing the minimal content size. Also make sure to only set it for 10bit.
Fix 'Overread buffer' error when the content is not lucky enough to have (enough) padding bytes at the end to avoid being rejected by the formula based on the stride value.
Fixes ticket #10259.
Signed-off-by: Jerome Martinez <jerome@mediaarea.net>
Signed-off-by: Marton Balint <cus@passwd.hu>
-
ffmpeg utility vs direct libav* capabilities
15 April 2015, by ransh — I would like to ask a general question about the ffmpeg utility.
When the ffmpeg documentation at
https://www.ffmpeg.org/general.html
says that some feature is supported (for example JPEG 2000), does it mean that only the ffmpeg utility supports this feature, or does it mean that using the libraries in the ffmpeg repository directly (through libav*, libavcodec, libopenjpeg, and other external libraries which are part of ffmpeg) will support this feature as well?
Thank you very much,
Ran
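For what it's worth, one way to see the distinction concretely is to ask the libraries themselves whether a given decoder was compiled in. A small sketch, assuming the usual FFmpeg decoder names: "jpeg2000" for the native JPEG 2000 decoder built into libavcodec, and "libopenjpeg" for the optional wrapper around the external libopenjpeg library.

#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    /* Native JPEG 2000 decoder built into libavcodec. */
    const AVCodec *native = avcodec_find_decoder_by_name("jpeg2000");
    /* Wrapper around the external libopenjpeg library (enabled at build time). */
    const AVCodec *wrapper = avcodec_find_decoder_by_name("libopenjpeg");

    printf("native jpeg2000 decoder: %s\n", native ? "available" : "not built in");
    printf("libopenjpeg wrapper:     %s\n", wrapper ? "available" : "not built in");
    return 0;
}

If the lookup succeeds in a program linked against the libav* libraries, the feature is available to library users as well, not just to the ffmpeg command-line tool, provided the libraries were built with that codec enabled.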