
Other articles (107)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you need to manually install all the software dependencies on the server.
If you want to use this archive for a "farm mode" installation, you will also have to make other modifications (...) -
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...) -
Improving the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-select fields. See the two images below for a comparison.
To use it, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)
On other sites (11063)
-
How to increase the compression ratio of a JPEG2000 file with the "avenc_jpeg2000" GStreamer encoder?
10 July 2023, by qfr.bertin.group
I'm using the avenc_jpeg2000 plugin from the gst-libav module, combined with the videotestsrc and filesink plugins, to encode a raw picture to a JPEG2000 picture:


gst-launch-1.0 videotestsrc num-buffers=1 ! avenc_jpeg2000 ! filesink location=/tmp/picture-ref.jp2



This pipeline works and produces a 31.85 KiB (32,616 bytes) file.




Now I want to halve the size of my output file by increasing the compression ratio of the avenc_jpeg2000 encoder, i.e. minimize the number of bits required to represent the image while accepting some distortion. I know the JPEG2000 standard supports both lossless and lossy compression; for my use case, lossy compression is acceptable.


How should I proceed to increase the compression of my output file? Which encoder properties should I adjust to do that?
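To see which knobs the element actually exposes (my addition, not part of the original question), list its GObject properties with gst-inspect-1.0; only properties listed there can be set from a gst-launch-1.0 pipeline:

gst-inspect-1.0 avenc_jpeg2000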


My test configuration:

- i.MX 8M Plus
- GStreamer 1.18.0
- libav 1.18.0 (release date: 2020-09-08)


I tried playing with the "bitrate" and "bitrate-tolerance" properties, but they seem to have no effect on the size of the output file:


gst-launch-1.0 videotestsrc num-buffers=1 ! avenc_jpeg2000 bitrate=100000 bitrate-tolerance=10000 ! filesink location=/tmp/picture-test-01.jp2



I compare the files by computing a checksum with the sha224sum command:

d0da9118a9c93a0420d6d62f104e0d99fe6e50cda5e87a46cef126f9 /tmp/picture-ref.jp2

d0da9118a9c93a0420d6d62f104e0d99fe6e50cda5e87a46cef126f9 /tmp/picture-test-01.jp2
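The identical checksums show that these two properties are effectively ignored. As a side experiment (my suggestion, not from the original post, and assuming -q:v maps to the encoder's qscale/global quality as it does for most native FFmpeg encoders), the same jpeg2000 encoder that avenc_jpeg2000 wraps can be driven from the ffmpeg CLI with a hypothetical test image to check that lossy output really does shrink the file:

ffmpeg -i picture-ref.png -c:v jpeg2000 -q:v 30 /tmp/picture-lossy.jp2

If that file is clearly smaller than the reference, the remaining question is which GObject property (if any) of avenc_jpeg2000 exposes the equivalent control.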



-
FFmpeg: "filter_complex" results in worse quality than "vf"
16 November 2016, by Dasmowenator
I'm trying to use FFmpeg to do some complex video transcoding (such as concatenating multiple files). To do this, I've been trying to use filter_complex, but I've noticed a slight drop in quality from what I saw earlier using the normal video filter.
To double-check, I boiled down my command to a simple transcode, one using filter_complex and one just using vf, and I've confirmed that the output of the complex filter is noticeably blurry compared to the output of the normal video filter. I can't find any FFmpeg documentation explaining this... does anyone know why this is happening and how I can get filter_complex to output the same quality video as vf?
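One way to quantify the difference instead of judging blurriness by eye (my suggestion, not part of the original question) is to compare the two encodes with FFmpeg's psnr filter; the file names below are hypothetical, since both original commands write to output.ts:

ffmpeg -i output_complex.ts -i output_vf.ts -lavfi psnr -f null -

A finite average PSNR in the resulting log confirms that the two outputs really do differ frame by frame; identical files would report infinity.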
Command using the normal video filter (vf):
ffmpeg -i input.ts -map_chapters -1 -f mpegts -an -sn -map 0:0 -vf "[in]yadif=deint=interlaced[out]" -vcodec libx264 -profile:v baseline -level 2 -b:v 800k output.ts
FFmpeg output:
ffmpeg version 3.0 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.1.2 (GCC) 20070626 (Red Hat 4.1.2-14)
configuration: --enable-gpl --enable-nonfree --enable-libx264 --enable-libfdk-aac --enable-libfaac --enable-libvpx --enable-encoder=vorbis --enable-libvorbis --enable-libmp3lame --enable-libspeex --disable-decoder=prores --disable-decoder=prores_lgpl --disable-ffplay --disable-ffserver --disable-shared --enable-static --extra-cflags=-I/local/build/include --extra-libs=-lfdk-aac --extra-ldflags=-L/local/build/lib --prefix=/local/build/install
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'clip01.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.41.100
Duration: 00:00:10.02, start: 0.023220, bitrate: 741 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 854x480 [SAR 1:1 DAR 427:240], 604 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : SoundHandler
[libx264 @ 0x138a99e0] No 608/708 caption insertion into sei user data.
[libx264 @ 0x138a99e0] using SAR=1/1
[libx264 @ 0x138a99e0] frame MB size (54x30) > level limit (396)
[libx264 @ 0x138a99e0] MB rate (48600) > level limit (11880)
[libx264 @ 0x138a99e0] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64
[libx264 @ 0x138a99e0] profile Constrained Baseline, level 2.0
Output #0, mpegts, to 'output.ts':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.25.100
Stream #0:0(und): Video: h264 (libx264), yuv420p, 854x480 [SAR 1:1 DAR 427:240], q=-1--1, 800 kb/s, 30 fps, 90k tbn, 30 tbc (default)
Metadata:
handler_name : VideoHandler
encoder : Lavc57.24.102 libx264
Side data:
unknown side data type 10 (24 bytes)
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
frame= 300 fps=179 q=-1.0 Lsize= 1059kB time=00:00:10.03 bitrate= 864.9kbits/s speed= 6x
video:949kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 11.660099%
[libx264 @ 0x138a99e0] frame I:2 Avg QP:23.16 size: 26978
[libx264 @ 0x138a99e0] frame P:298 Avg QP:23.09 size: 3079
[libx264 @ 0x138a99e0] mb I I16..4: 44.3% 0.0% 55.7%
[libx264 @ 0x138a99e0] mb P I16..4: 1.1% 0.0% 0.5% P16..4: 28.7% 8.4% 2.1% 0.0% 0.0% skip:59.3%
[libx264 @ 0x138a99e0] final ratefactor: 22.63
[libx264 @ 0x138a99e0] coded y,uvDC,uvAC intra: 30.8% 62.8% 23.0% inter: 7.2% 14.8% 0.3%
[libx264 @ 0x138a99e0] i16 v,h,dc,p: 25% 39% 11% 24%
[libx264 @ 0x138a99e0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 29% 16% 5% 6% 7% 6% 4% 5%
[libx264 @ 0x138a99e0] i8c dc,h,v,p: 45% 29% 18% 8%
[libx264 @ 0x138a99e0] kb/s:777.19
Command using the complex filter:
ffmpeg -i input.ts -map_chapters -1 -f mpegts -filter_complex "[0:v:0]yadif=deint=interlaced[v0];[v0]concat=n=1:v=1:a=0[cat_v]" -an -sn -map "[cat_v]" -vcodec libx264 -profile:v baseline -level 2 -b:v 800k output.ts
FFmpeg output:
ffmpeg version 3.0 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.1.2 (GCC) 20070626 (Red Hat 4.1.2-14)
configuration: --enable-gpl --enable-nonfree --enable-libx264 --enable-libfdk-aac --enable-libfaac --enable-libvpx --enable-encoder=vorbis --enable-libvorbis --enable-libmp3lame --enable-libspeex --disable-decoder=prores --disable-decoder=prores_lgpl --disable-ffplay --disable-ffserver --disable-shared --enable-static --extra-cflags=-I/local/build/include --extra-libs=-lfdk-aac --extra-ldflags=-L/local/build/lib --prefix=/local/build/install
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'clip01.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.41.100
Duration: 00:00:10.02, start: 0.023220, bitrate: 741 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 854x480 [SAR 1:1 DAR 427:240], 604 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : SoundHandler
[libx264 @ 0x81f5200] No 608/708 caption insertion into sei user data.
[libx264 @ 0x81f5200] using SAR=1/1
[libx264 @ 0x81f5200] frame MB size (54x30) > level limit (396)
[libx264 @ 0x81f5200] MB rate (48600) > level limit (11880)
[libx264 @ 0x81f5200] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64
[libx264 @ 0x81f5200] profile Constrained Baseline, level 2.0
Output #0, mpegts, to 'output.ts':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.25.100
Stream #0:0: Video: h264 (libx264), yuv420p, 854x480 [SAR 1:1 DAR 427:240], q=-1--1, 800 kb/s, 30 fps, 90k tbn, 30 tbc (default)
Metadata:
encoder : Lavc57.24.102 libx264
Side data:
unknown side data type 10 (24 bytes)
Stream mapping:
Stream #0:0 (h264) -> yadif
concat -> Stream #0:0 (libx264)
Press [q] to stop, [?] for help
frame= 300 fps=179 q=-1.0 Lsize= 1059kB time=00:00:10.03 bitrate= 864.9kbits/s speed= 6x
video:949kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 11.660099%
[libx264 @ 0x81f5200] frame I:2 Avg QP:23.16 size: 26978
[libx264 @ 0x81f5200] frame P:298 Avg QP:23.09 size: 3079
[libx264 @ 0x81f5200] mb I I16..4: 44.3% 0.0% 55.7%
[libx264 @ 0x81f5200] mb P I16..4: 1.1% 0.0% 0.5% P16..4: 28.7% 8.4% 2.1% 0.0% 0.0% skip:59.3%
[libx264 @ 0x81f5200] final ratefactor: 22.63
[libx264 @ 0x81f5200] coded y,uvDC,uvAC intra: 30.8% 62.8% 23.0% inter: 7.2% 14.8% 0.3%
[libx264 @ 0x81f5200] i16 v,h,dc,p: 25% 39% 11% 24%
[libx264 @ 0x81f5200] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 29% 16% 5% 6% 7% 6% 4% 5%
[libx264 @ 0x81f5200] i8c dc,h,v,p: 45% 29% 18% 8%
[libx264 @ 0x81f5200] kb/s:777.19
-
ffmpeg conversion to webm fails with "too many invisible frames"
24 January 2019, by Вадим Коломиец
I need to convert any format (for example mp4, avi, etc.) to .webm using my own I/O context (AVIOContext). I built FFmpeg with vpx, ogg, vorbis and opus and created a simple project. But when I write any frame I get the error "Too many invisible frames. Failed to send packet to filter vp9_superframe for stream 0".
I've already tried converting from webm to webm, copying the codec parameters with avcodec_parameters_copy, and that works.
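For comparison (my addition, not from the original post), the command-line equivalent of this conversion re-encodes both streams to WebM codecs such as VP9 and Opus; the paths are the ones used in the code below:

ffmpeg -i ../assets/sample.mp4 -c:v libvpx-vp9 -c:a libopus ../assets/sample_new.webm

The API-based version needs the same re-encoding step; changing codec_id on the output streams does not by itself transcode the packets.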
#include <QCoreApplication>
#include <QFileInfo>
#include <QFile>
#include <QDebug>
#include <iostream>
#include <fstream>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/timestamp.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
#include <libavutil/pixdesc.h>
}
using namespace std;
struct BufferData {
QByteArray data;
uint fullsize;
BufferData() {
fullsize =0;
}
};
// AVIOContext write callback: appends the muxed bytes to the in-memory buffer
static int write_packet_to_buffer(void *opaque, uint8_t *buf, int buf_size) {
BufferData *bufferData = static_cast<BufferData*>(opaque);
bufferData->fullsize += buf_size;
bufferData->data.append((const char*)buf, buf_size);
return buf_size;
}
static bool writeBuffer(const QString &filename, BufferData *bufferData) {
QFile file(filename);
if( !file.open(QIODevice::WriteOnly) ) return false;
file.write(bufferData->data);
qDebug()<<"FILE SIZE = " << file.size();
file.close();
return true;
}
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
AVOutputFormat *ofmt = NULL;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVPacket pkt;
int ret;
int stream_index = 0;
int *stream_mapping = NULL;
int stream_mapping_size = 0;
const char *in_filename = "../assets/sample.mp4";
const char *out_filename = "../assets/sample_new.webm";
//------------------------ Input file ----------------------------
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
fprintf(stderr, "Could not open input file '%s'", in_filename);
return 1;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
fprintf(stderr, "Failed to retrieve input stream information");
return 1;
}
av_dump_format(ifmt_ctx, 0, in_filename, 0);
//-----------------------------------------------------------------
//---------------------- BUFFER -------------------------
AVIOContext *avio_ctx = NULL;
uint8_t *avio_ctx_buffer = NULL;
size_t avio_ctx_buffer_size = 4096*1024;
const size_t bd_buf_size = 1024*1024;
/* fill opaque structure used by the AVIOContext write callback */
avio_ctx_buffer = (uint8_t*)av_malloc(avio_ctx_buffer_size);
if (!avio_ctx_buffer) return AVERROR(ENOMEM);
BufferData bufferData;
avio_ctx = avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,
1, &bufferData, NULL,
&write_packet_to_buffer, NULL);
if (!avio_ctx) return AVERROR(ENOMEM);
//------------------------------------------------------
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx) {
fprintf(stderr, "Could not create output context\n");
ret = AVERROR_UNKNOWN;
return 1;
}
//------------------------ Stream list ----------------------------
stream_mapping_size = ifmt_ctx->nb_streams;
stream_mapping = (int*)av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));
if (!stream_mapping) {
ret = AVERROR(ENOMEM);
return 1;
}
//-------------------------------------------------------------------
//------------------------ Output file ----------------------------
AVCodec *encoder = NULL;
AVCodecContext *input_ctx = NULL;
AVCodecContext *enc_ctx = NULL;
for (int i=0; i < ifmt_ctx->nb_streams; i++) {
AVStream *out_stream;
AVStream *in_stream = ifmt_ctx->streams[i];
AVCodecParameters *in_codecpar = in_stream->codecpar;
if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
stream_mapping[i] = -1;
continue;
}
enc_ctx = avcodec_alloc_context3(encoder);
if (!enc_ctx) {
av_log(NULL, AV_LOG_FATAL, "Failed to allocate the encoder context\n");
return AVERROR(ENOMEM);
}
stream_mapping[i] = stream_index++;
out_stream = avformat_new_stream(ofmt_ctx, NULL);
if (!out_stream) {
fprintf(stderr, "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
return 1;
}
out_stream->codecpar->width = in_codecpar->width;
out_stream->codecpar->height = in_codecpar->height;
out_stream->codecpar->level = in_codecpar->level;
out_stream->codecpar->format =in_codecpar->format;
out_stream->codecpar->profile =in_codecpar->profile;
out_stream->codecpar->bit_rate =in_codecpar->bit_rate;
out_stream->codecpar->channels =in_codecpar->channels;
out_stream->codecpar->codec_tag = 0;
out_stream->codecpar->color_trc =in_codecpar->color_trc;
out_stream->codecpar->codec_type =in_codecpar->codec_type;
out_stream->codecpar->frame_size =in_codecpar->frame_size;
out_stream->codecpar->block_align =in_codecpar->block_align;
out_stream->codecpar->color_range =in_codecpar->color_range;
out_stream->codecpar->color_space =in_codecpar->color_space;
out_stream->codecpar->field_order =in_codecpar->field_order;
out_stream->codecpar->sample_rate =in_codecpar->sample_rate;
out_stream->codecpar->video_delay =in_codecpar->video_delay;
out_stream->codecpar->seek_preroll =in_codecpar->seek_preroll;
out_stream->codecpar->channel_layout =in_codecpar->channel_layout;
out_stream->codecpar->chroma_location =in_codecpar->chroma_location;
out_stream->codecpar->color_primaries =in_codecpar->color_primaries;
out_stream->codecpar->initial_padding =in_codecpar->initial_padding;
out_stream->codecpar->trailing_padding =in_codecpar->trailing_padding;
out_stream->codecpar->bits_per_raw_sample = in_codecpar->bits_per_raw_sample;
out_stream->codecpar->sample_aspect_ratio.num = in_codecpar->sample_aspect_ratio.num;
out_stream->codecpar->sample_aspect_ratio.den = in_codecpar->sample_aspect_ratio.den;
out_stream->codecpar->bits_per_coded_sample = in_codecpar->bits_per_coded_sample;
if (in_codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
out_stream->codecpar->codec_id =ofmt_ctx->oformat->video_codec;
}
else if(in_codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
out_stream->codecpar->codec_id = ofmt_ctx->oformat->audio_codec;
}
}
av_dump_format(ofmt_ctx, 0, out_filename, 1);
ofmt_ctx->pb = avio_ctx;
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file\n");
return 1;
}
//------------------------------------------------------------------------------
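// Note (added for clarity, not part of the original post): this loop only remuxes the
// demuxed packets; nothing re-encodes them. The output streams were given the WebM
// muxer's default codec IDs (VP9 here, per the error message) while the packets are
// still H.264/AAC, so the muxer's automatically inserted vp9_superframe bitstream
// filter receives non-VP9 data, which is the likely source of the
// "Too many invisible frames" error.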
while (1) {
AVStream *in_stream, *out_stream;
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];
if (pkt.stream_index >= stream_mapping_size ||
stream_mapping[pkt.stream_index] < 0) {
av_packet_unref(&pkt);
continue;
}
pkt.stream_index = stream_mapping[pkt.stream_index];
out_stream = ofmt_ctx->streams[pkt.stream_index];
/* copy packet */
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AVRounding(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AVRounding(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
if (ret < 0) {
fprintf(stderr, "Error muxing packet\n");
break;
}
av_packet_unref(&pkt);
}
av_write_trailer(ofmt_ctx);
avformat_close_input(&ifmt_ctx);
/* close output */
writeBuffer(out_filename, &bufferData);
avformat_free_context(ofmt_ctx);
av_freep(&stream_mapping);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %d\n",ret);
return 1;
}
return a.exec();
}