
Other articles (50)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...)
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects or individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (7012)
-
ffmpeg C api cutting a video when packet dts is greater than pts
10 March 2017, by TastyCatFood
Corrupted videos
While trying to cut a section out of one of my videos with the FFmpeg C API, using the code posted here: How to cut video with FFmpeg C API, ffmpeg spat out the log below:
D/logger: Loop count:9 out: pts:0 pts_time:0 dts:2002 dts_time:0.0333667 duration:2002 duration_time:0.0333667 stream_index:1
D/trim_video: Error muxing packet Invalid argument
ffmpeg considers an instruction to decompress a frame after presenting it to be nonsense, which is, well... reasonable, but stringent.
My VLC player finds the video alright and plays it, of course.
Note:
The code immediately below is C++, written to be compiled with g++, as I am developing for Android. For C code, scroll down further.
My solution (g++):
extern "C" {
#include "libavformat/avformat.h"
#include "libavutil/mathematics.h"
#include "libavutil/timestamp.h"
static void log_packet(
const AVFormatContext *fmt_ctx,
const AVPacket *pkt, const char *tag,
long count=0)
{
printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
count,
static_cast<double>(pkt->pts),
static_cast<double>(pkt->dts),
static_cast<double>(pkt->duration),
pkt->stream_index);
return;
}
int trimVideo(
const char* in_filename,
const char* out_filename,
double cutFrom,
double cutUpTo)
{
AVOutputFormat *ofmt = NULL;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVPacket pkt;
int ret, i;
//jboolean copy = true;
//const char *in_filename = env->GetStringUTFChars(jstring_in_filename,&copy);
//const char *out_filename = env->GetStringUTFChars(jstring_out_filename,&copy);
long loopCount = 0;
av_register_all();
// Cutting may change the pts and dts of the resulting video;
// if frames in head position are removed.
// In the case like that, src stream's copy start pts
// need to be recorded and is used to compute the new pts value.
// e.g.
// new_pts = current_pts - trim_start_position_pts;
// nb-streams is the number of elements in AVFormatContext.streams.
// Initial pts value must be recorded for each stream.
//May be malloc and memset should be replaced with [].
int64_t *dts_start_from = NULL;
int64_t *pts_start_from = NULL;
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
printf( "Could not open input file '%s'", in_filename);
goto end;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
printf("Failed to retrieve input stream information");
goto end;
}
av_dump_format(ifmt_ctx, 0, in_filename, 0);
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx) {
printf( "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ofmt = ofmt_ctx->oformat;
//preparing streams
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *in_stream = ifmt_ctx->streams[i];
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
if (!out_stream) {
printf( "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
if (ret < 0) {
printf( "Failed to copy context from input to output stream codec context\n");
goto end;
}
out_stream->codec->codec_tag = 0;
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
av_dump_format(ofmt_ctx, 0, out_filename, 1);
if (!(ofmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
if (ret < 0) {
printf( "Could not open output file '%s'", out_filename);
goto end;
}
}
//preparing the header
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
printf( "Error occurred when opening output file\n");
goto end;
}
// av_seek_frame translates AV_TIME_BASE into an appropriate time base.
ret = av_seek_frame(ifmt_ctx, -1, cutFrom*AV_TIME_BASE, AVSEEK_FLAG_ANY);
if (ret < 0) {
printf( "Error seek\n");
goto end;
}
dts_start_from = static_cast<int64_t*>(
malloc(sizeof(int64_t) * ifmt_ctx->nb_streams));
memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
pts_start_from = static_cast<int64_t*>(
malloc(sizeof(int64_t) * ifmt_ctx->nb_streams));
memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
//writing
while (1) {
AVStream *in_stream, *out_stream;
//reading frame into pkt
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = ofmt_ctx->streams[pkt.stream_index];
//if end reached
if (av_q2d(in_stream->time_base) * pkt.pts > cutUpTo) {
av_packet_unref(&pkt);
break;
}
// Recording the initial pts value for each stream
// Recording dts does not do the trick because AVPacket.dts values
// in some video files are larger than corresponding pts values
// and ffmpeg does not like it.
if (dts_start_from[pkt.stream_index] == 0) {
dts_start_from[pkt.stream_index] = pkt.pts;
printf("dts_initial_value: %f for stream index: %d \n",
static_cast<double>(dts_start_from[pkt.stream_index]),
pkt.stream_index
);
}
if (pts_start_from[pkt.stream_index] == 0) {
pts_start_from[pkt.stream_index] = pkt.pts;
printf( "pts_initial_value: %f for stream index %d\n",
static_cast<double>(pts_start_from[pkt.stream_index]),
pkt.stream_index);
}
log_packet(ifmt_ctx, &pkt, "in",loopCount);
/* Computes pts etc
* av_rescale_q_rend etc are countering changes in time_base between
* out_stream and in_stream, so regardless of time_base values for
* in and out streams, the rate at which frames are refreshed remains
* the same.
*
pkt.pts = pkt.pts * (in_stream->time_base/ out_stream->time_base)
As `time_base == 1/frame_rate`, the above is an equivalent of
(out_stream_frame_rate/in_stream_frame_rate)*pkt.pts where
frame_rate is the number of frames to be displayed per second.
AV_ROUND_PASS_MINMAX may set pts or dts to AV_NOPTS_VALUE
* */
pkt.pts =
av_rescale_q_rnd(
pkt.pts - pts_start_from[pkt.stream_index],
static_cast<AVRational>(in_stream->time_base),
static_cast<AVRational>(out_stream->time_base),
static_cast<AVRounding>(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
pkt.dts =
av_rescale_q_rnd(
pkt.dts - dts_start_from[pkt.stream_index],
static_cast<AVRational>(in_stream->time_base),
static_cast<AVRational>(out_stream->time_base),
static_cast<AVRounding>(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
if(pkt.dts>pkt.pts) pkt.dts = pkt.pts -1;
if(pkt.dts < 0) pkt.dts = 0;
if(pkt.pts < 0) pkt.pts = 0;
pkt.duration = av_rescale_q(
pkt.duration,
in_stream->time_base,
out_stream->time_base);
pkt.pos = -1;
log_packet(ofmt_ctx, &pkt, "out",loopCount);
// Writes to the file after buffering packets enough to generate a frame
// and probably sorting packets in dts order.
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
// ret = av_write_frame(ofmt_ctx, &pkt);
if (ret < 0) {
printf( "Error muxing packet %d \n", ret);
//continue;
break;
}
av_packet_unref(&pkt);
++loopCount;
}
//Writing end code?
av_write_trailer(ofmt_ctx);
end:
avformat_close_input(&ifmt_ctx);
if(dts_start_from)free(dts_start_from);
if(pts_start_from)free(pts_start_from);
/* close output */
if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
if (ret < 0 && ret != AVERROR_EOF) {
//printf( "Error occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}
}
My solution (C compatible; the console says g++, but I'm sure this is C code):
#include "libavformat/avformat.h"
#include "libavutil/mathematics.h"
#include "libavutil/timestamp.h"
static void log_packet(
const AVFormatContext *fmt_ctx,
const AVPacket *pkt, const char *tag,
long count)
{
printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
count,
(double)pkt->pts,
(double)pkt->dts,
(double)pkt->duration,
pkt->stream_index);
return;
}
int trimVideo(
const char* in_filename,
const char* out_filename,
double cutFrom,
double cutUpTo)
{
AVOutputFormat *ofmt = NULL;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVPacket pkt;
int ret, i;
//jboolean copy = true;
//const char *in_filename = env->GetStringUTFChars(jstring_in_filename,&copy);
//const char *out_filename = env->GetStringUTFChars(jstring_out_filename,&copy);
long loopCount = 0;
av_register_all();
// Cutting may change the pts and dts of the resulting video;
// if frames in head position are removed.
// In the case like that, src stream's copy start pts
// need to be recorded and is used to compute the new pts value.
// e.g.
// new_pts = current_pts - trim_start_position_pts;
// nb-streams is the number of elements in AVFormatContext.streams.
// Initial pts value must be recorded for each stream.
//May be malloc and memset should be replaced with [].
int64_t *dts_start_from = NULL;
int64_t *pts_start_from = NULL;
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
printf( "Could not open input file '%s'", in_filename);
goto end;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
printf("Failed to retrieve input stream information");
goto end;
}
av_dump_format(ifmt_ctx, 0, in_filename, 0);
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx) {
printf( "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ofmt = ofmt_ctx->oformat;
//preparing streams
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *in_stream = ifmt_ctx->streams[i];
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
if (!out_stream) {
printf( "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
if (ret < 0) {
printf( "Failed to copy context from input to output stream codec context\n");
goto end;
}
out_stream->codec->codec_tag = 0;
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
av_dump_format(ofmt_ctx, 0, out_filename, 1);
if (!(ofmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
if (ret < 0) {
printf( "Could not open output file '%s'", out_filename);
goto end;
}
}
//preparing the header
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
printf( "Error occurred when opening output file\n");
goto end;
}
// av_seek_frame translates AV_TIME_BASE into an appropriate time base.
ret = av_seek_frame(ifmt_ctx, -1, cutFrom*AV_TIME_BASE, AVSEEK_FLAG_ANY);
if (ret < 0) {
printf( "Error seek\n");
goto end;
}
dts_start_from = (int64_t*)
malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
pts_start_from = (int64_t*)
malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
//writing
while (1) {
AVStream *in_stream, *out_stream;
//reading frame into pkt
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = ofmt_ctx->streams[pkt.stream_index];
//if end reached
if (av_q2d(in_stream->time_base) * pkt.pts > cutUpTo) {
av_packet_unref(&pkt);
break;
}
// Recording the initial pts value for each stream
// Recording dts does not do the trick because AVPacket.dts values
// in some video files are larger than corresponding pts values
// and ffmpeg does not like it.
if (dts_start_from[pkt.stream_index] == 0) {
dts_start_from[pkt.stream_index] = pkt.pts;
printf("dts_initial_value: %f for stream index: %d \n",
(double)dts_start_from[pkt.stream_index],
pkt.stream_index
);
}
if (pts_start_from[pkt.stream_index] == 0) {
pts_start_from[pkt.stream_index] = pkt.pts;
printf( "pts_initial_value: %f for stream index %d\n",
(double)pts_start_from[pkt.stream_index],
pkt.stream_index);
}
log_packet(ifmt_ctx, &pkt, "in",loopCount);
/* Computes pts etc
* av_rescale_q_rend etc are countering changes in time_base between
* out_stream and in_stream, so regardless of time_base values for
* in and out streams, the rate at which frames are refreshed remains
* the same.
*
pkt.pts = pkt.pts * (in_stream->time_base/ out_stream->time_base)
As `time_base == 1/frame_rate`, the above is an equivalent of
(out_stream_frame_rate/in_stream_frame_rate)*pkt.pts where
frame_rate is the number of frames to be displayed per second.
AV_ROUND_PASS_MINMAX may set pts or dts to AV_NOPTS_VALUE
* */
pkt.pts =
av_rescale_q_rnd(
pkt.pts - pts_start_from[pkt.stream_index],
(AVRational)in_stream->time_base,
(AVRational)out_stream->time_base,
(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
pkt.dts =
av_rescale_q_rnd(
pkt.dts - dts_start_from[pkt.stream_index],
(AVRational)in_stream->time_base,
(AVRational)out_stream->time_base,
AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
if(pkt.dts>pkt.pts) pkt.dts = pkt.pts -1;
if(pkt.dts < 0) pkt.dts = 0;
if(pkt.pts < 0) pkt.pts = 0;
pkt.duration = av_rescale_q(
pkt.duration,
in_stream->time_base,
out_stream->time_base);
pkt.pos = -1;
log_packet(ofmt_ctx, &pkt, "out",loopCount);
// Writes to the file after buffering packets enough to generate a frame
// and probably sorting packets in dts order.
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
// ret = av_write_frame(ofmt_ctx, &pkt);
if (ret < 0) {
printf( "Error muxing packet %d \n", ret);
//continue;
break;
}
av_packet_unref(&pkt);
++loopCount;
}
//Writing end code?
av_write_trailer(ofmt_ctx);
end:
avformat_close_input(&ifmt_ctx);
if(dts_start_from)free(dts_start_from);
if(pts_start_from)free(pts_start_from);
/* close output */
if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
if (ret < 0 && ret != AVERROR_EOF) {
//printf( "Error occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}
What is the problem
My code does not produce the error because I am computing
new_dts = current_dts - initial_pts_for_current_stream.
It works, but now the dts values are not properly computed. How do I recalculate dts properly?
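One common pattern (a minimal sketch, not from the original thread; the helper shift_packet_timestamps and the ts_offset array are names I made up) is to subtract a single per-stream offset, taken from the first packet read after the seek, from both pts and dts before rescaling. Because pts and dts move by the same amount, the original dts-to-pts spacing is preserved, and dts only needs clamping if the source itself is malformed. It assumes ts_offset holds one int64_t per input stream, each initialised to AV_NOPTS_VALUE before the copy loop:
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* Hypothetical helper: rescale a packet for stream copy while keeping the
 * original dts/pts relationship intact. ts_offset must point to one int64_t
 * per input stream, each set to AV_NOPTS_VALUE before the copy loop starts. */
static void shift_packet_timestamps(AVPacket *pkt,
                                    const AVStream *in_stream,
                                    const AVStream *out_stream,
                                    int64_t *ts_offset)
{
    /* Record the offset once per stream, preferring dts because the muxer
     * interleaves and validates packets by dts. */
    if (ts_offset[pkt->stream_index] == AV_NOPTS_VALUE)
        ts_offset[pkt->stream_index] =
            (pkt->dts != AV_NOPTS_VALUE) ? pkt->dts : pkt->pts;

    int64_t off = ts_offset[pkt->stream_index];

    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts = av_rescale_q_rnd(pkt->pts - off,
                                    in_stream->time_base, out_stream->time_base,
                                    AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts = av_rescale_q_rnd(pkt->dts - off,
                                    in_stream->time_base, out_stream->time_base,
                                    AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);

    /* The muxer rejects dts > pts, so only clamp if the source is already odd. */
    if (pkt->pts != AV_NOPTS_VALUE && pkt->dts != AV_NOPTS_VALUE &&
        pkt->dts > pkt->pts)
        pkt->dts = pkt->pts;

    pkt->duration = av_rescale_q(pkt->duration,
                                 in_stream->time_base, out_stream->time_base);
    pkt->pos = -1;
}
Inside the copy loop above, this would replace the two av_rescale_q_rnd calls, the dts/pts clamping and the duration rescale, with dts_start_from and pts_start_from collapsed into the single ts_offset array.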
P.S.
Since Olaf seems to have a very strong opinion, I am posting the build console output for my main.c.
I don't really know C or C++, but GNU GCC seems to be calling gcc for compiling and g++ for linking.
Well, the extension of my main file is now .c and the compiler being called is gcc, so that should at least mean I have code written in the C language...
------------- Build: Debug in videoTrimmer (compiler: GNU GCC Compiler)---------------
gcc -Wall -fexceptions -std=c99 -g -I/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include -I/usr/include -I/usr/local/include -c /home/d/CodeBlockWorkplace/videoTrimmer/main.c -o obj/Debug/main.o
/home/d/CodeBlockWorkplace/videoTrimmer/main.c: In function ‘log_packet’:
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:15:12: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘long int’ [-Wformat=]
printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c: In function ‘trimVideo’:
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:79:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
^
In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘avcodec_copy_context’ is deprecated [-Wdeprecated-declarations]
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
^
In file included from /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:319:0,
from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavcodec/avcodec.h:4286:5: note: declared here
int avcodec_copy_context(AVCodecContext *dest, const AVCodecContext *src);
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
^
In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
^
In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:91:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
out_stream->codec->codec_tag = 0;
^
In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^
/home/d/CodeBlockWorkplace/videoTrimmer/main.c:93:13: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
^
In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^
g++ -L/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib -L/usr/lib -L/usr/local/lib -o bin/Debug/videoTrimmer obj/Debug/main.o ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavformat.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavcodec.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavutil.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libswresample.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libswscale.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavfilter.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libpostproc.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavdevice.a -lX11 -lvdpau -lva -lva-drm -lva-x11 -ldl -lpthread -lz -llzma -lx264
Output file is bin/Debug/videoTrimmer with size 77.24 MB
Process terminated with status 0 (0 minute(s), 16 second(s))
0 error(s), 8 warning(s) (0 minute(s), 16 second(s))
-
Converting mkv to h264 FFmpeg
14 January 2021, by Rikus Honey
EDIT:
This question has become very popular and is one of the top results when searching for "convert mkv to h264 ffmpeg", so I feel it is appropriate to add that anyone stumbling upon this question should instead use


ffmpeg -i input.mkv -c:v libx264 -c:a aac output.mp4

since libvo_aacenc has been removed in recent versions of FFmpeg, which now has a native AAC encoder. For more information, visit the FFmpeg wiki page on encoding AAC.

Here is the original question:


I would like to convert my .mkv files to .mp4 using FFmpeg. I have tried the following command:


ffmpeg -i input.mkv -c:v libx264 -c:a libvo_aacenc output.mp4



But I get the error:

Error while opening encoder for output stream #0:1 - maybe incorrect parameters such as bit_rate, rate, width or height.

Is there any way to get around this? I have tried setting the bitrate of the audio, but the problem seems to persist.


-
doc/general.texi: update AviSynth+ reference page
24 March 2019, by Stephen Hutchinson
doc/general.texi: update AviSynth+ reference page
Directed to the AviSynth+ entry on AviSynth Wiki rather than to
the github repository, since the wiki page is both more informative
and has the relevant Git/download links. The github releases page
is little more than a changelog.