
Media (1)
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
Other articles (21)
-
Adding notes and captions to images
7 February 2011 — To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area in order to change the rights to create, edit and delete notes. By default, only the site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
-
From upload to the final video [standalone version]
31 January 2010 — The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First of all, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
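As an aside, a minimal sketch (not SPIPMotion code, which drives external tools from PHP) of what "retrieval of the technical information of the audio and video streams" amounts to with libavformat; the file name is a hypothetical example:

/* Illustration only: probe an input file and print its stream information. */
#include <libavformat/avformat.h>

int main(void)
{
    const char *path = "source.mp4";   /* hypothetical input file */
    AVFormatContext *fmt_ctx = NULL;

    av_register_all();                  /* needed on older FFmpeg releases */
    if (avformat_open_input(&fmt_ctx, path, NULL, NULL) < 0)
        return 1;
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        avformat_close_input(&fmt_ctx);
        return 1;
    }
    /* Prints duration, bitrate, and one line per audio/video stream. */
    av_dump_format(fmt_ctx, 0, path, 0);
    avformat_close_input(&fmt_ctx);
    return 0;
}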
-
Prerequisites for installation
31 January 2010 — Preamble
The purpose of this article is not to detail the installation of these pieces of software, but rather to provide information on their specific configuration.
First of all, SPIPMotion, like MediaSPIP, is designed to run on Debian-based Linux distributions (Ubuntu and similar). The documentation on this site therefore refers to those distributions. It can also be used on other Linux distributions, but correct operation cannot be guaranteed.
It (...)
On other sites (3503)
-
ClassX installation "codec not found" due to dependencies [migrated]
25 March 2014, by khateeb — ClassX is an interactive lecture streaming system developed in the Electrical Engineering Department at Stanford University.
Unlike conventional lecture capturing systems, ClassX requires very simple consumer-grade equipment and minimal human operation. I ran into problems while installing it, and I hope you have a solution.
BTW: I successfully installed it two years ago, but now I think the problem is that the dependencies and Ubuntu versions are different from the versions we used back then.
Detailed description of the problem:
- I’m using Ubuntu 12.04
- I followed the instructions in the ClassX installation guide, and all steps up to step 4 completed successfully (the encoder bin file was generated).
- When trying to encode the video using the ClassX web system, it reports the encoding as completed after a few seconds. However, no tiles are generated.
- I tried to execute the command from CX_log.txt, and the following error appears:
mahmoud@Mahmoud-HP-Pavilion-dv5-Notebook-PC : $ sudo perl /var/www/ClassXWebSystem/system/publishers/web/actions/encode.pl "/var/www/ClassXWebSystem/content/encoding/FALL_2013_2014/CS106A_FALL_2013_2014/lecSEven" "/var/www/ClassXWebSystem/content/encoding/FALL_2013_2014/CS106A_FALL_2013_2014/.encoding_1372706251" "/var/www/ClassXWebSystem/content/streaming/FALL_2013_2014/CS106A_FALL_2013_2014/lecSEven" "/var/www/ClassXWebSystem/system/publishers/bin" classx y n n
[sudo] password for mahmoud:
00068.jpg
..
.
00068.mp4
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/var/www/ClassXWebSystem/content/encoding/FALL_2013_2014/CS106A_FALL_2013_2014/.encoding_1372706251/00068.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands : isomiso2avc1mp41
encoder : Lavf52.39.0
Duration: 00:02:30.05, start: 0.000000, bitrate: 8141 kb/s
Stream #0.0(und): Video: h264, yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 8007 kb/s, 29.95 fps, 29.97 tbr, 2997 tbn, 59.94 tbc
Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 128 kb/s
Output #0, mp4, to '/var/www/ClassXWebSystem/content/encoding/FALL_2013_2014/CS106A_FALL_2013_2014/.encoding_1372706251/stream0.mp4':
Stream #0.0: Video: [0][0][0][0] / 0x0000, yuv420p, 640x360, q=32-36, 64 kb/s, 90k tbn, 14.99 tbc
codec not found
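One quick check worth doing here (not part of the original question): "codec not found" at the encoding stage usually means the ffmpeg build being invoked was compiled without the requested encoder. Below is a minimal sketch of such a check with libavcodec; the encoder names are assumptions about what the ClassX encode script might request, not taken from its sources.

/* Sketch (assumption): report whether a few encoders exist in the local
 * libavcodec build. */
#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    const char *names[] = { "libx264", "mpeg4", "libfaac", "aac" };
    avcodec_register_all();   /* required on FFmpeg versions of that era */
    for (int i = 0; i < (int)(sizeof(names) / sizeof(names[0])); i++) {
        AVCodec *enc = avcodec_find_encoder_by_name(names[i]);
        printf("%-8s encoder: %s\n", names[i], enc ? "available" : "NOT FOUND");
    }
    return 0;
}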
-
FFmpeg how to apply "aac_adtstoasc" and "h264_mp4toannexb" bitstream filters to transcode to h.264 with AAC
9 July 2015, by larod — I've been struggling with this issue for about a month. I have studied the FFmpeg documentation, more specifically transcode_aac.c, transcoding.c, decoding_encoding.c, and HandBrake's implementation, which is really dense.
The error that I'm getting is the following:
[mp4 @ 0x11102f800] Malformed AAC bitstream detected: use the audio bitstream filter 'aac_adtstoasc' to fix it ('-bsf:a aac_adtstoasc' option with ffmpeg)
The research I've done points to a bitstream filter that needs to be applied:
FIX: AAC in some container formats (FLV, MP4, MKV, etc.) needs the "aac_adtstoasc" bitstream filter (BSF).
I know I can do the following:
AVBitStreamFilterContext* aacbsfc = av_bitstream_filter_init("aac_adtstoasc");
And then do something like this:
av_bitstream_filter_filter(aacbsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, pkt.data, pkt.size, 0);
What eludes me is when to filter the AVPacket: is it before calling av_packet_rescale_ts, or inside init_filter? I would greatly appreciate it if someone could point me in the right direction. Thanks in advance.

// Variables
AVFormatContext *_ifmt_ctx, *_ofmt_ctx;
FilteringContext *_filter_ctx;
AVBitStreamFilterContext *_h264bsfc;
AVBitStreamFilterContext *_aacbsfc;
NSURL *_srcURL, *_dstURL;
- (IBAction)trancode:(id)sender {
NSLog(@"%s %@",__func__, _mediaFile.fsName);
int ret, got_frame;
int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);
unsigned int stream_index, i;
enum AVMediaType type;
AVPacket packet = {.data = NULL, .size = 0};
AVFrame *frame = NULL;
_h264bsfc = av_bitstream_filter_init("h264_mp4toannexb");
_aacbsfc = av_bitstream_filter_init("aac_adtstoasc");
_srcURL = [Utility urlFromBookmark:_mediaFile.bookmark];
if ([_srcURL startAccessingSecurityScopedResource]) {
NSString *newFileName = [[_srcURL.lastPathComponent stringByDeletingPathExtension]stringByAppendingPathExtension:@"mp4"];
_dstURL = [NSURL fileURLWithPath:[[_srcURL URLByDeletingLastPathComponent]URLByAppendingPathComponent:newFileName].path isDirectory:NO];
[AppDelegate ffmpegRegisterAll];
ret = open_input_file(_srcURL.path.fileSystemRepresentation);
if (ret < 0) {
NSLog(@"Error openning input file.");
}
ret = open_output_file(_dstURL.path.fileSystemRepresentation);
if (ret < 0) {
NSLog(@"Error openning output file.");
}
ret = init_filters();
if (ret < 0) {
NSLog(@"Error initializing filters.");
}
AVBitStreamFilterContext *h264bsfc = av_bitstream_filter_init("h264_mp4toannexb");
AVBitStreamFilterContext* aacbsfc = av_bitstream_filter_init("aac_adtstoasc");
// Transcode *******************************************************************************
while (1) {
if ((ret = av_read_frame(_ifmt_ctx, &packet)) < 0) {
break;
}
stream_index = packet.stream_index;
type = _ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n", stream_index);
if (_filter_ctx[stream_index].filter_graph) {
av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
break;
}
av_packet_rescale_ts(&packet, _ifmt_ctx->streams[stream_index]->time_base, _ifmt_ctx->streams[stream_index]->codec->time_base);
dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 : avcodec_decode_audio4;
ret = dec_func(_ifmt_ctx->streams[stream_index]->codec, frame, &got_frame, &packet);
if (ret < 0) {
av_frame_free(&frame);
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
break;
}
if (got_frame) {
frame->pts = av_frame_get_best_effort_timestamp(frame);
ret = filter_encode_write_frame(frame, stream_index);
av_frame_free(&frame);
if (ret < 0)
goto end;
} else {
av_frame_free(&frame);
}
} else {
/* remux this frame without reencoding */
av_packet_rescale_ts(&packet,
_ifmt_ctx->streams[stream_index]->time_base,
_ofmt_ctx->streams[stream_index]->time_base);
ret = av_interleaved_write_frame(_ofmt_ctx, &packet);
if (ret < 0)
goto end;
}
av_free_packet(&packet);
}
// *****************************************************************************************
/* flush filters and encoders */
for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
/* flush filter */
if (!_filter_ctx[i].filter_graph)
continue;
ret = filter_encode_write_frame(NULL, i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
goto end;
}
/* flush encoder */
ret = flush_encoder(i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
goto end;
}
}
av_write_trailer(_ofmt_ctx);
av_bitstream_filter_close(h264bsfc);
av_bitstream_filter_close(aacbsfc);
} else {
NSLog(@"Unable to resolve url for %@",_mediaFile.url.lastPathComponent);
}
[_srcURL stopAccessingSecurityScopedResource];
end:
av_free_packet(&packet);
av_frame_free(&frame);
for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
avcodec_close(_ifmt_ctx->streams[i]->codec);
if (_ofmt_ctx && _ofmt_ctx->nb_streams > i && _ofmt_ctx->streams[i] && _ofmt_ctx->streams[i]->codec)
avcodec_close(_ofmt_ctx->streams[i]->codec);
if (_filter_ctx && _filter_ctx[i].filter_graph)
avfilter_graph_free(&_filter_ctx[i].filter_graph);
}
av_free(_filter_ctx);
avformat_close_input(&_ifmt_ctx);
if (_ofmt_ctx && !(_ofmt_ctx->oformat->flags & AVFMT_NOFILE))
avio_closep(&_ofmt_ctx->pb);
avformat_free_context(_ofmt_ctx);
}

The following method is used to open the input file and create the input format context (_ifmt_ctx).
int open_input_file(const char *filename) {
int ret;
unsigned int i;
_ifmt_ctx = NULL;
if ((ret = avformat_open_input(&_ifmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
return ret;
}
if ((ret = avformat_find_stream_info(_ifmt_ctx, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
return ret;
}
for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
AVStream *stream;
AVCodecContext *codec_ctx;
stream = _ifmt_ctx->streams[i];
codec_ctx = stream->codec;
/* Reencode video & audio and remux subtitles etc. */
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
/* Open decoder */
ret = avcodec_open2(codec_ctx,
avcodec_find_decoder(codec_ctx->codec_id), NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
return ret;
}
}
}
// Remove later
av_dump_format(_ifmt_ctx, 0, filename, 0);
return 0;
}

This method is used to open the output file and create the output format context.
int open_output_file(const char *filename) {
AVStream *out_stream;
AVStream *in_stream;
AVCodecContext *dec_ctx, *enc_ctx;
AVCodec *encoder;
int ret;
unsigned int i;
_ofmt_ctx = NULL;
avformat_alloc_output_context2(&_ofmt_ctx, NULL, NULL, filename);
if (!_ofmt_ctx) {
av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
return AVERROR_UNKNOWN;
}
for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
out_stream = avformat_new_stream(_ofmt_ctx, NULL);
if (!out_stream) {
av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
return AVERROR_UNKNOWN;
}
in_stream = _ifmt_ctx->streams[i];
dec_ctx = in_stream->codec;
enc_ctx = out_stream->codec;
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
// set video stream
encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
avcodec_get_context_defaults3(enc_ctx, encoder);
av_opt_set(enc_ctx->priv_data, "preset", "slow", 0);
enc_ctx->height = dec_ctx->height;
enc_ctx->width = dec_ctx->width;
enc_ctx->bit_rate = dec_ctx->bit_rate;
enc_ctx->time_base = out_stream->time_base = dec_ctx->time_base;
enc_ctx->pix_fmt = encoder->pix_fmts[0];
ret = avcodec_open2(enc_ctx, encoder, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
return ret;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
// set audio stream
//encoder = avcodec_find_encoder(AV_CODEC_ID_AAC);
encoder = avcodec_find_encoder_by_name("libfdk_aac");
avcodec_get_context_defaults3(enc_ctx, encoder);
enc_ctx->profile = FF_PROFILE_AAC_HE_V2;
enc_ctx->sample_rate = dec_ctx->sample_rate;
enc_ctx->channel_layout = dec_ctx->channel_layout;
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
enc_ctx->sample_fmt = encoder->sample_fmts[0];
enc_ctx->time_base = out_stream->time_base = (AVRational){1, enc_ctx->sample_rate};
enc_ctx->bit_rate = dec_ctx->bit_rate;
ret = avcodec_open2(enc_ctx, encoder, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
return ret;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
// deal with error
av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
return AVERROR_INVALIDDATA;
} else {
// remux stream
ret = avcodec_copy_context(_ofmt_ctx->streams[i]->codec,
_ifmt_ctx->streams[i]->codec);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n");
return ret;
}
}
if (_ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER) {
enc_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
}
av_dump_format(_ofmt_ctx, 0, filename, 1);
NSURL *openFileURL = [Utility openPanelAt:[NSURL URLWithString:_dstURL.URLByDeletingLastPathComponent.path]
withTitle:@"Transcode File"
message:@"Please allow Maví to create the new file."
andPrompt:@"Grant Access"];
openFileURL = [openFileURL URLByAppendingPathComponent:_dstURL.lastPathComponent isDirectory:NO];
if (!(_ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
ret = avio_open(&_ofmt_ctx->pb, openFileURL.fileSystemRepresentation, AVIO_FLAG_WRITE);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
return ret;
}
}
/* init muxer, write output file header */
ret = avformat_write_header(_ofmt_ctx, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
return ret;
}
return 0;
}

These two methods deal with initialising the filters and doing the filtering.
int init_filters(void) {
const char *filter_spec;
unsigned int i;
int ret;
_filter_ctx = av_malloc_array(_ifmt_ctx->nb_streams, sizeof(*_filter_ctx));
if (!_filter_ctx)
return AVERROR(ENOMEM);
for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
_filter_ctx[i].buffersrc_ctx = NULL;
_filter_ctx[i].buffersink_ctx = NULL;
_filter_ctx[i].filter_graph = NULL;
if (!(_ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
|| _ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
continue;
if (_ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
filter_spec = "null"; /* passthrough (dummy) filter for video */
else
filter_spec = "anull"; /* passthrough (dummy) filter for audio */
ret = init_filter(&_filter_ctx[i], _ifmt_ctx->streams[i]->codec,
_ofmt_ctx->streams[i]->codec, filter_spec);
if (ret)
return ret;
}
return 0;
}
int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx, AVCodecContext *enc_ctx, const char *filter_spec) {
char args[512];
int ret = 0;
AVFilter *buffersrc = NULL;
AVFilter *buffersink = NULL;
AVFilterContext *buffersrc_ctx = NULL;
AVFilterContext *buffersink_ctx = NULL;
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
AVFilterGraph *filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
buffersrc = avfilter_get_by_name("buffer");
buffersink = avfilter_get_by_name("buffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
dec_ctx->time_base.num, dec_ctx->time_base.den,
dec_ctx->sample_aspect_ratio.num,
dec_ctx->sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
(uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
goto end;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
buffersrc = avfilter_get_by_name("abuffer");
buffersink = avfilter_get_by_name("abuffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
if (!dec_ctx->channel_layout)
dec_ctx->channel_layout =
av_get_default_channel_layout(dec_ctx->channels);
snprintf(args, sizeof(args),
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
av_get_sample_fmt_name(dec_ctx->sample_fmt),
dec_ctx->channel_layout);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
(uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
(uint8_t*)&enc_ctx->channel_layout,
sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
(uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
goto end;
}
} else {
ret = AVERROR_UNKNOWN;
goto end;
}
/* Endpoints for the filter graph. */
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;
if (!outputs->name || !inputs->name) {
ret = AVERROR(ENOMEM);
goto end;
}
if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
&inputs, &outputs, NULL)) < 0)
goto end;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
/* Fill FilteringContext */
fctx->buffersrc_ctx = buffersrc_ctx;
fctx->buffersink_ctx = buffersink_ctx;
fctx->filter_graph = filter_graph;
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
}

Finally, these two methods take care of encoding and writing the frames.
int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
int ret;
int got_frame_local;
AVPacket enc_pkt;
int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
(_ifmt_ctx->streams[stream_index]->codec->codec_type ==
AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;
if (!got_frame)
got_frame = &got_frame_local;
av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
/* encode filtered frame */
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
ret = enc_func(_ofmt_ctx->streams[stream_index]->codec, &enc_pkt,
filt_frame, got_frame);
av_frame_free(&filt_frame);
if (ret < 0)
return ret;
if (!(*got_frame))
return 0;
/* prepare packet for muxing */
enc_pkt.stream_index = stream_index;
av_packet_rescale_ts(&enc_pkt,
_ofmt_ctx->streams[stream_index]->codec->time_base,
_ofmt_ctx->streams[stream_index]->time_base);
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
/* mux encoded frame */
ret = av_interleaved_write_frame(_ofmt_ctx, &enc_pkt);
return ret;
}
int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)
{
int ret;
AVFrame *filt_frame;
av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
/* push the decoded frame into the filtergraph */
ret = av_buffersrc_add_frame_flags(_filter_ctx[stream_index].buffersrc_ctx,
frame, 0);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
return ret;
}
/* pull filtered frames from the filtergraph */
while (1) {
filt_frame = av_frame_alloc();
if (!filt_frame) {
ret = AVERROR(ENOMEM);
break;
}
av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
ret = av_buffersink_get_frame(_filter_ctx[stream_index].buffersink_ctx,
filt_frame);
if (ret < 0) {
/* if no more frames for output - returns AVERROR(EAGAIN)
* if flushed and no more frames for output - returns AVERROR_EOF
* rewrite retcode to 0 to show it as normal procedure completion
*/
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
ret = 0;
av_frame_free(&filt_frame);
break;
}
filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
ret = encode_write_frame(filt_frame, stream_index, NULL);
if (ret < 0)
break;
}
return ret;
}
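For reference, a minimal sketch of one common placement, not a verified fix for the code above: aac_adtstoasc rewrites already-encoded AAC packets, so it is usually applied to the finished AVPacket right before av_interleaved_write_frame() (for example at the end of encode_write_frame(), or in the remux branch of the main loop), rather than before av_packet_rescale_ts on the decode side or inside init_filter. The helper below reuses the legacy av_bitstream_filter_filter API from the question; its name and the memory handling are assumptions, not taken from a tested build.

/* Sketch only (untested): run the audio BSF on the encoded packet just before
 * muxing. Assumes aacbsfc was created with
 * av_bitstream_filter_init("aac_adtstoasc"). */
static int mux_audio_packet_with_bsf(AVFormatContext *ofmt_ctx,
                                     AVBitStreamFilterContext *aacbsfc,
                                     AVPacket *enc_pkt, unsigned int stream_index)
{
    /* Shallow copy keeps pts/dts/duration/stream_index of the encoded packet. */
    AVPacket bsf_pkt = *enc_pkt;
    int ret, a;

    a = av_bitstream_filter_filter(aacbsfc,
                                   ofmt_ctx->streams[stream_index]->codec, NULL,
                                   &bsf_pkt.data, &bsf_pkt.size,
                                   enc_pkt->data, enc_pkt->size,
                                   enc_pkt->flags & AV_PKT_FLAG_KEY);
    if (a < 0) {
        av_log(NULL, AV_LOG_ERROR, "aac_adtstoasc filtering failed\n");
        return a;
    }
    if (a > 0) {
        /* The filter allocated a fresh buffer: drop the reference to the old
         * data so that libavformat copies the filtered bytes instead. */
        bsf_pkt.buf = NULL;
    }
    ret = av_interleaved_write_frame(ofmt_ctx, &bsf_pkt);
    if (a > 0) {
        av_freep(&bsf_pkt.data);  /* free the buffer the filter allocated */
        av_free_packet(enc_pkt);  /* and release the original encoded packet */
    }
    return ret;
}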