
Other articles (62)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document is to be attached automatically; objet, the type of object to which (...)
-
General document management
13 May 2011
MédiaSPIP never modifies the original document that is uploaded.
For each uploaded document it performs two successive operations: the creation of an additional version that can easily be viewed online, while leaving the original available for download in case the original document cannot be read in a web browser; and the retrieval of the original document's metadata in order to describe the file textually.
The tables below explain what MédiaSPIP can do (...)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
On other sites (8764)
-
ffmpeg + AWS Lambda issues. Won't compress full video
7 July 2022, by Joesph Stah Lynn
So I followed this tutorial to set everything up and changed the function a bit to compress video, but no matter what I try, on larger videos (basically anything over 50-100 MB) the output file is always cut short, and depending on the encoding settings I'm using it is cut by different amounts. I tried the solution found here, adding a -nostdin flag to my ffmpeg command, but that didn't seem to fix the issue either.

Another odd thing: no matter what I try, if I remove the '-f mpegts' flag, the output video is 0 B.
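
(For illustration only; the following sketch is not part of the original post.) Writing to stdout ('-') requires an explicitly chosen, pipe-friendly container, which is what '-f mpegts' provides; the MP4 muxer normally wants a seekable output. One possible variation, assuming the same /opt/bin/ffmpeg layer path and boto3 setup as in the code below, is to let ffmpeg write a regular .mp4 into the ephemeral /tmp storage and upload that file instead of capturing stdout:

import subprocess
import boto3

# Sketch only: have ffmpeg write to a seekable file under /tmp instead of a
# stdout pipe, then upload the finished file. Paths, bucket and settings are
# assumptions carried over from the question, not a tested implementation.
def compress_to_tmp_and_upload(source_url, dest_bucket, dest_key):
    out_path = "/tmp/output-comp.mp4"
    cmd = [
        "/opt/bin/ffmpeg", "-nostdin", "-y",
        "-i", source_url,
        "-c:v", "libx264", "-preset", "fast", "-crf", "28",
        "-c:a", "copy",
        out_path,
    ]
    # check=True raises CalledProcessError if ffmpeg exits with an error,
    # so a failed or truncated encode is not silently uploaded.
    subprocess.run(cmd, check=True, capture_output=True)
    boto3.client("s3").upload_file(out_path, dest_bucket, dest_key)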

My Lambda function is set up with 3008 MB of memory (I submitted a ticket to get my limit raised so I can use the full 10240 MB available) and 2048 MB of ephemeral storage (I'm honestly not sure I need anything more than the minimum 512 MB, but I upped it to try to fix the issue). When I check my CloudWatch logs, on really large files it will occasionally time out, but other than that I get no error messages, just the standard start, end, and billable-time messages.
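
(Also for illustration only, not from the original post.) Because the code below pipes ffmpeg's stderr into p1.stderr without ever printing it, CloudWatch only shows the standard start, end, and billable-time lines. One small addition that could make a failed or killed encode visible is to log ffmpeg's exit status and the tail of its stderr:

import subprocess

# Sketch only: run ffmpeg and surface its exit code and the last part of its
# stderr in the Lambda logs, so an encode that failed or was killed part-way
# is visible instead of silently producing a short file.
def run_ffmpeg_logged(cmd_args):
    p = subprocess.run(cmd_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print("ffmpeg exit code:", p.returncode)
    print(p.stderr.decode(errors="replace")[-2000:])  # last ~2000 characters
    return p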

This is the code for my lambda function.


import json
import os
import subprocess
import shlex
import boto3

S3_DESTINATION_BUCKET = "rw-video-out"
SIGNED_URL_TIMEOUT = 600

def lambda_handler(event, context):

    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + "-comp.mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)

    ffmpeg_cmd = f"/opt/bin/ffmpeg -nostdin -i {s3_source_signed_url} -f mpegts libx264 -preset fast -crf 28 -c:a copy - "
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    s3.Object(s3_source_bucket, s3_source_key).delete()

    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }



This is the code from the solution I mentioned, but when I try using it, I get "output.mp4 not found" errors.


def lambda_handler(event, context):
    print(event)
    os.chdir('/tmp')
    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + ".mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)
    print(s3_source_signed_url)
    s3_client.download_file(s3_source_bucket, s3_source_key, s3_source_key)
    # ffmpeg_cmd = "/opt/bin/ffmpeg -framerate 25 -i \"" + s3_source_signed_url + "\" output.mp4 "
    ffmpeg_cmd = f"/opt/bin/ffmpeg -framerate 25 -i {s3_source_key} output.mp4 "
    # command1 = shlex.split(ffmpeg_cmd)
    # print(command1)
    os.system(ffmpeg_cmd)
    # os.system('ls')
    # p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    file = 'output.mp4'
    resp = s3_client.put_object(Body=open(file, "rb"), Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    # resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    s3.Object(s3_source_bucket, s3_source_key).delete()
    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }
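
(Not part of the original post.) os.system() does not raise on failure and its return value is ignored here, so when ffmpeg fails the missing output.mp4 is the first visible symptom. A small diagnostic sketch that checks the exit status, captures stderr explicitly, and verifies that the file was produced (assuming the same /opt/bin/ffmpeg path and /tmp working directory used above):

import os
import subprocess

# Sketch only: run the same command via subprocess, print the exit status and
# stderr, and check whether output.mp4 was actually created under /tmp.
def debug_ffmpeg(input_name):
    cmd = ["/opt/bin/ffmpeg", "-framerate", "25", "-i", input_name, "output.mp4"]
    p = subprocess.run(cmd, cwd="/tmp", stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print("ffmpeg exit code:", p.returncode)
    print(p.stderr.decode(errors="replace")[-2000:])
    print("output.mp4 exists:", os.path.exists("/tmp/output.mp4"))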



Any help would be greatly appreciated.


-
FFmpeg how to apply "aac_adtstoasc" and "h264_mp4toannexb" bitstream filters to transcode to h.264 with AAC
9 July 2015, by larod
I've been struggling with this issue for about a month. I have studied the FFmpeg documentation, more specifically transcode_aac.c, transcoding.c, decoding_encoding.c, and Handbrake's implementation, which is really dense.
The error that I'm getting is the following:
[mp4 @ 0x11102f800] Malformed AAC bitstream detected: use the audio bitstream filter 'aac_adtstoasc' to fix it ('-bsf:a aac_adtstoasc' option with ffmpeg)
The research I've done points to a bitstream filter that needs to be applied:
FIX: AAC in some container formats (FLV, MP4, MKV, etc.) needs the "aac_adtstoasc" bitstream filter (BSF).
I know I can do the following:
AVBitStreamFilterContext* aacbsfc = av_bitstream_filter_init("aac_adtstoasc");
And then do something like this:
av_bitstream_filter_filter(aacbsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, pkt.data, pkt.size, 0);
What eludes me is when to filter the AVPacket: is it before calling av_packet_rescale_ts, or inside init_filter? I would greatly appreciate it if someone could point me in the right direction. Thanks in advance.
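
For illustration only (this sketch is not part of the original post): one common placement is to run each encoded audio packet through the "aac_adtstoasc" BSF immediately before it is handed to the muxer, i.e. right before av_interleaved_write_frame(), rather than around av_packet_rescale_ts() or inside init_filter(). A minimal sketch using the old av_bitstream_filter_filter() API shown above, and the buffer-ownership pattern ffmpeg.c used in that era, might look like this:

/* Sketch only, not taken from the code below. Assumes the FFmpeg 2.x headers
 * already used by this project and an AVBitStreamFilterContext created with
 * av_bitstream_filter_init("aac_adtstoasc"). */
static int filter_and_mux_audio_packet(AVFormatContext *ofmt_ctx,
                                       AVBitStreamFilterContext *aacbsfc,
                                       AVPacket *pkt, unsigned int stream_index)
{
    AVCodecContext *codec = ofmt_ctx->streams[stream_index]->codec;
    AVPacket new_pkt = *pkt;

    int ret = av_bitstream_filter_filter(aacbsfc, codec, NULL,
                                         &new_pkt.data, &new_pkt.size,
                                         pkt->data, pkt->size,
                                         pkt->flags & AV_PKT_FLAG_KEY);
    if (ret > 0) {
        /* The BSF allocated a new buffer: release the old packet and wrap the
         * new data in an AVBufferRef so the muxer can take ownership of it. */
        av_free_packet(pkt);
        new_pkt.buf = av_buffer_create(new_pkt.data, new_pkt.size,
                                       av_buffer_default_free, NULL, 0);
        if (!new_pkt.buf)
            return AVERROR(ENOMEM);
    } else if (ret < 0) {
        return ret;
    }
    *pkt = new_pkt;
    pkt->stream_index = stream_index;
    return av_interleaved_write_frame(ofmt_ctx, pkt);
}

// Variables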
AVFormatContext *_ifmt_ctx, *_ofmt_ctx;
FilteringContext *_filter_ctx;
AVBitStreamFilterContext *_h264bsfc;
AVBitStreamFilterContext *_aacbsfc;
NSURL *_srcURL, *_dstURL;
- (IBAction)trancode:(id)sender {
NSLog(@"%s %@",__func__, _mediaFile.fsName);
int ret, got_frame;
int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);
unsigned int stream_index, i;
enum AVMediaType type;
AVPacket packet = {.data = NULL, .size = 0};
AVFrame *frame = NULL;
_h264bsfc = av_bitstream_filter_init("h264_mp4toannexb");
_aacbsfc = av_bitstream_filter_init("aac_adtstoasc");
_srcURL = [Utility urlFromBookmark:_mediaFile.bookmark];
if ([_srcURL startAccessingSecurityScopedResource]) {
NSString *newFileName = [[_srcURL.lastPathComponent stringByDeletingPathExtension]stringByAppendingPathExtension:@"mp4"];
_dstURL = [NSURL fileURLWithPath:[[_srcURL URLByDeletingLastPathComponent]URLByAppendingPathComponent:newFileName].path isDirectory:NO];
[AppDelegate ffmpegRegisterAll];
ret = open_input_file(_srcURL.path.fileSystemRepresentation);
if (ret < 0) {
NSLog(@"Error openning input file.");
}
ret = open_output_file(_dstURL.path.fileSystemRepresentation);
if (ret < 0) {
NSLog(@"Error openning output file.");
}
ret = init_filters();
if (ret < 0) {
NSLog(@"Error initializing filters.");
}
AVBitStreamFilterContext *h264bsfc = av_bitstream_filter_init("h264_mp4toannexb");
AVBitStreamFilterContext* aacbsfc = av_bitstream_filter_init("aac_adtstoasc");
// Transcode *******************************************************************************
while (1) {
if ((ret = av_read_frame(_ifmt_ctx, &packet)) < 0) {
break;
}
stream_index = packet.stream_index;
type = _ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n", stream_index);
if (_filter_ctx[stream_index].filter_graph) {
av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
break;
}
av_packet_rescale_ts(&packet, _ifmt_ctx->streams[stream_index]->time_base, _ifmt_ctx->streams[stream_index]->codec->time_base);
dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 : avcodec_decode_audio4;
ret = dec_func(_ifmt_ctx->streams[stream_index]->codec, frame, &got_frame, &packet);
if (ret < 0) {
av_frame_free(&frame);
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
break;
}
if (got_frame) {
frame->pts = av_frame_get_best_effort_timestamp(frame);
ret = filter_encode_write_frame(frame, stream_index);
av_frame_free(&frame);
if (ret < 0)
goto end;
} else {
av_frame_free(&frame);
}
} else {
/* remux this frame without reencoding */
av_packet_rescale_ts(&packet,
_ifmt_ctx->streams[stream_index]->time_base,
_ofmt_ctx->streams[stream_index]->time_base);
ret = av_interleaved_write_frame(_ofmt_ctx, &packet);
if (ret < 0)
goto end;
}
av_free_packet(&packet);
}
// *****************************************************************************************
/* flush filters and encoders */
for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
/* flush filter */
if (!_filter_ctx[i].filter_graph)
continue;
ret = filter_encode_write_frame(NULL, i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
goto end;
}
/* flush encoder */
ret = flush_encoder(i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
goto end;
}
}
av_write_trailer(_ofmt_ctx);
av_bitstream_filter_close(h264bsfc);
av_bitstream_filter_close(aacbsfc);
} else {
NSLog(@"Unable to resolve url for %@",_mediaFile.url.lastPathComponent);
}
[_srcURL stopAccessingSecurityScopedResource];
end:
av_free_packet(&packet);
av_frame_free(&frame);
for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
avcodec_close(_ifmt_ctx->streams[i]->codec);
if (_ofmt_ctx && _ofmt_ctx->nb_streams > i && _ofmt_ctx->streams[i] && _ofmt_ctx->streams[i]->codec)
avcodec_close(_ofmt_ctx->streams[i]->codec);
if (_filter_ctx && _filter_ctx[i].filter_graph)
avfilter_graph_free(&_filter_ctx[i].filter_graph);
}
av_free(_filter_ctx);
avformat_close_input(&_ifmt_ctx);
if (_ofmt_ctx && !(_ofmt_ctx->oformat->flags & AVFMT_NOFILE))
avio_closep(&_ofmt_ctx->pb);
avformat_free_context(_ofmt_ctx);
}

The following method is used to open the input file and create the ifmt_ctx.
int open_input_file(const char *filename) {
int ret;
unsigned int i;
_ifmt_ctx = NULL;
if ((ret = avformat_open_input(&_ifmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
return ret;
}
if ((ret = avformat_find_stream_info(_ifmt_ctx, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
return ret;
}
for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
AVStream *stream;
AVCodecContext *codec_ctx;
stream = _ifmt_ctx->streams[i];
codec_ctx = stream->codec;
/* Reencode video & audio and remux subtitles etc. */
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
/* Open decoder */
ret = avcodec_open2(codec_ctx,
avcodec_find_decoder(codec_ctx->codec_id), NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
return ret;
}
}
}
// Remove later
av_dump_format(_ifmt_ctx, 0, filename, 0);
return 0;
}

This method is used to open the output file and create the output format context.
int open_output_file(const char *filename) {
AVStream *out_stream;
AVStream *in_stream;
AVCodecContext *dec_ctx, *enc_ctx;
AVCodec *encoder;
int ret;
unsigned int i;
_ofmt_ctx = NULL;
avformat_alloc_output_context2(&_ofmt_ctx, NULL, NULL, filename);
if (!_ofmt_ctx) {
av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
return AVERROR_UNKNOWN;
}
for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
out_stream = avformat_new_stream(_ofmt_ctx, NULL);
if (!out_stream) {
av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
return AVERROR_UNKNOWN;
}
in_stream = _ifmt_ctx->streams[i];
dec_ctx = in_stream->codec;
enc_ctx = out_stream->codec;
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
// set video stream
encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
avcodec_get_context_defaults3(enc_ctx, encoder);
av_opt_set(enc_ctx->priv_data, "preset", "slow", 0);
enc_ctx->height = dec_ctx->height;
enc_ctx->width = dec_ctx->width;
enc_ctx->bit_rate = dec_ctx->bit_rate;
enc_ctx->time_base = out_stream->time_base = dec_ctx->time_base;
enc_ctx->pix_fmt = encoder->pix_fmts[0];
ret = avcodec_open2(enc_ctx, encoder, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
return ret;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
// set audio stream
//encoder = avcodec_find_encoder(AV_CODEC_ID_AAC);
encoder = avcodec_find_encoder_by_name("libfdk_aac");
avcodec_get_context_defaults3(enc_ctx, encoder);
enc_ctx->profile = FF_PROFILE_AAC_HE_V2;
enc_ctx->sample_rate = dec_ctx->sample_rate;
enc_ctx->channel_layout = dec_ctx->channel_layout;
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
enc_ctx->sample_fmt = encoder->sample_fmts[0];
enc_ctx->time_base = out_stream->time_base = (AVRational){1, enc_ctx->sample_rate};
enc_ctx->bit_rate = dec_ctx->bit_rate;
ret = avcodec_open2(enc_ctx, encoder, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
return ret;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
// deal with error
av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
return AVERROR_INVALIDDATA;
} else {
// remux stream
ret = avcodec_copy_context(_ofmt_ctx->streams[i]->codec,
_ifmt_ctx->streams[i]->codec);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n");
return ret;
}
}
if (_ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER) {
enc_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
}
av_dump_format(_ofmt_ctx, 0, filename, 1);
NSURL *openFileURL = [Utility openPanelAt:[NSURL URLWithString:_dstURL.URLByDeletingLastPathComponent.path]
withTitle:@"Transcode File"
message:@"Please allow Maví to create the new file."
andPrompt:@"Grant Access"];
openFileURL = [openFileURL URLByAppendingPathComponent:_dstURL.lastPathComponent isDirectory:NO];
if (!(_ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
ret = avio_open(&_ofmt_ctx->pb, openFileURL.fileSystemRepresentation, AVIO_FLAG_WRITE);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
return ret;
}
}
/* init muxer, write output file header */
ret = avformat_write_header(_ofmt_ctx, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
return ret;
}
return 0;
}

These two methods deal with initialising the filters and filtering.
int init_filters(void) {
const char *filter_spec;
unsigned int i;
int ret;
_filter_ctx = av_malloc_array(_ifmt_ctx->nb_streams, sizeof(*_filter_ctx));
if (!_filter_ctx)
return AVERROR(ENOMEM);
for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
_filter_ctx[i].buffersrc_ctx = NULL;
_filter_ctx[i].buffersink_ctx = NULL;
_filter_ctx[i].filter_graph = NULL;
if (!(_ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
|| _ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
continue;
if (_ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
filter_spec = "null"; /* passthrough (dummy) filter for video */
else
filter_spec = "anull"; /* passthrough (dummy) filter for audio */
ret = init_filter(&_filter_ctx[i], _ifmt_ctx->streams[i]->codec,
_ofmt_ctx->streams[i]->codec, filter_spec);
if (ret)
return ret;
}
return 0;
}
int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx, AVCodecContext *enc_ctx, const char *filter_spec) {
char args[512];
int ret = 0;
AVFilter *buffersrc = NULL;
AVFilter *buffersink = NULL;
AVFilterContext *buffersrc_ctx = NULL;
AVFilterContext *buffersink_ctx = NULL;
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
AVFilterGraph *filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
buffersrc = avfilter_get_by_name("buffer");
buffersink = avfilter_get_by_name("buffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
dec_ctx->time_base.num, dec_ctx->time_base.den,
dec_ctx->sample_aspect_ratio.num,
dec_ctx->sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
(uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
goto end;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
buffersrc = avfilter_get_by_name("abuffer");
buffersink = avfilter_get_by_name("abuffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
if (!dec_ctx->channel_layout)
dec_ctx->channel_layout =
av_get_default_channel_layout(dec_ctx->channels);
snprintf(args, sizeof(args),
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
av_get_sample_fmt_name(dec_ctx->sample_fmt),
dec_ctx->channel_layout);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
(uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
(uint8_t*)&enc_ctx->channel_layout,
sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
(uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
goto end;
}
} else {
ret = AVERROR_UNKNOWN;
goto end;
}
/* Endpoints for the filter graph. */
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;
if (!outputs->name || !inputs->name) {
ret = AVERROR(ENOMEM);
goto end;
}
if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
&inputs, &outputs, NULL)) < 0)
goto end;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
/* Fill FilteringContext */
fctx->buffersrc_ctx = buffersrc_ctx;
fctx->buffersink_ctx = buffersink_ctx;
fctx->filter_graph = filter_graph;
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
}

Finally, these two methods take care of writing the frames.
int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
int ret;
int got_frame_local;
AVPacket enc_pkt;
int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
(_ifmt_ctx->streams[stream_index]->codec->codec_type ==
AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;
if (!got_frame)
got_frame = &got_frame_local;
av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
/* encode filtered frame */
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
ret = enc_func(_ofmt_ctx->streams[stream_index]->codec, &enc_pkt,
filt_frame, got_frame);
av_frame_free(&filt_frame);
if (ret < 0)
return ret;
if (!(*got_frame))
return 0;
/* prepare packet for muxing */
enc_pkt.stream_index = stream_index;
av_packet_rescale_ts(&enc_pkt,
_ofmt_ctx->streams[stream_index]->codec->time_base,
_ofmt_ctx->streams[stream_index]->time_base);
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
/* mux encoded frame */
ret = av_interleaved_write_frame(_ofmt_ctx, &enc_pkt);
return ret;
}
int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)
{
int ret;
AVFrame *filt_frame;
av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
/* push the decoded frame into the filtergraph */
ret = av_buffersrc_add_frame_flags(_filter_ctx[stream_index].buffersrc_ctx,
frame, 0);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
return ret;
}
/* pull filtered frames from the filtergraph */
while (1) {
filt_frame = av_frame_alloc();
if (!filt_frame) {
ret = AVERROR(ENOMEM);
break;
}
av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
ret = av_buffersink_get_frame(_filter_ctx[stream_index].buffersink_ctx,
filt_frame);
if (ret < 0) {
/* if no more frames for output - returns AVERROR(EAGAIN)
* if flushed and no more frames for output - returns AVERROR_EOF
* rewrite retcode to 0 to show it as normal procedure completion
*/
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
ret = 0;
av_frame_free(&filt_frame);
break;
}
filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
ret = encode_write_frame(filt_frame, stream_index, NULL);
if (ret < 0)
break;
}
return ret;
}

-
avcodec/cabac_functions: Fix "left shift of negative value -31767"
27 November 2015, by Michael Niedermayer
avcodec/cabac_functions: Fix "left shift of negative value -31767"
Fixes: 1430e9c43fae47a24c179c7c54f94918/signal_sigsegv_421427_2340_591e9810c7b09efe501ad84638c9e9f8.264
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
Found-by: xiedingbao (Ticket4727)
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
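
For context (this note is not part of the commit): in C, left-shifting a negative signed value is undefined behaviour, which is what the message in the commit title refers to. A minimal illustration of the problem, and of two common remedies (multiplying instead of shifting, or shifting an unsigned value), is sketched below; it is not the actual FFmpeg patch.

/* Sketch only, not the FFmpeg change itself. */
#include <stdio.h>

int main(void)
{
    int v = -31767;

    /* int bad = v << 1;        undefined behaviour: left shift of a negative value */
    int ok = v * 2;                   /* same arithmetic, fully defined              */
    unsigned u = (unsigned)v << 1;    /* shifting an unsigned value is well-defined  */

    printf("%d 0x%x\n", ok, u);
    return 0;
}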