
Other articles (46)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash fallback is used.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (11657)

  • How to use the ffmpeg library to transform mp4 (h264 & aac) into m3u8 (HLS) with C code?

    1 July 2020, by itning

    I used the official example transcoding.c, but the console prints pkt->duration = 0, so the HLS segment duration may not be precise.

    I use this code to set the segment duration, but it has no effect:

    av_opt_set_int(ofmt_ctx->priv_data, "hls_time", 5, AV_OPT_SEARCH_CHILDREN);

    On the command line this works:

    ffmpeg -i a.mp4 -codec copy -vbsf h264_mp4toannexb -map 0 -f segment -segment_list a.m3u8 -segment_time 10 a-%03d.ts

    How can I achieve the same as this command in C code?

    This is my code:

    /**
     * @file
     * API example for demuxing, decoding, filtering, encoding and muxing
     * @example transcoding.c
     */

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>
    #include <libavutil/opt.h>
    #include <libavutil/pixdesc.h>

    static AVFormatContext *ifmt_ctx;
    static AVFormatContext *ofmt_ctx;
    typedef struct FilteringContext {
        AVFilterContext *buffersink_ctx;
        AVFilterContext *buffersrc_ctx;
        AVFilterGraph *filter_graph;
    } FilteringContext;
    static FilteringContext *filter_ctx;

    typedef struct StreamContext {
        AVCodecContext *dec_ctx;
        AVCodecContext *enc_ctx;
    } StreamContext;
    static StreamContext *stream_ctx;

    static int open_input_file(const char *filename) {
        int ret;
        unsigned int i;

        ifmt_ctx = NULL;
        if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
            av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
            return ret;
        }

        if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
            av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
            return ret;
        }

        stream_ctx = av_mallocz_array(ifmt_ctx->nb_streams, sizeof(*stream_ctx));
        if (!stream_ctx)
            return AVERROR(ENOMEM);

        for (i = 0; i < ifmt_ctx->nb_streams; i++) {
            AVStream *stream = ifmt_ctx->streams[i];
            AVCodec *dec = avcodec_find_decoder(stream->codecpar->codec_id);
            AVCodecContext *codec_ctx;
            if (!dec) {
                av_log(NULL, AV_LOG_ERROR, "Failed to find decoder for stream #%u\n", i);
                return AVERROR_DECODER_NOT_FOUND;
            }
            codec_ctx = avcodec_alloc_context3(dec);
            if (!codec_ctx) {
                av_log(NULL, AV_LOG_ERROR, "Failed to allocate the decoder context for stream #%u\n", i);
                return AVERROR(ENOMEM);
            }
            ret = avcodec_parameters_to_context(codec_ctx, stream->codecpar);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Failed to copy decoder parameters to input decoder context "
                                           "for stream #%u\n", i);
                return ret;
            }
            /* Reencode video & audio and remux subtitles etc. */
            if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
                || codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
                if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO)
                    codec_ctx->framerate = av_guess_frame_rate(ifmt_ctx, stream, NULL);
                /* Open decoder */
                ret = avcodec_open2(codec_ctx, dec, NULL);
                if (ret < 0) {
                    av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
                    return ret;
                }
            }
            stream_ctx[i].dec_ctx = codec_ctx;
        }

        av_dump_format(ifmt_ctx, 0, filename, 0);
        return 0;
    }

    static int open_output_file(const char *filename, enum AVCodecID videoCodecId, enum AVCodecID audioCodecId) {
        AVStream *out_stream;
        AVStream *in_stream;
        AVCodecContext *dec_ctx, *enc_ctx;
        AVCodec *encoder;
        int ret;
        unsigned int i;

        ofmt_ctx = NULL;
        avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
        if (!ofmt_ctx) {
            av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
            return AVERROR_UNKNOWN;
        }

        for (i = 0; i < ifmt_ctx->nb_streams; i++) {
            out_stream = avformat_new_stream(ofmt_ctx, NULL);
            if (!out_stream) {
                av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
                return AVERROR_UNKNOWN;
            }

            in_stream = ifmt_ctx->streams[i];
            dec_ctx = stream_ctx[i].dec_ctx;

            if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
                || dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {

                if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
                    encoder = avcodec_find_encoder(videoCodecId);
                } else {
                    encoder = avcodec_find_encoder(audioCodecId);
                }
                //encoder = avcodec_find_encoder(dec_ctx->codec_id);
                if (!encoder) {
                    av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
                    return AVERROR_INVALIDDATA;
                }
                enc_ctx = avcodec_alloc_context3(encoder);
                if (!enc_ctx) {
                    av_log(NULL, AV_LOG_FATAL, "Failed to allocate the encoder context\n");
                    return AVERROR(ENOMEM);
                }

                /* In this example, we transcode to same properties (picture size,
                 * sample rate etc.). These properties can be changed for output
                 * streams easily using filters */
                if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
                    enc_ctx->height = dec_ctx->height;
                    enc_ctx->width = dec_ctx->width;
                    enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
                    /* take first format from list of supported formats */
                    if (encoder->pix_fmts)
                        enc_ctx->pix_fmt = encoder->pix_fmts[0];
                    else
                        enc_ctx->pix_fmt = dec_ctx->pix_fmt;
                    /* video time_base can be set to whatever is handy and supported by encoder */
                    enc_ctx->time_base = av_inv_q(dec_ctx->framerate);
                } else {
                    enc_ctx->sample_rate = dec_ctx->sample_rate;
                    enc_ctx->channel_layout = dec_ctx->channel_layout;
                    enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
                    /* take first format from list of supported formats */
                    enc_ctx->sample_fmt = encoder->sample_fmts[0];
                    enc_ctx->time_base = (AVRational) {1, enc_ctx->sample_rate};
                }

                if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
                    enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

                /* Third parameter can be used to pass settings to encoder */
                ret = avcodec_open2(enc_ctx, encoder, NULL);
                if (ret < 0) {
                    av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
                    return ret;
                }
                ret = avcodec_parameters_from_context(out_stream->codecpar, enc_ctx);
                if (ret < 0) {
                    av_log(NULL, AV_LOG_ERROR, "Failed to copy encoder parameters to output stream #%u\n", i);
                    return ret;
                }

                out_stream->time_base = enc_ctx->time_base;
                stream_ctx[i].enc_ctx = enc_ctx;
            } else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
                av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
                return AVERROR_INVALIDDATA;
            } else {
                /* if this stream must be remuxed */
                ret = avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
                if (ret < 0) {
                    av_log(NULL, AV_LOG_ERROR, "Copying parameters for stream #%u failed\n", i);
                    return ret;
                }
                out_stream->time_base = in_stream->time_base;
            }

        }
        av_dump_format(ofmt_ctx, 0, filename, 1);

        av_opt_set_int(ofmt_ctx->priv_data, "hls_time", 5, AV_OPT_SEARCH_CHILDREN);

        if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
            ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
                return ret;
            }
        }

        /* init muxer, write output file header */
        ret = avformat_write_header(ofmt_ctx, NULL);
        if (ret < 0) {
            av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
            return ret;
        }

        return 0;
    }

    static int init_filter(FilteringContext *fctx, AVCodecContext *dec_ctx,
                           AVCodecContext *enc_ctx, const char *filter_spec) {
        char args[512];
        int ret = 0;
        const AVFilter *buffersrc = NULL;
        const AVFilter *buffersink = NULL;
        AVFilterContext *buffersrc_ctx = NULL;
        AVFilterContext *buffersink_ctx = NULL;
        AVFilterInOut *outputs = avfilter_inout_alloc();
        AVFilterInOut *inputs = avfilter_inout_alloc();
        AVFilterGraph *filter_graph = avfilter_graph_alloc();

        if (!outputs || !inputs || !filter_graph) {
            ret = AVERROR(ENOMEM);
            goto end;
        }

        if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
            buffersrc = avfilter_get_by_name("buffer");
            buffersink = avfilter_get_by_name("buffersink");
            if (!buffersrc || !buffersink) {
                av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
                ret = AVERROR_UNKNOWN;
                goto end;
            }

            snprintf(args, sizeof(args),
                     "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                     dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
                     dec_ctx->time_base.num, dec_ctx->time_base.den,
                     dec_ctx->sample_aspect_ratio.num,
                     dec_ctx->sample_aspect_ratio.den);

            ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                               args, NULL, filter_graph);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
                goto end;
            }

            ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                               NULL, NULL, filter_graph);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
                goto end;
            }

            ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
                                 (uint8_t *) &enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
                                 AV_OPT_SEARCH_CHILDREN);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
                goto end;
            }
        } else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
            buffersrc = avfilter_get_by_name("abuffer");
            buffersink = avfilter_get_by_name("abuffersink");
            if (!buffersrc || !buffersink) {
                av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
                ret = AVERROR_UNKNOWN;
                goto end;
            }

            if (!dec_ctx->channel_layout)
                dec_ctx->channel_layout =
                        av_get_default_channel_layout(dec_ctx->channels);
            snprintf(args, sizeof(args),
                     "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
                     dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
                     av_get_sample_fmt_name(dec_ctx->sample_fmt),
                     dec_ctx->channel_layout);
            ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                               args, NULL, filter_graph);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
                goto end;
            }

            ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                               NULL, NULL, filter_graph);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
                goto end;
            }

            ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
                                 (uint8_t *) &enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
                                 AV_OPT_SEARCH_CHILDREN);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
                goto end;
            }

            ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
                                 (uint8_t *) &enc_ctx->channel_layout,
                                 sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
                goto end;
            }

            ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
                                 (uint8_t *) &enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
                                 AV_OPT_SEARCH_CHILDREN);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
                goto end;
            }
        } else {
            ret = AVERROR_UNKNOWN;
            goto end;
        }

        /* Endpoints for the filter graph. */
        outputs->name = av_strdup("in");
        outputs->filter_ctx = buffersrc_ctx;
        outputs->pad_idx = 0;
        outputs->next = NULL;

        inputs->name = av_strdup("out");
        inputs->filter_ctx = buffersink_ctx;
        inputs->pad_idx = 0;
        inputs->next = NULL;

        if (!outputs->name || !inputs->name) {
            ret = AVERROR(ENOMEM);
            goto end;
        }

        if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
                                            &inputs, &outputs, NULL)) < 0)
            goto end;

        if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
            goto end;

        /* Fill FilteringContext */
        fctx->buffersrc_ctx = buffersrc_ctx;
        fctx->buffersink_ctx = buffersink_ctx;
        fctx->filter_graph = filter_graph;

    end:
        avfilter_inout_free(&inputs);
        avfilter_inout_free(&outputs);

        return ret;
    }

    static int init_filters(void) {
        const char *filter_spec;
        unsigned int i;
        int ret;
        filter_ctx = av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
        if (!filter_ctx)
            return AVERROR(ENOMEM);

        for (i = 0; i < ifmt_ctx->nb_streams; i++) {
            filter_ctx[i].buffersrc_ctx = NULL;
            filter_ctx[i].buffersink_ctx = NULL;
            filter_ctx[i].filter_graph = NULL;
            if (!(ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO
                  || ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO))
                continue;

            if (ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
                filter_spec = "null"; /* passthrough (dummy) filter for video */
            else
                filter_spec = "anull"; /* passthrough (dummy) filter for audio */
            ret = init_filter(&filter_ctx[i], stream_ctx[i].dec_ctx,
                              stream_ctx[i].enc_ctx, filter_spec);
            if (ret)
                return ret;
        }
        return 0;
    }

    static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
        int ret;
        int got_frame_local;
        AVPacket enc_pkt;
        int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
        (ifmt_ctx->streams[stream_index]->codecpar->codec_type ==
         AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;

        if (!got_frame)
            got_frame = &got_frame_local;

        av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
        /* encode filtered frame */
        enc_pkt.data = NULL;
        enc_pkt.size = 0;
        av_init_packet(&enc_pkt);
        ret = enc_func(stream_ctx[stream_index].enc_ctx, &enc_pkt,
                       filt_frame, got_frame);
        av_frame_free(&filt_frame);
        if (ret < 0)
            return ret;
        if (!(*got_frame))
            return 0;

        /* prepare packet for muxing */
        enc_pkt.stream_index = stream_index;
        av_packet_rescale_ts(&enc_pkt,
                             stream_ctx[stream_index].enc_ctx->time_base,
                             ofmt_ctx->streams[stream_index]->time_base);

        av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
        /* mux encoded frame */
        ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
        return ret;
    }

    static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index) {
        int ret;
        AVFrame *filt_frame;

        av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
        /* push the decoded frame into the filtergraph */
        ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,
                                           frame, 0);
        if (ret < 0) {
            av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
            return ret;
        }

        /* pull filtered frames from the filtergraph */
        while (1) {
            filt_frame = av_frame_alloc();
            if (!filt_frame) {
                ret = AVERROR(ENOMEM);
                break;
            }
            av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
            ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,
                                          filt_frame);
            if (ret < 0) {
                /* if no more frames for output - returns AVERROR(EAGAIN)
                 * if flushed and no more frames for output - returns AVERROR_EOF
                 * rewrite retcode to 0 to show it as normal procedure completion
                 */
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                    ret = 0;
                av_frame_free(&filt_frame);
                break;
            }

            filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
            ret = encode_write_frame(filt_frame, stream_index, NULL);
            if (ret < 0)
                break;
        }

        return ret;
    }

    static int flush_encoder(unsigned int stream_index) {
        int ret;
        int got_frame;

        if (!(stream_ctx[stream_index].enc_ctx->codec->capabilities &
              AV_CODEC_CAP_DELAY))
            return 0;

        while (1) {
            av_log(NULL, AV_LOG_INFO, "Flushing stream #%u encoder\n", stream_index);
            ret = encode_write_frame(NULL, stream_index, &got_frame);
            if (ret < 0)
                break;
            if (!got_frame)
                return 0;
        }
        return ret;
    }

    int main() {
        char *inputFile = "D:/20200623_094923.mp4";
        char *outputFile = "D:/test/te.m3u8";
        enum AVCodecID videoCodec = AV_CODEC_ID_H264;
        enum AVCodecID audioCodec = AV_CODEC_ID_AAC;

        int ret;
        AVPacket packet = {.data = NULL, .size = 0};
        AVFrame *frame = NULL;
        enum AVMediaType type;
        unsigned int stream_index;
        unsigned int i;
        int got_frame;
        int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);

        if ((ret = open_input_file(inputFile)) < 0)
            goto end;
        if ((ret = open_output_file(outputFile, videoCodec, audioCodec)) < 0)
            goto end;
        if ((ret = init_filters()) < 0)
            goto end;

        /* read all packets */
        while (1) {
            if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)
                break;
            stream_index = packet.stream_index;
            type = ifmt_ctx->streams[packet.stream_index]->codecpar->codec_type;
            av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
                   stream_index);

            if (filter_ctx[stream_index].filter_graph) {
                av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
                frame = av_frame_alloc();
                if (!frame) {
                    ret = AVERROR(ENOMEM);
                    break;
                }
                av_packet_rescale_ts(&packet,
                                     ifmt_ctx->streams[stream_index]->time_base,
                                     stream_ctx[stream_index].dec_ctx->time_base);
                dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
                           avcodec_decode_audio4;
                ret = dec_func(stream_ctx[stream_index].dec_ctx, frame,
                               &got_frame, &packet);
                if (ret < 0) {
                    av_frame_free(&frame);
                    av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
                    break;
                }

                if (got_frame) {
                    frame->pts = frame->best_effort_timestamp;
                    ret = filter_encode_write_frame(frame, stream_index);
                    av_frame_free(&frame);
                    if (ret < 0)
                        goto end;
                } else {
                    av_frame_free(&frame);
                }
            } else {
                /* remux this frame without reencoding */
                av_packet_rescale_ts(&packet,
                                     ifmt_ctx->streams[stream_index]->time_base,
                                     ofmt_ctx->streams[stream_index]->time_base);

                ret = av_interleaved_write_frame(ofmt_ctx, &packet);
                if (ret < 0)
                    goto end;
            }
            av_packet_unref(&packet);
        }

        /* flush filters and encoders */
        for (i = 0; i < ifmt_ctx->nb_streams; i++) {
            /* flush filter */
            if (!filter_ctx[i].filter_graph)
                continue;
            ret = filter_encode_write_frame(NULL, i);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
                goto end;
            }

            /* flush encoder */
            ret = flush_encoder(i);
            if (ret < 0) {
                av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
                goto end;
            }
        }

        av_write_trailer(ofmt_ctx);
    end:
        av_packet_unref(&packet);
        av_frame_free(&frame);
        for (i = 0; i < ifmt_ctx->nb_streams; i++) {
            avcodec_free_context(&stream_ctx[i].dec_ctx);
            if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && stream_ctx[i].enc_ctx)
                avcodec_free_context(&stream_ctx[i].enc_ctx);
            if (filter_ctx && filter_ctx[i].filter_graph)
                avfilter_graph_free(&filter_ctx[i].filter_graph);
        }
        av_free(filter_ctx);
        av_free(stream_ctx);
        avformat_close_input(&ifmt_ctx);
        if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
            avio_closep(&ofmt_ctx->pb);
        avformat_free_context(ofmt_ctx);

        if (ret < 0)
            av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));

        return ret ? 1 : 0;
    }
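    One thing worth checking in code like the above (a sketch on my part, not a verified fix): private muxer options such as hls_time can also be passed as an AVDictionary to avformat_write_header(). Unlike poking priv_data directly, any misspelled or inapplicable option is then detectable, because it remains in the dictionary after the call. The hls_list_size value here is an illustrative assumption:

    ```c
    /* Sketch: pass HLS muxer options via an AVDictionary.
     * Entries left in the dictionary afterwards were not consumed,
     * which usually means the option name did not match this muxer. */
    AVDictionary *muxer_opts = NULL;
    av_dict_set(&muxer_opts, "hls_time", "5", 0);       /* target segment length, seconds */
    av_dict_set(&muxer_opts, "hls_list_size", "0", 0);  /* keep all segments in the playlist */
    ret = avformat_write_header(ofmt_ctx, &muxer_opts);
    if (av_dict_count(muxer_opts))
        av_log(NULL, AV_LOG_WARNING, "Some muxer options were not consumed\n");
    av_dict_free(&muxer_opts);
    ```

    Note also that an HLS muxer can only cut segments at keyframes, so if segment durations come out imprecise, aligning the encoder's GOP size (enc_ctx->gop_size) with the hls_time boundary may matter more than the pkt->duration warning.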

  • Produce waveform video from audio using FFMPEG

    27 April 2017, by RhythmicDevil

    I am trying to create a waveform video from audio. My goal is to produce a video that looks something like this

    [example waveform image]

    For my test I have an mp3 that plays a short clipped sound. There are 4 bars of 1/4 notes and 4 bars of 1/8 notes played at 120 bpm. I am having some trouble coming up with the right combination of preprocessing and filtering to produce a video that looks like the image. The colors don't have to be exact; I am more concerned with the shape of the beats. I tried a couple of different approaches using showwaves and showspectrum. I can't quite wrap my head around why the beats go past so quickly with showwaves, while showspectrum produces a video where I can see each individual beat.

    ShowWaves

    ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showwaves=s=1280x100:mode=cline:rate=25:scale=sqrt,format=yuv420p[v]" -map "[v]" -map 0:a output_wav.mp4

    This link will download the output of that command.

    ShowSpectrum

    ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showspectrum=s=1280x100:mode=combined:color=intensity:saturation=5:slide=1:scale=cbrt,format=yuv420p[v]" -map "[v]" -an -map 0:a output_spec.mp4

    This link will download the output of that command.

    I posted the simple examples because I didn’t want to confuse the issue by adding all the variations I have tried.

    In practice I suppose I can get away with the output from showspectrum but I’d like to understand where/how I am thinking about this incorrectly. Thanks for any advice.

    Here is a link to the source audio file.
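    One sketch that may keep each beat on screen longer (untested, and reading the showwaves documentation as I understand it): the filter's n option sets how many audio samples are condensed into each pixel column, and the output frame rate then follows from it, so each frame can be made to span a couple of seconds of audio instead of a fraction of a second. The value n=69 assumes a 44.1 kHz source and is a placeholder to adjust:

    ```shell
    # Sketch: make each 1280-px-wide frame span ~2 s of audio.
    # n = sample_rate * seconds_per_frame / width = 44100 * 2 / 1280 ≈ 69.
    # (44100 Hz is an assumption; check the file with ffprobe first.)
    ffmpeg -i beat_test.mp3 -filter_complex \
      "[0:a]showwaves=s=1280x100:mode=cline:n=69:scale=sqrt,format=yuv420p[v]" \
      -map "[v]" -map 0:a output_wav_slow.mp4
    ```

    For a single static image of the whole clip, the related showwavespic filter may also be worth a look.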

  • Aspect ratio problems at transcoding with ffmpeg [closed]

    19 November 2023, by udippel

    I have a huge collection of videos from the last 20+ years, in all sorts of formats. I use gerbera as an open source UPnP-AV media server. Our TV handles only a very limited subset of these formats, therefore I use the transcoding feature of gerbera (I don't want to convert the 2000+ files, thereby also avoiding the loss of multiple audio tracks, (multiple) subtitles, and so forth).

    This is my current unified argument line for ffmpeg:

    -c:v mpeg2video -maxrate 20000k -vf setdar=16/9 -r 24000/1001 -qscale:v 4 -top 1 -c:a mp2 -f mpeg -y

    It works pretty okay, except for the aspect ratios. I don't fully understand this, because ffprobe for file A states:

    Stream #0:0: Video: mpeg4 (Simple Profile) (XVID / 0x44495658), yuv420p, 624x464 [SAR 1:1 DAR 39:29], 1500 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc

    This file displays very well. File B comes as:

    Stream #0:0(eng): Video: h264 (High), yuv420p(tv, bt709, progressive), 960x720, SAR 1:1 DAR 4:3, 23.98 fps, 23.98 tbr, 1k tbn, 180k tbc (default)

    This file displays horribly squeezed vertically and doesn't fill the screen left and right either, with the same TV settings. Also, when playing this file (and others, naturally) the TV doesn't offer the 14:9 display option, which is available e.g. for the file further up.

    Both have the same SAR, almost identical width:height ratios (1.345 vs. 1.333), and almost identical DAR.

    My questions:

    1. Why, despite almost identical pixel ratios, DAR and SAR, are these files handled so differently in one and the same session on the same TV (SONY)?

    2. With which method could I instruct ffmpeg to display the second file properly, too? (I have already tried 'scale', but to no avail, which could have been foreseen, since the ratios are already very close.) My guess is that the (tv, bt709, progressive) part messes things up. (I have already tried adding yuv420p to the argument line, also to no avail.)

    Appreciate any help,


    Uwe

    I have already tried adding a 'scale' option, but to no avail, which could have been foreseen, since the ratios are already very close. I have already tried adding yuv420p to the argument line, also to no avail. I have already tried force_original_aspect_ratio, but here too nothing improved. I also played with -aspect, but the aspects are okay and would need individual corrections, which I can't and don't want to do for 2000+ files. A simple 16:9 doesn't cut it.
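    One possible direction (a sketch, not verified against these exact files): instead of forcing setdar=16/9 on every input, scale each source into a fixed 16:9 canvas and pad the remainder, so a 4:3 file like file B keeps its proportions with pillarbox bars instead of being stretched. The 1280x720 target and the file names are placeholders:

    ```shell
    # Sketch: fit any input into a 16:9 frame without distortion.
    # scale ... force_original_aspect_ratio=decrease shrinks to fit,
    # pad centres the image with black bars, setsar=1 declares square pixels.
    ffmpeg -i input.mkv \
      -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1" \
      -c:v mpeg2video -maxrate 20000k -r 24000/1001 -qscale:v 4 -top 1 \
      -c:a mp2 -f mpeg -y output.mpg
    ```

    With this approach the DAR of every transcoded file is uniformly 16:9, so the TV no longer has to reconcile conflicting SAR/DAR hints per file.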