
Other articles (62)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Support audio et vidéo HTML5

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully customised to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

On other sites (8222)

  • ffmpeg: libavformat/libswresample to transcode and resample at the same time

    21 February 2024, by whatdoido

    I want to transcode and down/re-sample the audio for output using ffmpeg's libav*/libswresample. I am using ffmpeg's (4.x) transcode_aac.c and resample_audio.c examples as a reference, but the code produces audio with glitches that is clearly not what ffmpeg itself would produce (i.e. ffmpeg -i foo.wav -ar 22050 foo.m4a).

    


    Based on the ffmpeg examples, to resample audio it appears that I need to set the output AVCodecContext and SwrContext sample_rate to the desired value and ensure swr_convert() is provided with the correct number of output samples, computed via av_rescale_rnd(swr_get_delay(), ...), once I have a decoded input frame. I've taken care to carry all the relevant output-sample calculations through the merged code (below); a minimal sketch of the sample-count calculation appears just before the full listing.

    • open_output_file(): AVCodecContext.sample_rate (the avctx variable) is set to our target (down-sampled) sample rate.

    • read_decode_convert_and_store() is where the work happens: input audio is decoded to an AVFrame and this input frame is converted before being encoded.

      • init_converted_samples() and av_samples_alloc() use the input frame's nb_samples.

      • ADDED: calculate the number of output samples via av_rescale_rnd() and swr_get_delay().

      • UPDATED: convert_samples() and swr_convert() take the input frame's samples and our calculated output sample count as parameters.

    However, the resulting audio file contains audio glitches. Does the community know of any references for how transcoding AND resampling should be done together, or what is missing in this example?
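
    To make the calculation in question concrete, here is a minimal sketch (not part of the merged program below; the helper name is made up for illustration) of how the output sample count can be derived before calling swr_convert(), assuming an SwrContext that has already been configured and initialised with the input and output rates:

    #include <libavutil/frame.h>
    #include <libavutil/mathematics.h>
    #include <libswresample/swresample.h>

    /* Hypothetical helper: upper bound on the number of samples swr_convert()
     * may produce for one decoded frame, including anything still buffered in
     * the resampler from previous frames. */
    static int max_output_samples(SwrContext *swr, const AVFrame *in,
                                  int in_rate, int out_rate)
    {
        int64_t in_count = swr_get_delay(swr, in_rate) + in->nb_samples;
        return (int)av_rescale_rnd(in_count, out_rate, in_rate, AV_ROUND_UP);
    }

    For example, a 1024-sample frame going from 44100 Hz down to 22050 Hz gives at most 512 output samples when the resampler has nothing buffered.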

    


/* compile and run:
     gcc -I/usr/include/ffmpeg  transcode-swr-aac.c  -lavformat -lavutil -lavcodec -lswresample -lm
     ./a.out foo.wav foo.m4a
*/

/*
 * Copyright (c) 2013-2018 Andreas Unterweger
 *
 * This file is part of FFmpeg.
 * ...
 *
 * @example transcode_aac.c
 * Convert an input audio file to AAC in an MP4 container using FFmpeg.
 * Formats other than MP4 are supported based on the output file extension.
 * @author Andreas Unterweger (xxxx@xxxxx.com)
 */
#include <stdio.h>

#include "libavformat/avformat.h"
#include "libavformat/avio.h"

#include "libavcodec/avcodec.h"

#include "libavutil/audio_fifo.h"
#include "libavutil/avassert.h"
#include "libavutil/avstring.h"
#include "libavutil/channel_layout.h"
#include "libavutil/frame.h"
#include "libavutil/opt.h"

#include "libswresample/swresample.h"

#define OUTPUT_BIT_RATE 128000
#define OUTPUT_CHANNELS 2

static int open_input_file(const char *filename,
                           AVFormatContext **input_format_context,
                           AVCodecContext **input_codec_context)
{
    AVCodecContext *avctx;
    const AVCodec *input_codec;
    const AVStream *stream;
    int error;

    if ((error = avformat_open_input(input_format_context, filename, NULL,
                                     NULL)) < 0) {
        fprintf(stderr, "Could not open input file '%s' (error '%s')\n",
                filename, av_err2str(error));
        *input_format_context = NULL;
        return error;
    }

    if ((error = avformat_find_stream_info(*input_format_context, NULL)) < 0) {
        fprintf(stderr, "Could not open find stream info (error '%s')\n",
                av_err2str(error));
        avformat_close_input(input_format_context);
        return error;
    }

    if ((*input_format_context)->nb_streams != 1) {
        fprintf(stderr, "Expected one audio input stream, but found %d\n",
                (*input_format_context)->nb_streams);
        avformat_close_input(input_format_context);
        return AVERROR_EXIT;
    }

    stream = (*input_format_context)->streams[0];

    if (!(input_codec = avcodec_find_decoder(stream->codecpar->codec_id))) {
        fprintf(stderr, "Could not find input codec\n");
        avformat_close_input(input_format_context);
        return AVERROR_EXIT;
    }

    avctx = avcodec_alloc_context3(input_codec);
    if (!avctx) {
        fprintf(stderr, "Could not allocate a decoding context\n");
        avformat_close_input(input_format_context);
        return AVERROR(ENOMEM);
    }

    /* Initialize the stream parameters with demuxer information. */
    error = avcodec_parameters_to_context(avctx, stream->codecpar);
    if (error < 0) {
        avformat_close_input(input_format_context);
        avcodec_free_context(&avctx);
        return error;
    }

    /* Open the decoder for the audio stream to use it later. */
    if ((error = avcodec_open2(avctx, input_codec, NULL)) < 0) {
        fprintf(stderr, "Could not open input codec (error '%s')\n",
                av_err2str(error));
        avcodec_free_context(&avctx);
        avformat_close_input(input_format_context);
        return error;
    }

    /* Set the packet timebase for the decoder. */
    avctx->pkt_timebase = stream->time_base;

    /* Save the decoder context for easier access later. */
    *input_codec_context = avctx;

    return 0;
}

static int open_output_file(const char *filename,
                            AVCodecContext *input_codec_context,
                            AVFormatContext **output_format_context,
                            AVCodecContext **output_codec_context)
{
    AVCodecContext *avctx          = NULL;
    AVIOContext *output_io_context = NULL;
    AVStream *stream               = NULL;
    const AVCodec *output_codec    = NULL;
    int error;

    if ((error = avio_open(&output_io_context, filename,
                           AVIO_FLAG_WRITE)) < 0) {
        fprintf(stderr, "Could not open output file '%s' (error '%s')\n",
                filename, av_err2str(error));
        return error;
    }

    if (!(*output_format_context = avformat_alloc_context())) {
        fprintf(stderr, "Could not allocate output format context\n");
        return AVERROR(ENOMEM);
    }

    (*output_format_context)->pb = output_io_context;

    if (!((*output_format_context)->oformat = av_guess_format(NULL, filename,
                                                              NULL))) {
        fprintf(stderr, "Could not find output file format\n");
        goto cleanup;
    }

    if (!((*output_format_context)->url = av_strdup(filename))) {
        fprintf(stderr, "Could not allocate url.\n");
        error = AVERROR(ENOMEM);
        goto cleanup;
    }

    if (!(output_codec = avcodec_find_encoder(AV_CODEC_ID_AAC))) {
        fprintf(stderr, "Could not find an AAC encoder.\n");
        goto cleanup;
    }

    /* Create a new audio stream in the output file container. */
    if (!(stream = avformat_new_stream(*output_format_context, NULL))) {
        fprintf(stderr, "Could not create new stream\n");
        error = AVERROR(ENOMEM);
        goto cleanup;
    }

    avctx = avcodec_alloc_context3(output_codec);
    if (!avctx) {
        fprintf(stderr, "Could not allocate an encoding context\n");
        error = AVERROR(ENOMEM);
        goto cleanup;
    }

    /* Set the basic encoder parameters.
     * SET OUR DESIRED output sample_rate here
     */
    avctx->channels       = OUTPUT_CHANNELS;
    avctx->channel_layout = av_get_default_channel_layout(OUTPUT_CHANNELS);
    // avctx->sample_rate    = input_codec_context->sample_rate;
    avctx->sample_rate    = 22050;
    avctx->sample_fmt     = output_codec->sample_fmts[0];
    avctx->bit_rate       = OUTPUT_BIT_RATE;

    avctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;

    /* Set the sample rate for the container. */
    stream->time_base.den = avctx->sample_rate;
    stream->time_base.num = 1;

    if ((*output_format_context)->oformat->flags & AVFMT_GLOBALHEADER)
        avctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    if ((error = avcodec_open2(avctx, output_codec, NULL)) < 0) {
        fprintf(stderr, "Could not open output codec (error '%s')\n",
                av_err2str(error));
        goto cleanup;
    }

    error = avcodec_parameters_from_context(stream->codecpar, avctx);
    if (error < 0) {
        fprintf(stderr, "Could not initialize stream parameters\n");
        goto cleanup;
    }

    /* Save the encoder context for easier access later. */
    *output_codec_context = avctx;

    return 0;

cleanup:
    avcodec_free_context(&avctx);
    avio_closep(&(*output_format_context)->pb);
    avformat_free_context(*output_format_context);
    *output_format_context = NULL;
    return error < 0 ? error : AVERROR_EXIT;
}

/**
 * Initialize one data packet for reading or writing.
 */
static int init_packet(AVPacket **packet)
{
    if (!(*packet = av_packet_alloc())) {
        fprintf(stderr, "Could not allocate packet\n");
        return AVERROR(ENOMEM);
    }
    return 0;
}

static int init_input_frame(AVFrame **frame)
{
    if (!(*frame = av_frame_alloc())) {
        fprintf(stderr, "Could not allocate input frame\n");
        return AVERROR(ENOMEM);
    }
    return 0;
}

static int init_resampler(AVCodecContext *input_codec_context,
                          AVCodecContext *output_codec_context,
                          SwrContext **resample_context)
{
    int error;

    /**
     * create the resampler, including ref to the desired output sample rate
     */
    *resample_context = swr_alloc_set_opts(NULL,
                                           av_get_default_channel_layout(output_codec_context->channels),
                                           output_codec_context->sample_fmt,
                                           output_codec_context->sample_rate,
                                           av_get_default_channel_layout(input_codec_context->channels),
                                           input_codec_context->sample_fmt,
                                           input_codec_context->sample_rate,
                                           0, NULL);
    if (!*resample_context < 0) {
        fprintf(stderr, "Could not allocate resample context\n");
        return AVERROR(ENOMEM);
    }

    if ((error = swr_init(*resample_context)) < 0) {
        fprintf(stderr, "Could not open resample context\n");
        swr_free(resample_context);
        return error;
    }
    return 0;
}

static int init_fifo(AVAudioFifo **fifo, AVCodecContext *output_codec_context)
{
    if (!(*fifo = av_audio_fifo_alloc(output_codec_context->sample_fmt,
                                      output_codec_context->channels, 1))) {
        fprintf(stderr, "Could not allocate FIFO\n");
        return AVERROR(ENOMEM);
    }
    return 0;
}

static int write_output_file_header(AVFormatContext *output_format_context)
{
    int error;
    if ((error = avformat_write_header(output_format_context, NULL)) < 0) {
        fprintf(stderr, "Could not write output file header (error '%s')\n",
                av_err2str(error));
        return error;
    }
    return 0;
}

static int decode_audio_frame(AVFrame *frame,
                              AVFormatContext *input_format_context,
                              AVCodecContext *input_codec_context,
                              int *data_present, int *finished)
{
    AVPacket *input_packet;
    int error;

    error = init_packet(&input_packet);
    if (error < 0)
        return error;

    *data_present = 0;
    *finished = 0;

    if ((error = av_read_frame(input_format_context, input_packet)) < 0) {
        if (error == AVERROR_EOF)
            *finished = 1;
        else {
            fprintf(stderr, "Could not read frame (error '%s')\n",
                    av_err2str(error));
            goto cleanup;
        }
    }

    if ((error = avcodec_send_packet(input_codec_context, input_packet)) < 0) {
        fprintf(stderr, "Could not send packet for decoding (error '%s')\n",
                av_err2str(error));
        goto cleanup;
    }

    error = avcodec_receive_frame(input_codec_context, frame);
    if (error == AVERROR(EAGAIN)) {
        error = 0;
        goto cleanup;
    } else if (error == AVERROR_EOF) {
        *finished = 1;
        error = 0;
        goto cleanup;
    } else if (error < 0) {
        fprintf(stderr, "Could not decode frame (error '%s')\n",
                av_err2str(error));
        goto cleanup;
    } else {
        *data_present = 1;
        goto cleanup;
    }

cleanup:
    av_packet_free(&input_packet);
    return error;
}

static int init_converted_samples(uint8_t ***converted_input_samples,
                                  AVCodecContext *output_codec_context,
                                  int frame_size)
{
    int error;

    if (!(*converted_input_samples = calloc(output_codec_context->channels,
                                            sizeof(**converted_input_samples)))) {
        fprintf(stderr, "Could not allocate converted input sample pointers\n");
        return AVERROR(ENOMEM);
    }

    if ((error = av_samples_alloc(*converted_input_samples, NULL,
                                  output_codec_context->channels,
                                  frame_size,
                                  output_codec_context->sample_fmt, 0)) < 0) {
        fprintf(stderr,
                "Could not allocate converted input samples (error '%s')\n",
                av_err2str(error));
        av_freep(&(*converted_input_samples)[0]);
        free(*converted_input_samples);
        return error;
    }
    return 0;
}

static int convert_samples(const uint8_t **input_data, const int input_nb_samples,
                           uint8_t **converted_data, const int output_nb_samples,
                           SwrContext *resample_context)
{
    int error;

    if ((error = swr_convert(resample_context,
                             converted_data, output_nb_samples,
                             input_data    , input_nb_samples)) < 0) {
        fprintf(stderr, "Could not convert input samples (error '%s')\n",
                av_err2str(error));
        return error;
    }

    return 0;
}

static int add_samples_to_fifo(AVAudioFifo *fifo,
                               uint8_t **converted_input_samples,
                               const int frame_size)
{
    int error;

    if ((error = av_audio_fifo_realloc(fifo, av_audio_fifo_size(fifo) + frame_size)) < 0) {
        fprintf(stderr, "Could not reallocate FIFO\n");
        return error;
    }

    if (av_audio_fifo_write(fifo, (void **)converted_input_samples,
                            frame_size) < frame_size) {
        fprintf(stderr, "Could not write data to FIFO\n");
        return AVERROR_EXIT;
    }
    return 0;
}

static int read_decode_convert_and_store(AVAudioFifo *fifo,
                                         AVFormatContext *input_format_context,
                                         AVCodecContext *input_codec_context,
                                         AVCodecContext *output_codec_context,
                                         SwrContext *resampler_context,
                                         int *finished)
{
    AVFrame *input_frame = NULL;
    uint8_t **converted_input_samples = NULL;
    int data_present;
    int ret = AVERROR_EXIT;

    if (init_input_frame(&input_frame))
        goto cleanup;

    if (decode_audio_frame(input_frame, input_format_context,
                           input_codec_context, &data_present, finished))
        goto cleanup;

    if (*finished) {
        ret = 0;
        goto cleanup;
    }

    if (data_present) {
        /* Initialize the temporary storage for the converted input samples. */
        if (init_converted_samples(&converted_input_samples, output_codec_context,
                                   input_frame->nb_samples))
            goto cleanup;

        /* figure out how many samples are required for target sample_rate incl
         * any items left in the swr buffer
         */
        int output_nb_samples = av_rescale_rnd(
                                    swr_get_delay(resampler_context, input_codec_context->sample_rate) + input_frame->nb_samples,
                                    output_codec_context->sample_rate,
                                    input_codec_context->sample_rate,
                                    AV_ROUND_UP);

        /* ignore, just to ensure we've got enough buffer alloc'd for conversion buffer */
        av_assert1(input_frame->nb_samples > output_nb_samples);

        /* Convert the input samples to the desired output sample format, via swr_convert().
         */
        if (convert_samples((const uint8_t**)input_frame->extended_data, input_frame->nb_samples,
                            converted_input_samples, output_nb_samples,
                            resampler_context))
            goto cleanup;

        /* Add the converted input samples to the FIFO buffer for later processing. */
        if (add_samples_to_fifo(fifo, converted_input_samples,
                                output_nb_samples))
            goto cleanup;
        ret = 0;
    }
    ret = 0;

cleanup:
    if (converted_input_samples) {
        av_freep(&converted_input_samples[0]);
        free(converted_input_samples);
    }
    av_frame_free(&input_frame);

    return ret;
}

static int init_output_frame(AVFrame **frame,
                             AVCodecContext *output_codec_context,
                             int frame_size)
{
    int error;

    if (!(*frame = av_frame_alloc())) {
        fprintf(stderr, "Could not allocate output frame\n");
        return AVERROR_EXIT;
    }

    /* Set the frame's parameters, especially its size and format.
     * av_frame_get_buffer needs this to allocate memory for the
     * audio samples of the frame.
     * Default channel layouts based on the number of channels
     * are assumed for simplicity. */
    (*frame)->nb_samples     = frame_size;
    (*frame)->channel_layout = output_codec_context->channel_layout;
    (*frame)->format         = output_codec_context->sample_fmt;
    (*frame)->sample_rate    = output_codec_context->sample_rate;

    /* Allocate the samples of the created frame. This call will make
     * sure that the audio frame can hold as many samples as specified. */
    if ((error = av_frame_get_buffer(*frame, 0)) < 0) {
        fprintf(stderr, "Could not allocate output frame samples (error '%s')\n",
                av_err2str(error));
        av_frame_free(frame);
        return error;
    }

    return 0;
}

/* Global timestamp for the audio frames. */
static int64_t pts = 0;

/**
 * Encode one frame worth of audio to the output file.
 */
static int encode_audio_frame(AVFrame *frame,
                              AVFormatContext *output_format_context,
                              AVCodecContext *output_codec_context,
                              int *data_present)
{
    AVPacket *output_packet;
    int error;

    error = init_packet(&output_packet);
    if (error < 0)
        return error;

    /* Set a timestamp based on the sample rate for the container. */
    if (frame) {
        frame->pts = pts;
        pts += frame->nb_samples;
    }

    *data_present = 0;
    error = avcodec_send_frame(output_codec_context, frame);
    if (error < 0 && error != AVERROR_EOF) {
        fprintf(stderr, "Could not send packet for encoding (error '%s')\n",
                av_err2str(error));
        goto cleanup;
    }

    error = avcodec_receive_packet(output_codec_context, output_packet);
    if (error == AVERROR(EAGAIN)) {
        error = 0;
        goto cleanup;
    } else if (error == AVERROR_EOF) {
        error = 0;
        goto cleanup;
    } else if (error < 0) {
        fprintf(stderr, "Could not encode frame (error '%s')\n",
                av_err2str(error));
        goto cleanup;
    } else {
        *data_present = 1;
    }

    /* Write one audio frame from the temporary packet to the output file. */
    if (*data_present &&
        (error = av_write_frame(output_format_context, output_packet)) < 0) {
        fprintf(stderr, "Could not write frame (error '%s')\n",
                av_err2str(error));
        goto cleanup;
    }

cleanup:
    av_packet_free(&output_packet);
    return error;
}

/**
 * Load one audio frame from the FIFO buffer, encode and write it to the
 * output file.
 */
static int load_encode_and_write(AVAudioFifo *fifo,
                                 AVFormatContext *output_format_context,
                                 AVCodecContext *output_codec_context)
{
    AVFrame *output_frame;
    /* Use the maximum number of possible samples per frame.
     * If there is less than the maximum possible frame size in the FIFO
     * buffer use this number. Otherwise, use the maximum possible frame size. */
    const int frame_size = FFMIN(av_audio_fifo_size(fifo),
                                 output_codec_context->frame_size);
    int data_written;

    if (init_output_frame(&output_frame, output_codec_context, frame_size))
        return AVERROR_EXIT;

    /* Read as many samples from the FIFO buffer as required to fill the frame.
     * The samples are stored in the frame temporarily. */
    if (av_audio_fifo_read(fifo, (void **)output_frame->data, frame_size) < frame_size) {
        fprintf(stderr, "Could not read data from FIFO\n");
        av_frame_free(&output_frame);
        return AVERROR_EXIT;
    }

    /* Encode one frame worth of audio samples. */
    if (encode_audio_frame(output_frame, output_format_context,
                           output_codec_context, &data_written)) {
        av_frame_free(&output_frame);
        return AVERROR_EXIT;
    }
    av_frame_free(&output_frame);
    return 0;
}

/**
 * Write the trailer of the output file container.
 */
static int write_output_file_trailer(AVFormatContext *output_format_context)
{
    int error;
    if ((error = av_write_trailer(output_format_context)) < 0) {
        fprintf(stderr, "Could not write output file trailer (error '%s')\n",
                av_err2str(error));
        return error;
    }
    return 0;
}

int main(int argc, char **argv)
{
    AVFormatContext *input_format_context = NULL, *output_format_context = NULL;
    AVCodecContext *input_codec_context = NULL, *output_codec_context = NULL;
    SwrContext *resample_context = NULL;
    AVAudioFifo *fifo = NULL;
    int ret = AVERROR_EXIT;

    if (argc != 3) {
        fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
        exit(1);
    }

    if (open_input_file(argv[1], &input_format_context,
                        &input_codec_context))
        goto cleanup;

    if (open_output_file(argv[2], input_codec_context,
                         &output_format_context, &output_codec_context))
        goto cleanup;

    if (init_resampler(input_codec_context, output_codec_context,
                       &resample_context))
        goto cleanup;

    if (init_fifo(&fifo, output_codec_context))
        goto cleanup;

    if (write_output_file_header(output_format_context))
        goto cleanup;

    while (1) {
        /* Use the encoder's desired frame size for processing. */
        const int output_frame_size = output_codec_context->frame_size;
        int finished                = 0;

        while (av_audio_fifo_size(fifo) < output_frame_size) {
            /* Decode one frame worth of audio samples, convert it to the
             * output sample format and put it into the FIFO buffer. */
            if (read_decode_convert_and_store(fifo, input_format_context,
                                              input_codec_context,
                                              output_codec_context,
                                              resample_context, &finished))
                goto cleanup;

            if (finished)
                break;
        }

        while (av_audio_fifo_size(fifo) >= output_frame_size ||
               (finished && av_audio_fifo_size(fifo) > 0))
            if (load_encode_and_write(fifo, output_format_context,
                                      output_codec_context))
                goto cleanup;

        if (finished) {
            int data_written;
            do {
                if (encode_audio_frame(NULL, output_format_context,
                                       output_codec_context, &data_written))
                    goto cleanup;
            } while (data_written);
            break;
        }
    }

    if (write_output_file_trailer(output_format_context))
        goto cleanup;
    ret = 0;

cleanup:
    if (fifo)
        av_audio_fifo_free(fifo);
    swr_free(&resample_context);
    if (output_codec_context)
        avcodec_free_context(&output_codec_context);
    if (output_format_context) {
        avio_closep(&output_format_context->pb);
        avformat_free_context(output_format_context);
    }
    if (input_codec_context)
        avcodec_free_context(&input_codec_context);
    if (input_format_context)
        avformat_close_input(&input_format_context);

    return ret;
}


  • Matomo Celebrates 15 Years of Building an Open-Source & Transparent Web Analytics Solution

    30 June 2022, by Matthieu Aubry — About, Community

    Fifteen years ago, I realised that people (myself included) were increasingly integrating the internet into their everyday lives, and it was clear that it would only expand in the future. It was an exciting new world, but the amount of personal data shared online, level of tracking and lack of security was a growing concern. Google Analytics was just launched then and was already gaining huge traction – so data from millions of websites started flowing into Google’s database, creating what was then the biggest centralised database about people worldwide and their actions online.

    So as a young engineering student, I decided we needed to build an open source and transparent solution that could help make the internet more secure and private while still providing organisations with powerful insights. I aimed to create a win-win solution for businesses and their digital consumers.

    And in 2007, I started developing Matomo with help from Scott Switzer and Jennifer Langdon (who offered me an internship and support).

    All thanks to the Matomo Community

    We have reached significant milestones and made major changes over the last 15 years, but we wouldn’t be where we are today without the Matomo Community.

    So I would like to celebrate and thank the hundreds of volunteer developers who have donated their time to develop Matomo, the thousands of contributors who provided feedback to improve Matomo, the countless supportive forum members, our passionate team of 40 at Matomo, the numerous translators who have translated Matomo and the 1.5 million websites that choose Matomo as their analytics platform.

    Matomo's Birthday
    Team Meetup in Paris in 2012

    Matomo has been a community effort built on the shoulders of many, and we will continue to work for you. 

    So let’s look at some milestones we have achieved over the last 15 years.

    Looking back on milestones in our timeline

    2007

    • Birth of Matomo
    • First alpha version released

    2008

    • Released first public 0.1.0 version

    2009

    • 50,000 websites use Matomo

    2010

    • First stable Matomo 1.0.0 released
    • Mobile app launched

    2011

    • Released Ecommerce Analytics, Custom Variables, First Party Cookies

    • Released Privacy control features (first of many privacy features to come!)

    2012

    • Released Log Analytics feature
    • 1 Million Downloads!
    • 300,000 websites worldwide use Matomo

    2013

    • Matomo is now available in 50 languages!
    • Matomo brand redesign

    2016

    2017

    • Launched Matomo Cloud service 
    • Released Multi Channel Conversion Attribution Premium Feature, Custom Reports Premium Feature, Login Saml Premium Feature, WooCommerceAnalytics Premium Feature and Heatmap & Session Recording Premium Feature 

    2018

    2019

    2020

    2021

    • 1,000,000 websites worldwide use Matomo
    • including 30,000 active Matomo for WordPress installations
    • Released SEO Web Vitals, Advertising Conversion Export and Tracking Spam Prevention features

    2022

    • Released WP Statistics to Matomo importer

    Our efforts continue

    While we’ve seen incredible growth over the years, our work doesn’t stop there. In fact, we’re only just getting started.

    Today over 55% of the internet continues to use privacy-threatening web analytics solutions, while 1.5% uses Matomo. So there are still great strides to be made to create a more private internet, and joining the Matomo Community is one way to support this movement.

    There are many ways to get involved too, such as:

    So what comes next for Matomo?

    The future of Matomo is approachable, powerful and flexible. We’re strengthening the customers’ voice, expanding our resources internally (we’re continuously hiring!) and conducting rigorous customer research to craft a tool that balances usability and functionality.

    I look forward to the next 15 years and seeing what the future holds for Matomo and our community.

  • fluent-ffmpeg concatenate files ends up with wrong length

    24 July 2021, by Hugo Cox

    I have the following input file:

ffconcat version 1.0
file '../tmp/59bd6a7896654d0b0c00705f/vidR/intro.mp4' #0
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out003.mp4' #1
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #2
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #2
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #2
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #2
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #2
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #2
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out007.mp4' #3
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #4
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #4
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out013.mp4' #5
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #6
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #6
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #6
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #6
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #6
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #6
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out017.mp4' #7
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #8
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #8
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #8
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #8
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #8
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #8
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out021.mp4' #9
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #10
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #10
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #10
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #10
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #10
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #10
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out025.mp4' #11
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #12
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #12
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #12
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #12
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out028.mp4' #13
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #14
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #14
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #14
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #14
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #14
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #14
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out032.mp4' #15
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #16
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #16
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out034.mp4' #17
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #18
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #18
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #18
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #18
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out037.mp4' #19
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #20
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #20
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out039.mp4' #21
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #22
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #22
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out041.mp4' #23
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #24
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #24
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out043.mp4' #25
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #26
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #26
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out045.mp4' #27
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #28
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #28
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out047.mp4' #29
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #30
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #30
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out049.mp4' #31
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #32
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #32
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out051.mp4' #33
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #34
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #34
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out053.mp4' #35
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #36
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #36
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out055.mp4' #37
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #38
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #38
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out057.mp4' #39
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #40
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #40
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out059.mp4' #41
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #42
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #42
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out061.mp4' #43
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #44
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #44
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out063.mp4' #45
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #46
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #46
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #46
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out065.mp4' #47
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #48
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #48
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #48
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out067.mp4' #49
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #50
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #50
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #50
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out070.mp4' #51
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #52
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #52
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #52
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #52
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #52
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #52
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out074.mp4' #53
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #54
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #54
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out076.mp4' #55
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #56
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #56
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #56
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #56
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #56
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out080.mp4' #57
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #58
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #58
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #58
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #58
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #58
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #58
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out084.mp4' #59
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #60
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #60
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #60
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #60
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #60
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #60
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out089.mp4' #61
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #62
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #62
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #62
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #62
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #62
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #62
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out093.mp4' #63
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #64
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #64
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #64
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #64
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out096.mp4' #65
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #66
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #66
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out098.mp4' #67
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #68
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #68
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out100.mp4' #69
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #70
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #70
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out102.mp4' #71
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #72
file '../intersegment/paddingsegment_h264_1.6s_1920x1080_30fps.mp4' #72
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out104.mp4' #73
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #74
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #74
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out109.mp4' #75
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #76
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #76
file '../intersegment/paddingsegment_h264_2.6s_1920x1080_30fps.mp4' #76
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out112.mp4' #77
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #78
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #78
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out114.mp4' #79
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4' #80
file '../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4' #80
file '../tmp/59bd6a7896654d0b0c00705f/vidR/out119.mp4' #81
file '../tmp/59bd6a7896654d0b0c00705f/vidR/outro.mp4' #82

    However, when I use the following command:

            ffmpeg(__dirname + '/log/vidR_concatenate.txt')
            .inputFormat('concat')
            .inputOptions([
                '-safe 0'
            ]).outputOptions([
                '-c copy'
            ]).output(__dirname + '/output/' + ID + '/video/1080p/' + ID + '-R-1080p.mp4')
            .on('start', function (commandLine) {
                console.log('Spawned Ffmpeg with command: ' + commandLine);
            })
            .on('error', function (err, stdout, stderr) {
                console.log('An error occurred: ' + err.message, err, stderr);
            })
            .on('progress', function (progress) {
                console.log('Processing: ' + progress.percent + '% done')
            })
            .on('end', function (err, stdout, stderr) {
                console.log('Finished vidR processing!' /*, err, stdout, stderr*/)
                resolve()
            })
            .run()

    However, I do not end up with a video whose length is the sum of all the individual videos!

    ffprobe -i ../intersegment/intersegment_h264_3s_1920x1080_30fps.mp4 gives:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'intersegment/intersegment_h264_3s_1920x1080_30fps.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:03.00, start: 0.000000, bitrate: 24 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 19 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler

    ffprobe -i ../intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4 gives:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'intersegment/paddingsegment_h264_0.6s_1920x1080_30fps.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:00.60, start: 0.000000, bitrate: 45 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 30 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler

    ffprobe -i ../tmp/59bd6a7896654d0b0c00705f/vidR/intro.mp4 gives:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'tmp/59bd6a7896654d0b0c00705f/vidR/intro.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:14.80, start: 0.000000, bitrate: 21 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 17 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler

    ffprobe -i ../tmp/59bd6a7896654d0b0c00705f/vidR/outro.mp4 gives:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'tmp/59bd6a7896654d0b0c00705f/vidR/outro.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:05.30, start: 0.000000, bitrate: 22 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 18 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler

    ffprobe -i ../tmp/59bd6a7896654d0b0c00705f/vidR/out003.mp4 gives:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'tmp/59bd6a7896654d0b0c00705f/vidR/out003.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    title           : Big Buck Bunny, Sunflower version
    artist          : Blender Foundation 2008, Janus Bager Kristensen 2013
    composer        : Sacha Goedegebure
    encoder         : Lavf58.29.100
    comment         : Creative Commons Attribution 3.0 - http://bbb3d.renderfarming.net
    genre           : Animation
  Duration: 00:00:08.40, start: 0.000000, bitrate: 3703 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 3699 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
    Metadata:
      handler_name    : GPAC ISO Video Handler

    I thought the fps, tbr, tbn and tbc were all the same, so what is the problem? The result is off by several seconds from the sum of the individual file durations!