
Media (1)

Keyword: Tags / Christian Nold

Other articles (74)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, as a standalone version.
    To get a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, as a standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

On other sites (8773)

  • ffmpeg overlay/amix audio sync issue

    28 December 2016, by user3847630

    I have modified the transcoding example in ffmpeg to take two inputs, passing the video/audio frames through the overlay/amix filters. In the output I get the second video overlaid on the first and both audio tracks mixed together, but there is a loss of sync between the overlaid video and its audio. It would be a great help if someone could tell me how to keep the overlaid video and its audio in sync.
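
    For comparison, the standalone ffmpeg CLI form of such a pipeline usually rebases the timestamps of every input before overlay/amix, which is one common way to keep the overlaid video and the mixed audio aligned. A minimal sketch, assuming hypothetical input files main.mp4 and overlay.mp4 and an arbitrary 10:10 overlay position:

    ffmpeg -i main.mp4 -i overlay.mp4 -filter_complex \
     "[0:v]setpts=PTS-STARTPTS[base];
      [1:v]setpts=PTS-STARTPTS[top];
      [base][top]overlay=10:10[v];
      [0:a]asetpts=PTS-STARTPTS[a0];
      [1:a]asetpts=PTS-STARTPTS[a1];
      [a0][a1]amix=inputs=2[a]" \
     -map "[v]" -map "[a]" -c:v libx264 -c:a aac synced.mp4

    The same idea applies when driving the filter graph from the C API: the frames fed into overlay/amix should carry PTS values rebased to a common start time rather than the raw timestamps of each input.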

  • Ffmpeg in Android Studio

    7 December 2016, by Ahmed Abd-Elmeged

    I’m trying to add the FFmpeg library to use it in a live-streaming app based on the YouTube API. When I add it and use the CMake build tool, the header files are only found from the native C class that I created, but I want them to also be found from the other header files
    that come with the library. How can I do that in the CMake script, or in another way? (See the note after the script below.)

    This is my CMake build script:

    # Sets the minimum version of CMake required to build the native
    # library. You should either keep the default value or only pass a
    # value of 3.4.0 or lower.

    cmake_minimum_required(VERSION 3.4.1)

    # Creates and names a library, sets it as either STATIC
    # or SHARED, and provides the relative paths to its source code.
    # You can define multiple libraries, and CMake builds it for you.
    # Gradle automatically packages shared libraries with your APK.

    add_library( # Sets the name of the library.
                ffmpeg-jni

                # Sets the library as a shared library.
                SHARED

                # Provides a relative path to your source file(s).
                # Associated headers in the same location as their source
                # file are automatically included.
                src/main/cpp/ffmpeg-jni.c )

    # Searches for a specified prebuilt library and stores the path as a
    # variable. Because system libraries are included in the search path by
    # default, you only need to specify the name of the public NDK library
    # you want to add. CMake verifies that the library exists before
    # completing its build.

    find_library( # Sets the name of the path variable.
                 log-lib

                 # Specifies the name of the NDK library that
                 # you want CMake to locate.
                 log )

    # Specifies libraries CMake should link to your target library. You
    # can link multiple libraries, such as libraries you define in the
    # build script, prebuilt third-party libraries, or system libraries.

    target_link_libraries( # Specifies the target library.
                          ffmpeg-jni

                          # Links the target library to the log library
                          # included in the NDK.
                          ${log-lib} )

    include_directories(src/main/cpp/include/ffmpeglib)

    include_directories(src/main/cpp/include/libavcodec)

    include_directories(src/main/cpp/include/libavformat)

    include_directories(src/main/cpp/include/libavutil)
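
    One thing the script above may need: the native source includes headers through library-prefixed paths such as "libavcodec/avcodec.h", and those headers in turn include "libavutil/..." files, so the include path has to point at the parent include directory rather than at each library subdirectory. A minimal sketch, assuming the libavcodec, libavformat and libavutil directories sit directly under src/main/cpp/include:

     # Expose the parent include directory so that "libavcodec/avcodec.h" and the
     # "libavutil/..." headers it pulls in resolve from any file in the project.
     include_directories(src/main/cpp/include)

     # Or, scoped to the single target instead of globally:
     target_include_directories(ffmpeg-jni PRIVATE src/main/cpp/include)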

    This is my native class, which I created as a native file in Android Studio. I also had an error that "jintJNICALL" was not found; the return type and the JNICALL macro need a space between them ("jint JNICALL"), as in the declarations below.

     #include <jni.h>
     #include <android/log.h>
     #include <string.h>
     #include <stdio.h>

    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
    #include "libavutil/opt.h"



    #ifdef __cplusplus
    extern "C" {
    #endif

    JNIEXPORT jboolean
    JNICALL Java_com_example_venrto_ffmpegtest_Ffmpeg_init(JNIEnv *env, jobject thiz,
                                                            jint width, jint height,
                                                            jint audio_sample_rate,
                                                            jstring rtmp_url);
     JNIEXPORT void JNICALL
     Java_com_example_venrto_ffmpegtest_Ffmpeg_shutdown(JNIEnv *env, jobject thiz);
     JNIEXPORT jint JNICALL
     Java_com_example_venrto_ffmpegtest_Ffmpeg_encodeVideoFrame(JNIEnv *env, jobject thiz,
                                                                 jbyteArray yuv_image);
     JNIEXPORT jint JNICALL
     Java_com_example_venrto_ffmpegtest_Ffmpeg_encodeAudioFrame(JNIEnv *env, jobject thiz,
                                                                 jshortArray audio_data,
                                                                 jint length);

    #ifdef __cplusplus
    }
    #endif

    #define LOGI(...) __android_log_print(ANDROID_LOG_INFO, "ffmpeg-jni", __VA_ARGS__)
    #define URL_WRONLY 2
    static AVFormatContext *fmt_context;
    static AVStream *video_stream;
    static AVStream *audio_stream;

     static int pts = 0;
    static int last_audio_pts = 0;

    // Buffers for UV format conversion
    static unsigned char *u_buf;
    static unsigned char *v_buf;

    static int enable_audio = 1;
    static int64_t audio_samples_written = 0;
    static int audio_sample_rate = 0;

    // Stupid buffer for audio samples. Not even a proper ring buffer
    #define AUDIO_MAX_BUF_SIZE 16384  // 2x what we get from Java
    static short audio_buf[AUDIO_MAX_BUF_SIZE];
    static int audio_buf_size = 0;

    void AudioBuffer_Push(const short *audio, int num_samples) {
       if (audio_buf_size >= AUDIO_MAX_BUF_SIZE - num_samples) {
           LOGI("AUDIO BUFFER OVERFLOW: %i + %i > %i", audio_buf_size, num_samples,
                AUDIO_MAX_BUF_SIZE);
           return;
       }
        for (int i = 0; i < num_samples; i++) {
           audio_buf[audio_buf_size++] = audio[i];
       }
    }

    int AudioBuffer_Size() { return audio_buf_size; }

    short *AudioBuffer_Get() { return audio_buf; }

    void AudioBuffer_Pop(int num_samples) {
       if (num_samples > audio_buf_size) {
           LOGI("Audio buffer Pop WTF: %i vs %i", num_samples, audio_buf_size);
           return;
       }
        // move the remaining samples down to the front of the buffer
        memmove(audio_buf, audio_buf + num_samples,
                (audio_buf_size - num_samples) * sizeof(short));
       audio_buf_size -= num_samples;
    }

    void AudioBuffer_Clear() {
       memset(audio_buf, 0, sizeof(audio_buf));
       audio_buf_size = 0;
    }

    static void log_callback(void *ptr, int level, const char *fmt, va_list vl) {
       char x[2048];
       vsnprintf(x, 2048, fmt, vl);
        LOGI("%s", x);
    }

    JNIEXPORT jboolean
    JNICALL Java_com_example_venrto_ffmpegtest_Ffmpeg_init(JNIEnv *env, jobject thiz,
                                                            jint width, jint height,
                                                            jint audio_sample_rate_param,
                                                            jstring rtmp_url) {
       avcodec_register_all();
       av_register_all();
       av_log_set_callback(log_callback);

       fmt_context = avformat_alloc_context();
       AVOutputFormat *ofmt = av_guess_format("flv", NULL, NULL);
       if (ofmt) {
           LOGI("av_guess_format returned %s", ofmt->long_name);
       } else {
           LOGI("av_guess_format fail");
           return JNI_FALSE;
       }

       fmt_context->oformat = ofmt;
       LOGI("creating video stream");
       video_stream = av_new_stream(fmt_context, 0);

       if (enable_audio) {
           LOGI("creating audio stream");
           audio_stream = av_new_stream(fmt_context, 1);
       }

       // Open Video Codec.
       // ======================
       AVCodec *video_codec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if (!video_codec) {
           LOGI("Did not find the video codec");
           return JNI_FALSE;  // leak!
       } else {
           LOGI("Video codec found!");
       }
       AVCodecContext *video_codec_ctx = video_stream->codec;
       video_codec_ctx->codec_id = video_codec->id;
       video_codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
       video_codec_ctx->level = 31;

       video_codec_ctx->width = width;
       video_codec_ctx->height = height;
       video_codec_ctx->pix_fmt = PIX_FMT_YUV420P;
       video_codec_ctx->rc_max_rate = 0;
       video_codec_ctx->rc_buffer_size = 0;
       video_codec_ctx->gop_size = 12;
       video_codec_ctx->max_b_frames = 0;
       video_codec_ctx->slices = 8;
       video_codec_ctx->b_frame_strategy = 1;
       video_codec_ctx->coder_type = 0;
       video_codec_ctx->me_cmp = 1;
       video_codec_ctx->me_range = 16;
       video_codec_ctx->qmin = 10;
       video_codec_ctx->qmax = 51;
       video_codec_ctx->keyint_min = 25;
       video_codec_ctx->refs = 3;
       video_codec_ctx->trellis = 0;
       video_codec_ctx->scenechange_threshold = 40;
       video_codec_ctx->flags |= CODEC_FLAG_LOOP_FILTER;
       video_codec_ctx->me_method = ME_HEX;
       video_codec_ctx->me_subpel_quality = 6;
       video_codec_ctx->i_quant_factor = 0.71;
       video_codec_ctx->qcompress = 0.6;
       video_codec_ctx->max_qdiff = 4;
       video_codec_ctx->time_base.den = 10;
       video_codec_ctx->time_base.num = 1;
       video_codec_ctx->bit_rate = 3200 * 1000;
       video_codec_ctx->bit_rate_tolerance = 0;
       video_codec_ctx->flags2 |= 0x00000100;

       fmt_context->bit_rate = 4000 * 1000;

       av_opt_set(video_codec_ctx, "partitions", "i8x8,i4x4,p8x8,b8x8", 0);
       av_opt_set_int(video_codec_ctx, "direct-pred", 1, 0);
       av_opt_set_int(video_codec_ctx, "rc-lookahead", 0, 0);
       av_opt_set_int(video_codec_ctx, "fast-pskip", 1, 0);
       av_opt_set_int(video_codec_ctx, "mixed-refs", 1, 0);
       av_opt_set_int(video_codec_ctx, "8x8dct", 0, 0);
       av_opt_set_int(video_codec_ctx, "weightb", 0, 0);

        if (fmt_context->oformat->flags & AVFMT_GLOBALHEADER)
           video_codec_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;

       LOGI("Opening video codec");
       AVDictionary *vopts = NULL;
        av_dict_set(&vopts, "profile", "main", 0);
        //av_dict_set(&vopts, "vprofile", "main", 0);
        av_dict_set(&vopts, "rc-lookahead", 0, 0);
        av_dict_set(&vopts, "tune", "film", 0);
        av_dict_set(&vopts, "preset", "ultrafast", 0);
        av_opt_set(video_codec_ctx->priv_data, "tune", "film", 0);
        av_opt_set(video_codec_ctx->priv_data, "preset", "ultrafast", 0);
        av_opt_set(video_codec_ctx->priv_data, "tune", "film", 0);
        int open_res = avcodec_open2(video_codec_ctx, video_codec, &vopts);
        if (open_res < 0) {
           LOGI("Error opening video codec: %i", open_res);
           return JNI_FALSE;   // leak!
       }

       // Open Audio Codec.
       // ======================

       if (enable_audio) {
           AudioBuffer_Clear();
           audio_sample_rate = audio_sample_rate_param;
           AVCodec *audio_codec = avcodec_find_encoder(AV_CODEC_ID_AAC);
           if (!audio_codec) {
               LOGI("Did not find the audio codec");
               return JNI_FALSE;  // leak!
           } else {
               LOGI("Audio codec found!");
           }
           AVCodecContext *audio_codec_ctx = audio_stream->codec;
           audio_codec_ctx->codec_id = audio_codec->id;
           audio_codec_ctx->codec_type = AVMEDIA_TYPE_AUDIO;
           audio_codec_ctx->bit_rate = 128000;
           audio_codec_ctx->bit_rate_tolerance = 16000;
           audio_codec_ctx->channels = 1;
           audio_codec_ctx->profile = FF_PROFILE_AAC_LOW;
           audio_codec_ctx->sample_fmt = AV_SAMPLE_FMT_FLT;
           audio_codec_ctx->sample_rate = 44100;

           LOGI("Opening audio codec");
           AVDictionary *opts = NULL;
            av_dict_set(&opts, "strict", "experimental", 0);
            open_res = avcodec_open2(audio_codec_ctx, audio_codec, &opts);
            LOGI("audio frame size: %i", audio_codec_ctx->frame_size);

            if (open_res < 0) {
               LOGI("Error opening audio codec: %i", open_res);
               return JNI_FALSE;   // leak!
           }
       }

       const jbyte *url = (*env)->GetStringUTFChars(env, rtmp_url, NULL);

       // Point to an output file
        if (!(ofmt->flags & AVFMT_NOFILE)) {
            if (avio_open(&fmt_context->pb, url, URL_WRONLY) < 0) {
               LOGI("ERROR: Could not open file %s", url);
               return JNI_FALSE;  // leak!
           }
       }
       (*env)->ReleaseStringUTFChars(env, rtmp_url, url);

       LOGI("Writing output header.");
       // Write file header
       if (avformat_write_header(fmt_context, NULL) != 0) {
           LOGI("ERROR: av_write_header failed");
           return JNI_FALSE;
       }

       pts = 0;
       last_audio_pts = 0;
       audio_samples_written = 0;

       // Initialize buffers for UV format conversion
       int frame_size = video_codec_ctx->width * video_codec_ctx->height;
       u_buf = (unsigned char *) av_malloc(frame_size / 4);
       v_buf = (unsigned char *) av_malloc(frame_size / 4);

       LOGI("ffmpeg encoding init done");
       return JNI_TRUE;
    }

     JNIEXPORT void JNICALL
     Java_com_example_venrto_ffmpegtest_Ffmpeg_shutdown(JNIEnv *env, jobject thiz) {
        av_write_trailer(fmt_context);
        avio_close(fmt_context->pb);
        avcodec_close(video_stream->codec);
        if (enable_audio) {
            avcodec_close(audio_stream->codec);
        }
        av_free(fmt_context);
        av_free(u_buf);
        av_free(v_buf);

        fmt_context = NULL;
        u_buf = NULL;
        v_buf = NULL;
     }

     JNIEXPORT jint JNICALL
     Java_com_example_venrto_ffmpegtest_Ffmpeg_encodeVideoFrame(JNIEnv *env, jobject thiz,
                                                                 jbyteArray yuv_image) {
        int yuv_length = (*env)->GetArrayLength(env, yuv_image);
        unsigned char *yuv_data = (*env)->GetByteArrayElements(env, yuv_image, 0);

        AVCodecContext *video_codec_ctx = video_stream->codec;
        //LOGI("Yuv size: %i w: %i h: %i", yuv_length, video_codec_ctx->width, video_codec_ctx->height);

        int frame_size = video_codec_ctx->width * video_codec_ctx->height;

        const unsigned char *uv = yuv_data + frame_size;

        // Convert YUV from NV12 to I420. Y channel is the same so we don't touch it,
        // we just have to deinterleave UV.
        for (int i = 0; i < frame_size / 4; i++) {
            v_buf[i] = uv[i * 2];
            u_buf[i] = uv[i * 2 + 1];
        }

        AVFrame source;
        memset(&source, 0, sizeof(AVFrame));
        source.data[0] = yuv_data;
        source.data[1] = u_buf;
        source.data[2] = v_buf;
        source.linesize[0] = video_codec_ctx->width;
        source.linesize[1] = video_codec_ctx->width / 2;
        source.linesize[2] = video_codec_ctx->width / 2;

        // only for bitrate regulation. irrelevant for sync.
        source.pts = pts;
        pts++;

        int out_length = frame_size + (frame_size / 2);
        unsigned char *out = (unsigned char *) av_malloc(out_length);
        int compressed_length = avcodec_encode_video(video_codec_ctx, out, out_length, &source);

        (*env)->ReleaseByteArrayElements(env, yuv_image, yuv_data, 0);

        // Write to file too
        if (compressed_length > 0) {
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.pts = last_audio_pts;
            if (video_codec_ctx->coded_frame && video_codec_ctx->coded_frame->key_frame) {
                pkt.flags |= 0x0001;
            }
            pkt.stream_index = video_stream->index;
            pkt.data = out;
            pkt.size = compressed_length;
            if (av_interleaved_write_frame(fmt_context, &pkt) != 0) {
                LOGI("Error writing video frame");
            }
        } else {
            LOGI("??? compressed_length <= 0");
        }

        last_audio_pts++;

        av_free(out);
        return compressed_length;
     }

     JNIEXPORT jint JNICALL
     Java_com_example_venrto_ffmpegtest_Ffmpeg_encodeAudioFrame(JNIEnv *env, jobject thiz,
                                                                 jshortArray audio_data,
                                                                 jint length) {
        if (!enable_audio) {
            return 0;
        }

        short *audio = (*env)->GetShortArrayElements(env, audio_data, 0);
        //LOGI("java audio buffer size: %i", length);

        AVCodecContext *audio_codec_ctx = audio_stream->codec;

        unsigned char *out = av_malloc(128000);

        AudioBuffer_Push(audio, length);

        int total_compressed = 0;
        while (AudioBuffer_Size() >= audio_codec_ctx->frame_size) {
            AVPacket pkt;
            av_init_packet(&pkt);

            int compressed_length = avcodec_encode_audio(audio_codec_ctx, out, 128000,
                                                         AudioBuffer_Get());

            total_compressed += compressed_length;
            audio_samples_written += audio_codec_ctx->frame_size;

            int new_pts = (audio_samples_written * 1000) / audio_sample_rate;
            if (compressed_length > 0) {
                pkt.size = compressed_length;
                pkt.pts = new_pts;
                last_audio_pts = new_pts;
                //LOGI("audio_samples_written: %i  comp_length: %i   pts: %i", (int)audio_samples_written, (int)compressed_length, (int)new_pts);
                pkt.flags |= 0x0001;
                pkt.stream_index = audio_stream->index;
                pkt.data = out;
                if (av_interleaved_write_frame(fmt_context, &pkt) != 0) {
                    LOGI("Error writing audio frame");
                }
            }
            AudioBuffer_Pop(audio_codec_ctx->frame_size);
        }

        (*env)->ReleaseShortArrayElements(env, audio_data, audio, 0);

        av_free(out);
        return total_compressed;
     }

    This is an example of a header file in which I get an include error:

    #include "libavutil/samplefmt.h"
    #include "libavutil/avutil.h"
    #include "libavutil/cpu.h"
    #include "libavutil/dict.h"
    #include "libavutil/log.h"
    #include "libavutil/pixfmt.h"
    #include "libavutil/rational.h"

    #include "libavutil/version.h"

    These includes are in libavcodec/avcodec.h.

    This code is based on this example:
    https://github.com/youtube/yt-watchme

  • FFMPEG "buffer queue overflow, dropping" with trim and atrim filters

    11 December 2016, by Prasanna Mahendiran

    In FFmpeg I am trimming and concatenating a 24 FPS video. When I apply a complex filter

    ffmpeg -i sample.mp4 -filter_complex \
     "[0:v]setpts = PTS-STARTPTS[bv];
     [bv]split=6[v0][v1][v2][v3][v4][v5];
     [v0]trim=start_frame=1:end_frame=142,loop=1:1:1,setpts=N/FRAME_RATE/TB[0v];
     [v1]trim=start_frame=846:end_frame=878,loop=1:1:1,setpts=N/FRAME_RATE/TB[1v];
     [v2]trim=start_frame=57:end_frame=114,loop=1:1:1,setpts=N/FRAME_RATE/TB[2v];
     [v3]trim=start_frame=865:end_frame=885,loop=1:1:1,setpts=N/FRAME_RATE/TB[3v];
     [v4]trim=start_frame=70:end_frame=155,loop=1:1:1,setpts=N/FRAME_RATE/TB[4v];
     [v5]trim=start_frame=155:end_frame=909,loop=1:1:1,setpts=N/FRAME_RATE/TB[5v];
     [0:a]asplit=6[a0][a1][a2][a3][a4][a5];
     [a0]atrim=0.041666666666666664:5.917,asetpts=N/SR/TB[0a];
     [a1]atrim=35.256:36.603,asetpts=N/SR/TB[1a];
     [a2]atrim=2.379:4.767,asetpts=N/SR/TB[2a];
     [a3]atrim=36.024:36.859,asetpts=N/SR/TB[3a];
     [a4]atrim=2.93:6.438172,asetpts=N/SR/TB[4a];
     [a5]atrim=6.438172:37.895,asetpts=N/SR/TB[5a];
     [0v][0a][1v][1a][2v][2a][3v][3a][4v][4a][5v][5a]concat=n=6:v=1:a=1[vv][aa]"\
     -map "[vv]" -map "[aa]" output.mp4

    I am getting "Buffer queue overflow, dropping" errors. The resulting video and audio are frozen and do not play properly.

    ffmpeg version 3.2-1~16.04.york1 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 5.4.1 (Ubuntu 5.4.1-3ubuntu1~ubuntu16.04.1york0) 20161019
     configuration: --prefix=/usr --extra-version='1~16.04.york1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-libtesseract --disable-stripping --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librubberband --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-opengl --enable-sdl2 --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-openal --enable-frei0r --enable-libopencv --enable-libx264 --enable-chromaprint --enable-shared
     libavutil      55. 34.100 / 55. 34.100
     libavcodec     57. 64.100 / 57. 64.100
     libavformat    57. 56.100 / 57. 56.100
     libavdevice    57.  1.100 / 57.  1.100
     libavfilter     6. 65.100 /  6. 65.100
     libavresample   3.  1.  0 /  3.  1.  0
     libswscale      4.  2.100 /  4.  2.100
     libswresample   2.  3.100 /  2.  3.100
     libpostproc    54.  1.100 / 54.  1.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'sample.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       track           : 0
       artist          :
       album           :
       date            : 0
       genre           :
       lyrics          :
       title           :
       encoder         : Lavf56.36.100
     Duration: 00:00:37.90, start: 0.000000, bitrate: 951 kb/s
       Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x480 [SAR 1:1 DAR 4:3], 820 kb/s, 24 fps, 24 tbr, 12288 tbn, 48 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 126 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
    File 'output.mp4' already exists. Overwrite ? [y/N] y
    [libx264 @ 0x55650097a540] using SAR=1/1
    [libx264 @ 0x55650097a540] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
    [libx264 @ 0x55650097a540] profile High, level 3.0
    [libx264 @ 0x55650097a540] 264 - core 148 r2643 5c65704 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=24 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'output.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       track           : 0
       artist          :
       album           :
       date            : 0
       genre           :
       lyrics          :
       title           :
       encoder         : Lavf57.56.100
       Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 640x480 [SAR 1:1 DAR 4:3], q=-1--1, 24 fps, 12288 tbn, 24 tbc (default)
       Metadata:
         encoder         : Lavc57.64.100 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
       Stream #0:1: Audio: aac (LC) ([64][0][0][0] / 0x0040), 44100 Hz, stereo, fltp, 128 kb/s (default)
       Metadata:
         encoder         : Lavc57.64.100 aac
    Stream mapping:
     Stream #0:0 (h264) -> setpts
     Stream #0:1 (aac) -> asplit
     concat:out:v0 -> Stream #0:0 (libx264)
     concat:out:a0 -> Stream #0:1 (aac)
    Press [q] to stop, [?] for help
    [Parsed_concat_33 @ 0x55650097b420] Buffer queue overflow, dropping. 471.5kbits/s speed=4.94x    
       Last message repeated 201 times
    [Parsed_concat_33 @ 0x55650097b420] Buffer queue overflow, dropping. 522.9kbits/s speed=3.89x    
       Last message repeated 1266 times
    [Parsed_concat_33 @ 0x55650097b420] Buffer queue overflow, dropping. 557.0kbits/s speed=3.28x    
       Last message repeated 48 times
    [output stream 0:1 @ 0x556500947e20] 100 buffers queued in output stream 0:1, something may be wrong.
    [Parsed_concat_33 @ 0x55650097b420] Buffer queue overflow, dropping. 718.6kbits/s speed=3.46x    
       Last message repeated 19 times
    [output stream 0:0 @ 0x5565009785c0] 100 buffers queued in output stream 0:0, something may be wrong.
    frame= 1091 fps=117 q=-1.0 Lsize=    2795kB time=00:00:45.51 bitrate= 503.1kbits/s dup=475 drop=0 speed=4.88x    
    video:2455kB audio:316kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.861779%
    [libx264 @ 0x55650097a540] frame I:8     Avg QP:19.26  size: 24207
    [libx264 @ 0x55650097a540] frame P:409   Avg QP:21.33  size:  4108
    [libx264 @ 0x55650097a540] frame B:674   Avg QP:27.46  size:   949
    [libx264 @ 0x55650097a540] consecutive B-frames: 10.3% 13.9% 24.5% 51.3%
    [libx264 @ 0x55650097a540] mb I  I16..4:  9.9% 57.0% 33.1%
    [libx264 @ 0x55650097a540] mb P  I16..4:  3.6%  7.6%  2.9%  P16..4: 33.0% 10.6%  3.0%  0.0%  0.0%    skip:39.2%
    [libx264 @ 0x55650097a540] mb B  I16..4:  0.4%  0.8%  0.4%  B16..8: 24.5%  2.6%  0.2%  direct: 0.5%  skip:70.5%  L0:55.5% L1:41.8% BI: 2.7%
    [libx264 @ 0x55650097a540] 8x8 transform intra:53.8% inter:66.7%
    [libx264 @ 0x55650097a540] coded y,uvDC,uvAC intra: 44.6% 50.0% 14.8% inter: 6.2% 7.7% 0.2%
    [libx264 @ 0x55650097a540] i16 v,h,dc,p: 22% 28% 17% 33%
    [libx264 @ 0x55650097a540] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 23% 28%  3%  4%  3% 11%  3%  5%
    [libx264 @ 0x55650097a540] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 26% 26% 16%  2%  5%  3% 16%  3%  3%
    [libx264 @ 0x55650097a540] i8c dc,h,v,p: 60% 22% 13%  6%
    [libx264 @ 0x55650097a540] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x55650097a540] ref P L0: 72.6%  8.4% 15.1%  3.9%
    [libx264 @ 0x55650097a540] ref B L0: 88.5% 10.7%  0.8%
    [libx264 @ 0x55650097a540] ref B L1: 93.3%  6.7%
    [libx264 @ 0x55650097a540] kb/s:442.30
    [aac @ 0x556500979280] Qavg: 3215.870

    I tried the suggestions from other Stack Overflow questions, but none of them worked. I also think it is partly because the trim timings are out of order; that is, a segment's start time can be anywhere between 0 and the end of the file. When I make the start times strictly increasing, it works fine.
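
    A workaround that is often suggested for this pattern (a sketch, not verified against this particular clip) is to avoid split/asplit when the trim ranges are not in increasing order: every branch of the split receives the same decoded frames, so segments that come late in the concat force large numbers of frames to queue up until the buffer overflows. Opening the same file several times as separate inputs lets each trim consume only the frames it needs. Abbreviated to the first two segments, with the loop filter omitted:

    ffmpeg -i sample.mp4 -i sample.mp4 -filter_complex \
     "[0:v]trim=start_frame=1:end_frame=142,setpts=N/FRAME_RATE/TB[0v];
      [1:v]trim=start_frame=846:end_frame=878,setpts=N/FRAME_RATE/TB[1v];
      [0:a]atrim=0.041666666666666664:5.917,asetpts=N/SR/TB[0a];
      [1:a]atrim=35.256:36.603,asetpts=N/SR/TB[1a];
      [0v][0a][1v][1a]concat=n=2:v=1:a=1[vv][aa]" \
     -map "[vv]" -map "[aa]" output.mp4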