
Other articles (69)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution name   Version name            Version number
        Debian              Squeeze                 6.x.x
        Debian              Wheezy                  7.x.x
        Debian              Jessie                  8.x.x
        Ubuntu              The Precise Pangolin    12.04 LTS
        Ubuntu              The Trusty Tahr         14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the fixes needed to add it (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the sources of MediaSPIP in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Making files available

    14 April 2011

    By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are the originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is possible and easy to give visitors access to these documents in various forms.
    All of this is handled in the skeleton configuration page. You need to go to the channel's administration area and choose, in the navigation, (...)

On other sites (8106)

  • libav ffmpeg codec copy rtp_mpegts streaming with very bad quality

    27 December 2017, by Dinkan

    I am trying to do a codec copy of a stream (testing with a file now, and later going to use a live stream) in the rtp_mpegts format over the network, and play it using the VLC player. I started my proof-of-concept code from a slightly modified remuxing.c from the examples.

    What I am essentially trying to do is replicate
    ./ffmpeg -re -i TEST_VIDEO.ts -acodec copy -vcodec copy -f rtp_mpegts rtp://239.245.0.2:5002
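
    For context, the -re flag in that command makes the command-line tool read the input at its native frame rate. The API program shown further below has no equivalent pacing; a minimal, hypothetical sketch of such pacing (assuming the libavutil av_gettime_relative()/av_usleep() helpers, not anything taken from remuxing.c) could look like this:

    #include <libavutil/avutil.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/time.h>

    /* Sleep until a packet is "due" according to its dts, relative to the
     * wall-clock instant streaming started (start_us = av_gettime_relative()
     * taken just before the write loop). This only illustrates what -re does. */
    static void pace_to_dts(int64_t dts, AVRational time_base, int64_t start_us)
    {
        if (dts == AV_NOPTS_VALUE)
            return;
        int64_t due_us     = av_rescale_q(dts, time_base, AV_TIME_BASE_Q);
        int64_t elapsed_us = av_gettime_relative() - start_us;
        if (due_us > elapsed_us)
            av_usleep((unsigned)(due_us - elapsed_us));
    }

    Calling something like this on each packet (with the input stream's time_base) before av_interleaved_write_frame() would throttle sending to real time; whether that is relevant to the quality problem described next is a separate question.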

    Streaming happens, but the quality is terrible. It looks like many frames are skipped, and the streaming runs really slowly (VLC reports a buffer underflow).

    The file plays perfectly fine directly in the VLC player.
    Please help.

    Stream details:

    Input #0, mpegts, from 'TEST_VIDEO.ts':
      Duration: 00:10:00.40, start: 41313.400811, bitrate: 2840 kb/s
      Program 1
        Stream #0:0[0x11]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, top first), 1440x1080 [SAR 4:3 DAR 16:9], 29.97 fps, 59.94 tbr, 90k tbn, 59.94 tbc
        Stream #0:1[0x14]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 448 kb/s
    Output #0, rtp_mpegts, to 'rtp://239.255.0.2:5004':
      Metadata:
        encoder         : Lavf57.83.100
        Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, top first), 1440x1080 [SAR 4:3 DAR 16:9], q=2-31, 29.97 fps, 59.94 tbr, 90k tbn, 29.97 tbc
        Stream #0:1: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 448 kb/s
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #0:1 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    frame=  418 fps=5.2 q=-1.0 size=    3346kB time=00:00:08.50 bitrate=3223.5kbits/s speed=0.106x

    My complete source code (this is almost the same as remuxing.c):

    #include <libavutil/timestamp.h>
    #include <libavformat/avformat.h>

    static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
    {
        AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;

        printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
               tag,
               av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
               av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
               av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
               pkt->stream_index);
    }


    int main(int argc, char **argv)
    {
        AVOutputFormat *ofmt = NULL;
        AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
        AVPacket pkt;
        const char *in_filename, *out_filename;
        int ret, i;
        int stream_index = 0;
        int *stream_mapping = NULL;
        int stream_mapping_size = 0;
        AVRational mux_timebase;
        int64_t start_time = 0; //(of->start_time == AV_NOPTS_VALUE) ? 0 : of->start_time;
        int64_t ost_tb_start_time = 0; //av_rescale_q(start_time, AV_TIME_BASE_Q, ost->mux_timebase);

        if (argc < 3) {
            printf("usage: %s input output\n"
                   "API example program to remux a media file with libavformat and libavcodec.\n"
                   "The output format is guessed according to the file extension.\n"
                   "\n", argv[0]);
            return 1;
        }

        in_filename  = argv[1];
        out_filename = argv[2];

        av_register_all();
        avcodec_register_all();
        avformat_network_init();

        if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
            fprintf(stderr, "Could not open input file '%s'", in_filename);
            goto end;
        }

        if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
            fprintf(stderr, "Failed to retrieve input stream information");
            goto end;
        }

        av_dump_format(ifmt_ctx, 0, in_filename, 0);

        avformat_alloc_output_context2(&ofmt_ctx, NULL, "rtp_mpegts", out_filename);
        if (!ofmt_ctx) {
            fprintf(stderr, "Could not create output context\n");
            ret = AVERROR_UNKNOWN;
            goto end;
        }

        stream_mapping_size = ifmt_ctx->nb_streams;
        stream_mapping = av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));
        if (!stream_mapping) {
            ret = AVERROR(ENOMEM);
            goto end;
        }

        ofmt = ofmt_ctx->oformat;

        for (i = 0; i < ifmt_ctx->nb_streams; i++)
        {
            AVStream *out_stream;
            AVStream *in_stream = ifmt_ctx->streams[i];
            AVCodecParameters *in_codecpar = in_stream->codecpar;

            if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
                in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
                in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
                stream_mapping[i] = -1;
                continue;
            }

            stream_mapping[i] = stream_index++;


            out_stream = avformat_new_stream(ofmt_ctx, NULL);
            if (!out_stream) {
                fprintf(stderr, "Failed allocating output stream\n");
                ret = AVERROR_UNKNOWN;
                goto end;
            }

            //out_stream->codecpar->codec_tag = 0;
            if (0 == out_stream->codecpar->codec_tag)
            {
                unsigned int codec_tag_tmp;

                if (!out_stream->codecpar->codec_tag ||
                    av_codec_get_id(ofmt->codec_tag, in_codecpar->codec_tag) == in_codecpar->codec_id ||
                    !av_codec_get_tag2(ofmt->codec_tag, in_codecpar->codec_id, &codec_tag_tmp))
                    out_stream->codecpar->codec_tag = in_codecpar->codec_tag;
            }
            //ret = avcodec_parameters_to_context(ost->enc_ctx, ist->st->codecpar);

            ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
            if (ret < 0) {
                fprintf(stderr, "Failed to copy codec parameters\n");
                goto end;
            }
            //out_stream->codecpar->codec_tag = codec_tag;
            // copy timebase while removing common factors

            printf("bit_rate %lld sample_rate %d frame_size %d\n",
                   in_codecpar->bit_rate, in_codecpar->sample_rate, in_codecpar->frame_size);

            out_stream->avg_frame_rate = in_stream->avg_frame_rate;

            ret = avformat_transfer_internal_stream_timing_info(ofmt, out_stream, in_stream, AVFMT_TBCF_AUTO);
            if (ret < 0) {
                fprintf(stderr, "avformat_transfer_internal_stream_timing_info failed\n");
                goto end;
            }

            if (out_stream->time_base.num <= 0 || out_stream->time_base.den <= 0)
                out_stream->time_base = av_add_q(av_stream_get_codec_timebase(out_stream), (AVRational){0, 1});

            // copy estimated duration as a hint to the muxer
            if (out_stream->duration <= 0 && in_stream->duration > 0)
                out_stream->duration = av_rescale_q(in_stream->duration, in_stream->time_base, out_stream->time_base);

            // copy disposition
            out_stream->disposition = in_stream->disposition;

            out_stream->sample_aspect_ratio = in_stream->sample_aspect_ratio;
            out_stream->avg_frame_rate = in_stream->avg_frame_rate;
            out_stream->r_frame_rate = in_stream->r_frame_rate;

            if (in_codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
            {
                mux_timebase = in_stream->time_base;
            }


            if (in_stream->nb_side_data) {
                for (i = 0; i < in_stream->nb_side_data; i++) {
                    const AVPacketSideData *sd_src = &in_stream->side_data[i];
                    uint8_t *dst_data;

                    dst_data = av_stream_new_side_data(out_stream, sd_src->type, sd_src->size);
                    if (!dst_data)
                        return AVERROR(ENOMEM);
                    memcpy(dst_data, sd_src->data, sd_src->size);
                }
            }
        }

        av_dump_format(ofmt_ctx, 0, out_filename, 1);

        start_time = ofmt_ctx->duration;
        ost_tb_start_time = av_rescale_q(ofmt_ctx->duration, AV_TIME_BASE_Q, mux_timebase);

        if (!(ofmt->flags & AVFMT_NOFILE))
        {
            ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
            if (ret < 0) {
                fprintf(stderr, "Could not open output file '%s'", out_filename);
                goto end;
            }
        }

        ret = avformat_write_header(ofmt_ctx, NULL);
        if (ret < 0) {
            fprintf(stderr, "Error occurred when opening output file\n");
            goto end;
        }

        while (1)
        {
            AVStream *in_stream, *out_stream;

            ret = av_read_frame(ifmt_ctx, &pkt);
            if (ret < 0)
                break;

            in_stream  = ifmt_ctx->streams[pkt.stream_index];
            if (pkt.stream_index >= stream_mapping_size ||
                stream_mapping[pkt.stream_index] < 0) {
                av_packet_unref(&pkt);
                continue;
            }

            pkt.stream_index = stream_mapping[pkt.stream_index];
            out_stream = ofmt_ctx->streams[pkt.stream_index];

            //log_packet(ifmt_ctx, &pkt, "in");


            //ofmt_ctx->bit_rate = ifmt_ctx->bit_rate;
            ofmt_ctx->duration = ifmt_ctx->duration;
            /* copy packet */
            //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
            //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

            if (pkt.pts != AV_NOPTS_VALUE)
                pkt.pts = av_rescale_q(pkt.pts, in_stream->time_base, mux_timebase) - ost_tb_start_time;
            else
                pkt.pts = AV_NOPTS_VALUE;

            if (pkt.dts == AV_NOPTS_VALUE)
                pkt.dts = av_rescale_q(pkt.dts, AV_TIME_BASE_Q, mux_timebase);
            else
                pkt.dts = av_rescale_q(pkt.dts, in_stream->time_base, mux_timebase);
            pkt.dts -= ost_tb_start_time;

            pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, mux_timebase);
            //pkt.duration = av_rescale_q(1, av_inv_q(out_stream->avg_frame_rate), mux_timebase);
            pkt.pos = -1;
            //log_packet(ofmt_ctx, &pkt, "out");


            ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
            if (ret < 0) {
                fprintf(stderr, "Error muxing packet\n");
                break;
            }
            av_packet_unref(&pkt);
        }

        av_write_trailer(ofmt_ctx);
    end:

        avformat_close_input(&ifmt_ctx);

        /* close output */
        if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
            avio_closep(&ofmt_ctx->pb);
        avformat_free_context(ofmt_ctx);

        av_freep(&stream_mapping);

        if (ret < 0 && ret != AVERROR_EOF) {
            fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
            return 1;
        }

        return 0;
    }
  • What is Google Analytics data sampling and what’s so bad about it?

    16 August 2019, by Joselyn Khor (Analytics Tips, Development)

    Google (2019) explains what data sampling is:

    “In data analysis, sampling is the practice of analysing a subset of all data in order to uncover the meaningful information in the larger data set.”[1]

    This basically means that instead of analysing all of the data, there is a threshold on how much data is analysed; anything beyond that threshold is an estimate extrapolated from patterns in the sampled subset.

    Google’s (2019) data sampling thresholds:

    Ad-hoc queries of your data are subject to the following general thresholds for sampling:
    [Google] Analytics Standard: 500k sessions at the property level for the date range you are using
    [Google] Analytics 360: 100M sessions at the view level for the date range you are using (para. 3) [2]

    These thresholds are limiting because your GA data may become less and less accurate as the traffic to your website increases.

    Say you’re looking through all your traffic data from the last year and find you have 5 million page views. Only 500K of that 5 million is accurate! The data for the remaining 4.5 million (90%) is an assumption based on the 500K sample size.
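
    As a quick sanity check of those proportions (an illustrative sketch using the example numbers above, not anything from Google’s documentation):

    #include <stdio.h>

    int main(void)
    {
        /* Example figures from the paragraph above: a report covering
           5 million hits, of which only 500K are analysed directly
           before sampling kicks in. */
        const double total   = 5000000.0;
        const double sampled =  500000.0;

        printf("analysed directly: %.0f%%\n", 100.0 * sampled / total);            /* 10% */
        printf("extrapolated:      %.0f%%\n", 100.0 * (total - sampled) / total);  /* 90% */
        return 0;
    }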

    This is a key weapon Google uses to sell to large businesses. In order to increase that threshold for more accurate reporting, upgrading to premium Google Analytics 360 for approximately US$150,000 per year seems to be the only choice.

    What’s so bad about data sampling?

    It would be unfair to say sampled data should be disregarded completely. There is a calculation ensuring it is representative, and it can give you good-enough insights. However, we don’t encourage it, as we don’t just want “good enough” data. We want the actual facts.

    In a recent survey sent to Matomo customers, we found a large proportion of users switched from GA to Matomo due to the data sampling issue.

    The two reasons why data sampling isn’t preferable:

    1. If the selected sample size is too small, you won’t get a good representation of all the data.
    2. The bigger your website grows, the more inaccurate your reports will become.

    As an example of why we don’t fully trust sampled data: say you have an ecommerce store and notice that your GA revenue reports aren’t matching the actual sales data, due to data sampling. In GA you may see revenue for the month reported as $1 million, instead of actual sales of $800K.

    The sampling here has caused an inaccuracy that could have negative financial implications. What you get in the GA report is an estimated dollar figure rather than the actual sales. Making decisions based on inaccurate data can be costly in this case. 

    Another disadvantage of sampled data is that you might be missing out on opportunities you would have noticed if you were given a view of the whole, e.g. not being able to see real patterns occurring because the data has already been extrapolated.

    Not getting the chance to see things as they are, and only being able to jump to the conclusions and assumptions made by GA, is risky. The bigger your business grows, the less you can risk making business decisions based on assumptions that could be inaccurate.

    If you feel you could be missing out on opportunities because your GA data is sampled data, get 100% accurately reported data. 

    The benefits of 100% accurate data

    Matomo doesn’t use data sampling on any of our products or plans. You get to see all of your data and not a sampled data set.

    Data quality is necessary for high impact decision-making. It’s hard to make strategic changes if you don’t have confidence that your data is reliable and accurate.

    Learn about how Matomo is a serious contender to Google Analytics 360. 

    Now you can import your Google Analytics data directly into your Matomo

    If you’re wanting to make the switch to Matomo but worried about losing all your historic Google Analytics data, you can now import this directly into your Matomo with the Google Analytics Importer tool.


    Take the challenge!

    Compare your Google Analytics data (sampled data) against your Matomo data, or if you don’t have Matomo data yet, sign up to our 30-day free trial and start tracking!

    References:

    [1 & 2] About data sampling. (2019). In Analytics Help. Retrieved August 14, 2019, from https://support.google.com/analytics/answer/2637192

  • ffmpeg android concatenation with different codec [on hold]

    31 March 2016, by raphwilmes

    I’m trying to concatenate 10 videos, but 5 of them have
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 600x976 [SAR 1:1 DAR 75:122], 17 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)

    And the rest have
    Stream #1:0(eng): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x480, 175 kb/s, SAR 65536:65536 DAR 4:3, 7.69 fps, 23.08 tbr, 90k tbn, 15 tbc (default)

    And my code is:

    String commandForMerge = "";
    for(int i = 1; i&lt;=numberQuestion;i++){
       commandForMerge += "-i "+toDir+"/image"+i+".mp4 -i "+toDir+"/"+i+".mp4 ";
    }
    String cmd = commandForMerge+"-filter_complex concat=n=10:v=1:a=1 -y "+toDir+"/input.mp4";
    execFFmpegBinaryMerge(cmd);

    Here is my function:

    private void execFFmpegBinaryMerge(final String command) {
       try {

           ffmpeg.execute(command, new ExecuteBinaryResponseHandler() {
               @Override
               public void onFailure(String s) {
               }

               @Override
               public void onSuccess(String s) {
               }

               @Override
               public void onProgress(String s) {
                   txtProcess.setText(txtProcess.getText()+"\n "+s.toString());
               }

               @Override
               public void onStart() {
                   txtProcess.setText("mise en place...");
               }

               @Override
               public void onFinish() {

               }
           });
       } catch (FFmpegCommandAlreadyRunningException e) {
           // do nothing for now
       }
    }

    I use this library : https://github.com/WritingMinds/ffmpeg-android-java

    Edit:

    Now I get this: Unable to find a suitable output format for ’[1:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v1] ;’ Invalid argument

    My command is: ffmpeg -i image1.mp4 -i 1.mp4 -i image2.mp4 -i 2.mp4 -i image3.mp4 -i 3.mp4 -i image4.mp4 -i 4.mp4 -i image5.mp4 -i 5.mp4

    -filter_complex
    "[0:v]scale=-1:480,pad=640:0 :(ow-iw)/2,setsar=1,setpts=PTS-STARTPTS[v0] ;
    [1:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v1] ;
    [2:v]scale=-1:480,pad=640:0 :(ow-iw)/2,setsar=1,setpts=PTS-STARTPTS[v2] ;
    [3:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v3] ;
    [4:v]scale=-1:480,pad=640:0 :(ow-iw)/2,setsar=1,setpts=PTS-STARTPTS[v4] ;
    [5:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v5] ;
    [6:v]scale=-1:480,pad=640:0 :(ow-iw)/2,setsar=1,setpts=PTS-STARTPTS[v6] ;
    [7:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v7] ;
    [8:v]scale=-1:480,pad=640:0 :(ow-iw)/2,setsar=1,setpts=PTS-STARTPTS[v8] ;
    [9:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v9] ;"
    ’[v0][0:a][v1][1:a][v2][2:a][v3][3:a][v4][4:a][v5][5:a][v6][6:a][v7][7:a][v8][8:a][v9][9:a]’ concat=n=10:v=1:a=1 -y output.mp4

    Now the "[0:v]scale=-1:480,pad=640:0 :(ow-iw)/2,setsar=1,setpts=PTS-STARTPTS[v0] ;" pass but not the [1:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v1] ;

    output:

    WARNING: linker: /data/data/vrapp_mobile.com.duoquiz2/files/ffmpeg has text relocations. This is wasting memory and is a security risk. Please fix.
    ffmpeg version n2.4.2 Copyright (c) 2000-2014 the FFmpeg developers, built on Oct 7 2014 15:08:46 with gcc 4.8 (GCC)
    configuration: --target-os=linux --cross-prefix=/home/sb/Source-Code/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/sb/Source-Code/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/sb/Source-Code/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/sb/Source-Code/ffmpeg-android/build/armeabi-v7a-neon --extra-cflags='-I/home/sb/Source-Code/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all -mfpu=neon' --extra-ldflags='-L/home/sb/Source-Code/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
    libavutil      54.  7.100 / 54.  7.100
    libavcodec     56.  1.100 / 56.  1.100
    libavformat    56.  4.101 / 56.  4.101
    libavdevice    56.  0.100 / 56.  0.100
    libavfilter     5.  1.100 /  5.  1.100
    libswscale      3.  0.100 /  3.  0.100
    libswresample   1.  1.100 /  1.  1.100
    libpostproc    53.  0.100 / 53.  0.100
    Input #0,mov,mp4,m4a,3gp,mj2, from ’/storage/emulated/0/DuoQuiz/image1.mp4’ :
    Metadata :
    major_brand :isom
    minor_version :512
    compatible_brands:isomiso2avc1mp41
    encoder :Lavf56.4.101
    Duration : 00:00:02.40, start : 0.023021 , bitrate : 334 kb/s
    Stream #0:0(und) : Video : h264 (High) (avc1 / 0x31637661), yuv420p, 600x976 [SAR 1:1 DAR 75:122], 17 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
    Metadata :
    handler_name : VideoHandler
    Stream #0:1(und) : Audio.mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, s16p, 320 kb/s (default)
    Metadata :
    handler_name :SoundHandler
    Input #1,mov,mp4,m4a,3gp,3g2,mj2, from ’/storage/emulated/0/DuoQuiz/1.mp4’ :
    Metadata :
    major_brand :isom
    minor_version :0
    compatible_brands:isom3gp4
    creation_time :2016-03-21 16:05:25
    Duration : 00:00:02.73, start : 0.000000, bitrate : 229 kb/s
    Stream #1:0(eng) : Video : h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x480x 175 kb/s, SAR 65536:65536 DAR 4:3, 7.69frp, 23.08 tbr, 90k tbn, 15tbc (default)
    Metadata :
    rotate :270
    creation_time :2016-03-21 16:05:25
    handler_name :VideoHandle
    Side data :
    displaymatrix : rotation of 90.00 degrees
    Stream #1:1(eng) : Audio : aac (mp4a / 0x6134706D),44100 Hz, mono, fltp, 49 kb/s (default)
    Metadata :
    creation_time :2016-03-21 16:05:25
    handler_name :SoundHandle
    Input #2,mov,mp4,m4a,3gp,mj2, from ’/storage/emulated/0/DuoQuiz/image2.mp4’ :
    Metadata :
    major_brand :isom
    minor_version :512
    compatible_brands:isomiso2avc1mp41
    encoder :Lavf56.4.101
    Duration : 00:00:02.40, start : 0.023021 , bitrate : 334 kb/s
    Stream #0:0(und) : Video : h264 (High) (avc1 / 0x31637661), yuv420p, 600x976 [SAR 1:1 DAR 75:122], 17 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
    Metadata :
    handler_name : VideoHandler
    Stream #0:1(und) : Audio.mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, s16p, 320 kb/s (default)
    Metadata :
    handler_name :SoundHandler
    Input #3,mov,mp4,m4a,3gp,3g2,mj2, from ’/storage/emulated/0/DuoQuiz/2.mp4’ :
    Metadata :
    major_brand :isom
    minor_version :0
    compatible_brands:isom3gp4
    creation_time :2016-03-21 16:05:25
    Duration : 00:00:02.73, start : 0.000000, bitrate : 229 kb/s
    Stream #1:0(eng) : Video : h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x480x 175 kb/s, SAR 65536:65536 DAR 4:3, 7.69frp, 23.08 tbr, 90k tbn, 15tbc (default)
    Metadata :
    rotate :270
    creation_time :2016-03-21 16:05:25
    handler_name :VideoHandle
    Side data :
    displaymatrix : rotation of 90.00 degrees
    Stream #1:1(eng) : Audio : aac (mp4a / 0x6134706D),44100 Hz, mono, fltp, 49 kb/s (default)
    Metadata :
    creation_time :2016-03-21 16:05:25
    handler_name :SoundHandle
    Input #4,mov,mp4,m4a,3gp,mj2, from ’/storage/emulated/0/DuoQuiz/image3.mp4’ :
    Metadata :
    major_brand :isom
    minor_version :512
    compatible_brands:isomiso2avc1mp41
    encoder :Lavf56.4.101
    Duration : 00:00:02.40, start : 0.023021 , bitrate : 334 kb/s
    Stream #0:0(und) : Video : h264 (High) (avc1 / 0x31637661), yuv420p, 600x976 [SAR 1:1 DAR 75:122], 17 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
    Metadata :
    handler_name : VideoHandler
    Stream #0:1(und) : Audio.mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, s16p, 320 kb/s (default)
    Metadata :
    handler_name :SoundHandler
    Input #5,mov,mp4,m4a,3gp,3g2,mj2, from ’/storage/emulated/0/DuoQuiz/3.mp4’ :
    Metadata :
    major_brand :isom
    minor_version :0
    compatible_brands:isom3gp4
    creation_time :2016-03-21
    Duration : 00:00:02.73, start : 0.000000, bitrate : 229 kb/s
    Stream #1:0(eng) : Video : h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x480x 175 kb/s, SAR 65536:65536 DAR 4:3, 7.69frp, 23.08 tbr, 90k tbn, 15tbc (default)
    Metadata :
    rotate :270
    creation_time :2016-03-2116:05:25
    handler_name :VideoHandle
    Side data :
    displaymatrix : rotation of 90.00 degrees
    Stream #1:1(eng) : Audio : aac (mp4a / 0x6134706D),44100 Hz, mono, fltp, 49 kb/s (default)
    Metadata :
    creation_time :2016-03-21 16:05:25
    handler_name :SoundHandle
    Input #6,mov,mp4,m4a,3gp,mj2, from ’/storage/emulated/0/DuoQuiz/image4.mp4’ :
    Metadata :
    major_brand :isom
    minor_version :512
    compatible_brands:isomiso2avc1mp41
    encoder :Lavf56.4.101
    Duration : 00:00:02.40, start : 0.023021 , bitrate : 334 kb/s
    Stream #0:0(und) : Video : h264 (High) (avc1 / 0x31637661), yuv420p, 600x976 [SAR 1:1 DAR 75:122], 17 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
    Metadata :
    handler_name : VideoHandler
    Stream #0:1(und) : Audio.mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, s16p, 320 kb/s (default)
    Metadata :
    handler_name :SoundHandler
    Input #7,mov,mp4,m4a,3gp,3g2,mj2, from ’/storage/emulated/0/DuoQuiz/4.mp4’ :
    Metadata :
    major_brand :isom
    minor_version :0
    compatible_brands:isom3gp4
    creation_time :2016-03-21
    Duration : 00:00:02.73, start : 0.000000, bitrate : 229 kb/s
    Stream #1:0(eng) : Video : h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x480x 175 kb/s, SAR 65536:65536 DAR 4:3, 7.69frp, 23.08 tbr, 90k tbn, 15tbc (default)
    Metadata :
    rotate :270
    creation_time :2016-03-2116:05:25
    handler_name :VideoHandle
    Side data :
    displaymatrix : rotation of 90.00 degrees
    Stream #1:1(eng) : Audio : aac (mp4a / 0x6134706D),44100 Hz, mono, fltp, 49 kb/s (default)
    Metadata :
    creation_time :2016-03-21 16:05:25
    handler_name :SoundHandle
    Input #8,mov,mp4,m4a,3gp,mj2, from ’/storage/emulated/0/DuoQuiz/image5.mp4’ :
    Metadata :
    major_brand :isom
    minor_version :512
    compatible_brands:isomiso2avc1mp41
    encoder :Lavf56.4.101
    Duration : 00:00:02.40, start : 0.023021 , bitrate : 334 kb/s
    Stream #0:0(und) : Video : h264 (High) (avc1 / 0x31637661), yuv420p, 600x976 [SAR 1:1 DAR 75:122], 17 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
    Metadata :
    handler_name : VideoHandler
    Stream #0:1(und) : Audio.mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, s16p, 320 kb/s (default)
    Metadata :
    handler_name :SoundHandler
    Input #9,mov,mp4,m4a,3gp,3g2,mj2, from ’/storage/emulated/0/DuoQuiz/5.mp4’ :
    Metadata :
    major_brand :isom
    minor_version :0
    compatible_brands:isom3gp4
    creation_time :2016-03-21
    Duration : 00:00:02.73, start : 0.000000, bitrate : 229 kb/s
    Stream #1:0(eng) : Video : h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x480x 175 kb/s, SAR 65536:65536 DAR 4:3, 7.69frp, 23.08 tbr, 90k tbn, 15tbc (default)
    Metadata :
    rotate :270
    creation_time :2016-03-2116:05:25
    handler_name :VideoHandle
    Side data :
    displaymatrix : rotation of 90.00 degrees
    Stream #1:1(eng) : Audio : aac (mp4a / 0x6134706D),44100 Hz, mono, fltp, 49 kb/s (default)
    Metadata :
    creation_time :2016-03-21 16:05:25
    handler_name :SoundHandle
    [AVFilterGraph @ 0x2b5b2e10] No such filter: ""
    Error configuring filter

    On my computer, I tried this command:

    -i image1.mp4 -i 1.mp4 -i image2.mp4 -i 2.mp4 -i image3.mp4 -i 3.mp4 -i image4.mp4 -i 4.mp4 -i image5.mp4 -i 5.mp4 -filter_complex "[0:v]scale=-1:480,pad=640:0:(ow-iw)/2,setsar=1,setpts=PTS-STARTPTS[v0]; [1:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v1]; [2:v]scale=-1:480,pad=640:0:(ow-iw)/2,setsar=1,setpts=PTS-STARTPTS[v2]; [3:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v3]; [4:v]scale=-1:480,pad=640:0:(ow-iw)/2,setsar=1,setpts=PTS-STARTPTS[v4]; [5:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v5]; [6:v]scale=-1:480,pad=640:0:(ow-iw)/2,setsar=1,setpts=PTS-STARTPTS[v6]; [7:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v7]; [8:v]scale=-1:480,pad=640:0:(ow-iw)/2,setsar=1,setpts=PTS-STARTPTS[v8]; [9:v]fps=25,setsar=1,setpts=PTS-STARTPTS[v9]; [v0][0:a][v1][1:a][v2][2:a][v3][3:a][v4][4:a][v5][5:a][v6][6:a][v7][7:a][v8][8:a][v9][9:a] concat=n=10:v=1:a=1" -y output.mp4

    That works on my PC, but not on my Android device.