Advanced search

Media (1)

Word: - Tags -/musée

Other articles (77)

  • Personalize your site by adding a logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MediaSPIP site, or news about your projects, using the news ("actualités") section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorial items.
    You can customize the form used to create a news item.
    News item creation form: for a document of type "news item", the fields offered by default are: Publication date (customize the publication date) (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If in doubt, contact the administrator of your MediaSPIP site to find out.

On other sites (8087)

  • ffmpeg C api cutting a video when packet dts is greater than pts

    10 March 2017, by TastyCatFood

    Corrupted videos

    While trying to cut a segment out of one of my videos using the ffmpeg C API, following the code posted here: How to cut video with FFmpeg C API, ffmpeg spat out the log below:

    D/logger: Loop count:9   out: pts:0 pts_time:0 dts:2002 dts_time:0.0333667 duration:2002 duration_time:0.0333667 stream_index:1
    D/trim_video: Error muxing packet Invalid argument

    ffmpeg considers an instruction to decompress a frame after presenting it to be nonsense, which is, well... reasonable, but stringent.

    My VLC player reads the video fine and plays it, of course.

    Note:

    The code immediately below is C++, written to be compiled with g++, as I'm developing for Android. For C code, scroll down further.

    My solution (g++):

    extern "C" {
    #include "libavformat/avformat.h"
    #include "libavutil/mathematics.h"
    #include "libavutil/timestamp.h"



    static void log_packet(
           const AVFormatContext *fmt_ctx,
           const AVPacket *pkt, const char *tag,
           long count=0)
    {

       printf("loop count %ld pts:%f dts:%f duration:%f stream_index:%d\n",
              count,
              static_cast<double>(pkt->pts),
              static_cast<double>(pkt->dts),
              static_cast<double>(pkt->duration),
              pkt->stream_index);
       return;
    }

    int trimVideo(
           const char* in_filename,
           const char* out_filename,
           double cutFrom,
           double cutUpTo)
    {
       AVOutputFormat *ofmt = NULL;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVPacket pkt;
       int ret, i;
       //jboolean  copy = true;
       //const char *in_filename = env->GetStringUTFChars(jstring_in_filename,&copy);
       //const char *out_filename = env->GetStringUTFChars(jstring_out_filename,&copy);
       long loopCount = 0;

       av_register_all();

       // Cutting may change the pts and dts of the resulting video;
       // if frames in head position are removed.
       // In the case like that, src stream's copy start pts
       // need to be recorded and is used to compute the new pts value.
       // e.g.
       //    new_pts = current_pts - trim_start_position_pts;

       // nb_streams is the number of elements in AVFormatContext.streams.
       // An initial pts value must be recorded for each stream.

       // Maybe malloc and memset should be replaced with [].
       int64_t *dts_start_from = NULL;
       int64_t *pts_start_from = NULL;

       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
           printf( "Could not open input file '%s'", in_filename);
           goto end;
       }

       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
           printf("Failed to retrieve input stream information");
           goto end;
       }

       av_dump_format(ifmt_ctx, 0, in_filename, 0);

       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx) {
           printf( "Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       ofmt = ofmt_ctx->oformat;

       //preparing streams
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           AVStream *in_stream = ifmt_ctx->streams[i];
           AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
           if (!out_stream) {
               printf( "Failed allocating output stream\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }

           ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
           if (ret < 0) {
               printf( "Failed to copy context from input to output stream codec context\n");
               goto end;
           }
           out_stream->codec->codec_tag = 0;
           if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
               out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }
       av_dump_format(ofmt_ctx, 0, out_filename, 1);

       if (!(ofmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               printf( "Could not open output file '%s'", out_filename);
               goto end;
           }
       }
       //preparing the header
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0) {
           printf( "Error occurred when opening output file\n");
           goto end;
       }

       // With a stream_index of -1, av_seek_frame expects the timestamp
       // in AV_TIME_BASE units and converts it to each stream's time base.
       ret = av_seek_frame(ifmt_ctx, -1, cutFrom*AV_TIME_BASE, AVSEEK_FLAG_ANY);
       if (ret < 0) {
           printf( "Error seek\n");
           goto end;
       }
       dts_start_from = static_cast<int64_t*>(
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams));
       memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
       pts_start_from = static_cast<int64_t*>(
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams));
       memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);

       //writing
       while (1) {
           AVStream *in_stream, *out_stream;
           //reading frame into pkt
           ret = av_read_frame(ifmt_ctx, &pkt);
           if (ret < 0)
               break;
           in_stream  = ifmt_ctx->streams[pkt.stream_index];
           out_stream = ofmt_ctx->streams[pkt.stream_index];

           //if end reached
           if (av_q2d(in_stream->time_base) * pkt.pts > cutUpTo) {
               av_packet_unref(&pkt);
               break;
           }


           // Recording the initial pts value for each stream
           // Recording dts does not do the trick because AVPacket.dts values
           // in some video files are larger than corresponding pts values
           // and ffmpeg does not like it.
           if (dts_start_from[pkt.stream_index] == 0) {
               dts_start_from[pkt.stream_index] = pkt.pts;
               printf("dts_initial_value: %f for stream index: %d \n",
                       static_cast<double>(dts_start_from[pkt.stream_index]),
                                   pkt.stream_index

               );
           }
           if (pts_start_from[pkt.stream_index] == 0) {
               pts_start_from[pkt.stream_index] = pkt.pts;
               printf( "pts_initial_value:  %f for stream index %d\n",
                       static_cast<double>(pts_start_from[pkt.stream_index]),
                                   pkt.stream_index);
           }

           log_packet(ifmt_ctx, &pkt, "in",loopCount);

           /* Computes pts etc.
            *      av_rescale_q_rnd etc. counter changes in time_base between
            *      out_stream and in_stream, so regardless of the time_base values
            *      for the in and out streams, the rate at which frames are
            *      refreshed remains the same.
            *
                   pkt.pts = pkt.pts * (in_stream->time_base / out_stream->time_base)
                   Where `time_base == 1/frame_rate`, the above is equivalent to

                   (out_stream_frame_rate/in_stream_frame_rate) * pkt.pts, where
                   frame_rate is the number of frames to be displayed per second.

                   AV_ROUND_PASS_MINMAX may set pts or dts to AV_NOPTS_VALUE
            * */


           pkt.pts =
                   av_rescale_q_rnd(
                   pkt.pts - pts_start_from[pkt.stream_index],
                   static_cast<AVRational>(in_stream->time_base),
                   static_cast<AVRational>(out_stream->time_base),
                   static_cast<AVRounding>(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
           pkt.dts =
                   av_rescale_q_rnd(
                   pkt.dts - dts_start_from[pkt.stream_index],
                   static_cast<AVRational>(in_stream->time_base),
                   static_cast<AVRational>(out_stream->time_base),
                   static_cast<AVRounding>(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));

           if(pkt.dts > pkt.pts) pkt.dts = pkt.pts - 1;
           if(pkt.dts < 0) pkt.dts = 0;
           if(pkt.pts < 0) pkt.pts = 0;

           pkt.duration = av_rescale_q(
                   pkt.duration,
                   in_stream->time_base,
                   out_stream->time_base);
           pkt.pos = -1;
           log_packet(ofmt_ctx, &pkt, "out",loopCount);

           // Writes to the file after buffering enough packets to generate a frame,
           // probably sorting packets in dts order.
           ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
    //        ret = av_write_frame(ofmt_ctx, &pkt);
           if (ret < 0) {
               printf( "Error muxing packet %d \n", ret);
               //continue;
               break;
           }
           av_packet_unref(&pkt);
           ++loopCount;
       }

       //Writing end code?
       av_write_trailer(ofmt_ctx);

       end:
       avformat_close_input(&ifmt_ctx);

       if(dts_start_from) free(dts_start_from);
       if(pts_start_from) free(pts_start_from);

       /* close output */
       if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
           avio_closep(&ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);

       if (ret < 0 && ret != AVERROR_EOF) {
           //printf( "Error occurred: %s\n", av_err2str(ret));
           return 1;
       }

       return 0;
    }

    }

    C-compatible version (the console says g++, but I'm sure this is C code):

    #include "libavformat/avformat.h"
    #include "libavutil/mathematics.h"
    #include "libavutil/timestamp.h"



    static void log_packet(
           const AVFormatContext *fmt_ctx,
           const AVPacket *pkt, const char *tag,
           long count)
    {

       printf("loop count %ld pts:%f dts:%f duration:%f stream_index:%d\n",
              count,
              (double)pkt->pts,
              (double)pkt->dts,
              (double)pkt->duration,
              pkt->stream_index);
       return;
    }

    int trimVideo(
           const char* in_filename,
           const char* out_filename,
           double cutFrom,
           double cutUpTo)
    {
       AVOutputFormat *ofmt = NULL;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVPacket pkt;
       int ret, i;
       //jboolean  copy = true;
       //const char *in_filename = env->GetStringUTFChars(jstring_in_filename,&copy);
       //const char *out_filename = env->GetStringUTFChars(jstring_out_filename,&copy);
       long loopCount = 0;

       av_register_all();

       // Cutting may change the pts and dts of the resulting video;
       // if frames in head position are removed.
       // In the case like that, src stream's copy start pts
       // need to be recorded and is used to compute the new pts value.
       // e.g.
       //    new_pts = current_pts - trim_start_position_pts;

       // nb_streams is the number of elements in AVFormatContext.streams.
       // An initial pts value must be recorded for each stream.

       // Maybe malloc and memset should be replaced with [].
       int64_t *dts_start_from = NULL;
       int64_t *pts_start_from = NULL;

       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
           printf( "Could not open input file '%s'", in_filename);
           goto end;
       }

       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
           printf("Failed to retrieve input stream information");
           goto end;
       }

       av_dump_format(ifmt_ctx, 0, in_filename, 0);

       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx) {
           printf( "Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       ofmt = ofmt_ctx->oformat;

       //preparing streams
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           AVStream *in_stream = ifmt_ctx->streams[i];
           AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
           if (!out_stream) {
               printf( "Failed allocating output stream\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }

           ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
           if (ret < 0) {
               printf( "Failed to copy context from input to output stream codec context\n");
               goto end;
           }
           out_stream->codec->codec_tag = 0;
           if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
               out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }
       av_dump_format(ofmt_ctx, 0, out_filename, 1);

       if (!(ofmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               printf( "Could not open output file '%s'", out_filename);
               goto end;
           }
       }
       //preparing the header
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0) {
           printf( "Error occurred when opening output file\n");
           goto end;
       }

       // With a stream_index of -1, av_seek_frame expects the timestamp
       // in AV_TIME_BASE units and converts it to each stream's time base.
       ret = av_seek_frame(ifmt_ctx, -1, cutFrom*AV_TIME_BASE, AVSEEK_FLAG_ANY);
       if (ret < 0) {
           printf( "Error seek\n");
           goto end;
       }
       dts_start_from = (int64_t*)
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
       memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
       pts_start_from = (int64_t*)
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
       memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);

       //writing
       while (1) {
           AVStream *in_stream, *out_stream;
           //reading frame into pkt
           ret = av_read_frame(ifmt_ctx, &pkt);
           if (ret < 0)
               break;
           in_stream  = ifmt_ctx->streams[pkt.stream_index];
           out_stream = ofmt_ctx->streams[pkt.stream_index];

           //if end reached
           if (av_q2d(in_stream->time_base) * pkt.pts > cutUpTo) {
               av_packet_unref(&pkt);
               break;
           }


           // Recording the initial pts value for each stream
           // Recording dts does not do the trick because AVPacket.dts values
           // in some video files are larger than corresponding pts values
           // and ffmpeg does not like it.
           if (dts_start_from[pkt.stream_index] == 0) {
               dts_start_from[pkt.stream_index] = pkt.pts;
               printf("dts_initial_value: %f for stream index: %d \n",
                       (double)dts_start_from[pkt.stream_index],
                                   pkt.stream_index

               );
           }
           if (pts_start_from[pkt.stream_index] == 0) {
               pts_start_from[pkt.stream_index] = pkt.pts;
               printf( "pts_initial_value:  %f for stream index %d\n",
                       (double)pts_start_from[pkt.stream_index],
                                   pkt.stream_index);
           }

           log_packet(ifmt_ctx, &pkt, "in",loopCount);

           /* Computes pts etc.
            *      av_rescale_q_rnd etc. counter changes in time_base between
            *      out_stream and in_stream, so regardless of the time_base values
            *      for the in and out streams, the rate at which frames are
            *      refreshed remains the same.
            *
                   pkt.pts = pkt.pts * (in_stream->time_base / out_stream->time_base)
                   Where `time_base == 1/frame_rate`, the above is equivalent to

                   (out_stream_frame_rate/in_stream_frame_rate) * pkt.pts, where
                   frame_rate is the number of frames to be displayed per second.

                   AV_ROUND_PASS_MINMAX may set pts or dts to AV_NOPTS_VALUE
            * */


           pkt.pts =
                   av_rescale_q_rnd(
                   pkt.pts - pts_start_from[pkt.stream_index],
                   (AVRational)in_stream->time_base,
                   (AVRational)out_stream->time_base,
                   (AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
           pkt.dts =
                   av_rescale_q_rnd(
                   pkt.dts - dts_start_from[pkt.stream_index],
                   (AVRational)in_stream->time_base,
                   (AVRational)out_stream->time_base,
                   AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

           if(pkt.dts > pkt.pts) pkt.dts = pkt.pts - 1;
           if(pkt.dts < 0) pkt.dts = 0;
           if(pkt.pts < 0) pkt.pts = 0;

           pkt.duration = av_rescale_q(
                   pkt.duration,
                   in_stream->time_base,
                   out_stream->time_base);
           pkt.pos = -1;
           log_packet(ofmt_ctx, &pkt, "out",loopCount);

           // Writes to the file after buffering enough packets to generate a frame,
           // probably sorting packets in dts order.
           ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
    //        ret = av_write_frame(ofmt_ctx, &pkt);
           if (ret < 0) {
               printf( "Error muxing packet %d \n", ret);
               //continue;
               break;
           }
           av_packet_unref(&pkt);
           ++loopCount;
       }

       //Writing end code?
       av_write_trailer(ofmt_ctx);

       end:
       avformat_close_input(&ifmt_ctx);

       if(dts_start_from) free(dts_start_from);
       if(pts_start_from) free(pts_start_from);

       /* close output */
       if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
           avio_closep(&ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);

       if (ret < 0 && ret != AVERROR_EOF) {
           //printf( "Error occurred: %s\n", av_err2str(ret));
           return 1;
       }

       return 0;
    }

    What is the problem

    My code does not produce the error, because I'm computing new_dts = current_dts - initial_pts_for_current_stream. It works, but the dts values are no longer computed properly.

    How do I recalculate dts properly?

    P.S.

    Since Olaf seems to have a very strong opinion, I'm posting the build console output for my main.c.
    I don't really know C or C++, but GNU gcc seems to call gcc for compiling and g++ for linking.
    Well, the extension of my main file is now .c and the compiler being invoked is gcc, so that should at least mean I have code written in the C language...

    ------------- Build: Debug in videoTrimmer (compiler: GNU GCC Compiler)---------------

    gcc -Wall -fexceptions -std=c99 -g -I/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include -I/usr/include -I/usr/local/include -c /home/d/CodeBlockWorkplace/videoTrimmer/main.c -o obj/Debug/main.o
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c: In function ‘log_packet’:
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:15:12: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘long int’ [-Wformat=]
        printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
               ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c: In function ‘trimVideo’:
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:79:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘avcodec_copy_context’ is deprecated [-Wdeprecated-declarations]
            ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
            ^
    In file included from /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:319:0,
                    from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavcodec/avcodec.h:4286:5: note: declared here
    int avcodec_copy_context(AVCodecContext *dest, const AVCodecContext *src);
        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:91:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            out_stream->codec->codec_tag = 0;
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:93:13: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
                out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
                ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    g++ -L/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib -L/usr/lib -L/usr/local/lib -o bin/Debug/videoTrimmer obj/Debug/main.o   ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavformat.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavcodec.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavutil.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libswresample.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libswscale.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavfilter.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libpostproc.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavdevice.a -lX11 -lvdpau -lva -lva-drm -lva-x11 -ldl -lpthread -lz -llzma -lx264
    Output file is bin/Debug/videoTrimmer with size 77.24 MB
    Process terminated with status 0 (0 minute(s), 16 second(s))
    0 error(s), 8 warning(s) (0 minute(s), 16 second(s))
  • Merge all .h264 files in directory in alphabetical order

    9 March 2017, by nicolashahn

    I have a camera that records in 5-second clips with filenames that are timestamps:

    2017-03-08-09-54-27.334326-000000.h264
    2017-03-08-09-54-27.334326-000001.h264
    2017-03-08-09-54-27.334326-000002.h264
    2017-03-08-09-54-27.334326-000003.h264
    ...

    What is the easiest way to merge these, in order, into one video file on OS X?
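A common approach, assuming these are raw Annex-B H.264 elementary streams (not MP4s with an .h264 extension): such streams can be concatenated byte-wise, and a shell glob expands in alphabetical order, which matches the zero-padded timestamped names. The demo below uses dummy files just to show the ordering guarantee:

```shell
# Work in a scratch directory with dummy stand-ins for the clips.
cd "$(mktemp -d)"
printf 'clip0' > 2017-03-08-09-54-27.334326-000000.h264
printf 'clip1' > 2017-03-08-09-54-27.334326-000001.h264

# Globs expand alphabetically, so cat concatenates the clips in order.
cat *.h264 > combined.h264
cat combined.h264   # -> clip0clip1
```

On the real footage the same `cat *.h264 > combined.h264` applies; the result can then be wrapped in a container without re-encoding, e.g. `ffmpeg -framerate 30 -i combined.h264 -c copy merged.mp4` (the output name and the frame rate of 30 are assumptions; raw H.264 carries no timing, so the input rate must be supplied).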

  • RTP packets detected as UDP

    28 February 2017, by user3172852

    Here is what I am trying to do:

    WebRTC endpoint > RTP Endpoint > ffmpeg > RTMP server.

    This is what my SDP file looks like.

    var cm_offer = "v=0\n" +
                 "o=- 3641290734 3641290734 IN IP4 127.0.0.1\n" +
                 "s=nginx\n" +
                 "c=IN IP4 127.0.0.1\n" +
                 "t=0 0\n" +
                 "m=audio 60820 RTP/AVP 0\n" +
                 "a=rtpmap:0 PCMU/8000\n" +
                 "a=recvonly\n" +
                 "m=video 59618 RTP/AVP 101\n" +
                 "a=rtpmap:101 H264/90000\n" +
                 "a=recvonly\n";

    What’s happening: Wireshark detects the incoming packets on port 59618, but as plain UDP rather than RTP. I am trying to capture the packets using ffmpeg with the following command:

    ubuntu@ip-132-31-40-100:~$ ffmpeg -i udp://127.0.0.1:59618 -vcodec copy stream.mp4
    ffmpeg version git-2017-01-22-f1214ad Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
     configuration: --extra-libs=-ldl --prefix=/opt/ffmpeg --mandir=/usr/share/man --enable-avresample --disable-debug --enable-nonfree --enable-gpl --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-decoder=amrnb --disable-decoder=amrwb --enable-libpulse --enable-libfreetype --enable-gnutls --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libvorbis --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libspeex --enable-libass --enable-avisynth --enable-libsoxr --enable-libxvid --enable-libvidstab --enable-libwavpack --enable-nvenc
     libavutil      55. 44.100 / 55. 44.100
     libavcodec     57. 75.100 / 57. 75.100
     libavformat    57. 63.100 / 57. 63.100
     libavdevice    57.  2.100 / 57.  2.100
     libavfilter     6. 69.100 /  6. 69.100
     libavresample   3.  2.  0 /  3.  2.  0
     libswscale      4.  3.101 /  4.  3.101
     libswresample   2.  4.100 /  2.  4.100
     libpostproc    54.  2.100 / 54.  2.100

    All I get is a blinking cursor, and the stream.mp4 file is not written to disk after I exit (Ctrl+C).

    So can you help me figure out:

    1. Why Wireshark does not detect the packets as RTP (I suspect it has something to do with SDP)
    2. How to handle the SDP answer when the RTP endpoint is pushing to ffmpeg, which doesn't send an answer back
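ffmpeg cannot tell on its own that bare UDP payloads are RTP; it (like Wireshark) needs the session description. One approach worth trying: save the negotiated media description to a local .sdp file and give that to ffmpeg instead of the udp:// URL. A sketch (ports and payload types copied from the offer above; everything else here is an assumption):

```
v=0
o=- 0 0 IN IP4 127.0.0.1
s=kurento-rtp
c=IN IP4 127.0.0.1
t=0 0
m=audio 60820 RTP/AVP 0
a=rtpmap:0 PCMU/8000
m=video 59618 RTP/AVP 101
a=rtpmap:101 H264/90000
```

Then run, e.g., `ffmpeg -protocol_whitelist file,udp,rtp -i stream.sdp -c copy stream.mp4` (recent ffmpeg builds require the protocol whitelist for SDP input). In Wireshark, right-clicking a packet and choosing `Decode As... > RTP` forces RTP dissection when no signalling was captured.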

    Here is the entire code (the hello-world tutorial, modified):

    /*
        * (C) Copyright 2014-2015 Kurento (http://kurento.org/)
        *
        * Licensed under the Apache License, Version 2.0 (the "License");
        * you may not use this file except in compliance with the License.
        * You may obtain a copy of the License at
        *
        *   http://www.apache.org/licenses/LICENSE-2.0
        *
        * Unless required by applicable law or agreed to in writing, software
        * distributed under the License is distributed on an "AS IS" BASIS,
        * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        * See the License for the specific language governing permissions and
        * limitations under the License.
        */

       function getopts(args, opts)
       {
         var result = opts.default || {};
         args.replace(
             new RegExp("([^?=&]+)(=([^&]*))?", "g"),
             function($0, $1, $2, $3) { result[$1] = decodeURI($3); });

         return result;
       };

       var args = getopts(location.search,
       {
         default:
         {
           ws_uri: 'wss://' + location.hostname + ':8433/kurento',
           ice_servers: undefined
         }
       });

       function setIceCandidateCallbacks(webRtcPeer, webRtcEp, onerror)
       {
         webRtcPeer.on('icecandidate', function(candidate) {
           console.log("Local candidate:",candidate);

           candidate = kurentoClient.getComplexType('IceCandidate')(candidate);

           webRtcEp.addIceCandidate(candidate, onerror)
         });

         webRtcEp.on('OnIceCandidate', function(event) {
           var candidate = event.candidate;

           console.log("Remote candidate:",candidate);

           webRtcPeer.addIceCandidate(candidate, onerror);
         });
       }


       function setIceCandidateCallbacks2(webRtcPeer, rtpEp, onerror)
       {
         webRtcPeer.on('icecandidate', function(candidate) {
           console.log("Localr candidate:",candidate);

           candidate = kurentoClient.getComplexType('IceCandidate')(candidate);

           rtpEp.addIceCandidate(candidate, onerror)
         });
       }


       window.addEventListener('load', function()
       {
         console = new Console();

         var webRtcPeer;
         var pipeline;
         var webRtcEpt;

         var videoInput = document.getElementById('videoInput');
         var videoOutput = document.getElementById('videoOutput');

         var startButton = document.getElementById("start");
         var stopButton = document.getElementById("stop");

         startButton.addEventListener("click", function()
         {
           showSpinner(videoInput, videoOutput);

           var options = {
             localVideo: videoInput,
             remoteVideo: videoOutput
           };


           if (args.ice_servers) {
            console.log("Use ICE servers: " + args.ice_servers);
            options.configuration = {
              iceServers : JSON.parse(args.ice_servers)
            };
           } else {
            console.log("Use freeice")
           }

           webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error)
           {
             if(error) return onError(error)

             this.generateOffer(onOffer)
           });

           function onOffer(error, sdpOffer)
           {
             if(error) return onError(error)

             kurentoClient(args.ws_uri, function(error, client)
             {
               if(error) return onError(error);

               client.create("MediaPipeline", function(error, _pipeline)
               {
                 if(error) return onError(error);

                 pipeline = _pipeline;

                 pipeline.create("WebRtcEndpoint", function(error, webRtc){
                   if(error) return onError(error);

                   webRtcEpt = webRtc;

                   setIceCandidateCallbacks(webRtcPeer, webRtc, onError)

                   webRtc.processOffer(sdpOffer, function(error, sdpAnswer){
                     if(error) return onError(error);

                     webRtcPeer.processAnswer(sdpAnswer, onError);
                   });
                   webRtc.gatherCandidates(onError);

                   webRtc.connect(webRtc, function(error){
                     if(error) return onError(error);

                     console.log("Loopback established");
                   });
                 });



               pipeline.create("RtpEndpoint", function(error, rtp){
                   if(error) return onError(error);

                   //setIceCandidateCallbacks2(webRtcPeer, rtp, onError)


                   var cm_offer = "v=0\n" +
                         "o=- 3641290734 3641290734 IN IP4 127.0.0.1\n" +
                         "s=nginx\n" +
                         "c=IN IP4 127.0.0.1\n" +
                         "t=0 0\n" +
                         "m=audio 60820 RTP/AVP 0\n" +
                         "a=rtpmap:0 PCMU/8000\n" +
                         "a=recvonly\n" +
                         "m=video 59618 RTP/AVP 101\n" +
                         "a=rtpmap:101 H264/90000\n" +
                         "a=recvonly\n";



                   rtp.processOffer(cm_offer, function(error, cm_sdpAnswer){
                     if(error) return onError(error);

                     //webRtcPeer.processAnswer(cm_sdpAnswer, onError);
                   });
                   //rtp.gatherCandidates(onError);

                   webRtcEpt.connect(rtp, function(error){
                     if(error) return onError(error);

                     console.log("RTP endpoint connected to webRTC");
                   });
                 });









               });
             });
           }
         });
         stopButton.addEventListener("click", stop);


         function stop() {
           if (webRtcPeer) {
             webRtcPeer.dispose();
             webRtcPeer = null;
           }

           if(pipeline){
             pipeline.release();
             pipeline = null;
           }

           hideSpinner(videoInput, videoOutput);
         }

         function onError(error) {
           if(error)
           {
             console.error(error);
             stop();
           }
         }
       })


       function showSpinner() {
         for (var i = 0; i < arguments.length; i++) {
           arguments[i].poster = 'img/transparent-1px.png';
           arguments[i].style.background = "center transparent url('img/spinner.gif') no-repeat";
         }
       }

       function hideSpinner() {
         for (var i = 0; i < arguments.length; i++) {
           arguments[i].src = '';
           arguments[i].poster = 'img/webrtc.png';
           arguments[i].style.background = '';
         }
       }

       /**
        * Lightbox utility (to display media pipeline image in a modal dialog)
        */
       $(document).delegate('*[data-toggle="lightbox"]', 'click', function(event) {
         event.preventDefault();
         $(this).ekkoLightbox();
       });