Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (45)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Farm notifications

    1 December 2010

    To ensure correct management of the farm, several things need to be notified during specific actions, both to the user and to all of the farm's administrators.
    Status change notifications
    When an instance's status changes, all farm administrators must be notified of the modification, as well as the instance's administrator user.
    When a channel is requested
    Change to "published" status
    Change to (...)

  • Changing your graphic theme

    22 February 2011

    The graphic theme does not affect the actual layout of elements on the page; it only changes their appearance.
    The placement can indeed be modified, but this modification is purely visual and does not affect the page's semantic structure.
    Changing the graphic theme in use
    To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
    Then simply go to the configuration area of the (...)

On other sites (6475)

  • Scale filter crashes with error when used from transcoding example

    27 June 2017, by Vali

    I've modified this code example slightly (just enough to compile as C++):
    https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/transcoding.c.

    What works: as is (null filter), and a number of other filters such as framerate, drawtext, ...

    What doesn't work: the scale filter when scaling down.

    I use the following syntax for scale (I've tried many others as well, with the same effect):
    "scale=w=iw/2:-1"

    The error is: "Input picture width (240) is greater than stride (128)", where the values for width and stride depend on the input.

    Misc environment info: Windows, VS 2017; input example: rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov

    Any clue as to what I'm doing wrong?

    Thanks!
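One possible direction, not verified: in the code below, the encoder is opened with the decoder's width/height before the filter graph is configured, so a down-scaling filter produces frames smaller than what the encoder was opened for. In pseudocode, an ordering that would avoid that mismatch:

```
configure the filter graph first (init_filters)
query the buffersink for the negotiated output width/height
set enc_ctx->width / enc_ctx->height from those values
only then open the encoder (avcodec_open2)
```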


    EDITED to add working code sample


    #pragma comment(lib, "avcodec.lib")
    #pragma comment(lib, "avutil.lib")
    #pragma comment(lib, "avformat.lib")
    #pragma comment(lib, "avfilter.lib")

    /*
    * Copyright (c) 2010 Nicolas George
    * Copyright (c) 2011 Stefano Sabatini
    * Copyright (c) 2014 Andrey Utkin
    *
    **** EDITED 2017 for testing (see original here: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/transcoding.c)
    *
    * Permission is hereby granted, free of charge, to any person obtaining a copy
    * of this software and associated documentation files (the "Software"), to deal
    * in the Software without restriction, including without limitation the rights
    * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    * copies of the Software, and to permit persons to whom the Software is
    * furnished to do so, subject to the following conditions:
    *
    * The above copyright notice and this permission notice shall be included in
    * all copies or substantial portions of the Software.
    *
    * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
    * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
    * THE SOFTWARE.
    */

    /**
    * @file
    * API example for demuxing, decoding, filtering, encoding and muxing
    * @example transcoding.c
    */

    extern "C"
    {
       #include <libavcodec/avcodec.h>
       #include <libavformat/avformat.h>
       #include <libavfilter/avfiltergraph.h>
       #include <libavfilter/buffersink.h>
       #include <libavfilter/buffersrc.h>
       #include <libavutil/opt.h>
       #include <libavutil/pixdesc.h>
    }


    static AVFormatContext *ifmt_ctx;
    static AVFormatContext *ofmt_ctx;
    typedef struct FilteringContext {
       AVFilterContext *buffersink_ctx;
       AVFilterContext *buffersrc_ctx;
       AVFilterGraph *filter_graph;
    } FilteringContext;
    static FilteringContext *filter_ctx;

    typedef struct StreamContext {
       AVCodecContext *dec_ctx;
       AVCodecContext *enc_ctx;
    } StreamContext;
    static StreamContext *stream_ctx;

    static int open_input_file(const char *filename, int& videoStreamIndex)
    {
       int ret;
       unsigned int i;

       ifmt_ctx = NULL;
       if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
           return ret;
       }

       if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
           return ret;
       }

       // Just need video
       videoStreamIndex = -1;
       for (unsigned int i = 0; i < ifmt_ctx->nb_streams; i++)
       {
           if (ifmt_ctx->streams[i]->codecpar->codec_type != AVMEDIA_TYPE_VIDEO)
               continue;
           videoStreamIndex = i;
           break;
       }
       if (videoStreamIndex < 0)
       {
           av_log(NULL, AV_LOG_ERROR, "Cannot find video stream\n");
           return videoStreamIndex;
       }


       stream_ctx = (StreamContext*)av_mallocz_array(ifmt_ctx->nb_streams, sizeof(*stream_ctx));
       if (!stream_ctx)
           return AVERROR(ENOMEM);

       for (i = 0; i < ifmt_ctx->nb_streams; i++) {

           // Just need video
           if (i != videoStreamIndex)
               continue;


           AVStream *stream = ifmt_ctx->streams[i];
           AVCodec *dec = avcodec_find_decoder(stream->codecpar->codec_id);
           AVCodecContext *codec_ctx;
           if (!dec) {
               av_log(NULL, AV_LOG_ERROR, "Failed to find decoder for stream #%u\n", i);
               return AVERROR_DECODER_NOT_FOUND;
           }
           codec_ctx = avcodec_alloc_context3(dec);
           if (!codec_ctx) {
               av_log(NULL, AV_LOG_ERROR, "Failed to allocate the decoder context for stream #%u\n", i);
               return AVERROR(ENOMEM);
           }
           ret = avcodec_parameters_to_context(codec_ctx, stream->codecpar);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Failed to copy decoder parameters to input decoder context "
                   "for stream #%u\n", i);
               return ret;
           }
           /* Reencode video & audio and remux subtitles etc. */
           if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
               || codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
               if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO)
                   codec_ctx->framerate = av_guess_frame_rate(ifmt_ctx, stream, NULL);
               /* Open decoder */
               ret = avcodec_open2(codec_ctx, dec, NULL);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
                   return ret;
               }
           }
           stream_ctx[i].dec_ctx = codec_ctx;
       }

       av_dump_format(ifmt_ctx, 0, filename, 0);
       return 0;
    }

    static int open_output_file(const char *filename, const int videoStreamIndex)
    {
       AVStream *out_stream;
       AVStream *in_stream;
       AVCodecContext *dec_ctx, *enc_ctx;
       AVCodec *encoder;
       int ret;
       unsigned int i;

       ofmt_ctx = NULL;
       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
       if (!ofmt_ctx) {
           av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
           return AVERROR_UNKNOWN;
       }


       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           // Just need video
           if (i != videoStreamIndex)
               continue;

           out_stream = avformat_new_stream(ofmt_ctx, NULL);
           if (!out_stream) {
               av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
               return AVERROR_UNKNOWN;
           }

           in_stream = ifmt_ctx->streams[i];
           dec_ctx = stream_ctx[i].dec_ctx;

           if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
               /* in this example, we choose transcoding to same codec */
               encoder = avcodec_find_encoder(dec_ctx->codec_id);
               if (!encoder) {
                   av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
                   return AVERROR_INVALIDDATA;
               }
               enc_ctx = avcodec_alloc_context3(encoder);
               if (!enc_ctx) {
                   av_log(NULL, AV_LOG_FATAL, "Failed to allocate the encoder context\n");
                   return AVERROR(ENOMEM);
               }

               /* In this example, we transcode to same properties (picture size,
               * sample rate etc.). These properties can be changed for output
               * streams easily using filters */
               enc_ctx->height = dec_ctx->height;
               enc_ctx->width = dec_ctx->width;
               enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
               /* take first format from list of supported formats */
               if (encoder->pix_fmts)
                   enc_ctx->pix_fmt = encoder->pix_fmts[0];
               else
                   enc_ctx->pix_fmt = dec_ctx->pix_fmt;

               /* video time_base can be set to whatever is handy and supported by encoder */
               //enc_ctx->time_base = av_inv_q(dec_ctx->framerate);
               enc_ctx->time_base = dec_ctx->time_base;


               /* Third parameter can be used to pass settings to encoder */
               ret = avcodec_open2(enc_ctx, encoder, NULL);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
                   return ret;
               }
               ret = avcodec_parameters_from_context(out_stream->codecpar, enc_ctx);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Failed to copy encoder parameters to output stream #%u\n", i);
                   return ret;
               }
               if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
                   enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

               out_stream->time_base = enc_ctx->time_base;
               stream_ctx[i].enc_ctx = enc_ctx;
           }
           else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
               av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
               return AVERROR_INVALIDDATA;
           }
           else {
               /* if this stream must be remuxed */
               ret = avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Copying parameters for stream #%u failed\n", i);
                   return ret;
               }
               out_stream->time_base = in_stream->time_base;
           }

       }
       av_dump_format(ofmt_ctx, 0, filename, 1);

       if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
           ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
               return ret;
           }
       }

       /* init muxer, write output file header */
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0) {
           av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
           return ret;
       }

       return 0;
    }

    static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
       AVCodecContext *enc_ctx, const char *filter_spec)
    {
       char args[512];
       int ret = 0;
       AVFilter *buffersrc = NULL;
       AVFilter *buffersink = NULL;
       AVFilterContext *buffersrc_ctx = NULL;
       AVFilterContext *buffersink_ctx = NULL;
       AVFilterInOut *outputs = avfilter_inout_alloc();
       AVFilterInOut *inputs = avfilter_inout_alloc();
       AVFilterGraph *filter_graph = avfilter_graph_alloc();

       if (!outputs || !inputs || !filter_graph) {
           ret = AVERROR(ENOMEM);
           goto end;
       }

       if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
           buffersrc = avfilter_get_by_name("buffer");
           buffersink = avfilter_get_by_name("buffersink");
           if (!buffersrc || !buffersink) {
               av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }

           snprintf(args, sizeof(args),
               "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
               dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
               dec_ctx->time_base.num, dec_ctx->time_base.den,
               dec_ctx->sample_aspect_ratio.num,
               dec_ctx->sample_aspect_ratio.den);

           ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
               args, NULL, filter_graph);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
               goto end;
           }

           ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
               NULL, NULL, filter_graph);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
               goto end;
           }

           ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
               (uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
               AV_OPT_SEARCH_CHILDREN);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
               goto end;
           }
       }
       else {
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       /* Endpoints for the filter graph. */
       outputs->name = av_strdup("in");
       outputs->filter_ctx = buffersrc_ctx;
       outputs->pad_idx = 0;
       outputs->next = NULL;

       inputs->name = av_strdup("out");
       inputs->filter_ctx = buffersink_ctx;
       inputs->pad_idx = 0;
       inputs->next = NULL;

       if (!outputs->name || !inputs->name) {
           ret = AVERROR(ENOMEM);
           goto end;
       }

       if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
           &inputs, &outputs, NULL)) < 0)
           goto end;

       if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
           goto end;

       /* Fill FilteringContext */
       fctx->buffersrc_ctx = buffersrc_ctx;
       fctx->buffersink_ctx = buffersink_ctx;
       fctx->filter_graph = filter_graph;

    end:
       avfilter_inout_free(&inputs);
       avfilter_inout_free(&outputs);

       return ret;
    }

    static int init_filters(const int videoStreamIndex)
    {
       const char *filter_spec;
       unsigned int i;
       int ret;
       filter_ctx = (FilteringContext*)av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
       if (!filter_ctx)
           return AVERROR(ENOMEM);

       for (i = 0; i < ifmt_ctx->nb_streams; i++) {

           // Just video
           if (i != videoStreamIndex)
               continue;

           filter_ctx[i].buffersrc_ctx = NULL;
           filter_ctx[i].buffersink_ctx = NULL;
           filter_ctx[i].filter_graph = NULL;
           if (!(ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO
               || ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO))
               continue;

           filter_spec = "null"; /* passthrough (dummy) filter for video */
           //filter_spec = "scale=w=iw/2:-1";
           // filter_spec = "drawtext=fontfile=FreeSerif.ttf: text='%{localtime}': x=w-text_w: y=0: fontsize=24: fontcolor=yellow@1.0: box=1: boxcolor=red@1.0";
           // filter_spec = "drawtext=fontfile=FreeSerif.ttf :text='test': x=w-text_w: y=text_h: fontsize=24: fontcolor=yellow@1.0: box=1: boxcolor=red@1.0";

           ret = init_filter(&filter_ctx[i], stream_ctx[i].dec_ctx,
               stream_ctx[i].enc_ctx, filter_spec);
           if (ret)
               return ret;
       }
       return 0;
    }

    static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame, const int videoStreamIndex) {

       // Just video
       if (stream_index != videoStreamIndex)
           return 0;

       int ret;
       int got_frame_local;
       AVPacket enc_pkt;
       int(*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
           (ifmt_ctx->streams[stream_index]->codecpar->codec_type ==
               AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;

       if (!got_frame)
           got_frame = &got_frame_local;

       // av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
       /* encode filtered frame */
       enc_pkt.data = NULL;
       enc_pkt.size = 0;
       av_init_packet(&enc_pkt);

       ret = enc_func(stream_ctx[stream_index].enc_ctx, &enc_pkt,
           filt_frame, got_frame);

       av_frame_free(&filt_frame);
       if (ret < 0)
           return ret;
       if (!(*got_frame))
           return 0;

       /* prepare packet for muxing */
       /*enc_pkt.stream_index = stream_index;
       av_packet_rescale_ts(&enc_pkt, stream_ctx[stream_index].enc_ctx->time_base, ofmt_ctx->streams[stream_index]->time_base);*/
       enc_pkt.stream_index = 0;
       av_packet_rescale_ts(&enc_pkt, stream_ctx[stream_index].enc_ctx->time_base, ofmt_ctx->streams[0]->time_base);

       av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
       /* mux encoded frame */
       ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
       return ret;
    }

    static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index, const int videoStreamIndex)
    {
       // Just video, all else crashes
       if (stream_index != videoStreamIndex)
           return 0;

       int ret;
       AVFrame *filt_frame;

       // av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
       /* push the decoded frame into the filtergraph */
       ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,
           frame, 0);
       if (ret < 0) {
           av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
           return ret;
       }

       /* pull filtered frames from the filtergraph */
       while (1) {
           filt_frame = av_frame_alloc();
           if (!filt_frame) {
               ret = AVERROR(ENOMEM);
               break;
           }
           // av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
           ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,
               filt_frame);
           if (ret < 0) {
               /* if no more frames for output - returns AVERROR(EAGAIN)
               * if flushed and no more frames for output - returns AVERROR_EOF
               * rewrite retcode to 0 to show it as normal procedure completion
               */
               if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                   ret = 0;
               av_frame_free(&filt_frame);
               break;
           }

           filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
           ret = encode_write_frame(filt_frame, stream_index, NULL, videoStreamIndex);
           if (ret < 0)
               break;
       }

       return ret;
    }

    static int flush_encoder(unsigned int stream_index, const int videoStreamIndex)
    {
       int ret;
       int got_frame;

       // Just video
       if (stream_index != videoStreamIndex)
           return 0;

       if (!(stream_ctx[stream_index].enc_ctx->codec->capabilities &
           AV_CODEC_CAP_DELAY))
           return 0;

       while (1) {
           av_log(NULL, AV_LOG_INFO, "Flushing stream #%u encoder\n", stream_index);
           ret = encode_write_frame(NULL, stream_index, &got_frame, videoStreamIndex);
           if (ret < 0)
               break;
           if (!got_frame)
               return 0;
       }
       return ret;
    }


    #include <vector>

    int main(int argc, char **argv)
    {
       int ret;

       AVPacket packet;
       packet.data = NULL;
       packet.size = 0;

       AVFrame *frame = NULL;
       enum AVMediaType type;
       unsigned int stream_index;
       unsigned int i;
       int got_frame;
       int(*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);


    #ifdef _DEBUG
       // Hardcoded arguments
       std::vector<char*> varguments;
       {
           varguments.push_back(argv[0]);

           // Source
           varguments.push_back("./big_buck_bunny_short.mp4");

           // Destination
           varguments.push_back("./big_buck_bunny_short-processed.mp4");
       }

       char** arguments = new char*[varguments.size()];
       for (unsigned int i = 0; i < varguments.size(); i++)
       {
           arguments[i] = varguments[i];
       }
       argc = varguments.size();
       argv = arguments;
    #endif // _DEBUG


       if (argc != 3) {
           av_log(NULL, AV_LOG_ERROR, "Usage: %s <input file> <output file>\n", argv[0]);
           return 1;
       }

       av_register_all();
       avfilter_register_all();

       int videoStreamIndex = -1;
       if ((ret = open_input_file(argv[1], videoStreamIndex)) < 0)
           goto end;
       if ((ret = open_output_file(argv[2], videoStreamIndex)) < 0)
           goto end;
       if ((ret = init_filters(videoStreamIndex)) < 0)
           goto end;

       // Stop after a couple of frames
       int framesToGet = 100;

       /* read all packets */
       //while (framesToGet--)
       while(1)
       {
           if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)
               break;
           stream_index = packet.stream_index;

           // I just need video
           if (stream_index != videoStreamIndex) {
               av_packet_unref(&packet);
               continue;
           }

           type = ifmt_ctx->streams[packet.stream_index]->codecpar->codec_type;
           av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
               stream_index);

           if (filter_ctx[stream_index].filter_graph) {
               av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
               frame = av_frame_alloc();
               if (!frame) {
                   ret = AVERROR(ENOMEM);
                   break;
               }
               av_packet_rescale_ts(&packet,
                   ifmt_ctx->streams[stream_index]->time_base,
                   stream_ctx[stream_index].dec_ctx->time_base);
               dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
                   avcodec_decode_audio4;
               ret = dec_func(stream_ctx[stream_index].dec_ctx, frame,
                   &got_frame, &packet);
               if (ret < 0) {
                   av_frame_free(&frame);
                   av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
                   break;
               }

               if (got_frame) {
                   frame->pts = frame->best_effort_timestamp;
                   ret = filter_encode_write_frame(frame, stream_index, videoStreamIndex);
                   av_frame_free(&frame);
                   if (ret < 0)
                       goto end;
               }
               else {
                   av_frame_free(&frame);
               }
           }
           else {
               /* remux this frame without reencoding */
               av_packet_rescale_ts(&packet,
                   ifmt_ctx->streams[stream_index]->time_base,
                   ofmt_ctx->streams[stream_index]->time_base);

               ret = av_interleaved_write_frame(ofmt_ctx, &packet);
               if (ret < 0)
                   goto end;
           }
           av_packet_unref(&packet);
       }

       /* flush filters and encoders */
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           /* flush filter */
           if (!filter_ctx[i].filter_graph)
               continue;
           ret = filter_encode_write_frame(NULL, i, videoStreamIndex);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
               goto end;
           }

           /* flush encoder */
           ret = flush_encoder(i, videoStreamIndex);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
               goto end;
           }
       }

       av_write_trailer(ofmt_ctx);
    end:
       av_packet_unref(&packet);
       av_frame_free(&frame);
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           // Just video
           if (i != videoStreamIndex)
               continue;
           avcodec_free_context(&stream_ctx[i].dec_ctx);
           if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && stream_ctx[i].enc_ctx)
               avcodec_free_context(&stream_ctx[i].enc_ctx);
           if (filter_ctx && filter_ctx[i].filter_graph)
               avfilter_graph_free(&filter_ctx[i].filter_graph);
       }
       av_free(filter_ctx);
       av_free(stream_ctx);
       avformat_close_input(&ifmt_ctx);
       if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
           avio_closep(&ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);

       /*if (ret < 0)
           av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));*/

       return ret ? 1 : 0;
    }
  • ffmpeg conversion .dav to any video files

    2 February 2021, by Marcello Galvão

    I have been trying for days to convert a .dav file (a file generated by DVRs [image recorders]). I have tried several variations with ffmpeg and cannot succeed.


    Command and console output:


    $ ffmpeg -i input.dav -codec:v libx264 -crf 23 -preset medium -codec:a libfdk_aac -vbr 4 -movflags faststart -vf scale=-1:720,format=yuv420p output.mp4
    ffmpeg version 2.8 Copyright (c) 2000-2015 the FFmpeg developers
      built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04)
      configuration: --extra-libs=-ldl --prefix=/opt/ffmpeg --enable-avresample --disable-debug --enable-nonfree --enable-gpl --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-decoder=amrnb --disable-decoder=amrwb --enable-libpulse --enable-libdcadec --enable-libfreetype --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libvorbis --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libspeex --enable-libass --enable-avisynth --enable-libsoxr --enable-libxvid --enable-libvo-aacenc --enable-libvidstab
      libavutil      54. 31.100 / 54. 31.100
      libavcodec     56. 60.100 / 56. 60.100
      libavformat    56. 40.101 / 56. 40.101
      libavdevice    56.  4.100 / 56.  4.100
      libavfilter     5. 40.101 /  5. 40.101
      libavresample   2.  1.  0 /  2.  1.  0
      libswscale      3.  1.101 /  3.  1.101
      libswresample   1.  2.101 /  1.  2.101
      libpostproc    53.  3.100 / 53.  3.100
    Input #0, h264, from 'input.Dav':
      Duration: N/A, bitrate: N/A
        Stream #0:0: Video: h264 (Baseline), yuv420p, 704x480, 25 fps, 25 tbr, 1200k tbn, 50 tbc
    Codec AVOption vbr (VBR mode (1-5)) specified for output file #0 (output.mp4) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
    [libx264 @ 0x2d99e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
    [libx264 @ 0x2d99e00] profile High, level 3.1
    [libx264 @ 0x2d99e00] 264 - core 142 r2491 24e4fed - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'output.mp4':
      Metadata:
        encoder         : Lavf56.40.101
        Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1056x720, q=-1--1, 25 fps, 12800 tbn, 25 tbc
        Metadata:
          encoder         : Lavc56.60.100 libx264
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    frame=   58 fps=0.0 q=28.0 size=      93kB time=00:00:00.36 bitrate=2124.9kbits/s
    ...
    frame=  153 fps= 41 q=28.0 size=     676kB time=00:00:04.16 bitrate=1330.4kbits/s
    [h264 @ 0x3348440] Frame num change from 35 to 162
    [h264 @ 0x3348440] decode_slice_header error
    frame=  166 fps= 39 q=28.0 size=     785kB time=00:00:04.68 bitrate=1374.8kbits/s
    ...
    frame=  799 fps= 36 q=28.0 size=    3983kB time=00:00:30.00 bitrate=1087.6kbits/s
    frame=  834
fps= 36 q=28.0 size=    4052kB time=00:00:31.40 bitrate=1057.1kbits/s    &#xA;frame=  868 fps= 37 q=28.0 size=    4097kB time=00:00:32.76 bitrate=1024.5kbits/s    &#xA;frame=  894 fps= 37 q=28.0 size=    4141kB time=00:00:33.80 bitrate=1003.6kbits/s    &#xA;frame=  914 fps= 37 q=28.0 size=    4234kB time=00:00:34.60 bitrate=1002.5kbits/s    &#xA;frame=  933 fps= 37 q=28.0 size=    4363kB time=00:00:35.36 bitrate=1010.8kbits/s    &#xA;frame=  954 fps= 37 q=28.0 size=    4442kB time=00:00:36.20 bitrate=1005.3kbits/s    &#xA;frame=  976 fps= 37 q=28.0 size=    4510kB time=00:00:37.08 bitrate= 996.3kbits/s    &#xA;frame=  994 fps= 37 q=28.0 size=    4579kB time=00:00:37.80 bitrate= 992.3kbits/s    &#xA;frame= 1010 fps= 37 q=28.0 size=    4663kB time=00:00:38.44 bitrate= 993.7kbits/s    &#xA;frame= 1030 fps= 37 q=28.0 size=    4734kB time=00:00:39.24 bitrate= 988.3kbits/s    &#xA;frame= 1043 fps= 37 q=28.0 size=    4843kB time=00:00:39.76 bitrate= 997.9kbits/s    &#xA;frame= 1065 fps= 37 q=28.0 size=    5021kB time=00:00:40.64 bitrate=1012.1kbits/s    &#xA;frame= 1092 fps= 38 q=28.0 size=    5052kB time=00:00:41.72 bitrate= 991.9kbits/s    &#xA;frame= 1118 fps= 38 q=28.0 size=    5129kB time=00:00:42.76 bitrate= 982.6kbits/s    &#xA;frame= 1145 fps= 38 q=28.0 size=    5185kB time=00:00:43.84 bitrate= 968.8kbits/s    &#xA;frame= 1174 fps= 38 q=28.0 size=    5214kB time=00:00:45.00 bitrate= 949.1kbits/s    &#xA;frame= 1202 fps= 39 q=28.0 size=    5256kB time=00:00:46.12 bitrate= 933.7kbits/s    &#xA;frame= 1220 fps= 39 q=28.0 size=    5341kB time=00:00:46.84 bitrate= 934.1kbits/s    &#xA;frame= 1236 fps= 38 q=28.0 size=    5432kB time=00:00:47.48 bitrate= 937.2kbits/s    &#xA;[h264 @ 0x2d68ca0] A non-intra slice in an IDR NAL unit.&#xA;[h264 @ 0x2d68ca0] decode_slice_header error&#xA;frame= 1252 fps= 38 q=28.0 size=    5552kB time=00:00:48.12 bitrate= 945.2kbits/s    &#xA;frame= 1269 fps= 38 q=28.0 size=    5666kB time=00:00:48.80 bitrate= 951.2kbits/s    &#xA;frame= 1286 
fps= 38 q=28.0 size=    5773kB time=00:00:49.48 bitrate= 955.7kbits/s    &#xA;frame= 1302 fps= 38 q=28.0 size=    5908kB time=00:00:50.12 bitrate= 965.7kbits/s    &#xA;frame= 1324 fps= 38 q=28.0 size=    6011kB time=00:00:51.00 bitrate= 965.6kbits/s    &#xA;frame= 1349 fps= 38 q=28.0 size=    6103kB time=00:00:52.00 bitrate= 961.4kbits/s    &#xA;frame= 1373 fps= 38 q=28.0 size=    6200kB time=00:00:52.96 bitrate= 959.1kbits/s    &#xA;frame= 1399 fps= 39 q=28.0 size=    6284kB time=00:00:54.00 bitrate= 953.3kbits/s    &#xA;frame= 1424 fps= 39 q=28.0 size=    6388kB time=00:00:55.00 bitrate= 951.5kbits/s    &#xA;frame= 1447 fps= 39 q=28.0 size=    6492kB time=00:00:55.92 bitrate= 951.1kbits/s    &#xA;frame= 1476 fps= 39 q=28.0 size=    6530kB time=00:00:57.08 bitrate= 937.2kbits/s    &#xA;frame= 1503 fps= 39 q=28.0 size=    6580kB time=00:00:58.16 bitrate= 926.8kbits/s    &#xA;frame= 1518 fps= 39 q=28.0 size=    6709kB time=00:00:58.76 bitrate= 935.4kbits/s    &#xA;frame= 1542 fps= 39 q=28.0 size=    6835kB time=00:00:59.72 bitrate= 937.6kbits/s    &#xA;[h264 @ 0x3348440] data partitioning is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.&#xA;[h264 @ 0x3348440] If you want to help, upload a sample of this file to ftp://upload.ffmpeg.org/incoming/ and contact the ffmpeg-devel mailing list. 
(ffmpeg-devel@ffmpeg.org)&#xA;frame= 1568 fps= 39 q=28.0 size=    6958kB time=00:01:00.76 bitrate= 938.1kbits/s    &#xA;frame= 1596 fps= 39 q=28.0 size=    7006kB time=00:01:01.88 bitrate= 927.5kbits/s    &#xA;frame= 1619 fps= 39 q=28.0 size=    7096kB time=00:01:02.80 bitrate= 925.6kbits/s    &#xA;frame= 1646 fps= 40 q=28.0 size=    7152kB time=00:01:03.88 bitrate= 917.2kbits/s    &#xA;frame= 1671 fps= 40 q=28.0 size=    7205kB time=00:01:04.88 bitrate= 909.8kbits/s    &#xA;frame= 1698 fps= 40 q=28.0 size=    7268kB time=00:01:05.96 bitrate= 902.7kbits/s    &#xA;frame= 1725 fps= 40 q=28.0 size=    7328kB time=00:01:07.04 bitrate= 895.5kbits/s    &#xA;frame= 1752 fps= 40 q=28.0 size=    7382kB time=00:01:08.12 bitrate= 887.7kbits/s    &#xA;frame= 1779 fps= 40 q=28.0 size=    7433kB time=00:01:09.20 bitrate= 879.9kbits/s    &#xA;frame= 1803 fps= 40 q=28.0 size=    7580kB time=00:01:10.16 bitrate= 885.1kbits/s    &#xA;frame= 1827 fps= 41 q=28.0 size=    7643kB time=00:01:11.12 bitrate= 880.4kbits/s    &#xA;frame= 1852 fps= 41 q=28.0 size=    7703kB time=00:01:12.12 bitrate= 875.0kbits/s    &#xA;frame= 1879 fps= 41 q=28.0 size=    7751kB time=00:01:13.20 bitrate= 867.4kbits/s    &#xA;frame= 1899 fps= 41 q=28.0 size=    7840kB time=00:01:14.00 bitrate= 867.9kbits/s    &#xA;frame= 1918 fps= 41 q=28.0 size=    7946kB time=00:01:14.76 bitrate= 870.7kbits/s    &#xA;frame= 1938 fps= 41 q=28.0 size=    8046kB time=00:01:15.56 bitrate= 872.3kbits/s    &#xA;frame= 1959 fps= 41 q=28.0 size=    8134kB time=00:01:16.40 bitrate= 872.1kbits/s    &#xA;frame= 1978 fps= 41 q=28.0 size=    8227kB time=00:01:17.16 bitrate= 873.5kbits/s    &#xA;frame= 1997 fps= 41 q=28.0 size=    8322kB time=00:01:17.92 bitrate= 874.9kbits/s    &#xA;frame= 2022 fps= 41 q=28.0 size=    8390kB time=00:01:18.92 bitrate= 870.9kbits/s    &#xA;[h264 @ 0x2d64180] concealing 1320 DC, 1320 AC, 1320 MV errors in I frame&#xA;[mp4 @ 0x2cdb900] Starting second pass: moving the moov atom to the beginning of the 
file&#xA;frame= 2041 fps= 40 q=-1.0 Lsize=    8657kB time=00:01:21.56 bitrate= 869.5kbits/s    &#xA;video:8633kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.275387%&#xA;[libx264 @ 0x2d99e00] frame I:9     Avg QP:18.32  size: 48212&#xA;[libx264 @ 0x2d99e00] frame P:698   Avg QP:22.05  size:  9056&#xA;[libx264 @ 0x2d99e00] frame B:1334  Avg QP:27.18  size:  1562&#xA;[libx264 @ 0x2d99e00] consecutive B-frames: 10.6%  5.0%  5.4% 79.0%&#xA;[libx264 @ 0x2d99e00] mb I  I16..4: 18.4% 57.3% 24.2%&#xA;[libx264 @ 0x2d99e00] mb P  I16..4:  5.3%  8.2%  1.0%  P16..4: 26.3%  9.1%  4.0%  0.0%  0.0%    skip:46.0%&#xA;[libx264 @ 0x2d99e00] mb B  I16..4:  0.2%  0.1%  0.0%  B16..8: 20.6%  1.8%  0.3%  direct: 0.8%  skip:76.2%  L0:38.8% L1:57.6% BI: 3.6%&#xA;[libx264 @ 0x2d99e00] 8x8 transform intra:56.1% inter:75.9%&#xA;[libx264 @ 0x2d99e00] coded y,uvDC,uvAC intra: 35.0% 44.9% 12.4% inter: 6.5% 8.1% 0.2%&#xA;[libx264 @ 0x2d99e00] i16 v,h,dc,p: 34% 40%  3% 22%&#xA;[libx264 @ 0x2d99e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 36% 26% 16%  3%  4%  4%  5%  4%  4%&#xA;[libx264 @ 0x2d99e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 41% 10%  2%  4%  4%  5%  3%  3%&#xA;[libx264 @ 0x2d99e00] i8c dc,h,v,p: 47% 24% 24%  5%&#xA;[libx264 @ 0x2d99e00] Weighted P-Frames: Y:0.0% UV:0.0%&#xA;[libx264 @ 0x2d99e00] ref P L0: 72.8% 10.0% 13.7%  3.5%&#xA;[libx264 @ 0x2d99e00] ref B L0: 90.8%  7.9%  1.2%&#xA;[libx264 @ 0x2d99e00] ref B L1: 96.5%  3.5%&#xA;[libx264 @ 0x2d99e00] kb/s:866.17&#xA;


  • RTP packets detected as UDP

    8 July 2024, by fritz

    Here is what I am trying to do:


    WebRTC endpoint > RTP endpoint > ffmpeg > RTMP server.


    This is what my SDP file looks like.


    var cm_offer = "v=0\n" +
              "o=- 3641290734 3641290734 IN IP4 127.0.0.1\n" +
              "s=nginx\n" +
              "c=IN IP4 127.0.0.1\n" +
              "t=0 0\n" +
              "m=audio 60820 RTP/AVP 0\n" +
              "a=rtpmap:0 PCMU/8000\n" +
              "a=recvonly\n" +
              "m=video 59618 RTP/AVP 101\n" +
              "a=rtpmap:101 H264/90000\n" +
              "a=recvonly\n";


    What's happening is that Wireshark does detect the incoming packets on port 59618, but it classifies them as plain UDP rather than RTP. I am trying to capture the packets with ffmpeg using the following command:
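    One likely explanation for the UDP labeling: Wireshark has no reliable signature for RTP, so unless it has seen the signaling (SDP) it shows the traffic as plain UDP, and the dissector has to be forced ("Decode As… → RTP" in the GUI, or `-d` on the CLI). A sketch of the CLI form; the loopback interface and the port (taken from the video line of the SDP offer) are assumptions:

    ```shell
    # Wireshark/tshark only decode RTP heuristically or from signaling; on an
    # arbitrary port the packets show up as plain UDP unless the dissector is
    # forced. Build the tshark invocation (tshark itself is not run here):
    port=59618                                   # m=video port from the SDP offer
    decode_cmd="tshark -i lo -d udp.port==${port},rtp"
    echo "$decode_cmd"   # -> tshark -i lo -d udp.port==59618,rtp
    ```

    Running that command (with tshark installed and capture privileges) should make the same packets appear as RTP with payload type 101.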


    ubuntu@ip-132-31-40-100:~$ ffmpeg -i udp://127.0.0.1:59618 -vcodec copy stream.mp4
    ffmpeg version git-2017-01-22-f1214ad Copyright (c) 2000-2017 the FFmpeg developers
      built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
      configuration: --extra-libs=-ldl --prefix=/opt/ffmpeg --mandir=/usr/share/man --enable-avresample --disable-debug --enable-nonfree --enable-gpl --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-decoder=amrnb --disable-decoder=amrwb --enable-libpulse --enable-libfreetype --enable-gnutls --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libvorbis --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libspeex --enable-libass --enable-avisynth --enable-libsoxr --enable-libxvid --enable-libvidstab --enable-libwavpack --enable-nvenc
      libavutil      55. 44.100 / 55. 44.100
      libavcodec     57. 75.100 / 57. 75.100
      libavformat    57. 63.100 / 57. 63.100
      libavdevice    57.  2.100 / 57.  2.100
      libavfilter     6. 69.100 /  6. 69.100
      libavresample   3.  2.  0 /  3.  2.  0
      libswscale      4.  3.101 /  4.  3.101
      libswresample   2.  4.100 /  2.  4.100
      libpostproc    54.  2.100 / 54.  2.100
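    A plausible reason for the blinking cursor: with a bare `udp://` input, ffmpeg receives the datagrams but has no way to know they carry RTP with H.264 in payload type 101, so it never finds a stream to copy. ffmpeg normally learns this from an SDP file. A minimal sketch, with the port and payload type copied from the offer; the `o=`/`s=` lines are placeholders:

    ```shell
    # Sketch: describe the incoming RTP stream in an SDP file so ffmpeg can
    # demux it, instead of pointing ffmpeg at a raw udp:// URL.
    cat > stream.sdp <<'EOF'
    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=Kurento RTP video
    c=IN IP4 127.0.0.1
    t=0 0
    m=video 59618 RTP/AVP 101
    a=rtpmap:101 H264/90000
    EOF

    # Reading a local SDP file requires whitelisting the protocols it pulls in:
    #   ffmpeg -protocol_whitelist file,udp,rtp -i stream.sdp -vcodec copy stream.mp4
    grep '^m=' stream.sdp   # -> m=video 59618 RTP/AVP 101
    ```

    With the SDP input, ffmpeg binds the port itself and interprets the packets as RTP, so the copy to MP4 (or a later `-f flv rtmp://…` push) has an actual video stream to work with.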


    All I get is a blinking cursor, and the stream.mp4 file is not written to disk after I exit (Ctrl+C).


    So can you help me figure out:


    1. Why Wireshark cannot detect the packets as RTP (I suspect it has something to do with the SDP).
    2. How to handle the SDP answer when the RTP endpoint is pushing to ffmpeg, which doesn't send an answer back.


    Here is the entire code (the Kurento hello-world tutorial, modified):


    /*
     * (C) Copyright 2014-2015 Kurento (http://kurento.org/)
     *
     * Licensed under the Apache License, Version 2.0 (the "License");
     * you may not use this file except in compliance with the License.
     * You may obtain a copy of the License at
     *
     *   http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */

    function getopts(args, opts)
    {
      var result = opts.default || {};
      args.replace(
          new RegExp("([^?=&]+)(=([^&]*))?", "g"),
          function($0, $1, $2, $3) { result[$1] = decodeURI($3); });

      return result;
    };

    var args = getopts(location.search,
    {
      default:
      {
        ws_uri: 'wss://' + location.hostname + ':8433/kurento',
        ice_servers: undefined
      }
    });

    function setIceCandidateCallbacks(webRtcPeer, webRtcEp, onerror)
    {
      webRtcPeer.on('icecandidate', function(candidate) {
        console.log("Local candidate:", candidate);

        candidate = kurentoClient.getComplexType('IceCandidate')(candidate);

        webRtcEp.addIceCandidate(candidate, onerror)
      });

      webRtcEp.on('OnIceCandidate', function(event) {
        var candidate = event.candidate;

        console.log("Remote candidate:", candidate);

        webRtcPeer.addIceCandidate(candidate, onerror);
      });
    }

    function setIceCandidateCallbacks2(webRtcPeer, rtpEp, onerror)
    {
      webRtcPeer.on('icecandidate', function(candidate) {
        console.log("Localr candidate:", candidate);

        candidate = kurentoClient.getComplexType('IceCandidate')(candidate);

        rtpEp.addIceCandidate(candidate, onerror)
      });
    }

    window.addEventListener('load', function()
    {
      console = new Console();

      var webRtcPeer;
      var pipeline;
      var webRtcEpt;

      var videoInput = document.getElementById('videoInput');
      var videoOutput = document.getElementById('videoOutput');

      var startButton = document.getElementById("start");
      var stopButton = document.getElementById("stop");

      startButton.addEventListener("click", function()
      {
        showSpinner(videoInput, videoOutput);

        var options = {
          localVideo: videoInput,
          remoteVideo: videoOutput
        };

        if (args.ice_servers) {
          console.log("Use ICE servers: " + args.ice_servers);
          options.configuration = {
            iceServers : JSON.parse(args.ice_servers)
          };
        } else {
          console.log("Use freeice")
        }

        webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error)
        {
          if(error) return onError(error)

          this.generateOffer(onOffer)
        });

        function onOffer(error, sdpOffer)
        {
          if(error) return onError(error)

          kurentoClient(args.ws_uri, function(error, client)
          {
            if(error) return onError(error);

            client.create("MediaPipeline", function(error, _pipeline)
            {
              if(error) return onError(error);

              pipeline = _pipeline;

              pipeline.create("WebRtcEndpoint", function(error, webRtc){
                if(error) return onError(error);

                webRtcEpt = webRtc;

                setIceCandidateCallbacks(webRtcPeer, webRtc, onError)

                webRtc.processOffer(sdpOffer, function(error, sdpAnswer){
                  if(error) return onError(error);

                  webRtcPeer.processAnswer(sdpAnswer, onError);
                });
                webRtc.gatherCandidates(onError);

                webRtc.connect(webRtc, function(error){
                  if(error) return onError(error);

                  console.log("Loopback established");
                });
              });

              pipeline.create("RtpEndpoint", function(error, rtp){
                if(error) return onError(error);

                //setIceCandidateCallbacks2(webRtcPeer, rtp, onError)

                var cm_offer = "v=0\n" +
                      "o=- 3641290734 3641290734 IN IP4 127.0.0.1\n" +
                      "s=nginx\n" +
                      "c=IN IP4 127.0.0.1\n" +
                      "t=0 0\n" +
                      "m=audio 60820 RTP/AVP 0\n" +
                      "a=rtpmap:0 PCMU/8000\n" +
                      "a=recvonly\n" +
                      "m=video 59618 RTP/AVP 101\n" +
                      "a=rtpmap:101 H264/90000\n" +
                      "a=recvonly\n";

                rtp.processOffer(cm_offer, function(error, cm_sdpAnswer){
                  if(error) return onError(error);

                  //webRtcPeer.processAnswer(cm_sdpAnswer, onError);
                });
                //rtp.gatherCandidates(onError);

                webRtcEpt.connect(rtp, function(error){
                  if(error) return onError(error);

                  console.log("RTP endpoint connected to webRTC");
                });
              });

            });
          });
        }
      });
      stopButton.addEventListener("click", stop);

      function stop() {
        if (webRtcPeer) {
          webRtcPeer.dispose();
          webRtcPeer = null;
        }

        if(pipeline){
          pipeline.release();
          pipeline = null;
        }

        hideSpinner(videoInput, videoOutput);
      }

      function onError(error) {
        if(error)
        {
          console.error(error);
          stop();
        }
      }
    })

    function showSpinner() {
      for (var i = 0; i < arguments.length; i++) {
        arguments[i].poster = 'img/transparent-1px.png';
        arguments[i].style.background = "center transparent url('img/spinner.gif') no-repeat";
      }
    }

    function hideSpinner() {
      for (var i = 0; i < arguments.length; i++) {
        arguments[i].src = '';
        arguments[i].poster = 'img/webrtc.png';
        arguments[i].style.background = '';
      }
    }

    /**
     * Lightbox utility (to display media pipeline image in a modal dialog)
     */
    $(document).delegate('*[data-toggle="lightbox"]', 'click', function(event) {
      event.preventDefault();
      $(this).ekkoLightbox();
    });
