Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (48)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information: the browser you are using, including its exact version; as precise a description of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
    If you think you have fixed the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests for the creation of a mutualisation instance at user sign-up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (8154)

  • Encoding using ffmpeg library fails

    29 September 2012, by Erik Swansson

    I've spent some time looking at the ffmpeg library and setting things up. I'm opening a .flv file, reading and decoding the frames, and now I'm trying to encode to MP4, but my packets end up empty.

    My code is as follows:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    int main (){

       AVFormatContext *pFC = NULL;
       AVCodecContext *pCodecC = NULL, *pCodecE = NULL;
       AVCodec *decoder = NULL, *encoder = NULL;
       AVPacket packet, spacket;
       int po;

       av_register_all();

       avformat_open_input(&pFC, "c://wav//test2.flv", NULL, NULL);

       po = av_find_stream_info(pFC);

       //ADD LOGIC TO FIND VIDEO STREAM
       pCodecC = pFC->streams[0]->codec;

       decoder = avcodec_find_decoder(pCodecC->codec_id);
       encoder = avcodec_find_encoder(pCodecC->codec_id);
       po = avcodec_open(pCodecC, decoder);

       pCodecE =  avcodec_alloc_context3(encoder);
       /* put sample parameters */
       pCodecE->bit_rate = 400000;
       /* resolution must be a multiple of two */
       pCodecE->width = 352;
       pCodecE->height = 288;
       /* frames per second */
       pCodecE->time_base.den = 25;
       pCodecE->time_base.num = 1;
       pCodecE->gop_size = 10; /* emit one intra frame every ten frames */
       pCodecE->max_b_frames=1;
       pCodecE->pix_fmt = PIX_FMT_YUV420P;

       if(pCodecC->codec_id == CODEC_ID_H264)
           av_opt_set(pCodecE->priv_data, "preset", "slow", 0);

       po =  avcodec_open2(pCodecE, encoder, NULL);

       AVFrame *pFrame;
       // Allocate an AVFrame structure



       // Allocate video frame
       pFrame=avcodec_alloc_frame();
       int frameFinished = 0;
       int frame = 0;
       int gotpacket = 0;

       while(av_read_frame(pFC, &packet) >= 0)
       {
           if(packet.stream_index==0) //the video stream is 0
           {
               int len = avcodec_decode_video2(pCodecC, pFrame, &frameFinished, &packet);
               if(frameFinished)
               {
                   printf("frame # %i", frame);

                   po =avcodec_encode_video2(pCodecE, &spacket, pFrame, &gotpacket);
                   if(gotpacket)
                   {
                       printf("packet recieved");
                   }
                   frame++;
               }
           }
           av_free_packet(&packet);
       }


       printf("encoding done");

       return 0;
    }

    Basically everything works up to

    po = avcodec_encode_video2(pCodecE, &spacket, pFrame, &gotpacket);

    where gotpacket comes back as 0, i.e. the encoder produces no packet.

    Not sure what I'm doing wrong.
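
    A likely explanation, as a side note to the question above: encoders such as libx264 buffer a number of input frames before emitting their first packet, so gotpacket being 0 for the first calls is normal. With this legacy API the buffered output is retrieved by flushing with NULL frames after the read loop, roughly as in this sketch (it reuses the pCodecE, spacket and gotpacket variables from the code above):

       /* drain the delayed packets once av_read_frame() has reached end of file */
       for (;;) {
           av_init_packet(&spacket);
           spacket.data = NULL;   /* let the encoder allocate the output buffer */
           spacket.size = 0;
           if (avcodec_encode_video2(pCodecE, &spacket, NULL, &gotpacket) < 0 || !gotpacket)
               break;
           printf("delayed packet, size %d\n", spacket.size);
           av_free_packet(&spacket);
       }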

  • Sending raw h264 video and aac audio frames to an RTMP server using ffmpeg

    15 August 2016, by codeimpaler

    I am receiving raw h264 and aac audio frames from an event-driven source, and I am trying to send these frames to an RTMP server.
    I started from the ffmpeg example muxing.c, which successfully sends a custom stream to the RTMP server, figuring I just need to replace its frame data with my own. I have tried the suggestions in "How to pack raw h264 stream to flv container and send over rtmp using ffmpeg (not command)" and "How to publish selfmade stream with ffmpeg and c++ to rtmp server?" and a few others, but none have worked for me.
    I have tried to directly memcpy my byte buffer, but my code keeps failing
    at ret = avcodec_encode_video2(c, &pkt, frame, &got_packet).
    Specifically, I get an invalid access error.
    For a little more context: whenever I receive a frame (which is event driven), void RTMPWriter::WriteVideoFrame(...) is called. Assume the constructor has already been called before the first frame is received.
    I am not that familiar with ffmpeg and there could be several things wrong with the code. Any input will be really appreciated.
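
    One detail about the invalid access, noted as an assumption rather than something from the original post: AVFrame::data is an array of per-plane pointers, so memcpy'ing a flat byte buffer onto it (as fill_yuv_image below does) overwrites the pointers themselves. If videoBytes holds uncompressed YUV420P pixels, a plane-aware copy along the lines of the sketch below is needed; if it holds already-encoded H.264, re-encoding is not wanted at all and the data should be handed to the muxer as packets instead. A minimal sketch, assuming a contiguous YUV420P buffer matching the frame's width and height (the helper name is made up):

       extern "C" {
       #include <libavutil/frame.h>
       #include <libavutil/imgutils.h>
       }

       // Copy a contiguous YUV420P buffer into the planes of an AVFrame that was
       // allocated with av_frame_get_buffer(), instead of over frame->data itself.
       static int copy_yuv420p_into_frame(AVFrame *frame, const uint8_t *videoBytes, int videoBufferLength)
       {
           uint8_t *src_data[4];
           int src_linesize[4];

           // Describe the flat buffer as plane pointers + linesizes (no copying yet).
           int needed = av_image_fill_arrays(src_data, src_linesize, videoBytes,
                                             AV_PIX_FMT_YUV420P, frame->width, frame->height, 1);
           if (needed < 0 || needed > videoBufferLength)
               return needed < 0 ? needed : AVERROR(EINVAL);

           // Copy plane by plane, honouring the destination linesizes.
           av_image_copy(frame->data, frame->linesize,
                         (const uint8_t **)src_data, src_linesize,
                         AV_PIX_FMT_YUV420P, frame->width, frame->height);
           return 0;
       }

    In fill_yuv_image such a helper would take the place of the flat memcpy, after the existing av_frame_make_writable() call.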

       #define STREAM_FRAME_RATE 25 /* 25 images/s */
       #define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */
       #define SCALE_FLAGS SWS_BICUBIC
       RTMPWriter::RTMPWriter()
         : seenKeyFrame(false),
           video_st({ 0 }),
           audio_st({ 0 }),
           have_video(0),
           have_audio(0)
       {

           const char *filename;
           AVCodec *audio_codec = NULL, *video_codec = NULL;
           int ret;

           int encode_video = 0, encode_audio = 0;
           AVDictionary *opt = NULL;
           int i;

           /* Initialize libavcodec, and register all codecs and formats. */
           av_register_all();

           avformat_network_init();

          String^ StreamURL = "StreamURL";
          String^ out_uri = safe_cast<String^>(ApplicationData::Current->LocalSettings->Values->Lookup(StreamURL));
          std::wstring out_uriW(out_uri->Begin());
          std::string out_uriA(out_uriW.begin(), out_uriW.end());
          filename = out_uriA.c_str();  

          /* allocate the output media context */
          avformat_alloc_output_context2(&oc, NULL, "flv", filename);
          if (!oc)
          {
              OutputDebugString(L"Could not deduce output format from file extension: using MPEG.\n");
              avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
          }
          if (!oc)
          {
              OutputDebugString(L"Could not allocate  using MPEG.\n");
          }


          fmt = oc->oformat;

          /* Add the audio and video streams using the default format codecs
          * and initialize the codecs. */
          if (fmt->video_codec != AV_CODEC_ID_NONE) {
              add_stream(&video_st, oc, &video_codec, fmt->video_codec);
              have_video = 1;
              encode_video = 1;
          }
          if (fmt->audio_codec != AV_CODEC_ID_NONE) {
              add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec);
              have_audio = 1;
              encode_audio = 1;
          }

          /* Now that all the parameters are set, we can open the audio and
           * video codecs and allocate the necessary encode buffers. */
          if (have_video)
          {
              open_video(oc, video_codec, &video_st, opt);
          }

          if (have_audio)
          {
              open_audio(oc, audio_codec, &audio_st, opt);
          }

          av_dump_format(oc, 0, filename, 1);

          /* open the output file, if needed */
          if (!(fmt->flags & AVFMT_NOFILE))
          {
              ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
              if (ret < 0)
              {
                  OutputDebugString(L"Could not open ");
                  OutputDebugString(out_uri->Data());
              }
          }

          /* Write the stream header, if any. */
          ret = avformat_write_header(oc, &opt);
          if (ret < 0)
          {
              OutputDebugString(L"Error occurred when writing stream header \n");
          }

       }

       void RTMPWriter::WriteVideoFrame(
           boolean isKeyFrame,
           boolean hasDiscontinuity,
           UINT64 frameId,
           UINT32 videoBufferLength,
           BYTE *videoBytes)
       {

           int ret;
           AVCodecContext *c;
           AVFrame* frame;
           int got_packet = 0;
           AVPacket pkt = { 0 };

           c = video_st.enc;

           frame = get_video_frame(videoBufferLength, videoBytes);

           /* encode the image */
           ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
           if (ret < 0) {
                OutputDebugString(L"Error encoding video frame: \n")
           }

           if (got_packet)
           {
               ret = write_frame(oc, &c->time_base, video_st.st, &pkt);
           }
           else {
               ret = 0;
           }

           if (ret < 0) {
                OutputDebugString(L"Error while writing video frame: %s\n");
           }
       }

       AVFrame * RTMPWriter::get_video_frame(
          UINT32 videoBufferLength,
          BYTE *videoBytes)
       {
           AVCodecContext *c = video_st.enc;

           if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
               /* as we only generate a YUV420P picture, we must convert it
               * to the codec pixel format if needed */
               if (!video_st.sws_ctx) {
                   video_st.sws_ctx = sws_getContext(c->width, c->height,
                       AV_PIX_FMT_YUV420P,
                       c->width, c->height,
                       c->pix_fmt,
                       SCALE_FLAGS, NULL, NULL, NULL);
                   if (!video_st.sws_ctx) {
                       fprintf(stderr,
                           "Could not initialize the conversion context\n");
                           exit(1);
                   }
               }
               fill_yuv_image(video_st.tmp_frame, video_st.next_pts, c->width, c->height, videoBufferLength, videoBytes);
               sws_scale(video_st.sws_ctx,
               (const uint8_t * const *)video_st.tmp_frame->data, video_st.tmp_frame->linesize,
               0, c->height, video_st.frame->data, video_st.frame->linesize);
           }
           else {
               fill_yuv_image(video_st.frame, video_st.next_pts, c->width, c->height, videoBufferLength, videoBytes);
           }

           video_st.frame->pts = video_st.next_pts++;

           return video_st.frame;
       }

       /* Prepare a dummy image. */
       void  RTMPWriter::fill_yuv_image(
            AVFrame *pict,
            int frame_index,
            int width,
            int height,
            UINT32 videoBufferLength,
            BYTE *videoBytes)
       {
           //int x, y, i, ret;

           /* when we pass a frame to the encoder, it may keep a reference to it
           * internally;
           * make sure we do not overwrite it here
           */
           int ret = av_frame_make_writable(pict);
           if (ret < 0)
           {
                OutputDebugString(L"Unable to make picture writable");
           }

           /* pict->data is an array of per-plane pointers, not one contiguous
              buffer; a flat memcpy over it clobbers those pointers (see the
              plane-copy sketch above) */
           memcpy(pict->data, videoBytes, videoBufferLength);

           //i = frame_index;

           ///* Y */
           //for (y = 0; y < height; y++)
           //  for (x = 0; x < width; x++)
           //      pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;

           ///* Cb and Cr */
           //for (y = 0; y < height / 2; y++) {
           //  for (x = 0; x < width / 2; x++) {
           //      pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
           //      pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
           //  }
           //}
       }

       void RTMPWriter::WriteAudioFrame()
       {

       }

       /* Add an output stream. */
       void  RTMPWriter::add_stream(
           OutputStream *ost,
           AVFormatContext *oc,
           AVCodec **codec,
           enum AVCodecID codec_id)
      {
       AVCodecContext *c;
       int i;

       /* find the encoder */
       *codec = avcodec_find_encoder(codec_id);
       if (!(*codec)) {
           OutputDebugString(L"Could not find encoder for '%s'\n");
           //avcodec_get_name(codec_id));
           exit(1);
       }

       ost->st = avformat_new_stream(oc, NULL);
       if (!ost->st) {
           OutputDebugString(L"Could not allocate stream\n");
           exit(1);
       }
       ost->st->id = oc->nb_streams - 1;
       c = avcodec_alloc_context3(*codec);
       if (!c) {
           OutputDebugString(L"Could not alloc an encoding context\n");
           exit(1);
       }
       ost->enc = c;

       switch ((*codec)->type) {
       case AVMEDIA_TYPE_AUDIO:
           c->sample_fmt = (*codec)->sample_fmts ?
               (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
           c->bit_rate = 64000;
           c->sample_rate = 44100;
           if ((*codec)->supported_samplerates) {
               c->sample_rate = (*codec)->supported_samplerates[0];
               for (i = 0; (*codec)->supported_samplerates[i]; i++) {
                   if ((*codec)->supported_samplerates[i] == 44100)
                       c->sample_rate = 44100;
               }
           }
           c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
           c->channel_layout = AV_CH_LAYOUT_STEREO;
           if ((*codec)->channel_layouts) {
               c->channel_layout = (*codec)->channel_layouts[0];
               for (i = 0; (*codec)->channel_layouts[i]; i++) {
                   if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
                       c->channel_layout = AV_CH_LAYOUT_STEREO;
               }
           }
           c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
           ost->st->time_base = /*(AVRational)*/{ 1, c->sample_rate };
           break;

       case AVMEDIA_TYPE_VIDEO:
           c->codec_id = codec_id;

           c->bit_rate = 400000;
           /* Resolution must be a multiple of two. */
           c->width = 352;
           c->height = 288;
           /* timebase: This is the fundamental unit of time (in seconds) in terms
           * of which frame timestamps are represented. For fixed-fps content,
           * timebase should be 1/framerate and timestamp increments should be
           * identical to 1. */
           ost->st->time_base = /*(AVRational)*/{ 1, STREAM_FRAME_RATE };
           c->time_base = ost->st->time_base;

           c->gop_size = 12; /* emit one intra frame every twelve frames at most */
           c->pix_fmt = STREAM_PIX_FMT;
               if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
                   /* just for testing, we also add B-frames */
                   c->max_b_frames = 2;
               }
               if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
                   /* Needed to avoid using macroblocks in which some coeffs overflow.
                   * This does not happen with normal video, it just happens here as
                   * the motion of the chroma plane does not match the luma plane. */
                   c->mb_decision = 2;
               }
               break;

           default:
               break;
           }

            /* Some formats want stream headers to be separate. */
           if (oc->oformat->flags & AVFMT_GLOBALHEADER)
               c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }

    AVFrame * RTMPWriter::alloc_audio_frame(
       enum AVSampleFormat sample_fmt,
       uint64_t channel_layout,
       int sample_rate, int nb_samples)
    {
       AVFrame *frame = av_frame_alloc();
       int ret;

       if (!frame) {
           OutputDebugString(L"Error allocating an audio frame\n");
           exit(1);
       }

       frame->format = sample_fmt;
       frame->channel_layout = channel_layout;
       frame->sample_rate = sample_rate;
       frame->nb_samples = nb_samples;

       if (nb_samples) {
           ret = av_frame_get_buffer(frame, 0);
           if (ret < 0) {
               OutputDebugString(L"Error allocating an audio buffer\n");
               exit(1);
           }
       }

           return frame;
       }




    void  RTMPWriter::open_audio(
       AVFormatContext *oc,
       AVCodec *codec,
       OutputStream *ost,
       AVDictionary *opt_arg)
    {
       AVCodecContext *c;
       int nb_samples;
       int ret;
       AVDictionary *opt = NULL;

       c = ost->enc;

       /* open it */
       av_dict_copy(&opt, opt_arg, 0);
       ret = avcodec_open2(c, codec, &opt);
       av_dict_free(&opt);
       if (ret < 0) {
           OutputDebugString(L"Could not open audio codec: %s\n");// , av_err2str(ret));
           exit(1);
       }

       /* init signal generator */
       ost->t = 0;
       ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
       /* increment frequency by 110 Hz per second */
       ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

       if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE)
           nb_samples = 10000;
       else
           nb_samples = c->frame_size;

       ost->frame = alloc_audio_frame(c->sample_fmt, c->channel_layout,
           c->sample_rate, nb_samples);
       ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
           c->sample_rate, nb_samples);

       /* copy the stream parameters to the muxer */
       ret = avcodec_parameters_from_context(ost->st->codecpar, c);
       if (ret < 0) {
           OutputDebugString(L"Could not copy the stream parameters\n");
           exit(1);
       }

       /* create resampler context */
       ost->swr_ctx = swr_alloc();
       if (!ost->swr_ctx) {
           OutputDebugString(L"Could not allocate resampler context\n");
           exit(1);
       }

       /* set options */
       av_opt_set_int(ost->swr_ctx, "in_channel_count", c->channels, 0);
       av_opt_set_int(ost->swr_ctx, "in_sample_rate", c->sample_rate, 0);
       av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
       av_opt_set_int(ost->swr_ctx, "out_channel_count", c->channels, 0);
       av_opt_set_int(ost->swr_ctx, "out_sample_rate", c->sample_rate, 0);
       av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt", c->sample_fmt, 0);

       /* initialize the resampling context */
       if ((ret = swr_init(ost->swr_ctx)) < 0) {
           OutputDebugString(L"Failed to initialize the resampling context\n");
           exit(1);
       }
    }

    int RTMPWriter::write_frame(
       AVFormatContext *fmt_ctx,
       const AVRational *time_base,
       AVStream *st,
       AVPacket *pkt)
    {
       /* rescale output packet timestamp values from codec to stream timebase */
       av_packet_rescale_ts(pkt, *time_base, st->time_base);
       pkt->stream_index = st->index;

       /* Write the compressed frame to the media file. */
       //log_packet(fmt_ctx, pkt);
       OutputDebugString(L"Actually sending video frame: %s\n");
       return av_interleaved_write_frame(fmt_ctx, pkt);
    }


    AVFrame  *RTMPWriter::alloc_picture(
       enum AVPixelFormat pix_fmt,
       int width,
       int height)
    {
       AVFrame *picture;
       int ret;

       picture = av_frame_alloc();
       if (!picture)
           return NULL;

       picture->format = pix_fmt;
       picture->width = width;
       picture->height = height;

       /* allocate the buffers for the frame data */
       ret = av_frame_get_buffer(picture, 32);
       if (ret < 0) {
           fprintf(stderr, "Could not allocate frame data.\n");
           exit(1);
       }

       return picture;
    }

    void RTMPWriter::open_video(
       AVFormatContext *oc,
       AVCodec *codec,
       OutputStream *ost,
       AVDictionary *opt_arg)
    {
       int ret;
       AVCodecContext *c = ost->enc;
       AVDictionary *opt = NULL;

       av_dict_copy(&opt, opt_arg, 0);

       /* open the codec */
       ret = avcodec_open2(c, codec, &opt);
       av_dict_free(&opt);
       if (ret < 0) {
           OutputDebugString(L"Could not open video codec: %s\n");// , av_err2str(ret));
           exit(1);
       }

       /* allocate and init a re-usable frame */
       ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
       if (!ost->frame) {
           OutputDebugString(L"Could not allocate video frame\n");
           exit(1);
       }

       /* If the output format is not YUV420P, then a temporary YUV420P
       * picture is needed too. It is then converted to the required
       * output format. */
       ost->tmp_frame = NULL;
       if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
           ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
           if (!ost->tmp_frame) {
               OutputDebugString(L"Could not allocate temporary picture\n");
               exit(1);
           }
       }

       /* copy the stream parameters to the muxer */
       ret = avcodec_parameters_from_context(ost->st->codecpar, c);
       if (ret < 0) {
           OutputDebugString(L"Could not copy the stream parameters\n");
           exit(1);
       }
    }

    void RTMPWriter::close_stream(AVFormatContext *oc, OutputStream *ost)
    {
       avcodec_free_context(&ost->enc);
       av_frame_free(&ost->frame);
       av_frame_free(&ost->tmp_frame);
       sws_freeContext(ost->sws_ctx);
       swr_free(&ost->swr_ctx);
    }

    RTMPWriter::~RTMPWriter()
    {
       av_write_trailer(oc);
       /* Close each codec. */
       if (have_video)
           close_stream(oc, &video_st);
       if (have_audio)
           close_stream(oc, &audio_st);

       if (!(fmt->flags & AVFMT_NOFILE))
           /* Close the output file. */
           avio_closep(&oc->pb);

       /* free the stream */
       avformat_free_context(oc);
    }
  • FFmpeg RTP payload 96 instead of 97

    26 October 2016, by bot1131357

    I am trying to create an rtp audio stream with ffmpeg. The application output and SDP file configuration are as follows:

    Output #0, rtp, to 'rtp://127.0.0.1:8554':
       Stream #0:0: Audio: pcm_s16be, 8000 Hz, stereo, s16, 256 kb/s

    SDP:    
    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    c=IN IP4 127.0.0.1
    t=0 0
    a=tool:libavformat 57.25.101
    m=audio 8554 RTP/AVP 96
    b=AS:256
    a=rtpmap:96 L16/8000/2

    However, when I try to read it with ffplay -i test.sdp -protocol_whitelist file,udp,rtp, it fails and shows the following:

    ffplay version N-78598-g98a0053 Copyright (c) 2003-2016 the FFmpeg developers
     built with gcc 5.3.0 (GCC)
     configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
     libavutil      55. 18.100 / 55. 18.100
     libavcodec     57. 24.103 / 57. 24.103
     libavformat    57. 25.101 / 57. 25.101
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 34.100 /  6. 34.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
       nan    :  0.000 fd=   0 aq=    0KB vq=    0KB sq=    0B f=0/0
       (...waits indefinitely.)

    The only way to make it work again is to modify the payload type in the SDP file from 96 to 97. Can someone tell me why? Where is this number defined?
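
    For background, as a general note rather than something from the post: RTP payload types 96-127 are the "dynamic" range from RFC 3551, so 96 has no fixed meaning by itself; the receiver learns what it carries only from the a=rtpmap line in the SDP, and the number on the m= line has to match both that rtpmap entry and the payload type actually sent on the wire. If the libavformat build exposes the RTP muxer's payload_type private option (an assumption about this build), the sender side can be pinned to a specific number, sketched here with the outFormatCtx/ret names from the source below:

       /* hypothetical: force payload type 97 so the generated SDP and the RTP
          packets agree (assumes the rtp muxer exposes a "payload_type" option) */
       AVDictionary *muxer_opts = NULL;
       av_dict_set(&muxer_opts, "payload_type", "97", 0);
       ret = avformat_write_header(outFormatCtx, &muxer_opts);
       av_dict_free(&muxer_opts);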

    Here is my source. See if you can replicate it.

    #include <stdio.h>
    #include <math.h>
    extern "C"
    {
    #include <libavutil/opt.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/common.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/samplefmt.h>
    #include <libavformat/avformat.h>
    }


    static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
    {
       /* rescale output packet timestamp values from codec to stream timebase */
       av_packet_rescale_ts(pkt, *time_base, st->time_base);

       /* Write the compressed frame to the media file. */
       return av_interleaved_write_frame(fmt_ctx, pkt);
    }

    /*
    * Audio encoding example
    */
    static void audio_encode_example(const char *filename)
    {
       AVPacket pkt;
       int i, j, k, ret, got_output;
       int buffer_size;

       uint16_t *samples;
       float t, tincr;

       AVCodec *outCodec = NULL;
       AVCodecContext *outCodecCtx = NULL;
       AVFormatContext *outFormatCtx = NULL;
       AVStream * outAudioStream = NULL;
       AVFrame *outFrame = NULL;

       ret = avformat_alloc_output_context2(&outFormatCtx, NULL, "rtp", filename);
       if (!outFormatCtx || ret < 0)
       {
           fprintf(stderr, "Could not allocate output context");
       }

       outFormatCtx->flags |= AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS;
       outFormatCtx->oformat->audio_codec = AV_CODEC_ID_PCM_S16BE;

       /* find the encoder */
       outCodec = avcodec_find_encoder(outFormatCtx->oformat->audio_codec);
       if (!outCodec) {
           fprintf(stderr, "Codec not found\n");
           exit(1);
       }

       outAudioStream = avformat_new_stream(outFormatCtx, outCodec);
       if (!outAudioStream)
       {
           fprintf(stderr, "Cannot add new audio stream\n");
           exit(1);
       }

       outAudioStream->id = outFormatCtx->nb_streams - 1;
       outCodecCtx = outAudioStream->codec;
       outCodecCtx->sample_fmt = AV_SAMPLE_FMT_S16;

       /* select other audio parameters supported by the encoder */
       outCodecCtx->sample_rate = 8000;
       outCodecCtx->channel_layout = AV_CH_LAYOUT_STEREO;
       outCodecCtx->channels = 2;

       /* open it */
       if (avcodec_open2(outCodecCtx, outCodec, NULL) < 0) {
           fprintf(stderr, "Could not open codec\n");
           exit(1);
       }

       // PCM has no frame, so we have to explicitly specify
       outCodecCtx->frame_size = 1152;

       av_dump_format(outFormatCtx, 0, filename, 1);

       char buff[10000] = { 0 };
       ret = av_sdp_create(&outFormatCtx, 1, buff, sizeof(buff));
       printf("%s", buff);

       ret = avio_open2(&outFormatCtx->pb, filename, AVIO_FLAG_WRITE, NULL, NULL);
       ret = avformat_write_header(outFormatCtx, NULL);
       printf("ret = %d\n", ret);
       if (ret < 0) {
           exit(1);
       }

       /* frame containing input audio */
       outFrame = av_frame_alloc();
       if (!outFrame) {
           fprintf(stderr, "Could not allocate audio frame\n");
           exit(1);
       }

       outFrame->nb_samples = outCodecCtx->frame_size;
       outFrame->format = outCodecCtx->sample_fmt;
       outFrame->channel_layout = outCodecCtx->channel_layout;

       /* we calculate the size of the samples buffer in bytes */
       buffer_size = av_samples_get_buffer_size(NULL, outCodecCtx->channels, outCodecCtx->frame_size,
           outCodecCtx->sample_fmt, 0);
       if (buffer_size < 0) {
           fprintf(stderr, "Could not get sample buffer size\n");
           exit(1);
       }
       samples = (uint16_t*)av_malloc(buffer_size);
       if (!samples) {
           fprintf(stderr, "Could not allocate %d bytes for samples buffer\n",
               buffer_size);
           exit(1);
       }
       /* setup the data pointers in the AVFrame */
       ret = avcodec_fill_audio_frame(outFrame, outCodecCtx->channels, outCodecCtx->sample_fmt,
           (const uint8_t*)samples, buffer_size, 0);
       if (ret < 0) {
           fprintf(stderr, "Could not setup audio frame\n");
           exit(1);
       }

       /* encode a single tone sound */
       t = 0;
       int next_pts = 0;
       tincr = 2 * M_PI * 440.0 / outCodecCtx->sample_rate;
       for (i = 0; i < 400000; i++) {
           av_init_packet(&pkt);
           pkt.data = NULL; // packet data will be allocated by the encoder
           pkt.size = 0;

           for (j = 0; j < outCodecCtx->frame_size; j++) {
               samples[2 * j] = (uint16_t)(sin(t) * 10000);

               for (k = 1; k < outCodecCtx->channels; k++)
                   samples[2 * j + k] = samples[2 * j];
               t += tincr;
           }
           t = (t > 50000) ? 0 : t;

           // Sets time stamp
           next_pts += outFrame->nb_samples;
           outFrame->pts = next_pts;

           /* encode the samples */
           ret = avcodec_encode_audio2(outCodecCtx, &pkt, outFrame, &got_output);
           if (ret < 0) {
               fprintf(stderr, "Error encoding audio frame\n");
               exit(1);
           }
           if (got_output) {
               write_frame(outFormatCtx, &outCodecCtx->time_base, outAudioStream, &pkt);
               av_packet_unref(&pkt);
           }

           printf("i:%d\n", i); // waste some time to avoid over-filling jitter buffer
           printf("Audio: %d\t%d\n", samples[0], samples[1]); // waste some time to avoid over-filling jitter buffer
           printf("t: %f\n", t); // waste some time to avoid over-filling jitter buffer
       }

       /* get the delayed frames */
       for (got_output = 1; got_output; i++) {
           ret = avcodec_encode_audio2(outCodecCtx, &pkt, NULL, &got_output);
           if (ret < 0) {
               fprintf(stderr, "Error encoding frame\n");
               exit(1);
           }

           if (got_output) {
               pkt.pts = AV_NOPTS_VALUE;
               write_frame(outFormatCtx, &outCodecCtx->time_base, outAudioStream, &pkt);
               av_packet_unref(&pkt);
           }
       }

       av_freep(&samples);
       av_frame_free(&outFrame);
       avcodec_close(outCodecCtx);
       av_free(outCodecCtx);
    }


    int main(int argc, char **argv)
    {
       const char *output;

       av_register_all();
       avformat_network_init(); // for network streaming

       audio_encode_example("rtp://127.0.0.1:8554");

       return 0;
    }

    Update

    Curiously, running on Linux Ubuntu gives me the following instead:

    Output #0, rtp, to 'rtp://127.0.0.1:8554':
       Stream #0:0: Unknown: none (pcm_s16be)
    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    c=IN IP4 127.0.0.1
    t=0 0
    a=tool:libavformat 57.48.100
    m=application 8554 RTP/AVP 3

    Does anyone know why the stream has been changed from audio to application?