
Media (91)

Other articles (61)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images for a comparison.
    To use it, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (9126)

  • Output RTSP stream with ffmpeg

    28 July 2015, by sipwiz

    I’m attempting to use the ffmpeg libraries to send a video stream from my application to a media server (in this case wowza). I have been able to do the reverse and consume an RTSP stream but I’m having a few issues writing an RTSP stream.

    I have found a few examples and attempted to use the relevant bits. The code is below, simplified as much as I can. I only want to send a single H264 bit stream to the wowza server, which it can handle.

    I get an "Integer division by zero" exception in the av_interleaved_write_frame function whenever I try to send a packet. The exception looks like it’s related to the packet timestamps not being set correctly. I’ve tried different values and can get past the exception by setting some contrived values, but then the write call fails.

    #include <iostream>
    #include <fstream>
    #include <sstream>
    #include <cstring>

    #include "stdafx.h"
    #include "windows.h"

    extern "C"
    {
       #include <libavformat/avformat.h>
       #include <libavcodec/avcodec.h>
       #include <libswscale/swscale.h>
       #include <libavutil/mathematics.h>
    }

    using namespace std;

    static int video_is_eof;

    #define STREAM_DURATION   50.0
    #define STREAM_FRAME_RATE 25 /* 25 images/s */
    #define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */
    #define VIDEO_CODEC_ID CODEC_ID_H264

    static int sws_flags = SWS_BICUBIC;

    /* video output */
    static AVFrame *frame;
    static AVPicture src_picture, dst_picture;
    static int frame_count;

    static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
    {
       /* rescale output packet timestamp values from codec to stream timebase */
       pkt->pts = av_rescale_q_rnd(pkt->pts, *time_base, st->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
       pkt->dts = av_rescale_q_rnd(pkt->dts, *time_base, st->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
       pkt->duration = av_rescale_q(pkt->duration, *time_base, st->time_base);

       pkt->stream_index = st->index;

       // Exception occurs here.
       return av_interleaved_write_frame(fmt_ctx, pkt);
    }

    /* Add an output stream. */
    static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec, enum AVCodecID codec_id)
    {
       AVCodecContext *c;
       AVStream *st;

       /* find the encoder */
       *codec = avcodec_find_encoder(codec_id);
       if (!(*codec)) {
           fprintf(stderr, "Could not find encoder for '%s'\n", avcodec_get_name(codec_id));
           exit(1);
       }

       st = avformat_new_stream(oc, *codec);
       if (!st) {
           fprintf(stderr, "Could not allocate stream\n");
           exit(1);
       }

       st->id = oc->nb_streams - 1;
       c = st->codec;
       c->codec_id = codec_id;
       c->bit_rate = 400000;
       c->width = 352;
       c->height = 288;
       c->time_base.den = STREAM_FRAME_RATE;
       c->time_base.num = 1;
       c->gop_size = 12; /* emit one intra frame every twelve frames at most */
       c->pix_fmt = STREAM_PIX_FMT;

       return st;
    }

    static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
    {
       int ret;
       AVCodecContext *c = st->codec;

       /* open the codec */
       ret = avcodec_open2(c, codec, NULL);
       if (ret < 0) {
           fprintf(stderr, "Could not open video codec: ");
           exit(1);
       }

       /* allocate and init a re-usable frame */
       frame = av_frame_alloc();
       if (!frame) {
           fprintf(stderr, "Could not allocate video frame\n");
           exit(1);
       }
       frame->format = c->pix_fmt;
       frame->width = c->width;
       frame->height = c->height;

       /* Allocate the encoded raw picture. */
       ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
       if (ret < 0) {
           fprintf(stderr, "Could not allocate picture: ");
           exit(1);
       }

       /* copy data and linesize picture pointers to frame */
       *((AVPicture *)frame) = dst_picture;
    }

    /* Prepare a dummy image. */
    static void fill_yuv_image(AVPicture *pict, int frame_index, int width, int height)
    {
       int x, y, i;

       i = frame_index;

       /* Y */
       for (y = 0; y < height; y++)
           for (x = 0; x < width; x++)
               pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;

       /* Cb and Cr */
       for (y = 0; y < height / 2; y++) {
           for (x = 0; x < width / 2; x++) {
               pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
               pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
           }
       }
    }

    static void write_video_frame(AVFormatContext *oc, AVStream *st, int flush)
    {
       int ret;
       AVCodecContext *c = st->codec;

       if (!flush) {
           fill_yuv_image(&dst_picture, frame_count, c->width, c->height);
       }

       AVPacket pkt = { 0 };
       int got_packet;
       av_init_packet(&pkt);

       /* encode the image */
       frame->pts = frame_count;
       ret = avcodec_encode_video2(c, &pkt, flush ? NULL : frame, &got_packet);
       if (ret < 0) {
           fprintf(stderr, "Error encoding video frame:");
           exit(1);
       }
       /* If size is zero, it means the image was buffered. */

       if (got_packet) {
           ret = write_frame(oc, &c->time_base, st, &pkt);
       }
       else {
           if (flush) {
               video_is_eof = 1;
           }
           ret = 0;
       }

       if (ret < 0) {
           fprintf(stderr, "Error while writing video frame: ");
           exit(1);
       }
       frame_count++;
    }

    static void close_video(AVFormatContext *oc, AVStream *st)
    {
       avcodec_close(st->codec);
       av_free(src_picture.data[0]);
       av_free(dst_picture.data[0]);
       av_frame_free(&frame);
    }

    int _tmain(int argc, _TCHAR* argv[])
    {
       printf("starting...\n");

       const char *filename = "rtsp://test:password@192.168.33.19:1935/ffmpeg/0";

       AVOutputFormat *fmt;
       AVFormatContext *oc;
       AVStream *video_st;
       AVCodec *video_codec;
       double video_time;
       int flush, ret;

       /* Initialize libavcodec, and register all codecs and formats. */
       av_register_all();
       avformat_network_init();

       AVOutputFormat* oFmt = av_oformat_next(NULL);
       while (oFmt) {
           if (oFmt->video_codec == VIDEO_CODEC_ID) {
               break;
           }
           oFmt = av_oformat_next(oFmt);
       }

       if (!oFmt) {
           printf("Could not find the required output format.\n");
           exit(1);
       }

       /* allocate the output media context */
       avformat_alloc_output_context2(&oc, oFmt, "rtsp", filename);

       if (!oc) {
           printf("Could not set the output media context.\n");
           exit(1);
       }

       fmt = oc->oformat;
       if (!fmt) {
           printf("Could not create the output format.\n");
           exit(1);
       }

       video_st = NULL;

       cout << "Codec = " << avcodec_get_name(fmt->video_codec) << endl;
       if (fmt->video_codec != AV_CODEC_ID_NONE)
       {
           video_st = add_stream(oc, &video_codec, fmt->video_codec);
       }

       /* Now that all the parameters are set, we can open the video codec and allocate the necessary encode buffers. */
       if (video_st) {
           open_video(oc, video_codec, video_st);
       }

       av_dump_format(oc, 0, filename, 1);
       char errorBuff[80];

       if (!(fmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               fprintf(stderr, "Could not open outfile '%s': %s", filename, av_make_error_string(errorBuff, 80, ret));
               return 1;
           }
       }

       flush = 0;
       while (video_st && !video_is_eof) {
           /* Compute current video time. */
           video_time = (video_st && !video_is_eof) ? video_st->pts.val * av_q2d(video_st->time_base) : INFINITY;

           if (!flush && (!video_st || video_time >= STREAM_DURATION)) {
               flush = 1;
           }

           if (video_st && !video_is_eof) {
               write_video_frame(oc, video_st, flush);
           }
       }

       if (video_st) {
           close_video(oc, video_st);
       }

       if ((fmt->flags & AVFMT_NOFILE)) {
           avio_close(oc->pb);
       }

       avformat_free_context(oc);

       printf("finished.\n");

       getchar();

       return 0;
    }

    Does anyone have any insights about how the packet timestamps can be successfully set?

  • ffmpeg copyts to preserve timestamp

    13 April 2015, by Bala

    I am trying to modify an HLS segment transport stream and preserve its start time with ffmpeg. However, the output does not preserve the input file’s start_time value, even when -copyts is specified. Here’s my command line:

    ffmpeg  -i fileSequence1.ts -i x.png -filter_complex '[0:v][1:v]overlay[out]' -map '[out]' -map 0:1 -acodec copy -vsync 0 -vcodec libx264 -streamid 0:257 -streamid 1:258 -copyts -profile:v baseline -level 3 output.ts

    The start_time value is consistently delayed by about 2 seconds.

    /Users/macadmin/>ffmpeg -y -v verbose -i fileSequence0.ts -map 0:0 -vcodec libx264 -copyts -vsync 0 -async 0 output.ts
    ffmpeg version 2.5.3 Copyright (c) 2000-2015 the FFmpeg developers
     built on Mar 29 2015 21:31:57 with Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/2.5.3 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-libfreetype --enable-libvorbis --enable-libvpx --enable-libass --enable-ffplay --enable-libfdk-aac --enable-libopus --enable-libquvi --enable-libx265 --enable-nonfree --enable-vda
     libavutil      54. 15.100 / 54. 15.100
     libavcodec     56. 13.100 / 56. 13.100
     libavformat    56. 15.102 / 56. 15.102
     libavdevice    56.  3.100 / 56.  3.100
     libavfilter     5.  2.103 /  5.  2.103
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    [h264 @ 0x7fa93b800000] Current profile doesn't provide more RBSP data in PPS, skipping
       Last message repeated 2 times
    [mpegts @ 0x7fa93a80da00] max_analyze_duration 5000000 reached at 5000000 microseconds
    Input #0, mpegts, from 'fileSequence0.ts':
     Duration: 00:00:09.65, start: 9.952111, bitrate: 412 kb/s
     Program 1
       Stream #0:0[0x101]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 640x360 (640x368), 25 fps, 25 tbr, 90k tbn, 50 tbc
       Stream #0:1[0x102]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 101 kb/s
    [graph 0 input from stream 0:0 @ 0x7fa93a5229c0] w:640 h:360 pixfmt:yuv420p tb:1/90000 fr:25/1 sar:0/1 sws_param:flags=2
    [libx264 @ 0x7fa93b800c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0x7fa93b800c00] profile High, level 3.0
    [mpegts @ 0x7fa93b800600] muxrate VBR, pcr every 2 pkts, sdt every 200, pat/pmt every 40 pkts
    Output #0, mpegts, to 'output.ts':
     Metadata:
       encoder         : Lavf56.15.102
       Stream #0:0: Video: h264 (libx264), yuv420p, 640x360, q=-1--1, 25 fps, 90k tbn, 25 tbc
       Metadata:
         encoder         : Lavc56.13.100 libx264
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [NULL @ 0x7fa93b800000] Current profile doesn't provide more RBSP data in PPS, skipping
       Last message repeated 1 times
    frame=   87 fps=0.0 q=28.0 size=      91kB time=00:00:11.40 bitrate=  65.0kbits/[NULL @ 0x7fa93b800000] Current profile doesn't provide more RBSP data in PPS, skipping
    frame=  152 fps=151 q=28.0 size=     204kB time=00:00:14.00 bitrate= 119.4kbits/[NULL @ 0x7fa93b800000] Current profile doesn't provide more RBSP data in PPS, skipping
    frame=  224 fps=148 q=28.0 size=     306kB time=00:00:16.88 bitrate= 148.5kbits/No more output streams to write to, finishing.
    frame=  240 fps=125 q=-1.0 Lsize=     392kB time=00:00:19.52 bitrate= 164.6kbits/s    
    video:334kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 17.347548%
    Input file #0 (fileSequence0.ts):
     Input stream #0:0 (video): 240 packets read (360450 bytes); 240 frames decoded;
     Input stream #0:1 (audio): 0 packets read (0 bytes);
     Total: 240 packets (360450 bytes) demuxed
    Output file #0 (output.ts):
     Output stream #0:0 (video): 240 frames encoded; 240 packets muxed (342204 bytes);
     Total: 240 packets (342204 bytes) muxed
    [libx264 @ 0x7fa93b800c00] frame I:3     Avg QP:15.08  size:  7856
    [libx264 @ 0x7fa93b800c00] frame P:81    Avg QP:21.03  size:  2807
    [libx264 @ 0x7fa93b800c00] frame B:156   Avg QP:23.40  size:   585
    [libx264 @ 0x7fa93b800c00] consecutive B-frames: 11.7%  2.5%  7.5% 78.3%
    [libx264 @ 0x7fa93b800c00] mb I  I16..4: 57.4% 17.5% 25.1%
    [libx264 @ 0x7fa93b800c00] mb P  I16..4:  8.0%  8.2%  1.0%  P16..4: 30.5% 11.3%  4.6%  0.0%  0.0%    skip:36.4%
    [libx264 @ 0x7fa93b800c00] mb B  I16..4:  0.1%  0.1%  0.0%  B16..8: 34.6%  2.7%  0.2%  direct: 1.3%  skip:60.9%  L0:47.3% L1:49.1% BI: 3.6%
    [libx264 @ 0x7fa93b800c00] 8x8 transform intra:42.2% inter:73.3%
    [libx264 @ 0x7fa93b800c00] coded y,uvDC,uvAC intra: 26.2% 43.0% 6.8% inter: 5.4% 8.5% 0.1%
    [libx264 @ 0x7fa93b800c00] i16 v,h,dc,p: 46% 26%  6% 21%
    [libx264 @ 0x7fa93b800c00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 50% 14% 23%  1%  2%  6%  1%  3%  1%
    [libx264 @ 0x7fa93b800c00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 40% 32% 10%  3%  3%  4%  2%  5%  2%
    [libx264 @ 0x7fa93b800c00] i8c dc,h,v,p: 48% 23% 26%  3%
    [libx264 @ 0x7fa93b800c00] Weighted P-Frames: Y:1.2% UV:0.0%
    [libx264 @ 0x7fa93b800c00] ref P L0: 71.5% 10.7% 14.8%  2.9%  0.1%
    [libx264 @ 0x7fa93b800c00] ref B L0: 95.5%  4.0%  0.5%
    [libx264 @ 0x7fa93b800c00] ref B L1: 96.8%  3.2%
    [libx264 @ 0x7fa93b800c00] kb/s:285.17

    ----------- FFProbe source video
    /Users/macadmin>ffprobe fileSequence0.ts
    ffprobe version 2.5.3 Copyright (c) 2007-2015 the FFmpeg developers
     built on Mar 29 2015 21:31:57 with Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/2.5.3 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-libfreetype --enable-libvorbis --enable-libvpx --enable-libass --enable-ffplay --enable-libfdk-aac --enable-libopus --enable-libquvi --enable-libx265 --enable-nonfree --enable-vda
     libavutil      54. 15.100 / 54. 15.100
     libavcodec     56. 13.100 / 56. 13.100
     libavformat    56. 15.102 / 56. 15.102
     libavdevice    56.  3.100 / 56.  3.100
     libavfilter     5.  2.103 /  5.  2.103
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, mpegts, from 'fileSequence0.ts':
     Duration: 00:00:09.65, start: 9.952111, bitrate: 412 kb/s
     Program 1
       Stream #0:0[0x101]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x001B), yuv420p, 640x360, 25 fps, 25 tbr, 90k tbn, 50 tbc
       Stream #0:1[0x102]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 101 kb/s

    ------ FFPROBE result video
    /Users/macadmin>ffprobe output.ts
    ffprobe version 2.5.3 Copyright (c) 2007-2015 the FFmpeg developers
     built on Mar 29 2015 21:31:57 with Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/2.5.3 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libx264 --enable-libmp3lame --enable-libvo-aacenc --enable-libxvid --enable-libfreetype --enable-libvorbis --enable-libvpx --enable-libass --enable-ffplay --enable-libfdk-aac --enable-libopus --enable-libquvi --enable-libx265 --enable-nonfree --enable-vda
     libavutil      54. 15.100 / 54. 15.100
     libavcodec     56. 13.100 / 56. 13.100
     libavformat    56. 15.102 / 56. 15.102
     libavdevice    56.  3.100 / 56.  3.100
     libavfilter     5.  2.103 /  5.  2.103
     libavresample   2.  1.  0 /  2.  1.  0
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, mpegts, from 'output.ts':
     Duration: 00:00:09.60, start: 11.400000, bitrate: 334 kb/s
     Program 1
       Metadata:
         service_name    : Service01
         service_provider: FFmpeg
       Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 640x360, 25 fps, 25 tbr, 90k tbn, 50 tbc

    How do I ensure that the output file has the same start_time? Thanks.

  • encode h264 video using ffmpeg library memory issues

    31 March 2015, by Zeppa

    I’m trying to do screen capture on OS X using ffmpeg’s avfoundation input. I capture frames from the screen and encode them with H264 into an FLV container.

    Here’s the command-line output of the program:

    Input #0, avfoundation, from 'Capture screen 0':
     Duration: N/A, start: 9.253649, bitrate: N/A
       Stream #0:0: Video: rawvideo (UYVY / 0x59565955), uyvy422, 1440x900, 14.58 tbr, 1000k tbn, 1000k tbc
    raw video is inCodec
    FLV (Flash Video)http://localhost:8090/test.flv
    [libx264 @ 0x102038e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0x102038e00] profile High, level 4.0
    [libx264 @ 0x102038e00] 264 - core 142 r2495 6a301b6 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=1 weightp=2 keyint=50 keyint_min=5 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=400 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    [tcp @ 0x101a5fe70] Connection to tcp://localhost:8090 failed (Connection refused), trying next address
    [tcp @ 0x101a5fe70] Connection to tcp://localhost:8090 failed: Connection refused
    url_fopen failed: Operation now in progress
    [flv @ 0x102038800] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
    encoded frame #0
    encoded frame #1
    ......
    encoded frame #49
    encoded frame #50
    testmee(8404,0x7fff7e05c300) malloc: *** error for object 0x102053e08: incorrect checksum for freed object - object was probably modified after being freed.
    *** set a breakpoint in malloc_error_break to debug
    (lldb) bt
    * thread #10: tid = 0x43873, 0x00007fff95639286 libsystem_kernel.dylib`__pthread_kill + 10, stop reason = signal SIGABRT
     * frame #0: 0x00007fff95639286 libsystem_kernel.dylib`__pthread_kill + 10
       frame #1: 0x00007fff9623742f libsystem_pthread.dylib`pthread_kill + 90
       frame #2: 0x00007fff977ceb53 libsystem_c.dylib`abort + 129
       frame #3: 0x00007fff9ab59e06 libsystem_malloc.dylib`szone_error + 625
       frame #4: 0x00007fff9ab4f799 libsystem_malloc.dylib`small_malloc_from_free_list + 1105
       frame #5: 0x00007fff9ab4d3bc libsystem_malloc.dylib`szone_malloc_should_clear + 1449
       frame #6: 0x00007fff9ab4c877 libsystem_malloc.dylib`malloc_zone_malloc + 71
       frame #7: 0x00007fff9ab4b395 libsystem_malloc.dylib`malloc + 42
       frame #8: 0x00007fff94aa63d2 IOSurface`IOSurfaceClientLookupFromMachPort + 40
       frame #9: 0x00007fff94aa6b38 IOSurface`IOSurfaceLookupFromMachPort + 12
       frame #10: 0x00007fff92bfa6b2 CoreGraphics`_CGYDisplayStreamFrameAvailable + 342
       frame #11: 0x00007fff92f6759c CoreGraphics`CGYDisplayStreamNotification_server + 336
       frame #12: 0x00007fff92bfada6 CoreGraphics`display_stream_runloop_callout + 46
       frame #13: 0x00007fff956eba07 CoreFoundation`__CFMachPortPerform + 247
       frame #14: 0x00007fff956eb8f9 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 41
       frame #15: 0x00007fff956eb86b CoreFoundation`__CFRunLoopDoSource1 + 475
       frame #16: 0x00007fff956dd3e7 CoreFoundation`__CFRunLoopRun + 2375
       frame #17: 0x00007fff956dc858 CoreFoundation`CFRunLoopRunSpecific + 296
       frame #18: 0x00007fff95792ef1 CoreFoundation`CFRunLoopRun + 97
       frame #19: 0x0000000105f79ff1 CMIOUnits`___lldb_unnamed_function2148$$CMIOUnits + 875
       frame #20: 0x0000000105f6f2c2 CMIOUnits`___lldb_unnamed_function2127$$CMIOUnits + 14
       frame #21: 0x00007fff97051765 CoreMedia`figThreadMain + 417
       frame #22: 0x00007fff96235268 libsystem_pthread.dylib`_pthread_body + 131
       frame #23: 0x00007fff962351e5 libsystem_pthread.dylib`_pthread_start + 176
       frame #24: 0x00007fff9623341d libsystem_pthread.dylib`thread_start + 13

    I’ve attached the code I used below.

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libavdevice/avdevice.h>
    #include <libavutil/opt.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    /* compile using
    gcc -g -o stream test.c -lavformat -lavutil -lavcodec -lavdevice -lswscale
    */

    // void show_av_device() {

    //    inFmt->get_device_list(inFmtCtx, device_list);
    //    printf("Device Info=============\n");
    //    //avformat_open_input(&inFmtCtx,"video=Capture screen 0",inFmt,&inOptions);
    //    printf("===============================\n");
    // }

    void AVFAIL (int code, const char *what) {
       char msg[500];
       av_strerror(code, msg, sizeof(msg));
       fprintf(stderr, "failed: %s\nerror: %s\n", what, msg);
       exit(2);
    }

    #define AVCHECK(f) do { int e = (f); if (e < 0) AVFAIL(e, #f); } while (0)
    #define AVCHECKPTR(p,f) do { p = (f); if (!p) AVFAIL(AVERROR_UNKNOWN, #f); } while (0)

    void registerLibs() {
       av_register_all();
       avdevice_register_all();
       avformat_network_init();
       avcodec_register_all();
    }

    int main(int argc, char *argv[]) {

       //conversion variables
       struct SwsContext *swsCtx = NULL;
       //input stream variables
       AVFormatContext   *inFmtCtx = NULL;
       AVCodecContext    *inCodecCtx = NULL;
       AVCodec           *inCodec = NULL;
       AVInputFormat     *inFmt = NULL;
       AVFrame           *inFrame = NULL;
       AVDictionary      *inOptions = NULL;
       const char *streamURL = "http://localhost:8090/test.flv";
       const char *name = "avfoundation";

    //    AVFrame           *inFrameYUV = NULL;
       AVPacket          inPacket;


       //output stream variables
       AVCodecContext    *outCodecCtx = NULL;
       AVCodec           *outCodec;
       AVFormatContext   *outFmtCtx = NULL;
       AVOutputFormat    *outFmt = NULL;
       AVFrame           *outFrameYUV = NULL;
       AVStream          *stream = NULL;

       int               i, videostream, ret;
       int               numBytes, frameFinished;

       registerLibs();
       inFmtCtx = avformat_alloc_context(); //alloc input context
       av_dict_set(&inOptions, "pixel_format", "uyvy422", 0);
       av_dict_set(&inOptions, "probesize", "7000000", 0);

       inFmt = av_find_input_format(name);
       ret = avformat_open_input(&inFmtCtx, "Capture screen 0:", inFmt, &inOptions);
       if (ret < 0) {
           printf("Could not load the context for the input device\n");
           return -1;
       }
       if (avformat_find_stream_info(inFmtCtx, NULL) < 0) {
           printf("Could not find stream info for screen\n");
           return -1;
       }
       av_dump_format(inFmtCtx, 0, "Capture screen 0", 0);
       // inFmtCtx->streams is an array of pointers of size inFmtCtx->nb_stream

       videostream = av_find_best_stream(inFmtCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &inCodec, 0);
       if (videostream == -1) {
           printf("no video stream found\n");
           return -1;
       } else {
           printf("%s is inCodec\n", inCodec->long_name);
       }
       inCodecCtx = inFmtCtx->streams[videostream]->codec;
       // open codec
       if (avcodec_open2(inCodecCtx, inCodec, NULL) > 0) {
           printf("Couldn't open codec");
           return -1;  // couldn't open codec
       }


           //setup output params
       outFmt = av_guess_format(NULL, streamURL, NULL);
       if(outFmt == NULL) {
           printf("output format was not guessed properly");
           return -1;
       }

       if((outFmtCtx = avformat_alloc_context()) < 0) {
           printf("output context not allocated. ERROR");
           return -1;
       }

       printf("%s", outFmt->long_name);

       outFmtCtx->oformat = outFmt;

       snprintf(outFmtCtx->filename, sizeof(outFmtCtx->filename), streamURL);
       printf("%s\n", outFmtCtx->filename);

       outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
       if(!outCodec) {
           printf("could not find encoder for H264 \n" );
           return -1;
       }

       stream = avformat_new_stream(outFmtCtx, outCodec);
       outCodecCtx = stream->codec;
       avcodec_get_context_defaults3(outCodecCtx, outCodec);

       outCodecCtx->codec_id = AV_CODEC_ID_H264;
       outCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
       outCodecCtx->flags = CODEC_FLAG_GLOBAL_HEADER;
       outCodecCtx->width = inCodecCtx->width;
       outCodecCtx->height = inCodecCtx->height;
       outCodecCtx->time_base.den = 25;
       outCodecCtx->time_base.num = 1;
       outCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
       outCodecCtx->gop_size = 50;
       outCodecCtx->bit_rate = 400000;

       //setup output encoders etc
       if(stream) {
           ret = avcodec_open2(outCodecCtx, outCodec, NULL);
           if (ret < 0) {
               printf("Could not open output encoder");
               return -1;
           }
       }

       if (avio_open(&outFmtCtx->pb, outFmtCtx->filename, AVIO_FLAG_WRITE ) < 0) {
           perror("url_fopen failed");
       }

       avio_open_dyn_buf(&outFmtCtx->pb);
       ret = avformat_write_header(outFmtCtx, NULL);
       if (ret != 0) {
           printf("was not able to write header to output format");
           return -1;
       }
       unsigned char *pb_buffer;
       int len = avio_close_dyn_buf(outFmtCtx->pb, (unsigned char **)(&pb_buffer));
       avio_write(outFmtCtx->pb, (unsigned char *)pb_buffer, len);


       numBytes = avpicture_get_size(PIX_FMT_UYVY422, inCodecCtx->width, inCodecCtx->height);
       // Allocate video frame
       inFrame = av_frame_alloc();

       swsCtx = sws_getContext(inCodecCtx->width, inCodecCtx->height, inCodecCtx->pix_fmt, inCodecCtx->width,
                               inCodecCtx->height, PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
       int frame_count = 0;
       while(av_read_frame(inFmtCtx, &inPacket) >= 0) {
           if(inPacket.stream_index == videostream) {
               avcodec_decode_video2(inCodecCtx, inFrame, &frameFinished, &inPacket);
               // 1 Frame might need more than 1 packet to be filled
               if(frameFinished) {
                   outFrameYUV = av_frame_alloc();

                   uint8_t *buffer = (uint8_t *)av_malloc(numBytes);

                   int ret = avpicture_fill((AVPicture *)outFrameYUV, buffer, PIX_FMT_YUV420P,
                                            inCodecCtx->width, inCodecCtx->height);
                   if(ret < 0){
                       printf("%d is return val for fill\n", ret);
                       return -1;
                   }
                   //convert image to YUV
                   sws_scale(swsCtx, (uint8_t const * const* )inFrame->data,
                             inFrame->linesize, 0, inCodecCtx->height,
                             outFrameYUV->data, outFrameYUV->linesize);
                   //outFrameYUV now holds the YUV scaled frame/picture
                   outFrameYUV->format = outCodecCtx->pix_fmt;
                   outFrameYUV->width = outCodecCtx->width;
                   outFrameYUV->height = outCodecCtx->height;


                   AVPacket pkt;
                   int got_output;
                   av_init_packet(&pkt);
                   pkt.data = NULL;
                   pkt.size = 0;

                   outFrameYUV->pts = frame_count;

                   ret = avcodec_encode_video2(outCodecCtx, &pkt, outFrameYUV, &got_output);
                   if (ret < 0) {
                       fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
                       return -1;
                   }

                   if(got_output) {
                       if(stream->codec->coded_frame->key_frame) {
                           pkt.flags |= AV_PKT_FLAG_KEY;
                       }
                       pkt.stream_index = stream->index;
                       if(pkt.pts != AV_NOPTS_VALUE)
                           pkt.pts = av_rescale_q(pkt.pts, stream->codec->time_base, stream->time_base);
                       if(pkt.dts != AV_NOPTS_VALUE)
                           pkt.dts = av_rescale_q(pkt.dts, stream->codec->time_base, stream->time_base);
                       if(avio_open_dyn_buf(&outFmtCtx->pb) != 0) {
                           printf("ERROR: Unable to open dynamic buffer\n");
                       }
                       ret = av_interleaved_write_frame(outFmtCtx, &pkt);
                       unsigned char *pb_buffer;
                       int len = avio_close_dyn_buf(outFmtCtx->pb, (unsigned char **)&pb_buffer);
                       avio_write(outFmtCtx->pb, (unsigned char *)pb_buffer, len);

                   } else {
                       ret = 0;
                   }
                   if(ret != 0) {
                       fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
                       exit(1);
                   }

                   fprintf(stderr, "encoded frame #%d\n", frame_count);
                   frame_count++;

                   av_free_packet(&pkt);
                   av_frame_free(&outFrameYUV);
                   av_free(buffer);

               }
           }
           av_free_packet(&inPacket);
       }
       av_write_trailer(outFmtCtx);

       //close video stream
       if(stream) {
           avcodec_close(outCodecCtx);
       }
       for (i = 0; i < outFmtCtx->nb_streams; i++) {
           av_freep(&outFmtCtx->streams[i]->codec);
           av_freep(&outFmtCtx->streams[i]);
       }
       if (!(outFmt->flags & AVFMT_NOFILE))
       /* Close the output file. */
           avio_close(outFmtCtx->pb);
       /* free the output format context */
       avformat_free_context(outFmtCtx);

       // Free the YUV frame populated by the decoder
       av_free(inFrame);

       // Close the video codec (decoder)
       avcodec_close(inCodecCtx);

       // Close the input video file
       avformat_close_input(&inFmtCtx);

       return 1;

    }

    I’m not sure what I’ve done wrong here. But what I’ve observed is that for each frame that’s encoded, my memory usage goes up by about 6 MB. Backtracing afterward usually leads to one of the following two culprits:

    1. the avf_read_frame function in avfoundation.m
    2. the av_dup_packet function in avpacket.h

    Can I also get advice on the way I’m using the avio_open_dyn_buf function to stream over HTTP? I’ve also attached my ffmpeg library versions below:

       ffmpeg version N-70876-g294bb6c Copyright (c) 2000-2015 the FFmpeg developers
     built with Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn)
     configuration: --prefix=/usr/local --enable-gpl --enable-postproc --enable-pthreads --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-libvorbis --disable-mmx --disable-ssse3 --disable-armv5te --disable-armv6 --disable-neon --enable-shared --disable-static --disable-stripping
     libavutil      54. 20.100 / 54. 20.100
     libavcodec     56. 29.100 / 56. 29.100
     libavformat    56. 26.101 / 56. 26.101
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 13.101 /  5. 13.101
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Hyper fast Audio and Video encoder

    The Valgrind analysis is attached here because I exceeded Stack Overflow’s character limit: http://pastebin.com/MPeRhjhN