Advanced search

Media (1)

Keyword: - Tags -/framasoft

Other articles (98)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, MediaSPIP init automatically applies a preconfiguration so that the new feature works out of the box. No manual configuration step is therefore required.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance at user sign-up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • Installation in standalone mode

    4 February 2011, by

    Installing the MediaSPIP distribution involves several steps: retrieving the required files. At that point two methods are possible: installing the ZIP archive containing the whole distribution; or fetching the sources of each module separately via SVN; then the preconfiguration; then the final installation;
    [mediaspip_zip]Installing the MediaSPIP ZIP archive
    This installation method is the simplest way to install the whole distribution (...)

On other sites (8731)

  • Correct command to transmit audio to IP camera using ffmpeg?

    4 November 2016, by the_naive

    So I found some hints in this discussion on the correct command to transmit audio to an Axis IP camera using ffmpeg on Windows, but I still have not managed to transmit audio to the camera successfully.

    The command I’m using is the following:

    ffmpeg -v debug -y -re -f dshow -i "audio=Microphone (2- High Definition Audio Device)" -c:a pcm_mulaw -ac 1 -ar 16000 -b:a 128k -f flv http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi -multiple_requests 1 -reconnect_at_eof 1 -reconnect_streamed 1 -content_type "audio/basic" -report

    The output I get from this command is the following:

    ffmpeg started on 2016-11-04 at 17:32:13
    Report written to "ffmpeg-20161104-173213.log"
    Command line:
    ffmpeg -v debug -y -re -f dshow -i "audio=Microphone (2- High Definition Audio Device)" -c:a pcm_mulaw -ac 1 -ar 16000 -b:a 128k -f flv http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi -content_type audio/basic -multiple_requests 1 -reconnect 1 -reconnect_at_eof 1 -reconnect_streamed 1 -report
    ffmpeg version N-82225-gb4e9252 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 5.4.0 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-libebur128 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
     libavutil      55. 35.100 / 55. 35.100
     libavcodec     57. 66.101 / 57. 66.101
     libavformat    57. 57.100 / 57. 57.100
     libavdevice    57.  2.100 / 57.  2.100
     libavfilter     6. 66.100 /  6. 66.100
     libswscale      4.  3.100 /  4.  3.100
     libswresample   2.  4.100 /  2.  4.100
     libpostproc    54.  2.100 / 54.  2.100
    Splitting the commandline.
    Reading option '-v' ... matched as option 'v' (set logging level) with argument 'debug'.
    Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
    Reading option '-re' ... matched as option 're' (read input at native frame rate) with argument '1'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'dshow'.
    Reading option '-i' ... matched as input file with argument 'audio=Microphone (2- High Definition Audio Device)'.
    Reading option '-c:a' ... matched as option 'c' (codec name) with argument 'pcm_mulaw'.
    Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '1'.
    Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '16000'.
    Reading option '-b:a' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '128k'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'flv'.
    Reading option 'http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi' ... matched as output file.
    Reading option '-content_type' ... matched as AVOption 'content_type' with argument 'audio/basic'.
    Reading option '-multiple_requests' ... matched as AVOption 'multiple_requests' with argument '1'.
    Reading option '-reconnect' ... matched as AVOption 'reconnect' with argument '1'.
    Reading option '-reconnect_at_eof' ... matched as AVOption 'reconnect_at_eof' with argument '1'.
    Reading option '-reconnect_streamed' ... matched as AVOption 'reconnect_streamed' with argument '1'.
    Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
    Trailing options were found on the commandline.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option v (set logging level) with argument debug.
    Applying option y (overwrite output files) with argument 1.
    Applying option report (generate a report) with argument 1.
    Successfully parsed a group of options.
    Parsing a group of options: input file audio=Microphone (2- High Definition Audio Device).
    Applying option re (read input at native frame rate) with argument 1.
    Applying option f (force format) with argument dshow.
    Successfully parsed a group of options.
    Opening an input file: audio=Microphone (2- High Definition Audio Device).
    [dshow @ 00000000000279e0] Selecting pin Capture on audio only
    dshow passing through packet of type audio size    88200 timestamp 310221040000 orig timestamp 310221040000 graph timestamp 310226130000 diff 5090000 Microphone (2- High Definition Audio Device)
    [dshow @ 00000000000279e0] All info found
    Guessed Channel Layout for Input Stream #0.0 : stereo
    Input #0, dshow, from 'audio=Microphone (2- High Definition Audio Device)':
     Duration: N/A, start: 31022.104000, bitrate: 1411 kb/s
       Stream #0:0, 1, 1/10000000: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
    Successfully opened the file.
    Parsing a group of options: output file http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi.
    Applying option c:a (codec name) with argument pcm_mulaw.
    Applying option ac (set number of audio channels) with argument 1.
    Applying option ar (set audio sampling rate (in Hz)) with argument 16000.
    Applying option b:a (video bitrate (please use -b:v)) with argument 128k.
    Applying option f (force format) with argument flv.
    Successfully parsed a group of options.
    Opening an output file: http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi.
    [http @ 0000000001c94040] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
    [http @ 0000000001c94040] request: POST /axis-cgi/audio/transmit.cgi HTTP/1.1

    Transfer-Encoding: chunked

    User-Agent: Lavf/57.57.100

    Accept: */*

    Expect: 100-continue

    Connection: close

    Host: 10.10.210.2

    Icy-MetaData: 1




    [http @ 0000000001c94040] request: POST /axis-cgi/audio/transmit.cgi HTTP/1.1

    Transfer-Encoding: chunked

    User-Agent: Lavf/57.57.100

    Accept: */*

    Connection: close

    Host: 10.10.210.2

    Icy-MetaData: 1

    Authorization: Digest username="operator", realm="AXIS_ACCC8E027F47", nonce="0EcsO3xABQA=ab5efc4740a6c625ecf6a6729d0d67d2b62b615a", uri="/axis-cgi/audio/transmit.cgi", response="4bd3a627b20d6bcaba9e2f595ef6cd2a", algorithm="MD5", qop="auth", cnonce="6a579dd6664b57eb", nc=00000001




    Successfully opened the file.
    detected 8 logical cores
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'time_base' to value '1/44100'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'sample_rate' to value '44100'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'sample_fmt' to value 's16'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'channel_layout' to value '0x3'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] tb:1/44100 samplefmt:s16 samplerate:44100 chlayout:0x3
    [audio format for output stream 0:0 @ 0000000001c9fa20] Setting 'sample_fmts' to value 's16'
    [audio format for output stream 0:0 @ 0000000001c9fa20] Setting 'sample_rates' to value '16000'
    [audio format for output stream 0:0 @ 0000000001c9fa20] Setting 'channel_layouts' to value '0x4'
    [audio format for output stream 0:0 @ 0000000001c9fa20] auto-inserting filter 'auto-inserted resampler 0' between the filter 'Parsed_anull_0' and the filter 'audio format for output stream 0:0'
    [AVFilterGraph @ 000000000002ab20] query_formats: 4 queried, 6 merged, 3 already done, 0 delayed
    [auto-inserted resampler 0 @ 0000000001ca4060] [SWR @ 0000000001ca4a80] Using s16p internally between filters
    [auto-inserted resampler 0 @ 0000000001ca4060] [SWR @ 0000000001ca4a80] Matrix coefficients:
    [auto-inserted resampler 0 @ 0000000001ca4060] [SWR @ 0000000001ca4a80] FC: FL:0.500000 FR:0.500000
    [auto-inserted resampler 0 @ 0000000001ca4060] ch:2 chl:stereo fmt:s16 r:44100Hz -> ch:1 chl:mono fmt:s16 r:16000Hz
    Output #0, flv, to 'http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi':
     Metadata:
       encoder         : Lavf57.57.100
       Stream #0:0, 0, 1/1000: Audio: pcm_mulaw ([8][0][0][0] / 0x0008), 16000 Hz, mono, s16, 128 kb/s
       Metadata:
         encoder         : Lavc57.66.101 pcm_mulaw
    Stream mapping:
     Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_mulaw (native))
    Press [q] to stop, [?] for help
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    av_interleaved_write_frame(): Unknown error
    No more output streams to write to, finishing.
    Error writing trailer of http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi: Error number -10053 occurredsize=       8kB time=00:00:00.49 bitrate= 131.2kbits/s speed=79.6x    
    video:0kB audio:8kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.492485%
    Input file #0 (audio=Microphone (2- High Definition Audio Device)):
     Input stream #0:0 (audio): 1 packets read (88200 bytes); 1 frames decoded (22050 samples);
     Total: 1 packets (88200 bytes) demuxed
    Output file #0 (http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi):
     Output stream #0:0 (audio): 1 frames encoded (7984 samples); 1 packets muxed (7984 bytes);
     Total: 1 packets (7984 bytes) muxed
    1 frames successfully decoded, 0 decoding errors
    [AVIOContext @ 0000000001c9e4c0] Statistics: 0 seeks, 2 writeouts
    dshow passing through packet of type audio size    12152 timestamp 310226130000 orig timestamp 310226130000 graph timestamp 310226820000 diff 690000 Microphone (2- High Definition Audio Device)
    Conversion failed!

    For some reason, despite setting multiple_requests, reconnect_at_eof, and reconnect_streamed all to 1, the connection gets closed.

    Could you please tell me what I’m doing wrong?
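    One detail worth noting in the debug log above: ffmpeg prints "Trailing options were found on the commandline.", which usually means options written after the last output URL are never applied to it. A hedged variant of the same command (untested against the camera, so only a sketch) moves the protocol options before the output so they attach to it:

    ```shell
    ffmpeg -v debug -y -re -f dshow \
      -i "audio=Microphone (2- High Definition Audio Device)" \
      -c:a pcm_mulaw -ac 1 -ar 16000 -b:a 128k -f flv \
      -multiple_requests 1 -reconnect_at_eof 1 -reconnect_streamed 1 \
      -content_type "audio/basic" -report \
      http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi
    ```

    In ffmpeg's command-line model, per-file options apply to the file that follows them, so whether this ordering changes the camera's behavior would need to be verified on the device.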

  • Checking processor capabilities in Android

    29 April 2014, by StackOverflowed

    I’m using FFmpeg in an app, and I’m using the following build configuration:

    --extra-cflags=' -march=armv7-a -mfloat-abi=softfp -mfpu=neon'

    I’m targeting Android 4.0+, so I believe armv7-a should be supported by most non-Intel devices, and I’m fairly sure the NEON extension is supported on most devices as well, but I’m not sure how to verify that for all 2000+ devices.

    Is there a way in Android to check the processor type and extensions, and/or to limit the APK in the Google Play Store to devices with certain processors?
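    On-device, one common (if crude) runtime check is to look for the neon flag in the Features line of /proc/cpuinfo; the NDK's cpufeatures library (android_getCpuFeatures()) does the same thing more robustly. A minimal hedged sketch of the string check, with the cpuinfo line supplied as illustrative test data rather than read from a device:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Return 1 if a /proc/cpuinfo "Features" line lists the neon flag.
     * Matches "neon" as a whole word so a flag like "neonx" would not count. */
    static int has_neon_flag(const char *features) {
        const char *p = features;
        while ((p = strstr(p, "neon")) != NULL) {
            int start_ok = (p == features) || p[-1] == ' ' || p[-1] == '\t' || p[-1] == ':';
            char end = p[4];
            int end_ok = (end == '\0' || end == ' ' || end == '\n');
            if (start_ok && end_ok)
                return 1;
            p += 4;
        }
        return 0;
    }

    int main(void) {
        /* Example line as it appears on many ARMv7 devices (illustrative only). */
        const char *line = "Features\t: swp half thumb fastmult vfp edsp neon vfpv3";
        printf("NEON: %s\n", has_neon_flag(line) ? "yes" : "no"); /* prints "NEON: yes" */
        return 0;
    }
    ```

    For store-side filtering, shipping the native libraries only for the armeabi-v7a ABI lets Google Play restrict installs to devices whose CPU supports that ABI, though ABI filtering alone does not guarantee NEON.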

  • Transcode of H.264 to VP8 using libav* has incorrect frame rate

    17 April 2014, by Kevin Watson

    I’ve so far failed to get the correct output frame rate when transcoding H.264 to VP8 with the libav* libraries. I created a functioning encode of Sintel.2010.720p.mkv as WebM (VP8/Vorbis) using a modification of the transcoding.c example in the FFmpeg source. Unfortunately the resulting file is 48 fps unlike the 24 fps of the original and the output of the ffmpeg command I’m trying to mimic.

    I noticed ffprobe reports a tbc of double the fps for this and other H.264 videos, while the tbc of the resulting VP8 stream produced by the ffmpeg command is the default 1000. The stock transcoding.c example copies the decoder's time base to the encoder AVCodecContext, which is 1/48. Running the ffmpeg command through gdb, it looks like the time base of the AVCodecContext is set to 1/24, but making that change alone only causes the resulting video to be slowed to twice the duration at 24 fps.

    I can create a usable video, but the frame rate doubles. When the output frame rate is the correct 24 fps, the video is smooth but slowed to half speed.
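    The halving/doubling behavior is consistent with plain time-base arithmetic. A small self-contained sketch (mirroring what av_rescale_q computes, not calling libav itself) shows how the same pts lands at different wall-clock times under a 1/48 versus a 1/24 time base:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Rational time base, like AVRational. */
    typedef struct { int num, den; } Rational;

    /* Rescale a timestamp between time bases
     * (simplified av_rescale_q: no rounding modes, no overflow guard). */
    static int64_t rescale_q(int64_t ts, Rational from, Rational to) {
        return ts * from.num * to.den / ((int64_t)from.den * to.num);
    }

    int main(void) {
        Rational tb48 = {1, 48}, tb24 = {1, 24}, ms = {1, 1000};
        /* Frame 10 of a 24 fps stream has pts = 20 in a 1/48 time base
         * (H.264 commonly ticks at twice the frame rate, hence tbc = 2*fps). */
        int64_t pts_frame10 = 20;
        /* Correct 1/48 interpretation: 10/24 s. */
        printf("%lld ms\n", (long long)rescale_q(pts_frame10, tb48, ms)); /* 416 */
        /* Same pts read against a 1/24 base: twice as late -> half speed. */
        printf("%lld ms\n", (long long)rescale_q(pts_frame10, tb24, ms)); /* 833 */
        return 0;
    }
    ```

    So the time base and the pts values have to change together: halving the tick rate without rescaling the timestamps (or vice versa) yields exactly the doubled-fps or half-speed results described above.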

    Here is my modification of the example.

    /*
     * Copyright (c) 2010 Nicolas George
     * Copyright (c) 2011 Stefano Sabatini
     * Copyright (c) 2014 Andrey Utkin
     *
     * Permission is hereby granted, free of charge, to any person obtaining a copy
     * of this software and associated documentation files (the "Software"), to deal
     * in the Software without restriction, including without limitation the rights
     * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
     * copies of the Software, and to permit persons to whom the Software is
     * furnished to do so, subject to the following conditions:
     *
     * The above copyright notice and this permission notice shall be included in
     * all copies or substantial portions of the Software.
     *
     * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
     * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
     * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
     * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
     * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
     * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
     * THE SOFTWARE.
     */

    /**
     * @file
     * API example for demuxing, decoding, filtering, encoding and muxing
     * @example doc/examples/transcoding.c
     */

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavfilter/avfiltergraph.h>
    #include <libavfilter/avcodec.h>
    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>
    #include <libavutil/opt.h>
    #include <libavutil/pixdesc.h>

    #define STATS_LOG "stats.log"

    static AVFormatContext *ifmt_ctx;
    static AVFormatContext *ofmt_ctx;
    typedef struct FilteringContext {
      AVFilterContext *buffersink_ctx;
      AVFilterContext *buffersrc_ctx;
      AVFilterGraph *filter_graph;
    } FilteringContext;
    static FilteringContext *filter_ctx;

    static int open_input_file(const char *filename) {
      int ret;
      unsigned int i;

      ifmt_ctx = NULL;
      if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
    av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
    return ret;
      }

      if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
    av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
    return ret;
      }

      for (i = 0; i < ifmt_ctx->nb_streams; i++) {
    AVStream *stream;
    AVCodecContext *codec_ctx;
    stream = ifmt_ctx->streams[i];
    codec_ctx = stream->codec;
    /* Reencode video & audio and remux subtitles etc. */
    if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
        || codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
      /* Open decoder */
      ret = avcodec_open2(codec_ctx,
                  avcodec_find_decoder(codec_ctx->codec_id), NULL);
      if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
        return ret;
      }
    }
      }

      av_dump_format(ifmt_ctx, 0, filename, 0);
      return 0;
    }

    static int init_output_context(char* filename) {
      int ret;
      ofmt_ctx = NULL;

      avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
      if (!ofmt_ctx) {
    av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
    return AVERROR_UNKNOWN;
      }

      return 0;
    }

    static int init_webm_encoders(int audioBitRate, int crf, int videoMaxBitRate, int threads,
                  char* quality, int speed, int pass, char* stats) {
      AVStream *out_stream;
      AVStream *in_stream;
      AVCodecContext *dec_ctx, *enc_ctx;
      AVCodec *encoder;
      int ret;
      unsigned int i;

      for (i = 0; i < ifmt_ctx->nb_streams; i++) {
    in_stream = ifmt_ctx->streams[i];
    dec_ctx = in_stream->codec;
    if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO || dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {

      AVDictionary *opts = NULL;
      if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
        encoder = avcodec_find_encoder(AV_CODEC_ID_VP8);
        out_stream = avformat_new_stream(ofmt_ctx, encoder);
        if (!out_stream) {
          av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
          return AVERROR_UNKNOWN;
        }

        enc_ctx = out_stream->codec;
        enc_ctx->height = dec_ctx->height;
        enc_ctx->width = dec_ctx->width;
        enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
        /* take first format from list of supported formats */
        enc_ctx->pix_fmt = encoder->pix_fmts[0];
        /* video time_base can be set to whatever is handy and supported by encoder */
        enc_ctx->time_base = dec_ctx->time_base;
        /* enc_ctx->time_base.num = 1; */
        /* enc_ctx->time_base.den = 24; */

        enc_ctx->bit_rate = videoMaxBitRate;
        enc_ctx->thread_count = threads;
        switch (pass) {
        case 1:
          enc_ctx->flags |= CODEC_FLAG_PASS1;
          break;
        case 2:
          enc_ctx->flags |= CODEC_FLAG_PASS2;
          if (stats) {
        enc_ctx->stats_in = stats;
          }
          break;
        }

        char crfString[3];
        snprintf(crfString, 3, "%d", crf);
        av_dict_set(&opts, "crf", crfString, 0);
        av_dict_set(&opts, "quality", quality, 0);
        char speedString[3];
        snprintf(speedString, 3, "%d", speed);
        av_dict_set(&opts, "speed", speedString, 0);
      } else {
        encoder = avcodec_find_encoder(AV_CODEC_ID_VORBIS);
        out_stream = avformat_new_stream(ofmt_ctx, encoder);
        if (!out_stream) {
          av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
          return AVERROR_UNKNOWN;
        }

        /* in_stream = ifmt_ctx->streams[i]; */
        /* dec_ctx = in_stream->codec; */
        enc_ctx = out_stream->codec;
        /* encoder = out_stream->codec->codec; */

        enc_ctx->sample_rate = dec_ctx->sample_rate;
        enc_ctx->channel_layout = dec_ctx->channel_layout;
        enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
        /* take first format from list of supported formats */
        enc_ctx->sample_fmt = encoder->sample_fmts[0];
        enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};
        enc_ctx->bit_rate = audioBitRate;
      }

      /* Open codec with the set options */
      ret = avcodec_open2(enc_ctx, encoder, &opts);
      if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
        return ret;
      }
      int unused = av_dict_count(opts);
      if (unused > 0) {
        av_log(NULL, AV_LOG_WARNING, "%d unused options\n", unused);
      }
      /* } else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) { */
    } else {
      av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
      return AVERROR_INVALIDDATA;
    } /* else { */
      /*   /\* if this stream must be remuxed *\/ */
      /*   ret = avcodec_copy_context(ofmt_ctx->streams[i]->codec, */
      /*                ifmt_ctx->streams[i]->codec); */
      /*   if (ret < 0) { */
      /*   av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n"); */
      /*   return ret; */
      /*   } */
      /* } */

    if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
      enc_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
      }

      return 0;
    }

    static int open_output_file(const char *filename) {
      int ret;

      av_dump_format(ofmt_ctx, 0, filename, 1);

      if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
    ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
      return ret;
    }
      }

      /* init muxer, write output file header */
      ret = avformat_write_header(ofmt_ctx, NULL);
      if (ret < 0) {
    av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
    return ret;
      }

      return 0;
    }

    static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
               AVCodecContext *enc_ctx, const char *filter_spec) {
      char args[512];
      int ret = 0;
      AVFilter *buffersrc = NULL;
      AVFilter *buffersink = NULL;
      AVFilterContext *buffersrc_ctx = NULL;
      AVFilterContext *buffersink_ctx = NULL;
      AVFilterInOut *outputs = avfilter_inout_alloc();
      AVFilterInOut *inputs  = avfilter_inout_alloc();
      AVFilterGraph *filter_graph = avfilter_graph_alloc();

      if (!outputs || !inputs || !filter_graph) {
    ret = AVERROR(ENOMEM);
    goto end;
      }

      if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
    buffersrc = avfilter_get_by_name("buffer");
    buffersink = avfilter_get_by_name("buffersink");
    if (!buffersrc || !buffersink) {
      av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
      ret = AVERROR_UNKNOWN;
      goto end;
    }

    snprintf(args, sizeof(args),
         "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
         dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
         dec_ctx->time_base.num, dec_ctx->time_base.den,
         dec_ctx->sample_aspect_ratio.num,
         dec_ctx->sample_aspect_ratio.den);

    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                       args, NULL, filter_graph);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
      goto end;
    }

    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                       NULL, NULL, filter_graph);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
      goto end;
    }

    ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
                 (uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
                 AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
      goto end;
    }
      } else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
    buffersrc = avfilter_get_by_name("abuffer");
    buffersink = avfilter_get_by_name("abuffersink");
    if (!buffersrc || !buffersink) {
      av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
      ret = AVERROR_UNKNOWN;
      goto end;
    }

    if (!dec_ctx->channel_layout)
      dec_ctx->channel_layout =
        av_get_default_channel_layout(dec_ctx->channels);
    snprintf(args, sizeof(args),
         "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
         dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
         av_get_sample_fmt_name(dec_ctx->sample_fmt),
         dec_ctx->channel_layout);
    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                       args, NULL, filter_graph);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
      goto end;
    }

    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                       NULL, NULL, filter_graph);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
      goto end;
    }

    ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
                 (uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
                 AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
      goto end;
    }

    ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
                 (uint8_t*)&enc_ctx->channel_layout,
                 sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
      goto end;
    }

    ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
                 (uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
                 AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
      goto end;
    }
      } else {
    ret = AVERROR_UNKNOWN;
    goto end;
      }

      /* Endpoints for the filter graph. */
      outputs->name       = av_strdup("in");
      outputs->filter_ctx = buffersrc_ctx;
      outputs->pad_idx    = 0;
      outputs->next       = NULL;

      inputs->name       = av_strdup("out");
      inputs->filter_ctx = buffersink_ctx;
      inputs->pad_idx    = 0;
      inputs->next       = NULL;

      if (!outputs->name || !inputs->name) {
    ret = AVERROR(ENOMEM);
    goto end;
      }

      if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
                      &inputs, &outputs, NULL)) < 0)
    goto end;

      if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
    goto end;

      /* Fill FilteringContext */
      fctx->buffersrc_ctx = buffersrc_ctx;
      fctx->buffersink_ctx = buffersink_ctx;
      fctx->filter_graph = filter_graph;

     end:
      avfilter_inout_free(&inputs);
      avfilter_inout_free(&outputs);

      return ret;
    }

    static int init_filters(enum AVCodecID audioCodec) {
      const char *filter_spec;
      unsigned int i;
      int ret;
      filter_ctx = av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
      if (!filter_ctx)
    return AVERROR(ENOMEM);

      for (i = 0; i < ifmt_ctx->nb_streams; i++) {
    filter_ctx[i].buffersrc_ctx  = NULL;
    filter_ctx[i].buffersink_ctx = NULL;
    filter_ctx[i].filter_graph   = NULL;
    /* Skip streams that are neither audio nor video */
    if (!(ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
          || ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
      continue;


    if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
      filter_spec = "null"; /* passthrough (dummy) filter for video */
    else
      /* TODO: make this more general */
      if (audioCodec == AV_CODEC_ID_VORBIS) {
        filter_spec = "asetnsamples=n=64";
      } else {
        /* filter_spec = "null"; /\* passthrough (dummy) filter for audio *\/ */
        filter_spec = "fps=24";
        /* filter_spec = "settb=expr=1/24"; */
      }
    ret = init_filter(&filter_ctx[i], ifmt_ctx->streams[i]->codec,
              ofmt_ctx->streams[i]->codec, filter_spec);
    if (ret)
      return ret;
      }
      return 0;
    }

    static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
      int ret;
      int got_frame_local;
      AVPacket enc_pkt;
      int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
    (ifmt_ctx->streams[stream_index]->codec->codec_type ==
     AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;

      if (!got_frame)
    got_frame = &got_frame_local;

      /* av_log(NULL, AV_LOG_INFO, "Encoding frame\n"); */
      /* encode filtered frame */
      enc_pkt.data = NULL;
      enc_pkt.size = 0;
      av_init_packet(&enc_pkt);
      ret = enc_func(ofmt_ctx->streams[stream_index]->codec, &enc_pkt,
             filt_frame, got_frame);
      av_frame_free(&filt_frame);
      if (ret < 0)
    return ret;
      if (!(*got_frame))
    return 0;

      /* prepare packet for muxing */
      enc_pkt.stream_index = stream_index;
      enc_pkt.dts = av_rescale_q_rnd(enc_pkt.dts,
                     ofmt_ctx->streams[stream_index]->codec->time_base,
                     ofmt_ctx->streams[stream_index]->time_base,
                     AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
      enc_pkt.pts = av_rescale_q_rnd(enc_pkt.pts,
                     ofmt_ctx->streams[stream_index]->codec->time_base,
                     ofmt_ctx->streams[stream_index]->time_base,
                     AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
      enc_pkt.duration = av_rescale_q(enc_pkt.duration,
                      ofmt_ctx->streams[stream_index]->codec->time_base,
                      ofmt_ctx->streams[stream_index]->time_base);

      /* av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n"); */
      /* mux encoded frame */
      ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
      return ret;
    }

    static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index) {
      int ret;
      AVFrame *filt_frame;

      /* av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n"); */
      /* push the decoded frame into the filtergraph */
      ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,
                     frame, 0);
      if (ret < 0) {
    av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
    return ret;
      }

      /* pull filtered frames from the filtergraph */
      while (1) {
    filt_frame = av_frame_alloc();
    if (!filt_frame) {
      ret = AVERROR(ENOMEM);
      break;
    }
    /* av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n"); */
    ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,
                      filt_frame);
    if (ret < 0) {
      /* if no more frames for output - returns AVERROR(EAGAIN)
       * if flushed and no more frames for output - returns AVERROR_EOF
       * rewrite retcode to 0 to show it as normal procedure completion
       */
      if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        ret = 0;
      av_frame_free(&filt_frame);
      break;
    }

    filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
    ret = encode_write_frame(filt_frame, stream_index, NULL);
    if (ret &lt; 0)
      break;
      }

      return ret;
    }

    static int flush_encoder(unsigned int stream_index) {
      int ret;
      int got_frame;

      if (!(ofmt_ctx->streams[stream_index]->codec->codec->capabilities &amp;
        CODEC_CAP_DELAY))
    return 0;

      while (1) {
    av_log(NULL, AV_LOG_INFO, "Flushing stream #%u encoder\n", stream_index);
    ret = encode_write_frame(NULL, stream_index, &amp;got_frame);
    if (ret &lt; 0)
      break;
    if (!got_frame)
      return 0;
      }
      return ret;
    }

    static int transcode() {
      int ret;
      AVPacket packet = { .data = NULL, .size = 0 };
      AVFrame *frame = NULL;
      enum AVMediaType type;
      unsigned int stream_index;
      unsigned int i;
      int got_frame;
      int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);

      /* read all packets */
      while (1) {
    if ((ret = av_read_frame(ifmt_ctx, &amp;packet)) &lt; 0)
      break;
    stream_index = packet.stream_index;
    type = ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
    av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
       stream_index);

    if (filter_ctx[stream_index].filter_graph) {
      av_log(NULL, AV_LOG_DEBUG, "Going to reencode&amp;filter the frame\n");
      frame = av_frame_alloc();
      if (!frame) {
        ret = AVERROR(ENOMEM);
        break;
      }
      packet.dts = av_rescale_q_rnd(packet.dts,
                    ifmt_ctx->streams[stream_index]->time_base,
                    ifmt_ctx->streams[stream_index]->codec->time_base,
                    AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
      packet.pts = av_rescale_q_rnd(packet.pts,
                    ifmt_ctx->streams[stream_index]->time_base,
                    ifmt_ctx->streams[stream_index]->codec->time_base,
                    AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
      dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
        avcodec_decode_audio4;
      ret = dec_func(ifmt_ctx->streams[stream_index]->codec, frame,
             &amp;got_frame, &amp;packet);
      if (ret &lt; 0) {
        av_frame_free(&amp;frame);
        av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
        break;
      }

      if (got_frame) {
        frame->pts = av_frame_get_best_effort_timestamp(frame);
        ret = filter_encode_write_frame(frame, stream_index);
        av_frame_free(&amp;frame);
        if (ret &lt; 0)
          goto end;
      } else {
        av_frame_free(&amp;frame);
      }
    } else {
      /* remux this frame without reencoding */
      packet.dts = av_rescale_q_rnd(packet.dts,
                    ifmt_ctx->streams[stream_index]->time_base,
                    ofmt_ctx->streams[stream_index]->time_base,
                    AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
      packet.pts = av_rescale_q_rnd(packet.pts,
                    ifmt_ctx->streams[stream_index]->time_base,
                    ofmt_ctx->streams[stream_index]->time_base,
                    AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

      ret = av_interleaved_write_frame(ofmt_ctx, &amp;packet);
      if (ret &lt; 0)
        goto end;
    }
    av_free_packet(&amp;packet);
      }

      /* flush filters and encoders */
      for (i = 0; i &lt; ifmt_ctx->nb_streams; i++) {
    /* flush filter */
    if (!filter_ctx[i].filter_graph)
      continue;
    ret = filter_encode_write_frame(NULL, i);
    if (ret &lt; 0) {
      av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
      goto end;
    }

    /* flush encoder */
    ret = flush_encoder(i);
    if (ret &lt; 0) {
      av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
      goto end;
    }
      }

      av_write_trailer(ofmt_ctx);

      // Retrieve and store the first instance of codec statistics
      // TODO: less naive, deal with multiple instances of statistics
      for (i = 0; i &lt; ofmt_ctx->nb_streams; i++) {
    AVCodecContext* codec = ofmt_ctx->streams[i]->codec;
    if ((codec->flags &amp; CODEC_FLAG_PASS1) &amp;&amp; (codec->stats_out)){
      FILE* logfile = fopen(STATS_LOG, "wb");
      fprintf(logfile, "%s", codec->stats_out);
      if (fclose(logfile) &lt; 0) {
        av_log(NULL, AV_LOG_ERROR, "Error closing log file.\n");
      }
      break;
    }
      }

      av_log(NULL, AV_LOG_INFO, "output duration = %" PRId64 "\n", ofmt_ctx->duration);

     end:
      av_free_packet(&amp;packet);
      av_frame_free(&amp;frame);
      for (i = 0; i &lt; ifmt_ctx->nb_streams; i++) {
    avcodec_close(ifmt_ctx->streams[i]->codec);
    if (ofmt_ctx &amp;&amp; ofmt_ctx->nb_streams > i &amp;&amp; ofmt_ctx->streams[i] &amp;&amp; ofmt_ctx->streams[i]->codec)
      avcodec_close(ofmt_ctx->streams[i]->codec);
    if (filter_ctx &amp;&amp; filter_ctx[i].filter_graph)
      avfilter_graph_free(&amp;filter_ctx[i].filter_graph);
      }
      av_free(filter_ctx);
      avformat_close_input(&amp;ifmt_ctx);
      if (ofmt_ctx &amp;&amp; !(ofmt_ctx->oformat->flags &amp; AVFMT_NOFILE))
    avio_close(ofmt_ctx->pb);
      avformat_free_context(ofmt_ctx);

      if (ret &lt; 0)
    av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));

      return ret ? 1 : 0;
    }

    int TranscodeToWebM(char* inputPath, char* outputPath, int audioBitRate, int crf, int videoMaxBitRate, int threads,
            char* quality, int speed) {
      int ret;
      unsigned int pass;
      char* stats = NULL;

      av_register_all();
      avfilter_register_all();

      for (pass = 1; pass &lt;= 2; pass++) {
    if ((ret = open_input_file(inputPath)) &lt; 0)
      goto end;

    if ((ret = init_output_context(outputPath)) &lt; 0)
      goto end;

    if (pass == 2) {
      size_t stats_length;
      if (cmdutils_read_file(STATS_LOG, &amp;stats, &amp;stats_length) &lt; 0) {
        av_log(NULL, AV_LOG_ERROR, "Error reading stats file.\n");
        break;
      }
    }

    if ((ret = init_webm_encoders(audioBitRate, crf, videoMaxBitRate, threads, quality, speed, pass, stats)) &lt; 0)
      goto end;

    if ((ret = open_output_file(outputPath)) &lt; 0)
      goto end;

    if ((ret = init_filters(AV_CODEC_ID_VORBIS)) &lt; 0)
      goto end;

    if ((ret = transcode()) &lt; 0)
      goto end;
      }

      if (remove(STATS_LOG) != 0) {
    av_log(NULL, AV_LOG_ERROR, "Failed to remove %s\n", STATS_LOG);
      }

     end:
      if (ret &lt; 0) {
    av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));
    return ret;
      }

      return 0;
    }

    Here is the output from the ffmpeg command line I am trying to mimic:

    ffmpeg version N-62301-g59a5384 Copyright (c) 2000-2014 the FFmpeg developers
     built on Apr  9 2014 09:58:44 with gcc 4.8.2 (GCC) 20140206 (prerelease)
     configuration: --prefix=/opt/ffmpeg --extra-cflags=-I/opt/x264/include --extra-ldflags=-L/opt/x264/lib --extra-libs=-ldl --enable-gpl --enable-nonfree --enable-libfdk-aac --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264
     libavutil      52. 75.100 / 52. 75.100
     libavcodec     55. 58.103 / 55. 58.103
     libavformat    55. 36.102 / 55. 36.102
     libavdevice    55. 11.100 / 55. 11.100
     libavfilter     4.  3.100 /  4.  3.100
     libswscale      2.  6.100 /  2.  6.100
     libswresample   0. 18.100 /  0. 18.100
     libpostproc    52.  3.100 / 52.  3.100
    Input #0, matroska,webm, from &#39;/mnt/scratch/test_source/Sintel.2010.720p.mkv&#39;:
     Metadata:
    encoder         : libebml v1.0.0 + libmatroska v1.0.0
    creation_time   : 2011-04-24 17:20:33
     Duration: 00:14:48.03, start: 0.000000, bitrate: 6071 kb/s
    Chapter #0.0: start 0.000000, end 103.125000
    Metadata:
     title           : Chapter 01
    Chapter #0.1: start 103.125000, end 148.667000
    Metadata:
     title           : Chapter 02
    Chapter #0.2: start 148.667000, end 349.792000
    Metadata:
     title           : Chapter 03
    Chapter #0.3: start 349.792000, end 437.208000
    Metadata:
     title           : Chapter 04
    Chapter #0.4: start 437.208000, end 472.075000
    Metadata:
     title           : Chapter 05
    Chapter #0.5: start 472.075000, end 678.833000
    Metadata:
     title           : Chapter 06
    Chapter #0.6: start 678.833000, end 744.083000
    Metadata:
     title           : Chapter 07
    Chapter #0.7: start 744.083000, end 888.032000
    Metadata:
     title           : Chapter 08
    Stream #0:0(eng): Video: h264 (High), yuv420p(tv, bt709), 1280x544, SAR 1:1 DAR 40:17, 24 fps, 24 tbr, 1k tbn, 48 tbc
    Stream #0:1(eng): Audio: ac3, 48000 Hz, 5.1(side), fltp, 640 kb/s
    Metadata:
     title           : AC3 5.1 @ 640 Kbps
    Stream #0:2(ger): Subtitle: subrip
    Stream #0:3(eng): Subtitle: subrip
    Stream #0:4(spa): Subtitle: subrip
    Stream #0:5(fre): Subtitle: subrip
    Stream #0:6(ita): Subtitle: subrip
    Stream #0:7(dut): Subtitle: subrip
    Stream #0:8(pol): Subtitle: subrip
    Stream #0:9(por): Subtitle: subrip
    Stream #0:10(rus): Subtitle: subrip
    Stream #0:11(vie): Subtitle: subrip
    [libvpx @ 0x24b74c0] v1.3.0
    Output #0, webm, to &#39;/mnt/scratch/test_out/Sintel.2010.720p.script.webm&#39;:
     Metadata:
    encoder         : Lavf55.36.102
    Chapter #0.0: start 0.000000, end 103.125000
    Metadata:
     title           : Chapter 01
    Chapter #0.1: start 103.125000, end 148.667000
    Metadata:
     title           : Chapter 02
    Chapter #0.2: start 148.667000, end 349.792000
    Metadata:
     title           : Chapter 03
    Chapter #0.3: start 349.792000, end 437.208000
    Metadata:
     title           : Chapter 04
    Chapter #0.4: start 437.208000, end 472.075000
    Metadata:
     title           : Chapter 05
    Chapter #0.5: start 472.075000, end 678.833000
    Metadata:
     title           : Chapter 06
    Chapter #0.6: start 678.833000, end 744.083000
    Metadata:
     title           : Chapter 07
    Chapter #0.7: start 744.083000, end 888.032000
    Metadata:
     title           : Chapter 08
    Stream #0:0(eng): Video: vp8 (libvpx), yuv420p, 1280x544 [SAR 1:1 DAR 40:17], q=-1--1, pass 2, 60000 kb/s, 1k tbn, 24 tbc
    Stream #0:1(eng): Audio: vorbis (libvorbis), 48000 Hz, 5.1(side), fltp, 384 kb/s
    Metadata:
     title           : AC3 5.1 @ 640 Kbps
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 -> libvpx)
     Stream #0:1 -> #0:1 (ac3 -> libvorbis)
    Press [q] to stop, [?] for help
    frame=21312 fps= 11 q=0.0 Lsize=  567191kB time=00:14:48.01 bitrate=5232.4kbits/s    
    video:537377kB audio:29266kB subtitle:0kB other streams:0kB global headers:7kB muxing overhead: 0.096885%
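
For context, the console output above is consistent with a two-pass libvpx/libvorbis invocation along the following lines. This is a hedged reconstruction from the log, not the exact command: only the codecs, the 60000 kb/s video bitrate, and the 384 kb/s audio bitrate are visible in the output, so the CRF, speed, and thread values are assumptions.

```shell
# First pass: video analysis only, audio disabled, log written for pass 2.
# (-crf/-threads values are assumptions; adjust to taste.)
ffmpeg -i /mnt/scratch/test_source/Sintel.2010.720p.mkv \
       -c:v libvpx -b:v 60000k -crf 10 -threads 4 \
       -pass 1 -an -f webm -y /dev/null

# Second pass: real encode, reusing the stats from pass 1,
# with 5.1 Vorbis audio at 384 kb/s as shown in the log.
ffmpeg -i /mnt/scratch/test_source/Sintel.2010.720p.mkv \
       -c:v libvpx -b:v 60000k -crf 10 -threads 4 -pass 2 \
       -c:a libvorbis -b:a 384k \
       /mnt/scratch/test_out/Sintel.2010.720p.script.webm
```

The C code above mirrors this structure: `TranscodeToWebM` runs the whole pipeline twice, writing `codec->stats_out` to `STATS_LOG` after pass 1 and feeding it back via `cmdutils_read_file` before pass 2, which is exactly what `-pass 1`/`-pass 2` do on the command line.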