Advanced search

Media (0)

Word: - Tags - /xmlrpc

No media matching your criteria is available on the site.

Other articles (39)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file available here contains only the MediaSPIP sources, in the standalone version.
    As with the previous version, all the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Making files available

    14 April 2011

    By default, when it is first set up, MediaSPIP does not allow visitors to download files, whether they are originals or the result of transformation or encoding; it only allows them to be viewed.
    However, it is both possible and easy to give visitors access to these documents, in various forms.
    All of this is handled in the skeleton's configuration page: go to the channel's administration area and choose, in the navigation (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file available here contains only the MediaSPIP sources, in the standalone version.
    To get a working installation, all the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

On other sites (6355)

  • javax.media.NoDataSinkException

    23 November 2022, by Divya

    I am trying to convert JPEG images into a .mov video file.

    



    package com.ecomm.pl4mms.test;

    import java.io.*;
    import java.util.*;
    import java.awt.Dimension;

    import javax.media.*;
    import javax.media.control.*;
    import javax.media.protocol.*;
    import javax.media.protocol.DataSource;
    import javax.media.datasink.*;
    import javax.media.format.VideoFormat;
    import javax.media.format.JPEGFormat;

    public class JpegImagesToMovie implements ControllerListener, DataSinkListener {

        public boolean doItPath(int width, int height, int frameRate, Vector inFiles, String outputURL) {
            // Check for output file extension.
            if (!outputURL.endsWith(".mov") && !outputURL.endsWith(".MOV")) {
                // System.err.println("The output file extension should end with a
                // .mov extension");
                prUsage();
            }

            // Generate the output media locators.
            MediaLocator oml;

            if ((oml = createMediaLocator("file:" + outputURL)) == null) {
                // System.err.println("Cannot build media locator from: " +
                // outputURL);
                //System.exit(0);
            }

            boolean success = doIt(width, height, frameRate, inFiles, oml);

            System.gc();
            return success;
        }

        public boolean doIt(int width, int height, int frameRate, Vector inFiles, MediaLocator outML) {
            try {
                System.out.println(inFiles.size());
                ImageDataSource ids = new ImageDataSource(width, height, frameRate, inFiles);

                Processor p;

                try {
                    // System.err.println("- create processor for the image
                    // datasource ...");
                    System.out.println("processor");
                    p = Manager.createProcessor(ids);
                    System.out.println("success");
                } catch (Exception e) {
                    // System.err.println("Yikes! Cannot create a processor from the
                    // data source.");
                    return false;
                }

                p.addControllerListener(this);

                // Put the Processor into configured state so we can set
                // some processing options on the processor.
                p.configure();
                if (!waitForState(p, p.Configured)) {
                    System.out.println("Issue configuring");
                    // System.err.println("Failed to configure the processor.");
                    p.close();
                    p.deallocate();
                    return false;
                }
                System.out.println("Configured");

                // Set the output content descriptor to QuickTime.
                p.setContentDescriptor(new ContentDescriptor(FileTypeDescriptor.QUICKTIME));
                System.out.println(outML);
                // Query for the processor for supported formats.
                // Then set it on the processor.
                TrackControl tcs[] = p.getTrackControls();
                Format f[] = tcs[0].getSupportedFormats();
                System.out.println(f[0].getEncoding());
                if (f == null || f.length <= 0) {
                    System.err.println("The mux does not support the input format: " + tcs[0].getFormat());
                    p.close();
                    p.deallocate();
                    return false;
                }

                tcs[0].setFormat(f[0]);

                // System.err.println("Setting the track format to: " + f[0]);

                // We are done with programming the processor. Let's just
                // realize it.
                p.realize();
                if (!waitForState(p, p.Realized)) {
                    // System.err.println("Failed to realize the processor.");
                    p.close();
                    p.deallocate();
                    return false;
                }

                // Now, we'll need to create a DataSink.
                DataSink dsink;
                if ((dsink = createDataSink(p, outML)) == null) {
                    // System.err.println("Failed to create a DataSink for the given
                    // output MediaLocator: " + outML);
                    p.close();
                    p.deallocate();
                    return false;
                }

                dsink.addDataSinkListener(this);
                fileDone = false;

                // System.err.println("start processing...");

                // OK, we can now start the actual transcoding.
                try {
                    p.start();
                    dsink.start();
                } catch (IOException e) {
                    p.close();
                    p.deallocate();
                    dsink.close();
                    // System.err.println("IO error during processing");
                    return false;
                }

                // Wait for EndOfStream event.
                waitForFileDone();

                // Cleanup.
                try {
                    dsink.close();
                } catch (Exception e) {
                }
                p.removeControllerListener(this);

                // System.err.println("...done processing.");

                p.close();

                return true;
            } catch (NotConfiguredError e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }

            return false;
        }

        /**
         * Create the DataSink.
         */
        DataSink createDataSink(Processor p, MediaLocator outML) {
            System.out.println("In data sink");
            DataSource ds;

            if ((ds = p.getDataOutput()) == null) {
                System.out.println("Something is really wrong: the processor does not have an output DataSource");
                return null;
            }

            DataSink dsink;

            try {
                System.out.println("- create DataSink for: " + ds.toString() + ds.getContentType());
                dsink = Manager.createDataSink(ds, outML);
                dsink.open();
                System.out.println("Done data sink");
            } catch (Exception e) {
                System.err.println("Cannot create the DataSink: " + e);
                e.printStackTrace();
                return null;
            }

            return dsink;
        }

        Object waitSync = new Object();
        boolean stateTransitionOK = true;

        /**
         * Block until the processor has transitioned to the given state. Return
         * false if the transition failed.
         */
        boolean waitForState(Processor p, int state) {
            synchronized (waitSync) {
                try {
                    while (p.getState() < state && stateTransitionOK)
                        waitSync.wait();
                } catch (Exception e) {
                }
            }
            return stateTransitionOK;
        }

        /**
         * Controller Listener.
         */
        public void controllerUpdate(ControllerEvent evt) {

            if (evt instanceof ConfigureCompleteEvent || evt instanceof RealizeCompleteEvent
                    || evt instanceof PrefetchCompleteEvent) {
                synchronized (waitSync) {
                    stateTransitionOK = true;
                    waitSync.notifyAll();
                }
            } else if (evt instanceof ResourceUnavailableEvent) {
                synchronized (waitSync) {
                    stateTransitionOK = false;
                    waitSync.notifyAll();
                }
            } else if (evt instanceof EndOfMediaEvent) {
                evt.getSourceController().stop();
                evt.getSourceController().close();
            }
        }

        Object waitFileSync = new Object();
        boolean fileDone = false;
        boolean fileSuccess = true;

        /**
         * Block until file writing is done.
         */
        boolean waitForFileDone() {
            synchronized (waitFileSync) {
                try {
                    while (!fileDone)
                        waitFileSync.wait();
                } catch (Exception e) {
                }
            }
            return fileSuccess;
        }

        /**
         * Event handler for the file writer.
         */
        public void dataSinkUpdate(DataSinkEvent evt) {

            if (evt instanceof EndOfStreamEvent) {
                synchronized (waitFileSync) {
                    fileDone = true;
                    waitFileSync.notifyAll();
                }
            } else if (evt instanceof DataSinkErrorEvent) {
                synchronized (waitFileSync) {
                    fileDone = true;
                    fileSuccess = false;
                    waitFileSync.notifyAll();
                }
            }
        }

        public static void main(String arg[]) {
            try {
                String args[] = { "-w 100 -h 100 -f 100 -o F:\\test.mov F:\\Text69.jpg F:\\Textnew.jpg" };
                if (args.length == 0)
                    prUsage();

                // Parse the arguments.
                int i = 0;
                int width = -1, height = -1, frameRate = 1;
                Vector inputFiles = new Vector();
                String outputURL = null;

                while (i < args.length) {

                    if (args[i].equals("-w")) {
                        i++;
                        if (i >= args.length)
                            width = new Integer(args[i]).intValue();
                    } else if (args[i].equals("-h")) {
                        i++;
                        if (i >= args.length)
                            height = new Integer(args[i]).intValue();
                    } else if (args[i].equals("-f")) {
                        i++;
                        if (i >= args.length)
                            frameRate = new Integer(args[i]).intValue();
                    } else if (args[i].equals("-o")) {
                        System.out.println("in ou");
                        i++;
                        System.out.println(i);
                        if (i >= args.length)
                            outputURL = args[i];
                        System.out.println(outputURL);
                    } else {
                        System.out.println("adding" + args[i]);
                        inputFiles.addElement(args[i]);
                    }
                    i++;

                }
                inputFiles.addElement("F:\\Textnew.jpg");
                outputURL = "F:\\test.mov";
                System.out.println(inputFiles.size() + outputURL);
                if (outputURL == null || inputFiles.size() == 0)
                    prUsage();

                // Check for output file extension.
                if (!outputURL.endsWith(".mov") && !outputURL.endsWith(".MOV")) {
                    System.err.println("The output file extension should end with a .mov extension");
                    prUsage();
                }
                width = 100;
                height = 100;
                if (width < 0 || height < 0) {
                    System.err.println("Please specify the correct image size.");
                    prUsage();
                }

                // Check the frame rate.
                if (frameRate < 1)
                    frameRate = 1;

                // Generate the output media locators.
                MediaLocator oml;
                oml = createMediaLocator(outputURL);
                System.out.println("Media" + oml);
                if (oml == null) {
                    System.err.println("Cannot build media locator from: " + outputURL);
                    // //System.exit(0);
                }
                System.out.println("Before change");
                System.out.println(inputFiles.size());
                JpegImagesToMovie imageToMovie = new JpegImagesToMovie();
                boolean status = imageToMovie.doIt(width, height, frameRate, inputFiles, oml);
                System.out.println("Status" + status);
                //System.exit(0);
            } catch (Exception e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }

        static void prUsage() {
            System.err.println(
                    "Usage: java JpegImagesToMovie -w <width> -h <height> -f <frame rate> -o <output URL> <input JPEG file 1> <input JPEG file 2> ...");
            //System.exit(-1);
        }

        /**
         * Create a media locator from the given string.
         */
        static MediaLocator createMediaLocator(String url) {
            System.out.println(url);
            MediaLocator ml;

            if (url.indexOf(":") > 0 && (ml = new MediaLocator(url)) != null)
                return ml;

            if (url.startsWith(File.separator)) {
                if ((ml = new MediaLocator("file:" + url)) != null)
                    return ml;
            } else {
                String file = "file:" + System.getProperty("user.dir") + File.separator + url;
                if ((ml = new MediaLocator(file)) != null)
                    return ml;
            }

            return null;
        }

        ///////////////////////////////////////////////
        //
        // Inner classes.
        ///////////////////////////////////////////////

        /**
         * A DataSource to read from a list of JPEG image files and turn that into a
         * stream of JMF buffers. The DataSource is not seekable or positionable.
         */
        class ImageDataSource extends PullBufferDataSource {

            ImageSourceStream streams[];

            ImageDataSource(int width, int height, int frameRate, Vector images) {
                streams = new ImageSourceStream[1];
                streams[0] = new ImageSourceStream(width, height, frameRate, images);
            }

            public void setLocator(MediaLocator source) {
            }

            public MediaLocator getLocator() {
                return null;
            }

            /**
             * Content type is of RAW since we are sending buffers of video frames
             * without a container format.
             */
            public String getContentType() {
                return ContentDescriptor.RAW;
            }

            public void connect() {
            }

            public void disconnect() {
            }

            public void start() {
            }

            public void stop() {
            }

            /**
             * Return the ImageSourceStreams.
             */
            public PullBufferStream[] getStreams() {
                return streams;
            }

            /**
             * We could have derived the duration from the number of frames and
             * frame rate. But for the purpose of this program, it's not necessary.
             */
            public Time getDuration() {
                return DURATION_UNKNOWN;
            }

            public Object[] getControls() {
                return new Object[0];
            }

            public Object getControl(String type) {
                return null;
            }
        }

        /**
         * The source stream to go along with ImageDataSource.
         */
        class ImageSourceStream implements PullBufferStream {

            Vector images;
            int width, height;
            VideoFormat format;

            int nextImage = 0; // index of the next image to be read.
            boolean ended = false;

            public ImageSourceStream(int width, int height, int frameRate, Vector images) {
                this.width = width;
                this.height = height;
                this.images = images;

                format = new JPEGFormat(new Dimension(width, height), Format.NOT_SPECIFIED, Format.byteArray,
                        (float) frameRate, 75, JPEGFormat.DEC_422);
            }

            /**
             * We should never need to block assuming data are read from files.
             */
            public boolean willReadBlock() {
                return false;
            }

            /**
             * This is called from the Processor to read a frame worth of video
             * data.
             */
            public void read(Buffer buf) throws IOException {

                // Check if we've finished all the frames.
                if (nextImage >= images.size()) {
                    // We are done. Set EndOfMedia.
                    System.err.println("Done reading all images.");
                    buf.setEOM(true);
                    buf.setOffset(0);
                    buf.setLength(0);
                    ended = true;
                    return;
                }

                String imageFile = (String) images.elementAt(nextImage);
                nextImage++;

                System.err.println("  - reading image file: " + imageFile);

                // Open a random access file for the next image.
                RandomAccessFile raFile;
                raFile = new RandomAccessFile(imageFile, "r");

                byte data[] = null;

                // Check the input buffer type & size.

                if (buf.getData() instanceof byte[])
                    data = (byte[]) buf.getData();

                // Check to see the given buffer is big enough for the frame.
                if (data == null || data.length < raFile.length()) {
                    data = new byte[(int) raFile.length()];
                    buf.setData(data);
                }

                // Read the entire JPEG image from the file.
                raFile.readFully(data, 0, (int) raFile.length());

                System.err.println("    read " + raFile.length() + " bytes.");

                buf.setOffset(0);
                buf.setLength((int) raFile.length());
                buf.setFormat(format);
                buf.setFlags(buf.getFlags() | buf.FLAG_KEY_FRAME);

                // Close the random access file.
                raFile.close();
            }

            /**
             * Return the format of each video frame. That will be JPEG.
             */
            public Format getFormat() {
                return format;
            }

            public ContentDescriptor getContentDescriptor() {
                return new ContentDescriptor(ContentDescriptor.RAW);
            }

            public long getContentLength() {
                return 0;
            }

            public boolean endOfStream() {
                return ended;
            }

            public Object[] getControls() {
                return new Object[0];
            }

            public Object getControl(String type) {
                return null;
            }
        }
    }


    I am getting:

        Cannot create the DataSink: javax.media.NoDataSinkException: Cannot find a DataSink for: com.sun.media.multiplexer.BasicMux$BasicMuxDataSource@d7b1517
        javax.media.NoDataSinkException: Cannot find a DataSink for: com.sun.media.multiplexer.BasicMux$BasicMuxDataSource@d7b1517
            at javax.media.Manager.createDataSink(Manager.java:1894)
            at com.ecomm.pl4mms.test.JpegImagesToMovie.createDataSink(JpegImagesToMovie.java:168)
            at com.ecomm.pl4mms.test.JpegImagesToMovie.doIt(JpegImagesToMovie.java:104)
            at com.ecomm.pl4mms.test.JpegImagesToMovie.main(JpegImagesToMovie.java:330)

    Please help me resolve this, and let me know what might be causing it.


    I am using Java 1.8 and trying to create a video from JPEG images using javax.media. I followed http://www.oracle.com/technetwork/java/javase/documentation/jpegimagestomovie-176885.html to write the code.

  • Revision 32596: lowercase, and done for today

    1 November 2009, by fil@… — Log

    lowercase, and done for today

  • Transcode of H.264 to VP8 using libav* has incorrect frame rate

    17 April 2014, by Kevin Watson

    I’ve so far failed to get the correct output frame rate when transcoding H.264 to VP8 with the libav* libraries. I created a functioning encode of Sintel.2010.720p.mkv as WebM (VP8/Vorbis) using a modification of the transcoding.c example from the FFmpeg source. Unfortunately, the resulting file is 48 fps, unlike the 24 fps of the original and of the output of the ffmpeg command I’m trying to mimic.

    I noticed that ffprobe reports a tbc of double the fps for this and other H.264 videos, while the tbc of the resulting VP8 stream produced by the ffmpeg command is the default 1000. The stock transcoding.c example copies the decoder’s time base, which is 1/48, to the encoder AVCodecContext. Running the ffmpeg command through gdb, it looks like ffmpeg sets the time base of the AVCodecContext to 1/24, but making that change alone only causes the resulting video to be slowed to twice the duration at 24 fps.

    I can create a usable video, but the frame rate doubles. When the output frame rate is the correct 24 fps, the video is smooth but slowed to half speed.
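    For concreteness, here is a sketch of the direction I have been experimenting with: derive the encoder time base from the measured input frame rate instead of copying the decoder’s, and rescale frame timestamps into that time base before encoding. This is an untested sketch, not a confirmed fix; av_guess_frame_rate, av_inv_q and av_rescale_q are the stock libavformat/libavutil helpers, and ifmt_ctx/in_stream/enc_ctx are the same variables used in the code below.

    #include <libavformat/avformat.h>
    #include <libavutil/rational.h>
    #include <libavutil/mathematics.h>

    /* Derive the encoder time base from the measured input frame rate
     * (24/1 for this file) instead of copying dec_ctx->time_base, which for
     * H.264 is one tick per *field* (ticks_per_frame == 2), i.e. 1/48. */
    static void set_enc_time_base(AVFormatContext *ifmt_ctx, AVStream *in_stream,
                                  AVCodecContext *enc_ctx)
    {
        AVRational fr = av_guess_frame_rate(ifmt_ctx, in_stream, NULL);
        if (fr.num && fr.den)
            enc_ctx->time_base = av_inv_q(fr);         /* 1/24 */
        else
            enc_ctx->time_base = in_stream->time_base; /* fallback */
    }

    /* Frames pulled out of the filter graph still carry pts in the tick
     * scale fed to buffersrc (the 1/48 decoder time base), so they need to
     * be rescaled into the encoder time base before avcodec_encode_video2,
     * or correct 24 fps output ends up playing at half speed. */
    static void rescale_frame_to_enc_tb(AVFrame *frame, AVRational filter_tb,
                                        AVCodecContext *enc_ctx)
    {
        if (frame->pts != AV_NOPTS_VALUE)
            frame->pts = av_rescale_q(frame->pts, filter_tb, enc_ctx->time_base);
    }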

    Here is my modification of the example.

    /*
     * Copyright (c) 2010 Nicolas George
     * Copyright (c) 2011 Stefano Sabatini
     * Copyright (c) 2014 Andrey Utkin
     *
     * Permission is hereby granted, free of charge, to any person obtaining a copy
     * of this software and associated documentation files (the "Software"), to deal
     * in the Software without restriction, including without limitation the rights
     * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
     * copies of the Software, and to permit persons to whom the Software is
     * furnished to do so, subject to the following conditions:
     *
     * The above copyright notice and this permission notice shall be included in
     * all copies or substantial portions of the Software.
     *
     * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
     * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
     * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
     * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
     * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
     * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
     * THE SOFTWARE.
     */

    /**
     * @file
     * API example for demuxing, decoding, filtering, encoding and muxing
     * @example doc/examples/transcoding.c
     */

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavfilter/avfiltergraph.h>
    #include <libavfilter/avcodec.h>
    #include <libavfilter/buffersink.h>
    #include <libavfilter/buffersrc.h>
    #include <libavutil/opt.h>
    #include <libavutil/pixdesc.h>

    #define STATS_LOG "stats.log"

    static AVFormatContext *ifmt_ctx;
    static AVFormatContext *ofmt_ctx;
    typedef struct FilteringContext {
      AVFilterContext *buffersink_ctx;
      AVFilterContext *buffersrc_ctx;
      AVFilterGraph *filter_graph;
    } FilteringContext;
    static FilteringContext *filter_ctx;

    static int open_input_file(const char *filename) {
      int ret;
      unsigned int i;

      ifmt_ctx = NULL;
      if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
    av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
    return ret;
      }

      if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
    av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
    return ret;
      }

      for (i = 0; i < ifmt_ctx->nb_streams; i++) {
    AVStream *stream;
    AVCodecContext *codec_ctx;
    stream = ifmt_ctx->streams[i];
    codec_ctx = stream->codec;
    /* Reencode video & audio and remux subtitles etc. */
    if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
        || codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
      /* Open decoder */
      ret = avcodec_open2(codec_ctx,
                  avcodec_find_decoder(codec_ctx->codec_id), NULL);
      if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
        return ret;
      }
    }
      }

      av_dump_format(ifmt_ctx, 0, filename, 0);
      return 0;
    }

    static int init_output_context(char* filename) {
      int ret;
      ofmt_ctx = NULL;

      avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
      if (!ofmt_ctx) {
    av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
    return AVERROR_UNKNOWN;
      }

      return 0;
    }

    static int init_webm_encoders(int audioBitRate, int crf, int videoMaxBitRate, int threads,
                  char* quality, int speed, int pass, char* stats) {
      AVStream *out_stream;
      AVStream *in_stream;
      AVCodecContext *dec_ctx, *enc_ctx;
      AVCodec *encoder;
      int ret;
      unsigned int i;

      for (i = 0; i < ifmt_ctx->nb_streams; i++) {
    in_stream = ifmt_ctx->streams[i];
    dec_ctx = in_stream->codec;
    if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO || dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {

      AVDictionary *opts = NULL;
      if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
        encoder = avcodec_find_encoder(AV_CODEC_ID_VP8);
        out_stream = avformat_new_stream(ofmt_ctx, encoder);
        if (!out_stream) {
          av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
          return AVERROR_UNKNOWN;
        }

        enc_ctx = out_stream->codec;
        enc_ctx->height = dec_ctx->height;
        enc_ctx->width = dec_ctx->width;
        enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
        /* take first format from list of supported formats */
        enc_ctx->pix_fmt = encoder->pix_fmts[0];
        /* video time_base can be set to whatever is handy and supported by encoder */
        enc_ctx->time_base = dec_ctx->time_base;
        /* enc_ctx->time_base.num = 1; */
        /* enc_ctx->time_base.den = 24; */
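        /* Untested note: for H.264 input, dec_ctx->time_base is one tick per
         * field (ticks_per_frame == 2), i.e. 1/48 for this 24 fps source, so
         * copying it is what makes the muxer report 48 tbc. One alternative
         * sketch is to derive the time base from the measured frame rate:
         *   AVRational fr = av_guess_frame_rate(ifmt_ctx, in_stream, NULL);
         *   enc_ctx->time_base = av_inv_q(fr);   // 1/24
         */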

        enc_ctx->bit_rate = videoMaxBitRate;
        enc_ctx->thread_count = threads;
        switch (pass) {
        case 1:
          enc_ctx->flags |= CODEC_FLAG_PASS1;
          break;
        case 2:
          enc_ctx->flags |= CODEC_FLAG_PASS2;
          if (stats) {
        enc_ctx->stats_in = stats;
          }
          break;
        }

        char crfString[3];
        snprintf(crfString, 3, "%d", crf);
        av_dict_set(&amp;opts, "crf", crfString, 0);
        av_dict_set(&amp;opts, "quality", quality, 0);
        char speedString[3];
        snprintf(speedString, 3, "%d", speed);
        av_dict_set(&amp;opts, "speed", speedString, 0);
      } else {
        encoder = avcodec_find_encoder(AV_CODEC_ID_VORBIS);
        out_stream = avformat_new_stream(ofmt_ctx, encoder);
        if (!out_stream) {
          av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
          return AVERROR_UNKNOWN;
        }

        /* in_stream = ifmt_ctx->streams[i]; */
        /* dec_ctx = in_stream->codec; */
        enc_ctx = out_stream->codec;
        /* encoder = out_stream->codec->codec; */

        enc_ctx->sample_rate = dec_ctx->sample_rate;
        enc_ctx->channel_layout = dec_ctx->channel_layout;
        enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
        /* take first format from list of supported formats */
        enc_ctx->sample_fmt = encoder->sample_fmts[0];
        enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};
        enc_ctx->bit_rate = audioBitRate;
      }

      /* Open codec with the set options */
      ret = avcodec_open2(enc_ctx, encoder, &opts);
      if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
        return ret;
      }
      int unused = av_dict_count(opts);
      if (unused > 0) {
        av_log(NULL, AV_LOG_WARNING, "%d unused options\n", unused);
      }
      /* } else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) { */
    } else {
      av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
      return AVERROR_INVALIDDATA;
    } /* else { */
      /*   /\* if this stream must be remuxed *\/ */
      /*   ret = avcodec_copy_context(ofmt_ctx->streams[i]->codec, */
      /*                ifmt_ctx->streams[i]->codec); */
      /*   if (ret < 0) { */
      /*   av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n"); */
      /*   return ret; */
      /*   } */
      /* } */

    if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
      enc_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
      }

      return 0;
    }

    static int open_output_file(const char *filename) {
      int ret;

      av_dump_format(ofmt_ctx, 0, filename, 1);

      if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
    ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
      return ret;
    }
      }

      /* init muxer, write output file header */
      ret = avformat_write_header(ofmt_ctx, NULL);
      if (ret < 0) {
    av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
    return ret;
      }

      return 0;
    }

    static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
               AVCodecContext *enc_ctx, const char *filter_spec) {
      char args[512];
      int ret = 0;
      AVFilter *buffersrc = NULL;
      AVFilter *buffersink = NULL;
      AVFilterContext *buffersrc_ctx = NULL;
      AVFilterContext *buffersink_ctx = NULL;
      AVFilterInOut *outputs = avfilter_inout_alloc();
      AVFilterInOut *inputs  = avfilter_inout_alloc();
      AVFilterGraph *filter_graph = avfilter_graph_alloc();

      if (!outputs || !inputs || !filter_graph) {
    ret = AVERROR(ENOMEM);
    goto end;
      }

      if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
    buffersrc = avfilter_get_by_name("buffer");
    buffersink = avfilter_get_by_name("buffersink");
    if (!buffersrc || !buffersink) {
      av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
      ret = AVERROR_UNKNOWN;
      goto end;
    }

    snprintf(args, sizeof(args),
         "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
         dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
         dec_ctx->time_base.num, dec_ctx->time_base.den,
         dec_ctx->sample_aspect_ratio.num,
         dec_ctx->sample_aspect_ratio.den);

    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                       args, NULL, filter_graph);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
      goto end;
    }

    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                       NULL, NULL, filter_graph);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
      goto end;
    }

    ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
                 (uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
                 AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
      goto end;
    }
      } else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
    buffersrc = avfilter_get_by_name("abuffer");
    buffersink = avfilter_get_by_name("abuffersink");
    if (!buffersrc || !buffersink) {
      av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
      ret = AVERROR_UNKNOWN;
      goto end;
    }

    if (!dec_ctx->channel_layout)
      dec_ctx->channel_layout =
        av_get_default_channel_layout(dec_ctx->channels);
    snprintf(args, sizeof(args),
         "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
         dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
         av_get_sample_fmt_name(dec_ctx->sample_fmt),
         dec_ctx->channel_layout);
    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                       args, NULL, filter_graph);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
      goto end;
    }

    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                       NULL, NULL, filter_graph);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
      goto end;
    }

    ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
                 (uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
                 AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
      goto end;
    }

    ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
                 (uint8_t*)&enc_ctx->channel_layout,
                 sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
      goto end;
    }

    ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
                 (uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
                 AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
      goto end;
    }
      } else {
    ret = AVERROR_UNKNOWN;
    goto end;
      }

      /* Endpoints for the filter graph. */
      outputs->name       = av_strdup("in");
      outputs->filter_ctx = buffersrc_ctx;
      outputs->pad_idx    = 0;
      outputs->next       = NULL;

      inputs->name       = av_strdup("out");
      inputs->filter_ctx = buffersink_ctx;
      inputs->pad_idx    = 0;
      inputs->next       = NULL;

      if (!outputs->name || !inputs->name) {
    ret = AVERROR(ENOMEM);
    goto end;
      }

      if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
                      &inputs, &outputs, NULL)) < 0)
    goto end;

      if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
    goto end;

      /* Fill FilteringContext */
      fctx->buffersrc_ctx = buffersrc_ctx;
      fctx->buffersink_ctx = buffersink_ctx;
      fctx->filter_graph = filter_graph;

     end:
      avfilter_inout_free(&inputs);
      avfilter_inout_free(&outputs);

      return ret;
    }

    static int init_filters(enum AVCodecID audioCodec) {
      const char *filter_spec;
      unsigned int i;
      int ret;
      filter_ctx = av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
      if (!filter_ctx)
    return AVERROR(ENOMEM);

      for (i = 0; i < ifmt_ctx->nb_streams; i++) {
    filter_ctx[i].buffersrc_ctx  = NULL;
    filter_ctx[i].buffersink_ctx = NULL;
    filter_ctx[i].filter_graph   = NULL;
    /* Skip streams that are neither audio nor video */
    if (!(ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
          || ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
      continue;


    if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
      filter_spec = "null"; /* passthrough (dummy) filter for video */
    else
      /* TODO: make this more general */
      if (audioCodec == AV_CODEC_ID_VORBIS) {
        filter_spec = "asetnsamples=n=64";
      } else {
        /* filter_spec = "null"; /\* passthrough (dummy) filter for audio *\/ */
        filter_spec = "fps=24";
        /* filter_spec = "settb=expr=1/24"; */
      }
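    /* Note (editorial): the "fps=24" spec above sits in the non-Vorbis
     * *audio* branch, so with AV_CODEC_ID_VORBIS passed in from
     * TranscodeToWebM it is never applied to anything. As an actual video
     * filter, fps=24 would drop or duplicate frames to force constant
     * 24 fps output, assuming the incoming timestamps are sane. */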
    ret = init_filter(&filter_ctx[i], ifmt_ctx->streams[i]->codec,
              ofmt_ctx->streams[i]->codec, filter_spec);
    if (ret)
      return ret;
      }
      return 0;
    }

    static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
      int ret;
      int got_frame_local;
      AVPacket enc_pkt;
      int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
    (ifmt_ctx->streams[stream_index]->codec->codec_type ==
     AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;

      if (!got_frame)
    got_frame = &got_frame_local;

      /* av_log(NULL, AV_LOG_INFO, "Encoding frame\n"); */
      /* encode filtered frame */
      enc_pkt.data = NULL;
      enc_pkt.size = 0;
      av_init_packet(&enc_pkt);
      ret = enc_func(ofmt_ctx->streams[stream_index]->codec, &enc_pkt,
             filt_frame, got_frame);
      av_frame_free(&filt_frame);
      if (ret < 0)
    return ret;
      if (!(*got_frame))
    return 0;

      /* prepare packet for muxing */
      enc_pkt.stream_index = stream_index;
      enc_pkt.dts = av_rescale_q_rnd(enc_pkt.dts,
                     ofmt_ctx->streams[stream_index]->codec->time_base,
                     ofmt_ctx->streams[stream_index]->time_base,
                     AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
      enc_pkt.pts = av_rescale_q_rnd(enc_pkt.pts,
                     ofmt_ctx->streams[stream_index]->codec->time_base,
                     ofmt_ctx->streams[stream_index]->time_base,
                     AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
      enc_pkt.duration = av_rescale_q(enc_pkt.duration,
                      ofmt_ctx->streams[stream_index]->codec->time_base,
                      ofmt_ctx->streams[stream_index]->time_base);
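      /* Editorial note: the rescaling above moves packet timestamps from the
       * encoder's time base (copied from the 1/48 decoder time base) to the
       * muxer's 1/1000 WebM time base; a 1/48 encoder tick scale lands frames
       * 1/48 s apart here, which matches the doubled rate seen in the output. */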

      /* av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n"); */
      /* mux encoded frame */
      ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
      return ret;
    }

    static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index) {
      int ret;
      AVFrame *filt_frame;

      /* av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n"); */
      /* push the decoded frame into the filtergraph */
      ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,
                     frame, 0);
      if (ret < 0) {
    av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
    return ret;
      }

      /* pull filtered frames from the filtergraph */
      while (1) {
    filt_frame = av_frame_alloc();
    if (!filt_frame) {
      ret = AVERROR(ENOMEM);
      break;
    }
    /* av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n"); */
    ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,
                      filt_frame);
    if (ret < 0) {
      /* if no more frames for output - returns AVERROR(EAGAIN)
       * if flushed and no more frames for output - returns AVERROR_EOF
       * rewrite retcode to 0 to show it as normal procedure completion
       */
      if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        ret = 0;
      av_frame_free(&filt_frame);
      break;
    }

    filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
    ret = encode_write_frame(filt_frame, stream_index, NULL);
    if (ret < 0)
      break;
      }

      return ret;
    }

    static int flush_encoder(unsigned int stream_index) {
      int ret;
      int got_frame;

      if (!(ofmt_ctx->streams[stream_index]->codec->codec->capabilities &
        CODEC_CAP_DELAY))
    return 0;

      while (1) {
    av_log(NULL, AV_LOG_INFO, "Flushing stream #%u encoder\n", stream_index);
    ret = encode_write_frame(NULL, stream_index, &got_frame);
    if (ret < 0)
      break;
    if (!got_frame)
      return 0;
      }
      return ret;
    }

    static int transcode() {
      int ret;
      AVPacket packet = { .data = NULL, .size = 0 };
      AVFrame *frame = NULL;
      enum AVMediaType type;
      unsigned int stream_index;
      unsigned int i;
      int got_frame;
      int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);

      /* read all packets */
      while (1) {
    if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)
      break;
    stream_index = packet.stream_index;
    type = ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
    av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
       stream_index);

    if (filter_ctx[stream_index].filter_graph) {
      av_log(NULL, AV_LOG_DEBUG, "Going to reencode&amp;filter the frame\n");
      frame = av_frame_alloc();
      if (!frame) {
        ret = AVERROR(ENOMEM);
        break;
      }
      packet.dts = av_rescale_q_rnd(packet.dts,
                    ifmt_ctx->streams[stream_index]->time_base,
                    ifmt_ctx->streams[stream_index]->codec->time_base,
                    AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
      packet.pts = av_rescale_q_rnd(packet.pts,
                    ifmt_ctx->streams[stream_index]->time_base,
                    ifmt_ctx->streams[stream_index]->codec->time_base,
                    AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
      dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
        avcodec_decode_audio4;
      ret = dec_func(ifmt_ctx->streams[stream_index]->codec, frame,
             &got_frame, &packet);
      if (ret < 0) {
        av_frame_free(&frame);
        av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
        break;
      }

      if (got_frame) {
        frame->pts = av_frame_get_best_effort_timestamp(frame);
        ret = filter_encode_write_frame(frame, stream_index);
        av_frame_free(&frame);
        if (ret < 0)
          goto end;
      } else {
        av_frame_free(&frame);
      }
    } else {
      /* remux this frame without reencoding */
      packet.dts = av_rescale_q_rnd(packet.dts,
                    ifmt_ctx->streams[stream_index]->time_base,
                    ofmt_ctx->streams[stream_index]->time_base,
                    AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
      packet.pts = av_rescale_q_rnd(packet.pts,
                    ifmt_ctx->streams[stream_index]->time_base,
                    ofmt_ctx->streams[stream_index]->time_base,
                    AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

      ret = av_interleaved_write_frame(ofmt_ctx, &packet);
      if (ret < 0)
        goto end;
    }
    av_free_packet(&packet);
      }

      /* flush filters and encoders */
      for (i = 0; i < ifmt_ctx->nb_streams; i++) {
    /* flush filter */
    if (!filter_ctx[i].filter_graph)
      continue;
    ret = filter_encode_write_frame(NULL, i);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
      goto end;
    }

    /* flush encoder */
    ret = flush_encoder(i);
    if (ret < 0) {
      av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
      goto end;
    }
      }

      av_write_trailer(ofmt_ctx);

      // Retrieve and store the first instance of codec statistics
      // TODO: less naive, deal with multiple instances of statistics
      for (i = 0; i < ofmt_ctx->nb_streams; i++) {
    AVCodecContext* codec = ofmt_ctx->streams[i]->codec;
    if ((codec->flags & CODEC_FLAG_PASS1) && (codec->stats_out)) {
      FILE* logfile = fopen(STATS_LOG, "wb");
      fprintf(logfile, "%s", codec->stats_out);
      if (fclose(logfile) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Error closing log file.\n");
      }
      break;
    }
      }

      av_log(NULL, AV_LOG_INFO, "output duration = %" PRId64 "\n", ofmt_ctx->duration);

     end:
      av_free_packet(&packet);
      av_frame_free(&frame);
      for (i = 0; i < ifmt_ctx->nb_streams; i++) {
    avcodec_close(ifmt_ctx->streams[i]->codec);
    if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && ofmt_ctx->streams[i]->codec)
      avcodec_close(ofmt_ctx->streams[i]->codec);
    if (filter_ctx && filter_ctx[i].filter_graph)
      avfilter_graph_free(&filter_ctx[i].filter_graph);
      }
      av_free(filter_ctx);
      avformat_close_input(&ifmt_ctx);
      if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
    avio_close(ofmt_ctx->pb);
      avformat_free_context(ofmt_ctx);

      if (ret < 0)
    av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));

      return ret ? 1 : 0;
    }

    int TranscodeToWebM(char* inputPath, char* outputPath, int audioBitRate, int crf, int videoMaxBitRate, int threads,
            char* quality, int speed) {
      int ret;
      unsigned int pass;
      char* stats = NULL;

      av_register_all();
      avfilter_register_all();

      for (pass = 1; pass <= 2; pass++) {
    if ((ret = open_input_file(inputPath)) < 0)
      goto end;

    if ((ret = init_output_context(outputPath)) < 0)
      goto end;

    if (pass == 2) {
      size_t stats_length;
      if (cmdutils_read_file(STATS_LOG, &stats, &stats_length) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Error reading stats file.\n");
        break;
      }
    }

    if ((ret = init_webm_encoders(audioBitRate, crf, videoMaxBitRate, threads, quality, speed, pass, stats)) < 0)
      goto end;

    if ((ret = open_output_file(outputPath)) < 0)
      goto end;

    if ((ret = init_filters(AV_CODEC_ID_VORBIS)) < 0)
      goto end;

    if ((ret = transcode()) < 0)
      goto end;
      }

      if (remove(STATS_LOG) != 0) {
    av_log(NULL, AV_LOG_ERROR, "Failed to remove %s\n", STATS_LOG);
      }

     end:
      if (ret < 0) {
    av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));
    return ret;
      }

      return 0;
    }

    Here is the output from the ffmpeg command I am trying to mimic.

    ffmpeg version N-62301-g59a5384 Copyright (c) 2000-2014 the FFmpeg developers
     built on Apr  9 2014 09:58:44 with gcc 4.8.2 (GCC) 20140206 (prerelease)
     configuration: --prefix=/opt/ffmpeg --extra-cflags=-I/opt/x264/include --extra-ldflags=-L/opt/x264/lib --extra-libs=-ldl --enable-gpl --enable-nonfree --enable-libfdk-aac --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264
     libavutil      52. 75.100 / 52. 75.100
     libavcodec     55. 58.103 / 55. 58.103
     libavformat    55. 36.102 / 55. 36.102
     libavdevice    55. 11.100 / 55. 11.100
     libavfilter     4.  3.100 /  4.  3.100
     libswscale      2.  6.100 /  2.  6.100
     libswresample   0. 18.100 /  0. 18.100
     libpostproc    52.  3.100 / 52.  3.100
    Input #0, matroska,webm, from '/mnt/scratch/test_source/Sintel.2010.720p.mkv':
     Metadata:
    encoder         : libebml v1.0.0 + libmatroska v1.0.0
    creation_time   : 2011-04-24 17:20:33
     Duration: 00:14:48.03, start: 0.000000, bitrate: 6071 kb/s
    Chapter #0.0: start 0.000000, end 103.125000
    Metadata:
     title           : Chapter 01
    Chapter #0.1: start 103.125000, end 148.667000
    Metadata:
     title           : Chapter 02
    Chapter #0.2: start 148.667000, end 349.792000
    Metadata:
     title           : Chapter 03
    Chapter #0.3: start 349.792000, end 437.208000
    Metadata:
     title           : Chapter 04
    Chapter #0.4: start 437.208000, end 472.075000
    Metadata:
     title           : Chapter 05
    Chapter #0.5: start 472.075000, end 678.833000
    Metadata:
     title           : Chapter 06
    Chapter #0.6: start 678.833000, end 744.083000
    Metadata:
     title           : Chapter 07
    Chapter #0.7: start 744.083000, end 888.032000
    Metadata:
     title           : Chapter 08
    Stream #0:0(eng): Video: h264 (High), yuv420p(tv, bt709), 1280x544, SAR 1:1 DAR 40:17, 24 fps, 24 tbr, 1k tbn, 48 tbc
    Stream #0:1(eng): Audio: ac3, 48000 Hz, 5.1(side), fltp, 640 kb/s
    Metadata:
     title           : AC3 5.1 @ 640 Kbps
    Stream #0:2(ger): Subtitle: subrip
    Stream #0:3(eng): Subtitle: subrip
    Stream #0:4(spa): Subtitle: subrip
    Stream #0:5(fre): Subtitle: subrip
    Stream #0:6(ita): Subtitle: subrip
    Stream #0:7(dut): Subtitle: subrip
    Stream #0:8(pol): Subtitle: subrip
    Stream #0:9(por): Subtitle: subrip
    Stream #0:10(rus): Subtitle: subrip
    Stream #0:11(vie): Subtitle: subrip
    [libvpx @ 0x24b74c0] v1.3.0
    Output #0, webm, to '/mnt/scratch/test_out/Sintel.2010.720p.script.webm':
     Metadata:
    encoder         : Lavf55.36.102
    Chapter #0.0: start 0.000000, end 103.125000
    Metadata:
     title           : Chapter 01
    Chapter #0.1: start 103.125000, end 148.667000
    Metadata:
     title           : Chapter 02
    Chapter #0.2: start 148.667000, end 349.792000
    Metadata:
     title           : Chapter 03
    Chapter #0.3: start 349.792000, end 437.208000
    Metadata:
     title           : Chapter 04
    Chapter #0.4: start 437.208000, end 472.075000
    Metadata:
     title           : Chapter 05
    Chapter #0.5: start 472.075000, end 678.833000
    Metadata:
     title           : Chapter 06
    Chapter #0.6: start 678.833000, end 744.083000
    Metadata:
     title           : Chapter 07
    Chapter #0.7: start 744.083000, end 888.032000
    Metadata:
     title           : Chapter 08
    Stream #0:0(eng): Video: vp8 (libvpx), yuv420p, 1280x544 [SAR 1:1 DAR 40:17], q=-1--1, pass 2, 60000 kb/s, 1k tbn, 24 tbc
    Stream #0:1(eng): Audio: vorbis (libvorbis), 48000 Hz, 5.1(side), fltp, 384 kb/s
    Metadata:
     title           : AC3 5.1 @ 640 Kbps
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 -> libvpx)
     Stream #0:1 -> #0:1 (ac3 -> libvorbis)
    Press [q] to stop, [?] for help
    frame=21312 fps= 11 q=0.0 Lsize=  567191kB time=00:14:48.01 bitrate=5232.4kbits/s    
    video:537377kB audio:29266kB subtitle:0kB other streams:0kB global headers:7kB muxing overhead: 0.096885%