
Other articles (104)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Participate in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use the SPIP translation interface, where all of MediaSPIP’s language modules are available. You simply need to sign up to the translators’ discussion list to request more information.
    At the moment, MediaSPIP is only available in French and (...)

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work on Internet Explorer
    On Internet Explorer (at least versions 8 and 7), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache’s mod_deflate.
    If the configuration of that Apache module contains a line that looks like the following, try removing it or commenting it out to see whether the player then works correctly: (...)

On other sites (11081)

  • WebVTT Discussions at FOMS

    1 January 2014, by silvia

    At the recent FOMS (Foundations of Open Media Software and Standards) Developer Workshop, we had a massive focus on WebVTT and the state of its feature set. You will find links to summaries of the individual discussions on the FOMS Schedule page. Here are some of the key results I went away with.

    1. WebVTT Regions

    The key driving force for improvements to WebVTT continues to be the accurate representation of CEA608/708 captioning. As part of that drive, we’ve introduced regions (the CEA708 “window” concept) to WebVTT. WebVTT regions satisfy multiple requirements of CEA608/708 captions:

    1. support for rollup captions
    2. support for background color and border color on a group of cues independent of the background color of the individual cue
    3. possibility to move a group of cues from one location on screen to a different one
    4. support to specify an anchor point and a growth direction for cues when their text size changes
    5. support for specifying a fixed number of lines to be rendered
    6. possibility to specify which region is rendered in front of which other one when regions overlap

    While WebVTT regions are designed to satisfy all of the above points, the specification isn’t actually complete yet and some of the above needs aren’t addressed yet.
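
    For orientation, here is a minimal sketch of a region definition and a cue placed into it, using the draft region syntax that was being discussed around that time (the setting names id, width, lines, regionanchor, viewportanchor and scroll come from that draft; the exact serialization has since evolved, so treat this as illustrative rather than normative):

    WEBVTT

    Region: id=fred width=40% lines=3 regionanchor=0%,100% viewportanchor=10%,90% scroll=up

    00:00:01.000 --> 00:00:04.000 region:fred
    Cues assigned to region fred are laid out inside it
    and roll up as new lines arrive.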

    We have an open bug to move a region elsewhere. A first discussion at FOMS seemed to indicate that we’ll have to add syntax for updating a region at a particular time and thus give region definitions a way to be valid only for a certain time frame. I can imagine that the region definitions that we have in the header of the WebVTT file now would have an implicitly defined time frame from the start to the end of the file, but could be overruled by a re-definition anywhere within the WebVTT file. That redefinition would need to provide a start and end time.

    We registered a bug to add specifying the width and height of regions (and possibly of cues) by em (i.e. by multiples of the largest character in a font). This should allow us to have the region grow/shrink around the region anchor point with a change of font size by script or a user. em specifications should also be applied to cues – that matches the column count of CEA708/608 better.

    When regions overlap, the original region extension spec already suggested a “layer” cue setting. It will be easy to add it.

    Another change that we will ultimately need is the “scroll” setting: we will need to introduce support for scrolling text down or from left-to-right or right-to-left, e.g. vertical scrolling text seems to be used in some Chinese caption use cases.

    2. Unify Rendering Approach

    The introduction of regions created a second code path in the rendering spec with some duplication. At FOMS we discussed if it was possible to unify that. The suggestion is to render all cues into a region. Those that are not part of a region would be rendered into an anonymous region that covers the complete viewport. There may be some consequences to this, e.g. cue settings should be usable across all cues, no matter whether or not part of a region, and avoiding cue overlap may need to be done within regions.

    Here’s a rough outline of the new rendering algorithm:

    (1) Render the regions:

    For a region specified in the file, render its values as given. For the anonymous region, render the following defaults:

    • width: 100%
    • lines: videoheight/lineheight
    • regionanchor: 0,0
    • viewportanchor: 0,0
    • scroll: none

    (2) Render the cues:

    • Create a cue box and put it in its region (anonymous if none given).
    • Calculate position & size of cue box from cue settings (position, line, size).
    • Calculate position of cue text inside cue box from remaining cue settings (vertical, align).

    3. Vertical Features

    WebVTT includes vertical rendering, both right-to-left and left-to-right. However, regions are not defined for vertical rendering. Eventually, we’re going to have to look at the vertical features of WebVTT in more detail and figure out whether the spec works for them and what real-world requirements we have missed. We hope we can get some help from users in countries where vertically rendered captions/subtitles are the norm.

    4. Best Practices

    Some of the WebVTT users at FOMS suggested it would be advantageous to start a list of “best practices” for how to author captions with WebVTT. Example recommendations are:

    • Use line numbers only to position cues from the top or bottom of the viewport. Don’t use them otherwise.
    • Note that when the user increases the fontsize in rollup captions and thus introduces new line breaks, your cues will roll by faster because the number of lines of a rollup is fixed.
    • Make sure to use the &lrm; and &rlm; markers to control the directionality of your text.

    It would be nice if somebody started such a document.
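
    As a small illustration of the directionality recommendation above, &lrm; and &rlm; are written directly in cue text as character escapes; for example, a left-to-right mark placed after an embedded right-to-left word keeps the following punctuation on the expected side (a minimal sketch):

    00:00:12.000 --> 00:00:15.000
    She greeted them with שלום&lrm;, then continued in English.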

    5. Non-caption use cases

    Instead of continuing to look back and improve our support of captions/subtitles in WebVTT, one session at FOMS also went ahead and looked forward to other use cases. The following requirements came out of this:

    5.1 Preview Thumbnails

    A common use case for timed data is the use of preview thumbnails on the navigation bar of videos. A native implementation of preview thumbnails would allow crawlers and search engines to have a standardised way of extracting timed images for media files, so introduction of a new @kind value “thumbnails” was suggested.

    The content of a “thumbnails” cue could be any of:

    • an image URL
    • a sprite URL to a single image
    • a spatial & temporal media fragment URL to a media resource
    • base64 encoded image (data URI)
    • an iframe offset to the media resource

    The suggestion is to allow anything that would work in an img @src attribute as the value in a cue of @kind=“thumbnails”. Responsive images might also be useful for a track of @kind=“thumbnails”. It may even be possible to define an inband thumbnail track based on the track of @kind=“thumbnails”. Such cues should also work in the JavaScript track API.
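
    As a rough sketch of what such a track could look like (an assumption based on the proposal above, not an agreed format; the file and attribute names are placeholders), the cues simply carry spatial media fragment URLs into a sprite image, and the track would be referenced with the proposed kind value, e.g. <track kind="thumbnails" src="thumbs.vtt">:

    WEBVTT

    00:00:00.000 --> 00:00:05.000
    sprite.jpg#xywh=0,0,160,90

    00:00:05.000 --> 00:00:10.000
    sprite.jpg#xywh=160,0,160,90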

    5.2 Chapter markers

    There is interest in putting richer content than just a chapter title into chapter cues. Often, chapters consist of a title, text and an image. The text is not so important, but the image is used almost everywhere that chapters are used. There may be a need to extend chapter cue content with images, similar to what a @kind=“thumbnails” track offers.

    The conclusion that we arrived at was that we need to make @kind=”thumbnails” work first and then look at using the learnings from that to extend @kind=”chapters”.

    5.3 Inband tracks for live video

    A difficult topic was opened with the question of how to transport text tracks in live video. In live captioning, end times are never created for cues, but are implied by the start time of the next cue. This is a use case that hasn’t been addressed in HTML5/WebVTT yet. An old proposal to allow a special end time value of “NEXT” was discussed and recommended for adoption. There was also support for the spec change that stops blocking the loading of a VTT file until all cues have been loaded.
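
    To make the “NEXT” idea concrete, a live-captioning cue whose end time is only known once the next cue starts might be authored like this under that proposal (illustrative only; this is not accepted WebVTT syntax):

    00:01:23.000 --> NEXT
    and the speaker moves on to the next topic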

    5.4 Cross-domain VTT loading

    A brief discussion centered around the fact that the spec disallows cross-domain loading of WebVTT files, but that no browser implements this. This needs to be discussed at the HTML WG level.

    6. Regions in live captioning

    The final topic that we discussed was how we could provide support for regions in live captioning.

    • The currently active region definitions will need to become part of the header of every VTT file segment that HLS uses, so they are available in case the cues in the segment file reference them.
    • “NEXT” in end time markers would make authoring of live captioned VTT files easier.
    • If the application wants to use 1 word at a time and doesn’t want to delay sending the word until the full cue is authored (e.g. in a Hangout type environment), we will need to introduce the concept of “cue continuation markers”, so we know that a cue could be extended with the next VTT file fragment.

    This is an extensive and impressive amount of discussion around WebVTT and a lot of new work to be performed in the future. I’m very grateful for all the people who have contributed to these discussions at FOMS and will hopefully continue to help get the specifications right.

  • MediaCodec - save timing info for ffmpeg?

    18 November 2014, by Mark

    I have a requirement to encrypt video before it hits the disk. It seems on Android the only way to do this is to use MediaCodec, and encrypt and save the raw h264 elementary streams. (The MediaRecorder and Muxer classes operate on FileDescriptors, not an OutputStream, so I can’t wrap it with a CipherOutputStream).

    Using the grafika code as a base, I’m able to save a raw h264 elementary stream by replacing the Muxer in the VideoEncoderCore class with a WriteableByteChannel, backed by a CipherOutputStream (code below, minus the CipherOutputStream).
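
    For reference, the omitted wiring might look roughly like the following sketch (this is an assumption, not the author's actual code: the AES/CTR transformation, keyBytes and ivBytes are placeholders, exception handling is left out, and it needs the javax.crypto.Cipher, javax.crypto.CipherOutputStream, javax.crypto.spec.SecretKeySpec and javax.crypto.spec.IvParameterSpec imports). In the constructor, the plain FileOutputStream would be wrapped before being turned into a channel:

        // Hypothetical sketch of the CipherOutputStream wiring omitted from the code below.
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(keyBytes, "AES"),    // keyBytes: your AES key material
                new IvParameterSpec(ivBytes));         // ivBytes: a 16-byte IV/nonce
        OutputStream encrypted = new CipherOutputStream(
                new BufferedOutputStream(new FileOutputStream(outputFile)), cipher);
        outChannel = Channels.newChannel(encrypted);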

    If I take the resulting output file over to the desktop I’m able to use ffmpeg to mux the h264 stream to a playable mp4 file. What’s missing however is timing information. ffmpeg always assumes 25fps. What I’m looking for is a way to save the timing info, perhaps to a separate file, that I can use to give ffmpeg the right information on the desktop.
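
    One way to do that (a sketch under the assumption that a plain text sidecar is acceptable; nothing like this exists in the grafika code) is to log each buffer's presentationTimeUs from drainEncoder() next to the stream. If the resulting rate turns out to be constant, passing it to ffmpeg with -framerate (or -r) before -i when muxing is enough; otherwise the sidecar at least gives you the data to work from.

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;

        /** Hypothetical sidecar writer: one presentation timestamp (microseconds) per line. */
        class TimestampSidecar {
            private final PrintWriter writer;

            TimestampSidecar(String path) throws IOException {
                writer = new PrintWriter(new FileWriter(path));
            }

            /** Call with mBufferInfo.presentationTimeUs right after writing encodedData. */
            void record(long presentationTimeUs) {
                writer.println(presentationTimeUs);
            }

            void close() {
                writer.close();
            }
        }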

    I’m not doing audio yet, but I can imagine I’ll need to do the same thing there, if I’m to have any hope of remotely accurate syncing.

    FWIW, I’m a total newbie here, and I really don’t know much of anything about SPS, NAL, Atoms, etc.

    /*
    * Copyright 2014 Google Inc. All rights reserved.
    *
    * Licensed under the Apache License, Version 2.0 (the "License");
    * you may not use this file except in compliance with the License.
    * You may obtain a copy of the License at
    *
    *      http://www.apache.org/licenses/LICENSE-2.0
    *
    * Unless required by applicable law or agreed to in writing, software
    * distributed under the License is distributed on an "AS IS" BASIS,
    * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    * See the License for the specific language governing permissions and
    * limitations under the License.
    */


    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.util.Log;
    import android.view.Surface;

    import java.io.BufferedOutputStream;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.Channels;
    import java.nio.channels.WritableByteChannel;

    /**
    * This class wraps up the core components used for surface-input video encoding.
    * <p>
    * Once created, frames are fed to the input surface.  Remember to provide the presentation
    * time stamp, and always call drainEncoder() before swapBuffers() to ensure that the
    * producer side doesn't get backed up.
     * <p>
    * This class is not thread-safe, with one exception: it is valid to use the input surface
    * on one thread, and drain the output on a different thread.
    */
    public class VideoEncoderCore {
       private static final String TAG = MainActivity.TAG;
       private static final boolean VERBOSE = false;

       // TODO: these ought to be configurable as well
       private static final String MIME_TYPE = "video/avc";    // H.264 Advanced Video Coding
       private static final int FRAME_RATE = 30;               // 30fps
       private static final int IFRAME_INTERVAL = 5;           // 5 seconds between I-frames

       private Surface mInputSurface;
       private MediaCodec mEncoder;
       private MediaCodec.BufferInfo mBufferInfo;
       private int mTrackIndex;
       //private MediaMuxer mMuxer;
       //private boolean mMuxerStarted;
       private WritableByteChannel outChannel;

       /**
        * Configures encoder and muxer state, and prepares the input Surface.
        */
       public VideoEncoderCore(int width, int height, int bitRate, File outputFile)
               throws IOException {
           mBufferInfo = new MediaCodec.BufferInfo();

           MediaFormat format = MediaFormat.createVideoFormat(MIME_TYPE, width, height);

           // Set some properties.  Failing to specify some of these can cause the MediaCodec
           // configure() call to throw an unhelpful exception.
           format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                   MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
           format.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
           format.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
           format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL);
           if (VERBOSE) Log.d(TAG, "format: " + format);

           // Create a MediaCodec encoder, and configure it with our format.  Get a Surface
           // we can use for input and wrap it with a class that handles the EGL work.
           mEncoder = MediaCodec.createEncoderByType(MIME_TYPE);
           mEncoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
           mInputSurface = mEncoder.createInputSurface();
           mEncoder.start();

           // Create a MediaMuxer.  We can't add the video track and start() the muxer here,
           // because our MediaFormat doesn't have the Magic Goodies.  These can only be
           // obtained from the encoder after it has started processing data.
           //
           // We're not actually interested in multiplexing audio.  We just want to convert
           // the raw H.264 elementary stream we get from MediaCodec into a .mp4 file.
           //mMuxer = new MediaMuxer(outputFile.toString(),
           //        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

           mTrackIndex = -1;
           //mMuxerStarted = false;
           outChannel = Channels.newChannel(new BufferedOutputStream(new FileOutputStream(outputFile)));
       }

       /**
        * Returns the encoder's input surface.
        */
       public Surface getInputSurface() {
           return mInputSurface;
       }

       /**
        * Releases encoder resources.
        */
       public void release() {
           if (VERBOSE) Log.d(TAG, "releasing encoder objects");
           if (mEncoder != null) {
               mEncoder.stop();
               mEncoder.release();
               mEncoder = null;
           }
           try {
               outChannel.close();
           }
           catch (Exception e) {
               Log.e(TAG,"Couldn't close output stream.");
           }
       }

       /**
        * Extracts all pending data from the encoder and forwards it to the muxer.
         * <p>
        * If endOfStream is not set, this returns when there is no more data to drain.  If it
        * is set, we send EOS to the encoder, and then iterate until we see EOS on the output.
        * Calling this with endOfStream set should be done once, right before stopping the muxer.
         * <p>
        * We're just using the muxer to get a .mp4 file (instead of a raw H.264 stream).  We're
        * not recording audio.
        */
       public void drainEncoder(boolean endOfStream) {
           final int TIMEOUT_USEC = 10000;
           if (VERBOSE) Log.d(TAG, "drainEncoder(" + endOfStream + ")");

           if (endOfStream) {
               if (VERBOSE) Log.d(TAG, "sending EOS to encoder");
               mEncoder.signalEndOfInputStream();
           }

           ByteBuffer[] encoderOutputBuffers = mEncoder.getOutputBuffers();
           while (true) {
               int encoderStatus = mEncoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
               if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                   // no output available yet
                   if (!endOfStream) {
                       break;      // out of while
                   } else {
                       if (VERBOSE) Log.d(TAG, "no output available, spinning to await EOS");
                   }
               } else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                   // not expected for an encoder
                   encoderOutputBuffers = mEncoder.getOutputBuffers();
               } else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                   // should happen before receiving buffers, and should only happen once
                   //if (mMuxerStarted) {
                   //    throw new RuntimeException("format changed twice");
                   //}
                   MediaFormat newFormat = mEncoder.getOutputFormat();
                   Log.d(TAG, "encoder output format changed: " + newFormat);

                   // now that we have the Magic Goodies, start the muxer
                   //mTrackIndex = mMuxer.addTrack(newFormat);
                   //mMuxer.start();
                   //mMuxerStarted = true;
                } else if (encoderStatus < 0) {
                   Log.w(TAG, "unexpected result from encoder.dequeueOutputBuffer: " +
                           encoderStatus);
                   // let's ignore it
               } else {
                   ByteBuffer encodedData = encoderOutputBuffers[encoderStatus];
                   if (encodedData == null) {
                       throw new RuntimeException("encoderOutputBuffer " + encoderStatus +
                               " was null");
                   }

                   /*
                      FFMPEG needs this info.
                    if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                       // The codec config data was pulled out and fed to the muxer when we got
                       // the INFO_OUTPUT_FORMAT_CHANGED status.  Ignore it.
                       if (VERBOSE) Log.d(TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
                       mBufferInfo.size = 0;
                   }
                   */

                   if (mBufferInfo.size != 0) {
                       /*
                       if (!mMuxerStarted) {
                           throw new RuntimeException("muxer hasn't started");
                       }
                       */

                       // adjust the ByteBuffer values to match BufferInfo (not needed?)
                       encodedData.position(mBufferInfo.offset);
                       encodedData.limit(mBufferInfo.offset + mBufferInfo.size);

                       try {
                           outChannel.write(encodedData);
                       }
                       catch (Exception e) {
                           Log.e(TAG,"Error writing output.",e);
                       }
                       if (VERBOSE) {
                           Log.d(TAG, "sent " + mBufferInfo.size + " bytes to muxer, ts=" +
                                   mBufferInfo.presentationTimeUs);
                       }
                   }

                   mEncoder.releaseOutputBuffer(encoderStatus, false);

                    if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                       if (!endOfStream) {
                           Log.w(TAG, "reached end of stream unexpectedly");
                       } else {
                           if (VERBOSE) Log.d(TAG, "end of stream reached");
                       }
                       break;      // out of while
                   }
               }
           }
       }
    }
  • How to play raw h264 produced by MediaCodec encoder?

    1 November 2014, by jackos2500

    I’m a bit new when it comes to MediaCodec (and video encoding/decoding in general), so correct me if anything I say here is wrong.

    I want to play the raw h264 output of MediaCodec with VLC/ffplay. I need this to play because my end goal is to stream some live video to a computer, and MediaMuxer only produces a file on disk rather than something I can stream with (very) low latency to a desktop. (I’m open to other solutions, but I have not found anything else that fits the latency requirement)

    Here is the code I’m using to encode the video and write it to a file (it’s based off the MediaCodec example found here, only with the MediaMuxer part removed):

    package com.jackos2500.droidtop;

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.opengl.EGL14;
    import android.opengl.EGLConfig;
    import android.opengl.EGLContext;
    import android.opengl.EGLDisplay;
    import android.opengl.EGLExt;
    import android.opengl.EGLSurface;
    import android.opengl.GLES20;
    import android.os.Environment;
    import android.util.Log;
    import android.view.Surface;

    import java.io.BufferedOutputStream;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    public class StreamH264 {
       private static final String TAG = "StreamH264";
       private static final boolean VERBOSE = true;           // lots of logging

       // where to put the output file (note: /sdcard requires WRITE_EXTERNAL_STORAGE permission)
       private static final File OUTPUT_DIR = Environment.getExternalStorageDirectory();

       public static int MEGABIT = 1000 * 1000;
       private static final int IFRAME_INTERVAL = 10;

       private static final int TEST_R0 = 0;
       private static final int TEST_G0 = 136;
       private static final int TEST_B0 = 0;
       private static final int TEST_R1 = 236;
       private static final int TEST_G1 = 50;
       private static final int TEST_B1 = 186;

       private MediaCodec codec;
       private CodecInputSurface inputSurface;
       private BufferedOutputStream out;

       private MediaCodec.BufferInfo bufferInfo;
       public StreamH264() {

       }

       private void prepareEncoder() throws IOException {
           bufferInfo = new MediaCodec.BufferInfo();

           MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
           format.setInteger(MediaFormat.KEY_BIT_RATE, 2 * MEGABIT);
           format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
           format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
           format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL);

           codec = MediaCodec.createEncoderByType("video/avc");
           codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
           inputSurface = new CodecInputSurface(codec.createInputSurface());
           codec.start();

           File dst = new File(OUTPUT_DIR, "test.264");
           out = new BufferedOutputStream(new FileOutputStream(dst));
       }
       private void releaseEncoder() throws IOException {
           if (VERBOSE) Log.d(TAG, "releasing encoder objects");
           if (codec != null) {
               codec.stop();
               codec.release();
               codec = null;
           }
           if (inputSurface != null) {
               inputSurface.release();
               inputSurface = null;
           }
           if (out != null) {
               out.flush();
               out.close();
               out = null;
           }
       }
       public void stream() throws IOException {
           try {
               prepareEncoder();
               inputSurface.makeCurrent();
                for (int i = 0; i < (30 * 5); i++) {
                   // Feed any pending encoder output into the file.
                   drainEncoder(false);

                   // Generate a new frame of input.
                   generateSurfaceFrame(i);
                   inputSurface.setPresentationTime(computePresentationTimeNsec(i, 30));

                   // Submit it to the encoder.  The eglSwapBuffers call will block if the input
                   // is full, which would be bad if it stayed full until we dequeued an output
                   // buffer (which we can't do, since we're stuck here).  So long as we fully drain
                   // the encoder before supplying additional input, the system guarantees that we
                   // can supply another frame without blocking.
                   if (VERBOSE) Log.d(TAG, "sending frame " + i + " to encoder");
                   inputSurface.swapBuffers();
               }
               // send end-of-stream to encoder, and drain remaining output
               drainEncoder(true);
           } finally {
               // release encoder, muxer, and input Surface
               releaseEncoder();
           }
       }

       private void drainEncoder(boolean endOfStream) throws IOException {
           final int TIMEOUT_USEC = 10000;
           if (VERBOSE) Log.d(TAG, "drainEncoder(" + endOfStream + ")");

           if (endOfStream) {
               if (VERBOSE) Log.d(TAG, "sending EOS to encoder");
               codec.signalEndOfInputStream();
           }
           ByteBuffer[] outputBuffers = codec.getOutputBuffers();
           while (true) {
               int encoderStatus = codec.dequeueOutputBuffer(bufferInfo, TIMEOUT_USEC);
               if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                   // no output available yet
                   if (!endOfStream) {
                       break;      // out of while
                   } else {
                       if (VERBOSE) Log.d(TAG, "no output available, spinning to await EOS");
                   }
               } else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                   // not expected for an encoder
                   outputBuffers = codec.getOutputBuffers();
               } else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                   // should happen before receiving buffers, and should only happen once
                   MediaFormat newFormat = codec.getOutputFormat();
                   Log.d(TAG, "encoder output format changed: " + newFormat);
                } else if (encoderStatus < 0) {
                   Log.w(TAG, "unexpected result from encoder.dequeueOutputBuffer: " + encoderStatus);
                   // let's ignore it
               } else {
                   ByteBuffer encodedData = outputBuffers[encoderStatus];
                   if (encodedData == null) {
                       throw new RuntimeException("encoderOutputBuffer " + encoderStatus + " was null");
                   }

                    if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                       // The codec config data was pulled out and fed to the muxer when we got
                       // the INFO_OUTPUT_FORMAT_CHANGED status.  Ignore it.
                       if (VERBOSE) Log.d(TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
                       bufferInfo.size = 0;
                   }

                   if (bufferInfo.size != 0) {
                       // adjust the ByteBuffer values to match BufferInfo (not needed?)
                       encodedData.position(bufferInfo.offset);
                       encodedData.limit(bufferInfo.offset + bufferInfo.size);

                       byte[] data = new byte[bufferInfo.size];
                       encodedData.get(data);
                       out.write(data);
                       if (VERBOSE) Log.d(TAG, "sent " + bufferInfo.size + " bytes to file");
                   }

                   codec.releaseOutputBuffer(encoderStatus, false);

                    if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                       if (!endOfStream) {
                           Log.w(TAG, "reached end of stream unexpectedly");
                       } else {
                           if (VERBOSE) Log.d(TAG, "end of stream reached");
                       }
                       break;      // out of while
                   }
               }
           }
       }
       private void generateSurfaceFrame(int frameIndex) {
           frameIndex %= 8;

           int startX, startY;
            if (frameIndex < 4) {
               // (0,0) is bottom-left in GL
               startX = frameIndex * (1280 / 4);
               startY = 720 / 2;
           } else {
               startX = (7 - frameIndex) * (1280 / 4);
               startY = 0;
           }

           GLES20.glClearColor(TEST_R0 / 255.0f, TEST_G0 / 255.0f, TEST_B0 / 255.0f, 1.0f);
           GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

           GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
           GLES20.glScissor(startX, startY, 1280 / 4, 720 / 2);
           GLES20.glClearColor(TEST_R1 / 255.0f, TEST_G1 / 255.0f, TEST_B1 / 255.0f, 1.0f);
           GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
           GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
       }
       private static long computePresentationTimeNsec(int frameIndex, int frameRate) {
           final long ONE_BILLION = 1000000000;
           return frameIndex * ONE_BILLION / frameRate;
       }

       /**
        * Holds state associated with a Surface used for MediaCodec encoder input.
        * <p>
        * The constructor takes a Surface obtained from MediaCodec.createInputSurface(), and uses that
        * to create an EGL window surface.  Calls to eglSwapBuffers() cause a frame of data to be sent
        * to the video encoder.
             * <p>
        * This object owns the Surface -- releasing this will release the Surface too.
        */
       private static class CodecInputSurface {
           private static final int EGL_RECORDABLE_ANDROID = 0x3142;

           private EGLDisplay mEGLDisplay = EGL14.EGL_NO_DISPLAY;
           private EGLContext mEGLContext = EGL14.EGL_NO_CONTEXT;
           private EGLSurface mEGLSurface = EGL14.EGL_NO_SURFACE;

           private Surface mSurface;

           /**
            * Creates a CodecInputSurface from a Surface.
            */
           public CodecInputSurface(Surface surface) {
               if (surface == null) {
                   throw new NullPointerException();
               }
               mSurface = surface;

               eglSetup();
           }

           /**
            * Prepares EGL.  We want a GLES 2.0 context and a surface that supports recording.
            */
           private void eglSetup() {
               mEGLDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
               if (mEGLDisplay == EGL14.EGL_NO_DISPLAY) {
                   throw new RuntimeException("unable to get EGL14 display");
               }
               int[] version = new int[2];
               if (!EGL14.eglInitialize(mEGLDisplay, version, 0, version, 1)) {
                   throw new RuntimeException("unable to initialize EGL14");
               }

               // Configure EGL for recording and OpenGL ES 2.0.
               int[] attribList = {
                       EGL14.EGL_RED_SIZE, 8,
                       EGL14.EGL_GREEN_SIZE, 8,
                       EGL14.EGL_BLUE_SIZE, 8,
                       EGL14.EGL_ALPHA_SIZE, 8,
                       EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                       EGL_RECORDABLE_ANDROID, 1,
                       EGL14.EGL_NONE
               };
               EGLConfig[] configs = new EGLConfig[1];
               int[] numConfigs = new int[1];
               EGL14.eglChooseConfig(mEGLDisplay, attribList, 0, configs, 0, configs.length,
                       numConfigs, 0);
               checkEglError("eglCreateContext RGB888+recordable ES2");

               // Configure context for OpenGL ES 2.0.
               int[] attrib_list = {
                       EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
                       EGL14.EGL_NONE
               };
               mEGLContext = EGL14.eglCreateContext(mEGLDisplay, configs[0], EGL14.EGL_NO_CONTEXT,
                       attrib_list, 0);
               checkEglError("eglCreateContext");

               // Create a window surface, and attach it to the Surface we received.
               int[] surfaceAttribs = {
                       EGL14.EGL_NONE
               };
               mEGLSurface = EGL14.eglCreateWindowSurface(mEGLDisplay, configs[0], mSurface,
                       surfaceAttribs, 0);
               checkEglError("eglCreateWindowSurface");
           }

           /**
            * Discards all resources held by this class, notably the EGL context.  Also releases the
            * Surface that was passed to our constructor.
            */
           public void release() {
               if (mEGLDisplay != EGL14.EGL_NO_DISPLAY) {
                   EGL14.eglMakeCurrent(mEGLDisplay, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE,
                           EGL14.EGL_NO_CONTEXT);
                   EGL14.eglDestroySurface(mEGLDisplay, mEGLSurface);
                   EGL14.eglDestroyContext(mEGLDisplay, mEGLContext);
                   EGL14.eglReleaseThread();
                   EGL14.eglTerminate(mEGLDisplay);
               }

               mSurface.release();

               mEGLDisplay = EGL14.EGL_NO_DISPLAY;
               mEGLContext = EGL14.EGL_NO_CONTEXT;
               mEGLSurface = EGL14.EGL_NO_SURFACE;

               mSurface = null;
           }

           /**
            * Makes our EGL context and surface current.
            */
           public void makeCurrent() {
               EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
               checkEglError("eglMakeCurrent");
           }

           /**
            * Calls eglSwapBuffers.  Use this to "publish" the current frame.
            */
           public boolean swapBuffers() {
               boolean result = EGL14.eglSwapBuffers(mEGLDisplay, mEGLSurface);
               checkEglError("eglSwapBuffers");
               return result;
           }

           /**
            * Sends the presentation time stamp to EGL.  Time is expressed in nanoseconds.
            */
           public void setPresentationTime(long nsecs) {
               EGLExt.eglPresentationTimeANDROID(mEGLDisplay, mEGLSurface, nsecs);
               checkEglError("eglPresentationTimeANDROID");
           }

           /**
            * Checks for EGL errors.  Throws an exception if one is found.
            */
           private void checkEglError(String msg) {
               int error;
               if ((error = EGL14.eglGetError()) != EGL14.EGL_SUCCESS) {
                   throw new RuntimeException(msg + ": EGL error: 0x" + Integer.toHexString(error));
               }
           }
       }
    }

    However, the file produced from this code does not play with VLC or ffplay. Can anyone tell me what I’m doing wrong ? I believe it is due to an incorrect format (or total lack) of headers required for the playing of raw h264, as I have had success playing .264 files downloaded from the internet with ffplay. Also, I’m not sure exactly how I’m going to stream this video to a computer, so if somebody could give me some suggestions as to how I might do that, I would be very grateful ! Thanks !