Other articles (40)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects/individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Other interesting software

    13 April 2011

    We don’t claim to be the only ones doing what we do, and we certainly don’t claim to be the best; we just try to do it well and to keep getting better.
    The following list covers software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to do the same as.
    We don’t know them and haven’t tried them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

On other sites (6300)

  • What ffmpeg settings to ensure 0 duplicate frames and 0 dropped frames when capturing to mpeg-2 program stream using ffmpeg/avfoundation on Mac?

    16 February 2017, by aerodavo

    I’m trying to capture to a DVD-compliant mpeg-2 file (ffmpeg: -target ntsc-dvd) from the HDMI output of a camcorder, through a Magewell HDMI-to-USB 3.0 capture box, into my Late 2012 15" non-Retina MacBook Pro (quad-core 2.3 GHz, 16 GB RAM, SSD), using ffmpeg/avfoundation.

    I’ve tried everything I can think of or find online, but I’m still getting duplicate and dropped frames, which lead either to audio/video sync issues or to audio dropouts, especially on longer recordings. I need this to be stable for recordings of up to 2.5 hours. This is the Terminal output for a 1.5 hour recording:

    Lapaki:~ Lapaki$ /Users/Lapaki/Desktop/ffmpeg -f avfoundation -video_size 960x540 -pixel_format uyvy422 -framerate ntsc -i "XI:XI" -vf crop=iw-240:ih:120:0 -target ntsc-dvd -aspect 4:3 -q:v 3 -ab 256k /Users/Lapaki/Desktop/FF\ Test/`date +%F`\ `date +%H_%M_%S`.mpg
    ffmpeg version 3.2.3-tessus Copyright (c) 2000-2017 the FFmpeg developers
     built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
     configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libass --enable-libbluray --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzmq --enable-version3 --disable-ffplay --disable-indev=qtkit --disable-indev=x11grab_xcb
     libavutil      55. 34.101 / 55. 34.101
     libavcodec     57. 64.101 / 57. 64.101
     libavformat    57. 56.101 / 57. 56.101
     libavdevice    57.  1.100 / 57.  1.100
     libavfilter     6. 65.100 /  6. 65.100
     libswscale      4.  2.100 /  4.  2.100
     libswresample   2.  3.100 /  2.  3.100
     libpostproc    54.  1.100 / 54.  1.100
    Input #0, avfoundation, from 'XI:XI':
     Duration: N/A, start: 610606.984208, bitrate: N/A
       Stream #0:0: Video: rawvideo (UYVY / 0x59565955), uyvy422, 960x540, 29.97 fps, 29.97 tbr, 1000k tbn, 1000k tbc
       Stream #0:1: Audio: pcm_f32le, 48000 Hz, stereo, flt, 3072 kb/s
    Output #0, dvd, to '/Users/Lapaki/Desktop/FF Test/2017-02-15 17_46_28.mpg':
     Metadata:
       encoder         : Lavf57.56.101
       Stream #0:0: Video: mpeg2video (Main), yuv420p, 720x480 [SAR 8:9 DAR 4:3], q=2-31, 6000 kb/s, 29.97 fps, 90k tbn, 29.97 tbc
       Metadata:
         encoder         : Lavc57.64.101 mpeg2video
       Side data:
         cpb: bitrate max/min/avg: 9000000/0/6000000 buffer size: 1835008 vbv_delay: -1
       Stream #0:1: Audio: ac3, 48000 Hz, stereo, fltp, 256 kb/s
       Metadata:
         encoder         : Lavc57.64.101 ac3
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> mpeg2video (native))
     Stream #0:1 -> #0:1 (pcm_f32le (native) -> ac3 (native))
    Press [q] to stop, [?] for help
    [swscaler @ 0x7fd315892800] Warning: data is not aligned! This can lead to a speedloss
    frame=   20 fps=0.0 q=3.0 size=     298kB time=00:00:00.65 bitrate=3721.4kbits/s
    frame=   35 fps= 35 q=3.0 size=     498kB time=00:00:01.13 bitrate=3591.2kbits/s
    frame=   50 fps= 33 q=3.0 size=     708kB time=00:00:01.64 bitrate=3519.4kbits/s
    frame=   65 fps= 32 q=3.0 size=     920kB time=00:00:02.16
    ...
    frame=162094 fps= 30 q=3.0 size= 1796936kB time=01:30:08.47 bitrate=2721.7kbits/s
    frame=162109 fps= 30 q=3.0 size= 1797142kB time=01:30:08.98 bitrate=2721.8kbits/s
    frame=162110 fps= 30 q=3.0 Lsize= 1797202kB time=01:30:09.01 bitrate=2721.9kbits/s dup=221 drop=0 speed=   1x
    video:1579050kB audio:168069kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.866632%

    I deleted the middle part (denoted by the "..."), which is just more of the same accumulating progress output. By the end there are 221 duplicate frames; this time I didn’t get any dropped frames, but on other runs I often do.

    With this command, the audio seems to stay in pretty good sync, but I get little dropouts every 30 seconds to a minute or so. On this recording, there are dropouts at:

    00:00:43, 00:01:19, 00:01:47, 00:02:17, 00:03:18, ...

    I stopped listening there; they happen at the end too, so I assume similarly spaced dropouts are happening throughout the file.

    Is there some secret ffmpeg setting that ensures no dropped or duplicate frames when capturing from a live source to DVD-compliant mpeg-2 files?

    When I convert from pretty much any type of file to mpeg-2 using -target ntsc-dvd, the speed is something like 10x on this machine, so it seems like it should have no problem keeping up with a live source, right?

    I’ve also tried constant bit rate using -b:v 5000k -minrate 5000k -maxrate 5000k -bufsize 2500k, which also doesn’t prevent dropped/duplicate frames.
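
    Spelled out, that constant bit rate attempt is the same capture command as above, with the CBR flags in place of -q:v 3 (output path shortened here):

    ffmpeg -f avfoundation -video_size 960x540 -pixel_format uyvy422 -framerate ntsc -i "XI:XI" \
        -vf crop=iw-240:ih:120:0 -aspect 4:3 -target ntsc-dvd \
        -b:v 5000k -minrate 5000k -maxrate 5000k -bufsize 2500k -ab 256k out.mpg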

    I’ve tried separating out the audio and video inputs, which doesn’t solve it.

    I’ve tried using -vsync 0 on the video input, which does seem to solve the issue, because the output doesn’t report dups/drops, but the audio/video drift out of sync more and more, so that hasn’t worked either.

    Thanks so much for any help. I’ve been testing and testing and searching and searching for weeks...

  • How to seek mp4 aac audio using Media Source Extensions

    29 August 2018, by Chris

    Please can someone offer me a few pointers on seeking within streamed aac audio in mp4 containers? I’m trying to develop a music download service that sips data via ranged requests, rather than simply linking to an mp4 file as an <audio> element’s src (which would buffer the whole file as quickly as possible, and so be rather wasteful and expensive).

    So far I’ve managed to successfully append sequential audio range buffers to the SourceBuffer object using partial/ranged requests, attached to my suitably mime-typed MediaSource object. But as soon as I try to seek, the wheels come off and I receive a ’CHUNK_DEMUXER_ERROR_APPEND_FAILED’ error with the specific issue: ’stream parsing failed’.

    I’ve prepared my mp4 files by encoding them with ffmpeg (via the fluent-ffmpeg module), rewriting the movie header box at the start of the file (via the -movflags faststart setting) so that the duration can be parsed. I then fragment the file with mp4fragment (part of the Bento4 tools) with the default settings, and check that the structure of the file matches the format specified by ISO BMFF, with pairs of movie fragments and data boxes (moof/mdat) describing the audio stream. Given that the source buffer has no problem playing from the beginning, with contiguous subsequent ranges, the format of the mp4 file appears to be acceptable.
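
    Concretely, that preparation pipeline comes down to something like the following (a sketch with hypothetical file names; the fluent-ffmpeg calls reduce to the same flags):

    ffmpeg -i input.wav -c:a aac -b:a 128k -movflags faststart audio.mp4
    mp4fragment audio.mp4 audio-fragmented.mp4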

    As an aside, I’ve tried fragmenting the file completely in ffmpeg/fluent ffmpeg (using the ’-movflags empty_moov+default_base_moof’ options), but while this works, it also removes the duration from the moov as you’d expect, so the file gets larger during playback as more fragments are fetched and appended. If I set the file duration manually, I still have the issue of not being able to seek to unbuffered audio, so I only seem to be making life more difficult trying to fragment the file solely in ffmpeg.
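
    For reference, that all-ffmpeg fragmenting variant reduces to something like (again with hypothetical file names):

    ffmpeg -i input.wav -c:a aac -b:a 128k -movflags empty_moov+default_base_moof fragmented.mp4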

    So how should I go about seeking within the stream? I gather that seeking effectively ’needle-drops’ at an arbitrary point, so the source buffer might struggle to parse the data out of context, but I imagined it would skip to the next available fragment in the range I fetch. (I calculate that range from the seek bar: the percentage of the bar width sets player.currentTime, which I then convert to a byte range using the 128 kbps CBR figure to map seconds to bytes, and send as a 206 partial range request.)
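
    As a sketch of that calculation (hypothetical names, assuming the 128 kbps CBR figure above):

    const BYTES_PER_SECOND = 128000 / 8; // 128 kbps CBR
    async function fetchRangeForTime(url, startSeconds, spanSeconds) {
        const start = Math.floor(startSeconds * BYTES_PER_SECOND);
        const end = start + Math.floor(spanSeconds * BYTES_PER_SECOND) - 1;
        // Ranged request; a cooperating server answers 206 Partial Content
        const response = await fetch(url, { headers: { Range: 'bytes=' + start + '-' + end } });
        return new Uint8Array(await response.arrayBuffer());
    }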

    I’ve seen mention of buffer offsets, but I don’t understand how they apply. Most of the dev examples I’ve seen focus on whole files or segmented video, rather than on seeking within a single fragmented audio file. Do I need to somehow retain a portion of the data from the moov box when seeking, so that the source buffer can parse it? In the trun box I have a data offset that varies between two values throughout the file, 444 and 448, depending on whether the sample count is 86 or 87. I’m not sure why it’s not consistent.
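
    My reading of this (an assumption, not something from the boxes below): after a discontinuous seek the SourceBuffer’s segment parser may be left mid-box, so abort() resets it, and re-appending the initialization segment (ftyp+moov) before the out-of-order moof/mdat gives the demuxer its context back; the fetched range also needs to start exactly on a moof boundary rather than at a raw CBR-derived byte offset. A sketch with hypothetical names:

    // initSegment holds the ftyp+moov bytes; fragmentBytes must start
    // exactly at a moof boundary for the parser to accept it.
    function seekAppend(sourceBuffer, initSegment, fragmentBytes) {
        sourceBuffer.abort(); // reset the segment parser after the jump
        sourceBuffer.appendBuffer(initSegment);
        sourceBuffer.addEventListener('updateend', function onInit() {
            sourceBuffer.removeEventListener('updateend', onInit);
            sourceBuffer.appendBuffer(fragmentBytes);
        });
    }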

    Here’s what the ftyp and moov boxes look like in my audio file:

    [ftyp] size=8+24
     major_brand = isom
     minor_version = 200
     compatible_brand = isom
     compatible_brand = iso2
     compatible_brand = mp41
     compatible_brand = iso5
    [moov] size=8+620
     [mvhd] size=12+96
       timescale = 1000
       duration = 350047
       duration(ms) = 350047
     [trak] size=8+448
       [tkhd] size=12+80, flags=7
         enabled = 1
         id = 1
         duration = 350047
         width = 0.000000
         height = 0.000000
       [edts] size=8+28
         [elst] size=12+16
           entry count = 1
           entry/segment duration = 350000
           entry/media time = 2048
           entry/media rate = 1
       [mdia] size=8+312
         [mdhd] size=12+20
           timescale = 44100
           duration = 0
           duration(ms) = 0
           language = und
         [hdlr] size=12+41
           handler_type = soun
           handler_name = Bento4 Sound Handler
         [minf] size=8+219
           [smhd] size=12+4
             balance = 0
           [dinf] size=8+28
             [dref] size=12+16
               [url ] size=12+0, flags=1
                 location = [local to file]
           [stbl] size=8+159
             [stsd] size=12+79
               entry-count = 1
               [mp4a] size=8+67
                 data_reference_index = 1
                 channel_count = 2
                 sample_size = 16
                 sample_rate = 44100
                 [esds] size=12+27
                   [ESDescriptor] size=2+25
                     es_id = 0
                     stream_priority = 0
                     [DecoderConfig] size=2+17
                       stream_type = 5
                       object_type = 64
                       up_stream = 0
                       buffer_size = 0
                       max_bitrate = 128006
                       avg_bitrate = 128006
                       DecoderSpecificInfo = 12 10
                     [Descriptor:06] size=2+1
             [stts] size=12+4
               entry_count = 0
             [stsc] size=12+4
               entry_count = 0
             [stsz] size=12+8
               sample_size = 0
               sample_count = 0
             [stco] size=12+4
               entry_count = 0
     [mvex] size=8+48
       [mehd] size=12+4
         duration = 350047
       [trex] size=12+20
         track id = 1
         default sample description index = 1
         default sample duration = 0
         default sample size = 0
         default sample flags = 0

    And here’s a typical fragment:

    [moof] size=8+428
     [mfhd] size=12+4
       sequence number = 1
     [traf] size=8+404
       [tfhd] size=12+8, flags=20008
         track ID = 1
         default sample duration = 1024
       [tfdt] size=12+8, version=1
         base media decode time = 0
       [trun] size=12+352, flags=201
         sample count = 86
         data offset = 444
    [mdat] size=8+32653

    Does that all look good? Any pointers for seeking within such a file would be hugely appreciated. Thanks!

  • How do I use native C libraries in Android Studio

    11 March 2015, by Nicholas

    I created a project some years back based on https://ikaruga2.wordpress.com/2011/06/15/video-live-wallpaper-part-1/. It was built in the version of Eclipse provided directly by Google at the time, and worked fine with a copy of the compiled ffmpeg libraries built with my app’s name.

    Now I’m trying to create a new app based on my old app. As Google no longer supports Eclipse I downloaded Android Studio and imported my project. With a few tweaks, I was able to successfully compile the old version of the project. So I modified the name, copied a new set of ".so" files into app\src\main\jniLibs\armeabi (where I assumed they should go) and tried running the application on my phone again with absolutely no other changes.

    The NDK throws no errors. Gradle compiles the project without errors and installs it on my phone. The app appears in my live wallpapers list, and I can click it to bring up the preview. But instead of a video appearing, I receive an error, and LogCat reports:

    02-26 21:50:31.164  18757-18757/? E/AndroidRuntime﹕ FATAL EXCEPTION: main
    java.lang.ExceptionInInitializerError
           at com.nightscapecreations.anim3free.VideoLiveWallpaper.onSharedPreferenceChanged(VideoLiveWallpaper.java:165)
           at com.nightscapecreations.anim3free.VideoLiveWallpaper.onCreate(VideoLiveWallpaper.java:81)
           at android.app.ActivityThread.handleCreateService(ActivityThread.java:2273)
           at android.app.ActivityThread.access$1600(ActivityThread.java:127)
           at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1212)
           at android.os.Handler.dispatchMessage(Handler.java:99)
           at android.os.Looper.loop(Looper.java:137)
           at android.app.ActivityThread.main(ActivityThread.java:4441)
           at java.lang.reflect.Method.invokeNative(Native Method)
           at java.lang.reflect.Method.invoke(Method.java:511)
           at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:823)
           at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:590)
           at dalvik.system.NativeStart.main(Native Method)
    Caused by: java.lang.UnsatisfiedLinkError: Cannot load library: link_image[1936]:   144 could not load needed library '/data/data/com.nightscapecreations.anim1free/lib/libavutil.so' for 'libavcore.so' (load_library[1091]: Library '/data/data/com.nightscapecreations.anim1free/lib/libavutil.so' not found)
           at java.lang.Runtime.loadLibrary(Runtime.java:370)
           at java.lang.System.loadLibrary(System.java:535)
           at com.nightscapecreations.anim3free.NativeCalls.<clinit>(NativeCalls.java:64)
           at com.nightscapecreations.anim3free.VideoLiveWallpaper.onSharedPreferenceChanged(VideoLiveWallpaper.java:165)
           at com.nightscapecreations.anim3free.VideoLiveWallpaper.onCreate(VideoLiveWallpaper.java:81)
           at android.app.ActivityThread.handleCreateService(ActivityThread.java:2273)
           at android.app.ActivityThread.access$1600(ActivityThread.java:127)
           at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1212)
           at android.os.Handler.dispatchMessage(Handler.java:99)
           at android.os.Looper.loop(Looper.java:137)
           at android.app.ActivityThread.main(ActivityThread.java:4441)
           at java.lang.reflect.Method.invokeNative(Native Method)
           at java.lang.reflect.Method.invoke(Method.java:511)

    I’m a novice Android/Java/C++ developer and am not sure what this error means, but Google leads me to believe that my new libraries are not being found. In my Eclipse project I had this set of libraries in "libs\armeabi", and another copy of them in a more complicated folder structure at "jni\ffmpeg-android\build\ffmpeg\armeabi\lib". Android Studio appears to have kept everything the same, other than renaming "libs" to "jniLibs", but I’m hitting a brick wall with this error and am unsure how to proceed.
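
    For what it’s worth, the trace shows libavcore.so asking for libavutil.so under the old package name (anim1free), which suggests the dependency paths were baked in when the libraries were built. A commonly suggested workaround, offered here as an assumption rather than a verified fix, is to load each dependency with System.loadLibrary before the libraries that need it, so the linker resolves them from the copies already in memory:

       static {
           // Load leaf dependencies first so libavcore's reference to
           // libavutil resolves against the already-loaded copy instead
           // of the absolute path from the old package (assumed fix).
           System.loadLibrary("avutil");
           System.loadLibrary("avcore");
           System.loadLibrary("avcodec");
           System.loadLibrary("avformat");
           System.loadLibrary("avfilter");
           System.loadLibrary("swscale");
           System.loadLibrary("video");
       }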

    How can I compile this new app with the new name using Android Studio?

    In case it helps, here is my Android.mk file:

       LOCAL_PATH := $(call my-dir)

       include $(CLEAR_VARS)
       MY_LIB_PATH := ffmpeg-android/build/ffmpeg/armeabi/lib
       LOCAL_MODULE := bambuser-libavcore
       LOCAL_SRC_FILES := $(MY_LIB_PATH)/libavcore.so
       include $(PREBUILT_SHARED_LIBRARY)

       include $(CLEAR_VARS)
       LOCAL_MODULE := bambuser-libavformat
       LOCAL_SRC_FILES := $(MY_LIB_PATH)/libavformat.so
       include $(PREBUILT_SHARED_LIBRARY)

       include $(CLEAR_VARS)
       LOCAL_MODULE := bambuser-libavcodec
       LOCAL_SRC_FILES := $(MY_LIB_PATH)/libavcodec.so
       include $(PREBUILT_SHARED_LIBRARY)

       include $(CLEAR_VARS)
       LOCAL_MODULE := bambuser-libavfilter
       LOCAL_SRC_FILES := $(MY_LIB_PATH)/libavfilter.so
       include $(PREBUILT_SHARED_LIBRARY)

       include $(CLEAR_VARS)
       LOCAL_MODULE := bambuser-libavutil
       LOCAL_SRC_FILES := $(MY_LIB_PATH)/libavutil.so
       include $(PREBUILT_SHARED_LIBRARY)

       include $(CLEAR_VARS)
       LOCAL_MODULE := bambuser-libswscale
       LOCAL_SRC_FILES := $(MY_LIB_PATH)/libswscale.so
       include $(PREBUILT_SHARED_LIBRARY)

       #local_PATH := $(call my-dir)

       include $(CLEAR_VARS)

       LOCAL_CFLAGS := -DANDROID_NDK \
                       -DDISABLE_IMPORTGL

       LOCAL_MODULE    := video
       LOCAL_SRC_FILES := video.c

       LOCAL_C_INCLUDES := \
           $(LOCAL_PATH)/include \
           $(LOCAL_PATH)/ffmpeg-android/ffmpeg \
           $(LOCAL_PATH)/freetype/include/freetype2 \
           $(LOCAL_PATH)/freetype/include \
           $(LOCAL_PATH)/ftgl/src \
           $(LOCAL_PATH)/ftgl
       LOCAL_LDLIBS := -L$(NDK_PLATFORMS_ROOT)/$(TARGET_PLATFORM)/arch-arm/usr/lib -L$(LOCAL_PATH) -L$(LOCAL_PATH)/ffmpeg-android/build/ffmpeg/armeabi/lib/ -lGLESv1_CM -ldl -lavformat -lavcodec -lavfilter -lavutil -lswscale -llog -lz -lm

       include $(BUILD_SHARED_LIBRARY)

    And here is my NativeCalls.java:

       package com.nightscapecreations.anim3free;

       public class NativeCalls {
           //ffmpeg
           public static native void initVideo();
           public static native void loadVideo(String fileName); //
           public static native void prepareStorageFrame();
           public static native void getFrame(); //
           public static native void freeConversionStorage();
           public static native void closeVideo();//
           public static native void freeVideo();//
           //opengl
           public static native void initPreOpenGL(); //
           public static native void initOpenGL(); //
           public static native void drawFrame(); //
           public static native void closeOpenGL(); //
           public static native void closePostOpenGL();//
           //wallpaper
           public static native void updateVideoPosition();
           public static native void setSpanVideo(boolean b);
           //getters
           public static native int getVideoHeight();
           public static native int getVideoWidth();
           //setters
           public static native void setWallVideoDimensions(int w,int h);
           public static native void setWallDimensions(int w,int h);
           public static native void setScreenPadding(int w,int h);
           public static native void setVideoMargins(int w,int h);
           public static native void setDrawDimensions(int drawWidth,int drawHeight);
           public static native void setOffsets(int x,int y);
           public static native void setSteps(int xs,int ys);
           public static native void setScreenDimensions(int w, int h);
           public static native void setTextureDimensions(int tx,
                                  int ty );
           public static native void setOrientation(boolean b);
           public static native void setPreviewMode(boolean b);
           public static native void setTonality(int t);
           public static native void toggleGetFrame(boolean b);
           //fps
           public static native void setLoopVideo(boolean b);

           static {
           System.loadLibrary("avcore");
           System.loadLibrary("avformat");
           System.loadLibrary("avcodec");
           //System.loadLibrary("avdevice");
           System.loadLibrary("avfilter");
           System.loadLibrary("avutil");
           System.loadLibrary("swscale");
           System.loadLibrary("video");
           }

       }

    EDIT

    This is the first part of my video.c file:

       #include <GLES/gl.h>
       #include <GLES/glext.h>

       #include <GLES2/gl2.h>
       #include <GLES2/gl2ext.h>

       #include
       #include

       #include <libavcodec/avcodec.h>
       #include <libavformat/avformat.h>
       #include <libswscale/swscale.h>

       #include
       #include
       #include
       #include <android/log.h>

       //#include <FTGL/ftgl.h>

       //ffmpeg video variables
       int      initializedVideo=0;
       int      initializedFrame=0;
       AVFormatContext *pFormatCtx=NULL;
       int             videoStream;
       AVCodecContext  *pCodecCtx=NULL;
       AVCodec         *pCodec=NULL;
       AVFrame         *pFrame=NULL;
       AVPacket        packet;
       int             frameFinished;
       float           aspect_ratio;

       //ffmpeg video conversion variables
       AVFrame         *pFrameConverted=NULL;
       int             numBytes;
       uint8_t         *bufferConverted=NULL;

       //opengl
       int textureFormat=PIX_FMT_RGBA; // PIX_FMT_RGBA   PIX_FMT_RGB24
       int GL_colorFormat=GL_RGBA; // Must match the colorspace specified for textureFormat
       int textureWidth=256;
       int textureHeight=256;
       int nTextureHeight=-256;
       int textureL=0, textureR=0, textureW=0;
       int frameTonality;

       //GLuint textureConverted=0;
       GLuint texturesConverted[2] = { 0,1 };
       GLuint dummyTex = 2;
       static int len=0;


       static const char* BWVertexSrc =
                "attribute vec4 InVertex;\n"
                "attribute vec2 InTexCoord0;\n"
                "attribute vec2 InTexCoord1;\n"
                "uniform mat4 ProjectionModelviewMatrix;\n"
                "varying vec2 TexCoord0;\n"
                "varying vec2 TexCoord1;\n"

                "void main()\n"
                "{\n"
                "  gl_Position = ProjectionModelviewMatrix * InVertex;\n"
                "  TexCoord0 = InTexCoord0;\n"
                "  TexCoord1 = InTexCoord1;\n"
                "}\n";
       static const char* BWFragmentSrc  =

                "#version 110\n"
                "uniform sampler2D Texture0;\n"
                "uniform sampler2D Texture1;\n"

                "varying vec2 TexCoord0;\n"
                "varying vec2 TexCoord1;\n"

                "void main()\n"
                "{\n"
               "   vec3 color = texture2D(m_Texture, texCoord).rgb;\n"
               "   float gray = (color.r + color.g + color.b) / 3.0;\n"
               "   vec3 grayscale = vec3(gray);\n"

               "   gl_FragColor = vec4(grayscale, 1.0);\n"
                "}";
       static GLuint shaderProgram;


       //// Create a pixmap font from a TrueType file.
       //FTGLPixmapFont font("/home/user/Arial.ttf");
       //// Set the font size and render a small text.
       //font.FaceSize(72);
       //font.Render("Hello World!");

       //screen dimensions
       int screenWidth = 50;
       int screenHeight= 50;
       int screenL=0, screenR=0, screenW=0;
       int dPaddingX=0,dPaddingY=0;
       int drawWidth=50,drawHeight=50;

       //wallpaper
       int wallWidth = 50;
       int wallHeight = 50;
       int xOffSet, yOffSet;
       int xStep, yStep;
       jboolean spanVideo = JNI_TRUE;

       //video dimensions
       int wallVideoWidth = 0;
       int wallVideoHeight = 0;
       int marginX, marginY;
       jboolean isScreenPortrait = JNI_TRUE;
       jboolean isPreview = JNI_TRUE;
       jboolean loopVideo = JNI_TRUE;
       jboolean isGetFrame = JNI_TRUE;

       //file
       const char * szFileName;

       #define max( a, b ) ( ((a) > (b)) ? (a) : (b) )
       #define min( a, b ) ( ((a) < (b)) ? (a) : (b) )

       //test variables
       #define RGBA8(r, g, b)  (((r) << (24)) | ((g) << (16)) | ((b) << (8)) | 255)
       int sPixelsInited=JNI_FALSE;
       uint32_t *s_pixels=NULL;

       int s_pixels_size() {
         return (sizeof(uint32_t) * textureWidth * textureHeight * 5);
       }

       void render_pixels1(uint32_t *pixels, uint32_t c) {
           int x, y;
           /* fill the entire texture with a solid color */
           for (y = 0; y < textureHeight; y++) {
               for (x = 0; x < textureWidth; x++) {
                   int idx = x + y * textureWidth;
                   pixels[idx++] = RGBA8(255, 255, 0);
               }
           }
       }

       void render_pixels2(uint32_t *pixels, uint32_t c) {
           int x, y;
           /* fill the entire texture with a solid color */
           for (y = 0; y < textureHeight; y++) {
               for (x = 0; x < textureWidth; x++) {
                   int idx = x + y * textureWidth;
                   pixels[idx++] = RGBA8(0, 0, 255);
               }
           }
       }

       void Java_com_nightscapecreations_anim3free_NativeCalls_initVideo (JNIEnv * env, jobject this) {
           initializedVideo = 0;
           initializedFrame = 0;
       }

       /* list of things that get loaded: */
       /* buffer */
       /* pFrameConverted */
       /* pFrame */
       /* pCodecCtx */
       /* pFormatCtx */
       void Java_com_nightscapecreations_anim3free_NativeCalls_loadVideo (JNIEnv * env, jobject this, jstring fileName)  {
           jboolean isCopy;
           szFileName = (*env)->GetStringUTFChars(env, fileName, &isCopy);
           //debug
           __android_log_print(ANDROID_LOG_DEBUG, "NDK: ", "NDK:LC: [%s]", szFileName);
           // Register all formats and codecs
           av_register_all();
           // Open video file
           if(av_open_input_file(&pFormatCtx, szFileName, NULL, 0, NULL)!=0) {
           __android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Couldn't open file");
           return;
           }
           __android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Succesfully loaded file");
           // Retrieve stream information */
           if(av_find_stream_info(pFormatCtx)<0) {
           __android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Couldn't find stream information");
           return;
           }
           __android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Found stream info");
           // Find the first video stream
           videoStream=-1;
           int i;
           for(i=0; i<pFormatCtx->nb_streams; i++)
               if(pFormatCtx->streams[i]->codec->codec_type==CODEC_TYPE_VIDEO) {
                   videoStream=i;
                   break;
               }
           if(videoStream==-1) {
               __android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Didn't find a video stream");
               return;
           }
           __android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Found video stream");
           // Get a pointer to the codec context for the video stream
           pCodecCtx=pFormatCtx->streams[videoStream]->codec;
           // Find the decoder for the video stream
           pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
           if(pCodec==NULL) {
               __android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Unsupported codec");
               return;
           }
           // Open codec
           if(avcodec_open(pCodecCtx, pCodec)<0) {
               __android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Could not open codec");
               return;
           }
           // Allocate video frame (decoded pre-conversion frame)
           pFrame=avcodec_alloc_frame();
           // keep track of initialization
           initializedVideo = 1;
           __android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Finished loading video");
       }

       //for this to work, you need to set the scaled video dimensions first
       void Java_com_nightscapecreations_anim3free_NativeCalls_prepareStorageFrame (JNIEnv * env, jobject this)  {
           // Allocate an AVFrame structure
           pFrameConverted=avcodec_alloc_frame();
           // Determine required buffer size and allocate buffer
           numBytes=avpicture_get_size(textureFormat, textureWidth, textureHeight);
           bufferConverted=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
           if ( pFrameConverted == NULL || bufferConverted == NULL )
               __android_log_print(ANDROID_LOG_DEBUG, "prepareStorage>>>>", "Out of memory");
           // Assign appropriate parts of buffer to image planes in pFrameRGB
           // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
           // of AVPicture
           avpicture_fill((AVPicture *)pFrameConverted, bufferConverted, textureFormat, textureWidth, textureHeight);
           __android_log_print(ANDROID_LOG_DEBUG, "prepareStorage>>>>", "Created frame");
           __android_log_print(ANDROID_LOG_DEBUG, "prepareStorage>>>>", "texture dimensions: %dx%d", textureWidth, textureHeight);
           initializedFrame = 1;
       }

       jint Java_com_nightscapecreations_anim3free_NativeCalls_getVideoWidth (JNIEnv * env, jobject this)  {
           return pCodecCtx->width;
       }

       jint Java_com_nightscapecreations_anim3free_NativeCalls_getVideoHeight (JNIEnv * env, jobject this)  {
           return pCodecCtx->height;
       }

       void Java_com_nightscapecreations_anim3free_NativeCalls_getFrame (JNIEnv * env, jobject this)  {
           // keep reading packets until we hit the end or find a video packet
           while(av_read_frame(pFormatCtx, &packet)>=0) {
               static struct SwsContext *img_convert_ctx;
               // Is this a packet from the video stream?
               if(packet.stream_index==videoStream) {
                   // Decode video frame
                   /* __android_log_print(ANDROID_LOG_DEBUG,  */
                   /*            "video.c",  */
                   /*            "getFrame: Try to decode frame" */
                   /*            ); */
                   avcodec_decode_video(pCodecCtx, pFrame, &frameFinished, packet.data, packet.size);
                   // Did we get a video frame?
                   if(frameFinished) {
                       if(img_convert_ctx == NULL) {
                           /* get/set the scaling context */
                           int w = pCodecCtx->width;
                           int h = pCodecCtx->height;
                           img_convert_ctx = sws_getContext(w, h, pCodecCtx->pix_fmt, textureWidth,textureHeight, textureFormat, SWS_FAST_BILINEAR, NULL, NULL, NULL);
                           if(img_convert_ctx == NULL) {
                               return;
                           }
                       }
                       /* if img convert null */
                       /* finally scale the image */
                       /* __android_log_print(ANDROID_LOG_DEBUG,  */
                       /*          "video.c",  */
                       /*          "getFrame: Try to scale the image" */
                       /*          ); */

                       //pFrameConverted = pFrame;
                       sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameConverted->data, pFrameConverted->linesize);
                       //av_picture_crop(pFrameConverted->data, pFrame->data, 1, pCodecCtx->height, pCodecCtx->width);
                       //av_picture_crop();
                       //avfilter_vf_crop();

                       /* do something with pFrameConverted */
                       /* ... see drawFrame() */
                       /* We found a video frame, did something with it, now free up
                          packet and return */
                       av_free_packet(&packet);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.age: %d", pFrame->age);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.buffer_hints: %d", pFrame->buffer_hints);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.display_picture_number: %d", pFrame->display_picture_number);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.hwaccel_picture_private: %d", pFrame->hwaccel_picture_private);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.key_frame: %d", pFrame->key_frame);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.palette_has_changed: %d", pFrame->palette_has_changed);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.pict_type: %d", pFrame->pict_type);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.qscale_type: %d", pFrame->qscale_type);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.age: %d", pFrameConverted->age);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.buffer_hints: %d", pFrameConverted->buffer_hints);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.display_picture_number: %d", pFrameConverted->display_picture_number);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.hwaccel_picture_private: %d", pFrameConverted->hwaccel_picture_private);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.key_frame: %d", pFrameConverted->key_frame);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.palette_has_changed: %d", pFrameConverted->palette_has_changed);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.pict_type: %d", pFrameConverted->pict_type);
       //              __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.qscale_type: %d", pFrameConverted->qscale_type);
                       return;
                   } /* if frame finished */
               } /* if packet video stream */
               // Free the packet that was allocated by av_read_frame
               av_free_packet(&packet);
           } /* while */
           //reload video when you get to the end
           av_seek_frame(pFormatCtx,videoStream,0,AVSEEK_FLAG_ANY);
       }

       void Java_com_nightscapecreations_anim3free_NativeCalls_setLoopVideo (JNIEnv * env, jobject this, jboolean b) {
           loopVideo = b;
       }

       void Java_com_nightscapecreations_anim3free_NativeCalls_closeVideo (JNIEnv * env, jobject this) {
           if ( initializedFrame == 1 ) {
               // Free the converted image
               av_free(bufferConverted);
               av_free(pFrameConverted);
               initializedFrame = 0;
               __android_log_print(ANDROID_LOG_DEBUG, "closeVideo>>>>", "Freed converted image");
           }
           if ( initializedVideo == 1 ) {
               /* // Free the YUV frame */
               av_free(pFrame);
               /* // Close the codec */
               avcodec_close(pCodecCtx);
               // Close the video file
               av_close_input_file(pFormatCtx);
               initializedVideo = 0;
               __android_log_print(ANDROID_LOG_DEBUG, "closeVideo>>>>", "Freed video structures");
           }
       }

       void Java_com_nightscapecreations_anim3free_NativeCalls_freeVideo (JNIEnv * env, jobject this) {
           if ( initializedVideo == 1 ) {
               /* // Free the YUV frame */
               av_free(pFrame);
               /* // Close the codec */
               avcodec_close(pCodecCtx);
               // Close the video file
               av_close_input_file(pFormatCtx);
               __android_log_print(ANDROID_LOG_DEBUG, "closeVideo>>>>", "Freed video structures");
               initializedVideo = 0;
           }
       }

       void Java_com_nightscapecreations_anim3free_NativeCalls_freeConversionStorage (JNIEnv * env, jobject this) {
           if ( initializedFrame == 1 ) {
               // Free the converted image
               av_free(bufferConverted);
               av_freep(&pFrameConverted);
               initializedFrame = 0;
           }
       }

       /*--- END OF VIDEO ----*/

       /* disable these capabilities. */
       static GLuint s_disable_options[] = {
           GL_FOG,
           GL_LIGHTING,
           GL_CULL_FACE,
           GL_ALPHA_TEST,
           GL_BLEND,
           GL_COLOR_LOGIC_OP,
           GL_DITHER,
           GL_STENCIL_TEST,
           GL_DEPTH_TEST,
           GL_COLOR_MATERIAL,
           0
       };

       // For stuff that opengl needs to work with,
       // like the bitmap containing the texture
       void Java_com_nightscapecreations_anim3free_NativeCalls_initPreOpenGL (JNIEnv * env, jobject this)  {

       }
       ...