
Media (91)
-
Valkaama DVD Cover Outside
4 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011
Updated: October 2011
Language: English
Type: Image
-
1,000,000
27 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Demon Seed
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Four of Us are Dying
26 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (57)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP. You can of course add your own using the form at the bottom of the page.
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet? Yes, provided your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP instance to find out.
-
The plugin: Podcasts
14 July 2010
The problem of podcasting is, once again, a problem that reveals the standardisation of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly geared towards iTunes, whose spec is here; and the "Media RSS Module" format, which is more "free", notably supported by Yahoo and the Miro software.
Types of files supported in feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg, .m4a audio/x-m4a, .mp4 (...)
On other sites (11519)
-
no c compiler found libx264 build [closed]
24 August 2012, by kerim yucel
I have been trying to build ffmpeg for Android, and I managed to do so by following roman10's tutorial.
However, I needed x264, and I enabled libx264 in the configuration, but it wasn't able to find the library. A quick internet search led me to the answer that I have to build x264 separately and include it in the ffmpeg config by editing the ExternalLibs flags.
Currently I am using this script to build x264. Everything is fine except that the error "No working C compiler found" pops up. I have updated and installed build-essential, and my gcc version is up to date.
Any help will be appreciated. Thanks a lot.
P.S. I am using NDK r4 just to keep everything compatible with the tutorial linked above. I am running Ubuntu in VirtualBox on Windows 7.
Edit: I have been trying this tutorial as well; I am getting the same error, along with an "unknown option" error for each configure option.
Another edit:
I have tried this check and obtained the following:

echo 'int main() { return 0; }' > main.c && arm-eabi-gcc main.c || echo $?

/home/mehmet/Android_NDK_r4/build/prebuilt/linux-x86/arm-eabi-4.4.0/bin/../lib/gcc/arm-eabi/4.4.0/../../../../arm-eabi/bin/ld: crt0.o: No such file: No such file or directory
collect2: ld returned 1 exit status
1

I have checked the flags as well; they seem OK, or at least I can't see the faulty line. Below you may find the build script I am using.
export ARM_ROOT=/home/mehmet/Android_NDK_r4
export ARM_INC=$ARM_ROOT/build/platforms/android-8/arch-arm/usr/include/
export ARM_LIB=$ARM_ROOT/build/platforms/android-8/arch-arm/usr/lib/
export ARM_TOOL=$ARM_ROOT/build/prebuilt/linux-x86/arm-eabi-4.4.0
export ARM_LIBO=$ARM_TOOL/lib/gcc/arm-eabi/4.4.0
export PATH=$ARM_TOOL/bin:$PATH
export PATH=$ARM_TOOL/arm-eabi/bin:$PATH
export ARM_PRE=arm-eabi
./configure --prefix=/sdcard/arm_and \
--disable-gpac \
--extra-cflags=" -I$ARM_INC -fPIC -DANDROID -fpic -mthumb-interwork -ffunction-sections -funwind-tables -fstack-prote
ctor -fno-short-enums -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -Wno-psabi -march=armv5te -mtu
ne=xscale -msoft-float -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -DANDROID -Wa,--noexecstack -
MMD -MP " \
--extra-ldflags=" -nostdlib -Bdynamic -Wl,--no-undefined -Wl,-z,noexecstack -Wl,-z,nocopyreloc -Wl,-soname,/system
/lib/libz.so -Wl,-rpath-link=$ARM_LIB,-dynamic-linker=/system/bin/linker -L$ARM_LIB -nostdlib $ARM_LIB/crtbegin_dynamic.o $ARM_LIB/crtend_android.o -lc -lm -ldl -lgcc " \
--cross-prefix=${ARM_PRE}- \
--disable-asm\
--host=arm-linux

Third edit:
Config.log gives the following.
x264 configure script
Command line options: "--prefix=/sdcard/arm_and" "--disable-gpac" "--extra-cflags=" "-I/home/mehmet/Android_NDK_r4/build/platforms/android-8/arch-arm/usr/include/" "-fPIC" "-DANDROID" "-fpic" "-mthumb-interwork" "-ffunction-sections" "-funwind-tables" "-fstack-prote" "ctor" "-fno-short-enums" "-D__ARM_ARCH_5__" "-D__ARM_ARCH_5T__" "-D__ARM_ARCH_5E__" "-D__ARM_ARCH_5TE__" "-Wno-psabi" "-march=armv5te" "-mtu" "ne=xscale" "-msoft-float" "-mthumb" "-Os" "-fomit-frame-pointer" "-fno-strict-aliasing" "-finline-limit=64" "-DANDROID" "-Wa,--noexecstack" "-" "MMD" "-MP" "--extra-ldflags=" "-nostdlib" "-Bdynamic" "-Wl,--no-undefined" "-Wl,-z,noexecstack" "-Wl,-z,nocopyreloc" "-Wl,-soname,/system" "/lib/libz.so" "-Wl,-rpath-link=/home/mehmet/Android_NDK_r4/build/platforms/android-8/arch-arm/usr/lib/,-dynamic-linker=/system/bin/linker" "-L/home/mehmet/Android_NDK_r4/build/platforms/android-8/arch-arm/usr/lib/" "-nostdlib" "/home/mehmet/Android_NDK_r4/build/platforms/android-8/arch-arm/usr/lib//crtbegin_dynamic.o" "/home/mehmet/Android_NDK_r4/build/platforms/android-8/arch-arm/usr/lib//crtend_android.o" "-lc" "-lm" "-ldl" "-lgcc" "--cross-prefix=arm-eabi-" "--disable-asm" "--host=arm-linux"
checking whether arm-eabi-gcc works... no
Failed commandline was:
--------------------------------------------------
arm-eabi-gcc conftest.c -Wall -I. -I$(SRCPATH) -I/home/mehmet/Android_NDK_r4/build/platforms/android-8/arch-arm/usr/include/ -fPIC -DANDROID -fpic -mthumb-interwork -ffunction-sections -funwind-tables -fstack-prote
ctor -fno-short-enums -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -Wno-psabi -march=armv5te -mtu
ne=xscale -msoft-float -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -DANDROID -Wa,--noexecstack -
MMD -MP -nostdlib -Bdynamic -Wl,--no-undefined -Wl,-z,noexecstack -Wl,-z,nocopyreloc -Wl,-soname,/system
/lib/libz.so -Wl,-rpath-link=/home/mehmet/Android_NDK_r4/build/platforms/android-8/arch-arm/usr/lib/,-dynamic-linker=/system/bin/linker -L/home/mehmet/Android_NDK_r4/build/platforms/android-8/arch-arm/usr/lib/ -nostdlib /home/mehmet/Android_NDK_r4/build/platforms/android-8/arch-arm/usr/lib//crtbegin_dynamic.o /home/mehmet/Android_NDK_r4/build/platforms/android-8/arch-arm/usr/lib//crtend_android.o -lc -lm -ldl -lgcc -lm -o conftest
arm-eabi-gcc: ctor: No such file or directory
arm-eabi-gcc: ne=xscale: No such file or directory
arm-eabi-gcc: MMD: No such file or directory
arm-eabi-gcc: /lib/libz.so: No such file or directory
cc1: error: unrecognized command line option "-mtu"
cc1: error: unrecognized command line option "-fstack-prote"
cc1: error: to generate dependencies you must specify either -M or -MM
arm-eabi-gcc: -E or -x required when input is from standard input
--------------------------------------------------
Failed program was:
--------------------------------------------------
int main () { return 0; }
--------------------------------------------------
DIED: No working C compiler found.

Last edit:
Problem solved. It was just a typo problem: as the config.log shows, several long options had been wrapped mid-flag, so fragments such as "ctor", "ne=xscale" and "MMD" were passed as separate arguments. It works well now.
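Presumably the corrected invocation looked something like the sketch below, with each option kept on one logical line so that "-fstack-protector", "-mtune=xscale", "-MMD" and "/system/lib/libz.so" arrive as single arguments (reconstructed from the config.log output above, not a verbatim copy of the final script):

# Corrected configure call: flags rejoined onto single logical lines.
./configure --prefix=/sdcard/arm_and \
  --disable-gpac \
  --extra-cflags="-I$ARM_INC -fPIC -DANDROID -fpic -mthumb-interwork -ffunction-sections -funwind-tables -fstack-protector -fno-short-enums -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -Wno-psabi -march=armv5te -mtune=xscale -msoft-float -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -DANDROID -Wa,--noexecstack -MMD -MP" \
  --extra-ldflags="-nostdlib -Bdynamic -Wl,--no-undefined -Wl,-z,noexecstack -Wl,-z,nocopyreloc -Wl,-soname,/system/lib/libz.so -Wl,-rpath-link=$ARM_LIB,-dynamic-linker=/system/bin/linker -L$ARM_LIB -nostdlib $ARM_LIB/crtbegin_dynamic.o $ARM_LIB/crtend_android.o -lc -lm -ldl -lgcc" \
  --cross-prefix=${ARM_PRE}- \
  --disable-asm \
  --host=arm-linux
-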
How do I use native C libraries in Android Studio
11 March 2015, by Nicholas
I created a project some years back based on https://ikaruga2.wordpress.com/2011/06/15/video-live-wallpaper-part-1/. My project was built in the version of Eclipse provided directly by Google at the time and worked fine with a copy of the compiled ffmpeg libraries created with my app name.
Now I'm trying to create a new app based on my old app. As Google no longer supports Eclipse, I downloaded Android Studio and imported my project. With a few tweaks, I was able to successfully compile the old version of the project. So I modified the name, copied a new set of ".so" files into app\src\main\jniLibs\armeabi (where I assumed they should go) and tried running the application on my phone again with absolutely no other changes.
The NDK throws no errors. Gradle compiles the file without errors and installs it on my phone. The app appears in my live wallpapers list and I can click it to bring up the preview. But instead of a video appearing I receive an error, and LogCat reports:
02-26 21:50:31.164 18757-18757/? E/AndroidRuntime﹕ FATAL EXCEPTION: main
java.lang.ExceptionInInitializerError
at com.nightscapecreations.anim3free.VideoLiveWallpaper.onSharedPreferenceChanged(VideoLiveWallpaper.java:165)
at com.nightscapecreations.anim3free.VideoLiveWallpaper.onCreate(VideoLiveWallpaper.java:81)
at android.app.ActivityThread.handleCreateService(ActivityThread.java:2273)
at android.app.ActivityThread.access$1600(ActivityThread.java:127)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1212)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:137)
at android.app.ActivityThread.main(ActivityThread.java:4441)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:511)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:823)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:590)
at dalvik.system.NativeStart.main(Native Method)
Caused by: java.lang.UnsatisfiedLinkError: Cannot load library: link_image[1936]: 144 could not load needed library '/data/data/com.nightscapecreations.anim1free/lib/libavutil.so' for 'libavcore.so' (load_library[1091]: Library '/data/data/com.nightscapecreations.anim1free/lib/libavutil.so' not found)
at java.lang.Runtime.loadLibrary(Runtime.java:370)
at java.lang.System.loadLibrary(System.java:535)
at com.nightscapecreations.anim3free.NativeCalls.<clinit>(NativeCalls.java:64)
at com.nightscapecreations.anim3free.VideoLiveWallpaper.onSharedPreferenceChanged(VideoLiveWallpaper.java:165)
at com.nightscapecreations.anim3free.VideoLiveWallpaper.onCreate(VideoLiveWallpaper.java:81)
at android.app.ActivityThread.handleCreateService(ActivityThread.java:2273)
at android.app.ActivityThread.access$1600(ActivityThread.java:127)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1212)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:137)
at android.app.ActivityThread.main(ActivityThread.java:4441)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:511)
I'm a novice Android/Java/C++ developer and am not sure what this error means, but Google leads me to believe that my new libraries are not being found. In my Eclipse project I had this set of libraries in "libs\armeabi", and another copy of them in a more complicated folder structure at "jni\ffmpeg-android\build\ffmpeg\armeabi\lib". Android Studio appears to have kept everything the same, other than renaming "libs" to "jniLibs", but I'm hitting a brick wall with this error and am unsure how to proceed.
How can I compile this new app with the new name using Android Studio?
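One way to narrow this down (a diagnostic sketch, assuming the standard readelf tool from binutils on the build machine — it can read ARM ELF files on any host) is to inspect the dependency entries baked into the prebuilt libraries:

# Show the dynamic-section entries of the prebuilt library;
# NEEDED/SONAME/RPATH reveal which names and paths the old build hardcoded.
readelf -d app/src/main/jniLibs/armeabi/libavcore.so | grep -E 'NEEDED|SONAME|RPATH'

If an absolute path under /data/data/com.nightscapecreations.anim1free/... shows up, the libraries themselves still reference the old package name and would need to be rebuilt for the new one.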
In case it helps, here is my Android.mk file:
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
MY_LIB_PATH := ffmpeg-android/build/ffmpeg/armeabi/lib
LOCAL_MODULE := bambuser-libavcore
LOCAL_SRC_FILES := $(MY_LIB_PATH)/libavcore.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := bambuser-libavformat
LOCAL_SRC_FILES := $(MY_LIB_PATH)/libavformat.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := bambuser-libavcodec
LOCAL_SRC_FILES := $(MY_LIB_PATH)/libavcodec.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := bambuser-libavfilter
LOCAL_SRC_FILES := $(MY_LIB_PATH)/libavfilter.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := bambuser-libavutil
LOCAL_SRC_FILES := $(MY_LIB_PATH)/libavutil.so
include $(PREBUILT_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := bambuser-libswscale
LOCAL_SRC_FILES := $(MY_LIB_PATH)/libswscale.so
include $(PREBUILT_SHARED_LIBRARY)
#local_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_CFLAGS := -DANDROID_NDK \
-DDISABLE_IMPORTGL
LOCAL_MODULE := video
LOCAL_SRC_FILES := video.c
LOCAL_C_INCLUDES := \
$(LOCAL_PATH)/include \
$(LOCAL_PATH)/ffmpeg-android/ffmpeg \
$(LOCAL_PATH)/freetype/include/freetype2 \
$(LOCAL_PATH)/freetype/include \
$(LOCAL_PATH)/ftgl/src \
$(LOCAL_PATH)/ftgl
LOCAL_LDLIBS := -L$(NDK_PLATFORMS_ROOT)/$(TARGET_PLATFORM)/arch-arm/usr/lib -L$(LOCAL_PATH) -L$(LOCAL_PATH)/ffmpeg-android/build/ffmpeg/armeabi/lib/ -lGLESv1_CM -ldl -lavformat -lavcodec -lavfilter -lavutil -lswscale -llog -lz -lm
include $(BUILD_SHARED_LIBRARY)

And here is my NativeCalls.java:
package com.nightscapecreations.anim3free;
public class NativeCalls {
//ffmpeg
public static native void initVideo();
public static native void loadVideo(String fileName); //
public static native void prepareStorageFrame();
public static native void getFrame(); //
public static native void freeConversionStorage();
public static native void closeVideo();//
public static native void freeVideo();//
//opengl
public static native void initPreOpenGL(); //
public static native void initOpenGL(); //
public static native void drawFrame(); //
public static native void closeOpenGL(); //
public static native void closePostOpenGL();//
//wallpaper
public static native void updateVideoPosition();
public static native void setSpanVideo(boolean b);
//getters
public static native int getVideoHeight();
public static native int getVideoWidth();
//setters
public static native void setWallVideoDimensions(int w,int h);
public static native void setWallDimensions(int w,int h);
public static native void setScreenPadding(int w,int h);
public static native void setVideoMargins(int w,int h);
public static native void setDrawDimensions(int drawWidth,int drawHeight);
public static native void setOffsets(int x,int y);
public static native void setSteps(int xs,int ys);
public static native void setScreenDimensions(int w, int h);
public static native void setTextureDimensions(int tx,
int ty );
public static native void setOrientation(boolean b);
public static native void setPreviewMode(boolean b);
public static native void setTonality(int t);
public static native void toggleGetFrame(boolean b);
//fps
public static native void setLoopVideo(boolean b);
static {
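        // Aside (not part of the original code): each loadLibrary call here
        // must be able to resolve the library's NEEDED entries. The
        // UnsatisfiedLinkError above shows libavcore failing to find libavutil
        // at a path containing the OLD package name
        // (com.nightscapecreations.anim1free), which suggests the prebuilt
        // ffmpeg .so files were built for the old app and need rebuilding for
        // the new package name.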
System.loadLibrary("avcore");
System.loadLibrary("avformat");
System.loadLibrary("avcodec");
//System.loadLibrary("avdevice");
System.loadLibrary("avfilter");
System.loadLibrary("avutil");
System.loadLibrary("swscale");
System.loadLibrary("video");
}
}

EDIT
This is the first part of my video.c file:
#include <GLES/gl.h>
#include <GLES/glext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include
#include
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include
#include
#include
#include <android/log.h>
//#include <FTGL/ftgl.h>
//ffmpeg video variables
int initializedVideo=0;
int initializedFrame=0;
AVFormatContext *pFormatCtx=NULL;
int videoStream;
AVCodecContext *pCodecCtx=NULL;
AVCodec *pCodec=NULL;
AVFrame *pFrame=NULL;
AVPacket packet;
int frameFinished;
float aspect_ratio;
//ffmpeg video conversion variables
AVFrame *pFrameConverted=NULL;
int numBytes;
uint8_t *bufferConverted=NULL;
//opengl
int textureFormat=PIX_FMT_RGBA; // PIX_FMT_RGBA PIX_FMT_RGB24
int GL_colorFormat=GL_RGBA; // Must match the colorspace specified for textureFormat
int textureWidth=256;
int textureHeight=256;
int nTextureHeight=-256;
int textureL=0, textureR=0, textureW=0;
int frameTonality;
//GLuint textureConverted=0;
GLuint texturesConverted[2] = { 0,1 };
GLuint dummyTex = 2;
static int len=0;
static const char* BWVertexSrc =
"attribute vec4 InVertex;\n"
"attribute vec2 InTexCoord0;\n"
"attribute vec2 InTexCoord1;\n"
"uniform mat4 ProjectionModelviewMatrix;\n"
"varying vec2 TexCoord0;\n"
"varying vec2 TexCoord1;\n"
"void main()\n"
"{\n"
" gl_Position = ProjectionModelviewMatrix * InVertex;\n"
" TexCoord0 = InTexCoord0;\n"
" TexCoord1 = InTexCoord1;\n"
"}\n";
static const char* BWFragmentSrc =
"#version 110\n"
"uniform sampler2D Texture0;\n"
"uniform sampler2D Texture1;\n"
"varying vec2 TexCoord0;\n"
"varying vec2 TexCoord1;\n"
"void main()\n"
"{\n"
" vec3 color = texture2D(m_Texture, texCoord).rgb;\n"
" float gray = (color.r + color.g + color.b) / 3.0;\n"
" vec3 grayscale = vec3(gray);\n"
" gl_FragColor = vec4(grayscale, 1.0);\n"
"}";
static GLuint shaderProgram;
//// Create a pixmap font from a TrueType file.
//FTGLPixmapFont font("/home/user/Arial.ttf");
//// Set the font size and render a small text.
//font.FaceSize(72);
//font.Render("Hello World!");
//screen dimensions
int screenWidth = 50;
int screenHeight= 50;
int screenL=0, screenR=0, screenW=0;
int dPaddingX=0,dPaddingY=0;
int drawWidth=50,drawHeight=50;
//wallpaper
int wallWidth = 50;
int wallHeight = 50;
int xOffSet, yOffSet;
int xStep, yStep;
jboolean spanVideo = JNI_TRUE;
//video dimensions
int wallVideoWidth = 0;
int wallVideoHeight = 0;
int marginX, marginY;
jboolean isScreenPortrait = JNI_TRUE;
jboolean isPreview = JNI_TRUE;
jboolean loopVideo = JNI_TRUE;
jboolean isGetFrame = JNI_TRUE;
//file
const char * szFileName;
#define max( a, b ) ( ((a) > (b)) ? (a) : (b) )
#define min( a, b ) ( ((a) < (b)) ? (a) : (b) )
//test variables
#define RGBA8(r, g, b) (((r) << (24)) | ((g) << (16)) | ((b) << (8)) | 255)
int sPixelsInited=JNI_FALSE;
uint32_t *s_pixels=NULL;
int s_pixels_size() {
return (sizeof(uint32_t) * textureWidth * textureHeight * 5);
}
void render_pixels1(uint32_t *pixels, uint32_t c) {
int x, y;
/* fill in a square of 5 x 5 at s_x, s_y */
for (y = 0; y < textureHeight; y++) {
for (x = 0; x < textureWidth; x++) {
int idx = x + y * textureWidth;
pixels[idx++] = RGBA8(255, 255, 0);
}
}
}
void render_pixels2(uint32_t *pixels, uint32_t c) {
int x, y;
/* fill in a square of 5 x 5 at s_x, s_y */
for (y = 0; y < textureHeight; y++) {
for (x = 0; x < textureWidth; x++) {
int idx = x + y * textureWidth;
pixels[idx++] = RGBA8(0, 0, 255);
}
}
}
void Java_com_nightscapecreations_anim3free_NativeCalls_initVideo (JNIEnv * env, jobject this) {
initializedVideo = 0;
initializedFrame = 0;
}
/* list of things that get loaded: */
/* buffer */
/* pFrameConverted */
/* pFrame */
/* pCodecCtx */
/* pFormatCtx */
void Java_com_nightscapecreations_anim3free_NativeCalls_loadVideo (JNIEnv * env, jobject this, jstring fileName) {
jboolean isCopy;
szFileName = (*env)->GetStringUTFChars(env, fileName, &isCopy);
//debug
__android_log_print(ANDROID_LOG_DEBUG, "NDK: ", "NDK:LC: [%s]", szFileName);
// Register all formats and codecs
av_register_all();
// Open video file
if(av_open_input_file(&pFormatCtx, szFileName, NULL, 0, NULL)!=0) {
__android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Couldn't open file");
return;
}
__android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Succesfully loaded file");
// Retrieve stream information */
if(av_find_stream_info(pFormatCtx)<0) {
__android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Couldn't find stream information");
return;
}
__android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Found stream info");
// Find the first video stream
videoStream=-1;
int i;
for(i=0; i<pFormatCtx->nb_streams; i++)
if(pFormatCtx->streams[i]->codec->codec_type==CODEC_TYPE_VIDEO) {
videoStream=i;
break;
}
if(videoStream==-1) {
__android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Didn't find a video stream");
return;
}
__android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Found video stream");
// Get a pointer to the codec context for the video stream
pCodecCtx=pFormatCtx->streams[videoStream]->codec;
// Find the decoder for the video stream
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL) {
__android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Unsupported codec");
return;
}
// Open codec
if(avcodec_open(pCodecCtx, pCodec)<0) {
__android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Could not open codec");
return;
}
// Allocate video frame (decoded pre-conversion frame)
pFrame=avcodec_alloc_frame();
// keep track of initialization
initializedVideo = 1;
__android_log_print(ANDROID_LOG_DEBUG, "video.c", "NDK: Finished loading video");
}
//for this to work, you need to set the scaled video dimensions first
void Java_com_nightscapecreations_anim3free_NativeCalls_prepareStorageFrame (JNIEnv * env, jobject this) {
// Allocate an AVFrame structure
pFrameConverted=avcodec_alloc_frame();
// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(textureFormat, textureWidth, textureHeight);
bufferConverted=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
if ( pFrameConverted == NULL || bufferConverted == NULL )
__android_log_print(ANDROID_LOG_DEBUG, "prepareStorage>>>>", "Out of memory");
// Assign appropriate parts of buffer to image planes in pFrameRGB
// Note that pFrameRGB is an AVFrame, but AVFrame is a superset
// of AVPicture
avpicture_fill((AVPicture *)pFrameConverted, bufferConverted, textureFormat, textureWidth, textureHeight);
__android_log_print(ANDROID_LOG_DEBUG, "prepareStorage>>>>", "Created frame");
__android_log_print(ANDROID_LOG_DEBUG, "prepareStorage>>>>", "texture dimensions: %dx%d", textureWidth, textureHeight);
initializedFrame = 1;
}
jint Java_com_nightscapecreations_anim3free_NativeCalls_getVideoWidth (JNIEnv * env, jobject this) {
return pCodecCtx->width;
}
jint Java_com_nightscapecreations_anim3free_NativeCalls_getVideoHeight (JNIEnv * env, jobject this) {
return pCodecCtx->height;
}
void Java_com_nightscapecreations_anim3free_NativeCalls_getFrame (JNIEnv * env, jobject this) {
// keep reading packets until we hit the end or find a video packet
while(av_read_frame(pFormatCtx, &packet)>=0) {
static struct SwsContext *img_convert_ctx;
// Is this a packet from the video stream?
if(packet.stream_index==videoStream) {
// Decode video frame
/* __android_log_print(ANDROID_LOG_DEBUG, */
/* "video.c", */
/* "getFrame: Try to decode frame" */
/* ); */
avcodec_decode_video(pCodecCtx, pFrame, &frameFinished, packet.data, packet.size);
// Did we get a video frame?
if(frameFinished) {
if(img_convert_ctx == NULL) {
/* get/set the scaling context */
int w = pCodecCtx->width;
int h = pCodecCtx->height;
img_convert_ctx = sws_getContext(w, h, pCodecCtx->pix_fmt, textureWidth,textureHeight, textureFormat, SWS_FAST_BILINEAR, NULL, NULL, NULL);
if(img_convert_ctx == NULL) {
return;
}
}
/* if img convert null */
/* finally scale the image */
/* __android_log_print(ANDROID_LOG_DEBUG, */
/* "video.c", */
/* "getFrame: Try to scale the image" */
/* ); */
//pFrameConverted = pFrame;
sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameConverted->data, pFrameConverted->linesize);
//av_picture_crop(pFrameConverted->data, pFrame->data, 1, pCodecCtx->height, pCodecCtx->width);
//av_picture_crop();
//avfilter_vf_crop();
/* do something with pFrameConverted */
/* ... see drawFrame() */
/* We found a video frame, did something with it, now free up
packet and return */
av_free_packet(&packet);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.age: %d", pFrame->age);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.buffer_hints: %d", pFrame->buffer_hints);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.display_picture_number: %d", pFrame->display_picture_number);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.hwaccel_picture_private: %d", pFrame->hwaccel_picture_private);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.key_frame: %d", pFrame->key_frame);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.palette_has_changed: %d", pFrame->palette_has_changed);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.pict_type: %d", pFrame->pict_type);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrame.qscale_type: %d", pFrame->qscale_type);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.age: %d", pFrameConverted->age);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.buffer_hints: %d", pFrameConverted->buffer_hints);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.display_picture_number: %d", pFrameConverted->display_picture_number);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.hwaccel_picture_private: %d", pFrameConverted->hwaccel_picture_private);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.key_frame: %d", pFrameConverted->key_frame);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.palette_has_changed: %d", pFrameConverted->palette_has_changed);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.pict_type: %d", pFrameConverted->pict_type);
// __android_log_print(ANDROID_LOG_INFO, "Droid Debug", "pFrameConverted.qscale_type: %d", pFrameConverted->qscale_type);
return;
} /* if frame finished */
} /* if packet video stream */
// Free the packet that was allocated by av_read_frame
av_free_packet(&packet);
} /* while */
//reload video when you get to the end
av_seek_frame(pFormatCtx,videoStream,0,AVSEEK_FLAG_ANY);
}
void Java_com_nightscapecreations_anim3free_NativeCalls_setLoopVideo (JNIEnv * env, jobject this, jboolean b) {
loopVideo = b;
}
void Java_com_nightscapecreations_anim3free_NativeCalls_closeVideo (JNIEnv * env, jobject this) {
if ( initializedFrame == 1 ) {
// Free the converted image
av_free(bufferConverted);
av_free(pFrameConverted);
initializedFrame = 0;
__android_log_print(ANDROID_LOG_DEBUG, "closeVideo>>>>", "Freed converted image");
}
if ( initializedVideo == 1 ) {
/* // Free the YUV frame */
av_free(pFrame);
/* // Close the codec */
avcodec_close(pCodecCtx);
// Close the video file
av_close_input_file(pFormatCtx);
initializedVideo = 0;
__android_log_print(ANDROID_LOG_DEBUG, "closeVideo>>>>", "Freed video structures");
}
}
void Java_com_nightscapecreations_anim3free_NativeCalls_freeVideo (JNIEnv * env, jobject this) {
if ( initializedVideo == 1 ) {
/* // Free the YUV frame */
av_free(pFrame);
/* // Close the codec */
avcodec_close(pCodecCtx);
// Close the video file
av_close_input_file(pFormatCtx);
__android_log_print(ANDROID_LOG_DEBUG, "closeVideo>>>>", "Freed video structures");
initializedVideo = 0;
}
}
void Java_com_nightscapecreations_anim3free_NativeCalls_freeConversionStorage (JNIEnv * env, jobject this) {
if ( initializedFrame == 1 ) {
// Free the converted image
av_free(bufferConverted);
av_freep(pFrameConverted);
initializedFrame = 0;
}
}
/*--- END OF VIDEO ----*/
/* disable these capabilities. */
static GLuint s_disable_options[] = {
GL_FOG,
GL_LIGHTING,
GL_CULL_FACE,
GL_ALPHA_TEST,
GL_BLEND,
GL_COLOR_LOGIC_OP,
GL_DITHER,
GL_STENCIL_TEST,
GL_DEPTH_TEST,
GL_COLOR_MATERIAL,
0
};
// For stuff that opengl needs to work with,
// like the bitmap containing the texture
void Java_com_nightscapecreations_anim3free_NativeCalls_initPreOpenGL (JNIEnv * env, jobject this) {
}
...
-
WebVTT as a W3C Recommendation
1 January 2014, by silvia
Three weeks ago I attended TPAC, the annual meeting of W3C Working Groups. One of the meetings was of the Timed Text Working Group (TT-WG), which has been specifying TTML, the Timed Text Markup Language. It is now proposed that WebVTT also be standardised through the same Working Group.
How did that happen, you may ask, in particular since WebVTT and TTML have in the past been portrayed as rival caption formats? How will the WebVTT spec that is currently under development in the Text Track Community Group (TT-CG) move through a Working Group process?
I’ll explain first why there is a need for WebVTT to become a W3C Recommendation, and then how this is proposed to be part of the Timed Text Working Group deliverables, and finally how I can see this working between the TT-CG and the TT-WG.
Advantages of a W3C Recommendation
TTML is an XML-based markup format for captions, developed during the time that XML was all the hotness. It has become a W3C standard (a so-called "Recommendation") despite not having been implemented in any browsers (if you ask me, that's actually a flaw of the W3C standardisation process: it requires only two interoperable implementations of any kind – and that could be anyone's JavaScript library or Flash demonstrator – it doesn't actually require browser implementations. But I digress…). To be fair, a subpart of TTML is by now implemented in Internet Explorer, but all the other major browsers have thus far rejected proposals of implementation.
Because of its Recommendation status, TTML has become the basis for several other caption standards that other SDOs have picked: the SMPTE's SMPTE-TT format, the EBU's EBU-TT format, and the DASH Industry Forum's use of SMPTE-TT. SMPTE-TT has also become the "safe harbour" format for the US legislation on captioning as decided by the FCC. (Note that the FCC requirements for captions on the Web are actually based on a list of features rather than requiring a specific format. But that will be the topic of a different blog post…)
WebVTT is much younger than TTML. TTML was developed as an interchange format among caption authoring systems. WebVTT was built for rendering in Web browsers and with HTML5 in mind. It meets the requirements of the <track> element and supports more than just captions/subtitles. WebVTT is popular with browser developers and has already been implemented in all major browsers (Firefox Nightly is the last to implement it – all others have support already released).
As we can see, and as has been proven by the HTML spec and multiple other specs: browsers don't wait for specifications to have W3C Recommendation status before they implement them. Nor do they really care about the status of a spec – what they care about is whether a spec makes sense for the Web developer and user communities and whether it fits in the Web platform. WebVTT has obviously achieved this status, even with an evolving spec. (Note that the spec tries very hard not to break backwards compatibility, thus all past implementations will at least be compatible with the more basic features of the spec.)
Given that Web browsers don't need WebVTT to become a W3C standard, why then should we spend effort in moving the spec through the W3C process to become a W3C Recommendation?
The modern Web is now much bigger than just Web browsers. Web specifications are being used in all kinds of devices including TV set-top boxes, phone and tablet apps, and even unexpected devices such as white goods. Videos are increasingly omnipresent, thus exposing deaf and hard-of-hearing users to ever-growing challenges in interacting with content on diverse devices. Some of these devices will not use auto-updating software but fixed versions, so they can't easily adapt to new features. Thus, caption producers (both commercial and community) need to be able to author captions (and other video accessibility content as defined by the HTML5 <track> element) towards a feature set that is clearly defined to be supported by such non-updating devices.
Understandably, device vendors in this space have a need to build their technology on standardised specifications. SDOs for such device technologies like to reference fixed specifications so the feature set is not continually updating. To reference WebVTT, they could use a snapshot of the specification at any time and reference that, but that’s not how SDOs work. They prefer referencing an officially sanctioned and tested version of a specification – for a W3C specification that means creating a W3C Recommendation of the WebVTT spec.
Taking WebVTT on a W3C recommendation track is actually advantageous for browsers, too, because a test suite will have to be developed that proves that features are implemented in an interoperable manner. In summary, I can see the advantages and personally support the effort to take WebVTT through to a W3C Recommendation.
Choice of Working Group
AFAIK this is the first time that a specification developed in a Community Group is being moved onto the recommendation track. This is something that was expected when the W3C created CGs, but not something that has an established process yet.
The first question, of course, is: which WG would take it through to Recommendation? Would we create a new Working Group or find an existing one to move the specification through? Since WGs involve a lot of overhead, the preference was to add WebVTT to the charter of an existing WG. The two obvious candidates were the HTML WG and the TT-WG – the first because it's where WebVTT originated and the latter because it's the closest thematically.
Adding a deliverable to a WG is a major undertaking. The TT-WG is currently in the process of re-chartering and thus a suggestion was made to add WebVTT to the milestones of this WG. TBH that was not my first choice. Since I’m already an editor in the HTML WG and WebVTT is very closely related to HTML and can be tested extensively as part of HTML, I preferred the HTML WG. However, adding WebVTT to the TT-WG has some advantages, too.
Since TTML is an exchange format, a lot of the captions that will be created (at least professionally) will be in TTML and TTML-related formats. It makes sense to create a mapping from TTML to WebVTT for rendering in browsers. The expertise of both TTML and WebVTT experts is required to develop a good mapping – as was shown when we developed the mapping from CEA608/708 to WebVTT. Also, captioning experts are already in the TT-WG, so it helps to get a second set of eyes onto WebVTT.
A disadvantage of moving a specification out of a CG into a WG is, however, that you potentially lose a lot of the expertise that is already involved in the development of the spec. People don’t easily re-subscribe to additional mailing lists or want the additional complexity of involving another community (see e.g. this email).
So, a good process needs to be developed to allow everyone to contribute to the spec in the best way possible without requiring duplicate work. How can we do that?
The forthcoming process
At TPAC the TT-WG discussed for several hours what the next steps are in taking WebVTT through the TT-WG to recommendation status (agenda with slides). I won’t bore you with the different views – if you are keen, you can read the minutes.
What I came away with is the following process:
- Fix a few more bugs in the CG until we’re happy with the feature set in the CG. This should match the feature set that we realistically expect devices to implement for a first version of the WebVTT spec.
- Make a FSA (Final Specification Agreement) in the CG to create a stable reference and a clean IPR position.
- Assuming that the TT-WG's charter has been approved with WebVTT as a milestone, we would next bring the FSA specification into the TT-WG as FPWD (First Public Working Draft) and immediately do a Last Call, which effectively freezes the feature set (this is possible because there has already been wide community review of the WebVTT spec); in parallel, the CG can continue to develop the next version of the WebVTT spec with new features (just like it is happening with the HTML5 and HTML5.1 specifications).
- Develop a test suite and address any issues in the Last Call document (of course, also fix these issues in the CG version of the spec).
- As per W3C process, substantive and minor changes to Last Call documents have to be reported, and raised issues addressed, before the spec can progress to the next level: Candidate Recommendation status.
- For the next step – Proposed Recommendation status – an implementation report is necessary, and thus the test suite needs to be finalized for the given feature set. The feature set may also be reduced at this stage to just the ones implemented interoperably, leaving any other features for the next version of the spec.
- The final step is Recommendation status, which simply requires sufficient support and endorsement by W3C members.
The first version of the WebVTT spec naturally has a focus on captioning (and subtitling), since this has been the dominant use case we have focused on thus far, and it's the most compatibly implemented feature set of WebVTT in browsers. It's my expectation that the next version of WebVTT will have a lot more features related to audio descriptions, chapters and metadata. Thus, this seems a good time for a first-version feature freeze.
There are still several obstacles to progressing WebVTT as a milestone of the TT-WG. Apart from the need to get buy-in from the TT-WG, the TT-CG, and the AC (Advisory Committee, who have to approve the new charter), we're also looking at the license of the specification document.
The CG specification has an open license that allows creating derivative work as long as there is attribution, while the W3C document license for documents on the recommendation track does not allow the creation of derivative work unless given explicit exceptions. This is an issue that is currently being discussed in the W3C with a proposal for a CC-BY license on the Recommendation track. However, my view is that it's probably OK to use the different document licenses: the TT-WG will work on WebVTT 1.0 and give it a W3C document license, while the CG starts working on the next WebVTT version under the open CG license. It probably actually makes sense to have a less open license on a frozen spec.
Making the best of a complicated world
WebVTT is now proposed as part of the recharter of the TT-WG. I have no idea how complicated the process will become to achieve a W3C WebVTT 1.0 Recommendation, but I am hoping that what is outlined above will be workable in such a way that all of us get to focus on progressing the technology.
At TPAC I got the impression that the TT-WG is committed to progressing WebVTT to Recommendation status. I know that the TT-CG is committed to continue developing WebVTT to its full potential for all kinds of media-time aligned content with new kinds already discussed at FOMS. Let’s enable both groups to achieve their goals. As a consequence, we will allow the two formats to excel where they do : TTML as an interchange format and WebVTT as a browser rendering format.