
Other articles (61)
-
Submitting improvements and additional plugins
10 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know, and its integration into the official distribution will be considered.
You can use the development mailing list to let us know about it, or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...) -
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...) -
Contribute to documentation
13 April 2011
Documentation is vital to the development of improved technical capabilities.
MediaSPIP welcomes documentation by users as well as developers, including:
- critique of existing features and functions
- articles contributed by developers, administrators, content producers and editors
- screenshots to illustrate the above
- translations of existing documentation into other languages
To contribute, register for the project users’ mailing (...)
On other sites (9167)
-
More Cinepak Madness
20 October 2011, by Multimedia Mike — Codec Technology
Fellow digital archaeologist Clone2727 found a possible fifth variant of the Cinepak video codec. He asked me if I cared to investigate the sample. I assured him I wouldn’t be able to die a happy multimedia nerd unless I had cataloged all possible Cinepak variants known to exist in the wild. I’m sure there are chemistry nerds out there who are ecstatic when another element is added to the periodic table. Well, that’s me, except with weird multimedia formats.
Background
Cinepak is a video codec that saw widespread use in the early days of digital multimedia. To date, we have cataloged 4 variants of Cinepak in the wild. This distinction is useful when trying to write and maintain an all-in-one decoder. The variants are:
- The standard type: Most Cinepak data falls into this category. It decodes to a modified/simplified YUV 4:2:0 planar colorspace and is often seen in AVI and QuickTime/MOV files.
- 8-bit greyscale: Essentially the same as the standard type but with only a Y plane. This has only been identified in AVI files and is distinguished by the file header’s video bits/pixel field being set to 8 instead of 24.
- 8-bit paletted: Again, this is identified by the video header specifying 8 bits/pixel for a Cinepak stream. There is essentially only a Y plane in the data; however, each 8-bit value is a palette index. The palette is transported along with the video header. To date, only one known sample of this format has ever been spotted in the wild, and it’s classified as NSFW. It is also a QuickTime/MOV file.
- Sega/FILM CPK data: Sega Saturn games often used CPK files which stored a variant of Cinepak that, while very close to standard Cinepak, couldn’t be decoded with standard decoder components.
So, a flexible Cinepak decoder has to check whether the file’s video header specifies 8 bits/pixel. How does it distinguish between greyscale and paletted? If a file is paletted, a custom palette should have been included with the video header. Thus, if video bits/pixel is 8 and a palette is present, use paletted; else, use greyscale. Beyond that, the Cinepak decoder has a heuristic to determine how to handle the standard type of data, which might deviate slightly if it comes from a Sega CPK file.
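As a sketch, that decision could be expressed in C roughly as follows. The structure and names here are hypothetical, purely for illustration; this is not the actual FFmpeg decoder code.

#include <stdint.h>

/* Hypothetical header fields; a real demuxer supplies the equivalents. */
typedef struct {
    int bits_per_pixel;     /* 24 for standard Cinepak, 8 for greyscale/paletted */
    const uint8_t *palette; /* non-NULL only if a palette accompanied the header */
} VideoHeader;

enum CinepakVariant { CINEPAK_STANDARD, CINEPAK_GREYSCALE, CINEPAK_PALETTED };

static enum CinepakVariant classify_cinepak(const VideoHeader *hdr)
{
    if (hdr->bits_per_pixel == 8)
        /* 8 bits/pixel with a palette means paletted; without one, greyscale */
        return hdr->palette ? CINEPAK_PALETTED : CINEPAK_GREYSCALE;
    /* Standard type; Sega CPK data is told apart later by a chunk-length
       heuristic, not by the header */
    return CINEPAK_STANDARD;
}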
The Fifth Variant?
Now, regarding this fifth variant: the reason this issue came up is because of that aforementioned heuristic. Basically, a Cinepak chunk is supposed to store the length of the entire chunk in its header. The data from a Sega CPK file plays fast and loose with this chunk size, and the discrepancy makes it easy to determine if the data requires special handling. However, a handful of files discovered in a Macintosh game called “The Journeyman Project: Pegasus Prime” have chunk lengths which are sometimes in disagreement with the lengths reported in the containing QuickTime file’s stsz atom. This trips the heuristic, which then tries to apply the CPK rules against Cinepak data that, aside from the weird chunk length, is perfectly compliant.
Here are the first few chunk sizes, as reported by the file header (stsz atom) and by the chunk itself:
size from stsz = 7880 (0x1EC8); from header = 3940 (0xF64)
size from stsz = 3940 (0xF64); from header = 3940 (0xF64)
size from stsz = 15792 (0x3DB0); from header = 3948 (0xF6C)
size from stsz = 11844 (0x2E44); from header = 3948 (0xF6C)
Hey, there’s a pattern here: when the two sizes don’t match, the stsz size is an integer multiple of the chunk size (2x, 3x, or 4x in my observation). I suppose I could revise the heuristic to state that if the stsz size is equal to the chunk header size, or 2x, 3x, or 4x that size, qualify it as compliant Cinepak data.
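In code, the revised heuristic might look something like this (a sketch of the rule just described, not FFmpeg’s actual check):

#include <stdbool.h>
#include <stdint.h>

/* Treat the chunk as compliant Cinepak if the container-reported sample size
   (from the stsz atom) equals the size in the chunk header or is a small
   integer multiple of it (2x, 3x, or 4x observed so far). */
static bool chunk_size_is_compliant(uint32_t stsz_size, uint32_t chunk_size)
{
    if (chunk_size == 0 || stsz_size % chunk_size != 0)
        return false;
    uint32_t factor = stsz_size / chunk_size;
    return factor >= 1 && factor <= 4;
}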
Of course it feels impure, but software engineering is rarely about programmatic purity. A decade of special cases in the FFmpeg / Libav codebases is a testament to that.
What’s A Variant?
Suddenly, I find myself contemplating what truly constitutes a variant. Maybe this was just a broken encoder program making these files? And for that, I assign it the designation of a distinct variant, like some sort of special, unique snowflake? Then again, I documented Magic Carpet FLIC as being a distinct variant of the broader FLIC format (which has an enormous number of variants as well).
-
Encoding images into a movie file
5 April 2014, by RuAware
I am trying to save JPEGs into a movie. I have tried JCodec and, although my S3 plays the result fine, other devices do not, including VLC and Windows Media Player.
I have just spent most of the day playing with MediaCodec; although the required SDK level is high, it will at least help people on Jelly Bean and above. But I cannot work out how to get the files to the encoder and then write the output file.
Ideally I want to support down to SDK 9/8.
Has anyone got any code they can share, either to get MediaCodec to work or another option? If you say FFmpeg, I'd love to, but my JNI knowledge is non-existent and I will need a very good guide.
Code for MediaCodec so far
public class EncodeAndMux extends AsyncTask<Integer, Void, Boolean> {
private static int bitRate = 2000000;
private static int MAX_INPUT = 100000;
private static String mimeType = "video/avc";
private int frameRate = 15;
private int colorFormat;
private int stride = 1;
private int sliceHeight = 2;
private MediaCodec encoder = null;
private MediaFormat inputFormat;
private MediaCodecInfo codecInfo = null;
private MediaMuxer muxer;
private boolean mMuxerStarted = false;
private int mTrackIndex = 0;
private long presentationTime = 0;
private Paint bmpPaint;
private static int WAITTIME = 10000;
private static String TAG = "ENCODE";
private ArrayList<String> mFilePaths;
private String mPath;
private EncodeListener mListener;
private int width = 320;
private int height = 240;
private double mSpeed = 1;
public EncodeAndMux(ArrayList<String> filePaths, String savePath) {
mFilePaths = filePaths;
mPath = savePath;
// Create paint to draw BMP
bmpPaint = new Paint();
bmpPaint.setAntiAlias(true);
bmpPaint.setFilterBitmap(true);
bmpPaint.setDither(true);
}
public void setListner(EncodeListener listener) {
mListener = listener;
}
// set the speed, how many frames a second
public void setSpeed(int speed) {
mSpeed = speed;
}
public double getSpeed() {
return mSpeed;
}
private long computePresentationTime(int frameIndex) {
final long ONE_SECOND = 1000000;
return (long) (frameIndex * (ONE_SECOND / mSpeed));
}
public interface EncodeListener {
public void finished();
public void errored();
}
@TargetApi(Build.VERSION_CODES.JELLY_BEAN_MR2)
@Override
protected Boolean doInBackground(Integer... params) {
try {
muxer = new MediaMuxer(mPath, OutputFormat.MUXER_OUTPUT_MPEG_4);
} catch (Exception e){
e.printStackTrace();
}
// Find a codec that supports the mime type
int numCodecs = MediaCodecList.getCodecCount();
for (int i = 0; i < numCodecs && codecInfo == null; i++) {
MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
if (!info.isEncoder()) {
continue;
}
String[] types = info.getSupportedTypes();
boolean found = false;
for (int j = 0; j < types.length && !found; j++) {
if (types[j].equals(mimeType))
found = true;
}
if (!found)
continue;
codecInfo = info;
}
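// Second pass: prefer an encoder that advertises AVC High profile at Level 4,
// overriding the codec found above if such an encoder exists.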
for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
if (!info.isEncoder()) {
continue;
}
String[] types = info.getSupportedTypes();
for (int j = 0; j < types.length; ++j) {
if (!types[j].equals(mimeType)) // compare contents with equals(); != tests references
continue;
MediaCodecInfo.CodecCapabilities caps = info.getCapabilitiesForType(types[j]);
for (int k = 0; k < caps.profileLevels.length; k++) {
if (caps.profileLevels[k].profile == MediaCodecInfo.CodecProfileLevel.AVCProfileHigh && caps.profileLevels[k].level == MediaCodecInfo.CodecProfileLevel.AVCLevel4) {
codecInfo = info;
}
}
}
}
Log.d(TAG, "Found " + codecInfo.getName() + " supporting " + mimeType);
MediaCodecInfo.CodecCapabilities capabilities = codecInfo.getCapabilitiesForType(mimeType);
for (int i = 0; i < capabilities.colorFormats.length && colorFormat == 0; i++) {
int format = capabilities.colorFormats[i];
switch (format) {
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar:
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedPlanar:
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar:
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedSemiPlanar:
case MediaCodecInfo.CodecCapabilities.COLOR_TI_FormatYUV420PackedSemiPlanar:
colorFormat = format;
break;
}
}
Log.d(TAG, "Using color format " + colorFormat);
// Determine width, height and slice sizes
if (codecInfo.getName().equals("OMX.TI.DUCATI1.VIDEO.H264E")) {
// This codec doesn't support a width not a multiple of 16,
// so round down.
width &= ~15;
}
stride = width;
sliceHeight = height;
if (codecInfo.getName().startsWith("OMX.Nvidia.")) {
stride = (stride + 15) / 16 * 16;
sliceHeight = (sliceHeight + 15) / 16 * 16;
}
inputFormat = MediaFormat.createVideoFormat(mimeType, width, height);
inputFormat.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
inputFormat.setInteger(MediaFormat.KEY_FRAME_RATE, frameRate);
inputFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat);
inputFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
// inputFormat.setInteger("stride", stride);
// inputFormat.setInteger("slice-height", sliceHeight);
inputFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, MAX_INPUT);
encoder = MediaCodec.createByCodecName(codecInfo.getName());
encoder.configure(inputFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();
ByteBuffer[] inputBuffers = encoder.getInputBuffers();
ByteBuffer[] outputBuffers = encoder.getOutputBuffers();
int inputBufferIndex= -1, outputBufferIndex= -1;
BufferInfo info = new BufferInfo();
for (int i = 0; i < mFilePaths.size(); i++) {
// use decode sample to calculate inSample size and then resize
Bitmap bitmapIn = Images.decodeSampledBitmapFromPath(mFilePaths.get(i), width, height);
// Create blank bitmap
Bitmap bitmap = Bitmap.createBitmap(width, height, Config.ARGB_8888);
// Center scaled image
Canvas canvas = new Canvas(bitmap);
canvas.drawBitmap(bitmapIn,(bitmap.getWidth()/2)-(bitmapIn.getWidth()/2),(bitmap.getHeight()/2)-(bitmapIn.getHeight()/2), bmpPaint);
Log.d(TAG, "Bitmap width: " + bitmapIn.getWidth() + " height: " + bitmapIn.getHeight() + " WIDTH: " + width + " HEIGHT: " + height);
byte[] dat = getNV12(width, height, bitmap);
bitmap.recycle();
// Exception occurred on the line below in the emulator, LINE No. 182
inputBufferIndex = encoder.dequeueInputBuffer(WAITTIME);
Log.i("DAT", "Size= "+dat.length);
if(inputBufferIndex >= 0){
int samplesiz= dat.length;
inputBuffers[inputBufferIndex].put(dat);
presentationTime = computePresentationTime(i);
if (i == mFilePaths.size() - 1) { // last file: size() itself is never reached inside this loop, so flag EOS on the final index
encoder.queueInputBuffer(inputBufferIndex, 0, samplesiz, presentationTime, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
Log.i(TAG, "Last Frame");
} else {
encoder.queueInputBuffer(inputBufferIndex, 0, samplesiz, presentationTime, 0);
}
while(true) {
outputBufferIndex = encoder.dequeueOutputBuffer(info, WAITTIME);
Log.i("BATA", "outputBufferIndex="+outputBufferIndex);
if (outputBufferIndex >= 0) {
ByteBuffer encodedData = outputBuffers[outputBufferIndex];
if (encodedData == null) {
throw new RuntimeException("encoderOutputBuffer " + outputBufferIndex +
" was null");
}
if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
// The codec config data was pulled out and fed to the muxer when we got
// the INFO_OUTPUT_FORMAT_CHANGED status. Ignore it.
Log.d(TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
info.size = 0;
}
if (info.size != 0) {
if (!mMuxerStarted) {
throw new RuntimeException("muxer hasn't started");
}
// adjust the ByteBuffer values to match BufferInfo (not needed?)
encodedData.position(info.offset);
encodedData.limit(info.offset + info.size);
muxer.writeSampleData(mTrackIndex, encodedData, info);
Log.d(TAG, "sent " + info.size + " bytes to muxer");
}
encoder.releaseOutputBuffer(outputBufferIndex, false);
inputBuffers[inputBufferIndex].clear();
outputBuffers[outputBufferIndex].clear();
if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
break; // out of while
}
} else if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
// Subsequent data will conform to new format.
MediaFormat opmediaformat = encoder.getOutputFormat();
if (!mMuxerStarted) {
mTrackIndex = muxer.addTrack(opmediaformat);
muxer.start();
mMuxerStarted = true;
}
Log.i(TAG, "op_buf_format_changed: " + opmediaformat);
} else if(outputBufferIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
outputBuffers = encoder.getOutputBuffers();
Log.d(TAG, "Output Buffer changed " + outputBuffers);
} else if(outputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
// No Data, break out
break;
} else {
// Unexpected State, ignore it
Log.d(TAG, "Unexpected State " + outputBufferIndex);
}
}
}
}
if (encoder != null) {
encoder.flush();
encoder.stop();
encoder.release();
encoder = null;
}
if (muxer != null) {
muxer.stop();
muxer.release();
muxer = null;
}
return true;
};
@Override
protected void onPostExecute(Boolean result) {
if (result) {
if (mListener != null)
mListener.finished();
} else {
if (mListener != null)
mListener.errored();
}
super.onPostExecute(result);
}
byte [] getNV12(int inputWidth, int inputHeight, Bitmap scaled) {
int [] argb = new int[inputWidth * inputHeight];
scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
byte [] yuv = new byte[inputWidth*inputHeight*3/2];
encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
scaled.recycle();
return yuv;
}
void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
final int frameSize = width * height;
int yIndex = 0;
int uvIndex = frameSize;
int a, R, G, B, Y, U, V;
int index = 0;
for (int j = 0; j < height; j++) {
for (int i = 0; i < width; i++) {
a = (argb[index] & 0xff000000) >>> 24; // unsigned shift avoids sign extension; a is not used anyway
R = (argb[index] & 0xff0000) >> 16;
G = (argb[index] & 0xff00) >> 8;
B = (argb[index] & 0xff) >> 0;
// well known RGB to YVU algorithm
Y = ( ( 66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
V = ( ( -38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
U = ( ( 112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
if (j % 2 == 0 && index % 2 == 0) {
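// Note: V is written before U below, which is NV21-style byte order (as
// produced by Android cameras) rather than NV12's U-then-V, despite this
// helper being named getNV12; the codec's chosen colorFormat must match.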
yuv420sp[uvIndex++] = (byte)((V<0) ? 0 : ((V > 255) ? 255 : V));
yuv420sp[uvIndex++] = (byte)((U<0) ? 0 : ((U > 255) ? 255 : U));
}
index ++;
}
}
}
}
This has now been tested on 4 of my devices and works fine. Is there a way to:
1/ Calculate MAX_INPUT (too high and it crashes on the N7 II; I don't want that happening once released)
2/ Offer an API 16 solution?
3/ Do I need stride and slice height?
Thanks
-
How can I find out what this FFmpeg error code means?
3 March 2015, by Asik
I’m using the function avcodec_decode_video2. On an encoding change in the stream, it returns -1094995529. The documentation only states:
On error a negative value is returned, otherwise the number of bytes used or zero if no frame could be decompressed.
But there doesn’t seem to be an enum of return codes or any other form of documentation. What does the error mean, and how can I determine that in general?
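For reference: FFmpeg’s error codes are defined in libavutil/error.h. Many are built from FOURCC-style tags via FFERRTAG, and -1094995529 corresponds to AVERROR_INVALIDDATA, i.e. FFERRTAG('I','N','D','A'). The library’s av_strerror() converts any such code into a readable message; a minimal sketch, compiled against libavutil:

#include <stdio.h>
#include <libavutil/error.h>

int main(void)
{
    char buf[AV_ERROR_MAX_STRING_SIZE];
    int err = -1094995529; /* the code from the question */

    /* av_strerror() fills buf with a description when the code is known */
    if (av_strerror(err, buf, sizeof(buf)) == 0)
        printf("error %d: %s\n", err, buf); /* "Invalid data found when processing input" */
    else
        printf("error %d: unrecognized\n", err);
    return 0;
}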