
Media (16)
-
#7 Ambience
16 October 2011, by
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011, by
Updated: February 2013
Language: English
Type: Audio
Other articles (77)
-
Requesting the creation of a channel
12 March 2010, by
Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at registration time; the second, after registration, by filling in a request form.
Both methods ask for the same information and work in much the same way: the prospective user fills in a series of form fields that first of all give the administrators information about (...)
-
Customising by adding your logo, banner or background image
5 September 2013, by
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Managing the farm
2 March 2010, by
The farm as a whole is managed by "super admins".
Certain settings can be adjusted to suit the needs of the different channels.
To begin with, it uses the "Gestion de mutualisation" plugin.
On other sites (10198)
-
ffmpeg replace part of audio file with looped audio gives queued buffers
1 December 2015, by user1202648
I am quite new to ffmpeg and I am trying to replace part of a first audio file with another, second file. The second file can be too short, so some sort of loop is needed.
After some research I came up with the following command arguments, and they give me the expected output as long as I only do one replacement. But I would like to do multiple replacements, so any help on what I am doing wrong? Any suggestions or remarks on this way of working are also very welcome.
(Any typos in the commands below can be ignored; I generate the command by script and simplified the names for readability.)
Works (one replacement):
"ffmpeg.exe" -y -i "first.wav" -i "second.wav" -filter_complex "[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS[replaceBase];[0:a]atrim=0:3,asetpts=PTS-STARTPTS[partA];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceA];[0:a]atrim=start=5,asetpts=PTS-STARTPTS[partB];[partA][replaceA][partB]concat=n=3:v=0:a=1[aout]" -map "[aout]" Out.wav
Does not work (multiple replacements):
"ffmpeg.exe" -y -i "first.wav" -i "second.wav" -filter_complex "[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS[replaceBase];[0:a]atrim=0:3,asetpts=PTS-STARTPTS[partA];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceA];[0:a]atrim=5:4,asetpts=PTS-STARTPTS[partB];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceB];[0:a]atrim=start=6,asetpts=PTS-STARTPTS[partC];[partA][replaceA][partB][replaceB][PartC]concat=n=4:v=0:a=1[aout]" -map "[aout]" Out.wav
ffmpeg version N-76860-g72eaf72 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 5.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
libavutil 55. 9.100 / 55. 9.100
libavcodec 57. 16.100 / 57. 16.100
libavformat 57. 19.100 / 57. 19.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 15.100 / 6. 15.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from '3897583stereo.wav':
Duration: 00:00:12.07, bitrate: 256 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 8000 Hz, 2 channels, s16, 256 kb/s
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'beep-021.wav':
Metadata:
encoder : Lavf57.19.100
Duration: 00:00:00.30, bitrate: 1413 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 2 channels, s16, 1411 kb/s
[wav @ 057242c0] Invalid stream specifier: replaceBase.
Last message repeated 1 times
Stream specifier 'STREAM CUT' matches no streams.
Thanks in advance!
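One likely reason the multi-replacement command fails is that a labelled filter output such as [replaceBase] can only be consumed once in a filtergraph (the second reference is then parsed as a stream specifier, hence the error above), the final concat lists five segments but declares n=4, and [PartC] does not match the [partC] label defined earlier. A minimal sketch of a corrected filtergraph, assuming the atrim ranges are placeholders to be adjusted (the original 5:4 range, read as start:end, likely selects nothing), duplicates the looped audio and the source with asplit so each copy is consumed only once:
"ffmpeg.exe" -y -i "first.wav" -i "second.wav" -filter_complex "[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS,asplit=2[loop1][loop2];[0:a]asplit=3[srcA][srcB][srcC];[srcA]atrim=0:3,asetpts=PTS-STARTPTS[partA];[loop1]atrim=0:2,asetpts=PTS-STARTPTS[replaceA];[srcB]atrim=start=5:duration=1,asetpts=PTS-STARTPTS[partB];[loop2]atrim=0:2,asetpts=PTS-STARTPTS[replaceB];[srcC]atrim=start=6,asetpts=PTS-STARTPTS[partC];[partA][replaceA][partB][replaceB][partC]concat=n=5:v=0:a=1[aout]" -map "[aout]" Out.wav
If the two input files do not share the same sample rate and channel layout, an aresample/aformat step before the concat may also be needed, since the concat filter requires matching audio parameters across its segments.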
-
FFMPEG : AV out of sync when writing a part of a video to a new file
30 March 2017, by IT_Layman
I'm developing a data preprocessing program for a computer vision project using FFMPEG and a face detection API. In this program, I need to extract the shots that contain human faces from a given input video file and write them to a new file. But when I play the output file the program generates, the video and audio tracks are out of sync. I think a possible reason is that the timestamps of the video or audio frames are set incorrectly, but I can't fix it myself as I'm not very familiar with the FFMPEG library. Please help me solve this out-of-sync issue.
To simplify the code shown below, I have removed all face detection code and use an empty function called faceDetect to represent it instead.
// ffmpegAPI.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <iostream>
extern "C" {
#include <libavutil/opt.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/channel_layout.h>
#include <libavutil/common.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavutil/samplefmt.h>
#include <libavutil/pixdesc.h>
#include <libswscale/swscale.h>
}
bool faceDetect(AVFrame *frame)
{
/*...*/
return true;
}
int main(int argc, char **argv)
{
int64_t videoPts = 0, audioPts = 0;
int samples_count = 0;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVOutputFormat *ofmt = NULL;
AVPacket pkt;
AVFrame *frame = NULL;
int videoindex = -1; int audioindex = -1;
double videoTime = DBL_MAX;
const char *in_filename, *out_filename;
int ret, i;
in_filename = "C:\\input.flv";//Input file name
out_filename = "C:\\output.avi";//Output file name
av_register_all();
//Open input file
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
fprintf(stderr, "Could not open input file '%s'", in_filename);
goto end;
}
//Find input streams
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
fprintf(stderr, "Failed to retrieve input stream information");
goto end;
}
//Retrieve AV stream information
for (i = 0; i < ifmt_ctx->nb_streams; i++)
{
AVStream *stream;
AVCodecContext *codec_ctx;
stream = ifmt_ctx->streams[i];//Get current stream
codec_ctx = stream->codec;//Get current stream codec
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO)
{
videoindex = i;//video stream index
}
else if (codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO)
{
audioindex = i;//audio stream index
}
if (videoindex == -1)//no video stream is found
{
printf("can't find video stream\n");
goto end;
}
}
av_dump_format(ifmt_ctx, 0, in_filename, 0);
//Configure output
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx) {
fprintf(stderr, "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ofmt = ofmt_ctx->oformat;
//Configure output streams
for (i = 0; i < ifmt_ctx->nb_streams; i++) {//Traversal input streams
AVStream *in_stream = ifmt_ctx->streams[i];//Get current stream
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);//Create a corresponding output stream
if (!out_stream) {
fprintf(stderr, "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
goto end;
}
//Copy codec from current input stream to corresponding output stream
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
if (ret < 0) {
fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
goto end;
}
if (i == videoindex)//Video stream
{
if (out_stream->codec->codec_id == AV_CODEC_ID_H264)
{
out_stream->codec->me_range = 16;
out_stream->codec->max_qdiff = 4;
out_stream->codec->qmin = 10;
out_stream->codec->qmax = 51;
out_stream->codec->qcompress = 1;
}
}
AVCodecContext *codec_ctx = out_stream->codec;
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
//Find codec encoder
AVCodec *encoder = avcodec_find_encoder(codec_ctx->codec_id);
if (!encoder) {
av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
ret = AVERROR_INVALIDDATA;
goto end;
}
//Open encoder
ret = avcodec_open2(codec_ctx, encoder, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
goto end;
}
out_stream->codec->codec_tag = 0;
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
//Open the decoder for input stream
codec_ctx = in_stream->codec;
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
ret = avcodec_open2(codec_ctx,
avcodec_find_decoder(codec_ctx->codec_id), NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
}
}
}
av_dump_format(ofmt_ctx, 0, out_filename, 1);
//Open output file for writing
if (!(ofmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Could not open output file '%s'", out_filename);
goto end;
}
}
//Write video header
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file\n");
goto end;
}
//Write frames in a loop
while (1) {
AVStream *in_stream, *out_stream;
//Read one frame from the input file
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];//Get current input stream
out_stream = ofmt_ctx->streams[pkt.stream_index];//Get current output stream
if (pkt.stream_index == videoindex)//video frame
{
int got_frame;
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
break;
}
//Readjust packet timestamp for decoding
av_packet_rescale_ts(&pkt,
in_stream->time_base,
in_stream->codec->time_base);
//Decode video frame
int len = avcodec_decode_video2(in_stream->codec, frame, &got_frame, &pkt);
if (len < 0)
{
av_frame_free(&frame);
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
break;
}
if (got_frame)//Got a decoded video frame
{
int64_t pts = av_frame_get_best_effort_timestamp(frame);
//determine if the frame image contains human face
bool result = faceDetect(frame);
if (result) //face contained
{
videoTime = pts* av_q2d(out_stream->time_base);
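//Note: videoPts is a bare frame counter, so the encoded video timestamps
//become consecutive ticks of out_stream->codec->time_base, independent of
//the decoded frame's original timing.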
frame->pts = videoPts++;//Set pts of video frame
AVPacket enc_pkt;
av_log(NULL, AV_LOG_INFO, "Encoding video frame\n");
//Create packet for encoding
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
//Encoding frame
ret = avcodec_encode_video2(out_stream->codec, &enc_pkt,
frame, &got_frame);
av_frame_free(&frame);
if (!(got_frame))
ret = 0;
/* Configure encoding properties */
enc_pkt.stream_index = videoindex;
av_packet_rescale_ts(&enc_pkt,
out_stream->codec->time_base,
out_stream->time_base);
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
/* Write encoded frame */
ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
if (ret < 0)
break;
}
else //no face contained
{
//Set the videoTime as maximum double value,
//making the corresponding audio frame not been processed
if (videoTime < DBL_MAX)
videoTime = DBL_MAX;
}
}
else
{
av_frame_free(&frame);
}
}
else//Audio frame
{
//Get current frame time
double audioTime = pkt.pts * av_q2d(in_stream->time_base);
if (audioTime >= videoTime)
{//The current frame should be written into output file
int got_frame;
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
break;
}
//Readjust packet timestamp for decoding
av_packet_rescale_ts(&pkt,
in_stream->time_base,
in_stream->codec->time_base);
//Decode audio frame
int len = avcodec_decode_audio4(in_stream->codec, frame, &got_frame, &pkt);
if (len < 0)
{
av_frame_free(&frame);
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
break;
}
if (got_frame)//Got a decoded audio frame
{
//Set pts of audio frame
frame->pts = audioPts;
audioPts += frame->nb_samples;
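//Note: audioPts advances in sample units (effectively 1/sample_rate),
//whereas the video counter above advances one tick per encoded frame, so
//the two output clocks only stay aligned if the encoder time bases match
//those units.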
AVPacket enc_pkt;
av_log(NULL, AV_LOG_INFO, "Encoding audio frame");
//Create packet for encoding
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
//Encode audio frame
ret = avcodec_encode_audio2(out_stream->codec, &enc_pkt,
frame, &got_frame);
av_frame_free(&frame);
if (!(got_frame))
ret = 0;
/* Configure encoding properties */
enc_pkt.stream_index = audioindex;
av_packet_rescale_ts(&enc_pkt,
out_stream->codec->time_base,
out_stream->time_base);
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
/* Write encoded frame */
ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
if (ret < 0)
break;
}
else //Shouldn't be written
{
av_frame_free(&frame);
}
}
}
av_packet_unref(&pkt);
}
//Write video trailer
av_write_trailer(ofmt_ctx);
end://Clean up
av_log(NULL, AV_LOG_INFO, "Clean up\n");
av_frame_free(&frame);
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
avcodec_close(ifmt_ctx->streams[i]->codec);
if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && ofmt_ctx->streams[i]->codec)
avcodec_close(ofmt_ctx->streams[i]->codec);
}
avformat_close_input(&ifmt_ctx);
/* Close output file */
if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
if (ret < 0 && ret != AVERROR_EOF) {
char buf[256];
av_strerror(ret, buf, sizeof(buf));
av_log(NULL, AV_LOG_ERROR, "Error occurred:%s\n", buf);
system("Pause");
return 1;
}
//Program end
printf("The End.\n");
system("Pause");
return 0;
}
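For what it's worth, a common way to keep the two streams aligned when frames are selectively dropped is to derive every output timestamp from the decoded frame's own timestamp rather than from independent counters. Below is a minimal sketch under that assumption; rescale_kept_pts and firstKeptPts are illustrative names, not part of the original code, and the caller is assumed to pass the time base its decoded timestamps are expressed in.
extern "C" {
#include <libavutil/avutil.h>
#include <libavutil/frame.h>
#include <libavutil/mathematics.h>
#include <libavutil/rational.h>
}
//Rescale a decoded frame's best-effort timestamp into the encoder time base,
//shifted so that the first kept frame of the stream starts at zero.
//*firstKeptPts must be initialised to AV_NOPTS_VALUE by the caller.
static int64_t rescale_kept_pts(AVFrame *frame,
                                AVRational decoded_time_base,
                                AVRational encoder_time_base,
                                int64_t *firstKeptPts)
{
    int64_t pts = av_frame_get_best_effort_timestamp(frame);
    if (*firstKeptPts == AV_NOPTS_VALUE)
        *firstKeptPts = pts;   //remember where the kept section begins
    pts -= *firstKeptPts;      //shift so the output starts at 0
    return av_rescale_q(pts, decoded_time_base, encoder_time_base);
}
With several separate face-containing sections, the bookkeeping also has to subtract the duration of every skipped gap, which this sketch does not show.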
-
Android FFmpeg Video Recording Delete Last Recorded Part
17 April 2015, by user3587194
I'm trying to do exactly what this picture shows. Anyway, how can I delete part of a video? The code I was testing is on GitHub.
It uses a progress bar, so that when you record, the progress bar moves and the recordings are kept in separate segments. What is confusing to me is figuring out where and how to grab each segment to decide whether I want to delete that segment or not.
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
long frameTimeStamp = 0L;
if (mAudioTimestamp == 0L && firstTime > 0L)
frameTimeStamp = 1000L * (System.currentTimeMillis() - firstTime);
else if (mLastAudioTimestamp == mAudioTimestamp)
frameTimeStamp = mAudioTimestamp + frameTime;
else {
long l2 = (System.nanoTime() - mAudioTimeRecorded) / 1000L;
frameTimeStamp = l2 + mAudioTimestamp;
mLastAudioTimestamp = mAudioTimestamp;
}
synchronized (mVideoRecordLock) {
//Log.e("recorder", "mVideoRecordLock " + mVideoRecordLock);
if (recording && rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null) {
if (isFirstFrame) {
isFirstFrame = false;
firstData = data;
}
totalTime = System.currentTimeMillis() - firstTime - pausedTime - ((long) (1.0 / (double) frameRate) * 1000);
if (lastSavedframe != null && !deleteEnabled) {
deleteEnabled = true;
deleteBtn.setVisibility(View.VISIBLE);
cancelBtn.setVisibility(View.GONE);
}
if (!nextEnabled && totalTime >= recordingChangeTime) {
Log.e("recording", "totalTime >= recordingChangeTime " + totalTime + " " + recordingChangeTime);
nextEnabled = true;
nextBtn.setVisibility(View.VISIBLE);
}
if (nextEnabled && totalTime >= recordingMinimumTime) {
mHandler.sendEmptyMessage(5);
}
if (currentRecorderState == RecorderState.PRESS && totalTime >= recordingChangeTime) {
currentRecorderState = RecorderState.LOOSEN;
mHandler.sendEmptyMessage(2);
}
mVideoTimestamp += frameTime;
if (lastSavedframe.getTimeStamp() > mVideoTimestamp)
mVideoTimestamp = lastSavedframe.getTimeStamp();
try {
yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());
videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
videoRecorder.record(yuvIplImage);
} catch (com.googlecode.javacv.FrameRecorder.Exception e) {
e.printStackTrace();
}
}
byte[] tempData = rotateYUV420Degree90(data, previewWidth, previewHeight);
if (cameraSelection == 1)
tempData = rotateYUV420Degree270(data, previewWidth, previewHeight);
lastSavedframe = new SavedFrames(tempData, frameTimeStamp);
//Log.e("recorder", "lastSavedframe " + lastSavedframe);
}
}
}
public class Util {
public static ContentValues videoContentValues = null;
public static String getRecordingTimeFromMillis(long millis) {
String strRecordingTime = null;
int seconds = (int) (millis / 1000);
int minutes = seconds / 60;
int hours = minutes / 60;
if (hours >= 0 && hours < 10)
strRecordingTime = "0" + hours + ":";
else
strRecordingTime = hours + ":";
if (hours > 0)
minutes = minutes % 60;
if (minutes >= 0 && minutes < 10)
strRecordingTime += "0" + minutes + ":";
else
strRecordingTime += minutes + ":";
seconds = seconds % 60;
if (seconds >= 0 && seconds < 10)
strRecordingTime += "0" + seconds ;
else
strRecordingTime += seconds ;
return strRecordingTime;
}
public static int determineDisplayOrientation(Activity activity, int defaultCameraId) {
int displayOrientation = 0;
if (Build.VERSION.SDK_INT > Build.VERSION_CODES.FROYO) {
CameraInfo cameraInfo = new CameraInfo();
Camera.getCameraInfo(defaultCameraId, cameraInfo);
int degrees = getRotationAngle(activity);
if (cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
displayOrientation = (cameraInfo.orientation + degrees) % 360;
displayOrientation = (360 - displayOrientation) % 360;
} else {
displayOrientation = (cameraInfo.orientation - degrees + 360) % 360;
}
}
return displayOrientation;
}
public static int getRotationAngle(Activity activity) {
int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
int degrees = 0;
switch (rotation) {
case Surface.ROTATION_0:
degrees = 0;
break;
case Surface.ROTATION_90:
degrees = 90;
break;
case Surface.ROTATION_180:
degrees = 180;
break;
case Surface.ROTATION_270:
degrees = 270;
break;
}
return degrees;
}
public static int getRotationAngle(int rotation) {
int degrees = 0;
switch (rotation) {
case Surface.ROTATION_0:
degrees = 0;
break;
case Surface.ROTATION_90:
degrees = 90;
break;
case Surface.ROTATION_180:
degrees = 180;
break;
case Surface.ROTATION_270:
degrees = 270;
break;
}
return degrees;
}
public static String createImagePath(Context context){
long dateTaken = System.currentTimeMillis();
String title = Constants.FILE_START_NAME + dateTaken;
String filename = title + Constants.IMAGE_EXTENSION;
String dirPath = Environment.getExternalStorageDirectory()+"/Android/data/" + context.getPackageName()+"/video";
File file = new File(dirPath);
if(!file.exists() || !file.isDirectory())
file.mkdirs();
String filePath = dirPath + "/" + filename;
return filePath;
}
public static String createFinalPath(Context context) {
Log.e("util", "createFinalPath");
long dateTaken = System.currentTimeMillis();
String title = Constants.FILE_START_NAME + dateTaken;
String filename = title + Constants.VIDEO_EXTENSION;
String filePath = genrateFilePath(context, String.valueOf(dateTaken), true, null);
ContentValues values = new ContentValues(7);
values.put(Video.Media.TITLE, title);
values.put(Video.Media.DISPLAY_NAME, filename);
values.put(Video.Media.DATE_TAKEN, dateTaken);
values.put(Video.Media.MIME_TYPE, "video/3gpp");
values.put(Video.Media.DATA, filePath);
videoContentValues = values;
Log.e("util", "filePath " + filePath);
return filePath;
}
public static void deleteTempVideo(Context context) {
final String filePath = Environment.getExternalStorageDirectory() + "/Android/data/" + context.getPackageName() + "/video";
new Thread(new Runnable() {
@Override
public void run() {
File file = new File(filePath);
if (file != null && file.isDirectory()) {
Log.e("util", "file.isDirectory() " + file.isDirectory());
for (File file2 : file.listFiles()) {
Log.e("util", "file.listFiles() " + file.listFiles());
file2.delete();
}
}
}
}).start();
}
private static String genrateFilePath(Context context,String uniqueId, boolean isFinalPath, File tempFolderPath) {
String fileName = Constants.FILE_START_NAME + uniqueId + Constants.VIDEO_EXTENSION;
String dirPath = Environment.getExternalStorageDirectory() + "/Android/data/" + context.getPackageName() + "/video";
if (isFinalPath) {
File file = new File(dirPath);
if (!file.exists() || !file.isDirectory())
file.mkdirs();
} else
dirPath = tempFolderPath.getAbsolutePath();
String filePath = dirPath + "/" + fileName;
return filePath;
}
public static String createTempPath(Context context, File tempFolderPath ) {
long dateTaken = System.currentTimeMillis();
String filePath = genrateFilePath(context,String.valueOf(dateTaken), false, tempFolderPath);
return filePath;
}
public static File getTempFolderPath() {
File tempFolder = new File(Constants.TEMP_FOLDER_PATH +"_" +System.currentTimeMillis());
return tempFolder;
}
public static List getResolutionList(Camera camera) {
Parameters parameters = camera.getParameters();
List previewSizes = parameters.getSupportedPreviewSizes();
return previewSizes;
}
public static RecorderParameters getRecorderParameter(int currentResolution) {
RecorderParameters parameters = new RecorderParameters();
if (currentResolution == Constants.RESOLUTION_HIGH_VALUE) {
parameters.setAudioBitrate(128000);
parameters.setVideoQuality(0);
} else if (currentResolution == Constants.RESOLUTION_MEDIUM_VALUE) {
parameters.setAudioBitrate(128000);
parameters.setVideoQuality(5);
} else if (currentResolution == Constants.RESOLUTION_LOW_VALUE) {
parameters.setAudioBitrate(96000);
parameters.setVideoQuality(20);
}
return parameters;
}
public static int calculateMargin(int previewWidth, int screenWidth) {
int margin = 0;
if (previewWidth <= Constants.RESOLUTION_LOW) {
margin = (int) (screenWidth*0.12);
} else if (previewWidth > Constants.RESOLUTION_LOW && previewWidth <= Constants.RESOLUTION_MEDIUM) {
margin = (int) (screenWidth*0.08);
} else if (previewWidth > Constants.RESOLUTION_MEDIUM && previewWidth <= Constants.RESOLUTION_HIGH) {
margin = (int) (screenWidth*0.08);
}
return margin;
}
public static int setSelectedResolution(int previewHeight) {
int selectedResolution = 0;
if(previewHeight <= Constants.RESOLUTION_LOW) {
selectedResolution = 0;
} else if (previewHeight > Constants.RESOLUTION_LOW && previewHeight <= Constants.RESOLUTION_MEDIUM) {
selectedResolution = 1;
} else if (previewHeight > Constants.RESOLUTION_MEDIUM && previewHeight <= Constants.RESOLUTION_HIGH) {
selectedResolution = 2;
}
return selectedResolution;
}
public static class ResolutionComparator implements Comparator {
@Override
public int compare(Camera.Size size1, Camera.Size size2) {
if(size1.height != size2.height)
return size1.height -size2.height;
else
return size1.width - size2.width;
}
}
public static void concatenateMultipleFiles(String inpath, String outpath)
{
File Folder = new File(inpath);
File files[];
files = Folder.listFiles();
if(files.length>0)
{
for(int i = 0;ilibencoding.so";
}
private static HashMap getMetaData()
{
HashMap localHashMap = new HashMap();
localHashMap.put("creation_time", new SimpleDateFormat("yyyy_MM_dd_HH_mm_ss_SSSZ").format(new Date()));
return localHashMap;
}
public static int getTimeStampInNsFromSampleCounted(int paramInt) {
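//Note: despite the "Ns" in the name, this converts a 44.1 kHz sample count
//to microseconds (samples * 1000000 / 44100 == samples / 0.0441).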
return (int)(paramInt / 0.0441D);
}
/*public static void saveReceivedFrame(SavedFrames frame) {
File cachePath = new File(frame.getCachePath());
BufferedOutputStream bos;
try {
bos = new BufferedOutputStream(new FileOutputStream(cachePath));
if (bos != null) {
bos.write(frame.getFrameBytesData());
bos.flush();
bos.close();
}
} catch (FileNotFoundException e) {
e.printStackTrace();
cachePath = null;
} catch (IOException e) {
e.printStackTrace();
cachePath = null;
}
}*/
public static Toast showToast(Context context, String textMessage, int timeDuration) {
if (null == context) {
return null;
}
textMessage = (null == textMessage ? "Oops! " : textMessage.trim());
Toast t = Toast.makeText(context, textMessage, timeDuration);
t.show();
return t;
}
public static void showDialog(Context context, String title, String content, int type, final Handler handler) {
final Dialog dialog = new Dialog(context, R.style.Dialog_loading);
dialog.setCancelable(true);
LayoutInflater inflater = LayoutInflater.from(context);
View view = inflater.inflate(R.layout.global_dialog_tpl, null);
Button confirmButton = (Button) view.findViewById(R.id.setting_account_bind_confirm);
Button cancelButton = (Button) view.findViewById(R.id.setting_account_bind_cancel);
TextView dialogTitle = (TextView) view.findViewById(R.id.global_dialog_title);
View line_hori_center = view.findViewById(R.id.line_hori_center);
confirmButton.setVisibility(View.GONE);
line_hori_center.setVisibility(View.GONE);
TextView textView = (TextView) view.findViewById(R.id.setting_account_bind_text);
Window dialogWindow = dialog.getWindow();
WindowManager.LayoutParams lp = dialogWindow.getAttributes();
lp.width = (int) (context.getResources().getDisplayMetrics().density*288);
dialogWindow.setAttributes(lp);
if(type != 1 && type != 2){
type = 1;
}
dialogTitle.setText(title);
textView.setText(content);
if(type == 1 || type == 2){
confirmButton.setVisibility(View.VISIBLE);
confirmButton.setOnClickListener(new OnClickListener(){
@Override
public void onClick(View v){
if(handler != null){
Message msg = handler.obtainMessage();
msg.what = 1;
handler.sendMessage(msg);
}
dialog.dismiss();
}
});
}
// Cancel button event
if(type == 2){
cancelButton.setVisibility(View.VISIBLE);
line_hori_center.setVisibility(View.VISIBLE);
cancelButton.setOnClickListener(new OnClickListener(){
@Override
public void onClick(View v){
if(handler != null){
Message msg = handler.obtainMessage();
msg.what = 0;
handler.sendMessage(msg);
}
dialog.dismiss();
}
});
}
dialog.addContentView(view, new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT));
dialog.setCancelable(true);// close when the back key is pressed
dialog.setCanceledOnTouchOutside(true);// close when touched outside the dialog
dialog.show();
}
public IplImage getFrame(String filePath) {
Log.e("util", "getFrame" + filePath);
CvCapture capture = cvCreateFileCapture(filePath);
Log.e("util", "capture " + capture);
IplImage image = cvQueryFrame(capture);
Log.e("util", "image " + image);
return image;
}