
Media (2)
-
Aber Ildut granite
9 September 2011, by
Updated: September 2011
Language: French
Type: Text
-
Geodiversity
9 September 2011, by
Updated: August 2018
Language: French
Type: Text
Other articles (103)
-
Update from version 0.1 to 0.2
24 June 2013, by
Explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What's new?
Software dependencies: the latest FFmpeg versions are used (>= v1.2.1); the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer (...)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
-
Making files available
14 April 2011, by
By default, when it is initialized, MediaSPIP does not let visitors download files, whether originals or the result of their transformation or encoding; it only lets them be viewed.
It is nevertheless easy to give visitors access to these documents, in various forms.
All of this happens on the skeleton configuration page: go to the channel's administration area and choose, in the navigation (...)
On other sites (6930)
-
Is there a case where I cannot create a file if I run a lot of threads? (feat. FFmpeg, thread)
26 June 2018, by Junburg
I am building an app that lets the user duet with a singer. It uses a thread (mAudioPlayer) that plays the background music, a vAudioPlayer that plays the singer's voice, an mRecordThread that records the user's voice, and a thread that stitches the resulting files together (mp3ConcatThread).
While the user is recording, the singer's voice is muted and the background music keeps playing; when the user is not recording, the singer's voice is played. Each section has to be written out as an MP3 file and the pieces merged into a single file. However, it often happens that files are not recorded or merged properly.
Audio processing is done with FFmpeg. I suspect the following error might be the reason:
06-26 21:37:11.084 13017-13017/com.softcode.kihnoplay I/Choreographer: Skipped 72 frames! The application may be doing too much work on its main thread.
Could a file fail to be generated because of this kind of error?
If you know the answer to this question, please answer. Thank you.
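For what it's worth: the Choreographer message usually just means the main thread is overloaded; on its own it does not stop a file from being created, but running encoding or merging work on the main thread can starve everything else. A minimal sketch (assuming an ffmpeg-kit style API as used elsewhere on this page; the paths and the segments.txt list file are placeholders) that runs the merge off the main thread and verifies the result before using it:

import com.arthenica.ffmpegkit.FFmpegKit;
import com.arthenica.ffmpegkit.FFmpegSession;
import java.io.File;

// Run the MP3 merge on a worker thread and only treat it as done
// when ffmpeg reports success AND the output file actually exists.
new Thread(() -> {
    String listFile = "/sdcard/tmp/segments.txt"; // placeholder: lines like  file '/sdcard/tmp/seg_000.mp3'
    String outFile = "/sdcard/ucc/duet.mp3";      // placeholder output path
    FFmpegSession session = FFmpegKit.execute(
            "-f concat -safe 0 -i " + listFile + " -c copy " + outFile);
    boolean ok = session.getReturnCode().isValueSuccess() && new File(outFile).exists();
    if (!ok) {
        android.util.Log.e("Mp3Concat", "merge failed: " + session.getAllLogsAsString());
    }
}).start();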
Below is the related code. For more information, please leave a comment. Because the code is too long, I have included only the parts that seem relevant.

Record_Thread.class
public class Record_Thread {
private static final String LOG_TAG = Record_Thread.class.getSimpleName();
private static final int SAMPLE_RATE = 44100;
private int bufferSize = 0;
private String currentOutFile = null;
private Context context;
byte RECORDER_BPP = 16;
public Record_Thread(Record_interface listener) {
mListener = listener;
Player.currentCreateFileName = SmcInfo.APPDIRPATH + "/ucc/" + Player.getCurrentTime(false);
currentOutFile = Player.currentCreateFileName + ".pcm";
}
public Record_Thread(Record_interface listener, Context context) {
mListener = listener;
RecordActivity.currentCreateFileName = SmcInfo.APPDIRPATH + "/ucc/" + RecordActivity.getCurrentTime(false);
currentOutFile = RecordActivity.currentCreateFileName + ".pcm";
this.context = context;
}
private boolean isSampleTranspo;
private boolean isRecording;
public boolean isSharding = false;
private Record_interface mListener;
private Thread mThread;
public boolean recording() {
return mThread != null;
}
public void setSampleTranspo(boolean booleanValue) {
this.isSampleTranspo = booleanValue;
}
public boolean getSampleTranspo() {
return this.isSampleTranspo;
}
long startpoint = 0;
boolean posWrite = false;
public void startRecording() {
if (mThread != null)
return;
isRecording = true;
mThread = new Thread(new Runnable() {
@Override
public void run() {
record();
}
});
mThread.start();
}
public void stopRecording() {
if (mThread == null)
return;
isRecording = false;
mThread = null;
posWrite = false;
startpoint = 0;
}
public void startFileWrite(long startpoint) {
this.startpoint = startpoint;
this.posWrite = true;
}
public void stopFileWrite() {
this.posWrite = false;
}
private void record() {
try {
Log.v(LOG_TAG, "Start");
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_AUDIO);
bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
if (bufferSize == AudioRecord.ERROR || bufferSize == AudioRecord.ERROR_BAD_VALUE) {
bufferSize = SAMPLE_RATE * 2;
}
short[] audioBuffer = new short[bufferSize];
short[] audioZero = new short[bufferSize];
AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
if (record.getState() != AudioRecord.STATE_INITIALIZED) {
Log.e(LOG_TAG, "Audio Record can't initialize!");
return;
}
record.startRecording();
Log.v(LOG_TAG, "Start recording");
long shortsRead = 0;
int readsize = 0;
File tempFile = new File(currentOutFile);
if (tempFile.exists())
tempFile.delete();
FileOutputStream fos = new FileOutputStream(currentOutFile);
byte[] audiodata = new byte[bufferSize];
while (isRecording && record != null) {
readsize = record.read(audiodata, 0, audiodata.length);
if (AudioRecord.ERROR_INVALID_OPERATION != readsize && fos != null) {
try {
if (readsize > 0 && readsize <= audiodata.length) {
fos.write(audiodata, 0, readsize);//TypeCast.shortToByte(audioBuffer)
fos.flush();
}
} catch (Exception ex) {
Log.e("AudioRecorder", ex.getMessage());
}
}
ShortBuffer sb = ByteBuffer.wrap(audiodata).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
short[] samples = new short[sb.limit()];
sb.get(samples);
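// Hand the listener real samples only while the user is recording
// (isSampleTranspo); otherwise pass a zero-filled buffer of the same size.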
if (isSampleTranspo) {
mListener.onAudioDataReceived(samples);
} else {
mListener.onAudioDataReceived(audioZero);
}
if (posWrite) {
FileOutputStream pos = null;
try {
if (startpoint > 0) {
if (context instanceof RecordActivity) {
pos = new FileOutputStream(RecordActivity.currentCreateFileName.replaceAll("/ucc/", "/tmp/") + "_" + caltime(String.valueOf((int) (startpoint / 1000)), false) + "_uv.pcm", true); // append to the existing file
Log.d(LOG_TAG, "record: " + pos.toString());
} else {
pos = new FileOutputStream(Player.currentCreateFileName.replaceAll("/ucc/", "/tmp/") + "_" + caltime(String.valueOf((int) (startpoint / 1000)), false) + "_uv.pcm", true); // append to the existing file
}
}
if (pos != null) { // pos stays null when startpoint <= 0
pos.write(audiodata);
pos.flush();
}
} catch (Exception e) {
e.printStackTrace();
} finally {
if (pos != null) {
pos.close();
pos = null;
}
}
}
}
if (fos != null)
fos.close();
mListener.onRecordEnd();
record.stop();
record.release();
} catch (IOException e) {
Log.e("AudioRecorder", e.getMessage());
}
}
private String caltime(String sMillis, boolean timeFormat) {
double dMillis = 0;
int minutes = 0;
int seconds = 0;
int millis = 0;
String sTime;
try {
dMillis = Double.parseDouble(sMillis);
} catch (Exception e) {
System.out.println(e.getMessage());
}
seconds = (int) (dMillis / 1000) % 60;
millis = (int) (dMillis % 1000);
if (seconds > 0) {
minutes = (int) (dMillis / 1000 / 60) % 60;
if (minutes > 0) {
if (timeFormat)
sTime = String.format("%02d:%02d.%d", minutes, seconds, millis);
else
sTime = String.format("%02d%02d%d", minutes, seconds, millis);
} else {
if (timeFormat)
sTime = String.format("%02d:%02d.%d", 0, seconds, millis);
else
sTime = String.format("%02d%02d%d", 0, seconds, millis);
}
} else {
if (timeFormat)
sTime = String.format("%02d:%02d.%d", 0, 0, millis);
else
sTime = String.format("%02d%02d%d", 0, 0, millis);
}
return sTime;
}
}

RecordActivity.class
public class RecordActivity extends AppCompatActivity implements Player_interface, SeekBar.OnSeekBarChangeListener {
private static final String TAG = "RecordActivity";
public Context context = this;
private LinearLayout recordLayout;
private RelativeLayout recordBtn, saveBtn;
private CircleImageView userImg, artistImg;
private TextView songTitleTxt, playTimeTxt, progressTimeTxt;
private BlurBitmap blurBitmap;
private SeekBar seekBar;
private ImageView micBg1, micBg2;
private String assPath;
private String ampPath;
private int deviceWidth, deviceHeight;
public static AssRenderView assView;
public static LinearLayout lyricsLayout;
public static int lyricsWidth, lyricsHeight, layoutWidth;
public static LinearLayout.LayoutParams assViewParams;
public static String currentCreateFileName = null;
public static String mrPath;
public static String voicePath;
private String recMusicPath;
Player_Thread mAudioPlayer = null, vAudioPlayer = null, testPlayer = null;
private Record_Thread mRecordThread;
public static Mp3Concat_Thread mMp3ConcatThread;
long lastDuration = 0L;
private boolean isSeekbarTouch = false;
private ArrayList<Long> combineList;
CNetProgressdialog createMp3Dialog;
int bufferSize = 7104;
int SAMPLE_RATE = 44100;
int RECORDER_SAMPLERATE = 44100;
byte RECORDER_BPP = 16;
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
removeDir(SmcInfo.APPDIRPATH + "/tmp");
setContentView(R.layout.activity_record_phone);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) {
Window window = getWindow();
window.addFlags(WindowManager.LayoutParams.FLAG_LAYOUT_NO_LIMITS);
window.addFlags(WindowManager.LayoutParams.FLAG_TRANSLUCENT_NAVIGATION);
}
recordLayout = (LinearLayout) findViewById(R.id.record_layout);
userImg = (CircleImageView) findViewById(R.id.user_img);
artistImg = (CircleImageView) findViewById(R.id.artist_img);
songTitleTxt = (TextView) findViewById(R.id.song_title_txt);
progressTimeTxt = (TextView) findViewById(R.id.progress_time_txt);
playTimeTxt = (TextView) findViewById(R.id.play_time_txt);
recordBtn = (RelativeLayout) findViewById(R.id.record_btn);
saveBtn = (RelativeLayout) findViewById(R.id.save_btn);
seekBar = (SeekBar) findViewById(R.id.seek_bar);
micBg1 = (ImageView) findViewById(R.id.mic_bg_small);
micBg2 = (ImageView) findViewById(R.id.mic_bg_big);
createMp3Dialog = new CNetProgressdialog(this);
GradientDrawable drawable = new GradientDrawable();
drawable.setColors(new int[]{
Color.parseColor("#32c49b"),
Color.parseColor("#19b2c3")
});
Intent intent = getIntent();
final String artistImgPath = intent.getStringExtra("artistImgPath");
final String songTitle = intent.getStringExtra("songTitle");
assPath = intent.getStringExtra("assPath");
ampPath = intent.getStringExtra("ampPath");
String playTime = intent.getStringExtra("playTime");
blurBitmap = new BlurBitmap();
songTitleTxt.setText(songTitle);
playTimeTxt.setText(playTime);
final Bitmap artistImgBitmap = blurBitmap.toBitmap(artistImgPath);
final Bitmap userImgBitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.dummy_artist_2);
final Bitmap userBlurImg = blurBitmap.blurRenderScript(this, userImgBitmap, 25);
final Bitmap artistBlurImg = blurBitmap.blurRenderScript(this, artistImgBitmap, 25);
artistImg.setImageBitmap(artistImgBitmap);
userImg.setImageBitmap(userBlurImg);
drawable.setGradientType(GradientDrawable.LINEAR_GRADIENT);
drawable.setOrientation(GradientDrawable.Orientation.TOP_BOTTOM);
recordLayout.setBackground(drawable);
play(ampToMp3(ampPath));
mRecordThread = new Record_Thread(new Record_interface() {
@Override
public void onAudioDataReceived(short[] data) {
}
@Override
public void onRecordEnd() {
}
}, context);
mMp3ConcatThread = new Mp3Concat_Thread(new Mp3Concat_interface() {
@Override
public void onAudioDataReceived(short[] data) {
}
@Override
public void onRecordEnd() {
createMp3Dialog.dismiss();
startPrelisteningActivity(recMusicPath, songTitle);
}
}, this);
if (!mRecordThread.recording()) {
mRecordThread.startRecording();
}
final Animation animZoomIn = AnimationUtils.loadAnimation(this, R.anim.zoom_in);
final Animation animZoomOut = AnimationUtils.loadAnimation(this, R.anim.zoom_out);
final Animation animMic1 = AnimationUtils.loadAnimation(this, R.anim.bg_mic_anim_1_phone);
final Animation animMic2 = AnimationUtils.loadAnimation(this, R.anim.bg_mic_anim_2_phone);
artistImg.startAnimation(animZoomIn);
combineList = new ArrayList<Long>();
recordBtn.setOnTouchListener(new View.OnTouchListener() {
@Override
public boolean onTouch(View view, MotionEvent motionEvent) {
switch (motionEvent.getAction()) {
case MotionEvent.ACTION_DOWN: {
long currentDuration = vAudioPlayer.getCurrentDuration();
// start recording (when combineList size is even)
if (mRecordThread != null) {
if (combineList.size() % 2 == 0) {
mRecordThread.startFileWrite(currentDuration);
combineList.add(currentDuration);
}
vAudioPlayer.setSampleTranspo(true);
mRecordThread.setSampleTranspo(true);
}
}
micBg1.setVisibility(View.VISIBLE);
micBg2.setVisibility(View.VISIBLE);
micBg1.startAnimation(animMic1);
micBg2.startAnimation(animMic2);
userImg.setImageBitmap(userImgBitmap);
userImg.startAnimation(animZoomIn);
artistImg.setImageBitmap(artistBlurImg);
artistImg.startAnimation(animZoomOut);
break;
case MotionEvent.ACTION_UP: {
long currentDuration = vAudioPlayer.getCurrentDuration();
if (mRecordThread != null) {
if (combineList.size() % 2 == 1) {
mRecordThread.startRecording();
mRecordThread.stopFileWrite();
File waveFile = new File(RecordActivity.currentCreateFileName.replaceAll("/ucc/", "/tmp/")
+ "_" + caltime(combineList.get(combineList.size() - 1) / 1000, false) + "_uv.pcm");
if (waveFile.exists()) {
copyWaveFile(RecordActivity.currentCreateFileName.replaceAll("/ucc/", "/tmp/") + "_" + caltime(combineList.get(combineList.size() - 1) / 1000, false) + "_uv.pcm",
RecordActivity.currentCreateFileName.replaceAll("/ucc/", "/tmp/") + "_" + caltime(combineList.get(combineList.size() - 1) / 1000, false) + "_u0.wav");
Log.d(TAG, "onTouch: " + currentCreateFileName);
if (mMp3ConcatThread != null) {
mMp3ConcatThread.startCombine(null, 3333333333333333333L, combineList.get(combineList.size() - 1), currentDuration);
}
}
combineList.add(currentDuration);
Log.d(TAG, "onTouch: " + combineList.size());
if (combineList.size() == 2) {
mMp3ConcatThread.startCombine(null, 0, combineList.get(combineList.size() - 2), currentDuration);
} else {
mMp3ConcatThread.startCombine(null, combineList.get(combineList.size() - 3), combineList.get(combineList.size() - 2), currentDuration);
}
}
vAudioPlayer.setSampleTranspo(false);
mRecordThread.setSampleTranspo(false);
}
}
micBg1.setVisibility(View.GONE);
micBg2.setVisibility(View.GONE);
micBg1.clearAnimation();
micBg2.clearAnimation();
userImg.setImageBitmap(userBlurImg);
userImg.startAnimation(animZoomOut);
artistImg.setImageBitmap(artistImgBitmap);
artistImg.startAnimation(animZoomIn);
break;
}
return false;
}
});
saveBtn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
createMp3Dialog.show();
vAudioPlayer.setSampleTranspo(true);
vAudioPlayer.setlistenerStop(true);
if (assView != null)
assView.Destroy();
if (lyricsLayout != null) {
lyricsLayout.removeAllViews();
}
seekBar.setProgress(0);
seekBar.setMax(100);
Log.d(TAG, "donep3: " + "done");
if (mMp3ConcatThread != null) {
try {
mMp3ConcatThread.startCombine(combineList, 7777777777777777777L, combineList.get(combineList.size() - 1), lastDuration);
} catch (ArrayIndexOutOfBoundsException e) {
e.getMessage();
finish();
}
}
releaseAudioPlayer();
recMusicPath = SmcInfo.APPDIRPATH + "/ucc/" + currentCreateFileName.substring(currentCreateFileName.lastIndexOf('/') + 1, currentCreateFileName.length()) + ".mp3";
}
});
DisplayMetrics displayMetrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
deviceWidth = displayMetrics.widthPixels;
deviceHeight = displayMetrics.heightPixels;
lyricsWidth = deviceWidth;
lyricsHeight = deviceHeight;
Log.d(TAG, "onCreate: " + lyricsWidth + "/" + lyricsHeight);
layoutWidth = lyricsWidth * 2 / 3;
int parentAssViewHeight = ((lyricsHeight * 50) / 91) - 2;
if (layoutWidth > parentAssViewHeight)
layoutWidth = (parentAssViewHeight * 8) / 10;
assViewParams = new LinearLayout.LayoutParams(new ViewGroup.LayoutParams(layoutWidth * 2, layoutWidth));
assViewParams.gravity = Gravity.CENTER;
lyricsLayout = (LinearLayout)
findViewById(R.id.lyrics_layout);
if (assView != null) {
assView.Destroy();
}
if (lyricsLayout != null) {
lyricsLayout.removeAllViews();
}
assView = new AssRenderView(getApplicationContext(), layoutWidth * 13 / 10, layoutWidth);
File assFile = new File(assPath);
if (assFile.exists()) {
assView.ReadASSFile(assFile.toString(), true, layoutWidth * 2, layoutWidth * 5 / 7);
}
lyricsLayout.addView(assView, assViewParams);
lyricsLayout.setGravity(Gravity.CENTER);
assView.ShowASS(true);
seekBar.setOnSeekBarChangeListener(this);
seekBar.setProgress(0);
seekBar.setMax(100);
}
private void startPrelisteningActivity(String recMusicPath, String songTitle) {
Intent intent = new Intent(RecordActivity.this, PrelisteningActivity.class);
intent.putExtra("recMusicPath", recMusicPath);
intent.putExtra("songTitle", songTitle);
startActivityForResult(intent, 1);
}
private String[] ampToMp3(String ampPath) {
String[] pathArray = new String[2];
try {
File ampFile = new File(ampPath);
String ampName = ampFile.getName();
int size;
BufferedInputStream buf = null;
FileInputStream fis = null;
size = (int) ampFile.length();
byte[] bytes = new byte[size];
fis = new FileInputStream(ampFile);
buf = new BufferedInputStream(fis, 8 * 1024);
buf.read(bytes, 0, bytes.length);
byte[] vocalbytes = AMPFileUtility.getByteData(bytes, "voice");
byte[] mrbytes = AMPFileUtility.getByteData(bytes, "mr");
voicePath = SmcInfo.APPDIRPATH + "/audio/" + ampName.replaceAll(".amp", "") + "_voice.mp3";
mrPath = SmcInfo.APPDIRPATH + "/audio/" + ampName.replaceAll(".amp", "") + "_mr.mp3";
BufferedOutputStream bosVocal = new BufferedOutputStream(new FileOutputStream(voicePath));
bosVocal.write(vocalbytes);
bosVocal.flush();
bosVocal.close();
BufferedOutputStream bosMr = new BufferedOutputStream(new FileOutputStream(mrPath));
bosMr.write(mrbytes);
bosMr.flush();
bosMr.close();
} catch (Exception e) {
e.getMessage();
}
pathArray[0] = voicePath;
pathArray[1] = mrPath;
return pathArray;
}
private void play(String[] pathArray) {
releaseAudioPlayer();
String voicePath = pathArray[0];
String mrPath = pathArray[1];
mAudioPlayer = new Player_Thread();
mAudioPlayer.setOnAudioStreamInterface(this);
mAudioPlayer.setUrlString(mrPath);
mAudioPlayer.setlistenerStop(true);
vAudioPlayer = new Player_Thread();
vAudioPlayer.setOnAudioStreamInterface(this);
vAudioPlayer.setUrlString(voicePath);
vAudioPlayer.setlistenerStop(false);
try {
mAudioPlayer.play();
vAudioPlayer.play();
} catch (IOException e) {
e.printStackTrace();
}
}
private void releaseAudioPlayer() {
if (mAudioPlayer != null) {
mAudioPlayer.stop();
mAudioPlayer.release();
mAudioPlayer = null;
}
if (vAudioPlayer != null) {
vAudioPlayer.stop();
vAudioPlayer.release();
vAudioPlayer = null;
}
if (mRecordThread != null) {
mRecordThread.stopRecording();
}
}
public static String getCurrentTime(boolean dateForm) {
SimpleDateFormat dateFormat;
if (dateForm)
dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS"); //SSS가 밀리세컨드 표시
else
dateFormat = new SimpleDateFormat("yyyyMMdd_HHmmssSSS");
Calendar calendar = Calendar.getInstance();
return dateFormat.format(calendar.getTime());
}
private String caltime(long sMillis, boolean timeFormat) {
double dMillis = 0;
int minutes = 0;
int seconds = 0;
int millis = 0;
String sTime;
try {
dMillis = Double.parseDouble(String.valueOf(sMillis));
} catch (Exception e) {
System.out.println(e.getMessage());
}
seconds = (int) (dMillis / 1000) % 60;
millis = (int) (dMillis % 1000);
if (seconds > 0) {
minutes = (int) (dMillis / 1000 / 60) % 60;
if (minutes > 0) {
if (timeFormat)
sTime = String.format("%02d:%02d.%d", minutes, seconds, millis);
else
sTime = String.format("%02d%02d%d", minutes, seconds, millis);
} else {
if (timeFormat)
sTime = String.format("%02d:%02d.%d", 0, seconds, millis);
else
sTime = String.format("%02d%02d%d", 0, seconds, millis);
}
} else {
if (timeFormat)
sTime = String.format("%02d:%02d.%d", 0, 0, millis);
else
sTime = String.format("%02d%02d%d", 0, 0, millis);
}
Log.d(TAG, "caltime: " + sTime);
return sTime;
}
public void copyWaveFile(String inFilename, String outFilename) {
FileInputStream in = null;
FileOutputStream out = null;
long totalAudioLen = 0;
long totalDataLen = totalAudioLen + 36;
long longSampleRate = SAMPLE_RATE;
int channels = 2; // byte storage worked perfectly with 1; AudioFormat.CHANNEL_IN_MONO -> 1, CHANNEL_IN_STEREO -> 2
long byteRate = RECORDER_BPP * SAMPLE_RATE * channels / 8;
try {
in = new FileInputStream(inFilename);
out = new FileOutputStream(outFilename);
byte[] data = new byte[bufferSize];
totalAudioLen = in.getChannel().size();
totalDataLen = totalAudioLen + 36;
AppLog.logString("File size: " + totalDataLen);
WriteWaveFileHeader(out, totalAudioLen, totalDataLen, longSampleRate, channels, byteRate);
while (in.read(data) != -1) {
out.write(data);
}
in.close();
out.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
public void WriteWaveFileHeader(FileOutputStream out, long totalAudioLen, long totalDataLen, long longSampleRate, int channels, long byteRate) throws IOException {
byte[] header = new byte[44];
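// Canonical 44-byte PCM WAV header laid out below:
// bytes 0-3 "RIFF" | 4-7 RIFF chunk size (data + 36) | 8-11 "WAVE"
// 12-15 "fmt " | 16-19 fmt chunk size (16) | 20-21 format (1 = PCM)
// 22-23 channels | 24-27 sample rate | 28-31 byte rate
// 32-33 block align | 34-35 bits per sample | 36-39 "data" | 40-43 data size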
header[0] = 'R';
header[1] = 'I';
header[2] = 'F';
header[3] = 'F';
header[4] = (byte) (totalDataLen & 0xff);
header[5] = (byte) ((totalDataLen >> 8) & 0xff);
header[6] = (byte) ((totalDataLen >> 16) & 0xff);
header[7] = (byte) ((totalDataLen >> 24) & 0xff);
header[8] = 'W';
header[9] = 'A';
header[10] = 'V';
header[11] = 'E';
header[12] = 'f';
header[13] = 'm';
header[14] = 't';
header[15] = ' ';
header[16] = 16;
header[17] = 0;
header[18] = 0;
header[19] = 0;
header[20] = 1;
header[21] = 0;
header[22] = (byte) channels;
header[23] = 0;
header[24] = (byte) (longSampleRate & 0xff);
header[25] = (byte) ((longSampleRate >> 8) & 0xff);
header[26] = (byte) ((longSampleRate >> 16) & 0xff);
header[27] = (byte) ((longSampleRate >> 24) & 0xff);
header[28] = (byte) (byteRate & 0xff);
header[29] = (byte) ((byteRate >> 8) & 0xff);
header[30] = (byte) ((byteRate >> 16) & 0xff);
header[31] = (byte) ((byteRate >> 24) & 0xff);
header[32] = (byte) (2 * 16 / 8);
header[33] = 0;
header[34] = RECORDER_BPP;
header[35] = 0;
header[36] = 'd';
header[37] = 'a';
header[38] = 't';
header[39] = 'a';
header[40] = (byte) (totalAudioLen & 0xff);
header[41] = (byte) ((totalAudioLen >> 8) & 0xff);
header[42] = (byte) ((totalAudioLen >> 16) & 0xff);
header[43] = (byte) ((totalAudioLen >> 24) & 0xff);
out.write(header, 0, 44);
}
-
How to stream synchronized video and audio in real-time from an Android smartphone using HLS while preserving orientation metadata?
6 March, by Jérôme LAROSE
Hello,
I am working on an Android application where I need to stream video from one or two cameras on my smartphone, along with audio from the microphone, in real time via a link or web page accessible to users. The stream should be live, allow rewinding (DVR functionality), and be recorded simultaneously. A latency of 1 to 2 minutes is acceptable, and the streaming is one-way.

I have chosen HLS (HTTP Live Streaming) for its browser compatibility and DVR support. However, I am encountering issues with audio-video synchronization, managing camera orientation metadata, and format conversions.



Here are my attempts:

1. MP4 segmentation with MediaRecorder (a sketch of this roll-over pattern follows these lists)
- I used MediaRecorder with setNextOutputFile to generate short MP4 segments, then ffmpeg-kit to convert them to fMP4 for HLS.
- Expected: well-aligned segments for smooth HLS playback.
- Result: timestamp issues causing jumps or interruptions in playback.

2. MPEG2-TS via local socket
- I configured MediaRecorder to produce an MPEG2-TS stream sent via a local socket to ffmpeg-kit.
- Expected: stable streaming with preserved metadata.
- Result: streaming works, but orientation metadata is lost, leading to incorrectly oriented video (e.g., rotated 90°).

3. Orientation correction with ffmpeg
- I tested -vf transpose=1 in ffmpeg to correct the orientation.
- Expected: correctly oriented video without excessive latency.
- Result: re-encoding takes too long for real-time streaming, causing unacceptable latency.

4. MPEG2-TS to fMP4 conversion
- I converted the MPEG2-TS stream to fMP4 with ffmpeg to preserve orientation.
- Expected: perfect audio-video synchronization.
- Result: slight desynchronization between audio and video, affecting the user experience.

I am looking for a solution to:

- Stream an HLS feed from Android with correctly timestamped segments.
- Preserve orientation metadata without heavy re-encoding.
- Ensure perfect audio-video synchronization.
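For reference, a minimal sketch of the MediaRecorder.setNextOutputFile roll-over pattern from attempt 1 (assuming API 26+; SegmentedRecorder, segDir and the segment names are placeholder names, not the code actually used here):

import android.media.MediaRecorder;
import java.io.File;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

public class SegmentedRecorder {
    // Rolls MediaRecorder output across fixed-length MP4 segments without stopping.
    public static MediaRecorder start(File segDir) throws IOException {
        AtomicInteger index = new AtomicInteger(1);
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setMaxDuration(5_000); // switch files every ~5 s
        recorder.setOutputFile(new File(segDir, "seg_000.mp4"));
        recorder.setOnInfoListener((mr, what, extra) -> {
            // When recording moves on to the queued file, queue the one after it.
            if (what == MediaRecorder.MEDIA_RECORDER_INFO_NEXT_OUTPUT_FILE_STARTED) {
                try {
                    mr.setNextOutputFile(new File(segDir,
                            String.format("seg_%03d.mp4", index.incrementAndGet())));
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        });
        recorder.prepare();
        recorder.start();
        recorder.setNextOutputFile(new File(segDir, "seg_001.mp4")); // first roll-over target
        return recorder;
    }
}

Even with a clean roll-over, each segment's timestamps start near zero again, which may explain the jumps described above.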








UPDATE


package com.example.angegardien

import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.graphics.SurfaceTexture
import android.hardware.camera2.*
import android.media.*
import android.os.*
import android.util.Log
import android.view.Surface
import android.view.TextureView
import android.view.WindowManager
import androidx.activity.ComponentActivity
import androidx.core.app.ActivityCompat
import com.arthenica.ffmpegkit.FFmpegKit
import fi.iki.elonen.NanoHTTPD
import kotlinx.coroutines.*
import java.io.File
import java.io.IOException
import java.net.ServerSocket
import android.view.OrientationEventListener

/**
 * MainActivity class:
 * - Manages camera operations using the Camera2 API.
 * - Records video using MediaRecorder.
 * - Pipes data to FFmpeg to generate HLS segments.
 * - Hosts a local HLS server using NanoHTTPD to serve the generated HLS content.
 */
class MainActivity : ComponentActivity() {

 // TextureView used for displaying the camera preview.
 private lateinit var textureView: TextureView
 // Camera device instance.
 private lateinit var cameraDevice: CameraDevice
 // Camera capture session for managing capture requests.
 private lateinit var cameraCaptureSession: CameraCaptureSession
 // CameraManager to access camera devices.
 private lateinit var cameraManager: CameraManager
 // Directory where HLS output files will be stored.
 private lateinit var hlsDir: File
 // Instance of the HLS server.
 private lateinit var hlsServer: HlsServer

 // Camera id ("1" corresponds to the rear camera).
 private val cameraId = "1"
 // Flag indicating whether recording is currently active.
 private var isRecording = false

 // MediaRecorder used for capturing audio and video.
 private lateinit var activeRecorder: MediaRecorder
 // Surface for the camera preview.
 private lateinit var previewSurface: Surface
 // Surface provided by MediaRecorder for recording.
 private lateinit var recorderSurface: Surface

 // Port for the FFmpeg local socket connection.
 private val ffmpegPort = 8080

 // Coroutine scope to manage asynchronous tasks.
 private val scope = CoroutineScope(Dispatchers.IO + SupervisorJob())

 // Variables to track current device rotation and listen for orientation changes.
 private var currentRotation = 0
 private lateinit var orientationListener: OrientationEventListener

 override fun onCreate(savedInstanceState: Bundle?) {
 super.onCreate(savedInstanceState)

 // Initialize the TextureView and set it as the content view.
 textureView = TextureView(this)
 setContentView(textureView)

 // Get the CameraManager system service.
 cameraManager = getSystemService(CAMERA_SERVICE) as CameraManager
 // Setup the directory for HLS output.
 setupHLSDirectory()

 // Start the local HLS server on port 8081.
 hlsServer = HlsServer(8081, hlsDir, this)
 try {
 hlsServer.start()
 Log.d("HLS_SERVER", "HLS Server started on port 8081")
 } catch (e: IOException) {
 Log.e("HLS_SERVER", "Error starting HLS Server", e)
 }

 // Initialize the current rotation.
 currentRotation = getDeviceRotation()

 // Add a listener to detect orientation changes.
 orientationListener = object : OrientationEventListener(this) {
 override fun onOrientationChanged(orientation: Int) {
 if (orientation == ORIENTATION_UNKNOWN) return // Skip unknown orientations.
 // Determine the new rotation angle.
 val newRotation = when {
 orientation >= 315 || orientation < 45 -> 0
 orientation >= 45 && orientation < 135 -> 90
 orientation >= 135 && orientation < 225 -> 180
 orientation >= 225 && orientation < 315 -> 270
 else -> 0
 }
 // If the rotation has changed and recording is active, update the rotation.
 if (newRotation != currentRotation && isRecording) {
 Log.d("ROTATION", "Orientation change detected: $newRotation")
 currentRotation = newRotation
 }
 }
 }
 orientationListener.enable()

 // Set up the TextureView listener to know when the surface is available.
 textureView.surfaceTextureListener = object : TextureView.SurfaceTextureListener {
 override fun onSurfaceTextureAvailable(surface: SurfaceTexture, width: Int, height: Int) {
 // Open the camera when the texture becomes available.
 openCamera()
 }
 override fun onSurfaceTextureSizeChanged(surface: SurfaceTexture, width: Int, height: Int) {}
 override fun onSurfaceTextureDestroyed(surface: SurfaceTexture) = false
 override fun onSurfaceTextureUpdated(surface: SurfaceTexture) {}
 }
 }

 /**
 * Sets up the HLS directory in the public Downloads folder.
 * If the directory exists, it deletes it recursively and creates a new one.
 */
 private fun setupHLSDirectory() {
 val downloadsDir = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS)
 hlsDir = File(downloadsDir, "HLS_Output")

 if (hlsDir.exists()) {
 hlsDir.deleteRecursively()
 }
 hlsDir.mkdirs()

 Log.d("HLS", "📂 HLS folder created: ${hlsDir.absolutePath}")
 }

 /**
 * Opens the camera after checking for necessary permissions.
 */
 private fun openCamera() {
 if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED ||
 ActivityCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
 // Request permissions if they are not already granted.
 ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.CAMERA, Manifest.permission.RECORD_AUDIO), 101)
 return
 }

 try {
 // Open the specified camera using its cameraId.
 cameraManager.openCamera(cameraId, object : CameraDevice.StateCallback() {
 override fun onOpened(camera: CameraDevice) {
 cameraDevice = camera
 // Start the recording session once the camera is opened.
 startNextRecording()
 }
 override fun onDisconnected(camera: CameraDevice) { camera.close() }
 override fun onError(camera: CameraDevice, error: Int) { camera.close() }
 }, null)
 } catch (e: CameraAccessException) {
 e.printStackTrace()
 }
 }

 /**
 * Starts a new recording session:
 * - Sets up the preview and recorder surfaces.
 * - Creates a pipe for MediaRecorder output.
 * - Creates a capture session for simultaneous preview and recording.
 */
 private fun startNextRecording() {
 // Get the SurfaceTexture from the TextureView and set its default buffer size.
 val texture = textureView.surfaceTexture!!
 texture.setDefaultBufferSize(1920, 1080)
 // Create the preview surface.
 previewSurface = Surface(texture)

 // Create and configure the MediaRecorder.
 activeRecorder = createMediaRecorder()

 // Create a pipe to route MediaRecorder data.
 val pipe = ParcelFileDescriptor.createPipe()
 val pfdWrite = pipe[1] // Write end used by MediaRecorder.
 val pfdRead = pipe[0] // Read end used by the local socket server.

 // Set MediaRecorder output to the file descriptor of the write end.
 activeRecorder.setOutputFile(pfdWrite.fileDescriptor)
 setupMediaRecorder(activeRecorder)
 // Obtain the recorder surface from MediaRecorder.
 recorderSurface = activeRecorder.surface

 // Create a capture request using the RECORD template.
 val captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD)
 captureRequestBuilder.addTarget(previewSurface)
 captureRequestBuilder.addTarget(recorderSurface)

 // Create a capture session including both preview and recorder surfaces.
 cameraDevice.createCaptureSession(
 listOf(previewSurface, recorderSurface),
 object : CameraCaptureSession.StateCallback() {
 override fun onConfigured(session: CameraCaptureSession) {
 cameraCaptureSession = session
 captureRequestBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO)
 // Start a continuous capture request.
 cameraCaptureSession.setRepeatingRequest(captureRequestBuilder.build(), null, null)

 // Launch a coroutine to start FFmpeg and MediaRecorder with synchronization.
 scope.launch {
 startFFmpeg()
 delay(500) // Wait for FFmpeg to be ready.
 activeRecorder.start()
 isRecording = true
 Log.d("HLS", "🎥 Recording started...")
 }

 // Launch a coroutine to run the local socket server to forward data.
 scope.launch {
 startLocalSocketServer(pfdRead)
 }
 }
 override fun onConfigureFailed(session: CameraCaptureSession) {
 Log.e("Camera2", "❌ Configuration failed")
 }
 },
 null
 )
 }

 /**
 * Coroutine to start a local socket server.
 * It reads from the MediaRecorder pipe and sends the data to FFmpeg.
 */
 private suspend fun startLocalSocketServer(pfdRead: ParcelFileDescriptor) {
 withContext(Dispatchers.IO) {
 val serverSocket = ServerSocket(ffmpegPort)
 Log.d("HLS", "Local socket server started on port $ffmpegPort")

 // Accept connection from FFmpeg.
 val socket = serverSocket.accept()
 Log.d("HLS", "Connection accepted from FFmpeg")

 // Read data from the pipe and forward it through the socket.
 val inputStream = ParcelFileDescriptor.AutoCloseInputStream(pfdRead)
 val outputStream = socket.getOutputStream()
 val buffer = ByteArray(8192)
 var bytesRead: Int
 while (inputStream.read(buffer).also { bytesRead = it } != -1) {
 outputStream.write(buffer, 0, bytesRead)
 }
 outputStream.close()
 inputStream.close()
 socket.close()
 serverSocket.close()
 }
 }

 /**
 * Coroutine to start FFmpeg using a local TCP input.
 * Applies a video rotation filter based on device orientation and generates HLS segments.
 */
 private suspend fun startFFmpeg() {
 withContext(Dispatchers.IO) {
 // Retrieve the appropriate transpose filter based on current rotation.
 val transposeFilter = getTransposeFilter(currentRotation)

 // FFmpeg command to read from the TCP socket and generate an HLS stream.
 // Two alternative commands are commented below.
 // val ffmpegCommand = "-fflags +genpts -i tcp://localhost:$ffmpegPort -c copy -bsf:a aac_adtstoasc -movflags +faststart -f dash -seg_duration 10 -hls_playlist 1 ${hlsDir.absolutePath}/manifest.mpd"
 // val ffmpegCommand = "-fflags +genpts -i tcp://localhost:$ffmpegPort -c copy -bsf:a aac_adtstoasc -movflags +faststart -f hls -hls_time 5 -hls_segment_type fmp4 -hls_flags split_by_time -hls_list_size 0 -hls_playlist_type event -hls_fmp4_init_filename init.mp4 -hls_segment_filename ${hlsDir.absolutePath}/segment_%03d.m4s ${hlsDir.absolutePath}/playlist.m3u8"
 val ffmpegCommand = "-fflags +genpts -i tcp://localhost:$ffmpegPort -vf $transposeFilter -c:v libx264 -preset ultrafast -crf 23 -c:a copy -movflags +faststart -f hls -hls_time 0.1 -hls_segment_type mpegts -hls_flags split_by_time -hls_list_size 0 -hls_playlist_type event -hls_segment_filename ${hlsDir.absolutePath}/segment_%03d.ts ${hlsDir.absolutePath}/playlist.m3u8"

 FFmpegKit.executeAsync(ffmpegCommand) { session ->
 if (session.returnCode.isValueSuccess) {
 Log.d("HLS", "✅ HLS generated successfully")
 } else {
 Log.e("FFmpeg", "❌ Error generating HLS: ${session.allLogsAsString}")
 }
 }
 }
 }

 /**
 * Gets the current device rotation using the WindowManager.
 */
 private fun getDeviceRotation(): Int {
 val windowManager = getSystemService(Context.WINDOW_SERVICE) as WindowManager
 return when (windowManager.defaultDisplay.rotation) {
 Surface.ROTATION_0 -> 0
 Surface.ROTATION_90 -> 90
 Surface.ROTATION_180 -> 180
 Surface.ROTATION_270 -> 270
 else -> 0
 }
 }

 /**
 * Returns the FFmpeg transpose filter based on the rotation angle.
 * Used to rotate the video stream accordingly.
 */
 private fun getTransposeFilter(rotation: Int): String {
 return when (rotation) {
 90 -> "transpose=1" // 90° clockwise
 180 -> "transpose=2,transpose=2" // 180° rotation
 270 -> "transpose=2" // 90° counter-clockwise
 else -> "null" // pass-through; note transpose=0 is NOT a no-op (it rotates 90° CCW and flips vertically)
 }
 }

 /**
 * Creates and configures a MediaRecorder instance.
 * Sets up audio and video sources, formats, encoders, and bitrates.
 */
 private fun createMediaRecorder(): MediaRecorder {
 return MediaRecorder().apply {
 setAudioSource(MediaRecorder.AudioSource.MIC)
 setVideoSource(MediaRecorder.VideoSource.SURFACE)
 setOutputFormat(MediaRecorder.OutputFormat.MPEG_2_TS)
 setVideoEncodingBitRate(5000000)
 setVideoFrameRate(24)
 setVideoSize(1080, 720)
 setVideoEncoder(MediaRecorder.VideoEncoder.H264)
 setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
 setAudioSamplingRate(16000)
 setAudioEncodingBitRate(96000) // 96 kbps
 }
 }

 /**
 * Prepares the MediaRecorder and logs the outcome.
 */
 private fun setupMediaRecorder(recorder: MediaRecorder) {
 try {
 recorder.prepare()
 Log.d("HLS", "✅ MediaRecorder prepared")
 } catch (e: IOException) {
 Log.e("HLS", "❌ Error preparing MediaRecorder", e)
 }
 }

 /**
 * Custom HLS server class extending NanoHTTPD.
 * Serves HLS segments and playlists from the designated HLS directory.
 */
 private inner class HlsServer(port: Int, private val hlsDir: File, private val context: Context) : NanoHTTPD(port) {
 override fun serve(session: IHTTPSession): Response {
 val uri = session.uri.trimStart('/')

 // Intercept the request for `init.mp4` and serve it from assets.
 /*
 if (uri == "init.mp4") {
 Log.d("HLS Server", "📡 Intercepting init.mp4, sending file from assets...")
 return try {
 val assetManager = context.assets
 val inputStream = assetManager.open("init.mp4")
 newFixedLengthResponse(Response.Status.OK, "video/mp4", inputStream, inputStream.available().toLong())
 } catch (e: Exception) {
 Log.e("HLS Server", "❌ Error reading init.mp4 from assets: ${e.message}")
 newFixedLengthResponse(Response.Status.INTERNAL_ERROR, MIME_PLAINTEXT, "Server error")
 }
 }
 */

 // Serve all other HLS files normally from the hlsDir.
 val file = File(hlsDir, uri)
 return if (file.exists()) {
 newFixedLengthResponse(Response.Status.OK, getMimeTypeForFile(uri), file.inputStream(), file.length())
 } else {
 newFixedLengthResponse(Response.Status.NOT_FOUND, MIME_PLAINTEXT, "File not found")
 }
 }
 }

 /**
 * Clean up resources when the activity is destroyed.
 * Stops recording, releases the camera, cancels coroutines, and stops the HLS server.
 */
 override fun onDestroy() {
 super.onDestroy()
 if (isRecording) {
 activeRecorder.stop()
 activeRecorder.release()
 }
 cameraDevice.close()
 scope.cancel()
 hlsServer.stop()
 orientationListener.disable()
 Log.d("HLS", "🛑 Activity destroyed")
 }
}



I have three examples of ffmpeg commands:

- One command segments into DASH, but the camera does not have the correct rotation.
- One command segments into HLS without re-encoding, with 5-second segments; it is fast but does not have the correct rotation.
- One command segments into HLS with re-encoding, which applies the rotation. It is too slow for 5-second segments, so 1-second segments were chosen.

Note:

- In the second command (HLS without re-encoding), the output is fMP4. To achieve the correct rotation, I serve a preconfigured init.mp4 file in response to the HTTP request that fetches it (see the commented-out block in the HLS server).
- In the third command (HLS with re-encoding), the output is TS.






-
Issues with processing media on Windows Azure
23 September 2015, by Ahmed Mujtaba
I have a website built on ASP.NET Web Forms that works as a media portal where users upload videos. I use ffmpeg encoders to produce the video content that is streamed in the browser, and I publish the site to the Azure server with the web-deploy method. The website gets deployed properly, but I hit the following issues on the live site:
-
Video never gets encoded and published; I get some sort of error.
-
Video gets published, but uploading and encoding the video is far too slow on the web server.
My project solution contains upload.ashx, which handles the upload requests and calls encode.ashx, which is responsible for encoding and publishing the videos. I tried to debug the site remotely, but the debugger never reaches encode.ashx.
I was wondering if these issues can be resolved by deploying the website on a VM?
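Both symptoms are consistent with the encode running inside the HTTP request pipeline, where request timeouts and worker recycling can kill it. As a language-agnostic illustration of the usual fix (a Java sketch under that assumption; EncodeQueue and the ffmpeg arguments are hypothetical, not the actual ASP.NET handlers), the upload request should return immediately and the encode should run on a background worker:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class EncodeQueue {
    // Single background worker: uploads return at once and encodes run in order.
    private static final ExecutorService WORKER = Executors.newSingleThreadExecutor();

    static void submit(String sourcePath) {
        WORKER.submit(() -> {
            try {
                Process p = new ProcessBuilder(
                        "ffmpeg", "-i", sourcePath, "-c:v", "libx264",
                        sourcePath + ".mp4")
                        .inheritIO() // avoid blocking on unread ffmpeg output
                        .start();
                if (p.waitFor() != 0) { // blocks the worker, not the request
                    System.err.println("encode failed for " + sourcePath);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
    }
}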
Script that uploads the video file:
var filesuploaded = 0;
var faileduploaded = 0;
$(function () {
var uploader = new plupload.Uploader({
runtimes: 'gears,html5,flash,silverlight,browserplus',
browse_button: '<%= pickfiles.ClientID %>',
container: 'container',
max_file_size: '<%= MaxMediaSize %>mb',
url: '<%=Config.GetUrl() %>videos/upload/upload.ashx',
flash_swf_url: '<%=Config.GetUrl() %>plupload/js/plupload.flash.swf',
silverlight_xap_url: '<%=Config.GetUrl() %>plupload/js/plupload.silverlight.xap',
chunk_size: '4mb',
<%= UniqueNames %>
filters: [
{ title: '<%= AllowedFormatsDisplay %>', extensions: '<%= AllowedFormats %>'}],
headers: { UName: '<%=UserName %>', MTP: '<%= MediaType %>' }
});
//uploader.bind('Init', function (up, params) {
// $('#filelist').html("<div>Current runtime: " + params.runtime + "</div>");
//});
uploader.init();
$('#uploadfiles').click(function (e) {
uploader.start();
e.preventDefault();
$("#uploadfiles").hide();
$("#<%= embd.ClientID %>").hide();
});
uploader.bind('FilesAdded', function (up, files) {
$("#uploadfiles").show();
$("#<%= msg.ClientID %>").html("");
var count=0;
$.each(files, function (i, file) {
$('#filelist').append(
'<div class="item_pad_4 bx_br_bt">' + (count + 1) + ': ' + file.name + ' (' + plupload.formatSize(file.size) + ') <b></b></div>' );
count++;
});
var maxupload = <%= MaxVideoUploads %>;
if(count > maxupload)
{
$.each(files, function(i, file) {
uploader.removeFile(file);
});
$('#filelist').html("");
$("#uploadfiles").hide();
Display_Message("#<%= msg.ClientID %>", "Can't upload more than " + maxupload + " records at once!", 1, 1);
return false;
}
else {
$("#tfiles").html(count);
$("#uploadfiles").removeClass("disabled");
$("#<%= pickfiles.ClientID %>").hide();
}
up.refresh(); // Reposition Flash/Silverlight
});
uploader.bind('UploadProgress', function (up, file) {
$('#' + file.id + " b").html(file.percent + "%");
});
uploader.bind('Error', function (up, err) {
$('#filelist').append("<div>Error: " + err.code +
", Message: " + err.message +
(err.file ? ", File: " + err.file.name : "") +
"</div>"
);
up.refresh(); // Reposition Flash/Silverlight
});
var failedstatus = 0;
uploader.bind('FileUploaded', function (up, file, info) {
// encode started
if (info.response != "failed" && info.response != "") {
EncodeVD(file.id, info.response, file.size);
Display_Message('#' + file.id, "Please wait for final processing", 0, 1);
if (failedstatus == 0)
Redirect(info.response);
filesuploaded++;
}
else {
Display_Message('#' + file.id, "Response is: " + info.response, 0, 1);
}
});
});
var redcnt = 0;
function Redirect(filename) {
var IntervalID = setInterval(function () {
redcnt++;
if (redcnt > 2) {
clearInterval(IntervalID);
var tfiles = $("#tfiles").html();
if(tfiles == faileduploaded) { // break further processing all videos failed to upload
}
else if (filesuploaded >= tfiles) {
document.location = "<%=ConfirmPageUrl %>?fn=" + filename + "&gid=<%=GalleryID %>&uvids=" + $("#tfiles").html() + "&mpid=" + $("#maxpid").html().trim() + "<%=GroupParam %>";
}
}
}, 2000);
}
function EncodeVD(mid, mfile, msize) {
var params = '<%= EncodingParams %>&fn=' + mfile;
$.ajax({
type: 'GET',
url: '<%= Encoding_Handler_Path %>',
data: params,
async: true,
success: function (msg) {
if (msg == "Success" || msg == "") {
$('#' + mid).html('<strong>Uploading Completed Successfully - Wait for Processing.</strong>');
}
else {
failedstatus = 1;
faileduploaded++;
Display_Message('#' + mid, "Response is: " + msg, 0, 1);
}
}
});
}
Server-side code for processing the file upload:
private int MediaType = 0; // 0 : video, 1: audio
public void ProcessRequest (HttpContext context) {
try
{
context.Response.ContentType = "text/plain";
context.Response.Write(ProcessMedia(context));
}
catch (Exception ex)
{
context.Response.Write("error|" + ex.Message);
}
}
public string ProcessMedia(HttpContext context)
{
if (context.Request.Files.Count > 0)
{
int chunk = context.Request["chunk"] != null ? int.Parse(context.Request["chunk"]) : 0;
string fileName = context.Request["name"] != null ? context.Request["name"] : string.Empty;
//string _fileName = fileName.Remove(fileName.LastIndexOf(".")) + "-" + Guid.NewGuid().ToString().Substring(0, 6) + "" + fileName.Remove(0, fileName.LastIndexOf("."));
HttpPostedFile fileUpload = context.Request.Files[0];
string upath = "";
if (context.Request.Headers["UName"] != null)
upath = context.Request.Headers["UName"].ToString();
//if (CloudSettings.EnableCloudStorage && upath != "")
// _fileName = upath.Substring(0, 3) + "-" + _fileName; // avoid duplication in cloud storage
if (context.Request.Headers["MTP"] != null)
MediaType = Convert.ToInt32(context.Request.Headers["MTP"]);
//string extensions = "";
//if (MediaType == 0)
// extensions = Site_Settings.Video_Allowed_Formats;
//else
// extensions = Site_Settings.Audio_Allowed_Formats;
//bool sts = UtilityBLL.Check_File_Extension(extensions, fileName.ToLower());
//if (sts == false)
//{
// return "Invalid format, please upload proper video!"; // Invalid video format, please upload proper video
//}
int allowable_size_mb = 0;
if (MediaType == 0)
{
allowable_size_mb = Site_Settings.Video_Max_Size;
}
else
{
allowable_size_mb = Site_Settings.Audio_Max_Size;
}
int UploadSize = allowable_size_mb * 1000000;
if (fileUpload.ContentLength > UploadSize)
{
return "Video Limit Exceeds";
}
string uploadPath = "";
// check whether audio / mp3 encoding enabled
if (this.MediaType == 1)
{
// audio encoding
if (fileName.EndsWith(".mp3"))
{
// upload mp3 directly in mp3 path instead of default path
if (upath == "")
uploadPath = UrlConfig.MP3_Path(); // source video path
else
uploadPath = UrlConfig.MP3_Path(upath); // source video path
}
else
{
// default path
if (upath == "")
uploadPath = UrlConfig.Source_Video_Path(); // source video path
else
uploadPath = UrlConfig.Source_Video_Path(upath); // source video path
}
}
else
{//azure
// default path
if (upath == "")
uploadPath = UrlConfig.Source_Video_Path(); // source video path
else
uploadPath = UrlConfig.Source_Video_Path(upath); // source video path
}
FileStream fs;
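// plupload sends the file in 4 MB chunks (see chunk_size in the upload script):
// chunk 0 creates the file and every later chunk appends, rebuilding the upload sequentially.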
using (fs = new FileStream(Path.Combine(uploadPath, fileName), chunk == 0 ? FileMode.Create : FileMode.Append))
{
byte[] buffer = new byte[fileUpload.InputStream.Length];
fileUpload.InputStream.Read(buffer, 0, buffer.Length);
fs.Write(buffer, 0, buffer.Length);
}
return fileName; // "Success";
}
else
{
return "failed";
}
return "";
}
public bool IsReusable {
get {
return false;
}
}

Code in encode.ashx responsible for encoding the video:
private string EncodeMedia(HttpContext context)
{
string sourcepath = "";
string publishedpath = "";
string mp3path = "";
string thumbpath = "";
if (this.UserName != "")
{//azure
sourcepath = UrlConfig.Source_Video_Path(this.UserName);
publishedpath = UrlConfig.Published_Video_Path(this.UserName);
mp3path = UrlConfig.MP3_Path(this.UserName);
thumbpath = UrlConfig.Thumbs_Path(this.UserName);
}
else
{
sourcepath = UrlConfig.Source_Video_Path();
publishedpath = UrlConfig.Published_Video_Path();
mp3path = UrlConfig.MP3_Path();
thumbpath = UrlConfig.Thumbs_Path();
}
if (this.FileName.EndsWith(".mp3") && this.MediaType == 1)
{
// mp3 and audio format
if (!File.Exists(mp3path + "/" + this.FileName))
{
return "Audio file not found!";
}
}
else
{
// rest normal video and audio encoding
if (!File.Exists(sourcepath + "/" + this.FileName))
{
return "Source file not found!";
}
}
if (CloudSettings.EnableCloudStorage && this.UserName != "")
this.FileName = this.UserName.Substring(0, 3) + "-" + this.FileName; // avoid duplication in cloud storage
//double f_contentlength = 0;
//if (Site_Settings.Feature_Packages == 1)
//{
// if (Config.GetMembershipAccountUpgradeType() != 1)
// {
// // Check whether user have enough space to upload content
// // Restriction only for normal or premium users
// f_contentlength = (double)fileUpload.ContentLength / 1000000;
// string media_field_name = "space_video";
// if (MediaType == 1)
// media_field_name = "space_audio";
// if (!User_PackagesBLL.Check_User_Space_Status(upath, media_field_name, f_contentlength) && !isAdmin)
// {
// // insufficient credits to upload content
// return "Insufficient credits to upload media file"; // Response.Redirect(Config.GetUrl("myaccount/packages.aspx?status=" + media_field_name), true);
// }
// }
//}
this.backgroundpublishing = true; // should be true on direct encoding
// Video Processing
string flv_filename = "";
string original_filename = "";
string thumb_filename = "";
string duration = "";
int duration_sec = 0;
// set video actions : 1 -> on, 0 -> off
int isenabled = 1;
int ispublished = 1;
int isreviewed = 1;
int isresponse = 0;
if (Response_VideoID > 0)
isresponse = 1;
string flv_url = "none";
string thumb_url = "none";
string org_url = "none";
string _embed = "";
string errorcode = "0";
VideoInfo info = null;
if (Site_Settings.Content_Approval == 0)
isreviewed = 0;
// check whether audio / mp3 encoding enabled
if (this.FileName.EndsWith(".mp3") && this.MediaType==1)
{
// audio encoding
// mp3 file already
// so no encoding happens
MediaHandler _minfo = new MediaHandler();
_minfo.FFMPEGPath = Encoding_Settings.FFMPEGPATH;
_minfo.FileName = FileName;
_minfo.InputPath = mp3path;
info = _minfo.Get_Info();
flv_filename = FileName;
original_filename = FileName;
duration = info.Duration;
duration_sec = info.Duration_Sec;
isenabled = 1; // enabled
}
else if (this.directpublishing)
{
// publish video
ArrayList itags = new ArrayList();
MHPEncoder encoder = new MHPEncoder();
//if (this.FileName.EndsWith(".mpeg") || this.FileName.EndsWith(".mpg")) // use mpg compatible ffmpeg encoder
// encoder.FfmpegPath = HttpContext.Current.Server.MapPath(HttpContext.Current.Request.ApplicationPath) + "\\ffmpeg\\ffmpegbk\\ffmpeg.exe";
//encoder.ThumbFfmpegPath = Encoding_Settings.FFMPEGPATH; // use normal ffmpeg encoder for thumbs processing
//azure
encoder.FfmpegPath = Encoding_Settings.FFMPEGPATH; // use normal ffmpeg encoder
encoder.FlvToolPath = Encoding_Settings.FLVTOOLPATH; // set meta information for flv videos
encoder.Mp4BoxPath = Encoding_Settings.MP4BoxPath; // set meta information for mp4 videos
encoder.SourcePath = sourcepath;
encoder.SourceFileName = this.FileName;
// No cloud storage on direct encoding
//if (CloudSettings.EnableCloudStorage)
// encoder.EnableCloudStorage = true;
if (MediaType == 1)
{
// audio encoding
itags.Add("14");
encoder.iTags = itags;
encoder.GrabThumbs = false;
encoder.PublishedPath = mp3path;
//_vprocess.OutputPath = this.MP3Path;
//_vprocess.isAudio = true;
}
else
{
// video encoding
itags.Add(EncoderSettings.DefaultITagValue.ToString()); // 5 for 360p mp4 encoding
//itags.Add(7); // this will call 7 case settings to publish next video ending with _7.mp4 instead of _5.mp4
// so there will be 2 videos with different resolutions published at the end of the process?
// yes, make sure to use proper settings; first test it directly via the command line.
// okay, I got it. But I'm going to have to use a different media player to incorporate those settings.
// once published you can load different videos for different users by checking the _7.mp4 (end) va
// okay, got it.
//azure
encoder.PublishedPath = publishedpath;
encoder.iTags = itags;
encoder.ThumbsDirectory = thumbpath;
encoder.TotalThumbs = 15;
//_vprocess.ThumbPath = this.ThumbPath;
//_vprocess.OutputPath = this.FLVPath;
//if (Config.isPostWaterMark())
//{
// // script for posting watermark on video
// _vprocess.WaterMarkPath = Server.MapPath(Request.ApplicationPath) + "\\contents\\watermark";
// _vprocess.WaterMarkImage = "watermark.gif";
//}
}
int deleteoption = Site_Settings.Video_Delete_Original;
if (deleteoption == 1)
{
encoder.DeleteSource = true;
}
// background processing
if (this.backgroundpublishing && this.MediaType==0)
{
encoder.BackgroundProcessing = true;
// get information from source video in order to store it in database
MediaHandler _minfo = new MediaHandler();
//if (this.FileName.EndsWith(".mpeg") || this.FileName.EndsWith(".mpg")) // use mpg compatible ffmpeg encoder
// encoder.FfmpegPath = HttpContext.Current.Server.MapPath(HttpContext.Current.Request.ApplicationPath) + "\\ffmpeg\\ffmpegbk\\ffmpeg.exe";
//else
_minfo.FFMPEGPath = Encoding_Settings.FFMPEGPATH;
_minfo.FileName = FileName;
_minfo.InputPath = sourcepath;
info = _minfo.Get_Info();
}
// encode video processing
Video_Information vinfo = encoder.Process();
if (vinfo.ErrorCode > 0)
{
errorcode = vinfo.ErrorCode.ToString();
ErrorLgBLL.Add_Log("Encoding Failed Log", "", "encoding error: " + vinfo.ErrorCode.ToString() + "<br />Description: " + vinfo.ErrorDescription.ToString());
//return vinfo.ErrorDescription;
}
// Double check validation
// if published video exist
// if thumb exist
// then proceed further
if (MediaType == 0)
{
if (!File.Exists(encoder.PublishedPath + "/" + vinfo.FLVVideoName))
{
return "Video failed to published properly.";
}
if (!File.Exists(encoder.ThumbsDirectory + "/" + vinfo.ThumbFileName))
{
return "Thumbs failed to grab from video properly.";
}
}
else
{
if (vinfo.FLVVideoName == "")
{
vinfo.FLVVideoName = this.FileName.Remove(this.FileName.LastIndexOf(".")) + "_14.mp3"; // mp3 file path name
}
if (!File.Exists(encoder.PublishedPath + "/" + vinfo.FLVVideoName))
{
return "Audio failed to published properly.";
}
}
// Now thumbs and video published, procceed for data processing
// get information from vinfo object
if (this.backgroundpublishing && this.MediaType == 0)
{
string OutputFileName = this.FileName.Remove(this.FileName.LastIndexOf("."));
flv_filename = OutputFileName + "_" + EncoderSettings.DefaultITagValue + "." + EncoderSettings.Return_Output_Extension(EncoderSettings.DefaultITagValue);
original_filename = vinfo.OriginalVideoName;
thumb_filename = OutputFileName + "_008.jpg"; // info.ThumbFileName;
duration = info.Duration;
duration_sec = info.Duration_Sec;
}
else
{
flv_filename = vinfo.FLVVideoName;
original_filename = vinfo.OriginalVideoName;
thumb_filename = vinfo.ThumbFileName;
duration = vinfo.Duration;
duration_sec = vinfo.Duration_Sec;
isenabled = vinfo.isEnabled;
}
// No cloud storage on direct encoding.
// Note cloude storage only works if background processing is disabled
// Or works in cased of sheduled processing
if (CloudSettings.EnableCloudStorage && errorcode == "0")
{
flv_url = "amazon";
org_url = "https://s3.amazonaws.com/" + CloudSettings.OriginalVideoBucketName + "/" + this.FileName;
thumb_url = "https://s3.amazonaws.com/" + CloudSettings.ThumbsBucketName + "/" + thumb_filename;
}
}
else
{
// set publishing status off.
ispublished = 0;
original_filename = this.FileName;
}
// Store video information in database
string ipaddress = context.Request.ServerVariables["REMOTE_ADDR"].ToString();
// Store media information in database
Video_Struct vd = new Video_Struct();
vd.CategoryID = 0; // store categoryname or term instead of category id
vd.Categories = Categories;
vd.UserName = UserName;
vd.Title = "";
vd.Description = "";
vd.Tags = Tags;
vd.Duration = duration;
vd.Duration_Sec = duration_sec;
vd.OriginalVideoFileName = original_filename;
vd.VideoFileName = flv_filename;
vd.ThumbFileName = thumb_filename;
vd.isPrivate = Privacy;
vd.AuthKey = PAuth;
vd.isEnabled = isenabled;
vd.Response_VideoID = Response_VideoID; // video responses
vd.isResponse = isresponse;
vd.isPublished = ispublished;
vd.isReviewed = isreviewed;
vd.FLV_Url = flv_url;
vd.Thumb_Url = thumb_url;
vd.Org_Url = org_url;
vd.Embed_Script = _embed;
vd.isExternal = 0; // website own video, 1: embed video
vd.IPAddress = ipaddress;
vd.Type = MediaType;
vd.YoutubeID = "";
vd.isTagsreViewed = 1;
vd.Mode = 0; // filter videos based on website sections
//vd.ContentLength = f_contentlength;
vd.GalleryID = GID;
vd.ErrorCode = Convert.ToInt32(errorcode);
long videoid = VideoBLL.Process_Info(vd, false);
// Process tags
if (Tags != "")
{
int tag_type = 0; // represent videos
if (MediaType == 1)
tag_type = 4; // represent audio file
TagsBLL.Process_Tags(Tags, tag_type, 0);
}
if (Response_VideoID > 0)
{
VideoBLL.Update_Responses(Response_VideoID);
}
return "Success";
}