
Other articles (41)
-
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is running version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out. -
Notifications from the farm
1 December 2010. To ensure proper management of the farm, several things need to be notified during specific actions, both to the user and to all of the farm's administrators.
Status-change notifications
When an instance's status changes, all of the farm's administrators must be notified of the change, as well as the instance's administrator user.
Upon a channel request
Switching to the "publie" (published) status
Switching to (...) -
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other sites (6314)
-
Get total time/progress from multiple FFmpeg terminal commands
24 September 2018, by kataroty
I have a class that executes multiple FFmpeg commands in Android. The problem is that I have no idea how to get the total time and then update the progress as the commands run. Since there are so many commands, I am not even sure whether it is possible.
Hence my question: is it possible to get the total time/progress, or at least an estimated time/progress, and then update it in onProgress?
And here is my class:
public class AudioProcessor {
private Context context;
private FFmpeg ffmpeg;
private AudioProcessorListener listener;
private File micPcmFile;
private File backgroundMp3File;
private File pcmtowavTempFile;
private File mp3towavTempFile;
private File combinedwavTempFile;
private File outputFile;
private File volumeChangedTempFile;
public AudioProcessor(Context context) {
ffmpeg = FFmpeg.getInstance(context);
this.context = context;
}
/**
* Main entry point. Starts the conversion chain.
* @throws Exception
*/
public void process() throws Exception {
if (!ffmpeg.isSupported()) {
Log.e("AudioProcessor", "FFMPEG not supported! Cannot convert audio!");
throw new Exception("FFMPeg has to be supported");
}
if (!checkIfAllFilesPresent()) {
Log.e("AudioProcessor", "All files are not set yet. Please set file first");
throw new Exception("Files are not set!");
}
listener.onStart();
prepare();
convertPCMToWav();
}
/**
* Prepares program
*/
private void prepare() {
prepareTempFiles();
}
/**
* Converts the PCM file to a wav file. Automatically creates a new file.
*/
private void convertPCMToWav() {
System.out.println("AudioProcessor: Convert PCM TO Wav");
//ffmpeg -f s16le -ar 44.1k -ac 2 -i file.pcm file.wav
String[] cmd = { "-f" , "s16le", "-ar", "44.1k", "-i", micPcmFile.toString(), pcmtowavTempFile.toString()};
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
@Override
public void onSuccess(String message) {
super.onSuccess(message);
convertMP3ToWav();
}
@Override
public void onFailure(String message) {
super.onFailure(message);
onError(message);
}
});
}
/**
* Converts mp3 file to wav file.
* Automatically creates Wav file
*/
private void convertMP3ToWav() {
//ffmpeg -i file.mp3 file.wav
String[] cmd = { "-i" , backgroundMp3File.toString(), mp3towavTempFile.toString() };
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
@Override
public void onSuccess(String message) {
super.onSuccess(message);
changeMicAudio();
}
@Override
public void onFailure(String message) {
super.onFailure(message);
onError(message);
}
});
}
/**
* Combines 2 wav files into one wav file. Overlays audio
*/
private void combineWavs() {
//ffmpeg -i C:\Users\VR1\Desktop\_mp3.wav -i C:\Users\VR1\Desktop\_pcm.wav -filter_complex amix=inputs=2:duration=first:dropout_transition=3 C:\Users\VR1\Desktop\out.wav
String[] cmd = { "-i" , pcmtowavTempFile.toString(), "-i", volumeChangedTempFile.toString(), "-filter_complex", "amix=inputs=2:duration=first:dropout_transition=3", combinedwavTempFile.toString()};
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
@Override
public void onSuccess(String message) {
super.onSuccess(message);
encodeWavToAAC();
}
@Override
public void onFailure(String message) {
super.onFailure(message);
onError(message);
}
});
}
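/**
* Despite its name, this step adjusts the volume of the background track
* (the wav converted from the mp3) and then hands off to combineWavs().
*/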
private void changeMicAudio(){
//ffmpeg -i input.wav -filter:a "volume=1.5" output.wav
String[] cmdy = { "-i", mp3towavTempFile.toString(), "-af", "volume=0.9", volumeChangedTempFile.toString()};
ffmpeg.execute(cmdy, new ExecuteBinaryResponseHandler() {
@Override
public void onSuccess(String message) {
combineWavs();
super.onSuccess(message);
}
@Override
public void onFailure(String message) {
super.onFailure(message);
// Propagate the failure like the other handlers so the listener is notified
// and the temp files are cleaned up.
onError(message);
}
});
}
/**
* Do something on error. Releases program data (deletes files)
* @param message
*/
private void onError(String message) {
release();
if (listener != null) {
listener.onError(message);
}
}
/**
* Encode to AAC
*/
private void encodeWavToAAC() {
//ffmpeg -i file.wav -c:a aac -b:a 128k -f adts output.m4a
String[] cmd = { "-i" , combinedwavTempFile.toString(), "-c:a", "aac", "-b:a", "128k", "-f", "adts", outputFile.toString()};
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
@Override
public void onSuccess(String message) {
super.onSuccess(message);
if (listener != null) {
listener.onSuccess(outputFile);
}
release();
}
@Override
public void onFailure(String message) {
super.onFailure(message);
onError(message);
}
});
}
/**
* Releases the class: notifies the listener and deletes the temp files.
*/
private void release() {
if (listener != null) {
listener.onFinish();
}
destroyTempFiles();
}
/**
* Prepares the required temp files by deleting them if they exist.
* The files must not exist before the ffmpeg commands run; FFmpeg creates them automatically.
*/
private void prepareTempFiles() {
pcmtowavTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_pcm.wav");
mp3towavTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_mp3.wav");
combinedwavTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_combined.wav");
volumeChangedTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_volumeChanged.wav");
if (pcmtowavTempFile.exists()) {
destroyTempFiles();
}
}
/**
* Deletes the required temp files
*/
private void destroyTempFiles() {
pcmtowavTempFile.delete();
mp3towavTempFile.delete();
combinedwavTempFile.delete();
volumeChangedTempFile.delete();
}
/**
* Checks if all files are set, so we can process them
* @return true if all required files are set
*/
private boolean checkIfAllFilesPresent() {
if(micPcmFile == null || backgroundMp3File == null || outputFile == null) {
Log.e("AudioProcessor", "All files are not set! Set all files!");
return false;
}
return true;
}
public void setOutputFile(File outputFile) {
this.outputFile = outputFile;
}
public void setListener(AudioProcessorListener listener) {
this.listener = listener;
}
public void setMicPcmFile(File micPcmFile) {
this.micPcmFile = micPcmFile;
}
public void setBackgroundMp3File(File backgroundMp3File) {
this.backgroundMp3File = backgroundMp3File;
}
public interface AudioProcessorListener {
void onStart();
void onSuccess(File output);
void onError(String message);
void onFinish();
}
}
Since the whole thing takes a long time, it would also help if someone could recommend something that takes less time. A 2 minute video usually takes around 50 seconds.
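A possible way to approach the progress part, sketched below under stated assumptions: if the duration of each input is known up front (for example via Android's MediaMetadataRetriever), the total work is roughly the sum of the durations the five commands have to process, and the "time=HH:MM:SS.xx" field that ffmpeg prints while running can be parsed from the wrapper's progress callback to derive an overall percentage. This assumes the ExecuteBinaryResponseHandler used above exposes an onProgress(String) callback, as the ffmpeg-android-java wrapper does; the class and method names below are illustrative and are not part of the original AudioProcessor.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: tracks overall progress across several sequential FFmpeg commands.
public class ProgressEstimator {

    // ffmpeg progress lines look like "size= 1024kB time=00:00:12.34 bitrate= ...".
    private static final Pattern TIME_PATTERN =
            Pattern.compile("time=(\\d{2}):(\\d{2}):(\\d{2})\\.(\\d{2})");

    private final long totalWorkMs;   // sum of the durations all commands must process
    private long finishedWorkMs = 0;  // work done by commands that have already succeeded

    public ProgressEstimator(long totalWorkMs) {
        this.totalWorkMs = totalWorkMs;
    }

    /** Parses ffmpeg's "time=" value from a progress line; returns -1 if absent. */
    public static long parseTimeMs(String ffmpegLine) {
        Matcher m = TIME_PATTERN.matcher(ffmpegLine);
        if (!m.find()) return -1;
        long h = Long.parseLong(m.group(1));
        long min = Long.parseLong(m.group(2));
        long s = Long.parseLong(m.group(3));
        long cs = Long.parseLong(m.group(4));
        return ((h * 60 + min) * 60 + s) * 1000 + cs * 10;
    }

    /** Call from each handler's onProgress(String); returns overall progress in percent. */
    public int onCommandProgress(String message) {
        long t = parseTimeMs(message);
        return percent(t < 0 ? 0 : t);
    }

    /** Call from each handler's onSuccess(), passing that command's input duration. */
    public void onCommandFinished(long commandDurationMs) {
        finishedWorkMs += commandDurationMs;
    }

    private int percent(long currentCommandMs) {
        if (totalWorkMs <= 0) return 0;
        long done = finishedWorkMs + currentCommandMs;
        return (int) Math.min(100, (done * 100) / totalWorkMs);
    }
}

On the runtime concern: the intermediate wav files are not strictly necessary. The volume change, the amix overlay and the AAC encode can in principle be combined into a single ffmpeg invocation with -filter_complex (the s16le/sample-rate options still apply to the PCM input), which avoids writing and re-reading four temporary files and would also leave only one command whose progress needs tracking.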
-
Get Total number of frames and FPS faster than with OpenCV library in C++
20 July 2018, by daniels_pa
I need to check which videos can be analyzed and which cannot, given the total number of frames in the video and its fps. I created a C++ program to do the checking. Analyzing each video is not an option, since the analysis is time consuming.
I used the OpenCV library for starters:
cv::VideoCapture vid_to_analyze;
vid_to_analyze.open( me_vid.vid_path.string() );
me_vid.total_frames= static_cast<int>(vid_to_analyze.get(CV_CAP_PROP_FRAME_COUNT));
me_vid.fps=vid_to_analyze.get(CV_CAP_PROP_FPS);
if (!vid_to_analyze.isOpened())
{
std::cout << "Skipping vid: "<< me_vid.vid_path.string()<<", couldn't open it" << std::endl;
}
if (me_vid.fps != me_vid.fps || me_vid.fps <= 0)
{
std::cout << "For video " << me_vid.vid_path.string() << std::endl;
std::cout << "FPS of the video file cannot be determined, assuming 30"<< std::endl;
me_vid.fps = 30;
}
vid_to_analyze.release();
However, when debugging it becomes painfully slow (the program is faster when running without the debugger attached, but still very slow given the number of videos it needs to cover). I think that has something to do with 4 threads being created and deleted each time a video is opened (released).
How can I get the total number of frames and the fps faster (without creating 4 threads), given that I am not interested in actually grabbing frames from the video, just the number of frames and the fps?
Is there a way to use the ffmpeg library from C++? Would that be faster, and where should I start?
EDIT: Valgrind seems to agree, since 91.66% of the instruction reads (Ir) are spent in the
vid_to_analyze.open
phase -
How do I set the total file duration when generating a piped mp4 with ffmpeg?
9 April 2021, by Mikhail Novikov
My task is to generate (by piping, so that the file can be played while it is still being generated) an mp4 file which is part of a larger file, with the result looking like a static file link and being seekable before it fully loads (i.e. supporting range headers).


Here is how I do it now:


ffmpeg -ss $1 -i teststream_hls1_replay.mp4 -t $2 -timecode $3 \
 -codec copy -movflags frag_keyframe+faststart -f mp4 pipe:1



The result is OK (the video starts from the right point), except that the player does not see the total duration of the file, so the control bar looks weird and seeking does not work properly, because the control bar jumps all the time.


How do I indicate to ffmpeg that it has to set the moov atom to contain the right duration?


Basically the question boils down to this: how do I force an arbitrary file duration in the moov atom when generating a fragmented mp4? ffmpeg cannot know in advance how long the file will be, so understandably it can't do this by itself, but I know... is there a command line parameter to specify a 'forced duration'?