
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (35)
-
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)
-
Media-specific libraries and software
10 December 2010, by
For correct and optimal operation, several things must be taken into consideration.
After installing apache2, mysql and php5, it is important to install other required software whose installation is described in the related links. A set of multimedia libraries (x264, libtheora, libvpx) is used for encoding and decoding video and audio, in order to support as many file types as possible. See: this tutorial; FFMpeg with the maximum number of decoders and (...) -
Emballe médias: what is it for?
4 February 2011, by
This plugin manages sites for uploading documents of all types.
It creates "media", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a "media" article;
On other sites (5204)
-
FFmpeg Processes keep running even after finishAndRemoveTask()
9 October 2018, by kataroty
I have a program that gets audio files and converts them using Bravobit FFmpeg (Bravobit FFmpeg github). It works as intended until someone minimizes the screen during the conversion process. I am trying to fix that, but so far I have not been successful.
This part is in the Main Activity that creates and starts my AudioProcessor:
:AudioProcessor ac = new AudioProcessor(getApplicationContext(), PostNewActivity.this);
ac.setBackgroundMp3File(backgroundAudio);
ac.setMicPcmFile(micOutputPCM);
ac.setOutputFile(tempOutFile);
ac.setListener(new AudioProcessor.AudioProcessorListener() {
public void onStart() {
Log.d("Audioprocessor", "Audioprocessor is successful");
}
public void onSuccess(File output) {
Log.d("Audioprocessor", "Audioprocessor is successful");
goToPublishView();
}
public void onError(String message) {
System.out.println("Audioprocessor: " + message);
}
public void onFinish() {
Log.d("Audioprocessor", "Audioprocessor is finished");
}
});
try {
if (tempOutFile.exists()) {
tempOutFile.delete();
}
ac.process();
} catch (Exception ex) {
System.out.println("Processing failed!");
ex.printStackTrace();
}
And here is the AudioProcessor itself:
public class AudioProcessor {
private Context context;
private FFmpeg ffmpeg;
private AudioProcessorListener listener;
private File micPcmFile;
private File backgroundMp3File;
private File pcmtowavTempFile;
private File mp3towavTempFile;
private File combinedwavTempFile;
private File outputFile;
private File volumeChangedTempFile;
TextView extensionDownload, percentProgress;
public AudioProcessor(Context context, Activity activity) {
ffmpeg = null;
ffmpeg = FFmpeg.getInstance(context);
percentProgress = activity.findViewById(R.id.percentProgress);
percentProgress.setSingleLine(false);
this.context = context;
prepare();
}
/**
* Program main method. Starts running program
* @throws Exception
*/
public void process() throws Exception {
if (!ffmpeg.isSupported()) {
Log.e("AudioProcessor", "FFMPEG not supported! Cannot convert audio!");
throw new Exception("FFMPeg has to be supported");
}
if (!checkIfAllFilesPresent()) {
Log.e("AudioProcessor", "All files are not set yet. Please set file first");
throw new Exception("Files are not set!");
}
Log.e("AudioProcessor", "Start processing audio");
listener.onStart();
Handler handler = new Handler();
handler.postDelayed(new Runnable() {
@Override
public void run() {
convertPCMToWav();
}
}, 200);
}
/**
* Prepares program
*/
private void prepare() {
prepareTempFiles();
}
/**
* Converts PCM to wav file. Automatically creates a new file.
*/
private void convertPCMToWav() {
Log.e("AudioProcessor", "Convert PCM TO Wav");
//ffmpeg -f s16le -ar 44.1k -ac 2 -i file.pcm file.wav
String[] cmd = { "-f" , "s16le", "-ar", "44.1k", "-i", micPcmFile.toString(), "-y", pcmtowavTempFile.toString()};
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
@Override
public void onStart() {
super.onStart();
percentProgress.setVisibility(View.VISIBLE);
percentProgress.setText("Converting your recording\n"+"1/5");
}
@Override
public void onSuccess(String message) {
super.onSuccess(message);
convertMP3ToWav();
}
@Override
public void onFailure(String message) {
super.onFailure(message);
onError(message);
convertPCMToWav();
}
});
}
/**
* Converts mp3 file to wav file.
* Automatically creates Wav file
*/
private void convertMP3ToWav() {
Log.e("AudioProcessor", "Convert MP3 TO Wav");
//ffmpeg -i file.mp3 file.wav
String[] cmd = { "-i" , backgroundMp3File.toString(), "-y", mp3towavTempFile.toString() };
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
@Override
public void onStart() {
super.onStart();
percentProgress.setText("Converting background audio\n"+"2/5");
Log.e("AudioProcessor", "Starting convert MP3 TO Wav");
}
@Override
public void onSuccess(String message) {
super.onSuccess(message);
changeMicAudio();
}
@Override
public void onFailure(String message) {
super.onFailure(message);
Log.e("AudioProcessor", "Failed to convert MP3 TO Wav");
//onError(message);
throw new RuntimeException("Failed to convert MP3 TO Wav");
//convertMP3ToWav();
}
});
}
/**
* Combines 2 wav files into one wav file. Overlays audio
*/
private void combineWavs() {
Log.e("AudioProcessor", "Combine wavs");
//ffmpeg -i C:\Users\VR1\Desktop\_mp3.wav -i C:\Users\VR1\Desktop\_pcm.wav -filter_complex amix=inputs=2:duration=first:dropout_transition=3 C:\Users\VR1\Desktop\out.wav
String[] cmd = { "-i" , pcmtowavTempFile.toString(), "-i", volumeChangedTempFile.toString(), "-filter_complex", "amix=inputs=2:duration=first:dropout_transition=3", "-y",combinedwavTempFile.toString()};
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
@Override
public void onStart() {
super.onStart();
percentProgress.setText("Combining the two audio files\n"+"4/5");
}
@Override
public void onSuccess(String message) {
super.onSuccess(message);
encodeWavToAAC();
}
@Override
public void onFailure(String message) {
super.onFailure(message);
onError(message);
}
});
}
private void changeMicAudio(){
Log.e("AudioProcessor", "Change audio volume");
//ffmpeg -i input.wav -filter:a "volume=1.5" output.wav
String[] cmdy = { "-i", mp3towavTempFile.toString(), "-af", "volume=0.9", "-y",volumeChangedTempFile.toString()};
ffmpeg.execute(cmdy, new ExecuteBinaryResponseHandler() {
@Override
public void onStart() {
super.onStart();
percentProgress.setText("Normalizing volume\n"+"3/5");
}
@Override
public void onSuccess(String message) {
combineWavs();
super.onSuccess(message);
}
@Override
public void onFailure(String message) {
super.onFailure(message);
Log.e("AudioProcessor", message);
}
});
}
/**
* Do something on error. Releases program data (deletes files)
* @param message
*/
private void onError(String message) {
release();
if (listener != null) {
//listener.onError(message);
}
}
/**
* Encode to AAC
*/
private void encodeWavToAAC() {
Log.e("AudioProcessor", "Encode to AAC");
//ffmpeg -i file.wav -c:a aac -b:a 128k -f adts output.m4a
String[] cmd = { "-i" , combinedwavTempFile.toString(), "-c:a", "aac", "-b:a", "128k", "-f", "adts", "-y",outputFile.toString()};
ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
@Override
public void onStart() {
super.onStart();
percentProgress.setText("Encoding audio\n"+"5/5");
}
@Override
public void onSuccess(String message) {
super.onSuccess(message);
if (listener != null) {
listener.onSuccess(outputFile);
}
release();
}
@Override
public void onFailure(String message) {
super.onFailure(message);
onError(message);
encodeWavToAAC();
}
});
}
/**
* Uninitializes class
*/
private void release() {
if (listener != null) {
listener.onFinish();
}
destroyTempFiles();
}
/**
* Prepares the required temp files by deleting them if they exist.
* The files must not exist before the ffmpeg actions; FFmpeg creates them automatically.
*/
private void prepareTempFiles() {
pcmtowavTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_pcm.wav");
mp3towavTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_mp3.wav");
combinedwavTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_combined.wav");
volumeChangedTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_volumeChanged.wav");
destroyTempFiles();
}
/**
* Destroys temp required files
*/
private void destroyTempFiles() {
pcmtowavTempFile.delete();
mp3towavTempFile.delete();
combinedwavTempFile.delete();
volumeChangedTempFile.delete();
}
/**
* Checks if all files are set, so we can process them
* @return - all files ready
*/
private boolean checkIfAllFilesPresent() {
if(micPcmFile == null || backgroundMp3File == null || outputFile == null) {
Log.e("AudioProcessor", "All files are not set! Set all files!");
return false;
}
return true;
}
public void setOutputFile(File outputFile) {
this.outputFile = outputFile;
}
public void setListener(AudioProcessorListener listener) {
this.listener = listener;
}
public void setMicPcmFile(File micPcmFile) {
this.micPcmFile = micPcmFile;
}
public void setBackgroundMp3File(File backgroundMp3File) {
this.backgroundMp3File = backgroundMp3File;
}
public interface AudioProcessorListener {
void onStart();
void onSuccess(File output);
void onError(String message);
void onFinish();
}
}
How I usually test it and get the crashes: I let the AudioProcessor get to the 2nd method, which is convertMP3ToWav(), and then close the application. When I start it back up again and start processing the files, the application crashes. I have tried many ways, and I thought about throwing the application back to the start when it is minimized with this code in the Main Activity:
@Override
protected void onUserLeaveHint()
{
if (Build.VERSION.SDK_INT >= 21) {
finishAndRemoveTask();
} else {
finish();
}
}
I thought that it would stop everything, but it still kept crashing. After doing some debugging I found that even when I minimize the app and do the finishAndRemoveTask(), the AudioProcessor is still working: it still runs all of the ffmpeg commands and even calls the onSuccess()/onFinish() methods. How can I completely stop everything, or at least stop and clear the AudioProcessor, when the application is minimized?
-
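One approach worth sketching is cooperative cancellation: give the processor a volatile cancelled flag, set it from the activity's onStop()/onUserLeaveHint(), and check it in each onSuccess() callback before launching the next ffmpeg command, so a minimized app stops the chain after the current step at the latest. The names below are illustrative, not taken from the original code or Bravobit's API:

```java
// Sketch: cooperative cancellation for a chained pipeline like AudioProcessor.
// All names here are illustrative, not from Bravobit FFmpeg's API.
public class CancellablePipeline {
    private volatile boolean cancelled = false;
    private final StringBuilder log = new StringBuilder();

    // In the real app this would be called from Activity.onStop()/onDestroy().
    public void cancel() { cancelled = true; }

    // Each step checks the flag before doing work, mirroring a check at the
    // top of every ffmpeg onSuccess() callback before the next command runs.
    private boolean step(String name) {
        if (cancelled) return false;
        log.append(name).append(';');
        return true;
    }

    // Runs the five conversion steps, cancelling just before step index
    // cancelAt to simulate the user minimizing the app mid-pipeline.
    public String run(int cancelAt) {
        String[] steps = {"pcmToWav", "mp3ToWav", "changeVolume", "combineWavs", "encodeAac"};
        for (int i = 0; i < steps.length; i++) {
            if (i == cancelAt) cancel();
            if (!step(steps[i])) break;
        }
        return log.toString();
    }
}
```

Note this only stops the chain between commands; whether an already-running ffmpeg invocation can be killed mid-command depends on what the library exposes, which is not verified here.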
Bug #3760: The typographic corrector does not handle HTML5
26 March 2016, by Christian Pierre MOMON
Grrrrrr, the codes get escaped in the title but not in the text. I am re-posting my text with the codes made visible :-/
The typographic corrector does a great job on the rendering of articles. For example, it transforms "Exemple :" into the HTML "Exemple& nbsp ; :".
That is really great, except that it is not compatible with HTML5, where the & nbsp ; is ignored (  is expected instead). So we get catastrophic rendering (huge spaces between ":" and the preceding word, etc.). I tried putting "", same result. I tried putting "Exemple& #160 ; :" and I get "Exemple& #160 ;& nbsp ; :" :-/
With HTML5 becoming more and more widespread and set to become the reference DOCTYPE, I suggest adapting the typo module to handle HTML5.
Additional information:
- I use SPIP 3.0.17-2+deb8u2 (Debian);
- I enabled "Permettre le HTML5" ("Allow HTML5") in the administration interface;
- I put "" in my HTML file headers.
With all my encouragement \o/
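The suggested adaptation — emitting the numeric character reference instead of the named entity — can be sketched as below. This is illustrative Java only (SPIP itself is written in PHP); every HTML parser resolves the numeric form to U+00A0:

```java
// Illustrative only: replace the named non-breaking-space entity with the
// numeric character reference &#160;, which resolves to U+00A0 in any parser.
public class NbspNormalizer {
    public static String normalize(String html) {
        return html.replace("&nbsp;", "&#160;");
    }
}
```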
-
FFMPEG : Video file to YUV conversion by binary ffmpeg and by code C++ give different results
30 June 2016, by Anny G
Disclaimer: I have looked at the following question,
FFMPEG: RGB to YUV conversion by binary ffmpeg and by code C++ give different results
but it didn't help, and it is not applicable to me because I am not using SwsContext or anything.
Following the first few tutorials at http://dranger.com/ffmpeg/, I have created a simple program that reads a video, decodes it, and, once a frame is decoded, writes the raw YUV values to a file (no padding), using the data provided by AVFrame after a frame is successfully decoded. To be more specific, I write out the arrays AVFrame->data[0], AVFrame->data[1] and AVFrame->data[2] to a file, i.e. I simply append the Y values, then the U values, then the V values to the file. The file turns out to be in yuv422p format.
When I convert the same original video to a raw YUV format using the ffmpeg (same version of ffmpeg) command-line tool, the two YUV files are the same size, but differ in content.
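For reference, the per-frame plane sizes this write-out implies for 1280x720 yuv422p can be worked out as follows, assuming each row is written without padding (note that AVFrame rows may be padded out to linesize, which is one common source of byte-level differences). This is a sketch, not the asker's code:

```java
// yuv422p: chroma is subsampled 2:1 horizontally, so U and V each hold
// (width/2) * height samples while Y holds width * height samples.
public class Yuv422pPlanes {
    public static int ySize(int width, int height) { return width * height; }
    public static int chromaSize(int width, int height) { return (width / 2) * height; }
    public static int frameSize(int width, int height) {
        return ySize(width, height) + 2 * chromaSize(width, height);
    }
}
```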
FYI, I am able to play both of the yuv files using the yuv player, and they look identical as well.
Here is the exact command I run to convert the original video to a yuv video using ffmpeg command line tool
~/bin/ffmpeg -i super-short-video.h264 -c:v rawvideo -pix_fmt yuv422p "super-short-video-yuv422p.yuv"
What causes this difference in bytes, and can it be fixed? Is there perhaps another way of converting the original video to YUV using the ffmpeg tool, maybe with different settings?
FFmpeg output when I convert to the YUV format:
ffmpeg version N-80002-g5afecff Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.1)
configuration: --prefix=/home/me/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/me/ffmpeg_build/include --extra-ldflags=-L/home/me/ffmpeg_build/lib --bindir=/home/me/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree --extra-cflags=-pg --extra-ldflags=-pg --disable-stripping
libavutil 55. 24.100 / 55. 24.100
libavcodec 57. 42.100 / 57. 42.100
libavformat 57. 36.100 / 57. 36.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 45.100 / 6. 45.100
libswscale 4. 1.100 / 4. 1.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, h264, from 'super-short-video.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p, 1280x720, 25 fps, 25 tbr, 1200k tbn
[rawvideo @ 0x24f6fc0] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
Output #0, rawvideo, to 'super-short-video-yuv422p.yuv':
Metadata:
encoder : Lavf57.36.100
Stream #0:0: Video: rawvideo (Y42B / 0x42323459), yuv422p, 1280x720, q=2-31, 200 kb/s, 25 fps, 25 tbn
Metadata:
encoder : Lavc57.42.100 rawvideo
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Press [q] to stop, [?] for help
frame= 50 fps=0.0 q=-0.0 Lsize= 90000kB time=00:00:02.00 bitrate=368640.0kbits/s speed=11.3x
video:90000kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
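The reported Lsize of 90000kB is consistent with unpadded yuv422p, which averages 2 bytes per pixel, over 50 frames of 1280x720. A quick sanity check (a sketch, not part of the original program):

```java
// Sanity check: total raw size of an unpadded yuv422p stream.
public class YuvFileSize {
    public static long totalBytes(int width, int height, int frames) {
        long perFrame = (long) width * height * 2;  // yuv422p averages 2 bytes/pixel
        return perFrame * frames;
    }
}
```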