
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (26)
-
The farm's regular Cron tasks
1 December 2010, by
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance in the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...) -
The accepted formats
28 January 2010, by
The following commands provide information about the formats and codecs supported by the local ffmpeg installation (a short sketch of running them from Java follows this article list):
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
As a first step, we (...) -
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
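As referenced in the formats article above, here is a minimal sketch of capturing the output of ffmpeg -codecs and ffmpeg -formats from Java (the language of the post further down); the ffmpegPath value is a placeholder for the local installation, not something taken from the article.

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ListFfmpegCapabilities {
    public static void main(String[] args) throws Exception {
        String ffmpegPath = "ffmpeg"; // placeholder; point it at the local ffmpeg binary
        for (String flag : new String[] {"-codecs", "-formats"}) {
            Process p = new ProcessBuilder(ffmpegPath, flag)
                    .redirectErrorStream(true) // ffmpeg prints its banner and tables to stderr
                    .start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(line); // one codec/format entry per line
                }
            }
            p.waitFor();
        }
    }
}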
On other sites (5714)
-
Multi-Threaded Video Decoder Leaks Memory
3 January 2018, by Cethric
My intention is to create a relatively simple video playback system to be used in a larger program that I am working on. Relevant code for the video decoder is here. The best I have been able to do so far is narrow the memory leak down to this section of code (or rather, I have not noticed any memory leaks occurring when video is not used).
This is probably a very broad question; however, I am unsure of the scope of the issue I am having and, as such, of how to word my question.
What I want to know is what I have missed or done wrong that has led to a noticeable memory leak (by noticeable I mean I can watch memory usage climb by megabytes per minute). I have tried to ensure that every allocation I make is matched by a deallocation.
EDIT 1
This is to be built on a Windows 10 machine running MSYS2 (MinGW64) -
lavfi/dnn_backend_openvino.c : Spelling Correction in OpenVino Backend
23 April 2021, by shubhanshu02 -
Screeching white sound coming while playing audio as a raw stream
27 April 2020, by Sri Nithya Sharabheshwarananda
I. Background
- I am trying to build an application that helps match subtitles to the audio waveform very accurately: at the waveform level, at the word level, or even at the character level.
- The audio is expected to be Sanskrit chants (yoga, rituals, etc.), which are extremely long compound words [example: aṅganyā-sokta-mātaro-bījam is traditionally one word, broken up only to assist reading].
- The input transcripts/subtitles might be roughly in sync at the sentence/verse level but certainly would not be in sync at the word level.
- The application should be able to find points of silence in the audio waveform, so that it can guess the start and end point of each word (or even of each letter/consonant/vowel in a word); a rough sketch of that idea follows this list. The chanted audio and the displayed subtitle should then match perfectly at the word level (or even at the letter/consonant/vowel level): the UI highlights or animates the exact word (or letter) in the subtitle line that is being chanted at that moment, and also shows that word (or letter/consonant/vowel) in a bigger font. The purpose of this app is to assist learning Sanskrit chanting.
- It is not expected to be a 100% automated process, nor 100% manual, but a mix in which the application assists the human as much as possible.
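A minimal sketch of the silence-detection idea mentioned above, assuming 16-bit signed little-endian mono PCM and hand-picked values for the amplitude threshold and the minimum run length (both are illustrative, not taken from the post):

import java.util.ArrayList;
import java.util.List;

public class SilenceFinder {
    // Returns the sample indices at which runs of near-silence begin,
    // for 16-bit signed little-endian mono PCM.
    static List<Integer> silenceStarts(byte[] pcm, int threshold, int minRunSamples) {
        List<Integer> starts = new ArrayList<>();
        int run = 0;
        for (int i = 0; i + 1 < pcm.length; i += 2) {
            // assemble one little-endian 16-bit sample from two bytes
            int sample = (pcm[i] & 0xFF) | (pcm[i + 1] << 8);
            if (Math.abs(sample) < threshold) {
                run++;
                if (run == minRunSamples) {
                    starts.add(i / 2 - minRunSamples + 1);
                }
            } else {
                run = 0;
            }
        }
        return starts;
    }
}

Word or letter boundaries would then be guessed between consecutive silent runs and lined up against the transcript.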
II. The following is the first code I wrote for this purpose, wherein:
- First I open an MP3 (or any other audio format) file,
- seek to some arbitrary point in the timeline of the audio file (as of now it plays from offset zero; a sketch of how seeking might be added follows this list),
- get the audio data in raw format for two purposes: (1) playing it and (2) drawing the waveform,
- and play the raw audio data using the standard Java audio libraries.
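A minimal sketch of how the "seek to some arbitrary point" step might look with jaffree, assuming its input builders expose setPosition (mapping to ffmpeg's -ss before the input); if that method is not available in the version used, passing -ss through addArguments should have the same effect. The file name and the 30-second offset are placeholders:

import java.nio.file.Paths;
import java.util.concurrent.TimeUnit;
import com.github.kokorin.jaffree.ffmpeg.FFmpeg;
import com.github.kokorin.jaffree.ffmpeg.NullOutput;
import com.github.kokorin.jaffree.ffmpeg.UrlInput;

public class SeekSketch {
    public static void main(String[] args) {
        FFmpeg.atPath() // or FFmpeg.atPath(BIN) as in the code below
                .addInput(
                        UrlInput.fromPath(Paths.get("chant.mp3")) // placeholder file
                                .setPosition(30, TimeUnit.SECONDS) // assumed API, equivalent to -ss 30
                                // .addArguments("-ss", "30")      // fallback: pass -ss directly
                )
                .addOutput(new NullOutput()) // decode only, discard the output
                .execute();
    }
}

In the actual player, the NullOutput would be replaced by the PipeOutput used in the code below so that playback starts from the seek point.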
III. The problem I am facing is that between every cycle there is a screeching sound.
- Probably I need to close the line between cycles? That sounds simple; I can try it.
- But I am also wondering whether this overall approach itself is correct. Any tip, guide, suggestion or link would be really helpful.
- Also, I just hard-coded the sample rate etc. (44100 Hz etc.); are these good to set as default presets, or should they depend on the input format? (A sketch of deriving them from ffprobe follows this list.)
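On the last point, a minimal sketch of deriving the playback parameters from the probed stream instead of hard-coding them. It assumes jaffree's Stream exposes getSampleRate() and getChannels() for audio streams (if the version used does not, the same fields are available in the raw ffprobe output); everything else mirrors the code below:

import javax.sound.sampled.AudioFormat;
import com.github.kokorin.jaffree.ffprobe.FFprobe;
import com.github.kokorin.jaffree.ffprobe.FFprobeResult;
import com.github.kokorin.jaffree.ffprobe.Stream;

public class ProbedFormat {
    static AudioFormat fromProbe(String mediaFile) {
        FFprobeResult result = FFprobe.atPath() // or FFprobe.atPath(BIN) as in the code below
                .setShowStreams(true)
                .setInput(mediaFile)
                .execute();
        for (Stream stream : result.getStreams()) {
            if (String.valueOf(stream.getCodecType()).equalsIgnoreCase("audio")) {
                int sampleRate = stream.getSampleRate(); // assumed getter, maps to sample_rate
                int channels = stream.getChannels();     // assumed getter, maps to channels
                // 16-bit signed PCM; the endianness chosen here must match the -f format
                // requested from ffmpeg (s16le <-> little-endian, s16be <-> big-endian).
                boolean bigEndian = false;
                return new AudioFormat(sampleRate, 16, channels, true, bigEndian);
            }
        }
        throw new IllegalArgumentException("no audio stream in " + mediaFile);
    }
}

The AudioFormat built this way would replace the hard-coded values in toRawAndPlay() below.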
IV. Here is the code



import com.github.kokorin.jaffree.StreamType;
import com.github.kokorin.jaffree.ffmpeg.FFmpeg;
import com.github.kokorin.jaffree.ffmpeg.FFmpegProgress;
import com.github.kokorin.jaffree.ffmpeg.FFmpegResult;
import com.github.kokorin.jaffree.ffmpeg.NullOutput;
import com.github.kokorin.jaffree.ffmpeg.PipeOutput;
import com.github.kokorin.jaffree.ffmpeg.ProgressListener;
import com.github.kokorin.jaffree.ffprobe.Stream;
import com.github.kokorin.jaffree.ffmpeg.UrlInput;
import com.github.kokorin.jaffree.ffprobe.FFprobe;
import com.github.kokorin.jaffree.ffprobe.FFprobeResult;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.SourceDataLine;


public class FFMpegToRaw {
    Path BIN = Paths.get("f:\\utilities\\ffmpeg-20190413-0ad0533-win64-static\\bin");
    String VIDEO_MP4 = "f:\\org\\TEMPLE\\DeviMahatmyamRecitationAudio\\03_01_Devi Kavacham.mp3";
    FFprobe ffprobe;
    FFmpeg ffmpeg;

    public void basicCheck() throws Exception {
        if (BIN != null) {
            ffprobe = FFprobe.atPath(BIN);
        } else {
            ffprobe = FFprobe.atPath();
        }
        FFprobeResult result = ffprobe
                .setShowStreams(true)
                .setInput(VIDEO_MP4)
                .execute();

        for (Stream stream : result.getStreams()) {
            System.out.println("Stream " + stream.getIndex()
                    + " type " + stream.getCodecType()
                    + " duration " + stream.getDuration(TimeUnit.SECONDS));
        }
        if (BIN != null) {
            ffmpeg = FFmpeg.atPath(BIN);
        } else {
            ffmpeg = FFmpeg.atPath();
        }

        // Sometimes ffprobe can't show the exact duration; use ffmpeg transcoding to a NULL output to get it.
        final AtomicLong durationMillis = new AtomicLong();
        FFmpegResult fFmpegResult = ffmpeg
                .addInput(
                        UrlInput.fromUrl(VIDEO_MP4)
                )
                .addOutput(new NullOutput())
                .setProgressListener(new ProgressListener() {
                    @Override
                    public void onProgress(FFmpegProgress progress) {
                        durationMillis.set(progress.getTimeMillis());
                    }
                })
                .execute();
        System.out.println("audio size - " + fFmpegResult.getAudioSize());
        System.out.println("Exact duration: " + durationMillis.get() + " milliseconds");
    }

    public void toRawAndPlay() throws Exception {
        ProgressListener listener = new ProgressListener() {
            @Override
            public void onProgress(FFmpegProgress progress) {
                System.out.println(progress.getFrame());
            }
        };

        // code derived from: https://stackoverflow.com/questions/32873596/play-raw-pcm-audio-received-in-udp-packets

        int sampleRate = 44100; //24000; //Hz
        int sampleSize = 16;    //Bits
        int channels = 1;
        boolean signed = true;
        boolean bigEnd = false;
        String format = "s16be"; //"f32le"

        //https://trac.ffmpeg.org/wiki/audio types
        final AudioFormat af = new AudioFormat(sampleRate, sampleSize, channels, signed, bigEnd);
        final DataLine.Info info = new DataLine.Info(SourceDataLine.class, af);
        final SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);

        line.open(af, 4096); // format, buffer size
        line.start();

        // OutputStream that feeds the raw PCM piped out of ffmpeg straight into the audio line.
        OutputStream destination = new OutputStream() {
            @Override public void write(int b) throws IOException {
                throw new UnsupportedOperationException("Nobody uses this.");
            }
            @Override public void write(byte[] b, int off, int len) throws IOException {
                String o = new String(b);
                boolean showString = false;
                System.out.println("New output (" + len
                        + ", off=" + off + ") -> " + (showString ? o : ""));
                // output wave form repeatedly

                // drop a trailing byte so only whole 16-bit samples are written this round
                if (len % 2 != 0) {
                    len -= 1;
                    System.out.println("");
                }
                line.write(b, off, len);
                System.out.println("done round");
            }
        };

        // src: http://blog.wudilabs.org/entry/c3d357ed/?lang=en-US
        FFmpegResult result = FFmpeg.atPath(BIN)
                .addInput(UrlInput.fromPath(Paths.get(VIDEO_MP4)))
                .addOutput(PipeOutput.pumpTo(destination)
                        .disableStream(StreamType.VIDEO) // .addArgument("-vn")
                        .setFrameRate(sampleRate)        // .addArguments("-ar", sampleRate)
                        .addArguments("-ac", "1")
                        .setFormat(format))              // .addArguments("-f", format)
                .setProgressListener(listener)
                .execute();

        // shut down audio
        line.drain();
        line.stop();
        line.close();

        System.out.println("result = " + result.toString());
    }

    public static void main(String[] args) throws Exception {
        FFMpegToRaw raw = new FFMpegToRaw();
        raw.basicCheck();
        raw.toRawAndPlay();
    }
}





Thank You