
Other articles (69)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Participate in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents a few of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

On other sites (10154)

  • Multi-Threaded Video Decoder Leaks Memory

    3 January 2018, by Cethric

    My intention is to create a relatively simple video playback system to be used in a larger program I am working on. The relevant video decoder code is here. The best I have been able to do so far is narrow the memory leak down to this section of code (or rather, I have not noticed any memory leaks occurring when video is not used).

    This is probably a very broad question; however, I am unsure of the scope of the issue I am having, and as such of how to word my question.

    What I want to know is what I have missed or done wrong that has led to a noticeable memory leak (by noticeable I mean I can watch memory usage climb by megabytes per minute). I have tried to ensure that every allocation I make is matched by a deallocation.

    EDIT 1
    This is to be built on a Windows 10 machine running MSYS2 (MinGW64)
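
    Since the decoder code itself is only linked rather than quoted, here is a purely generic sketch of one way to verify that every allocation really is matched by a deallocation: count outstanding acquisitions per resource type and report anything that never returns to zero. It is written in Java only to match the other code on this page; the AllocTracker name and its methods are hypothetical and not part of the poster's project.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    /** Counts outstanding acquisitions per resource tag so an unmatched free shows up quickly. */
    public class AllocTracker {
        private final Map<String, AtomicLong> live = new ConcurrentHashMap<>();

        public void acquired(String tag) {
            live.computeIfAbsent(tag, t -> new AtomicLong()).incrementAndGet();
        }

        public void released(String tag) {
            live.computeIfAbsent(tag, t -> new AtomicLong()).decrementAndGet();
        }

        /** Prints every resource type whose acquire count does not match its release count. */
        public void report() {
            live.forEach((tag, count) -> {
                if (count.get() != 0) {
                    System.out.println(tag + " has " + count.get() + " outstanding allocation(s)");
                }
            });
        }

        public static void main(String[] args) {
            AllocTracker tracker = new AllocTracker();
            tracker.acquired("frame");   // e.g. wherever a frame buffer is allocated
            tracker.acquired("packet");  // e.g. wherever a packet is allocated
            tracker.released("packet");  // ... and wherever it is freed
            tracker.report();            // prints: frame has 1 outstanding allocation(s)
        }
    }

    Bracketing each allocation and free in the decoder with such counters (or their equivalent in the decoder's own language) and printing the report periodically narrows a steady memory climb down to the one resource type that is never freed.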

  • lavfi/dnn_backend_openvino.c : Spelling Correction in OpenVino Backend

    23 April 2021, by shubhanshu02
    lavfi/dnn_backend_openvino.c : Spelling Correction in OpenVino Backend
    

    Correct Spelling of the word `descibe` to `describe`
    in init_model_ov

    Signed-off-by: shubhanshu02 <shubhanshu.e01@gmail.com>

    • [DH] libavfilter/dnn/dnn_backend_openvino.c
  • Screeching white sound coming while playing audio as a raw stream

    27 April 2020, by Sri Nithya Sharabheshwarananda

    I. Background

    1. I am trying to make an application which helps match subtitles to the audio waveform very accurately, at the waveform level, at the word level, or even at the character level.

    2. The audio is expected to be Sanskrit chants (yoga, rituals, etc.), which are extremely long compound words [example: aṅganyā-sokta-mātaro-bījam is traditionally one word, broken only to assist reading].

    3. The input transcripts / subtitles might be roughly in sync at the sentence/verse level, but surely would not be in sync at the word level.

    4. The application should be able to figure out points of silence in the audio waveform, so that it can guess the start and end points of each word (or even of each letter/consonant/vowel in a word), such that the chanted audio and the visual subtitle match perfectly at the word (or even letter/consonant/vowel) level; the corresponding UI just highlights or animates the exact word (or even letter) in the subtitle line being chanted at that moment, and also shows that word (or even that letter/consonant/vowel) in a bigger font. This app's purpose is to assist learning Sanskrit chanting. (A rough silence-detection sketch follows this list.)

    5. It is not expected to be a 100% automated process, nor 100% manual, but a mix where the application assists the human as much as possible.
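
    The silence-finding step in item 4 is the algorithmic core of the idea. As a rough standalone illustration (not the poster's code), the following sketch scans signed 16-bit PCM samples in short windows and reports spans whose RMS falls below a threshold; the window length and threshold are arbitrary assumptions that would need tuning per recording.

    import java.util.ArrayList;
    import java.util.List;

    public class SilenceScan {
        static final int SAMPLE_RATE = 44100;          // assumed PCM sample rate
        static final int WINDOW = SAMPLE_RATE / 50;    // 20 ms analysis windows
        static final double SILENCE_RMS = 500.0;       // threshold on a 16-bit scale (-32768..32767)

        /** Returns {startSample, endSample} pairs of spans whose RMS stays below the threshold. */
        static List<long[]> findSilences(short[] pcm) {
            List<long[]> spans = new ArrayList<>();
            long spanStart = -1;
            for (int i = 0; i + WINDOW <= pcm.length; i += WINDOW) {
                double sum = 0;
                for (int j = i; j < i + WINDOW; j++) sum += (double) pcm[j] * pcm[j];
                double rms = Math.sqrt(sum / WINDOW);
                if (rms < SILENCE_RMS) {
                    if (spanStart < 0) spanStart = i;          // silence begins
                } else if (spanStart >= 0) {
                    spans.add(new long[]{spanStart, i});       // silence ends
                    spanStart = -1;
                }
            }
            if (spanStart >= 0) spans.add(new long[]{spanStart, pcm.length});
            return spans;
        }

        public static void main(String[] args) {
            // Tiny synthetic example: one second of a sine "chant" followed by one second of silence.
            short[] pcm = new short[2 * SAMPLE_RATE];
            for (int i = 0; i < SAMPLE_RATE; i++) pcm[i] = (short) (10000 * Math.sin(i * 0.05));
            for (long[] s : findSilences(pcm))
                System.out.printf("silence from %.2fs to %.2fs%n",
                        s[0] / (double) SAMPLE_RATE, s[1] / (double) SAMPLE_RATE);
        }
    }

    Word or syllable boundaries could then be guessed near the midpoints of such spans, with manual correction in the UI for the cases the heuristic misses.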

    II. The following is the first code I wrote for this purpose, wherein:

    1. First I open an mp3 (or any audio format) file,

    2. Seek to some arbitrary point in the timeline of the audio file // as of now playing from zero offset,

    3. Get the audio data in raw format for two purposes: (1) playing it and (2) drawing the waveform,

    4. Play the raw audio data using the standard Java audio libraries.

    III. The problem I am facing is that between every cycle there is a screeching sound.

    • Probably I need to close the line between cycles? Sounds simple; I can try.

    • But I am also wondering if this overall approach itself is correct. Any tip, guide, suggestion, or link would be really helpful.

    • Also, I just hard-coded the sample rate etc. (44100 Hz etc.); are these good to set as default presets, or should they depend on the input format? (See the format sketch after the code below.)

    IV. Here is the code


    import com.github.kokorin.jaffree.StreamType;
    import com.github.kokorin.jaffree.ffmpeg.FFmpeg;
    import com.github.kokorin.jaffree.ffmpeg.FFmpegProgress;
    import com.github.kokorin.jaffree.ffmpeg.FFmpegResult;
    import com.github.kokorin.jaffree.ffmpeg.NullOutput;
    import com.github.kokorin.jaffree.ffmpeg.PipeOutput;
    import com.github.kokorin.jaffree.ffmpeg.ProgressListener;
    import com.github.kokorin.jaffree.ffprobe.Stream;
    import com.github.kokorin.jaffree.ffmpeg.UrlInput;
    import com.github.kokorin.jaffree.ffprobe.FFprobe;
    import com.github.kokorin.jaffree.ffprobe.FFprobeResult;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.DataLine;
    import javax.sound.sampled.SourceDataLine;


    public class FFMpegToRaw {
        Path BIN = Paths.get("f:\\utilities\\ffmpeg-20190413-0ad0533-win64-static\\bin");
        String VIDEO_MP4 = "f:\\org\\TEMPLE\\DeviMahatmyamRecitationAudio\\03_01_Devi Kavacham.mp3";
        FFprobe ffprobe;
        FFmpeg ffmpeg;

        public void basicCheck() throws Exception {
            if (BIN != null) {
                ffprobe = FFprobe.atPath(BIN);
            } else {
                ffprobe = FFprobe.atPath();
            }
            FFprobeResult result = ffprobe
                    .setShowStreams(true)
                    .setInput(VIDEO_MP4)
                    .execute();

            for (Stream stream : result.getStreams()) {
                System.out.println("Stream " + stream.getIndex()
                        + " type " + stream.getCodecType()
                        + " duration " + stream.getDuration(TimeUnit.SECONDS));
            }
            if (BIN != null) {
                ffmpeg = FFmpeg.atPath(BIN);
            } else {
                ffmpeg = FFmpeg.atPath();
            }

            // Sometimes ffprobe can't show exact duration, use ffmpeg trancoding to NULL output to get it
            final AtomicLong durationMillis = new AtomicLong();
            FFmpegResult fFmpegResult = ffmpeg
                    .addInput(
                            UrlInput.fromUrl(VIDEO_MP4)
                    )
                    .addOutput(new NullOutput())
                    .setProgressListener(new ProgressListener() {
                        @Override
                        public void onProgress(FFmpegProgress progress) {
                            durationMillis.set(progress.getTimeMillis());
                        }
                    })
                    .execute();
            System.out.println("audio size - " + fFmpegResult.getAudioSize());
            System.out.println("Exact duration: " + durationMillis.get() + " milliseconds");
        }

        public void toRawAndPlay() throws Exception {
            ProgressListener listener = new ProgressListener() {
                @Override
                public void onProgress(FFmpegProgress progress) {
                    System.out.println(progress.getFrame());
                }
            };

            // code derived from : https://stackoverflow.com/questions/32873596/play-raw-pcm-audio-received-in-udp-packets

            int sampleRate = 44100;//24000;//Hz
            int sampleSize = 16;//Bits
            int channels   = 1;
            boolean signed = true;
            boolean bigEnd = false;
            String format  = "s16be"; //"f32le"

            //https://trac.ffmpeg.org/wiki/audio types
            final AudioFormat af = new AudioFormat(sampleRate, sampleSize, channels, signed, bigEnd);
            final DataLine.Info info = new DataLine.Info(SourceDataLine.class, af);
            final SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);

            line.open(af, 4096); // format , buffer size
            line.start();

            OutputStream destination = new OutputStream() {
                @Override public void write(int b) throws IOException {
                    throw new UnsupportedOperationException("Nobody uses thi.");
                }
                @Override public void write(byte[] b, int off, int len) throws IOException {
                    String o = new String(b);
                    boolean showString = false;
                    System.out.println("New output (" + len
                            + ", off=" + off + ") -> " + (showString ? o : ""));
                    // output wave form repeatedly

                    if (len % 2 != 0) {
                        len -= 1;
                        System.out.println("");
                    }
                    line.write(b, off, len);
                    System.out.println("done round");
                }
            };

            // src : http://blog.wudilabs.org/entry/c3d357ed/?lang=en-US
            FFmpegResult result = FFmpeg.atPath(BIN).
                addInput(UrlInput.fromPath(Paths.get(VIDEO_MP4))).
                addOutput(PipeOutput.pumpTo(destination).
                    disableStream(StreamType.VIDEO). //.addArgument("-vn")
                    setFrameRate(sampleRate).        //.addArguments("-ar", sampleRate)
                    addArguments("-ac", "1").
                    setFormat(format)                //.addArguments("-f", format)
                ).
                setProgressListener(listener).
                execute();

            // shut down audio
            line.drain();
            line.stop();
            line.close();

            System.out.println("result = " + result.toString());
        }

        public static void main(String[] args) throws Exception {
            FFMpegToRaw raw = new FFMpegToRaw();
            raw.basicCheck();
            raw.toRawAndPlay();
        }
    }
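
    One detail worth checking in the code above: the ffmpeg output format is set to "s16be" (big-endian samples) while the AudioFormat is built with bigEnd = false (little-endian), and that kind of mismatch alone is enough to turn the stream into harsh noise. As a minimal sketch (assuming playback stays on javax.sound.sampled), both sides can be derived from one set of parameters so they cannot drift apart; the PcmFormat class name is just for illustration.

    import javax.sound.sampled.AudioFormat;

    public class PcmFormat {
        // One set of parameters drives both the ffmpeg raw-PCM format name and the Java AudioFormat,
        // so the decoded stream and the playback line always agree.
        final int sampleRate = 44100;     // ideally taken from the ffprobe stream info, not hard-coded
        final int sampleSize = 16;        // bits per sample
        final int channels   = 1;
        final boolean bigEndian = false;

        /** ffmpeg raw PCM format name, e.g. "s16le" or "s16be". */
        String ffmpegFormat() {
            return "s16" + (bigEndian ? "be" : "le");
        }

        /** Matching javax.sound format for the SourceDataLine. */
        AudioFormat javaFormat() {
            return new AudioFormat(sampleRate, sampleSize, channels, /* signed */ true, bigEndian);
        }
    }

    Reading the sample rate and channel count from the ffprobe results gathered in basicCheck(), instead of hard-coding 44100 Hz and mono, would answer the preset question in the same way: the playback format simply follows the input file.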


    Thank You
