
Other articles (100)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is operational right away. It is therefore not mandatory to go through a configuration step for this.

  • Enhancing it visually

    10 April 2011

    MediaSPIP is based on a system of themes and templates ("squelettes"). The templates define where information is placed on the page, defining a specific use of the platform, while the themes define the overall graphic design.
    Anyone can propose a new graphic theme or a template and make it available to the community.

On other sites (10976)

  • Problem with the AudioDispatcher: the analysis in AudioDispatcherFactory is not running (TarsosDSP)

    4 February, by roman_gor_

    I'm building an application for sound analysis and spotlight control: the colors of the spotlight change to the beat of the music. I use the TarsosDSP library for the analysis, and additionally the FFmpeg-Kit library to convert audio to WAV format (16-bit PCM) so the AudioDispatcher can work with it.
The problem is that when the audio is passed in the correct format, the dispatcher starts and immediately ends. The boolean process() method is never executed, but the processingFinished() method is. I found out that the stream starts, the file is not empty, and it is converted to the correct format, BUT the getFrameLength() method of the AudioInputStream created from the filePath returns -1, i.e. the frame length is never filled in. I've already searched through everything, including the library code on GitHub, and asked all the neural networks, and I don't know how to solve this. Is the problem with AudioDispatcher and AudioDispatcherFactory.fromPipe()?

    private void playAndAnalyzeAudio(String filePath, Uri uri)
    {
        if (mediaPlayer != null)
            mediaPlayer.release();
        mediaPlayer = MediaPlayer.create(requireContext(), uri);

        new Thread(() -> {
            extractAudio(inputFilePath, outputFilePath); // inputFilePath / outputFilePath: fields defined elsewhere (not shown)
            getActivity().runOnUiThread(() -> {
                mediaPlayer = MediaPlayer.create(requireContext(), uri);
                if (mediaPlayer != null) {
                    mediaPlayer.start(); // Start music after analysis
                    startSendingData(); // Start sending data
                }
            });
        }).start();
    }

    private void analyzeAudio(String filePath)
    {
        try {
            AudioDispatcher audioDispatcher = AudioDispatcherFactory.fromPipe(filePath, 44100, 1024, 0);
            MFCC mfcc = new MFCC(1024, 44100, 13, 50, 20, 10000);
            audioDispatcher.addAudioProcessor(mfcc);
            Log.d("AUDIO_ANALYSIS", "Starting audio file analysis..." + audioDispatcher);
            audioDispatcher.addAudioProcessor(new AudioProcessor() {
                @Override
                public boolean process(AudioEvent audioEvent) {
                    Log.d("AUDIO_ANALYSIS", "Processing audio...");

                    float[] amplitudes = audioEvent.getFloatBuffer();
                    Log.d("AUDIO_ANALYSIS", "Buffer size: " + amplitudes.length);

                    float[] mfccs = mfcc.getMFCC();
                    if (mfccs == null) {
                        Log.e("AUDIO_ANALYSIS", "MFCC was not generated!");
                        return true;
                    }

                    float currentBass = mfccs[0] + mfccs[1];
                    float totalEnergy = 0;
                    for (float amp : amplitudes) {
                        totalEnergy += Math.abs(amp);
                    }

                    Log.d("AUDIO_ANALYSIS", "Bass Energy: " + currentBass + ", Total Energy: " + totalEnergy);

                    if (currentBass > BASS_THRESHOLD || totalEnergy > ENERGY_THRESHOLD) {
                        changeColor();
                        Log.d("SONG", "Color was changed to: " + currentColor);
                        brightness = MAX_BRIGHTNESS;
                    } else {
                        brightness *= 0.9f;
                    }

                    return true;
                }

                @Override
                public void processingFinished() {
                    getActivity().runOnUiThread(() -> Toast.makeText(requireContext(), "Analysis finished", Toast.LENGTH_SHORT).show());
                }
            });
            File file = new File(filePath);
            if (!file.exists() || file.length() == 0) {
                Log.e("AUDIO_ANALYSIS", "Error: file is empty! " + filePath);
                return;
            } else {
                Log.d("AUDIO_ANALYSIS", "File exists, size: " + file.length() + " bytes.");
            }
            Log.d("AUDIO_ANALYSIS", "Start of analyzing: " + filePath);
            File ffmpegFile = new File(getContext().getCacheDir(), "ffmpeg");
            if (!ffmpegFile.setExecutable(true)) {
                Log.e("AUDIO_ANALYSIS", "Could not make ffmpeg executable!");
            }
            else
                Log.d("AUDIO_ANALYSIS", "ffmpeg is executable!");

            new Thread(() -> {
                Log.d("AUDIO_ANALYSIS", "Start dispatcher...");
                audioDispatcher.run();
                Log.d("AUDIO_ANALYSIS", "Dispatcher end.");
            }).start();
        } catch (Exception e) {
            e.printStackTrace();
            Toast.makeText(requireContext(), "Error of analyzing", Toast.LENGTH_SHORT).show();
        }
    }
    public void extractAudio(String inputFilePath, String outputFilePath) {
        File outputFile = new File(outputFilePath);
        if (outputFile.exists()) {
            outputFile.delete();  // Delete the existing file
        }
        // Build the command to extract the audio
        String command = "-i " + inputFilePath + " -vn -acodec pcm_s16le -ar 44100 -ac 2 " + outputFilePath;

        // Use FFmpegKit to run the command
        FFmpegKit.executeAsync(command, session -> {
            if (session.getReturnCode().isSuccess()) {
                Log.d("AUDIO_EXTRACT", "Audio extracted successfully: " + outputFilePath);
                analyzeAudio(outputFilePath);  // Continue with the audio analysis
            } else {
                Log.e("AUDIO_EXTRACT", "Audio extraction error: " + session.getFailStackTrace());
            }
        });
    }
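    One thing worth double-checking in the command built in extractAudio() above: FFmpegKit receives a single command string and splits it into arguments, so an input or output path that contains spaces will silently break the command. A minimal sketch of quoting the paths first (quote() and buildExtractCommand() are hypothetical helpers, not FFmpegKit API):

```java
public class CommandBuilder {
    // Hypothetical helper: wrap a path in double quotes so the
    // command-string parser treats it as a single argument.
    static String quote(String path) {
        return "\"" + path + "\"";
    }

    // Hypothetical helper: build the same extraction command as above,
    // but with quoted paths.
    static String buildExtractCommand(String inputFilePath, String outputFilePath) {
        return "-i " + quote(inputFilePath)
                + " -vn -acodec pcm_s16le -ar 44100 -ac 2 "
                + quote(outputFilePath);
    }
}
```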

    Sorry about the number of lines, I tried to describe the problem in detail.
I tried replacing AudioDispatcherFactory.fromPipe() with AudioDispatcherFactory.fromFile(), but that method is not available on Android, only in desktop Java; as far as I can see from the error, it relies on javax.sound ("unexpected error, method not available").
I tried changing the String command in the extractAudio() method and the arguments of the fromPipe() method, but it did not bring success.
I want my audio file to be correctly analyzed with the AudioDispatcher and the analysis data to then be transferred to the Arduino. Right now in the logs I see "Color : null, value : 0.0".
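    One way to narrow this down: getFrameLength() returning -1 usually means the length could not be read from the WAV header (for example when the header was written for streaming, with a zero or 0xFFFFFFFF size field), not that the file is empty. A self-contained sketch that computes the frame count directly from a canonical PCM WAV header, so you can compare it against what the decoder reports (WavFrames is a hypothetical helper, not part of TarsosDSP, and it assumes the "fmt " chunk precedes "data" as in canonical WAV files):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical helper: walk the RIFF chunks of a PCM WAV stream and
// compute the frame count from the "fmt " and "data" chunk contents.
public class WavFrames {
    public static long frameLength(InputStream in) throws IOException {
        DataInputStream d = new DataInputStream(in);
        byte[] riff = new byte[12];
        d.readFully(riff);                  // "RIFF" <size> "WAVE"
        int channels = 0, bitsPerSample = 0;
        while (true) {
            byte[] id = new byte[4];
            d.readFully(id);
            long size = readLE32(d);
            String chunk = new String(id, "US-ASCII");
            if (chunk.equals("fmt ")) {
                d.skipBytes(2);             // audio format tag
                channels = (int) readLE16(d);
                d.skipBytes(8);             // sample rate + byte rate
                d.skipBytes(2);             // block align
                bitsPerSample = (int) readLE16(d);
                d.skipBytes((int) size - 16); // any fmt extension bytes
            } else if (chunk.equals("data")) {
                int frameSize = channels * (bitsPerSample / 8);
                return size / frameSize;    // frames = data bytes / frame size
            } else {
                d.skipBytes((int) size);    // skip unrelated chunks
            }
        }
    }

    private static long readLE16(DataInputStream d) throws IOException {
        return d.readUnsignedByte() | (d.readUnsignedByte() << 8);
    }

    private static long readLE32(DataInputStream d) throws IOException {
        return readLE16(d) | (readLE16(d) << 16);
    }
}
```

If this returns a sane frame count while the decoder still reports -1, the WAV itself is complete and the problem is on the decoding/pipe side rather than in the extraction step.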

  • movenc: Heuristically set the duration of the last sample in a fragment if not set

    6 March 2015, by Martin Storsjö
    movenc: Heuristically set the duration of the last sample in a fragment if not set
    

    Even if this is a guess, it is way better than writing a zero duration
    of the last sample in a fragment (because if the duration is zero,
    the first sample of the next fragment will have the same timestamp
    as the last sample in the previous one).

    Since we normally don’t require libavformat muxer users to set
    the duration field in AVPacket, we probably can’t strictly require
    it here either, so don’t log this as a strict warning, only as info.

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DBH] libavformat/movenc.c
    • [DBH] libavformat/movenc.h
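    The heuristic described in the commit message above can be sketched in a few lines (a simplified model in Java for illustration, not the actual movenc.c code, which operates on MOVTrack sample tables in C):

```java
public class FragmentDuration {
    // Simplified model of the heuristic: if the last sample of a fragment
    // was left with a zero duration, reuse the previous sample's duration
    // so the next fragment's first sample gets a distinct timestamp.
    public static long[] fillLastDuration(long[] durations) {
        long[] out = durations.clone();
        int last = out.length - 1;
        if (last >= 1 && out[last] == 0) {
            out[last] = out[last - 1];
        }
        return out;
    }
}
```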
  • Demo: Add sample code for Bootstrap 4 usage (#2173)

    20 June 2018, by gwhenne
    Demo: Add sample code for Bootstrap 4 usage (#2173)