Media (0)

No media matching your criteria is available on the site.

Other articles (26)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    During installation, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...) (these fields are sketched as a data class just after this list)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register for the project users’ mailing (...)

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of specific uses of MediaSPIP for specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of its kind. Its members (...)
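
    As a purely illustrative aside (SPIPmotion itself is a SPIP plugin, not Java code), the four queue fields described in the first excerpt above can be sketched as a plain data class; only the field names come from the excerpt, the types are assumptions.

    // Illustrative only: mirrors the columns of spip_spipmotion_attentes
    // described in the SPIPmotion excerpt above; types are assumed.
    public class SpipmotionAttente {
        public long idSpipmotionAttente; // unique numeric id of the encoding task to process
        public long idDocument;          // numeric id of the original document to encode
        public long idObjet;             // id of the object the encoded document will be attached to
        public String objet;             // type of that object
    }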

On other sites (4183)

  • Webcam Serverless Live stream

    23 July 2021, by curiouscoder

    I'm trying to live stream my webcam in a serverless way, with the following flow:

    webcam browser >> s3 bucket >> lambda/ffmpeg encoding >> s3 output bucket >> dash player

    This is working really well so far, but I'm facing the following problem:

    ffmpeg will only encode the seconds it has received (I upload the webcam stream to s3 every X seconds as a ~300 kB .webm file). So the .mpd file generated by the ffmpeg encoder ends up with type 'static' once ffmpeg finishes encoding, not the desired 'dynamic' type. Therefore, the dash player won't request the other files from s3 and the streaming stops. For example, if I let the webcam stream run for 15 seconds, the viewer is able to watch those 15 seconds. But if I keep sending the streams every 2 seconds, the viewer will only be able to watch the first 2 seconds, because the browser won't request any other .m4s files.

    So, I have the following question:

    Is there a way to force the dash player to reload the .mpd file stored in s3, even when the type is static instead of dynamic?

    Thanks in advance!

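    One possible workaround, not suggested in the post itself, is to patch the manifest on the server side before it is uploaded to the output bucket: flip type="static" to type="dynamic" and add a minimumUpdatePeriod so the player keeps polling s3 for a fresh .mpd. A rough Java sketch under those assumptions (the class name is hypothetical; a player may also expect attributes such as availabilityStartTime, so this is a starting point rather than a complete fix):

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Hypothetical helper: patch the MPD that ffmpeg wrote before uploading it,
    // so DASH players treat the presentation as live and keep re-fetching it.
    public class MpdPatcher {
        public static void makeDynamic(Path mpdFile) throws Exception {
            String mpd = Files.readString(mpdFile, StandardCharsets.UTF_8);

            // ffmpeg finalizes the manifest as a static (VOD) presentation;
            // mark it dynamic and tell players how often to reload it.
            mpd = mpd.replace("type=\"static\"", "type=\"dynamic\"");
            if (!mpd.contains("minimumUpdatePeriod")) {
                mpd = mpd.replaceFirst("<MPD ", "<MPD minimumUpdatePeriod=\"PT2S\" ");
            }

            Files.writeString(mpdFile, mpd, StandardCharsets.UTF_8);
        }
    }
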
  • libfdk-aacdec: Reduce the default decoder delay by one frame

    4 July 2014, by Omer Osman
    libfdk-aacdec: Reduce the default decoder delay by one frame
    

    The default error concealment method if none is set via
    aacDecoder_SetParam(AAC_CONCEAL_METHOD) is set in
    CConcealment_InitCommonData within the fdk-aac library
    and is set to Energy Interpolation. This method requires one frame
    delay to the output. To reduce the default decoder output delay and
    avoid missing the last frame in file based decoding, use Noise
    Substitution as the default concealment method.

    Signed-off-by: Omer Osman <omer.osman@iis.fraunhofer.de>
    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DBH] libavcodec/libfdk-aacdec.c
    • [DBH] libavcodec/version.h
  • The problem with the AudioDispatcher, the analysis in audioDispatcherFactory is not running, TarsosDSP

    4 February, by roman_gor_

    I'm making an application for sound analysis and spotlight control; the colors of the spotlight change to the beat of the music. I use the TarsosDSP library for this, and I additionally downloaded the FFmpegKit library to convert the audio to WAV format (16-bit little-endian PCM) to work with the AudioDispatcher.
    The problem is that even when the audio is passed in the correct format, the dispatcher starts and immediately ends: the boolean process() method is never executed, but processingFinished() is. I found out that the stream starts, the file is not empty and is converted to the correct format, BUT the getFrameLength() method of the AudioStream to which I pass the filePath returns -1, i.e. it is never actually filled in. I've already searched through everything, including the library code on GitHub and every neural network, and I don't know how to solve this. Is the problem with AudioDispatcher and AudioDispatcherFactory.fromPipe()?


    private void playAndAnalyzeAudio(String filePath, Uri uri)
    {
        if (mediaPlayer != null)
            mediaPlayer.release();
        mediaPlayer = MediaPlayer.create(requireContext(), uri);

        new Thread(() -> {
            extractAudio(inputFilePath, outputFilePath);
            getActivity().runOnUiThread(() -> {
                mediaPlayer = MediaPlayer.create(requireContext(), uri);
                if (mediaPlayer != null) {
                    mediaPlayer.start(); // Start music after analysis
                    startSendingData(); // Start data sending
                }
            });
        }).start();
    }

    private void analyzeAudio(String filePath)
    {
        try {
            AudioDispatcher audioDispatcher = AudioDispatcherFactory.fromPipe(filePath, 44100, 1024, 0);
            MFCC mfcc = new MFCC(1024, 44100, 13, 50, 20, 10000);
            audioDispatcher.addAudioProcessor(mfcc);
            Log.d("AUDIO_ANALYSIS", "Starting audio file analysis... " + audioDispatcher);
            audioDispatcher.addAudioProcessor(new AudioProcessor() {
                @Override
                public boolean process(AudioEvent audioEvent) {
                    Log.d("AUDIO_ANALYSIS", "Processing audio...");

                    float[] amplitudes = audioEvent.getFloatBuffer();
                    Log.d("AUDIO_ANALYSIS", "Buffer size: " + amplitudes.length);

                    float[] mfccs = mfcc.getMFCC();
                    if (mfccs == null) {
                        Log.e("AUDIO_ANALYSIS", "MFCC was not generated!");
                        return true;
                    }

                    float currentBass = mfccs[0] + mfccs[1];
                    float totalEnergy = 0;
                    for (float amp : amplitudes) {
                        totalEnergy += Math.abs(amp);
                    }

                    Log.d("AUDIO_ANALYSIS", "Bass Energy: " + currentBass + ", Total Energy: " + totalEnergy);

                    if (currentBass > BASS_THRESHOLD || totalEnergy > ENERGY_THRESHOLD) {
                        changeColor();
                        Log.d("SONG", "Color was changed to: " + currentColor);
                        brightness = MAX_BRIGHTNESS;
                    } else {
                        brightness *= 0.9f;
                    }

                    return true;
                }

                @Override
                public void processingFinished() {
                    getActivity().runOnUiThread(() -> Toast.makeText(requireContext(), "Analysis finished", Toast.LENGTH_SHORT).show());
                }
            });
            File file = new File(filePath);
            if (!file.exists() || file.length() == 0) {
                Log.e("AUDIO_ANALYSIS", "Error: file is empty! " + filePath);
                return;
            } else {
                Log.d("AUDIO_ANALYSIS", "File exists, size: " + file.length() + " bytes.");
            }
            Log.d("AUDIO_ANALYSIS", "Start of analyzing: " + filePath);
            File ffmpegFile = new File(getContext().getCacheDir(), "ffmpeg");
            if (!ffmpegFile.setExecutable(true)) {
                Log.e("AUDIO_ANALYSIS", "You don't have any rights for ffmpeg!");
            }
            else
                Log.e("AUDIO_ANALYSIS", "You have rights for ffmpeg!");

            new Thread(() -> {
                Log.d("AUDIO_ANALYSIS", "Start dispatcher...");
                audioDispatcher.run();
                Log.d("AUDIO_ANALYSIS", "Dispatcher end.");
            }).start();
        } catch (Exception e) {
            e.printStackTrace();
            Toast.makeText(requireContext(), "Error while analyzing", Toast.LENGTH_SHORT).show();
        }
    }

    public void extractAudio(String inputFilePath, String outputFilePath) {
        File outputFile = new File(outputFilePath);
        if (outputFile.exists()) {
            outputFile.delete();  // Delete the existing file
        }
        // Build the command to extract the audio
        String command = "-i " + inputFilePath + " -vn -acodec pcm_s16le -ar 44100 -ac 2 " + outputFilePath;

        // Use FFmpegKit to run the command
        FFmpegKit.executeAsync(command, session -> {
            if (session.getReturnCode().isSuccess()) {
                Log.d("AUDIO_EXTRACT", "Audio extracted successfully: " + outputFilePath);
                analyzeAudio(outputFilePath);  // Continue with audio analysis
            } else {
                Log.e("AUDIO_EXTRACT", "Audio extraction error: " + session.getFailStackTrace());
            }
        });
    }


    Sorry about the number of lines, I tried to describe the problem in detail.
    I tried to replace AudioDispatcherFactory.fromPipe() with AudioDispatcherFactory.fromFile(), but that method isn't available on Android, only in desktop Java; as far as I can see, the error is "Javax.sound..., unexpected error, method not available".
    I tried changing the String command in the extractAudio() method and the arguments of the fromPipe() method, but that did not bring success.
    I want my audio file to be analyzed correctly with the AudioDispatcher and the analysis data then transferred to the Arduino. Right now in the logs I see "Color: null, value: 0.0".
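
    Not part of the original question, but one way around the fromPipe()/fromFile() limitation on Android is to skip both factory methods and build the AudioDispatcher directly from the WAV file that FFmpegKit has already extracted, using TarsosDSP's UniversalAudioInputStream. A minimal sketch under stated assumptions: the file is 44100 Hz, 16-bit signed little-endian, mono PCM (so the extraction command would use -ac 1 instead of -ac 2) with a canonical 44-byte RIFF header; the class name is hypothetical.

    import java.io.File;
    import java.io.FileInputStream;

    import be.tarsos.dsp.AudioDispatcher;
    import be.tarsos.dsp.io.TarsosDSPAudioFormat;
    import be.tarsos.dsp.io.UniversalAudioInputStream;

    // Hypothetical helper: builds an AudioDispatcher on Android without
    // AudioDispatcherFactory.fromPipe()/fromFile(), by reading the WAV file
    // that FFmpegKit already produced.
    public class WavDispatcherFactory {

        public static AudioDispatcher fromWavFile(String wavPath) throws Exception {
            // Format the extraction step is assumed to produce:
            // 44100 Hz, 16-bit signed PCM, little-endian, mono.
            TarsosDSPAudioFormat format =
                    new TarsosDSPAudioFormat(44100, 16, 1, true, false);

            FileInputStream in = new FileInputStream(new File(wavPath));
            long skipped = in.skip(44); // skip the canonical RIFF/WAVE header, keep raw samples

            UniversalAudioInputStream audioStream = new UniversalAudioInputStream(in, format);
            // Same buffer size and overlap as in the question, so MFCC(1024, ...) still fits.
            return new AudioDispatcher(audioStream, 1024, 0);
        }
    }

    The MFCC processor and the custom AudioProcessor from analyzeAudio() can then be added to the returned dispatcher and dispatcher.run() called on a background thread, exactly as in the original code.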
