
Other articles (85)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects / individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • (De)Activating features (plugins)

    18 February 2011, by

    To manage the addition and removal of extra features (plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To access it, simply go to the configuration area and open the "Plugin management" page.
    MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)

On other sites (10561)

  • Setting up RTP on Nginx

    2 February 2021, by Swap

    I'm trying to use the Janus media server to relay WebRTC streams to a particular RTP host/port, from where ffmpeg can pick them up as input and convert them further to an RTMP stream, which can then be used to broadcast to various social media platforms (such as YouTube, Twitch, Facebook, etc.)

    


    My inspiration for this has been the following blog - https://www.meetecho.com/blog/firefox-webrtc-youtube-kinda/

    


    Specifically, I'm trying to replicate the following architecture -

    


    [architecture diagram]

    


    And Janus, as per their documentation, has a very neat API for doing it -

    


    {
        "request" : "rtp_forward",
        "room" : <unique numeric ID of the room the publisher is in>,
        "publisher_id" : <unique numeric ID of the publisher to relay externally>,
        "host" : "<host address to forward the RTP and RTCP packets to>",
        "host_family" : "<ipv4|ipv6; optional>",
        "audio_port" : <port to forward the audio RTP packets to>,
        "audio_ssrc" : <audio SSRC to use when streaming; optional>,
        "audio_pt" : <audio payload type to use when streaming; optional>,
        "audio_rtcp_port" : <port to contact to receive audio RTCP feedback from the recipient; optional, and currently unused for audio>,
        "video_port" : <port to forward the video RTP packets to>,
        "video_ssrc" : <video SSRC to use when streaming; optional>,
        "video_pt" : <video payload type to use when streaming; optional>,
        "video_rtcp_port" : <port to contact to receive video RTCP feedback from the recipient; optional>,
        "simulcast" : <true|false; optional, default=false>,
        "video_port_2" : <if simulcasting and forwarding each substream, port to forward the video RTP packets from the second substream/layer to>,
        "video_ssrc_2" : <if simulcasting and forwarding each substream, video SSRC to use for the second substream/layer; optional>,
        "video_pt_2" : <if simulcasting and forwarding each substream, video payload type to use for the second substream/layer; optional>,
        "video_port_3" : <if simulcasting and forwarding each substream, port to forward the video RTP packets from the third substream/layer to>,
        "video_ssrc_3" : <if simulcasting and forwarding each substream, video SSRC to use for the third substream/layer; optional>,
        "video_pt_3" : <if simulcasting and forwarding each substream, video payload type to use for the third substream/layer; optional>,
        "data_port" : <port to forward the datachannel messages to>,
        "srtp_suite" : <length of the authentication tag (32 or 80); optional>,
        "srtp_crypto" : "<key to use as crypto (base64-encoded key as in SDES); optional>"
    }
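    For reference, a request like that can be submitted over Janus' HTTP transport with a plain POST to the plugin handle. The sketch below uses placeholder room/publisher IDs, host and ports, and the session/handle path segments come from the usual create-session / attach handshake (shown here only as environment variables):

    ```shell
    # Minimal rtp_forward body; the room/publisher IDs, host and ports are
    # placeholders that must match your Janus videoroom session.
    BODY='{
      "request": "rtp_forward",
      "room": 1234,
      "publisher_id": 5678,
      "host": "127.0.0.1",
      "audio_port": 5002,
      "video_port": 5004
    }'

    # Sanity-check the JSON before sending it anywhere
    echo "$BODY" | python3 -m json.tool > /dev/null && echo "JSON OK"

    # With the HTTP transport enabled, the message is POSTed to the handle
    # ($SESSION_ID and $HANDLE_ID come from the create/attach handshake):
    # curl -s -X POST "http://localhost:8088/janus/$SESSION_ID/$HANDLE_ID" \
    #      -H 'Content-Type: application/json' \
    #      -d "{\"janus\":\"message\",\"transaction\":\"tx1\",\"body\":$BODY}"
    ```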


    For this, I've set up an Nginx server, where I've also installed Janus, and everything has been running smoothly so far. But I'm quite clueless as to how to set up my Nginx server so that it accepts RTP connections (which will be forwarded as RTMP using ffmpeg).


    Please guide me to any relevant resources that would help me achieve this. Thanks in advance!

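    For what it's worth, the ffmpeg leg of such a pipeline is usually wired up with an SDP file rather than through Nginx itself: ffmpeg consumes the forwarded RTP directly, and Nginx (with the RTMP module) only ever sees the resulting RTMP stream. A sketch, assuming the ports (5002/5004), payload types (111 Opus, 96 H.264) and the nginx-rtmp application name "live" — all of which must match your own rtp_forward call and nginx.conf:

    ```shell
    # Describe the two RTP streams Janus forwards; ports and payload types
    # here are assumptions that must match what Janus actually sends.
    cat > janus.sdp <<'EOF'
    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=Janus RTP forward
    c=IN IP4 127.0.0.1
    t=0 0
    m=audio 5002 RTP/AVP 111
    a=rtpmap:111 opus/48000/2
    m=video 5004 RTP/AVP 96
    a=rtpmap:96 H264/90000
    EOF

    # ffmpeg reads the RTP via the SDP and pushes RTMP to an nginx-rtmp
    # application (assumed to be called "live" in nginx.conf):
    # ffmpeg -protocol_whitelist file,udp,rtp -i janus.sdp \
    #        -c:v libx264 -preset veryfast -c:a aac -ar 44100 \
    #        -f flv rtmp://localhost/live/mystream
    ```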

  • The problem with the AudioDispatcher, the analysis in audioDispatcherFactory is not running, TarsosDSP

    4 February, by roman_gor_

    I'm making an application for sound analysis and spotlight control. The colors of the spotlight change to the beat of the music. I use the TarsosDSP library for this, and additionally downloaded the FFmpeg-Kit library to convert audio to WAV format (PCM 16-bit LE) to work with the AudioDispatcher.
    The problem is that even when the audio is passed in the correct format, the dispatcher starts and immediately ends. The boolean process() method is never executed, but the processingFinished() method is. I found out that the stream starts, the file is not empty and is converted to the correct format, BUT the getFrameLength() method of the AudioStream to which I pass the filePath returns -1, i.e. it is not actually filled in. I've already searched through everything, the library code on GitHub and all the neural networks, and I don't know how to solve this issue. Is the problem with AudioDispatcher and AudioDispatcherFactory.fromPipe()?


    private void playAndAnalyzeAudio(String filePath, Uri uri)
    {
        if (mediaPlayer != null)
            mediaPlayer.release();
        mediaPlayer = MediaPlayer.create(requireContext(), uri);

        new Thread(() -> {
            extractAudio(inputFilePath, outputFilePath);
            getActivity().runOnUiThread(() -> {
                mediaPlayer = MediaPlayer.create(requireContext(), uri);
                if (mediaPlayer != null) {
                    mediaPlayer.start(); // Start music after analysis
                    startSendingData(); // Start data sending
                }
            });
        }).start();
    }

    private void analyzeAudio(String filePath)
    {
        try {
            AudioDispatcher audioDispatcher = AudioDispatcherFactory.fromPipe(filePath, 44100, 1024, 0);
            MFCC mfcc = new MFCC(1024, 44100, 13, 50, 20, 10000);
            audioDispatcher.addAudioProcessor(mfcc);
            Log.d("AUDIO_ANALYSIS", "Starting analysis of the audio file... " + audioDispatcher);
            audioDispatcher.addAudioProcessor(new AudioProcessor() {
                @Override
                public boolean process(AudioEvent audioEvent) {
                    Log.d("AUDIO_ANALYSIS", "Processing audio...");

                    float[] amplitudes = audioEvent.getFloatBuffer();
                    Log.d("AUDIO_ANALYSIS", "Buffer size: " + amplitudes.length);

                    float[] mfccs = mfcc.getMFCC();
                    if (mfccs == null) {
                        Log.e("AUDIO_ANALYSIS", "MFCC was not generated!");
                        return true;
                    }

                    float currentBass = mfccs[0] + mfccs[1];
                    float totalEnergy = 0;
                    for (float amp : amplitudes) {
                        totalEnergy += Math.abs(amp);
                    }

                    Log.d("AUDIO_ANALYSIS", "Bass Energy: " + currentBass + ", Total Energy: " + totalEnergy);

                    if (currentBass > BASS_THRESHOLD || totalEnergy > ENERGY_THRESHOLD) {
                        changeColor();
                        Log.d("SONG", "Color was changed to: " + currentColor);
                        brightness = MAX_BRIGHTNESS;
                    } else {
                        brightness *= 0.9f;
                    }

                    return true;
                }

                @Override
                public void processingFinished() {
                    getActivity().runOnUiThread(() -> Toast.makeText(requireContext(), "Analysis finished", Toast.LENGTH_SHORT).show());
                }
            });
            File file = new File(filePath);
            if (!file.exists() || file.length() == 0) {
                Log.e("AUDIO_ANALYSIS", "Error: file is empty! " + filePath);
                return;
            } else {
                Log.d("AUDIO_ANALYSIS", "File exists, size: " + file.length() + " bytes.");
            }
            Log.d("AUDIO_ANALYSIS", "Start of analyzing: " + filePath);
            File ffmpegFile = new File(getContext().getCacheDir(), "ffmpeg");
            if (!ffmpegFile.setExecutable(true)) {
                Log.e("AUDIO_ANALYSIS", "No execute permission for ffmpeg!");
            }
            else
                Log.e("AUDIO_ANALYSIS", "ffmpeg is executable!");

            new Thread(() -> {
                Log.d("AUDIO_ANALYSIS", "Start dispatcher...");
                audioDispatcher.run();
                Log.d("AUDIO_ANALYSIS", "Dispatcher end.");
            }).start();
        } catch (Exception e) {
            e.printStackTrace();
            Toast.makeText(requireContext(), "Error analyzing audio", Toast.LENGTH_SHORT).show();
        }
    }

    public void extractAudio(String inputFilePath, String outputFilePath) {
        File outputFile = new File(outputFilePath);
        if (outputFile.exists()) {
            outputFile.delete();  // Delete the existing file
        }
        // Build the command to extract the audio
        String command = "-i " + inputFilePath + " -vn -acodec pcm_s16le -ar 44100 -ac 2 " + outputFilePath;

        // Use FFmpegKit to run the command
        FFmpegKit.executeAsync(command, session -> {
            if (session.getReturnCode().isSuccess()) {
                Log.d("AUDIO_EXTRACT", "Audio extracted successfully: " + outputFilePath);
                analyzeAudio(outputFilePath);  // Continue with the audio analysis
            } else {
                Log.e("AUDIO_EXTRACT", "Audio extraction error: " + session.getFailStackTrace());
            }
        });
    }


    Sorry about the number of lines, I tried to describe the problem in detail.
    I tried to change AudioDispatcherFactory.fromPipe() to AudioDispatcherFactory.fromFile(), but that method is not available on Android, only in plain Java; as far as I can see from the error: "javax.sound..., unexpected error, method not available".
    I tried to change the String command in the extractAudio() method and the arguments of the fromPipe() method, but it did not bring success.
    I want my audio file to be correctly analyzed with the AudioDispatcher, and then the analysis data to be transferred to an Arduino. Right now in the logs I see "Color : null, value : 0.0".

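    A point worth checking here: AudioDispatcherFactory.fromPipe() does not read the file itself; it spawns an ffmpeg binary and reads raw PCM from its stdout, so an immediate processingFinished() usually means that pipe produced zero bytes. Roughly the equivalent pipeline can be tested outside the app (the exact flags are TarsosDSP internals, so this is only an approximation, and the generated test tone stands in for the extracted WAV):

    ```shell
    # Generate a 1-second test tone as a stand-in for the extracted audio file
    ffmpeg -y -f lavfi -i "sine=frequency=440:duration=1" -ar 44100 -ac 1 input.wav

    # Approximation of what fromPipe() runs: decode to mono 16-bit PCM on stdout
    ffmpeg -y -i input.wav -vn -f s16le -acodec pcm_s16le -ar 44100 -ac 1 pipe:1 > raw.pcm

    # 44100 Hz * 2 bytes * 1 channel = 88200 bytes per second of audio;
    # an empty raw.pcm is what makes the dispatcher finish immediately.
    ls -l raw.pcm
    ```

    If the pipe works on a desktop but not on the device, the prime suspect is the packaged ffmpeg binary itself (wrong ABI, or not executable from the cache dir), since fromPipe() silently sees an empty stream in that case.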

  • FFPLAY produces black video output [closed]

    28 January 2020, by RooterTooter

    I’m having an issue playing videos with ffplay on an embedded arm device (imx6). The OS is based on yocto sumo and uses the meta-freescale layers for imx6.

    I have a number of test videos in different formats that I am sure are formatted correctly (they play fine on my laptop with ffplay). FFmpeg has all the necessary codecs and detects my streams; it plays audio without an issue, but the video is just black.

    It’s worth noting that I’m running X11 and have xterm running, and when ffplay is trying to play, a black box will pop up on the screen in the correct dimensions, as if it thinks it’s decoding video, but it’s always blank.

    $ DISPLAY=:0 ffplay test.mp4
    ffplay version 3.3.3 Copyright (c) 2003-2017 the FFmpeg developers
     built with gcc 7.3.0 (GCC)
     configuration: --disable-stripping --enable-pic --enable-shared --enable-pthreads --disable-libxcb --disable-libxcb-shm --disable-libxcb-xfixes --disable-libxcb-shape --enable-nonfree --cross-prefix=arm-poky-linux-gnueabi- --ld='arm-poky-linux-gnueabi-gcc -march=armv7-a -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 --sysroot=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot' --cc='arm-poky-linux-gnueabi-gcc -march=armv7-a -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 --sysroot=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot' --cxx='arm-poky-linux-gnueabi-g++ -march=armv7-a -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 --sysroot=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot' --arch=arm --target-os=linux --enable-cross-compile --extra-cflags=' -O2 -pipe -g -feliminate-unused-debug-types -fdebug-prefix-map=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0=/usr/src/debug/ffmpeg/3.3.3-r0 -fdebug-prefix-map=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot-native= -fdebug-prefix-map=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot= -march=armv7-a -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 --sysroot=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot' --extra-ldflags='-Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed' --sysroot=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot --enable-hardcoded-tables --libdir=/usr/lib --shlibdir=/usr/lib --datadir=/usr/share/ffmpeg --disable-mipsdsp --disable-mipsdspr2 --cpu=cortex-a9 --pkg-config=pkg-config --enable-avcodec --enable-avdevice --enable-avfilter 
--enable-avformat --enable-avresample --enable-bzlib --enable-gpl --disable-libgsm --disable-indev=jack --disable-libvorbis --enable-lzma --disable-libmp3lame --enable-openssl --enable-postproc --disable-libschroedinger --enable-sdl2 --disable-libspeex --enable-swresample --enable-swscale --enable-libtheora --enable-vaapi --enable-vdpau --enable-libvpx --enable-libx264 --enable-outdev=xv
     libavutil      55. 58.100 / 55. 58.100
     libavcodec     57. 89.100 / 57. 89.100
     libavformat    57. 71.100 / 57. 71.100
     libavdevice    57.  6.100 / 57.  6.100
     libavfilter     6. 82.100 /  6. 82.100
     libavresample   3.  5.  0 /  3.  5.  0
     libswscale      4.  6.100 /  4.  6.100
     libswresample   2.  7.100 /  2.  7.100
     libpostproc    54.  5.100 / 54.  5.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf58.20.100
     Duration: 00:00:30.88, start: 0.000000, bitrate: 143 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 320x180, 67 kb/s, 21.08 fps, 21.08 tbr, 16192 tbn, 42.17 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 66 kb/s (default)
       Metadata:
         handler_name    : SoundHandler

    I’ve tried h264 and mp2 video with the same results. Has anyone seen this before?
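
    One way to narrow this down (a diagnostic sketch, not a fix): generate a known-good clip on the device itself with ffmpeg's built-in testsrc source and play it back. If the test pattern is also black, the problem is in the display/SDL output path rather than in the decoder; the build configuration above includes --enable-libx264 and --enable-sdl2, so both commands should be available.

    ```shell
    # Generate a 5-second H.264 test pattern locally (testsrc is a built-in
    # lavfi source, so no input file is needed)
    ffmpeg -y -f lavfi -i "testsrc=duration=5:size=320x180:rate=21" \
           -pix_fmt yuv420p -c:v libx264 testpattern.mp4

    # Play it on the embedded display; -autoexit quits when playback ends
    # DISPLAY=:0 ffplay -autoexit testpattern.mp4

    # To confirm the decoder alone is fine, dump one decoded frame to an
    # image and inspect it off-target:
    # ffmpeg -i test.mp4 -frames:v 1 frame.png
    ```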