
Advanced search
Media (3)
-
Valkaama DVD Cover Outside
4 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011
Updated: October 2011
Language: English
Type: Image
Other articles (86)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects/individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
(De)Activating features (plugins)
18 February 2011
To manage the addition and removal of extra features (plugins), MediaSPIP has used SVP since version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To access it, simply go to the configuration area and open the "Plugin management" page.
MediaSPIP ships by default with the full set of so-called "compatible" plugins, which have been tested and integrated to work perfectly with each (...)
On other sites (10562)
-
Setting up RTP on Nginx
2 February 2021, by Swap
I'm trying to use the Janus media server to relay WebRTC streams to a particular RTP host/port, from where ffmpeg can pick them up as input and convert them to an RTMP stream, which can then be used to broadcast to various social media platforms (such as YouTube, Twitch, Facebook, etc.).


My inspiration for this has been the following blog post: https://www.meetecho.com/blog/firefox-webrtc-youtube-kinda/


Specifically, I'm trying to replicate the following architecture:

[architecture diagram from the blog post omitted]

And Janus, as per its documentation, has a very neat API for doing this:


{
 "request" : "rtp_forward",
 "room" : <unique numeric ID of the room the publisher is in>,
 "publisher_id" : <unique numeric ID of the publisher to relay externally>,
 "host" : "<host address to forward the RTP and data packets to>",
 "host_family" : "<ipv4|ipv6; optional, if the host address needs to be resolved to an IP>",
 "audio_port" : <port to forward the audio RTP packets to>,
 "audio_ssrc" : <audio SSRC to use when streaming; optional>,
 "audio_pt" : <audio payload type to use when streaming; optional>,
 "audio_rtcp_port" : <port to contact to receive audio RTCP feedback from the recipient; optional, and currently unused for audio>,
 "video_port" : <port to forward the video RTP packets to>,
 "video_ssrc" : <video SSRC to use when streaming; optional>,
 "video_pt" : <video payload type to use when streaming; optional>,
 "video_rtcp_port" : <port to contact to receive video RTCP feedback from the recipient; optional>,
 "simulcast" : <true|false, whether the publisher is simulcasting; optional, default=false>,
 "video_port_2" : <if simulcasting and forwarding each substream, port to forward the video RTP packets from the second substream/layer to>,
 "video_ssrc_2" : <if simulcasting and forwarding each substream, video SSRC to use for the second substream/layer; optional>,
 "video_pt_2" : <if simulcasting and forwarding each substream, video payload type to use for the second substream/layer; optional>,
 "video_port_3" : <if simulcasting and forwarding each substream, port to forward the video RTP packets from the third substream/layer to>,
 "video_ssrc_3" : <if simulcasting and forwarding each substream, video SSRC to use for the third substream/layer; optional>,
 "video_pt_3" : <if simulcasting and forwarding each substream, video payload type to use for the third substream/layer; optional>,
 "data_port" : <port to forward the data channel messages to>,
 "srtp_suite" : <length of the authentication tag, 32 or 80; optional>,
 "srtp_crypto" : "<key to use as crypto, base64-encoded as in SDES; optional>"
}
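
A filled-in request might look like this (the room and publisher IDs and the ports below are hypothetical placeholders; the host is wherever ffmpeg will be listening):

{
 "request" : "rtp_forward",
 "room" : 1234,
 "publisher_id" : 5678,
 "host" : "127.0.0.1",
 "audio_port" : 5002,
 "audio_pt" : 111,
 "video_port" : 5004,
 "video_pt" : 96
}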


For this, I've set up an Nginx server, where I've also installed Janus, and everything has been running smoothly so far. But I'm quite clueless as to how to set up my Nginx server so that it accepts RTP connections (which will be forwarded as RTMP using ffmpeg).
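
For reference, here is the receiving side I have in mind (a sketch: the ports and payload types are assumptions matching the example request above, for a typical Opus/VP8 forward, and I'm assuming the Nginx RTMP module with a /live application). ffmpeg, not Nginx, listens for the RTP packets, which are described in an SDP file; ffmpeg then pushes the transcoded result to Nginx over RTMP:

janus.sdp:

v=0
o=- 0 0 IN IP4 127.0.0.1
s=Janus RTP forward
c=IN IP4 127.0.0.1
t=0 0
m=audio 5002 RTP/AVP 111
a=rtpmap:111 opus/48000/2
m=video 5004 RTP/AVP 96
a=rtpmap:96 VP8/90000

ffmpeg -protocol_whitelist file,udp,rtp -i janus.sdp \
 -c:a aac -ar 44100 -c:v libx264 -preset veryfast \
 -f flv rtmp://127.0.0.1/live/stream

In this setup Nginx never terminates RTP itself; it only has to accept the RTMP stream that ffmpeg produces.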


Please guide me to any relevant resources that would help me achieve this. Thanks in advance!


-
Problem with the AudioDispatcher: the analysis in AudioDispatcherFactory is not running (TarsosDSP)
4 February, by roman_gor_
I'm making an application for sound analysis and spotlight control: the colors of the spotlight change to the beat of the music. I use the TarsosDSP library for this, and I additionally pulled in the FFmpegKit library to convert audio to WAV format (16-bit PCM) so the AudioDispatcher can work with it.
The problem is that even when the audio is passed in the correct format, the dispatcher starts and immediately ends: the boolean process() method is never executed, but processingFinished() is. I found out that the stream starts, the file is not empty and is converted to the correct format, BUT getFrameLength() on the AudioStream built from the file path returns -1, i.e. it is never actually filled in. I've already searched everywhere, including the library code on GitHub, and I can't solve this. Is the problem with AudioDispatcher and AudioDispatcherFactory.fromPipe()?


private void playAndAnalyzeAudio(String filePath, Uri uri)
 {
 if (mediaPlayer != null)
 mediaPlayer.release();
 mediaPlayer = MediaPlayer.create(requireContext(), uri);

 new Thread(() -> {
 extractAudio(inputFilePath, outputFilePath);
 getActivity().runOnUiThread(() -> {
 mediaPlayer = MediaPlayer.create(requireContext(), uri);
 if (mediaPlayer != null) {
 mediaPlayer.start(); // Start music after analyze
 startSendingData(); // Start data sending
 }
 });
 }).start();
 }

 private void analyzeAudio(String filePath)
 {
 try {
 AudioDispatcher audioDispatcher = AudioDispatcherFactory.fromPipe(filePath, 44100, 1024, 0);
 MFCC mfcc = new MFCC(1024, 44100, 13, 50, 20, 10000);
 audioDispatcher.addAudioProcessor(mfcc);
 Log.d("AUDIO_ANALYSIS", "Начинаем анализ аудиофайла..." + audioDispatcher);
 audioDispatcher.addAudioProcessor(new AudioProcessor() {
 @Override
 public boolean process(AudioEvent audioEvent) {
 Log.d("AUDIO_ANALYSIS", "Обрабатываем аудио...");

 float[] amplitudes = audioEvent.getFloatBuffer();
 Log.d("AUDIO_ANALYSIS", "Размер буфера: " + amplitudes.length);

 float[] mfccs = mfcc.getMFCC();
 if (mfccs == null) {
 Log.e("AUDIO_ANALYSIS", "MFCC не сгенерировался!");
 return true;
 }

 float currentBass = mfccs[0] + mfccs[1];
 float totalEnergy = 0;
 for (float amp : amplitudes) {
 totalEnergy += Math.abs(amp);
 }

 Log.d("AUDIO_ANALYSIS", "Bass Energy: " + currentBass + ", Total Energy: " + totalEnergy);

 if (currentBass > BASS_THRESHOLD || totalEnergy > ENERGY_THRESHOLD) {
 changeColor();
 Log.d("SONG", "Color wac changed on a : " + currentColor);
 brightness = MAX_BRIGHTNESS;
 } else {
 brightness *= 0.9f;
 }

 return true;
 }

 @Override
 public void processingFinished() {
 getActivity().runOnUiThread(() -> Toast.makeText(requireContext(), "Analysis finished", Toast.LENGTH_SHORT).show());
 }
 });
 File file = new File(filePath);
 if (!file.exists() || file.length() == 0) {
 Log.e("AUDIO_ANALYSIS", "Error: file is empty! " + filePath);
 return;
 } else {
 Log.d("AUDIO_ANALYSIS", "File is, size: " + file.length() + " byte.");
 }
 Log.d("AUDIO_ANALYSIS", "Start of analyzing: " + filePath);
 File ffmpegFile = new File(getContext().getCacheDir(), "ffmpeg");
 if (!ffmpegFile.setExecutable(true)) {
 Log.e("AUDIO_ANALYSIS", "Could not make the bundled ffmpeg executable!");
 }
 else
 Log.d("AUDIO_ANALYSIS", "ffmpeg is executable.");

 new Thread(() -> {
 Log.d("AUDIO_ANALYSIS", "Start dispatcher...");
 audioDispatcher.run();
 Log.d("AUDIO_ANALYSIS", "Dispatcher end.");
 }).start();
 } catch (Exception e) {
 e.printStackTrace();
 Toast.makeText(requireContext(), "Error of analyzing", Toast.LENGTH_SHORT).show();
 }
 }
public void extractAudio(String inputFilePath, String outputFilePath) {
 File outputFile = new File(outputFilePath);
 if (outputFile.exists()) {
 outputFile.delete(); // Delete the existing file
 }
 // Build the command to extract the audio track
 String command = "-i " + inputFilePath + " -vn -acodec pcm_s16le -ar 44100 -ac 2 " + outputFilePath;

 // Use FFmpegKit to run the command
 FFmpegKit.executeAsync(command, session -> {
 if (session.getReturnCode().isSuccess()) {
 Log.d("AUDIO_EXTRACT", "Аудио извлечено успешно: " + outputFilePath);
 analyzeAudio(outputFilePath); // Продолжаем анализировать аудио
 } else {
 Log.e("AUDIO_EXTRACT", "Ошибка извлечения аудио: " + session.getFailStackTrace());
 }
 });
 }



Sorry about the number of lines; I tried to describe the problem in detail.
I tried changing AudioDispatcherFactory.fromPipe() to AudioDispatcherFactory.fromFile(), but that method is not available on Android, only in desktop Java; as far as I can tell from the error ("javax.sound..., unexpected error, method not available"), it depends on javax.sound.
I also tried changing the String command in extractAudio() and the arguments of the fromPipe() method, but that did not bring success.
I want my audio file to be analyzed correctly by the AudioDispatcher, and the analysis data to then be transferred to the Arduino. Right now the logs show "Color : null, value : 0.0".
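
One direction I'm now exploring is a sketch (not working code from my project): it bypasses fromPipe(), which seems to expect an ffmpeg binary on the PATH, and builds the dispatcher directly from the extracted WAV bytes with TarsosDSP's UniversalAudioInputStream. It assumes the WAV written by FFmpegKit is plain 16-bit signed little-endian PCM with the canonical 44-byte header, and that extraction uses -ac 1 (mono) instead of -ac 2:

import java.io.FileInputStream;
import java.io.IOException;
import be.tarsos.dsp.AudioDispatcher;
import be.tarsos.dsp.io.TarsosDSPAudioFormat;
import be.tarsos.dsp.io.UniversalAudioInputStream;

private AudioDispatcher dispatcherFromWav(String wavPath) throws IOException {
 FileInputStream in = new FileInputStream(wavPath);
 // Skip the canonical 44-byte WAV header so only raw PCM samples remain
 if (in.skip(44) != 44) {
 throw new IOException("Could not skip the WAV header");
 }
 // The format must match the ffmpeg extraction flags: -ar 44100, pcm_s16le, -ac 1
 TarsosDSPAudioFormat format = new TarsosDSPAudioFormat(
 44100, // sample rate
 16, // bits per sample
 1, // channels (mono)
 true, // signed
 false); // little-endian
 return new AudioDispatcher(new UniversalAudioInputStream(in, format), 1024, 0);
}

The MFCC and color processors could then be attached to this dispatcher exactly as in analyzeAudio() above.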


-
FFPLAY produces black video output [closed]
28 January 2020, by RooterTooter
I'm having an issue playing videos with ffplay on an embedded ARM device (i.MX6). The OS is based on Yocto "sumo" and uses the meta-freescale layers for i.MX6.
I have a number of test videos in different formats that I am sure are formatted correctly (they play fine on my laptop with ffplay). FFmpeg has all the necessary codecs and detects my streams; it plays audio without an issue, but the video is just black.
It's worth noting that I'm running X11 and have xterm running, and when ffplay tries to play, a black box pops up on the screen in the correct dimensions, as if it thinks it's decoding video, but it's always blank.
$ DISPLAY=:0 ffplay test.mp4
ffplay version 3.3.3 Copyright (c) 2003-2017 the FFmpeg developers
built with gcc 7.3.0 (GCC)
configuration: --disable-stripping --enable-pic --enable-shared --enable-pthreads --disable-libxcb --disable-libxcb-shm --disable-libxcb-xfixes --disable-libxcb-shape --enable-nonfree --cross-prefix=arm-poky-linux-gnueabi- --ld='arm-poky-linux-gnueabi-gcc -march=armv7-a -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 --sysroot=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot' --cc='arm-poky-linux-gnueabi-gcc -march=armv7-a -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 --sysroot=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot' --cxx='arm-poky-linux-gnueabi-g++ -march=armv7-a -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 --sysroot=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot' --arch=arm --target-os=linux --enable-cross-compile --extra-cflags=' -O2 -pipe -g -feliminate-unused-debug-types -fdebug-prefix-map=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0=/usr/src/debug/ffmpeg/3.3.3-r0 -fdebug-prefix-map=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot-native= -fdebug-prefix-map=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot= -march=armv7-a -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 --sysroot=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot' --extra-ldflags='-Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed' --sysroot=/home/builder/imx-yocto-bsp/machine/tmp/work/cortexa9hf-neon-poky-linux-gnueabi/ffmpeg/3.3.3-r0/recipe-sysroot --enable-hardcoded-tables --libdir=/usr/lib --shlibdir=/usr/lib --datadir=/usr/share/ffmpeg --disable-mipsdsp --disable-mipsdspr2 --cpu=cortex-a9 --pkg-config=pkg-config --enable-avcodec --enable-avdevice --enable-avfilter --enable-avformat --enable-avresample --enable-bzlib --enable-gpl --disable-libgsm --disable-indev=jack --disable-libvorbis --enable-lzma --disable-libmp3lame --enable-openssl --enable-postproc --disable-libschroedinger --enable-sdl2 --disable-libspeex --enable-swresample --enable-swscale --enable-libtheora --enable-vaapi --enable-vdpau --enable-libvpx --enable-libx264 --enable-outdev=xv
libavutil 55. 58.100 / 55. 58.100
libavcodec 57. 89.100 / 57. 89.100
libavformat 57. 71.100 / 57. 71.100
libavdevice 57. 6.100 / 57. 6.100
libavfilter 6. 82.100 / 6. 82.100
libavresample 3. 5. 0 / 3. 5. 0
libswscale 4. 6.100 / 4. 6.100
libswresample 2. 7.100 / 2. 7.100
libpostproc 54. 5.100 / 54. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.20.100
Duration: 00:00:30.88, start: 0.000000, bitrate: 143 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 320x180, 67 kb/s, 21.08 fps, 21.08 tbr, 16192 tbn, 42.17 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 66 kb/s (default)
Metadata:
handler_name : SoundHandler
I've tried h264 and mp2 video with the same results. Has anyone seen this before?
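
Two diagnostics that might narrow this down (a sketch; these are standard ffmpeg/ffplay flags, not output from my board):

# Exercise only the decode path; if this runs at full speed, h264 decoding is fine
# and the fault is in the display path (SDL2/xv on this build)
ffmpeg -i test.mp4 -f null -

# Force a pixel-format conversion before display, in case the output device rejects yuv420p
DISPLAY=:0 ffplay -vf format=rgb565 test.mp4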