
Media (91)
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
-
USGS Real-time Earthquakes
8 September 2011
Updated: September 2011
Language: French
Type: Text
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
-
Podcasting Legal Guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creative Commons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (14)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
Customizable form
21 June 2013
This page presents the fields available in the media publication form and lists the additional fields that can be added.
Media creation form
For a media-type document, the default fields are: Text, Enable/disable the forum (the comment prompt can be disabled for each article), License, Add/remove authors, Tags.
This form can be modified under:
Administration > Configuration des masques de formulaire. (...)
On other sites (2026)
-
How do I broadcast live audio in Node.js?
20 June 2020, by Yousef Alaqra
I'm trying to stream live audio to a wide range of clients in a web browser.



My current solution:

.NET Core 3.1 console application

- Receives the audio data over UDP.
- Trims the first 28 bytes of each received packet.
- Sends the processed packet onward over UDP.

Node.js

- Executes FFmpeg as a child process, which receives the audio data packets over UDP from the console app and encodes each packet to WAV.
- Pipes the child process's output into a GET HTTP endpoint response.

Browser

- An HTML audio element whose source is set to the Node.js GET endpoint.
Problem:

The solution gives a good result, but only for one device (one to one), which is not what I want to achieve.

I've tried many approaches to make it work for a wide range of devices, such as using worker threads and forking a child process, but none of them changed the result.

I believe I have to make some changes to the Node.js implementation, so I'm sharing it here, hoping to get a clue to solving the problem.



// Express server that exposes the live audio as an HTTP stream.
var express = require("express");
var app = express();
var children = require("child_process");

var port = 5001;
var host = "192.168.1.230";

app.listen(port, host, () => {
  console.log("Server running at http://" + host + ":" + port + "/");
});

app.get('/stream', (req, res) => {
  // Each request spawns its own FFmpeg process, which reads the raw PCM
  // stream from the UDP port and remuxes it to WAV on stdout.
  const ffmpegCommand = "ffmpeg";
  var ffmpegOptions =
    "-f s16le -ar 48000 -ac 2 -i udp://192.168.1.230:65535 -f wav -";

  var ffm = children.spawn(ffmpegCommand, ffmpegOptions.split(" "));

  // Stream FFmpeg's stdout straight into the HTTP response.
  res.writeHead(200, { "Content-Type": "audio/wav; codecs=PCM" });
  ffm.stdout.pipe(res);
});




If anyone is interested in seeing the full implementation, please let me know.
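
One direction worth sketching for the one-to-many case is to spawn FFmpeg once and fan its output out to every connected response, instead of spawning a process per request. The snippet below is a minimal sketch under that assumption: the host, port and FFmpeg options are carried over from the code above, while the shared process and the clients set are not part of the original implementation.

// Sketch: one FFmpeg process for the whole server, its stdout fanned out
// to every connected HTTP client (assumed fix, not the original code).
var express = require("express");
var { spawn } = require("child_process");

var app = express();
var clients = new Set();

// Start FFmpeg once: read raw PCM from UDP, remux to WAV on stdout.
var ffm = spawn("ffmpeg", [
  "-f", "s16le", "-ar", "48000", "-ac", "2",
  "-i", "udp://192.168.1.230:65535",
  "-f", "wav", "-",
]);

// Broadcast every chunk FFmpeg produces to all connected clients.
ffm.stdout.on("data", (chunk) => {
  for (var client of clients) client.write(chunk);
});

app.get("/stream", (req, res) => {
  res.writeHead(200, { "Content-Type": "audio/wav" });
  clients.add(res);
  req.on("close", () => clients.delete(res));
});

app.listen(5001, "192.168.1.230", () => {
  console.log("Server running at http://192.168.1.230:5001/");
});

One caveat with this kind of fan-out: the WAV header is written only once at the start of FFmpeg's output, so clients that connect later receive headerless PCM, which browsers may refuse to play; a per-client header or a container designed for live streaming may be needed in practice.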


-
How to broadcast live audio in node js (1 to many)
19 June 2020, by Yousef Alaqra
I'm trying to stream live audio to a wide range of clients in a web browser.





-
(FFmpeg) How to play live audio in the browser from received UDP packets using FFmpeg?
26 October 2022, by Yousef Alaqra
I have a .NET Core console application which acts as both a UDP server and a UDP client:



- A UDP client, receiving audio packets.
- A UDP server, sending on each received packet.

Here's a sample of the console app code:



// Listens for audio packets on one endpoint and relays them to another.
static UdpClient udpListener = new UdpClient();
static IPEndPoint endPoint = new IPEndPoint(IPAddress.Parse("192.168.1.230"), 6980);
static IAudioSender audioSender = new UdpAudioSender(new IPEndPoint(IPAddress.Parse("192.168.1.230"), 65535));

static void Main(string[] args)
{
    udpListener.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    udpListener.Client.Bind(endPoint);

    try
    {
        udpListener.BeginReceive(new AsyncCallback(recv), null);
    }
    catch (Exception e)
    {
        throw e;
    }

    Console.WriteLine("Press enter to dispose the running service");
    Console.ReadLine();
}

// Receive callback: forward the packet and immediately re-arm the receive.
private async static void recv(IAsyncResult res)
{
    byte[] received = udpListener.EndReceive(res, ref endPoint);
    OnAudioCaptured(received);
    udpListener.BeginReceive(new AsyncCallback(recv), null);
}




On the other side, I have a Node.js API application, which is supposed to execute an FFmpeg command as a child process and do the following:



- Receive the audio packets as input from the console app's UDP server.
- Convert the received bytes into WebM.
- Pipe the result out into the response.

Finally, on the client side, I should have an audio element whose source is set to http://localhost:3000.



For now, I can only execute this FFmpeg command:



ffmpeg -f s16le -ar 48000 -ac 2 -i 'udp://192.168.1.230:65535' output.wav




This command does the following:



- Receives UDP packets as input.
- Converts the received bytes into the output.wav audio file.







How would I execute a child process in the Node.js server that receives the UDP packets and pipes the result out into the response as WebM?
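
A minimal sketch of what such an endpoint could look like, assuming the same UDP source as above and an Opus encode into WebM (the codec flags and the Express setup are illustrative assumptions, not the asker's final code):

// Sketch: spawn FFmpeg per request, read raw PCM from UDP, encode to
// WebM/Opus, and pipe the muxed output straight into the HTTP response.
const express = require("express");
const { spawn } = require("child_process");

const app = express();

app.get("/", (req, res) => {
  const ffm = spawn("ffmpeg", [
    "-f", "s16le", "-ar", "48000", "-ac", "2", // raw PCM input, as in the question
    "-i", "udp://192.168.1.230:65535",         // UDP source fed by the console app
    "-c:a", "libopus",                         // assumed codec choice for WebM
    "-f", "webm",                              // mux to WebM
    "pipe:1",                                  // write to stdout instead of a file
  ]);

  res.writeHead(200, { "Content-Type": "audio/webm" });
  ffm.stdout.pipe(res);

  // Stop FFmpeg when the client disconnects.
  req.on("close", () => ffm.kill("SIGKILL"));
});

app.listen(3000, () => console.log("Listening on http://localhost:3000"));

The client-side audio element would then point at this endpoint, for example <audio controls src="http://localhost:3000"></audio>. Note that, as in the WAV variant above, this spawns one FFmpeg process per request; to serve many clients from the single UDP source, the one-process fan-out sketched earlier would still apply.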