
Media (91)

Other articles (14)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Customizable form

    21 June 2013

    This page presents the fields available in the form for publishing a media item and indicates the additional fields that can be added. Media creation form
    For a media-type document, the default fields are: Text; Enable/disable the forum (the comment prompt can be disabled for each article); License; Add/remove authors; Tags.
    This form can be modified under:
    Administration > Configuration des masques de formulaire. (...)

On other sites (2026)

  • How do I broadcast live audio in Node.js?

    20 June 2020, by Yousef Alaqra

    I'm trying to stream live audio to a wide range of clients in a web browser.

    



    My current solution:

    



    .NET Core 3.1 console application

    



      

    1. receive the audio data over UDP,
    2. trim the first 28 bytes of each received packet,
    3. send the processed packet over UDP.

    Node.js

    



      

    1. execute FFmpeg as a child process to receive audio data packets over UDP from the console app and encode the incoming audio to WAV format,
    2. pipe the output of the child process into a GET HTTP endpoint response.



    Browser

    



      

    1. an HTML audio element whose source is the Node.js GET endpoint.



    Problem:

    



    The solution gives a good result, but only for one device (one-to-one), which is not what I want to achieve.

    



    I've tried many approaches to make it work for a wide range of devices, such as using worker threads and forking a child process, but none of them changed the result.

    



    I believe I have to make some changes to the Node.js implementation, so I'll share it here, hoping to get a clue to solve the problem.

    



    var express = require("express");
var app = express();
var children = require("child_process");

var port = 5001;
var host = "192.168.1.230";

app.listen(port, host, () => {
  console.log("Server running at http://" + host + ":" + port + "/");
});

app.get('/stream', (req, res) => {
  // Read raw PCM from the UDP socket and mux it as WAV onto stdout.
  const ffmpegCommand = "ffmpeg";
  var ffmpegOptions =
    "-f s16le -ar 48000 -ac 2 -i udp://192.168.1.230:65535 -f wav -";

  // A new ffmpeg child is spawned for every incoming request.
  var ffm = children.spawn(ffmpegCommand, ffmpegOptions.split(" "));

  res.writeHead(200, { "Content-Type": "audio/wav; codecs=PCM" });
  ffm.stdout.pipe(res);
});


    



    If anyone is interested in seeing the full implementation, please let me know.

    


  • How to broadcast live audio in node js (1 to many)

    19 June 2020, by Yousef Alaqra

    I'm trying to stream live audio to a wide range of clients in a web browser. (...)

  • (FFmpeg) How to play live audio in the browser from received UDP packets using FFmpeg?

    26 October 2022, by Yousef Alaqra

    I have a .NET Core console application which acts as a UDP server and a UDP client:

    



      

    • a UDP client, receiving each audio packet;
    • a UDP server, sending on each received packet.

    Here's sample code from the console app:

    



    static UdpClient udpListener = new UdpClient();
    static IPEndPoint endPoint = new IPEndPoint(IPAddress.Parse("192.168.1.230"), 6980);
    static IAudioSender audioSender = new UdpAudioSender(new IPEndPoint(IPAddress.Parse("192.168.1.230"), 65535));

    static void Main(string[] args)
    {
        udpListener.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
        udpListener.Client.Bind(endPoint);

        try
        {
            udpListener.BeginReceive(new AsyncCallback(recv), null);
        }
        catch (Exception)
        {
            throw; // rethrow without resetting the stack trace
        }

        Console.WriteLine("Press enter to dispose the running service");
        Console.ReadLine();
    }

    private static void recv(IAsyncResult res)
    {
        // Hand the payload off, then re-arm the asynchronous receive loop.
        byte[] received = udpListener.EndReceive(res, ref endPoint);
        OnAudioCaptured(received);
        udpListener.BeginReceive(new AsyncCallback(recv), null);
    }


    



    On the other side, I have a Node.js API application which is supposed to execute an FFmpeg command as a child process and do the following:

    



      

    • receive the audio packets as input from the console app's UDP server;
    • convert the received bytes into WebM;
    • pipe the result out into the response.



    Finally, on the client side, I should have an audio element whose source value equals http://localhost:3000.

    



    For now, I can only execute this FFmpeg command:

    



    ffmpeg -f s16le -ar 48000 -ac 2 -i 'udp://192.168.1.230:65535' output.wav


    



    which does the following:

    



      

    • receives UDP packets as input;
    • converts the received bytes into the output.wav audio file.



    How would I execute a child process in the Node.js server which receives the UDP packets and pipes the result out into the response as WebM?