
Other articles (4)
-
Authorizations overridden by plugins
27 April 2010, by MediaSPIP core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
Installation in farm mode
4 February 2011
Farm mode makes it possible to host several MediaSPIP-type sites while installing their functional core only once.
This is the method we use on this very platform.
Using farm mode requires some knowledge of SPIP's mechanics, unlike the standalone version, which does not really require any specific knowledge, since the usual SPIP private area is no longer used.
First of all, you must have installed the same files as the installation (...)
-
Using and configuring the script
19 January 2011
Information specific to the Debian distribution
If you use this distribution, you will have to enable the "debian-multimedia" repositories as explained here:
Since version 0.3.1 of the script, the repository can be enabled automatically after a prompt.
Retrieving the script
The installation script can be retrieved in two different ways.
Via svn, using this command to fetch the up-to-date source code:
svn co (...)
Sur d’autres sites (3683)
-
(FFmpeg) How to play live audio in the browser from received UDP packets using FFmpeg?
26 October 2022, by Yousef Alaqra

I have a .NET Core console application which acts as both:

- a UDP client, receiving audio packets;
- a UDP server, re-sending each received packet.

Here's a sample of the console app code:

static UdpClient udpListener = new UdpClient();
static IPEndPoint endPoint = new IPEndPoint(IPAddress.Parse("192.168.1.230"), 6980);
static IAudioSender audioSender = new UdpAudioSender(new IPEndPoint(IPAddress.Parse("192.168.1.230"), 65535));

static void Main(string[] args)
{
    // Allow the address to be rebound, then bind and start the receive loop.
    udpListener.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    udpListener.Client.Bind(endPoint);

    try
    {
        udpListener.BeginReceive(new AsyncCallback(recv), null);
    }
    catch (Exception)
    {
        throw; // rethrow without resetting the stack trace
    }

    Console.WriteLine("Press enter to dispose the running service");
    Console.ReadLine();
}

// Receive callback: hand the packet off, then immediately queue the next receive.
private static void recv(IAsyncResult res)
{
    byte[] received = udpListener.EndReceive(res, ref endPoint);
    OnAudioCaptured(received);
    udpListener.BeginReceive(new AsyncCallback(recv), null);
}

On the other side, I have a Node.js API application which is supposed to execute an FFmpeg command as a child process and do the following (see the sketch after the question below):

- receive the audio packets as input from the console app's UDP server;
- convert the received bytes into WebM;
- pipe the result out into the response.

Finally, on the client side, I should have an audio element whose source value is http://localhost:3000.

For now, I can only execute this FFmpeg command:

ffmpeg -f s16le -ar 48000 -ac 2 -i 'udp://192.168.1.230:65535' output.wav




which does the following:

- receives UDP packets as input;
- converts the received bytes into the output.wav audio file.

How would I execute a child process in the Node.js server which receives the UDP packets and pipes the result out into the response as WebM?
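
A minimal sketch of one way this could look, assuming ffmpeg is on the PATH and that the stream is the same raw 16-bit 48 kHz stereo PCM as in the command above. The bare http server, flag choices, and Opus codec are assumptions, not a verified solution:

// Hypothetical sketch: an HTTP endpoint that spawns ffmpeg reading PCM from UDP
// and streams WebM back to the browser. Address, port, and sample format come
// from the question; everything else is one possible approach.
const http = require("http");
const { spawn } = require("child_process");

http.createServer((req, res) => {
  const ffmpeg = spawn("ffmpeg", [
    "-f", "s16le",            // raw PCM input, as in the question's command
    "-ar", "48000",
    "-ac", "2",
    "-i", "udp://192.168.1.230:65535",
    "-c:a", "libopus",        // WebM-compatible audio codec
    "-f", "webm",             // container the browser can play
    "pipe:1",                 // write the muxed stream to stdout
  ]);

  res.writeHead(200, { "Content-Type": "audio/webm" });
  ffmpeg.stdout.pipe(res);    // stream WebM straight into the HTTP response

  req.on("close", () => ffmpeg.kill("SIGKILL")); // stop ffmpeg when the client leaves
}).listen(3000);

The client side would then just point an audio element at it, e.g. <audio src="http://localhost:3000" controls></audio>.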


-
ffmpeg time-lapse from raw .NEF photos
14 June 2020, by Emil Terman

I have about 3000 .NEF photos on the server, each weighing about 80MB.
I need to make a time-lapse out of them. Since it's just for a demo, it's fine if I reduce the quality, so compressing is fine.



I've tried different things, but I couldn't really find a way to make a time-lapse out of .NEF files.
I did, however, find a way to make a time-lapse out of .jpg files:



ffmpeg -r 24 -f concat -i ordered_list_of_photos.txt -s hd1080 -vcodec libx264 out.mp4




This works perfectly, but it only works with .jpgs it seems.



-f concat -i ordered_list_of_photos.txt
means: read the files from ordered_list_of_photos.txt. That file contains lines like:


file 'jpgs/file1.jpg'
file 'jpgs/file2.jpg'
...




Do you have any suggestions on how to do this? I'm pretty sure it has something to do with the rawvideo demuxer, but I can't figure it out on my own.



Converting the .nef files to .jpg seems like an option, but I can't get ufraw-batch to work, as it throws segmentation faults after the first conversion. I also don't have desktop access to the computer; I'm using ssh to do all of this (so GUI apps won't work, I think).

-
How to Use FFmpeg to Fetch Audio From the Local Network and Decode it to PCM?
26 May 2020, by Yousef Alaqra

Currently, I have a Node.js server which is connected to a specific IP address on the local network (the source of the audio) in order to receive the audio using the VBAN protocol. VBAN basically uses UDP to send audio over the local network.



Node.js implementation:



http.listen(3000, () => {
  console.log("Server running on port 3000");
});

let PORT = 6980;
let HOST = "192.168.1.244";

io.on("connection", (socket) => {
  console.log("a user connected");
  socket.on("disconnect", () => {
    console.log("user disconnected");
  });
});

io.on("connection", () => {
  let dgram = require("dgram");
  let server = dgram.createSocket("udp4");

  server.on("listening", () => {
    let address = server.address();
    console.log("server host", address.address);
    console.log("server port", address.port);
  });

  server.on("message", function (message, remote) {
    // Parse the VBAN packet, then broadcast the processed audio to clients.
    let audioData = vban.ProcessPacket(message);
    io.emit("audio", audioData); // console.log(`Received packet: ${remote.address}:${remote.port}`)
  });

  server.bind({
    address: "192.168.1.230",
    port: PORT,
    exclusive: false,
  });
});




Once the server receives a packet from the local network, it processes it, then emits the processed data to the client using socket.io.



Here's an example of the processed audio data that's being emitted from the socket and received in the Angular app:



audio {
  format: {
    channels: 2,
    sampleRate: 44100,
    interleaved: true,
    float: false,
    signed: true,
    bitDepth: 16,
    byteOrder: 'LE'
  },
  sampleRate: 44100,
  buffer: <Buffer 2e 00 ce ff 3d bd 44 b6 48 c3 32 d3 31 d4 30 dd 38 34 e5 1d c6 25 ... 974 more bytes>,
  channels: 2,
}



On the client side (Angular), after receiving a packet using the socket.io client, an AudioContext is used to decode the audio and play it:



playAudio(audioData) {
  let audioCtx = new AudioContext();
  let count = 0;
  let offset = 0;
  let msInput = 1000;
  let msToBuffer = Math.max(50, msInput);
  let bufferX = 0;
  let audioBuffer;
  let prevFormat = {};
  let source;

  // All of this state is local, so it is re-created on every call.
  if (!audioBuffer || Object.keys(audioData.format).some((key) => prevFormat[key] !== audioData.format[key])) {
    prevFormat = audioData.format;
    bufferX = Math.ceil(((msToBuffer / 1000) * audioData.sampleRate) / audioData.samples);
    if (bufferX < 3) {
      bufferX = 3;
    }
    audioBuffer = audioCtx.createBuffer(audioData.channels, audioData.samples * bufferX, audioData.sampleRate);
    if (source) {
      source.disconnect();
    }
    source = audioCtx.createBufferSource();
    console.log("source", source);
    source.connect(audioCtx.destination);
    source.loop = true;
    source.buffer = audioBuffer; // note: the received PCM is never copied into audioBuffer
    source.start();
  }
}




Regardless of the fact that audio isn't playing on the client side and that something is wrong, this isn't the correct implementation.
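
For reference, here is a hypothetical helper (the function name is made up) showing the step the snippet above never performs: copying the received 16-bit interleaved PCM into the AudioBuffer's float channel data. It assumes socket.io delivers the payload as an ArrayBuffer in the browser:

// Hypothetical helper: de-interleave 16-bit little-endian PCM into an
// AudioBuffer created with the matching channel count and sample rate.
function fillAudioBuffer(audioBuffer, audioData) {
  const pcm = new Int16Array(audioData.buffer); // assumes an ArrayBuffer payload
  const frames = Math.min(audioBuffer.length, pcm.length / audioData.channels);
  for (let ch = 0; ch < audioData.channels; ch++) {
    const channel = audioBuffer.getChannelData(ch);
    for (let i = 0; i < frames; i++) {
      // Scale signed 16-bit samples to the [-1, 1] range Web Audio expects.
      channel[i] = pcm[i * audioData.channels + ch] / 32768;
    }
  }
}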



Brad mentioned in the comments below that I can implement this correctly and with less complexity using an FFmpeg child process. And I'm very interested to know how to fetch the audio locally using FFmpeg.
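
A rough sketch of what that child process might look like, reusing the io object and the format values from the code above. The flags assume plain s16le PCM arrives on the port; real VBAN packets carry a 28-byte header that would have to be stripped first, so this is an illustration, not a confirmed solution:

const { spawn } = require("child_process");

// Hypothetical sketch: let ffmpeg listen on the UDP port and decode the stream
// to raw 16-bit PCM on stdout.
const ffmpeg = spawn("ffmpeg", [
  "-f", "s16le",                     // input: raw PCM (assumption, see above)
  "-ar", "44100",                    // sample rate from the emitted format object
  "-ac", "2",                        // channel count from the emitted format object
  "-i", "udp://192.168.1.230:6980",  // address/port used by the question's server
  "-f", "s16le",                     // output: raw PCM on stdout
  "pipe:1",
]);

ffmpeg.stdout.on("data", (chunk) => {
  // Decoded PCM bytes; forward them over socket.io as before.
  io.emit("audio", chunk);
});

ffmpeg.stderr.on("data", (d) => console.error(d.toString()));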