
Advanced search
Media (91)
-
DJ Z-trip - Victory Lap : The Obama Mix Pt. 2
15 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Matmos - Action at a Distance
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
DJ Dolores - Oslodum 2004 (includes (cc) sample of “Oslodum” by Gilberto Gil)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Danger Mouse & Jemini - What U Sittin’ On ? (starring Cee Lo and Tha Alkaholiks)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Cornelius - Wataridori 2
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Rapture - Sister Saviour (Blackstrobe Remix)
15 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (85)
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search-engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
MediaSPIP Player: potential problems
22 February 2011
The player does not work on Internet Explorer
On Internet Explorer (at least versions 7 and 8), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
If the configuration of that Apache module contains a line resembling the following, try removing it or commenting it out to see whether the player then works correctly: /** * GeSHi (C) 2004 - 2007 Nigel McNie, (...) -
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information: the browser you are using, including the exact version; as precise a description of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
If you think you have fixed the bug, open a ticket and attach a corrective patch to it.
You may also (...)
On other sites (6565)
-
Live audio using ffmpeg, javascript and nodejs
8 November 2017, by klaus
I am new to this. Please don’t hang me for the poor grammar. I am trying to create a proof-of-concept application which I will later extend. It does the following: an HTML page asks for permission to use the microphone; we capture the microphone input and send it via WebSocket to a Node.js app.
JS (Client):
var bufferSize = 4096;
var context = new AudioContext(); // assumed; the original snippet does not show where `context` is created
var socket = new WebSocket(URL);
var myPCMProcessingNode = context.createScriptProcessor(bufferSize, 1, 1);
myPCMProcessingNode.onaudioprocess = function(e) {
    var input = e.inputBuffer.getChannelData(0);
    socket.send(convertFloat32ToInt16(input));
}
function convertFloat32ToInt16(buffer) {
    var l = buffer.length;
    var buf = new Int16Array(l);
    while (l--) {
        buf[l] = Math.min(1, buffer[l]) * 0x7FFF;
    }
    return buf.buffer;
}
navigator.mediaDevices.getUserMedia({audio: true, video: false})
    .then(function(stream) {
        var microphone = context.createMediaStreamSource(stream);
        microphone.connect(myPCMProcessingNode);
        myPCMProcessingNode.connect(context.destination);
    })
    .catch(function(e) {});

On the server we take each incoming buffer, run it through ffmpeg, and send whatever comes out of stdout to another device using a Node.js 'http' POST request. The device has a speaker. We are basically trying to create a one-way audio link from the browser to the device.
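One aside on the conversion helper in the client snippet: `Math.min(1, x)` only clamps the positive side, so a sample below -1.0 can overflow after the `* 0x7FFF` multiply. A minimal standalone sketch of a both-sided clamp (same function name as the snippet above, offered as a suggestion rather than as the cause of the white noise):

```javascript
// Sketch: convert Float32 PCM samples in [-1, 1] to 16-bit signed PCM,
// clamping BOTH ends of the range so out-of-range samples cannot overflow.
function convertFloat32ToInt16(buffer) {
    var out = new Int16Array(buffer.length);
    for (var i = 0; i < buffer.length; i++) {
        var s = Math.max(-1, Math.min(1, buffer[i])); // clamp to [-1, 1]
        // Scale negatives by 0x8000 and positives by 0x7FFF so both
        // extremes map onto the full signed 16-bit range.
        out[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
    }
    return out.buffer; // ArrayBuffer, ready for socket.send()
}
```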
Node.js (Server):
var WebSocketServer = require('websocket').server;
var http = require('http');
var children = require('child_process');

// (The HTTP server and wsServer construction are omitted in the original question.)
wsServer.on('request', function(request) {
    var connection = request.accept(null, request.origin);
    connection.on('message', function(message) {
        if (message.type === 'utf8') { /* NOP */ }
        else if (message.type === 'binary') {
            ffm.stdin.write(message.binaryData);
        }
    });
    connection.on('close', function(reasonCode, description) {});
    connection.on('error', function(error) {});
});

var ffm = children.spawn(
    './ffmpeg.exe',
    '-stdin -f s16le -ar 48k -ac 2 -i pipe:0 -acodec pcm_u8 -ar 48000 -f aiff pipe:1'.split(' ')
);
ffm.on('exit', function(code, signal) {});
ffm.stdout.on('data', (data) => {
    req.write(data);
});

var options = {
    host: 'xxx.xxx.xxx.xxx',
    port: xxxx,
    path: '/path/to/service/on/device',
    method: 'POST',
    headers: {
        'Content-Type': 'application/octet-stream',
        'Content-Length': 0,
        'Authorization': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        'Transfer-Encoding': 'chunked',
        'Connection': 'keep-alive'
    }
};
var req = http.request(options, function(res) {});

The device supports only continuous POST, and only a couple of formats (ulaw, aiff, wav).
This solution doesn’t seem to work: from the device’s speaker we only hear something like white noise.
I also think there may be a problem with the buffer I am sending to ffmpeg’s stdin. I tried dumping whatever comes out of the WebSocket to a .wav file and playing it with VLC: it plays everything in the recording very fast, with 10 seconds of recording played in about 1 second.
I am new to audio processing and have searched for about 3 days now for ways to improve this, and found nothing.
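One plausible cause of the sped-up playback, offered only as a guess: ffmpeg is told `-f s16le -ar 48k -ac 2`, but the client snippet sends 16-bit mono at the AudioContext's own sample rate (often 44100 Hz), so the byte stream is reinterpreted with the wrong channel count and rate. A sketch of building input arguments that match what the client actually sends; `buildFfmpegArgs` is a made-up helper, not part of the original code, and the capture rate would have to be reported by the browser (e.g. in a first text message over the WebSocket):

```javascript
// Hypothetical helper: describe ffmpeg's raw input exactly as the browser
// produces it: signed 16-bit little-endian PCM, one channel, at the real
// AudioContext capture rate. Output options are kept as in the question.
function buildFfmpegArgs(captureRate) {
    return [
        '-f', 's16le',              // raw signed 16-bit little-endian PCM in
        '-ar', String(captureRate), // the AudioContext's actual sample rate
        '-ac', '1',                 // the ScriptProcessor captures one channel
        '-i', 'pipe:0',
        '-acodec', 'pcm_u8',
        '-ar', '48000',
        '-f', 'aiff',
        'pipe:1'
    ];
}
```

On the server this could replace the hard-coded `'-stdin -f s16le -ar 48k -ac 2 …'.split(' ')` string, e.g. `children.spawn('./ffmpeg.exe', buildFfmpegArgs(reportedRate))`.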
I would ask the community for 2 things:
-
Is something wrong with my approach? What more can I do to make this work? I will post more details if required.
-
If what I am doing is reinventing the wheel, I would like to know what other software or third-party service (Amazon or similar) can accomplish the same thing.
Thank you.
-
-
How to feed frames one by one and grab decoded frames using LibAV ?
22 September 2024, by Alvan Rahimli
I am integrating a third-party system where I send a TCP packet to request, e.g., 5 frames from a live CCTV camera stream, and it sends those 5 frames one by one. Each packet is an H.264-encoded frame wrapped with some related data.


I need to write code using libav where I can:

- Feed the frames one by one using an AVIOContext (or something similar).
- Grab the decoded frames.
- Draw them to a window (not relevant to the question; included for context).

I've been doing the same thing with GStreamer by creating a pipeline like this:

AppSrc -> H264Parser -> H264Decoder -> FrameGrabber

The code below is what I have been able to write so far:

using System.Runtime.InteropServices;
using FFmpeg.AutoGen;

namespace AvIoCtxExperiment;

public static unsafe class Program
{
    public static void Main()
    {
        ffmpeg.avdevice_register_all();
        Console.WriteLine($"FFmpeg v: {ffmpeg.av_version_info()}");

        // Generous buffer size for I-frames (~43 KB)
        const int bufferSize = 50 * 1024;

        var buff = (byte*)ffmpeg.av_malloc(bufferSize);
        if (buff == null)
            throw new Exception("Buffer is null");

        // A mock frame provider. Frames are stored in separate files and this reads & returns them one by one.
        var frameProvider = new FrameProvider(@"D:\Frames-1", 700);
        var gch = GCHandle.Alloc(frameProvider);

        var avioCtx = ffmpeg.avio_alloc_context(
            buffer: buff,
            buffer_size: bufferSize,
            write_flag: 0,
            opaque: (void*)GCHandle.ToIntPtr(gch),
            read_packet: new avio_alloc_context_read_packet(ReadFunc),
            write_packet: null,
            seek: null);

        var formatContext = ffmpeg.avformat_alloc_context();
        formatContext->pb = avioCtx;
        formatContext->flags |= ffmpeg.AVFMT_FLAG_CUSTOM_IO;

        var openResult = ffmpeg.avformat_open_input(&formatContext, null, null, null);
        if (openResult < 0)
            throw new Exception("Open Input Failed");

        if (ffmpeg.avformat_find_stream_info(formatContext, null) < 0)
            throw new Exception("Find StreamInfo Failed");

        AVPacket packet;
        while (ffmpeg.av_read_frame(formatContext, &packet) >= 0)
        {
            Console.WriteLine($"GRAB: {packet.buf->size}");
            ffmpeg.av_packet_unref(&packet);
        }
    }

    private static int ReadFunc(void* opaque, byte* buf, int bufSize)
    {
        var frameProvider = (FrameProvider?)GCHandle.FromIntPtr((IntPtr)opaque).Target;
        if (frameProvider == null)
        {
            return 0;
        }

        var fBuff = frameProvider.NextFrame();
        if (fBuff == null)
        {
            return ffmpeg.AVERROR_EOF;
        }

        // A frame larger than bufSize must not overflow `buf`, so clamp the copy length.
        int bytesRead = Math.Min(fBuff.Length, bufSize);

        if (bytesRead > 0)
        {
            Marshal.Copy(fBuff, 0, (IntPtr)buf, bytesRead);
            Console.WriteLine($"READ size: {fBuff.Length}");
            return bytesRead;
        }

        return ffmpeg.AVERROR_EOF;
    }
}



Another thing that confuses me: with AppSrc I can feed a frame of any size, but libav asks for a buffer size. With AppSrc I am responsible for feeding frames into the pipeline, and GStreamer notifies me with signals when it has enough; libav instead calls the read_packet delegate itself whenever it wants more data.


Any help is much appreciated. Thanks.


P.S. I'm writing a C# app, but sample code in C is fine; I can adapt the code myself.

