
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (103)
-
Management of creation and editing rights for objects
8 February 2011, by
By default, many features are restricted to administrators, but the minimum status required to use them can be configured independently, notably: writing content on the site, editable via the form-template management; adding notes to articles; adding captions and annotations to images;
-
Uploading media and themes via FTP
31 May 2013, by
MediaSPIP also processes media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
From the start you will find the following directories in your FTP space: config/: the site's configuration directory; IMG/: media already processed and online on the site; local/: the website's cache directory; themes/: custom themes or stylesheets; tmp/: working directory (...) -
APPENDIX: Plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs several plugins in addition to those used by the channels in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (11973)
-
Node.js: How to convert complicated stdout text output from ffmpeg into a buffer
26 January 2023, by beardfriend

`��\b�b��� � !zGo 3f\x14<\x02���jǧ�3�R�\x01L��OjQd���r ;��\t�I\x10\x13y�pJr��e ?\x17\x12-�i����l��"����Т\x1C�\x002<Ҹ�L)E\t�(֠.̸�i��s\t�....`


The garbled text above is the result of running the command below:


ffmpeg -i freevideo.mp4 -r 24 -f image2pipe -


The command was executed via a Node.js child_process, and the output above is what appeared in the Unix CLI.


I want to convert this output into a Buffer in Node.js, and then upload the resulting image buffer to S3.


Questions:

- How can I do this? (A sketch of one approach is shown below.)
- What format is this complicated string in?
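
A minimal sketch of one way to do this, assuming ffmpeg is on the PATH and using only Node's built-in child_process module; the explicit -c:v mjpeg flag and the grabFramesAsBuffer name are my own additions, not part of the original command. Note that image2pipe writes every extracted frame back-to-back, so the resulting Buffer holds a sequence of concatenated JPEGs that you may need to split before uploading individual images to S3.

const { spawn } = require('child_process');

function grabFramesAsBuffer(input) {
  return new Promise((resolve, reject) => {
    // Spawn ffmpeg directly (no shell) and have it write the frames to stdout.
    const ffmpeg = spawn('ffmpeg', [
      '-i', input,
      '-r', '24',
      '-f', 'image2pipe',
      '-c:v', 'mjpeg', // assumption: force JPEG frames so the piped bytes are a known format
      '-',
    ]);

    const chunks = [];
    // Do not set an encoding on stdout: without one, 'data' events deliver raw
    // Buffer chunks instead of the mangled UTF-8 string shown in the question.
    ffmpeg.stdout.on('data', (chunk) => chunks.push(chunk));
    ffmpeg.stderr.on('data', () => {}); // ffmpeg logs progress to stderr; ignore or log it
    ffmpeg.on('error', reject);
    ffmpeg.on('close', (code) => {
      if (code !== 0) return reject(new Error('ffmpeg exited with code ' + code));
      resolve(Buffer.concat(chunks)); // single Buffer containing all piped frames
    });
  });
}

// Usage: the resulting Buffer can then be passed to s3.putObject as Body.
grabFramesAsBuffer('freevideo.mp4')
  .then((buffer) => console.log('got', buffer.length, 'bytes'))
  .catch(console.error);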






-
How do you use Node.js to stream an MP4 file with ffmpeg?
27 April 2023, by LaserJesus
I've been trying to solve this problem for several days now and would really appreciate any help on the subject.



I'm able to successfully stream an mp4 audio file stored on a Node.js server using fluent-ffmpeg by passing the location of the file as a string and transcoding it to mp3. If I create a file stream from the same file and pass that to fluent-ffmpeg instead, it works for an mp3 input file but not for an mp4 file. In the mp4 case no error is thrown and it claims the stream completed successfully, but nothing plays in the browser. I'm guessing this has to do with the metadata being stored at the end of an mp4 file, but I don't know how to code around this. This is the exact same file that works correctly when its location is passed to ffmpeg rather than the stream. When I try to pass a stream to the mp4 file on S3, again no error is thrown, but nothing streams to the browser. This isn't surprising, as ffmpeg won't work with the file locally as a stream, so expecting it to handle the stream from S3 is wishful thinking.



How can I stream the mp4 file from S3 without storing it locally as a file first? How do I get ffmpeg to do this without transcoding the file too? The following is the code I have at the moment, which isn't working. Note that it attempts to pass the S3 file as a stream to ffmpeg, and it's also transcoding it into an mp3, which I'd prefer not to do.



.get(function (req, res) {
  aws.s3(s3Bucket).getFile(s3Path, function (err, result) {
    if (err) {
      return next(err);
    }
    var proc = new ffmpeg(result)
      .withAudioCodec('libmp3lame')
      .format('mp3')
      .on('error', function (err, stdout, stderr) {
        console.log('an error happened: ' + err.message);
        console.log('ffmpeg stdout: ' + stdout);
        console.log('ffmpeg stderr: ' + stderr);
      })
      .on('end', function () {
        console.log('Processing finished!');
      })
      .on('progress', function (progress) {
        console.log('Processing: ' + progress.percent + '% done');
      })
      .pipe(res, { end: true });
  });
});




This is using the knox library when it calls aws.s3... I've also tried writing it using the standard AWS SDK for Node.js, as shown below, but I get the same outcome as above.



var AWS = require('aws-sdk');

var s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_KEY,
  region: process.env.AWS_REGION_ID
});
var fileStream = s3.getObject({
  Bucket: s3Bucket,
  Key: s3Key
}).createReadStream();
var proc = new ffmpeg(fileStream)
  .withAudioCodec('libmp3lame')
  .format('mp3')
  .on('error', function (err, stdout, stderr) {
    console.log('an error happened: ' + err.message);
    console.log('ffmpeg stdout: ' + stdout);
    console.log('ffmpeg stderr: ' + stderr);
  })
  .on('end', function () {
    console.log('Processing finished!');
  })
  .on('progress', function (progress) {
    console.log('Processing: ' + progress.percent + '% done');
  })
  .pipe(res, { end: true });




=====================================



Updated



I placed an mp3 file in the same s3 bucket and the code I have here worked and was able to stream the file through to the browser without storing a local copy. So the streaming issues I face have something to do with the mp4/aac container/encoder format.



I'm still interested in a way to bring the m4a file down from S3 to the Node.js server in its entirety, then pass it to ffmpeg for streaming without actually storing the file in the local file system.



=====================================



Updated Again



I've managed to get the server streaming the file, as mp4, straight to the browser. This half answers my original question. My only issue now is that I have to download the file to a local store first, before I can stream it. I'd still like to find a way to stream from s3 without needing the temporary file.



aws.s3(s3Bucket).getFile(s3Path, function (err, result) {
  result.pipe(fs.createWriteStream(file_location));
  result.on('end', function () {
    console.log('File Downloaded!');
    var proc = new ffmpeg(file_location)
      .outputOptions(['-movflags isml+frag_keyframe'])
      .toFormat('mp4')
      .withAudioCodec('copy')
      .seekInput(offset)
      .on('error', function (err, stdout, stderr) {
        console.log('an error happened: ' + err.message);
        console.log('ffmpeg stdout: ' + stdout);
        console.log('ffmpeg stderr: ' + stderr);
      })
      .on('end', function () {
        console.log('Processing finished!');
      })
      .on('progress', function (progress) {
        console.log('Processing: ' + progress.percent + '% done');
      })
      .pipe(res, { end: true });
  });
});




On the receiving side I just have the following JavaScript in an empty HTML page:



window.AudioContext = window.AudioContext || window.webkitAudioContext;
context = new AudioContext();

function process(Data) {
  source = context.createBufferSource(); // Create Sound Source
  context.decodeAudioData(Data, function (buffer) {
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(context.currentTime);
  });
}

function loadSound() {
  var request = new XMLHttpRequest();
  request.open("GET", "/stream/", true);
  request.responseType = "arraybuffer";

  request.onload = function () {
    var Data = request.response;
    process(Data);
  };

  request.send();
}

loadSound();




=====================================



The Answer



The code above under the heading 'Updated Again' will stream an mp4 file from S3, via a Node.js server, to a browser without using Flash. It does require that the file be stored temporarily on the Node.js server so that the metadata in the file can be moved from the end of the file to the front.

To stream without storing the temporary file, you need to actually modify the file on S3 first and make this metadata change. If you have changed the file in this way on S3, then you can modify the code under 'Updated Again' so that the result from S3 is piped straight into the ffmpeg constructor, rather than into a file stream on the Node.js server whose location is then handed to ffmpeg, as the code does now. You can change the final 'pipe' command to 'save(location)' to get a version of the mp4 file locally with the metadata moved to the front. You can then upload that new version of the file to S3 and try out the end-to-end streaming.

Personally, I'm now going to create a task that modifies the files in this way as they are uploaded to S3 in the first place. This allows me to record and stream in mp4 without transcoding or storing a temp file on the Node.js server.
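
As a minimal sketch of that preprocessing step, the following one-off remux moves the MP4 metadata (the moov atom) to the front of the file using fluent-ffmpeg, as already used elsewhere in this question; the file names are placeholders, and the -movflags faststart option is my reading of the "metadata moved to the front" step rather than something stated in the original post.

var ffmpeg = require('fluent-ffmpeg');

ffmpeg('downloaded-from-s3.mp4')
  .outputOptions(['-movflags faststart']) // relocate the moov atom to the start of the file
  .withAudioCodec('copy')                 // remux only, no transcoding
  .withVideoCodec('copy')
  .toFormat('mp4')
  .on('error', function (err) {
    console.log('an error happened: ' + err.message);
  })
  .on('end', function () {
    // Upload faststart-copy.mp4 back to S3; later requests can then pipe the
    // S3 read stream straight into the ffmpeg constructor (or to the browser).
    console.log('faststart copy written');
  })
  .save('faststart-copy.mp4');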


-
FFMPEG C++, best practice / pattern for encoding a live feed
21 March 2016, by MrSmith
I have working code that captures from /dev/videox and encodes an h264 file just fine; it goes like this:
1. grab a frame from the camera
2. possibly do an rgb-yuv tango
3. encode with the ffmpeg lib
4. repeat
Single thread.
Now here is my thinking: if the time taken by steps 2 and 3 adds up to more than 1/25th of a second, I will miss a frame from the camera.
Not unfeasible, right?
A spike in load on the target system and frame(s) are dropped.
So I am thinking I should thread this and put in a buffer between capture and encoding.
Now the question(s):
- Is my line of thinking correct?
- Is my solution a viable one?
- How much of a problem is it really? Would it fail anyway? Am I about to create a solution for a problem that doesn't exist?
Insights? :)
Thanks