
Other articles (8)
-
Keeping control of your media in your hands
13 April 2011 — The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Enhancing its visual appearance
10 April 2011 — MediaSPIP is based on a system of themes and templates ("squelettes"). Templates define where information is placed on the page, defining a specific use of the platform, while themes provide the overall graphic design.
Anyone can propose a new graphic theme or template and make it available to the community. -
Configurable image and logo sizes
9 February 2011 — In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can change from one theme to another, they can be defined directly in the theme, which saves the user from having to configure them manually after changing the appearance of the site.
These image sizes are also available in the specific configuration of MediaSPIP Core. The maximum size of the site logo in pixels (...)
On other sites (2747)
-
How to extract orientation information from videos?
23 December 2016, by Sid — After surfing through tons of documentation on the web, it seems that the iPhone always shoots video at a fixed resolution such as 480x360 and applies a transformation matrix to the video track. (The 480x360 figure may change, but it is always the same for a given device.)
Here is a way of modifying the ffmpeg source within an iOS project and accessing the matrix: http://www.seqoy.com/correct-orientation-for-iphone-recorded-movies-with-ffmpeg/
Here is a cleaner way of finding the transformation matrix in iOS 4: how to detect (iphone sdk) if a video file was recorded in portrait orientation, or landscape
How can the orientation of the video be extracted using any of the options below?
iOS 3.2
ffmpeg (through the command line, server-side)
Ruby
Any help will be appreciated.
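For the ffmpeg option, one possible route (a sketch, not a definitive answer) is to query the metadata with ffprobe, which ships with ffmpeg: iOS stores the orientation as a rotate tag on the video track, which ffprobe can print directly (newer ffmpeg versions expose the same information as display-matrix side data instead). The file name below is a placeholder, and the snippet assumes ffprobe is on the PATH; Ruby or a server-side script could shell out to the same command.

  const { execFile } = require('child_process');

  // Ask ffprobe for the video stream's "rotate" metadata tag.
  // "input.mov" is a placeholder; ffprobe must be on the PATH.
  execFile('ffprobe', [
    '-v', 'error',
    '-select_streams', 'v:0',                    // first video stream only
    '-show_entries', 'stream_tags=rotate',       // just the rotate tag
    '-of', 'default=noprint_wrappers=1:nokey=1', // print the bare value
    'input.mov',
  ], (err, stdout) => {
    if (err) throw err;
    // Prints e.g. "90" for portrait footage; empty if no rotate tag exists.
    console.log('rotation:', stdout.trim() || 'none');
  });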
-
Cannot convert video file to audio file inside AWS Lambda function using Node.js
21 February 2019, by Arun — I cannot convert a video file into an audio file inside an AWS Lambda function using Node.js. When I run my Lambda function it executes without throwing any error, but the audio file size is still 0 MB. I am not able to find any bugs or issues in my code.
Here is my code:
const fs = require('fs');
const childProcess = require('child_process');
const AWS = require('aws-sdk');
const path = require('path');

AWS.config.update({
  region: 'us-east-2'
});

const s3 = new AWS.S3({apiVersion: '2006-03-01'});

exports.handler = (event, context, callback) => {
  process.env.PATH = process.env.PATH + ':/tmp/';
  process.env['FFMPEG_PATH'] = '/tmp/ffmpeg';
  const BIN_PATH = process.env['LAMBDA_TASK_ROOT'];
  process.env['PATH'] = process.env['PATH'] + ':' + BIN_PATH;

  childProcess.exec(
    'cp /var/task/ffmpeg /tmp/.; chmod 755 /tmp/ffmpeg;',
    function (error, stdout, stderr) {
      if (error) {
        console.log('Error occurred', error);
      } else {
        var ffmpeg = '/tmp/ffmpeg';
        var createStream = fs.createWriteStream("/tmp/video.mp3");
        createStream.end();
        var params = {
          Bucket: "test-bucket",
          Key: event.Records[0].s3.object.key
        };
        s3.getObject(params, function(err, data) {
          if (err) {
            console.log("Error", err);
          }
          fs.writeFile("/tmp/vid.mp4", data.Body, function (err) {
            if (err) console.log(err.code, "-", err.message);
            return callback(err);
          }, function() {
            try {
              var stats = fs.statSync("/tmp/vid.mp4");
              console.log("size of the file1 ", stats["size"]);
              try {
                console.log("Yeah");
                const inputFilename = "/tmp/vid.mp4";
                const mp3Filename = "/tmp/video.mp3";
                // Convert the video file to an MP3 file using ffmpeg.
                const ffmpegArgs = [
                  '-i', inputFilename,
                  '-vn',                   // Disable the video stream in the output.
                  '-acodec', 'libmp3lame', // Use LAME for the mp3 encoding.
                  '-ac', '2',              // Set 2 audio channels.
                  '-q:a', '6',             // Set the quality to be roughly 128 kb/s.
                  mp3Filename,
                ];
                try {
                  const process = childProcess.spawnSync(ffmpeg, ffmpegArgs);
                  console.log("stdout ", process.stdout);
                  console.log("stderr ", process.stderr);
                  console.log("tmp files ");
                  fs.readdir('/tmp/', (err, files) => {
                    files.forEach(file => {
                      var stats = fs.statSync(`/tmp/${file}`);
                      console.log("size of the file2 ", stats["size"]);
                      console.log(file);
                    });
                  });
                } catch (e) {
                  console.log("error while converting video to audio ", e);
                }
                // return process;
              } catch (e) {
                console.log(e);
              }
            } catch (e) {
              console.log("file is not complete", e);
            }
          }, function () {
            console.log("checking ");
            var stats = fs.statSync("/tmp/video.mp3");
            console.log("size of the file2 ", stats["size"]);
          });
          return callback(err);
        });
      }
    }
  )
}

Code workflow
First of all, I downloaded the ffmpeg binary and put it into my project directory. After that, I zipped my project and uploaded it to the Lambda function. This Lambda function is triggered whenever a new file is uploaded to an S3 bucket. I have checked the files in /tmp/ storage: the .mp3 audio file is present, but its size is 0 MB.
Note
Also, the function below is never reached: when I look at the CloudWatch logs I cannot see its console messages. I don't know why this function is not being called.
function () {
  console.log("checking ");
  var stats = fs.statSync("/tmp/video.mp3");
  console.log("size of the file2 ", stats["size"]);
});

Please help me find the solution to this issue. I have spent a lot of time trying to figure it out, but I have not been able to find a solution. Any suggestions are welcome!
Thanks,
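A likely culprit, sketched below using the same bucket and file names as the question: fs.writeFile only accepts a single completion callback, so the extra functions passed to it are never wired up the way the code expects, and callback(err) is invoked as soon as the download returns, which can end the Lambda before ffmpeg has produced the MP3. Pre-creating an empty /tmp/video.mp3 can also make ffmpeg refuse to overwrite the file unless -y is passed. A minimal restructuring, assuming the same ffmpeg binary layout, could look like this:

  const fs = require('fs');
  const childProcess = require('child_process');
  const AWS = require('aws-sdk');

  const s3 = new AWS.S3({ apiVersion: '2006-03-01' });

  exports.handler = (event, context, callback) => {
    // Copy the bundled ffmpeg binary somewhere executable, as in the question.
    childProcess.execSync('cp /var/task/ffmpeg /tmp/ && chmod 755 /tmp/ffmpeg');

    const params = {
      Bucket: 'test-bucket', // same placeholder bucket as the question
      Key: event.Records[0].s3.object.key,
    };

    s3.getObject(params, (err, data) => {
      if (err) return callback(err); // stop on a failed download

      // fs.writeFile takes ONE callback; the conversion goes inside it.
      fs.writeFile('/tmp/vid.mp4', data.Body, (writeErr) => {
        if (writeErr) return callback(writeErr);

        // -y overwrites /tmp/video.mp3 if a previous run left one behind.
        const result = childProcess.spawnSync('/tmp/ffmpeg', [
          '-y',
          '-i', '/tmp/vid.mp4',
          '-vn',
          '-acodec', 'libmp3lame',
          '-ac', '2',
          '-q:a', '6',
          '/tmp/video.mp3',
        ]);
        console.log('ffmpeg stderr:', result.stderr && result.stderr.toString());
        console.log('mp3 size:', fs.statSync('/tmp/video.mp3').size);

        // Signal completion only after the MP3 actually exists.
        callback(null, 'conversion finished');
      });
    });
  };

The same flow could equally be written with promises or async/await; the key point is that each step starts only after the previous one has completed.
-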
Multiple live video outputs advice. Live stream/Record/Preview, FFMPEG, Windows, Decklink [closed]
18 September 2024, by stroltz — I am looking for advice on how best to achieve multiple live video outputs.

The live source is a Decklink card on Windows. (We have an ffmpeg build working that can access the card.) We want 4 outputs:

- We want to run a preview window (low quality would be preferred) just so the user can see that the video is working.
- We want to be able to live stream: single bitrate, RTMP (it goes up to a CDN).
- Independently of the streaming, we want to be able to stop and start recording to file, ideally using CRF. That means a separate encode; or maybe we reuse the RTMP encode and do only one encode in total, we are not sure.
- We also want to save a separate audio file, stopping and starting at the same time as the video file above. (If required we could do this as a post-process on the video file we make above.)

We want to keep CPU use as low as reasonably possible (so no high-end hardware).


We have had a suggestion of doing this with ffmpeg:

Input >> ffmpeg

- split the input into main and monitoring streams;
- scale the monitoring stream to a lower resolution;
- encode both streams;
- provide both outputs to a local streaming server.

ffmpeg >> local streaming server

- use an API to start and stop recordings (or a web console, if you do it manually);
- provide streams to the CDN and/or provide access to your streams for end users.

recorded files >> another ffmpeg (controlled by a script that gets a RECORDING COMPLETED event and starts the ffmpeg process)

- extract the audio from the recorded file;
- save the audio into its own file.

That sounds possible, but if we do that, which local streaming server would work best (open source, with an API, ...)?

We are also open to other ideas as to the best way to do this.


https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs shows lots of ways, but I don't think you get to control the individual outputs independently.
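For what it's worth, a single ffmpeg process can feed at least the stream and the preview at once by duplicating the capture with filter_complex, though, as the wiki notes, such outputs start and stop together. A sketch only; the device name, URLs, ports and encoder settings are all placeholders:

  const { spawn } = require('child_process');

  // One capture, two independent encodes: a full-quality RTMP stream and a
  // low-bitrate local preview. All names, URLs and settings are placeholders.
  const ffmpeg = spawn('ffmpeg', [
    '-f', 'decklink', '-i', 'DeckLink Mini Recorder', // capture device (placeholder)
    // Duplicate the video: full size for the stream, scaled down for preview.
    '-filter_complex', '[0:v]split=2[main][mon];[mon]scale=640:-2[preview]',
    // Output 1: RTMP to the CDN (H.264 + AAC in an FLV container).
    '-map', '[main]', '-map', '0:a',
    '-c:v', 'libx264', '-preset', 'veryfast', '-b:v', '4000k',
    '-c:a', 'aac', '-b:a', '128k',
    '-f', 'flv', 'rtmp://example-cdn/live/streamkey',
    // Output 2: cheap preview as MPEG-TS over local UDP (open with ffplay/VLC).
    '-map', '[preview]', '-an',
    '-c:v', 'libx264', '-preset', 'ultrafast', '-b:v', '500k',
    '-f', 'mpegts', 'udp://127.0.0.1:1234',
  ]);

  ffmpeg.stderr.on('data', (chunk) => process.stderr.write(chunk));

Because the recording has to start and stop independently, it is probably better handled by the suggested local streaming server than by this process; the nginx-rtmp module, for instance, supports manual recording that can be started and stopped through its HTTP control interface.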


-