
Other articles (52)
-
Creating farms of unique websites
13 April 2011 — MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...) -
Websites made with MediaSPIP
2 May 2011 — This page lists some websites based on MediaSPIP.
-
Possibility of deployment as a farm
12 April 2011 — MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example: to share set-up costs between several projects / individuals; to rapidly deploy a multitude of unique sites; to avoid having to dump all creations into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)
On other sites (5822)
-
ffmpeg hangs when muxing longer videos
11 August 2020, by 492357816 — I am using ffmpeg with the Janus WebRTC server to mux video-conference recordings. For conferences under approximately 5 minutes, the existing code works well. With longer source files, the process appears to hang, returning neither an error nor a completed file.
Any advice/suggestions would be most welcome.


// (The enclosing function declaration is not shown in the original post;
// it presumably takes `filename` and returns a Promise.)
var recordingsPath = exports.recordingsPath;
var videoRecordingPath = `${recordingsPath}/${filename}-video.mjr`;
var audioRecordingPath = `${recordingsPath}/${filename}-audio.mjr`;
var videoOutputPath = `${recordingsPath}/${filename}-output1.mp4`;
var audioOutputPath = `${recordingsPath}/${filename}-output1.wav`;
var finalOutputPath = `${recordingsPath}/${filename}-final.mp4`;
var rawConverterPath = "/opt/janus/bin/janus-pp-rec";
var audioRecordingByteSize = exports.getFileSizeInBytes(audioRecordingPath);
var audioIsValid = audioRecordingByteSize > 8;

if (audioIsValid) {
  // Turn the audio and video .mjr files into .wav and .mp4 respectively
  exports.runBashSync([rawConverterPath, videoRecordingPath, videoOutputPath]);
  exports.runBashSync([rawConverterPath, audioRecordingPath, audioOutputPath]);
  return exports.mergeMp4AndWav(
    audioOutputPath, videoOutputPath, finalOutputPath
  ).then(function(result) {
    // Remove the intermediate and source files for audio & video
    exports.runBashSync(["rm", audioOutputPath]);
    exports.runBashSync(["rm", videoOutputPath]);
    exports.runBashSync(["rm", audioRecordingPath]);
    exports.runBashSync(["rm", videoRecordingPath]);
  });
} else {
  // Handle cases where audio is not available: convert the video only
  feedback("no audio");
  return new Promise(function(resolve) {
    exports.runBashSync([rawConverterPath, videoRecordingPath, finalOutputPath], true);
    // Only try to remove the audio recording if the size lookup found a file
    if (Number.isInteger(audioRecordingByteSize)) {
      exports.runBashSync(["rm", audioRecordingPath]);
    }
    exports.runBashSync(["rm", videoRecordingPath]);
    resolve(true);
  });
}
};
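For reference, mergeMp4AndWav is not shown in the post; a typical ffmpeg invocation for that step (an assumption about its internals, not the poster's actual command) copies the already-encoded video stream and encodes the WAV to AAC:

ffmpeg -y -i <filename>-output1.mp4 -i <filename>-output1.wav -c:v copy -c:a aac <filename>-final.mp4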

exports.joinMp4s = function(mp4Filenames, outputPath) {
  feedback("joining mp4s");
  if (mp4Filenames.length === 1) {
    feedback("single-stream case");
    exports.runBashSync(["mv", mp4Filenames[0], outputPath]);
    return new Promise(function(resolve) { resolve(true); });
  }
  feedback("multi-stream case");

  // Build an ffmpeg command that tiles every input onto a black canvas
  var joinCmd = ["ffmpeg"];
  mp4Filenames.forEach(function(filename) {
    joinCmd.push(`-i ${filename}`);
  });
  // -strict -2 permits the experimental native AAC encoder on older ffmpeg builds
  joinCmd.push("-strict -2");
  var totalHeight = 960;
  var totalWidth = 1280;

  joinCmd.push(`-filter_complex "color=size=${totalWidth}x${totalHeight}:c=Black [base];`);

  var streamsPattern = exports.setStreamsPattern(mp4Filenames);
  var strmHeight = streamsPattern.height;
  var strmWidth = streamsPattern.width;
  var strmDmns = `${strmWidth}x${strmHeight}`;
  feedback("streamDimensions: " + strmDmns);

  // Normalise each input's timestamps and scale it to the tile size
  var i;
  for (i = 0; i < mp4Filenames.length; i++) {
    joinCmd.push(`[${i}:v] setpts=PTS-STARTPTS, scale=${strmDmns} [temp${i}];`);
  }
  // Chain the overlays: [base]+[temp0] -> [tmp1], [tmp1]+[temp1] -> [tmp2], ...
  for (i = 0; i < mp4Filenames.length; i++) {
    var xCoord;
    var yCoord;
    if (i === 0) {
      xCoord = streamsPattern.coords[i].xCoord;
      yCoord = streamsPattern.coords[i].yCoord;
      joinCmd.push(`[base][temp${i}] overlay=shortest=1:x=${xCoord}:y=${yCoord} [tmp${i + 1}];`);
    } else {
      var cmd = `[tmp${i}][temp${i}] overlay=shortest=1`;
      xCoord = streamsPattern.coords[i].xCoord;
      yCoord = streamsPattern.coords[i].yCoord;

      // overlay defaults to x=0/y=0, so zero coordinates can be omitted
      if (xCoord) { cmd += ":x=" + xCoord; }
      if (yCoord) { cmd += ":y=" + yCoord; }

      // The last overlay in the chain gets no output label; its result goes to the muxer
      if (i + 1 !== mp4Filenames.length) { cmd += ` [tmp${i + 1}];`; }
      joinCmd.push(cmd);
    }
  }

  joinCmd.push(`" ${outputPath}`);
  feedback("join command: " + joinCmd);
  return exports.runBashAsync(joinCmd).then(function(result) {
    mp4Filenames.forEach(function(filename) {
      feedback("removing: " + filename);
      // Pass the command as an array, as everywhere else in this module
      exports.runBashSync(["rm", filename]);
    });
  });
};
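Two classic causes of an ffmpeg job that hangs only on longer inputs are worth ruling out here (assumptions, since runBashAsync is not shown in the post): ffmpeg prompting on stdin to overwrite an existing output file, avoidable with -y or -nostdin; and a child process whose stderr pipe is never drained, so ffmpeg blocks once the OS pipe buffer fills. Longer videos produce more log output, which would match the symptom. A minimal sketch of a deadlock-safe runBashAsync, assuming it wraps child_process:

var spawn = require("child_process").spawn;

// Sketch only: run a command, continuously draining stderr so that
// a chatty process such as ffmpeg can never block on a full pipe.
exports.runBashAsync = function(cmdParts) {
  return new Promise(function(resolve, reject) {
    var child = spawn("bash", ["-c", cmdParts.join(" ")], {
      stdio: ["ignore", "ignore", "pipe"]
    });
    var stderr = "";
    child.stderr.on("data", function(chunk) {
      stderr += chunk; // draining the pipe keeps ffmpeg from blocking
    });
    child.on("close", function(code) {
      if (code === 0) { resolve(true); }
      else { reject(new Error("ffmpeg exited " + code + ": " + stderr)); }
    });
  });
};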




-
FFMPEG OpenTok archive audio drift
25 August 2020, by Chris Kooken — I am using OpenTok to build a live video platform. It generates WebM files from each user's stream.


I am using FFMPEG to convert the WebM (WebRTC) videos to MP4s to edit in my NLE. The problem I am having is that the audio drifts. I think it is because the user pauses the audio during the stream. This is the command I'm running:


ffmpeg -acodec libopus -i 65520df3-1033-480e-adde-1856d18e2352.webm -max_muxing_queue_size 99999999 65520df3-1033-480e-adde-1856d18e2352.webm.new.mp4



The problem, I think, is that whenever the user muted themselves, no frames were written, but the PTS timeline stays intact.


This is from the OpenTok documentation (my WebRTC platform)




Audio and video frames may not arrive with monotonic timestamps; frame rates are not always consistent. This is especially relevant if either the video or audio track is disabled for a time, using one of publishVideo or publishAudio publisher properties.






Frame presentation timestamps (PTS) are written based on NTP timestamps taken at the time of capture, offset by the timestamp of the first received frame. Even if a track is muted and later unmuted, the timestamp offset should remain consistent throughout the duration of the entire stream. When decoding in post-processing, a gap in PTS between consecutive frames will exist for the duration of the track mute: there are no "silent" frames in the container.
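To make that mechanism concrete (illustrative numbers, not from the thread): if a publisher mutes audio from t = 60 s to t = 360 s, the audio packets jump from PTS ≈ 60 s directly to PTS ≈ 360 s. A timestamp-aware player renders 300 s of silence; a tool that simply concatenates the decoded frames plays everything after the mute 300 s too early, which matches the multi-minute offsets described in the edit below.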




How can I convert these files and have them play in sync? Note: when I play them in QuickTime or VLC, the files are synced correctly.


EDIT
I've gotten pretty close with this command:


ffmpeg -acodec libopus -i $f -max_muxing_queue_size 99999999 -vsync 1 -af aresample=async=1 -r 30 $f.mp4



But every once in a while, I get a video where the audio starts right away, and they won't actually be talking until halfway through the video. My guess is they muted themselves during the video conference... so in some cases the audio is 5-10 minutes ahead. Again, it plays fine in QuickTime, but pulled into my NLE, it's way off.
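A plausible explanation for that failure mode: aresample=async=1 stretches or squeezes audio to correct small drift, but it does not insert the minutes of silence left by a long mute, especially one at the start of the track. ffmpeg's aresample filter has a first_pts option (a suggestion based on the filter's documented options, not something from the original thread) that makes it pad with silence from timestamp zero, so the command above would become:

ffmpeg -acodec libopus -i $f -max_muxing_queue_size 99999999 -vsync 1 -af aresample=async=1:first_pts=0 -r 30 $f.mp4

Players such as QuickTime and VLC follow the container timestamps, which is why the unconverted files look fine there; an NLE that assumes continuous, constant-rate media does not, so the gaps have to be baked in as real silent samples.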


-
Video editing multi customer host in server instances
1 August 2020, by General Omosco — I created a multi-conference live-stream web app using RTC and the ffmpeg C API for the media streams; videos, images, and text files are mixed together in real time and the output is sent to multiple RTMP destinations. Unfortunately, my server cannot handle more than two conference rooms smoothly.


The server that cannot handle more than two conference rooms running the ffmpeg API is the Comfort plan at OVH:

Type: VPS
Processor: 4 vCores
RAM: 8 GB

Usage while streaming:
CPU: 100% (whether playback is smooth or not)
RAM: 3%


My question is this: is it possible to create a self-contained, isolated instance on the fly and automatically deploy the app in that instance for each customer? And is there a hosting provider that supports that?
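One common pattern for this is to package the app as a container image and launch one resource-capped container per room from the main process; most VPS and cloud providers, OVH included, allow running Docker. A minimal sketch, assuming the app has been built as a hypothetical image named "conference-app":

var execFile = require("child_process").execFile;

// Hypothetical sketch: start one isolated, CPU/memory-capped container
// per conference room, so one busy room cannot starve the others.
function startRoomInstance(roomId, callback) {
  execFile("docker", [
    "run", "-d",
    "--name", "room-" + roomId,
    "--cpus", "2",     // cap each room at 2 vCores
    "--memory", "2g",  // and 2 GB of RAM
    "-e", "ROOM_ID=" + roomId,
    "conference-app"
  ], function(err, stdout) {
    if (err) { return callback(err); }
    callback(null, stdout.trim()); // the new container's id
  });
}

// Usage: spin up an instance when a customer opens a room
startRoomInstance("customer-42", function(err, containerId) {
  if (err) { throw err; }
  console.log("room running in container " + containerId);
});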