
Media (91)
-
Head down (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (49)
-
Media-specific libraries and software
10 December 2010, by
For correct and optimal operation, several things need to be taken into consideration.
After installing apache2, mysql and php5, it is important to install other required software, whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio, in order to support as many file formats as possible. See: this tutorial; FFMpeg with the maximum number of decoders and (...) (a rough install sketch follows at the end of this list)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
-
Permissions overridden by plugins
27 April 2010, by
Mediaspip core:
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
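As a rough illustration of the install step sketched in the "Media-specific libraries and software" entry above, on a Debian/Ubuntu server of that era this might look as follows; the package names are assumptions, not taken from the article:
# Base web stack mentioned in the article
sudo apt-get install apache2 mysql-server php5
# Multimedia libraries for encoding/decoding (Debian/Ubuntu package names assumed)
sudo apt-get install libx264-dev libtheora-dev libvpx-dev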
On other sites (7521)
-
ffmpeg stream chrome kiosk mode ubuntu 16.04 server
21 December 2016, by Raul
I have a weird out-of-sync issue while using ffmpeg to stream a Chrome browser to YouTube Live from an Ubuntu 16.04 server.
Issue: the output video streamed to YouTube has audio and video out of sync, sometimes by as much as 3 s.
Current flow:
1) start pulseaudio - we use something like this to start it:
pulseaudio --start -vvv --disallow-exit --log-target=syslog --high-priority --exit-idle-time=-1 --daemonize
2) start Xvfb
Xvfb :0 -ac -screen 0 1920x1080x24
3) start Chrome on Linux in kiosk mode
google-chrome --kiosk --disable-gpu --incognito --no-first-run --disable-java --disable-plugins --disable-translate --disk-cache-size=$((1024 * 1024)) --disk-cache-dir=/tmp/chrome/ --user-data-dir=/tmp/chrome/ --force-device-scale-factor=1 --window-size=1920,1080 --window-position=0,0 LOCATION_URL
4) start ffmpeg
ffmpeg -y \
-thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
-thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
-c:v libx264 -pix_fmt yuv420p -c:v libx264 -g 48 -crf 24 -filter:v fps=24 -preset ultrafast -tune zerolatency \
-c:a aac -strict -2 -channel_layout stereo -ab 96k -ac 2 -flags +global_header \
-f flv YOUTUBE_LIVE_STREAMING_RTMP
Note: this is running on an Amazon EC2 instance, meaning there is no sound card, so ALSA and PulseAudio are creating a dummy audio card. However, the latency does not come from there. Logs:
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Adjust latency mode enabled, configuring sink latency to half of overall latency.
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Requested latency=23.22 ms, Received latency=23.22 ms
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Final latency 69.66 ms = 23.22 ms + 2*11.61 ms + 23.22 ms
At this point, here’s what we observed:
-
if we start ffmpeg immediately after issuing the command to start Chrome, we see DTS errors from ffmpeg. Audio is out of sync with the video and runs 3-5 seconds AHEAD. We also noticed the offset stays the same for the full duration of the stream
-
if we start ffmpeg after around 10 seconds, audio and video are almost in sync. We then manually added -itsoffset -0.125 to the ffmpeg command and everything is perfect.
Questions:
- Why would ffmpeg have so much lag if it's started right after Chrome?
- Is starting ffmpeg after 10 s (or X seconds) the expected behavior? That is, is it because the system needs to wait for the audio/video signals to be "ready"?
- Is there a way to reliably calculate or know when Chrome is fully ready before starting ffmpeg? We found it sometimes takes 5 s, sometimes 10, depending on the URL we load. (A sketch of one approach follows the update below.)
- Besides the DTS errors that ffmpeg throws, is there any other way to know if audio/video is out of sync? Sometimes we have a delay of between 0.5 and 1 s, but ffmpeg does not report anything, and a restart is required to "re-balance" the audio/video inputs and get them back in sync.
- Can PulseAudio be the problem in this scenario?
Thank you
UPDATE Dec 20
We were able to do some tricks to force Chrome to start the audio on page load, which forces it to connect to PulseAudio. With that, plus a 3 s delay before starting ffmpeg, there is no more delay when ffmpeg starts.
However, our app is a WebRTC app, and an even stranger thing happens: if we start the page with no webcam/audio, then once the webcam/audio is enabled, ffmpeg (while showing no errors) has a delay of about 2 s. If we keep talking, within about 30 s at most that delay is GONE.
So the new questions are:
- Besides the DTS errors that ffmpeg throws, is there any other way to know if audio/video is out of sync? Sometimes we have a delay of between 0.5 and 1 s, but ffmpeg does not report anything.
- What could cause the audio/video to be out of sync initially and then catch up?
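On the question of detecting when Chrome is fully ready (referenced above), one pragmatic approach is to poll PulseAudio for Chrome's playback stream before launching ffmpeg. The following is a minimal sketch, not a tested solution: it assumes Chrome's sink input exposes an application name containing "chrom", the 30-attempt budget and the extra 3 s settle time are arbitrary, and the capture command is the one from step 4 above (with the duplicated -c:v libx264 dropped).
#!/bin/bash
# Poll PulseAudio until Chrome has registered a playback stream (sink input).
# The grep pattern is an assumption; adjust it to whatever application.name
# `pactl list sink-inputs` reports for your Chrome build.
for attempt in $(seq 1 30); do
    if pactl list sink-inputs | grep -qi 'chrom'; then
        echo "Chrome audio stream detected after ${attempt}s"
        break
    fi
    sleep 1
done

# Extra settle time, mirroring the ~3 s delay the update above found helpful.
sleep 3

# Capture command from step 4 above; YOUTUBE_LIVE_STREAMING_RTMP is the same placeholder.
ffmpeg -y \
    -thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
    -thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
    -c:v libx264 -pix_fmt yuv420p -g 48 -crf 24 -filter:v fps=24 -preset ultrafast -tune zerolatency \
    -c:a aac -strict -2 -channel_layout stereo -ab 96k -ac 2 -flags +global_header \
    -f flv "$YOUTUBE_LIVE_STREAMING_RTMP"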
-
Store converted video in a buffer to transfer to S3
24 May 2012, by Chris
I have a system set up to upload an image, take the temporarily uploaded file, convert it to a resized JPEG, read the buffer of that conversion and send the buffer to Amazon S3 for storage as an image. It works wonderfully because no permanent file is stored on my server; everything is on S3.
Now I am attempting to add the same functionality for video. The process goes through, but the resulting files stored on Amazon's S3 servers are 1.5 KB apiece, instead of multi-megabyte videos.
My code is as follows:
public function transfer($method, $file, $bucketName, $filename, $contentType){
    switch($method){
        case 'file':
            if($this->s3->putObject($this->s3->inputFile($file, false), $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
                return true;
            else
                return false;
            break;
        case 'string':
            if ($this->s3->putObject($file, $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
                return true;
            else
                return false;
            break;
        case 'resource':
            if($this->s3->putObject($this->s3->inputResource(fopen($file, 'rb'), filesize($file)), $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
                return true;
            else
                return false;
            break;
    }
    return false;
}
/**
 * AmazonS3Handler - convert()
 */
public function convert($file, $type)
{
    $ffmpeg = "/usr/bin/ffmpeg";
    $cmd['webm'] = $ffmpeg ." -i ". $file ." -vcodec libvpx -acodec libvorbis -ac 2 -f webm -g 30 2>&1";
    $cmd['ogv'] = $ffmpeg ." -i ". $file ." -vcodec libtheora -acodec libvorbis -ac 2 2>&1";
    $cmd['mp4'] = $ffmpeg ." -i ". $file ." -vcodec libx264 -acodec libfaac -ac 2 2>&1";
    $cmd['jpg'] = $ffmpeg ." -i ". $file ." -vframes 30";
    ob_start();
    passthru($cmd[$type]);
    $fileContents = ob_get_contents();
    ob_end_clean();
    return $fileContents;
}
From my understanding, passthru should return raw output which could be picked up by the output buffer.
Am I doing something wrong? Is there a better way to convert a video on my server but keep the data on S3?
Thanks!
EDIT: I've boiled it down to the execution of the command. FFmpeg wasn't being located, so I changed the path to "/usr/bin/ffmpeg"; no more error 127, but now when I run exec() (instead of passthru) I get error 1.
EDIT 2: Here is the output of running the script:
Array
(
[0] => ffmpeg version 0.7.3-4:0.7.3-0ubuntu0.11.10.1, Copyright (c) 2000-2011 the Libav developers
[1] => built on Jan 4 2012 16:08:51 with gcc 4.6.1
[2] => configuration: --extra-version='4:0.7.3-0ubuntu0.11.10.1' --arch=amd64 --prefix=/usr --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --enable-libvpx --enable-runtime-cpudetect --enable-vaapi --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
[3] => libavutil 51. 7. 0 / 51. 7. 0
[4] => libavcodec 53. 6. 0 / 53. 6. 0
[5] => libavformat 53. 3. 0 / 53. 3. 0
[6] => libavdevice 53. 0. 0 / 53. 0. 0
[7] => libavfilter 2. 4. 0 / 2. 4. 0
[8] => libswscale 2. 0. 0 / 2. 0. 0
[9] => libpostproc 52. 0. 0 / 52. 0. 0
[10] =>
[11] => Seems stream 1 codec frame rate differs from container frame rate: 2000.00 (2000/1) -> 30.30 (1000/33)
[12] => Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'iv.m4v':
[13] => Metadata:
[14] => major_brand : M4V
[15] => minor_version : 1
[16] => compatible_brands: M4V M4A mp42isom
[17] => creation_time : 2012-01-23 04:00:18
[18] => encoder : Mac OS X v10.7.2 (CMA 889, CM 705.42, x86_64)
[19] => Duration: 00:00:11.70, start: 0.000000, bitrate: 345 kb/s
[20] => Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16, 125 kb/s
[21] => Metadata:
[22] => creation_time : 2012-01-23 04:00:18
[23] => Stream #0.1(und): Video: h264 (Main), yuv420p, 640x360 [PAR 1:1 DAR 16:9], 206 kb/s, 30.30 fps, 30.30 tbr, 1k tbn, 2k tbc
[24] => Metadata:
[25] => creation_time : 2012-01-23 04:00:18
[26] => iv.webm: Permission denied
)
It looks like the permissions aren't set right; however, the file has chmod 777 permissions. What other permissions need to be changed?
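For context on why the uploaded objects are only 1.5 KB: with 2>&1 appended, passthru() captures ffmpeg's console log rather than the encoded video, while ffmpeg itself tries to write iv.webm into the current working directory, which the web-server user cannot write to (hence "Permission denied"). Below is a minimal sketch of two command-level alternatives, with placeholder file names; the surrounding PHP (reading the pipe with popen()/proc_open(), or file_get_contents() on the temporary file) is not shown.
# Option 1: emit the encoded WebM on stdout (pipe:1) and leave the log on stderr,
# so the caller that captures stdout receives video bytes, not log text.
ffmpeg -i input.m4v -vcodec libvpx -acodec libvorbis -ac 2 -g 30 -f webm pipe:1 > output.webm

# Option 2: write to a directory the web-server user can actually write to,
# then read the file back before handing it to the S3 upload.
ffmpeg -y -i input.m4v -vcodec libvpx -acodec libvorbis -ac 2 -g 30 /tmp/output.webm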
-
Create mp4 thumbnail in node.js
21 May 2015, by trdavidson
I'm new to node.js and the AWS framework, so I apologize in advance. I am trying to configure the AWS DB of my app to automatically create thumbnails using AWS Lambda. This works great using the example provided by Amazon for regular .jpg images (walkthrough here: https://alestic.com/2014/11/aws-lambda-cli/).
However, trying to do the same operation for mp4 files seems exponentially more difficult. After some searching, it seems the way to do this is by using the ffmpeg module. The problem is that I do not understand the response object returned by AWS at all, and thus am not sure how to manipulate it so that ffmpeg can use it.
Current code:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm')
.subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');
var ffmpeg = require('ffmpeg');
var stream = require('stream')
// constants
var MAX_WIDTH = 250;
var MAX_HEIGHT = 250;
// get reference to S3 client
var s3 = new AWS.S3();
exports.handler = function(event, context) {
// Read options from the event.
console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
var srcBucket = event.Records[0].s3.bucket.name;
// Object key may have spaces or unicode non-ASCII characters.
var srcKey =
decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
var dstBucket = srcBucket + "small";
var dstKey = "small-" + srcKey;
// Sanity check: validate that source and destination are different buckets.
if (srcBucket == dstBucket) {
console.error("Destination bucket must not match source bucket.");
return;
}
// Infer the image type.
var typeMatch = srcKey.match(/\.([^.]*)$/);
if (!typeMatch) {
console.error('unable to infer image type for key ' + srcKey);
return;
}
var imageType = typeMatch[1];
if (imageType != "mp4" && imageType != "avi") {
console.log('skipping non-image ' + srcKey);
return;
}
// Download the image from S3, transform, and upload to a different S3 bucket.
async.waterfall([
function download(next) {
// Download the image from S3 into a buffer.
s3.getObject({
Bucket: srcBucket,
Key: srcKey
},
next);
},
function tranform(response, next) {
var instream = new stream.Readable();
instream.push(response.Body)
instream.push(null)
var outstream = new stream();
ffmpeg(instream)
.screenshots({timestamps: 1, size: '200x200'})
.output('screenshot.png')
.output(outstream)
.on('end', function(){
console.log('screenshots finished processing son!')
})
gm(outstream, 'screenshot.png').size(function(err, size) {
// Infer the scaling factor to avoid stretching the image unnaturally.
var scalingFactor = Math.min(
MAX_WIDTH / size.width,
MAX_HEIGHT / size.height
);
var width = scalingFactor * size.width;
var height = scalingFactor * size.height;
// Transform the image buffer in memory.
this.resize(width, height)
.toBuffer(imageType, function(err, buffer) {
if (err) {
next(err);
} else {
next(null, response.ContentType, buffer);
}
});
});
},
function upload(contentType, data, next) {
// Stream the transformed image to a different S3 bucket.
s3.putObject({
Bucket: dstBucket,
Key: dstKey,
Body: data,
ContentType: contentType
},
next);
}
], function (err) {
if (err) {
console.error(
'Unable to resize ' + srcBucket + '/' + srcKey +
' and upload to ' + dstBucket + '/' + dstKey +
' due to an error: ' + err
);
} else {
console.log(
'Successfully resized ' + srcBucket + '/' + srcKey +
' and uploaded to ' + dstBucket + '/' + dstKey
);
}
context.done();
}
);} ;
Any suggestions are welcome! Thanks
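Setting the Lambda plumbing aside, the frame extraction itself reduces to one ffmpeg invocation. A minimal sketch with placeholder file names and an arbitrary 1-second seek point (inside Lambda you would also need to bundle a static ffmpeg binary with the function):
# Grab a single frame one second into the video and scale it to 250 pixels wide,
# letting ffmpeg keep the aspect ratio (mirroring MAX_WIDTH in the handler above).
ffmpeg -ss 1 -i input.mp4 -vframes 1 -vf scale=250:-1 thumbnail.png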