
Other articles (108)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a "farm mode" installation, you will also need to make other modifications (...) -
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Improving the base version
13 September 2013
Nice multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two following images to compare.
All you need to do is activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
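Outside of SPIP, Chosen is an ordinary jQuery plugin, so the enhancement described above boils down to one call on the configured selector. A minimal sketch, assuming jQuery and the Chosen assets are already loaded on the page (option values here are illustrative only):

// Enhance every multiple-select on the page with Chosen once the DOM is ready.
jQuery(function ($) {
  $('select[multiple]').chosen({
    width: '100%',
    placeholder_text_multiple: 'Select some options'
  });
});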
On other sites (10761)
-
How to make drawtext work in AWS Lambda ffmpeg?
22 March 2020, by codeul
I have set up an AWS Lambda function to use ffmpeg via the layer
https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:145266761615:applications~ffmpeg-lambda-layer
Some ffmpeg commands work, but I noticed that when I use drawtext or drawbox, I am not getting a proper mp4 file. The output looks corrupted and is small in size. (FYI: the output file is /tmp/test2.mp4 and I then copy it to an S3 bucket.) What's wrong here? Would appreciate any help. Thanks.
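For context, a minimal Node.js sketch of the flow just described (run ffmpeg from the layer, write to /tmp, then copy the result to S3) might look like the following; the bucket name and key are placeholders, and the binary path assumes the layer exposes ffmpeg at /opt/bin/ffmpeg. The font path is the one shown in the log further below.

// Illustrative sketch of the handler flow, not the poster's actual function.
const { execFileSync } = require('child_process');
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async () => {
  const outFile = '/tmp/test2.mp4';
  // Run ffmpeg from the layer; a hard failure in the filter would throw here.
  execFileSync('/opt/bin/ffmpeg', [
    '-f', 'lavfi', '-i', 'color=0x142d3d:s=1280x720:d=10',
    '-vf', "drawtext=fontcolor=white:fontsize=50:fontfile=/var/task/fonts/aladin.ttf:text='test':y=10:x=10",
    '-movflags', '+faststart', '-y', outFile,
  ]);
  // Copy the finished file to S3 (bucket and key are placeholders).
  await s3.putObject({
    Bucket: 'my-output-bucket',
    Key: 'test2.mp4',
    Body: fs.readFileSync(outFile),
    ContentType: 'video/mp4',
  }).promise();
};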
ffmpeg command:
ffmpeg -f lavfi -i color=0x142d3d:s=1280*720:d=10 -vf "drawtext=fontcolor=white:fontsize=50:fontfile=aladin.ttf:text='test':y=10:x=10" -movflags +faststart -y /tmp/test2.mp4
From the log:
o --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzvbi --enable-libzimg
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
Input #0, lavfi, from 'color=0x142d3d:s=1280*720:d=10':
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[Parsed_drawtext_0 @ 0x5852500] Using "/var/task/fonts/aladin.ttf"
[libx264 @ 0x5850080] using SAR=1/1
[libx264 @ 0x5850080] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x5850080] profile Progressive High, level 3.1, 4:2:0, 8-bit
[libx264 @ 0x5850080] 264 - core 157 r2969 d4099dd - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to '/tmp/test2.mp4':
Metadata:
encoder : Lavf58.20.100
Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 25 fps, 12800 tbn, 25 tbc
Metadata:
encoder : Lavc58.35.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
frame= 2 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
frame= 9 fps=7.5 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
frame= 17 fps=9.8 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
frame= 25 fps= 11 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
frame= 30 fps=7.4 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
=================
-
Create mp4 thumbnail in node.js
21 May 2015, by trdavidson
I am new to node.js and the AWS framework, so I apologize in advance. I am trying to configure the AWS DB of my app to automatically create thumbnails using AWS Lambda. This works great for regular .jpg images, using the example provided by Amazon (walkthrough here: https://alestic.com/2014/11/aws-lambda-cli/).
However, trying to do the same operation for mp4 files seems exponentially more difficult. After some searching, it seems the way to do this is by using the ffmpeg module. The problem is that I do not understand the response object returned by AWS at all, and thus am not sure how to manipulate it so that ffmpeg can use it.
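For what it's worth, the object that s3.getObject passes to its callback has a Body property containing a Buffer with the file's bytes. A minimal sketch of one way to use it, writing the Buffer to /tmp and grabbing a frame with the fluent-ffmpeg package (used here purely for illustration instead of the ffmpeg module; all paths and names are hypothetical):

// Illustrative sketch only, not the code from this question.
// Assumes the fluent-ffmpeg package and an ffmpeg binary are available.
var fs = require('fs');
var ffmpeg = require('fluent-ffmpeg');

function thumbnailFromS3Object(response, callback) {
  // response.Body is a Buffer (aws-sdk v2); write it to a temp file so ffmpeg can read it.
  var tmpVideo = '/tmp/source.mp4';
  fs.writeFileSync(tmpVideo, response.Body);

  ffmpeg(tmpVideo)
    .screenshots({ timestamps: [1], size: '200x200', folder: '/tmp', filename: 'thumb.png' })
    .on('end', function () {
      callback(null, fs.readFileSync('/tmp/thumb.png'));
    })
    .on('error', callback);
}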
current code:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm')
.subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');
var ffmpeg = require('ffmpeg');
var stream = require('stream')
// constants
var MAX_WIDTH = 250;
var MAX_HEIGHT = 250;
// get reference to S3 client
var s3 = new AWS.S3();
exports.handler = function(event, context) {
// Read options from the event.
console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
var srcBucket = event.Records[0].s3.bucket.name;
// Object key may have spaces or unicode non-ASCII characters.
var srcKey =
decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
var dstBucket = srcBucket + "small";
var dstKey = "small-" + srcKey;
// Sanity check: validate that source and destination are different buckets.
if (srcBucket == dstBucket) {
console.error("Destination bucket must not match source bucket.");
return;
}
// Infer the image type.
var typeMatch = srcKey.match(/\.([^.]*)$/);
if (!typeMatch) {
console.error('unable to infer image type for key ' + srcKey);
return;
}
var imageType = typeMatch[1];
if (imageType != "mp4" && imageType != "avi") {
console.log('skipping non-image ' + srcKey);
return;
}
// Download the image from S3, transform, and upload to a different S3 bucket.
async.waterfall([
function download(next) {
// Download the image from S3 into a buffer.
s3.getObject({
Bucket: srcBucket,
Key: srcKey
},
next);
},
function tranform(response, next) {
var instream = new stream.Readable();
instream.push(response.Body)
instream.push(null)
var outstream = new stream();
ffmpeg(instream)
.screenshots({timestamps: 1, size: '200x200'})
.output('screenshot.png')
.output(outstream)
.on('end', function(){
console.log('screenshots finished processing son!')
})
gm(outstream, 'screenshot.png').size(function(err, size) {
// Infer the scaling factor to avoid stretching the image unnaturally.
var scalingFactor = Math.min(
MAX_WIDTH / size.width,
MAX_HEIGHT / size.height
);
var width = scalingFactor * size.width;
var height = scalingFactor * size.height;
// Transform the image buffer in memory.
this.resize(width, height)
.toBuffer(imageType, function(err, buffer) {
if (err) {
next(err);
} else {
next(null, response.ContentType, buffer);
}
});
});
},
function upload(contentType, data, next) {
// Stream the transformed image to a different S3 bucket.
s3.putObject({
Bucket: dstBucket,
Key: dstKey,
Body: data,
ContentType: contentType
},
next);
}
], function (err) {
if (err) {
console.error(
'Unable to resize ' + srcBucket + '/' + srcKey +
' and upload to ' + dstBucket + '/' + dstKey +
' due to an error: ' + err
);
} else {
console.log(
'Successfully resized ' + srcBucket + '/' + srcKey +
' and uploaded to ' + dstBucket + '/' + dstKey
);
}
context.done();
}
);} ;
Any suggestions are welcome! Thanks
-
Store converted video in a buffer to transfer to S3
24 May 2012, by Chris
I have a system set up to upload an image, take that temporarily uploaded file, convert it to a resized jpeg, read the buffer from that conversion, and send the buffer to Amazon S3 for storage as an image. It works wonderfully because no permanent file is stored on my server; everything is on S3.
Now I am attempting to add the same functionality for video. The process runs, but the resulting files stored on Amazon's S3 servers are 1.5 kB apiece instead of multi-megabyte videos.
My code is as follows:
public function transfer($method, $file, $bucketName, $filename, $contentType){
switch($method){
case 'file':
if($this->s3->putObject($this->s3->inputFile($file, false), $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
return true;
else
return false;
break;
case 'string':
if ($this->s3->putObject($file, $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
return true;
else
return false;
break;
case 'resource':
if($this->s3->putObject($this->s3->inputResource(fopen($file, 'rb'), filesize($file)), $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
return true;
else
return false;
break;
}
return false;
}
/**
* AmazonS3Handler - convert()
*/
public function convert($file, $type)
{
$ffmpeg = "/usr/bin/ffmpeg";
$cmd['webm'] = $ffmpeg. " -i ". $file ." -vcodec libvpx -acodec libvorbis -ac 2 -f webm -g 30 2>&1";
$cmd['ogv'] = $ffmpeg. " -i ". $file ." -vcodec libtheora -acodec libvorbis -ac 2 2>&1";
$cmd['mp4'] = $ffmpeg. " -i ". $file ." -vcodec libx264 -acodec libfaac -ac 2 2>&1";
$cmd['jpg'] = $ffmpeg. " -i ". $file ." -vframes 30";
ob_start();
passthru($cmd[$type]);
$fileContents = ob_get_contents();
ob_end_clean();
return $fileContents;
}
From my understanding, passthru should return raw output, which could be picked up by the output buffer.
Am I doing something wrong? Is there a better way to convert a video on my servers but keep the data on S3?
Thanks!
EDIT: I've boiled it down to the execution of the command. FFMPEG wasn't being located, so I changed the path to "/usr/bin/ffmpeg"; no more error 127. Now when I run exec() (not passthru) I get error 1.
EDIT 2: Here is the output of running the script:
Array
(
[0] => ffmpeg version 0.7.3-4:0.7.3-0ubuntu0.11.10.1, Copyright (c) 2000-2011 the Libav developers
[1] => built on Jan 4 2012 16:08:51 with gcc 4.6.1
[2] => configuration: --extra-version='4:0.7.3-0ubuntu0.11.10.1' --arch=amd64 --prefix=/usr --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --enable-libvpx --enable-runtime-cpudetect --enable-vaapi --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
[3] => libavutil 51. 7. 0 / 51. 7. 0
[4] => libavcodec 53. 6. 0 / 53. 6. 0
[5] => libavformat 53. 3. 0 / 53. 3. 0
[6] => libavdevice 53. 0. 0 / 53. 0. 0
[7] => libavfilter 2. 4. 0 / 2. 4. 0
[8] => libswscale 2. 0. 0 / 2. 0. 0
[9] => libpostproc 52. 0. 0 / 52. 0. 0
[10] =>
[11] => Seems stream 1 codec frame rate differs from container frame rate: 2000.00 (2000/1) -> 30.30 (1000/33)
[12] => Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'iv.m4v':
[13] => Metadata:
[14] => major_brand : M4V
[15] => minor_version : 1
[16] => compatible_brands: M4V M4A mp42isom
[17] => creation_time : 2012-01-23 04:00:18
[18] => encoder : Mac OS X v10.7.2 (CMA 889, CM 705.42, x86_64)
[19] => Duration: 00:00:11.70, start: 0.000000, bitrate: 345 kb/s
[20] => Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16, 125 kb/s
[21] => Metadata:
[22] => creation_time : 2012-01-23 04:00:18
[23] => Stream #0.1(und): Video: h264 (Main), yuv420p, 640x360 [PAR 1:1 DAR 16:9], 206 kb/s, 30.30 fps, 30.30 tbr, 1k tbn, 2k tbc
[24] => Metadata:
[25] => creation_time : 2012-01-23 04:00:18
[26] => iv.webm: Permission denied
)
It looks like the permissions aren't set right; however, the file has chmod 777 permissions. What other permissions need to be changed?