
Other articles (39)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)
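As a rough sketch, one row of that table could be modelled like this; the field names come from the excerpt above, but the types and the example value are assumptions, not taken from SPIPmotion.

interface SpipmotionAttente {
  id_spipmotion_attente: number; // unique numeric id of the encoding task to process
  id_document: number;           // numeric id of the original document to encode
  id_objet: number;              // id of the object the encoded document will be attached to
  objet: string;                 // type of that object (e.g. "article" — assumed value)
}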
-
Customizing by adding your logo, banner or background image
5 September 2013
Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013
Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news item creation form.
News item creation form: for a document of the news item type, the default fields are: publication date (customize the publication date) (...)
On other sites (5067)
-
ffmpeg extracting multiple frames from a single input causes SIGSEGV in node.js child_process on the Lambda env
11 October 2023, by Andrew Still
I'm trying to dynamically extract multiple different frames from a single video input, so the command I'm calling looks like this:


ffmpeg -loglevel debug -hide_banner -t 13.269541 -y -ss 0 -i "input-s3-url" -ss 13.269541 -i "same-input-s3-url" -map 0:v -vframes 1 /tmp/ca4cd7a3159743938c5362c171ea2cae.0.png -map 1:v -vframes 1 /tmp/ca4cd7a3159743938c5362c171ea2cae.13.269541.png



It works and everything is fine until I deploy it to Lambda. Even though I'm using 10 GB of RAM, it still fails with an error. Locally it works like a charm, but not on Lambda. I'm not sure what the problem is here, but I'm regularly (though not always) getting SIGSEGV:


at ChildProcess.exithandler (node:child_process:402:12)
at ChildProcess.emit (node:events:513:28)
at ChildProcess.emit (node:domain:489:12)
at maybeClose (node:internal/child_process:1100:16)
at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)
{
  code: null,
  killed: false,
  signal: 'SIGSEGV',
  cmd: '/opt/bin/ffmpeg -loglevel error -hide_banner -t 131.805393 -y -ss 0 -i "https://



I double-checked memory usage and it doesn't look like the cause, but I'm not sure how accurate this number is:


Memory Size: 10240 MB
Max Memory Used: 140 MB


I think maybe it's because it makes requests for each input (at least that's what I saw in debug mode), but I still have no idea what the problem is here. I'd appreciate any suggestions/optimizations/help. Thanks.


ffmpeg is added to Lambda using this layer: https://serverlessrepo.aws.amazon.com/applications/us-east-1/145266761615/ffmpeg-lambda-layer
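For context, here is a minimal sketch of how a command like the one above might be spawned from Node on Lambda. The wrapper and its names are illustrative, not taken from the question; only the /opt/bin/ffmpeg path matches the layer and the error output.

import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Illustrative wrapper: one input per requested timestamp (all pointing at the
// same S3 URL), each mapped to its own single-frame PNG output.
async function extractFrames(inputUrl: string, timestamps: number[], outPrefix: string) {
  const args = ["-loglevel", "error", "-hide_banner", "-y"];
  for (const t of timestamps) {
    // -ss before -i seeks on the input, so each stream decodes only around t.
    args.push("-ss", String(t), "-i", inputUrl);
  }
  timestamps.forEach((t, i) => {
    args.push("-map", `${i}:v`, "-vframes", "1", `${outPrefix}.${t}.png`);
  });
  // execFile passes arguments without a shell, so the URL needs no extra quoting.
  return execFileAsync("/opt/bin/ffmpeg", args, { maxBuffer: 10 * 1024 * 1024 });
}

When chasing a crash like this, running one invocation per timestamp instead of one long multi-output command can also help narrow down which seek is failing.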


-
AWS Lambda : "Unzipped size must be smaller than 106534017 bytes" after adding single file
17 September 2023, by leon
When deploying my lambdas to AWS through the Serverless Framework, I had no problems until I tried adding the ffmpeg binary.


The ffmpeg binaries I have tried to add have ranged from 26 MB to 50 MB. Whichever I add, I get the following error:


UPDATE_FAILED: WhatsappDocumentHandlerLambdaFunction (AWS::Lambda::Function)
Resource handler returned message: "Unzipped size must be smaller than 106534017 bytes (Service: Lambda, Status Code: 400, Request ID: ...)" (RequestToken: ..., HandlerErrorCode: InvalidRequest)



The problem is that I did not add the file to this function. I added it to a completely different one.


I have tried the following things:


- Creating an "empty" function that only contains the ffmpeg binary and a function handler
- Creating a layer that only contains the ffmpeg binary
- Deleting the ffmpeg binary (the error goes away and the deployment succeeds)
- Varying the size of the ffmpeg binary between 26 and 50 MB
- Getting the ffmpeg-lambda-layer (https://github.com/serverlesspub/ffmpeg-aws-lambda-layer ; https://serverlessrepo.aws.amazon.com/applications/us-east-1/145266761615/ffmpeg-lambda-layer) and deploying it myself

When trying every single one of these options, I get the UPDATE_FAILED error on a different function that is surely not too big.


I know I can deploy using a Docker image, but why complicate things with Docker images when this should work?


I am very thankful for any ideas.
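For reference, the "layer plus individual packaging" attempts described above usually take roughly this shape in a serverless.ts config. This is only a sketch of the configuration shape, not a confirmed fix for the error; the service name, handler paths, second function name and layer ARN are placeholders, and it assumes the @serverless/typescript package for the AWS type.

import type { AWS } from "@serverless/typescript";

const serverlessConfiguration: AWS = {
  service: "my-service", // placeholder
  frameworkVersion: "3",
  provider: { name: "aws", runtime: "nodejs18.x" },
  // Package each function separately so files added for one function
  // do not inflate the artifact of every other function.
  package: { individually: true },
  functions: {
    whatsappDocumentHandler: {
      handler: "src/whatsappDocumentHandler.handler", // placeholder path
    },
    videoHandler: {
      handler: "src/videoHandler.handler", // placeholder
      // Attach the ffmpeg layer only to the function that needs the binary,
      // keeping it out of the zipped code packages entirely.
      layers: ["arn:aws:lambda:us-east-1:123456789012:layer:ffmpeg:1"], // placeholder ARN
    },
  },
};

module.exports = serverlessConfiguration;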


-
Using ffmpeg to assemble images from S3 into a video
10 July 2020, by Mass Dot Net
I can easily assemble images from local disk into a video using ffmpeg by passing a %06d filespec. Here's what a typical (pseudocode) command looks like:

ffmpeg.exe -hide_banner -y -r 60 -t 12 -i /JpgsToCombine/%06d.JPG <..etc..>



However, I'm struggling to do the same with images stored in AWS S3, without using some third party software to mount a virtual drive (e.g. TNTDrive). The S3 folder containing our images is too large to download to the 20GB ephemeral storage provided for AWS containers, and we're trying to avoid EFS because we'd have to provision expensive bandwidth.


Here's what the HTTP and S3 URLs to each of our JPGs look like:


# HTTP URL
https://massdotnet.s3.amazonaws.com/jpgs-to-combine/000000.JPG # frame 0
https://massdotnet.s3.amazonaws.com/jpgs-to-combine/000012.JPG # frame 12
https://massdotnet.s3.amazonaws.com/jpgs-to-combine/000123.JPG # frame 123
https://massdotnet.s3.amazonaws.com/jpgs-to-combine/456789.JPG # frame 456789

# S3 URL
s3://massdotnet/jpgs-to-combine/000000.JPG # frame 0
s3://massdotnet/jpgs-to-combine/000012.JPG # frame 12
s3://massdotnet/jpgs-to-combine/000123.JPG # frame 123
s3://massdotnet/jpgs-to-combine/456789.JPG # frame 456789



Is there any way to get ffmpeg to assemble these? We could generate a signed URL for each S3 file and put several thousand of those URLs onto a command line with an ffmpeg concat filter, but we'd run into the Linux command-line argument limit at some point with that approach. I'm hoping there's a better way...
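One way to express the signed-URL idea without a huge command line is the concat demuxer's list file. Below is a rough sketch under stated assumptions: the bucket and key names come from the example URLs above, the frame list and helper are hypothetical, and it assumes the @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner packages plus an ffmpeg binary on the path.

import { writeFileSync } from "node:fs";
import { execFileSync } from "node:child_process";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// Hypothetical frame keys; in practice these would be listed from the bucket.
const frameKeys = ["jpgs-to-combine/000000.JPG", "jpgs-to-combine/000012.JPG"];

async function assembleFromS3(outPath: string): Promise<void> {
  const s3 = new S3Client({});

  // Pre-sign a GET URL for every frame so ffmpeg can read them over HTTPS.
  const urls = await Promise.all(
    frameKeys.map((Key) =>
      getSignedUrl(s3, new GetObjectCommand({ Bucket: "massdotnet", Key }), { expiresIn: 3600 })
    )
  );

  // Write a concat-demuxer list file instead of putting every URL on the
  // command line, which sidesteps the argument-length limit.
  const list = urls.map((u) => `file '${u}'\nduration ${(1 / 60).toFixed(6)}`).join("\n");
  writeFileSync("/tmp/frames.txt", list);

  // HTTPS inputs referenced from a local list file need the protocol whitelist,
  // and -safe 0 because the entries are not plain relative file names.
  execFileSync("ffmpeg", [
    "-hide_banner", "-y",
    "-f", "concat", "-safe", "0",
    "-protocol_whitelist", "file,http,https,tcp,tls",
    "-i", "/tmp/frames.txt",
    "-r", "60",
    outPath,
  ]);
}

Whether decoding thousands of JPGs over HTTPS is fast enough is a separate question; the sketch only shows how to hand the URLs to ffmpeg without hitting the argument limit.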