
Other articles (40)
-
Customising categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as a rubrique (section).
For a document of type "category", the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type "media", the fields not displayed by default are: Descriptif rapide (short description)
It is also in this configuration section that one can specify the (...)
-
Support for all media types
10 April 2011
Unlike many modern software packages and document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
-
Supporting all media types
13 April 2011
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
Sur d’autres sites (9574)
-
ffmpeg taking too much time to compress videos in a NestJS project [closed]
30 August 2023, by Sheeraz
I am trying to read a video as a stream from Firebase and compress it using ffmpeg, and it takes so much time for the job to complete that my call breaks before it is done and I get a socket hang-up error. I have tried increasing the call's timeout to 540 s, but that is still not enough for the job to finish. The framework I am using is NestJS with React as the front end. This is a web app, and the feature I am working on is video-recording functionality.


Here is the code I am trying to run. The libraries I am using are "fluent-ffmpeg": "^2.1.2" and "ffmpeg": "^0.0.4".


// Read the source video from Firebase Storage as a stream
const bucket = storage.bucket('my-bucket.appspot.com');
const filePath = 'videos/video.mp4'; // Replace with the full path to your file
const videoFile = bucket.file(filePath);
const readStream = videoFile.createReadStream();

// Compress the stream with ffmpeg and pipe out the result
const ffmpegCommand = ffmpeg();
ffmpegCommand.input(readStream);
ffmpegCommand
  .outputOptions([
    '-preset ultrafast',
    '-c:v libx264',
    '-c:a aac',
    '-vf scale=640:480',
  ])
  .format('wav');
const outputStream = await ffmpegCommand.pipe();



-
FileNotFoundError on AWS Lambda when concatenating videos with ffmpeg
2 July 2021, by Shibu Menon
Goal:



- Concat two videos (both are in an S3 bucket) via AWS Lambda using ffmpeg
- Upload the resulting output.mp4 to another S3 bucket
- Python 3+









I've already created a layer containing a static ffmpeg binary.



The Error:



{
  "errorMessage": "[Errno 2] No such file or directory: '/tmp/output.mp4'",
  "errorType": "FileNotFoundError",
  "stackTrace": [
    [
      "/var/task/lambda_function.py",
      19,
      "lambda_handler",
      "s3.Object(bucketLowRes, mp4OutputFileName).put(Body=open(new_file_key, 'rb'))"
    ]
  ]
}




My Lambda function:



import json
import os
import subprocess
import boto3

s3 = boto3.resource('s3')
bucketLowRes = s3.Bucket("bucket-conc-lowres")

def lambda_handler(event, context):
    # TODO implement

    mp4OutputFileName = 'output.mp4'

    new_file_key = os.path.abspath(os.path.join(os.sep, 'tmp', mp4OutputFileName))
    subprocess.call(['/opt/ffmpeg', '-i', 'concat:s3://bucket-word-clips/00th76kqwfs915hbixycb77y9v3riwsj30.mp4|s3://bucket-word-clips/00uoakp6jyafbu13ycvl6w2i9tj42eux30.mp4', new_file_key])

    s3.Object(bucketLowRes, mp4OutputFileName).put(Body=open(new_file_key, 'rb'))

    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }




Questions:



- FileNotFoundError: where is the output mp4 file from my ffmpeg concat being saved?
- And if it is being saved to /tmp/output.mp4, then why the FileNotFoundError?







Thanks.
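For reference, here is a minimal sketch of how this kind of concat is commonly handled on Lambda. It assumes the likely cause of the error: a stock ffmpeg build has no s3:// protocol handler, so the concat input fails and /tmp/output.mp4 is never written. The clip keys below are placeholders, the bucket names are taken from the question, and the /opt/ffmpeg path assumes the static layer described above; this is not the asker's code, just one common pattern.

import os
import subprocess
import boto3

s3 = boto3.resource('s3')

SOURCE_BUCKET = 'bucket-word-clips'      # from the question
DEST_BUCKET = 'bucket-conc-lowres'       # from the question
CLIP_KEYS = ['clip1.mp4', 'clip2.mp4']   # placeholder object keys

def lambda_handler(event, context):
    # Download both clips to /tmp, since ffmpeg cannot fetch s3:// URLs itself
    local_paths = []
    for key in CLIP_KEYS:
        local_path = os.path.join('/tmp', os.path.basename(key))
        s3.Bucket(SOURCE_BUCKET).download_file(key, local_path)
        local_paths.append(local_path)

    # Write a list file for ffmpeg's concat demuxer
    list_path = '/tmp/concat.txt'
    with open(list_path, 'w') as f:
        for path in local_paths:
            f.write(f"file '{path}'\n")

    # Concatenate without re-encoding and fail loudly if ffmpeg fails,
    # so the output file is only opened when it actually exists
    output_path = '/tmp/output.mp4'
    result = subprocess.run(
        ['/opt/ffmpeg', '-y', '-f', 'concat', '-safe', '0',
         '-i', list_path, '-c', 'copy', output_path],
        capture_output=True)
    if result.returncode != 0:
        raise RuntimeError(result.stderr.decode())

    # Upload using the destination bucket *name* (a string), not a Bucket object
    s3.Object(DEST_BUCKET, 'output.mp4').upload_file(output_path)

    return {'statusCode': 200}

The key point is that /tmp/output.mp4 only exists once ffmpeg has succeeded, so checking the return code before the upload avoids the FileNotFoundError shown above.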


-
Lambda/ffmpeg timelapse generation - output zero bytes, can't debug ffmpeg
25 August 2021, by GoOutside
I am attempting to use an AWS Lambda FFMPEG layer to build a timelapse from static images in an S3 bucket. To begin, I am basing my project on the tutorial located here.


I can replicate the steps in the tutorial, so I know the FFMPEG layer is working in Lambda. I have replicated the FFMPEG commands on a standalone server, so I know they are correct.


Here is my setup: I have two S3 buckets, lambda-source-bucket and lambda-destination-bucket. The contents of lambda-source-bucket are:

1.jpg
2.jpg
3.jpg
4.jpg
5.jpg
6.jpg
7.jpg
files.txt



The files.txt contains this:

file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/1.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/2.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/3.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/4.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/5.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/6.jpg'
file 'https://lambda-source-bucket.s3.us-west-2.amazonaws.com/7.jpg'



This is my Lambda function code (in Python):


import json
import os
import subprocess
import shlex
import boto3

S3_DESTINATION_BUCKET = "lambda-destination-bucket"
SIGNED_URL_TIMEOUT = 60

def lambda_handler(event, context):

    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = "timelapse.mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)

    ffmpeg_cmd = "/opt/bin/ffmpeg -y -r 24 -f concat -safe 0 -protocol_whitelist file,http,tcp,https,tls -I ""https://lambda-source-bucket.s3.us-west-2.amazonaws.com/files.txt"" -c copy -s 1024x576 -vcodec libx264 -"
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)

    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }



The trigger for the Lambda function is when a new files.txt file is added to lambda-source-bucket.

So far I have been able to get the trigger to fire, the function supposedly runs without errors (in CloudWatch), and the function creates a new timelapse.mp4 in the lambda-destination-bucket. But this file is 0 bytes. I see no FFMPEG errors in the CloudWatch console, though I am not sure how to configure my Lambda function code to log FFMPEG errors.
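For what it is worth, a minimal way to surface ffmpeg's own error output is simply to print the captured stderr and return code after the subprocess.run call in the handler above; anything printed from a Lambda handler ends up in the function's CloudWatch log stream. A sketch, reusing the p1 variable from the code above:

# After: p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print("ffmpeg exited with return code", p1.returncode)
print(p1.stderr.decode('utf-8', errors='replace'))

# Optionally stop before uploading an empty object
if p1.returncode != 0 or not p1.stdout:
    raise RuntimeError("ffmpeg produced no output; see the stderr printed above")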

Also: if I'm going about this in a totally wrong way, I'd love to hear feedback. I'm guessing that the concat and files.txt method of looping through https:// URLs is not the most efficient way to do this, but it's the only way I have figured out so far.

Any help is most sincerely and humbly appreciated.