
Media (91)
-
Spoon - Revenge!
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
My Morning Jacket - One Big Holiday
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Zap Mama - Wadidyusay?
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
David Byrne - My Fair Lady
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Beastie Boys - Now Get Busy
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
Other articles (63)
-
Update from version 0.1 to 0.2
24 June 2013. An explanation of the notable changes involved in moving from version 0.1 of MediaSPIP to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
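As a brief illustration (not taken from the MediaSPIP documentation), FFprobe is typically asked for machine-readable metadata along these lines; the file name input.mp4 is only a placeholder:

import json
import subprocess

# Illustrative only: ask ffprobe for JSON-formatted container and stream metadata
out = subprocess.check_output([
    'ffprobe', '-v', 'quiet', '-print_format', 'json',
    '-show_format', '-show_streams', 'input.mp4'
])
metadata = json.loads(out)
print(metadata['format'].get('duration'))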
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources in the standalone version.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
-
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in the standalone version.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (8705)
-
ffmpeg + AWS Lambda issues. Won't compress full video
7 July 2022, by Joesph Stah Lynn. So I followed this tutorial to set everything up and changed the function a bit to compress video, but no matter what I try, on larger videos (basically anything over 50-100MB) the output file is always cut short, and depending on the encoding settings I'm using it is cut by different amounts. I tried the solution found here, adding a -nostdin flag to my ffmpeg command, but that didn't seem to fix the issue either.

Another odd thing: no matter what I try, if I remove the '-f mpegts' flag, the output video is 0 bytes.

My Lambda function is set up with 3008 MB of memory (I submitted a ticket to get my limit raised so I can use the full 10240 MB available) and 2048 MB of ephemeral storage (I'm honestly not sure I need anything more than the minimum 512 MB, but I increased it to try to fix the issue). When I check my CloudWatch logs, really large files occasionally time out, but other than that I get no error messages, just the standard start, end, and billable-time messages.

This is the code for my lambda function.


import json
import os
import subprocess
import shlex
import boto3

S3_DESTINATION_BUCKET = "rw-video-out"
SIGNED_URL_TIMEOUT = 600

def lambda_handler(event, context):

    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + "-comp.mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)

    ffmpeg_cmd = f"/opt/bin/ffmpeg -nostdin -i {s3_source_signed_url} -f mpegts libx264 -preset fast -crf 28 -c:a copy - "
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    s3.Object(s3_source_bucket, s3_source_key).delete()

    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }
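An aside, not part of the original post: since ffmpeg's output is piped through stdout here, it can help to log the return code and stderr of the completed process so an encoder failure shows up in CloudWatch rather than silently producing a short upload. A minimal, illustrative addition right after the subprocess.run call (p1 is the variable from the snippet above) could be:

# Hypothetical diagnostics; p1 is the CompletedProcess returned by subprocess.run above
if p1.returncode != 0:
    print("ffmpeg exited with code", p1.returncode)
    print(p1.stderr.decode("utf-8", errors="replace")[-2000:])  # tail of the ffmpeg log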



This is the code from the solution I mentioned, but when I try using it I get 'output.mp4 not found' errors:


def lambda_handler(event, context):
    print(event)
    os.chdir('/tmp')
    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + ".mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)
    print(s3_source_signed_url)
    s3_client.download_file(s3_source_bucket, s3_source_key, s3_source_key)
    # ffmpeg_cmd = "/opt/bin/ffmpeg -framerate 25 -i \"" + s3_source_signed_url + "\" output.mp4 "
    ffmpeg_cmd = f"/opt/bin/ffmpeg -framerate 25 -i {s3_source_key} output.mp4 "
    # command1 = shlex.split(ffmpeg_cmd)
    # print(command1)
    os.system(ffmpeg_cmd)
    # os.system('ls')
    # p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    file = 'output.mp4'
    resp = s3_client.put_object(Body=open(file, "rb"), Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    # resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    s3.Object(s3_source_bucket, s3_source_key).delete()
    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }



Any help would be greatly appreciated.
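For reference only, a hedged sketch of the overall pattern the post is circling around (download the object to /tmp, have ffmpeg write to an explicit /tmp path, check the result, then upload the finished file). The destination bucket name is carried over from the snippets above, while the helper name, ffmpeg options, and file names are illustrative, not the poster's:

import os
import subprocess
import boto3

S3_DESTINATION_BUCKET = "rw-video-out"  # from the snippets above

def compress_to_tmp(bucket, key):
    # Illustrative sketch: work entirely under /tmp instead of piping stdout
    s3 = boto3.client('s3')
    src = os.path.join('/tmp', os.path.basename(key))
    dst = os.path.join('/tmp', 'compressed.mp4')
    s3.download_file(bucket, key, src)
    result = subprocess.run(
        ['/opt/bin/ffmpeg', '-nostdin', '-y', '-i', src,
         '-c:v', 'libx264', '-preset', 'fast', '-crf', '28', '-c:a', 'copy', dst],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if result.returncode != 0:
        # Surface ffmpeg's own error output instead of uploading a partial file
        raise RuntimeError(result.stderr.decode('utf-8', errors='replace'))
    s3.upload_file(dst, S3_DESTINATION_BUCKET,
                   os.path.splitext(os.path.basename(key))[0] + '-comp.mp4')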


-
AWS Lambda subprocess OSError: [Errno 2] No such file or directory
11 September 2016, by Lev. I'm trying to create a Lambda function that makes a collection of thumbnails from a video on Amazon S3 using ffmpeg. The ffmpeg binary is included in the function package.
Function code:
# -*- coding: utf-8 -*-
import stat
import shutil
import boto3
import logging
import subprocess as sp
import os
import threading

thumbnail_prefix = 'thumb_'
thumbnail_ext = '.jpg'
time_delta = 1
video_frames_path = 'media/videos/frames'

print('Loading function')
logger = logging.getLogger()
logger.setLevel(logging.INFO)

lambda_tmp_dir = '/tmp'  # Lambda function can use this directory.

# ffmpeg is stored with this script.
# When executing ffmpeg, execute permission is required.
# But the Lambda source directory does not allow changing it.
# So move the ffmpeg binary to `/tmp` and add the permission there.
ffmpeg_bin = "{0}/ffmpeg.linux64".format(lambda_tmp_dir)
shutil.copyfile('/var/task/ffmpeg.linux64', ffmpeg_bin)
os.chmod(ffmpeg_bin, 777)
# tried also:
# os.chmod(ffmpeg_bin, os.stat(ffmpeg_bin).st_mode | stat.S_IEXEC)

s3 = boto3.client('s3')


def get_thumb_filename(num):
    return '{prefix}{num:03d}{ext}'.format(prefix=thumbnail_prefix, num=num, ext=thumbnail_ext)


def create_thumbnails(video_url):
    i = 1
    filenames_list = []
    filename = None
    while i == 1 or os.path.isfile(os.path.join(os.getcwd(), get_thumb_filename(i-1))):
        if filename:
            filenames_list.append(filename)
        time = time_delta * (i - 1)
        filename = get_thumb_filename(i)
        print(ffmpeg_bin)
        if os.path.isfile(ffmpeg_bin):
            print('ok')
        sp.call(['sudo',
                 ffmpeg_bin,
                 '-ss',
                 str(time),
                 '-i',
                 video_url,
                 '-frames:v',
                 '1',
                 get_thumb_filename(i)])
        i += 1
    print(filenames_list)
    return filenames_list


def s3_upload_file(file_path, key, bucket, acl, content_type):
    file = open(file_path, 'r')
    s3.put_object(
        Bucket=bucket,
        ACL=acl,
        Body=file,
        Key=key,
        ContentType=content_type
    )
    logger.info("file {0} moved to {1}/{2}".format(file_path, bucket, key))


def s3_upload_files_in_threads(filenames_list, dir_path, bucket, s3path, acl, content_type):
    for filename in filenames_list:
        if os.path.isfile(os.path.join(dir_path, filename)):
            print(os.path.join(dir_path, filename))
            t = threading.Thread(target=s3_upload_file,
                                 args=(os.path.join(dir_path, filename),
                                       '{0}/{1}'.format(s3path, filename),
                                       bucket,
                                       acl,
                                       content_type)).start()


def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    video_key = event['Records'][0]['s3']['object']['key']
    video_name = video_key.split('/')[-1].split('.')[0]
    video_url = 'http://{0}/{1}'.format(bucket, video_key)
    filenames_list = create_thumbnails(video_url)
    s3_upload_files_in_threads(filenames_list,
                               os.getcwd(),
                               bucket,
                               '{0}/{1}'.format(video_frames_path, video_name),
                               'public-read',
                               'image/jpeg')
    return

During the execution I get the following logs:
Loading function
/tmp/ffmpeg.linux64
ok
[Errno 2] No such file or directory: OSError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 112, in lambda_handler
    filenames_list = create_thumbnails(video_url)
  File "/var/task/lambda_function.py", line 77, in create_thumbnails
    get_thumb_filename(i)])
  File "/usr/lib64/python2.7/subprocess.py", line 522, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib64/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/subprocess.py", line 1335, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

When I use the same sp.call() with the same ffmpeg binary on my EC2 instance, it works fine.
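A side observation, not from the original post: the Lambda execution environment does not normally provide a sudo binary, and subprocess raises OSError [Errno 2] when the program it is told to run cannot be found, so the 'sudo' prefix alone can produce exactly this traceback. A hedged sketch of calling the copied binary directly, with an octal permission mask, could look like this (ffmpeg_bin, time, video_url, and get_thumb_filename are the names from the snippet above):

# Illustrative only: os.chmod expects an octal mode such as 0o755, not decimal 777
os.chmod(ffmpeg_bin, 0o755)

# Invoke the binary itself instead of prefixing the command with 'sudo'
sp.call([ffmpeg_bin,
         '-ss', str(time),
         '-i', video_url,
         '-frames:v', '1',
         get_thumb_filename(i)])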
-
How to use (django-celery,RQ) worker to execute a video filetype conversion (ffmpeg) in django on heroku (My code works locally)
15 January 2013, by GetItDone. One part of my website includes a form that allows users to upload video. I use ffmpeg to convert the video to FLV. My media and static files are stored on Amazon S3. I can get everything to work perfectly locally, but I can't figure out how to use a worker to run the video-conversion subprocess in production. I have dj-celery and rq installed in my app. The code in my view that I was able to get working locally is:
#views.py
def upload_broadcast(request):
    if request.method == 'POST':
        form = VideoUploadForm(request.POST, request.FILES)
        if form.is_valid():
            new_video = form.save()

            def convert_to_flv(video):
                filename = video.video_upload
                sourcefile = "%s%s" % (settings.MEDIA_ROOT, filename)
                flvfilename = "%s.flv" % video.id
                imagefilename = "%s.png" % video.id
                thumbnailfilename = "%svideos/flv/%s" % (settings.MEDIA_ROOT, imagefilename)
                targetfile = "%svideos/flv/%s" % (settings.MEDIA_ROOT, flvfilename)
                ffmpeg = "ffmpeg -i %s -acodec mp3 -ar 22050 -f flv -s 320x240 %s" % (sourcefile, targetfile)
                grabimage = "ffmpeg -y -i %s -vframes 1 -ss 00:00:02 -an -vcodec png -f rawvideo -s 320x240 %s" % (sourcefile, thumbnailfilename)
                print ("SOURCE: %s" % sourcefile)
                print ("TARGET: %s" % targetfile)
                print ("TARGET IMAGE: %s" % thumbnailfilename)
                print ("FFMPEG TASK CODE: %s" % ffmpeg)
                print ("IMAGE TASK CODE: %s" % grabimage)
                try:
                    ffmpegresult = subprocess.call(ffmpeg)
                    print "---------------FFMPEG---------------"
                    print ffmpegresult
                except:
                    print "Not working."
                try:
                    videothumbnail = subprocess.call(grabimage)
                    print "---------------IMAGE---------------"
                    print videothumbnail
                except:
                    print "Not working."
                video.flvfilename = flvfilename
                video.videothumbnail = imagefilename
                video.save()

            convert_to_flv(new_video)
            return HttpResponseRedirect('/video_list/')
    else:
        ...

This is my first time trying to use a worker (or ever pushing a project to production), so even with the documentation it is still unclear to me what I need to do. I have tried several different things but nothing seems to work. Is there just a simple way to tell celery to run ffmpegresult = subprocess.call(ffmpeg)? Thanks in advance for any help or insight.
EDIT: Added Heroku logs
2013-01-10T20:58:57+00:00 app[web.1]: TARGET: /media/videos/flv/8.flv
2013-01-10T20:58:57+00:00 app[web.1]: IMAGE TASK CODE: ffmpeg -y -i /media/videos/practice.wmv -vframes 1 -ss 00:00:02 -an -vcodec png -f rawvideo -s 320x240 /media/videos/flv/8.png
2013-01-10T20:58:57+00:00 app[web.1]: SOURCE: /media/videos/practice.wmv
2013-01-10T20:58:57+00:00 app[web.1]: FFMPEG TASK CODE: ffmpeg -i /media/videos/practice.wmv -acodec mp3 -ar 22050 -f flv -s 320x240 /media/videos/flv/8.flv
2013-01-10T20:58:57+00:00 app[web.1]: TARGET IMAGE: /media/videos/flv/8.png
2013-01-10T20:58:57+00:00 app[web.1]: Not working.
2013-01-10T20:58:57+00:00 app[web.1]: Not working.

NEWER EDIT
I tried adding a tasks.py and added the task:

celery = Celery('tasks', broker='redis://guest@localhost//')

@celery.task
def ffmpeg_task(video):
    converted_file = subprocess.call(video)
    return converted_file

Then I changed the relevant section of my view to:
...
try:
    ffmpeg_task.delay(ffmpeg)
    print "---------------FFMPEG---------------"
    print ffmpegresult
except:
    print "Not working."
...

My new logs are:
2013-01-15T13:19:52+00:00 app[web.1]: TARGET IMAGE: /media/videos/flv/12.png
2013-01-15T13:19:52+00:00 app[web.1]: SOURCE: /media/videos/practice.wmv
2013-01-15T13:19:52+00:00 app[web.1]: FFMPEG TASK CODE: ffmpeg -i /media/videos/practice.wmv -acodec mp3 -ar 22050 -f flv -s 320x240 /media/videos/flv/12.flv
2013-01-15T13:19:52+00:00 app[web.1]: IMAGE TASK CODE: ffmpeg -y -i /media/videos/practice.wmv -vframes 1 -ss 00:00:02 -an -vcodec png -f rawvideo -s 320x240 /media/videos/flv/12.png
2013-01-15T13:19:52+00:00 app[web.1]: TARGET: /media/videos/flv/12.flv
2013-01-15T13:20:17+00:00 app[web.1]: 2013-01-15 13:20:17 [2] [CRITICAL] WORKER TIMEOUT (pid:12)
2013-01-15T13:20:17+00:00 app[web.1]: 2013-01-15 13:20:17 [2] [CRITICAL] WORKER TIMEOUT (pid:12)
2013-01-15T13:20:17+00:00 app[web.1]: 2013-01-15 13:20:17 [19] [INFO] Booting worker with pid: 19

Am I completely missing something? I'll keep trying, but will be very appreciative of any direction or assistance.
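A closing aside, not part of the original question: on Linux, subprocess.call executes a plain string as a single program name unless shell=True is given, so passing the whole ffmpeg command as one string raises an exception that the bare except turns into the "Not working." lines above. Splitting the string into an argument list and keeping the conversion inside the Celery task (so the web process only enqueues work) is a common pattern. A minimal, illustrative tasks.py along those lines, reusing the ffmpeg_task name from the snippets above, might be:

# tasks.py -- hedged sketch, not the poster's final code
import shlex
import subprocess

from celery import Celery

celery = Celery('tasks', broker='redis://guest@localhost//')

@celery.task
def ffmpeg_task(command_string):
    # Split the shell-style command into an argument list so subprocess
    # does not try to execute the entire string as one program name.
    args = shlex.split(command_string)
    return subprocess.call(args)

The view would then only call ffmpeg_task.delay(ffmpeg) and return, leaving the conversion to whichever worker process consumes the queue.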