
Media (91)

Other articles (29)

  • Other interesting software

    13 April 2011, by

    We don’t claim to be the only ones doing what we do, and certainly not the best at it; we simply try to do it well and to keep improving.
    The following list covers software that is more or less similar to MediaSPIP, or whose aims MediaSPIP more or less shares.
    We don’t know these projects and haven’t tried them, but you can take a look.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Automatic MediaSPIP installation script

    25 April 2011, by

    To work around installation difficulties, mainly due to server-side software dependencies, an "all in one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which is required to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

On other sites (7212)

  • ffmpeg + AWS Lambda issues. Won't compress full video

    7 July 2022, by Joesph Stah Lynn

    So I followed this tutorial to set everything up and changed the function a bit to compress video. But no matter what I try, on larger videos (basically anything over 50-100MB) the output file is always cut short, and depending on the encoding settings I'm using it is cut by different amounts. I tried the solution found here, adding a -nostdin flag to my ffmpeg command, but that didn't fix the issue either.
    
    Another odd thing: no matter what I try, if I remove the '-f mpegts' flag, the output video is 0B.
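
    A plausible explanation for the 0-byte case: ffmpeg's MP4 muxer needs a seekable output to finalize the file, so piping a plain MP4 to stdout usually yields nothing, while MPEG-TS is designed for streaming. For illustration only, a fragmented-MP4 variant of the command used in the handler below would keep the pipe while staying in an MP4 container:

# Hypothetical variant of the ffmpeg_cmd from the handler below: fragmented MP4
# ('-movflags frag_keyframe+empty_moov') can be written to a non-seekable pipe.
ffmpeg_cmd = (
    f"/opt/bin/ffmpeg -nostdin -i {s3_source_signed_url} "
    f"-c:v libx264 -preset fast -crf 28 -c:a copy "
    f"-movflags frag_keyframe+empty_moov -f mp4 -"
)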
    
    My Lambda function is set up with 3008MB of memory (I submitted a ticket to get my limit raised so I can use the full 10240MB available) and 2048MB of ephemeral storage (I'm honestly not sure I need more than the minimum 512MB, but I upped it to try to fix the issue). When I check my CloudWatch logs, on really large files it will occasionally time out, but other than that I get no error messages, just the standard start, end, and billable-time messages.

    


    This is the code for my lambda function.

    


import json
import os
import subprocess
import shlex
import boto3

S3_DESTINATION_BUCKET = "rw-video-out"
SIGNED_URL_TIMEOUT = 600

def lambda_handler(event, context):

    # Bucket and key of the uploaded source video (from the S3 trigger event)
    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + "-comp.mp4"

    s3_client = boto3.client('s3')
    # Presigned URL so ffmpeg can read the source directly from S3
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)

    # NOTE: 'libx264' appears here without a preceding '-c:v' flag; the intended
    # command was presumably '... -f mpegts -c:v libx264 ...'
    ffmpeg_cmd = f"/opt/bin/ffmpeg -nostdin -i {s3_source_signed_url} -f mpegts libx264 -preset fast -crf 28 -c:a copy - "
    command1 = shlex.split(ffmpeg_cmd)
    # stdout=PIPE buffers the entire transcoded stream in memory, and neither
    # p1.returncode nor p1.stderr is checked afterwards
    p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    s3.Object(s3_source_bucket,s3_source_key).delete()

    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }


    


    This is the code from the solution I mentioned, but when I try using it, I get "output.mp4 not found" errors.

    


# (Uses the same imports and constants as the function above.)
def lambda_handler(event, context):
    print(event)
    os.chdir('/tmp')
    s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
    s3_source_key = event['Records'][0]['s3']['object']['key']

    s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
    s3_destination_filename = s3_source_basename + ".mp4"

    s3_client = boto3.client('s3')
    s3_source_signed_url = s3_client.generate_presigned_url('get_object',
        Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
        ExpiresIn=SIGNED_URL_TIMEOUT)
    print(s3_source_signed_url)
    # Download the source object into /tmp (the current directory after os.chdir)
    s3_client.download_file(s3_source_bucket,s3_source_key,s3_source_key)
    # ffmpeg_cmd = "/opt/bin/ffmpeg -framerate 25 -i \"" + s3_source_signed_url + "\" output.mp4 "
    ffmpeg_cmd = f"/opt/bin/ffmpeg -framerate 25 -i {s3_source_key} output.mp4 "
    # command1 = shlex.split(ffmpeg_cmd)
    # print(command1)
    # os.system() discards failures silently; if ffmpeg errors out (or is not
    # present at /opt/bin/ffmpeg), output.mp4 is never created and the open()
    # below raises the "output.mp4 not found" error mentioned above
    os.system(ffmpeg_cmd)
    # os.system('ls')
    # p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    file = 'output.mp4'
    resp = s3_client.put_object(Body=open(file,"rb"), Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    # resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
    s3 = boto3.resource('s3')
    s3.Object(s3_source_bucket,s3_source_key).delete()
    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete successfully')
    }


    


    Any help would be greatly appreciated.
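
    For reference, a minimal sketch of the file-based variant that avoids buffering the whole transcode through a pipe: download the object to /tmp, let ffmpeg write to a local file, check its exit status, then upload the result. It assumes, as in the question, that ffmpeg is provided by a layer at /opt/bin/ffmpeg; note that /tmp is capped by the ephemeral-storage setting (512MB by default), so that becomes the size limit instead of function memory.

import os
import subprocess
import boto3

S3_DESTINATION_BUCKET = "rw-video-out"

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    # Local paths inside Lambda's writable /tmp
    src = os.path.join('/tmp', os.path.basename(key))
    dst = os.path.join('/tmp', os.path.splitext(os.path.basename(key))[0] + '-comp.mp4')

    s3.download_file(bucket, key, src)
    result = subprocess.run(
        ['/opt/bin/ffmpeg', '-nostdin', '-y', '-i', src,
         '-c:v', 'libx264', '-preset', 'fast', '-crf', '28', '-c:a', 'copy', dst],
        capture_output=True)
    # Surface ffmpeg errors instead of silently uploading a truncated or empty file
    if result.returncode != 0:
        raise RuntimeError(result.stderr.decode(errors='replace')[-1000:])

    s3.upload_file(dst, S3_DESTINATION_BUCKET, os.path.basename(dst))
    return {'statusCode': 200, 'body': 'Processing completed successfully'}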

    


  • install ffmpeg on amazon ecr linux python

    27 May 2024, by Luka Savic

    I'm trying to install ffmpeg in a Docker image for an AWS Lambda function.
    The Dockerfile is:

    


    FROM public.ecr.aws/lambda/python:3.8

# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}

# Install the function's dependencies using file requirements.txt
# from your project folder.

COPY requirements.txt  .
RUN  yum install gcc -y
RUN  pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
RUN  yum install -y ffmpeg

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]


    


    I am getting an error:

    


     > [6/6] RUN  yum install -y ffmpeg:
#9 0.538 Loaded plugins: ovl
#9 1.814 No package ffmpeg available.
#9 1.843 Error: Nothing to do
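
    For context: the lambda/python:3.8 base image is built on Amazon Linux 2, whose default yum repositories do not include an ffmpeg package, which is why yum reports "Nothing to do". A common workaround is to copy a static ffmpeg build into the image instead of installing it with yum; a rough sketch (the download URL is an example and should be verified before use):

# Replace "RUN yum install -y ffmpeg" with a static build, for example:
RUN yum install -y curl tar xz && \
    curl -L https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz \
        -o /tmp/ffmpeg.tar.xz && \
    tar -xf /tmp/ffmpeg.tar.xz -C /tmp && \
    cp /tmp/ffmpeg-*-amd64-static/ffmpeg /usr/local/bin/ffmpeg && \
    rm -rf /tmp/ffmpeg*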


    


  • Processing video frame by frame in AWS Lambda with Node.js and FFmpeg [closed]

    29 December 2023, by Aviato

    I am working on a project where I need to process video frames one at a time in an AWS Lambda function using Node.js. My goal is to avoid storing all frames in memory or the filesystem due to resource constraints. I plan to use the fluent-ffmpeg library or ffmpeg from child processes for video processing.

    


    In the past, I used OpenCV to process videos and frames without writing the frames to disk or holding them all in memory at once. But now that I am using Node.js, it's a little harder to set up the equivalent with ffmpeg, etc.

    


    Here is a small snippet of what I did with OpenCV:

    


import cv2

video_file = 'input.mp4'  # example path
cap = cv2.VideoCapture(video_file)

# Match the writer to the source's frame rate and size
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('output.mp4', fourcc, fps, (width, height))

def generate_frame():
    while cap.isOpened():
        code, frame = cap.read()
        if code:
            yield frame
        else:
            print("completed")
            break

for i, frame in enumerate(generate_frame()):
    # Process each frame here, then write it to the output video
    edited_frame = frame  # placeholder for the actual editing step
    out.write(edited_frame)

out.release()
cap.release()


    


    Additionally, I intend to leverage image processing libraries like Sharp and the Canvas API to edit individual frames before assembling the final video. I am looking for help in handling video frames efficiently within the constraints of AWS Lambda.

    


    Any insights, code snippets, or recommendations would be greatly appreciated. Thank you!
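
    For what it's worth, the pipe-based pattern described above can be sketched without fluent-ffmpeg: one ffmpeg process decodes to raw frames on stdout, the function reads one frame-sized chunk at a time, and the edited frames are fed into a second ffmpeg process for encoding. The sketch below uses Python to mirror the OpenCV snippet; the same idea applies in Node.js with child_process.spawn. The input path, dimensions, and frame rate are placeholders that would normally come from ffprobe.

import subprocess

# Placeholder values; in practice probe them with ffprobe
VIDEO_IN, VIDEO_OUT = 'input.mp4', 'output.mp4'
WIDTH, HEIGHT, FPS = 1280, 720, 25
FRAME_BYTES = WIDTH * HEIGHT * 3  # one rgb24 frame

# Decoder: emits raw RGB frames on stdout
dec = subprocess.Popen(
    ['ffmpeg', '-i', VIDEO_IN, '-f', 'rawvideo', '-pix_fmt', 'rgb24', '-'],
    stdout=subprocess.PIPE)

# Encoder: consumes raw RGB frames on stdin and writes the final video
enc = subprocess.Popen(
    ['ffmpeg', '-y', '-f', 'rawvideo', '-pix_fmt', 'rgb24',
     '-s', f'{WIDTH}x{HEIGHT}', '-r', str(FPS), '-i', '-',
     '-c:v', 'libx264', VIDEO_OUT],
    stdin=subprocess.PIPE)

while True:
    frame = dec.stdout.read(FRAME_BYTES)   # only one frame held in memory
    if len(frame) < FRAME_BYTES:
        break
    # ... edit the frame bytes here before re-encoding ...
    enc.stdin.write(frame)

enc.stdin.close()
dec.stdout.close()
dec.wait()
enc.wait()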