
Media (1)

Keyword : - Tags -/Rennes

Other articles (53)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
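
    As a rough illustration only (MediaSPIP performs these conversions automatically ; the commands and file names below are hypothetical, not taken from MediaSPIP's code), the encodings described above correspond to ffmpeg invocations along these lines :

    ffmpeg -i source.mov -c:v libvpx -c:a libvorbis video.webm     # WebM for HTML5 playback
    ffmpeg -i source.mov -c:v libtheora -c:a libvorbis video.ogv   # OGV for HTML5 playback
    ffmpeg -i source.mov -c:v libx264 -c:a aac video.mp4           # MP4 for Flash playback
    ffmpeg -i source.wav -c:a libmp3lame audio.mp3                 # MP3 audio
    ffmpeg -i source.wav -c:a libvorbis audio.ogg                  # Ogg audio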

Sur d’autres sites (13209)

  • How do I install ffmpeg on one EC2 Amazon Linux instance that can stream a mp4 ? [closed]

    12 September 2020, by starpebble

    Good day. How can I install ffmpeg on an EC2 Amazon Linux machine so that it can stream an MP4 ?

    The goal : an ffmpeg install on EC2 Amazon Linux that can stream one mp4 to one rtmps endpoint. Then, create an integration test suite with it.

    Is it just me or is ffmpeg a little crippled on EC2 Amazon Linux ?

    Example :

    ffmpeg -re -i input.mp4 -c:v libx264 -b:v 6000K -maxrate 6000K -pix_fmt yuv420p -s 1920x1080 -profile:v main -preset veryfast -g 120 -x264opts "nal-hrd=cbr:no-scenecut" -acodec aac -ab 160k -ar 44100 -f flv rtmps:///app/

    Linux OS :

    Linux version 4.14.193-113.317.amzn1.x86_64 (mockbuild@koji-pdx-corp-builder-60005) (gcc version 7.2.1 20170915 (Red Hat 7.2.1-2) (GCC)) #1 SMP Thu Sep 3 19:08:08 UTC 2020

    The Stack Overflow answers to similar questions fail to install an ffmpeg that can stream.

    An installation script such as Install FFMPEG Library on EC2 Server fails this year.

    The static downloads referenced on John Van Sickle's FFmpeg Static Builds page fail to stream to IVS. I tried the i686 release, my first guess for an x86_64 instance.

    A binary compiled from the git source tree also fails to stream. Example : the tip of the tree isn't what I expected, because the binary fails to recognize switches like -preset.

    I'd love to be able to explain streaming to anyone. Thanks.
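
    One approach that often comes up (a sketch under assumptions, not a verified answer for this exact AMI) is to skip yum and the i686 build and instead install the x86_64 static build, since the instance is x86_64, then re-run the stream command above :

    # Sketch : install a static x86_64 ffmpeg build on Amazon Linux
    # (release URL as published on the John Van Sickle static-builds page referenced above)
    curl -LO https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz
    tar -xf ffmpeg-release-amd64-static.tar.xz
    sudo cp ffmpeg-*-amd64-static/ffmpeg ffmpeg-*-amd64-static/ffprobe /usr/local/bin/
    ffmpeg -version    # check that the build lists libx264 and a TLS-capable protocol (needed for rtmps)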

  • Best practices for developing scalable video transcoding server on Amazon Web Services ?

    6 September 2016, by undefined

    What do people think are the most important issues when developing an application that will allow users to upload video and images to a server, have them transcoded by FFmpeg and stored in Amazon S3 ? I have a couple of options :

    1) Install FFmpeg on the same server that handles file uploads : when a video is uploaded and stored on the EC2 instance, call FFmpeg to convert it, then write the converted file to an S3 bucket and dispose of the original.

    How scalable is this ? What happens when many users upload at the same time ? How do I manage multiple processes at once ? How do I know when to start another instance and load balance this configuration ?

    2) Have one server for processing uploads (updating the database, renaming files, etc.) and one server for transcoding. Again, what is the best way to manage multiple processes ? Should I be looking at Amazon SQS for this ? Can I tell the transcoding server to get the file from the upload server, or should I copy the file to the transcoding server ? Should I just store all files on S3 so the transcoding server can read them from there ? I am trying to generate as little traffic as possible.

    I am running a Linux box as the upload server and have FFmpeg running on it.

    Any advice on best practices for setting up such a configuration would be appreciated. Many thanks
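
    For what it's worth, a common shape for option 2 (only a sketch ; the queue URL, bucket names and codec settings below are hypothetical) is to have the upload server put the original in S3 and push the object key to an SQS queue, while each transcoding instance runs a worker loop like this with the AWS CLI :

    # Hypothetical transcoding worker : poll SQS, fetch from S3, transcode, upload, delete the message
    QUEUE_URL="https://sqs.us-east-1.amazonaws.com/123456789012/transcode-jobs"
    while true; do
      MSG=$(aws sqs receive-message --queue-url "$QUEUE_URL" --wait-time-seconds 20 --max-number-of-messages 1)
      KEY=$(echo "$MSG" | jq -r '.Messages[0].Body // empty')
      RECEIPT=$(echo "$MSG" | jq -r '.Messages[0].ReceiptHandle // empty')
      [ -z "$KEY" ] && continue
      aws s3 cp "s3://uploads-bucket/$KEY" /tmp/input            # read the original from S3, not from the upload server
      ffmpeg -y -i /tmp/input -c:v libx264 -c:a aac /tmp/output.mp4
      aws s3 cp /tmp/output.mp4 "s3://transcoded-bucket/$KEY.mp4"
      aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$RECEIPT"
    done

    Scaling then becomes a matter of watching the queue depth (for example the ApproximateNumberOfMessagesVisible metric) and starting or stopping transcoding instances accordingly, so the upload server never talks to the transcoders directly.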

  • install ffmpeg on amazon ecr linux python

    27 May 2024, by Luka Savic

    I'm trying to install ffmpeg in Docker for an AWS Lambda function. The code for the Dockerfile is :

    FROM public.ecr.aws/lambda/python:3.8

# Copy function code
COPY app.py ${LAMBDA_TASK_ROOT}

# Install the function's dependencies using file requirements.txt
# from your project folder.

COPY requirements.txt  .
RUN  yum install gcc -y
RUN  pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
RUN  yum install -y ffmpeg

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]

    I am getting an error :

     > [6/6] RUN  yum install -y ffmpeg:
#9 0.538 Loaded plugins: ovl
#9 1.814 No package ffmpeg available.
#9 1.843 Error: Nothing to do
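
    The lambda/python:3.8 base image is built on Amazon Linux, whose default yum repositories do not contain an ffmpeg package, which is why yum reports "No package ffmpeg available". One workaround (a sketch only ; the static-build URL is the John Van Sickle release mentioned elsewhere on this page, and the paths are assumptions) is to drop the yum line and copy a static binary into the image instead :

    # Sketch : replace "RUN  yum install -y ffmpeg" with a static build
    RUN yum install -y tar xz && \
        curl -L -o /tmp/ffmpeg.tar.xz https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz && \
        tar -xf /tmp/ffmpeg.tar.xz -C /tmp && \
        cp /tmp/ffmpeg-*-amd64-static/ffmpeg /usr/local/bin/ffmpeg && \
        rm -rf /tmp/ffmpeg*

    Another option is to build ffmpeg from source or enable a third-party repository that ships it ; either way, the binary has to come from outside the image's default repositories.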