

Keyword: - Tags -/book

Other articles (93)

  • Mediabox: opening images in the maximum space available to the user

    8 February 2011, by

    The display of images is constrained by the width allotted by the site's design (which depends on the theme in use), so they are shown at a reduced size. To take advantage of all the space available on the user's screen, a feature can be added that displays the image in a media box appearing on top of the rest of the content.
    To do this, the "Mediabox" plugin must be installed.
    Configuring the media box
    As soon as (...)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their own information on the authors page

On other sites (11566)

  • FFMPEG on Heroku exceeds memory quota in testing

    5 July 2022, by Patrick Vellia

    After following this tutorial and getting it to work locally in my own development environment, I decided to push it up to Heroku to test in a staging environment before really getting my hands dirty and digging deeper into my own project implementation.

    I had to have Heroku add the FFMPEG buildpack and turn on the Redis server for ActionCable to work.

    I didn't link staging to a cloud storage bucket on Google or Amazon yet; I just let it upload directly to the dyno's disk for testing, so for now files go into the storage directory as they would in development.

    The test MOV file is 186 MB in size.

    The system uploaded the file fine.

    According to the logs, it then copied the file from storage to tmp as the tutorial has us do.

    Then it called streamio-ffmpeg's transcode method.

    At this point, Heroku forcibly kills the dyno because it far exceeds the memory quota.

    As this is a test environment, it's only on the free tier of Heroku.

    I'm thinking I won't be able to process video directly on Heroku itself, unless I'm wrong? Would it be better to call an API like Cloud Functions or AWS Lambda, or spin up a Compute Engine instance just long enough to run the FFMPEG command?
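
    On that last point, a minimal sketch of the hand-off idea, assuming a separate worker (here a hypothetical AWS Lambda function named "transcode-worker") does the actual FFMPEG work so the dyno never holds the video in memory:

        import json

        import boto3

        lambda_client = boto3.client('lambda')

        def offload_transcode(bucket, key):
            """Fire-and-forget invocation of a transcoding worker.

            The function name and payload shape are hypothetical placeholders,
            not part of the original setup."""
            lambda_client.invoke(
                FunctionName='transcode-worker',   # hypothetical worker function
                InvocationType='Event',            # async: do not block the web dyno
                Payload=json.dumps({'bucket': bucket, 'key': key}).encode('utf-8'),
            )
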
  • lavc/aarch64: motion estimation functions in neon

    26 June 2022, by Swinney, Jonathan
    lavc/aarch64: motion estimation functions in neon
    

    - ff_pix_abs16_neon
    - ff_pix_abs16_xy2_neon

    In direct microbenchmarks of these functions versus their C implementations,
    they performed as follows on AWS Graviton 3.

    ff_pix_abs16_neon:
    pix_abs_0_0_c: 141.1
    pix_abs_0_0_neon: 19.6

    ff_pix_abs16_xy2_neon:
    pix_abs_0_3_c: 269.1
    pix_abs_0_3_neon: 39.3

    Tested with:
    ./tests/checkasm/checkasm --test=motion --bench --disable-linux-perf

    Signed-off-by: Jonathan Swinney <jswinney@amazon.com>
    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DH] libavcodec/aarch64/Makefile
    • [DH] libavcodec/aarch64/me_cmp_init_aarch64.c
    • [DH] libavcodec/aarch64/me_cmp_neon.S
    • [DH] libavcodec/me_cmp.c
    • [DH] libavcodec/me_cmp.h
    • [DH] tests/checkasm/Makefile
    • [DH] tests/checkasm/checkasm.c
    • [DH] tests/checkasm/checkasm.h
    • [DH] tests/checkasm/motion.c
    • [DH] tests/fate/checkasm.mak
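
    For context on what these functions compute: pix_abs16 is FFmpeg's sum-of-absolute-differences (SAD) motion-estimation metric over a 16-pixel-wide block, and the xy2 variant compares against a half-pixel interpolation of the reference. A rough NumPy restatement of the plain metric, offered as a sketch rather than the project's code:

        import numpy as np

        def pix_abs16(cur, ref, h):
            # SAD over a 16-pixel-wide, h-row block; the real C/NEON code walks
            # strided uint8 planes, but the arithmetic is the same.
            c = cur[:h, :16].astype(np.int32)
            r = ref[:h, :16].astype(np.int32)
            return int(np.abs(c - r).sum())
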
  • ffmpeg + AWS Lambda issues. Won't compress full video

    7 July 2022, by Joesph Stah Lynn

    So I followed this tutorial to set everything up and changed the function a bit to compress video, but no matter what I try, on larger videos (basically anything over 50-100 MB) the output file is always cut short, and depending on the encoding settings I'm using it is cut by different amounts. I tried the solution found here, adding a -nostdin flag to my ffmpeg command, but that didn't fix the issue either.
    Another odd thing: no matter what I try, if I remove the '-f mpegts' flag, the output video is 0 B.
    My Lambda function is set up with 3008 MB of memory (I submitted a ticket to get my limit raised so I can use the full 10240 MB available) and 2048 MB of ephemeral storage (I honestly am not sure I need anything more than the minimum 512 MB, but I upped it to try to fix the issue). When I check my CloudWatch logs, really large files occasionally time out, but other than that I get no error messages, just the standard start, end, and billable-time messages.


    This is the code for my Lambda function.


    import json
    import os
    import subprocess
    import shlex
    import boto3

    S3_DESTINATION_BUCKET = "rw-video-out"
    SIGNED_URL_TIMEOUT = 600

    def lambda_handler(event, context):

        s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
        s3_source_key = event['Records'][0]['s3']['object']['key']

        s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
        s3_destination_filename = s3_source_basename + "-comp.mp4"

        s3_client = boto3.client('s3')
        s3_source_signed_url = s3_client.generate_presigned_url('get_object',
            Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
            ExpiresIn=SIGNED_URL_TIMEOUT)

        ffmpeg_cmd = f"/opt/bin/ffmpeg -nostdin -i {s3_source_signed_url} -f mpegts libx264 -preset fast -crf 28 -c:a copy -"
        command1 = shlex.split(ffmpeg_cmd)
        p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
        s3 = boto3.resource('s3')
        s3.Object(s3_source_bucket, s3_source_key).delete()

        return {
            'statusCode': 200,
            'body': json.dumps('Processing complete successfully')
        }

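    One caveat about the snippet above, as a hedged reading rather than a definitive diagnosis: subprocess.run with stdout=subprocess.PIPE buffers the entire transcoded stream in the function's memory before put_object uploads it, which could plausibly truncate large outputs under the memory cap. Below is a sketch of a streaming alternative using boto3's upload_fileobj; the reordered -c:v libx264 flag is my assumption about the intended command, and the bucket name is carried over from above.

        import shlex
        import subprocess

        import boto3

        s3_client = boto3.client('s3')

        def transcode_streaming(s3_source_signed_url, s3_destination_filename):
            # Assumed intent: libx264 as the video codec; -f mpegts because a
            # container written to a pipe must not require seeking.
            ffmpeg_cmd = (
                f"/opt/bin/ffmpeg -nostdin -i {s3_source_signed_url} "
                "-c:v libx264 -preset fast -crf 28 -c:a copy -f mpegts -"
            )
            p = subprocess.Popen(shlex.split(ffmpeg_cmd), stdout=subprocess.PIPE)
            # upload_fileobj performs a chunked multipart upload, reading from
            # ffmpeg's stdout as it goes, so the full output never sits in memory.
            s3_client.upload_fileobj(p.stdout, "rw-video-out", s3_destination_filename)
            p.wait()

    This would also account for the 0 B output when '-f mpegts' is dropped: ffmpeg's default mp4 muxer needs a seekable output to finalize the file and cannot write to a plain pipe.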

    This is the code from the solution I mentioned, but when I try it, I get 'output.mp4 not found' errors.


    def lambda_handler(event, context):
        print(event)
        os.chdir('/tmp')
        s3_source_bucket = event['Records'][0]['s3']['bucket']['name']
        s3_source_key = event['Records'][0]['s3']['object']['key']

        s3_source_basename = os.path.splitext(os.path.basename(s3_source_key))[0]
        s3_destination_filename = s3_source_basename + ".mp4"

        s3_client = boto3.client('s3')
        s3_source_signed_url = s3_client.generate_presigned_url('get_object',
            Params={'Bucket': s3_source_bucket, 'Key': s3_source_key},
            ExpiresIn=SIGNED_URL_TIMEOUT)
        print(s3_source_signed_url)
        s3_client.download_file(s3_source_bucket, s3_source_key, s3_source_key)
        # ffmpeg_cmd = "/opt/bin/ffmpeg -framerate 25 -i \"" + s3_source_signed_url + "\" output.mp4 "
        ffmpeg_cmd = f"/opt/bin/ffmpeg -framerate 25 -i {s3_source_key} output.mp4 "
        # command1 = shlex.split(ffmpeg_cmd)
        # print(command1)
        os.system(ffmpeg_cmd)
        # os.system('ls')
        # p1 = subprocess.run(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        file = 'output.mp4'
        resp = s3_client.put_object(Body=open(file, "rb"), Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
        # resp = s3_client.put_object(Body=p1.stdout, Bucket=S3_DESTINATION_BUCKET, Key=s3_destination_filename)
        s3 = boto3.resource('s3')
        s3.Object(s3_source_bucket, s3_source_key).delete()
        return {
            'statusCode': 200,
            'body': json.dumps('Processing complete successfully')
        }


    Any help would be greatly appreciated.
