Other articles (64)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

On other sites (11087)

  • I tried to play the audio on my Alexa skill from my S3 bucket; in the test tab it shows, but in fact I can't hear any sound

    19 April 2022, by Siti Mayna

    So I tried to play the audio for my Alexa skill from my S3 bucket; in the test tab it shows, but in fact I can't hear any sound. Another fact is that I tried the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html and it worked, so why won't it work when it comes from my own S3 bucket?

    Notes:

      • I've tried to test the skill using my mobile phone as well.
      • I've tried to encode the audio using FFmpeg.
      • I've tried to use Jovo to convert the audio: https://v3.jovo.tech/audio-converter
      • I don't know how to fix this error.
      • There is no error message on CloudWatch.

    Assumptions:
    There is some problem related to the audio resources, or there is more setup required to play audio from an S3 bucket, since the sample audio works.

    Steps to reproduce:

    1. Build the interaction model.

    2. Encode the audio to make it Alexa-skill friendly (fulfilling the requirements, like sample rate, etc.); I used and tried all of these (a quick verification sketch follows the commands):

       A:

       ffmpeg -i <input-file> -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0 <output-file>

       B:

       ffmpeg -i <input-file> -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0 <output-file>

       C:

       ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3
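
    Since a mis-encoded file fails silently, a quick way to double-check the output of step 2 is to probe it and compare against what the commands above target (MP3 codec, 48 kb/s bit rate, 16 or 24 kHz sample rate). This is my own verification sketch, not part of the original post; the file name is a placeholder and ffprobe is assumed to be on PATH:

    # Verify an encoded file against the properties the ffmpeg commands target.
    import json
    import subprocess

    def probe_mp3(path):
        """Return codec name, sample rate and bit rate of the first audio stream."""
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "a:0",
             "-show_entries", "stream=codec_name,sample_rate,bit_rate",
             "-of", "json", path],
            capture_output=True, text=True, check=True,
        )
        stream = json.loads(out.stdout)["streams"][0]
        return (stream["codec_name"], int(stream["sample_rate"]),
                int(stream["bit_rate"]))

    codec, rate, bitrate = probe_mp3("output.mp3")  # placeholder file name
    print(codec, rate, bitrate)
    assert codec == "mp3" and rate in (16000, 24000)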

    3. Upload the audio resources to the S3 bucket. [Screenshot: the audio files in S3 storage; none of them produce any sound.]

    4. Use the link and insert it into APLA.json:
    
    {
      "type": "APLA",
      "version": "0.91",
      "description": "Simple document that generates speech",
      "mainTemplate": {
        "parameters": [
          "payload"
        ],
        "type": "Sequencer",
        "items": [
          {
            "type": "Audio",
            "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
          }
        ]
      }
    }
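
    One thing worth checking (my own suggestion, not something from the original post) is whether that source URL is reachable at all from outside AWS: APLA fetches it over plain HTTPS, so if the S3 object is not publicly readable the document can render while playing nothing. A minimal probe, assuming the requests package is available:

    import requests

    # The URL from the APLA document above.
    url = ("https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1"
           ".s3.amazonaws.com/Media/one-small-step-alexa-24.mp3")

    resp = requests.head(url, timeout=10)
    print(resp.status_code, resp.headers.get("Content-Type"))
    # 200 with an audio content type is the good case; 403 usually means
    # the object is not publicly readable.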

    Note: I changed the link source depending on which audio file I was testing.

    5. The intent handler in lambda_function.py (a registration sketch follows the code):


import json
import logging
from typing import Any, Dict

import ask_sdk_core.utils as ask_utils
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response
from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the APLA JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In LaunchRequestHandler")

        # Speech output is intentionally disabled; the audio should come
        # from the APLA document instead.
        return (
            handler_input.response_builder
                .add_directive(
                    RenderDocumentDirective(
                        token="pagerToken",
                        document=_load_apl_document("APLA.json"),
                        datasources={}
                    )
                )
                .response
        )
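
    For context, a minimal sketch of how such a handler is normally registered so that Lambda can invoke it; this part is not shown in the original post, and the names are the usual ASK SDK ones:

    from ask_sdk_core.skill_builder import SkillBuilder

    sb = SkillBuilder()
    sb.add_request_handler(LaunchRequestHandler())

    # The Lambda function's configured handler should point at this callable,
    # e.g. "lambda_function.handler".
    handler = sb.lambda_handler()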

    6. Deploy.

    7. Test it.

    The result of the test on my end: [screenshot of the test response]

    The JSON response:


    {
    "body": {
        "version": "1.0",
        "response": {
            "directives": [
                {
                    "type": "Alexa.Presentation.APLA.RenderDocument",
                    "token": "pagerToken",
                    "document": {
                        "type": "APLA",
                        "version": "0.91",
                        "description": "Simple document that generates speech",
                        "mainTemplate": {
                            "parameters": [
                                "payload"
                            ],
                            "type": "Sequencer",
                            "items": [
                                {
                                    "type": "Audio",
                                    "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                                }
                            ]
                        }
                    },
                    "datasources": {}
                }
            ],
            "type": "_DEFAULT_RESPONSE"
        },
        "sessionAttributes": {},
        "userAgent": "ask-python/1.16.1 Python/3.7.12"
    }
}

    On my CloudWatch: [screenshot of the CloudWatch logs]

  • swscale: aarch64: Optimize the final summation in the hscale routine

    20 April 2022, by Martin Storsjö
    swscale: aarch64: Optimize the final summation in the hscale routine
    

    Before:                        Cortex A53      A72      A73  Graviton 2  Graviton 3
    hscale_8_to_15_width8_neon:        8273.0   4602.5   4289.5      2429.7      1629.1
    hscale_8_to_15_width16_neon:      12405.7   6803.0   6359.0      3549.0      2378.4
    hscale_8_to_15_width32_neon:      21258.7  11491.7  11469.2      5797.2      3919.6
    hscale_8_to_15_width40_neon:      25652.0  14173.7  12488.2      6893.5      4810.4

    After:
    hscale_8_to_15_width8_neon:        7633.0   3981.5   3350.2      1980.7      1261.1
    hscale_8_to_15_width16_neon:      11666.7   5951.0   5512.0      3080.7      2131.4
    hscale_8_to_15_width32_neon:      20900.7  10733.2   9481.7      5275.2      3862.1
    hscale_8_to_15_width40_neon:      24826.0  13536.2  11502.0      6397.2      4731.9

    Thus, this gives overall an 8-29% speedup for the smaller filter
    sizes, and around a 1-8% speedup for the larger filter sizes.
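
    As a quick sanity check on that range (my own arithmetic, not part of the commit message), the per-core speedup is before/after - 1; recomputing it from the width8 row of the table above:

    # Recompute the quoted speedups from the width8 benchmark row.
    before = {"A53": 8273.0, "A72": 4602.5, "A73": 4289.5,
              "Graviton 2": 2429.7, "Graviton 3": 1629.1}
    after = {"A53": 7633.0, "A72": 3981.5, "A73": 3350.2,
             "Graviton 2": 1980.7, "Graviton 3": 1261.1}

    for core in before:
        print(f"{core}: {before[core] / after[core] - 1:.1%}")
    # Prints roughly 8% on the A53 up to 29% on Graviton 3,
    # matching the quoted 8-29% for the smallest filter size.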

    Inspired by a patch by Jonathan Swinney <jswinney@amazon.com>.

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DH] libswscale/aarch64/hscale.S
  • How to "extend" an AWS Docker base image (.NET Core from scratch) with ... libs/ubuntu/ffmpeg?

    26 April 2022, by Nigrimmist

    I would like to use AWS Lambda with container images, using .NET Core 3.1, and it works fine for me in the simplest case. But I am stuck on the following scenario:

    By default, AWS provides a base image with .NET Core and the AWS libraries, built "from scratch". As far as I know, that is a minimal Linux that does not even contain a package manager.

    I need to work with ffmpeg in the code, but to do that I need to install a few packages and ... ffmpeg. I have working code on the image

    FROM mcr.microsoft.com/dotnet/runtime:3.1-bionic

    It is Ubuntu with the .NET Core runtime. But what is the right strategy in the case of an AWS Lambda image? How can I ... merge them?

    I have a few ideas, but I am not sure:

    1. Use FROM public.ecr.aws/lambda/dotnet:core3.1 as-is and try to install a package manager, all the dependencies needed to use ffmpeg, and so on? (See the sketch after this list.)

    2. Use mcr.microsoft.com/dotnet/runtime:3.1-bionic, somehow add the dependencies required by Amazon (how? download the content and attach it locally?) and configure it to run in the Lambda runtime?

    3. ...?

    I will be glad to hear what the solution is here. Thanks!
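
    For what it's worth, here is a rough Dockerfile sketch of idea 1 (an assumption on my part, not an established answer): a statically linked ffmpeg binary has no library dependencies, so it can be copied into the AWS base image without needing any package manager. All paths and the handler string are placeholders:

    # Sketch: AWS Lambda .NET base image plus a vendored static ffmpeg build.
    FROM public.ecr.aws/lambda/dotnet:core3.1

    # A statically linked ffmpeg (downloaded next to this Dockerfile, e.g. from
    # one of the public static builds) needs no apt/yum and no shared libraries.
    COPY ffmpeg /usr/local/bin/ffmpeg

    # Published function code goes where the Lambda runtime expects it.
    COPY bin/Release/netcoreapp3.1/publish/ ${LAMBDA_TASK_ROOT}/

    # Placeholder handler string: Assembly::Namespace.Class::Method
    CMD ["MyFunction::MyFunction.Function::FunctionHandler"]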