Advanced search

Media (1)

Keyword: - Tags -/iphone

Other articles (54)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language. In that case, it becomes greyed out in the configuration and (...)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

On other sites (6968)

  • I tried to play audio on an Alexa skill from my S3 bucket; in the test tab it shows, but in fact I can't hear any sound

    19 April 2022, by Siti Mayna

    So I tried to play audio on my Alexa skill from my S3 bucket. In the test tab it shows up, but in fact I can't hear any sound. Another fact is that I tried the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html and it worked, so why won't it work when it comes from my own S3 bucket?

    Notes:

    - I've also tried testing the skill on my mobile phone.
    - I've tried encoding the audio with FFmpeg.
    - I've tried converting the audio with Jovo: https://v3.jovo.tech/audio-converter
    - I don't know how to fix this error.
    - There is no error message in CloudWatch.

    Assumptions: there is some problem with the audio resources, or something more needs to be configured to play audio from an S3 bucket, since the sample audio works.

    Steps to reproduce:

    1. Build the interaction model.

    2. Encode the audio to make it Alexa-skill friendly (fulfilling the requirements, like sample rate, etc.). I used and tried all of these:

    A:

    ffmpeg -i input.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0 output.mp3

    B:

    ffmpeg -i input.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0 output.mp3

    C:

    ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3

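    A quick sanity check for step 2 (a sketch, not from the original post): ffprobe can confirm the encode actually matches what the commands above target, i.e. MP3 at 48 kbps with a 16000 or 24000 Hz sample rate. This assumes ffprobe is on PATH; output.mp3 is the hypothetical filename from the commands above:

    import json
    import subprocess

    def probe_audio(path):
        """Return codec, sample rate and bit rate of the first audio stream."""
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "a:0",
             "-show_entries", "stream=codec_name,sample_rate,bit_rate",
             "-of", "json", path],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)["streams"][0]

    stream = probe_audio("output.mp3")
    print(stream)
    assert stream["codec_name"] == "mp3"
    assert int(stream["sample_rate"]) in (16000, 24000)
    assert abs(int(stream["bit_rate"]) - 48000) < 2000  # roughly 48 kbps
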
    3. Upload the audio resources to the S3 bucket (the audio samples are in S3 storage, but none of them produce any sound).

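    Since the soundbank sample plays but the S3-hosted file does not, one difference worth ruling out at this step is whether the object is publicly fetchable at all: Alexa retrieves the audio URL anonymously over HTTPS, so a private object would still render a valid directive yet play silence. A minimal check (a sketch, using the URL from the post):

    import urllib.request

    url = ("https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1"
           ".s3.amazonaws.com/Media/one-small-step-alexa-24.mp3")
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            # Expect 200 and an audio/mpeg content type
            print(resp.status, resp.headers.get("Content-Type"))
    except urllib.error.HTTPError as err:
        print("Not publicly readable:", err.code)
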
    4. Use the link and insert it into APLA.json:

    {
      "type": "APLA",
      "version": "0.91",
      "description": "Simple document that generates speech",
      "mainTemplate": {
        "parameters": [
          "payload"
        ],
        "type": "Sequencer",
        "items": [
          {
            "type": "Audio",
            "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
          }
        ]
      }
    }

    Note: I changed the link source depending on which audio I tried.

    5. The intent in lambda_function.py:

    # Imports assumed from the standard ASK SDK skeleton (not shown in the post)
    import json
    import logging
    from typing import Any, Dict

    import ask_sdk_core.utils as ask_utils
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.handler_input import HandlerInput
    from ask_sdk_model import Response
    from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective

    logger = logging.getLogger(__name__)


    def _load_apl_document(file_path):
        # type: (str) -> Dict[str, Any]
        """Load the APL JSON document at the path into a dict object."""
        with open(file_path) as f:
            return json.load(f)


    class LaunchRequestHandler(AbstractRequestHandler):
        """Handler for Skill Launch."""
        def can_handle(self, handler_input):
            # type: (HandlerInput) -> bool
            return ask_utils.is_request_type("LaunchRequest")(handler_input)

        def handle(self, handler_input):
            # type: (HandlerInput) -> Response
            logger.info("In LaunchRequestHandler")

            speak_output = "Hello World!"
            # .ask("add a reprompt if you want to keep the session open for the user to respond")

            return (
                handler_input.response_builder
                    # .speak(speak_output)
                    .add_directive(
                        RenderDocumentDirective(
                            token="pagerToken",
                            document=_load_apl_document("APLA.json"),
                            datasources={}
                        )
                    )
                    .response
            )

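    Before deploying, it can be worth confirming that APLA.json parses and references exactly the URL that was uploaded, since a typo in the long bucket name could also render fine but play nothing. A small check (a sketch, assuming the file layout above):

    import json

    with open("APLA.json") as f:
        doc = json.load(f)
    sources = [item["source"] for item in doc["mainTemplate"]["items"]
               if item.get("type") == "Audio"]
    print(sources)  # expect the exact S3 URL that was uploaded
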
    6. Deploy.

    7. Test it.

    The result of the test on my end (screenshot: "The response for testing"):

    The JSON response:

    {
        "body": {
            "version": "1.0",
            "response": {
                "directives": [
                    {
                        "type": "Alexa.Presentation.APLA.RenderDocument",
                        "token": "pagerToken",
                        "document": {
                            "type": "APLA",
                            "version": "0.91",
                            "description": "Simple document that generates speech",
                            "mainTemplate": {
                                "parameters": [
                                    "payload"
                                ],
                                "type": "Sequencer",
                                "items": [
                                    {
                                        "type": "Audio",
                                        "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                                    }
                                ]
                            }
                        },
                        "datasources": {}
                    }
                ],
                "type": "_DEFAULT_RESPONSE"
            },
            "sessionAttributes": {},
            "userAgent": "ask-python/1.16.1 Python/3.7.12"
        }
    }

    On my CloudWatch (screenshot omitted): no error messages, as noted above.

  • How can I develop an image recognition program?

    18 July 2017, by SeongHyun Lee

    My program should recognize the advertisements shown between TV programs.
    But I don't know how to detect the ads.
    I had sound recognition in mind, but it's very difficult.
    I'm using the FFmpeg library.
    Here is the VideoState struct for reference:

    typedef struct VideoState {
    SDL_Thread *read_tid;
    SDL_Thread *video_tid;
    SDL_Thread *refresh_tid;
    AVInputFormat *iformat;
    int no_background;
    int abort_request;
    int force_refresh;
    int paused;
    int last_paused;
    int que_attachments_req;
    int seek_req;
    int seek_flags;
    int64_t seek_pos;
    int64_t seek_rel;
    int read_pause_return;
    AVFormatContext *ic;

    int audio_stream;

    int av_sync_type;
    double external_clock; /* external clock base */
    int64_t external_clock_time;

    double audio_clock;
    double audio_diff_cum; /* used for AV difference average computation */
    double audio_diff_avg_coef;
    double audio_diff_threshold;
    int audio_diff_avg_count;
    AVStream *audio_st;
    PacketQueue audioq;
    int audio_hw_buf_size;
    DECLARE_ALIGNED(16,uint8_t,audio_buf2)[AVCODEC_MAX_AUDIO_FRAME_SIZE * 4];
    uint8_t silence_buf[SDL_AUDIO_BUFFER_SIZE];
    uint8_t *audio_buf;
    uint8_t *audio_buf1;
    unsigned int audio_buf_size; /* in bytes */
    int audio_buf_index; /* in bytes */
    int audio_write_buf_size;
    AVPacket audio_pkt_temp;
    AVPacket audio_pkt;
    struct AudioParams audio_src;
    struct AudioParams audio_tgt;
    struct SwrContext *swr_ctx;
    double audio_current_pts;
    double audio_current_pts_drift;
    int frame_drops_early;
    int frame_drops_late;
    AVFrame *frame;

    enum ShowMode {
        SHOW_MODE_NONE = -1, SHOW_MODE_VIDEO = 0, SHOW_MODE_WAVES,
        SHOW_MODE_RDFT, SHOW_MODE_NB
    } show_mode;
    int16_t sample_array[SAMPLE_ARRAY_SIZE];
    int sample_array_index;
    int last_i_start;
    RDFTContext *rdft;
    int rdft_bits;
    FFTSample *rdft_data;
    int xpos;

    SDL_Thread *subtitle_tid;
    int subtitle_stream;
    int subtitle_stream_changed;
    AVStream *subtitle_st;
    PacketQueue subtitleq;
    SubPicture subpq[SUBPICTURE_QUEUE_SIZE];
    int subpq_size, subpq_rindex, subpq_windex;
    SDL_mutex *subpq_mutex;
    SDL_cond *subpq_cond;

    double frame_timer;
    double frame_last_pts;
    double frame_last_duration;
    double frame_last_dropped_pts;
    double frame_last_returned_time;
    double frame_last_filter_delay;
    int64_t frame_last_dropped_pos;
    double video_clock;                          ///< pts of last decoded frame / predicted pts of next decoded frame
    int video_stream;
    AVStream *video_st;
    PacketQueue videoq;
    double video_current_pts;                    ///< current displayed pts (different from video_clock if frame fifos are used)
    double video_current_pts_drift;              ///< video_current_pts - time (av_gettime) at which we updated video_current_pts - used to have running video pts
    int64_t video_current_pos;                   ///< current displayed file pos
    VideoPicture pictq[VIDEO_PICTURE_QUEUE_SIZE];
    int pictq_size, pictq_rindex, pictq_windex;
    SDL_mutex *pictq_mutex;
    SDL_cond *pictq_cond;
    #if !CONFIG_AVFILTER
    struct SwsContext *img_convert_ctx;
    #endif

    char filename[1024];
    int width, height, xleft, ytop;
    int step;

    #if CONFIG_AVFILTER
    AVFilterContext *in_video_filter;           ///< the first filter in the video chain
    AVFilterContext *out_video_filter;          ///< the last filter in the video chain
    int use_dr1;
    FrameBuffer *buffer_pool;
    #endif

    int refresh;
    int last_video_stream, last_audio_stream, last_subtitle_stream;

    SDL_cond *continue_read_thread;

    enum V_Show_Mode v_show_mode;
    } VideoState;

    What can I use for my program? I really need your help. Thank you!
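
    A common heuristic for finding ad breaks (a suggestion, not from the post): broadcast ads are usually bracketed by short silences and black frames, and FFmpeg already ships silencedetect and blackdetect filters that report both. A sketch in Python, assuming ffmpeg is on PATH and a hypothetical recording.ts:

    import re
    import subprocess

    def detect_break_candidates(path):
        """Scan a recording for black frames and silences, which often
        bracket ad breaks; both filters report their findings on stderr."""
        proc = subprocess.run(
            ["ffmpeg", "-hide_banner", "-i", path,
             "-vf", "blackdetect=d=0.5:pix_th=0.10",
             "-af", "silencedetect=noise=-50dB:d=0.5",
             "-f", "null", "-"],
            capture_output=True, text=True,
        )
        return re.findall(r"(black_start|silence_start):\s*([\d.]+)", proc.stderr)

    for kind, t in detect_break_candidates("recording.ts"):
        print(f"{kind} at {float(t):.2f}s")

    Timestamps where a black run and a silence coincide are good candidate cut points; classifying the segments between them (ads are typically 15, 30 or 60 seconds long) is a far smaller problem than general sound recognition.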

  • Revision 106594: When a bound changes, at least a z+1 version bump is needed so that ...

    8 October 2017, by spip.franck@… - Log

    When a bound changes, at least a z+1 version bump is needed so that sites already using the plugin are aware of it.