Advanced search

Media (0)

No media matching your criteria are available on this site.

Other articles (82)

  • Authorizations overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Customizing categories

    21 June 2013, by

    The category creation form
    For those who know SPIP well, a category can be thought of as a rubrique (section).
    For a document of type "category", the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type "media", the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration section that you can specify the (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

On other sites (10147)

  • I tried to play audio on my Alexa skill from my S3 bucket; the test tab shows a response, but I can't hear any sound

    19 April 2022, by Siti Mayna

    So I tried to play audio on my Alexa skill from my S3 bucket. In the test tab the response shows up, but I can't hear any sound. On the other hand, when I use the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html it works, so why won't it work when the audio comes from my own S3 bucket?

    Notes:

    - I've tried testing the skill on my mobile phone as well.
    - I've tried encoding the audio using FFmpeg.
    - I've tried using Jovo to convert the audio: https://v3.jovo.tech/audio-converter
    - I don't know how to fix this error.
    - There is no error message in CloudWatch.

    Assumptions:
    There is some problem with the audio resources, or there is additional setup needed to play audio from an S3 bucket, since the sample audio works.

    


    Steps to reproduce:

    1. Build the interaction model.

    2. Encode the audio to make it Alexa-skill friendly (fulfilling the requirements, like sample rate, etc.). I used and tried all of these:

    A:
    ffmpeg -i  -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0 

    B:
    ffmpeg -i  -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0 

    C:
    ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3
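
    Not part of the original question: one way to confirm that an encoded file really ends up with the codec, sample rate and bit rate targeted by the commands above is to inspect it with ffprobe before uploading. A minimal Python sketch, assuming ffprobe is on the PATH and using output.mp3 as a stand-in for the real file name:

    import json
    import subprocess

    def probe_first_audio_stream(path):
        """Return codec/sample-rate/bit-rate info for the first audio stream, via ffprobe."""
        result = subprocess.run(
            ["ffprobe", "-v", "error",
             "-select_streams", "a:0",
             "-show_entries", "stream=codec_name,sample_rate,bit_rate",
             "-of", "json", path],
            capture_output=True, text=True, check=True)
        return json.loads(result.stdout)["streams"][0]

    # The commands above target MP3 at 48 kbps with a 16000 or 24000 Hz sample rate;
    # this prints what the encoded file actually contains.
    print(probe_first_audio_stream("output.mp3"))
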


    


    


    3. Upload the audio resources to the S3 bucket.
    (Screenshot: the audio samples in S3 storage, but none of them produce any sound.)
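
    Not part of the original post: when uploading the files by hand, the object's Content-Type is one detail worth checking, since the file ultimately has to be served back to Alexa as audio. A boto3 sketch of the upload step, reusing the bucket and key that appear in the APLA.json URL further down:

    import boto3

    s3 = boto3.client("s3")

    # output.mp3 stands in for the locally encoded file; bucket and key are taken
    # from the URL used in APLA.json below.
    s3.upload_file(
        Filename="output.mp3",
        Bucket="72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1",
        Key="Media/one-small-step-alexa-24.mp3",
        ExtraArgs={"ContentType": "audio/mpeg"},
    )
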

    


    


    


    4. Use the link and insert it into APLA.json:

    


    


    
    {
      "type": "APLA",
      "version": "0.91",
      "description": "Simple document that generates speech",
      "mainTemplate": {
        "parameters": [
          "payload"
        ],
        "type": "Sequencer",
        "items": [
          {
            "type": "Audio",
            "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
          }
        ]
      }
    }



    


    Note: I changed the link source depending on which audio file I was testing.
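
    Not part of the original post: since the sample audio from Amazon's sound library works but the S3-hosted file does not, one assumption worth ruling out is that Alexa simply cannot fetch the object, i.e. it is not publicly readable over HTTPS or is served with an unexpected Content-Type. A small Python sketch to check this, using the URL from APLA.json:

    import urllib.error
    import urllib.request

    AUDIO_URL = ("https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1"
                 ".s3.amazonaws.com/Media/one-small-step-alexa-24.mp3")

    # A HEAD request is enough to see whether the file is reachable at all.
    request = urllib.request.Request(AUDIO_URL, method="HEAD")
    try:
        with urllib.request.urlopen(request) as response:
            print(response.status,
                  response.headers.get("Content-Type"),
                  response.headers.get("Content-Length"))
    except urllib.error.HTTPError as err:
        # A 403 here usually means the object is private and needs a bucket policy
        # or a pre-signed URL before Alexa can play it.
        print("Not reachable:", err.code)
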

    


    


    The handler in lambda_function.py:

    


    


# Imports and logger setup as in the standard ASK SDK skill template
# (assumed; they are not shown in the original snippet).
import json
import logging
from typing import Any, Dict

import ask_sdk_core.utils as ask_utils
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response
from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the APLA JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In LaunchRequestHandler")

        speak_output = "Hello World!"
        # .ask("add a reprompt if you want to keep the session open for the user to respond")

        return (
            handler_input.response_builder
                # .speak(speak_output)
                .add_directive(
                    RenderDocumentDirective(
                        token="pagerToken",
                        document=_load_apl_document("APLA.json"),
                        datasources={}
                    )
                )
                .response
        )
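
    The snippet stops at the handler class; for completeness, this is roughly how it would be registered in the same file under the standard ASK SDK template (a sketch of the usual SkillBuilder wiring, not something shown in the question):

from ask_sdk_core.skill_builder import SkillBuilder

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())

# The Lambda's handler setting would point at lambda_function.lambda_handler.
lambda_handler = sb.lambda_handler()
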


    


    


    5. Deploy.

    


    


    


    6. Test it.

    


    


    


    The result of the test on my end (screenshot: the response shown in the testing tab):

    


    


    The JSON response:

    


    {
    "body": {
        "version": "1.0",
        "response": {
            "directives": [
                {
                    "type": "Alexa.Presentation.APLA.RenderDocument",
                    "token": "pagerToken",
                    "document": {
                        "type": "APLA",
                        "version": "0.91",
                        "description": "Simple document that generates speech",
                        "mainTemplate": {
                            "parameters": [
                                "payload"
                            ],
                            "type": "Sequencer",
                            "items": [
                                {
                                    "type": "Audio",
                                    "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                                }
                            ]
                        }
                    },
                    "datasources": {}
                }
            ],
            "type": "_DEFAULT_RESPONSE"
        },
        "sessionAttributes": {},
        "userAgent": "ask-python/1.16.1 Python/3.7.12"
    }
}


    


    


    On my CloudWatch (screenshot: the CloudWatch logs):

    


    


  • "File doesn't exist" - streamio FFMPEG on screenshot after create method

    3 May 2013, by dodgerogers747

    I have videos being uploaded directly to S3 using Amazon's CORS configuration. Videos are uploaded via a dedicated S3 form; once they have been uploaded successfully, the URL of the video is appended to the @video.file hidden_field via JavaScript and then the video saves.

    I can't get this after_save method to work; it takes a screenshot of the video and saves it to S3 via CarrierWave after the video has been saved as a Rails object. (It was previously working using a CarrierWave video upload instance.)

    It errors out with Errno::ENOENT - No such file or directory - the file 'http://bucket-name.s3.amazonaws.com/uploads/video/file/secure-random-hex/video_name.m4v' does not exist. I have tried running this method as a class method and calling it from the console, but it always comes back with the same error, even though the video exists.

    My bucket is set to public, read and write. How come it doesn't think the file exists?

    If anyone needs more code just shout, thanks in advance.

    application trace

    Started POST "/videos" for 127.0.0.1 at 2013-05-03 10:48:07 -0700
    Processing by VideosController#create as JS
     Parameters: {"utf8"=>"✓", "authenticity_token"=>"MAHxrVcmPDtVIMfDWZBwL0YnzaAaAe1PTGip5M4OVoY=", "video"=>{"user_id"=>"5", "file"=>"http://bucket-name.s3.amazonaws.com/uploads/video/file/secure-random-hex/video.m4v"}}
     User Load (0.3ms)  SELECT `users`.* FROM `users` WHERE `users`.`id` = 5 LIMIT 1
      (0.1ms)  BEGIN
     SQL (20.5ms)  INSERT INTO `videos` (`created_at`, `file`, `question_id`, `screenshot`, `updated_at`, `user_id`) VALUES ('2013-05-03 17:48:07', 'http://teebox-network.s3.amazonaws.com/uploads/video/file/secure-random-hex/video.m4v', NULL, NULL, '2013-05-03 17:48:07', 5)
      (44.0ms)  ROLLBACK
    Completed 500 Internal Server Error in 71ms

    Errno::ENOENT - No such file or directory - the file 'http://teebox-network.s3.amazonaws.com/uploads/video/file/secure-random-hex/video.m4v' does not exist:
     (gem) streamio-ffmpeg-0.9.0/lib/ffmpeg/movie.rb:10:in `initialize'
     app/models/video.rb:25:in `new'
     app/models/video.rb:25:in `take_screenshot'

    video.rb

     attr_accessible :user_id, :question_id, :file, :screenshot
     belongs_to :question
     belongs_to :user

     default_scope order('created_at DESC')

     after_create :take_screenshot

     mount_uploader :screenshot, ImageUploader

     validates_presence_of :user_id, :file

     def take_screenshot
       FFMPEG.ffmpeg_binary = '/opt/local/bin/ffmpeg'
       # FFMPEG::Movie.new expects a local file on disk (movie.rb:10 in the trace above),
       # so passing the remote S3 URL stored in self.file raises Errno::ENOENT
       # even though the object exists in the bucket.
       movie = FFMPEG::Movie.new("#{self.file}")
       self.screenshot = movie.screenshot("#{Rails.root}/public/uploads/tmp/screenshots/#{File.basename(self.file)}.jpg", seek_time: 2)
       self.save!
     end

    videos/_form.html.erb

    <form action="http://bucket-name.s3.amazonaws.com" data-remote="true" class="direct-upload" enctype="multipart/form-data" method="post">
     <input type="hidden" />
     <input type="hidden" value="ACCESS_KEY" />
     <input type="hidden" value="public-read" />
     <input type="hidden" />
     <input type="hidden" />
     <input type="hidden" value="201" />
     <input type="file" />
    </form>

    <%= form_for @video, html: { multipart: true, id: "new_video" }, remote: true do |f| %>
      <% if @video.errors.any? %>
        <div>
          <h2><%= pluralize(@video.errors.count, "error") %> prohibited this post from being saved:</h2>
          <ul>
            <% @video.errors.full_messages.each do |msg| %>
              <li><%= msg %></li>
            <% end %>
          </ul>
        </div>
      <% end %>

      <%= f.hidden_field :user_id, value: current_user.id %>
      <%= f.hidden_field :file %><br />
    <% end %>

    ImageUploader

    class ImageUploader < CarrierWave::Uploader::Base

      include CarrierWave::RMagick

      include Sprockets::Helpers::RailsHelper
      include Sprockets::Helpers::IsolatedHelper

      storage :fog

      before :store, :remember_cache_id
      after :store, :delete_tmp_dir

      def cache_dir
        Rails.root.join('public/uploads/tmp/')
      end

      def remember_cache_id(new_file)
        @cache_id_was = cache_id
      end

      def delete_tmp_dir(new_file)
        if @cache_id_was.present? && @cache_id_was =~ /\A[\d]{8}\-[\d]{4}\-[\d]+\-[\d]{4}\z/
          FileUtils.rm_rf(File.join(root, cache_dir, @cache_id_was))
        end
      end

      process resize_and_pad: [306, 150, '#000']

      def store_dir
        "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
      end

      def extension_white_list
        %w(jpg)
        # %w(ogg ogv 3gp mp4 m4v webm mov)
      end
    end

  • Using FFmpeg with URL input causes SIGSEGV in AWS Lambda (Python runtime)

    26 March, by Dave94

    I'm trying to implement a video-converting solution on AWS Lambda, following their article named Processing user-generated content using AWS Lambda and FFmpeg. However, when I run my command with subprocess.Popen() it returns -11, which translates to SIGSEGV (segmentation fault). I've tried to process the video with the newest (4.3.1) static build from John Van Sickle's site as well as with the "official" ffmpeg-lambda-layer, but it seems like it doesn't matter which one I use; the result is the same.

    If I download the video to the Lambda's /tmp directory and add this downloaded file as the input to FFmpeg, it works correctly (with the same parameters). However, I'm trying to avoid this, as the /tmp directory's maximum size is only 512 MB, which is not quite enough for me.
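
    For reference (not shown in the original post), the working "/tmp" variant described above would look roughly like this, with the bucket name as a placeholder and s3_source_key coming from the triggering event:

    import boto3

    s3 = boto3.client("s3")

    # "my-input-bucket" is a placeholder; the real bucket comes from the Lambda event.
    local_path = "/tmp/" + s3_source_key.split("/")[-1]
    s3.download_file("my-input-bucket", s3_source_key, local_path)

    # local_path can then be passed to FFmpeg instead of the signed URL,
    # at the cost of the 512 MB /tmp limit mentioned above.
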

    The relevant code, which returns SIGSEGV:

    ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + s3_source_signed_url + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p1.communicate()
    print(p1.returncode)  # prints -11

    stderr of FFmpeg:

ffmpeg version 4.1.3-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
  configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzvbi --enable-libzimg
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
  libpostproc    55.  3.100 / 55.  3.100
[tcp @ 0x728cc00] Starting connection attempt to 52.219.74.177 port 443
[tcp @ 0x728cc00] Successfully connected to 52.219.74.177 port 443
[h264 @ 0x729b780] Reinit context to 1280x720, pix_fmt: yuv420p
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://bucket.s3.amazonaws.com --> presigned url with 15 min expiration time':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41isomavc1
    creation_time   : 2015-09-02T07:42:42.000000Z
  Duration: 00:00:15.64, start: 0.000000, bitrate: 2640 kb/s
    Stream #0:0(und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, left), 1280x720 [SAR 1:1 DAR 16:9], 2475 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
    Metadata:
      creation_time   : 2015-09-02T07:42:42.000000Z
      handler_name    : L-SMASH Video Handler
      encoder         : AVC Coding
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)
    Metadata:
      creation_time   : 2015-09-02T07:42:42.000000Z
      handler_name    : L-SMASH Audio Handler
[mp3 @ 0x733f340] Skipping 0 bytes of junk at 1344.
Input #1, mp3, from '/opt/bin/audio.mp3':
  Metadata:
    encoded_by      : Logic Pro X
    date            : 2021-01-03
    coding_history  : 
    time_reference  : 158760000
    umid            : 0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004500F9E4
    encoder         : Lavf58.49.100
  Duration: 00:04:01.21, start: 0.025057, bitrate: 320 kb/s
    Stream #1:0: Audio: mp3, 44100 Hz, stereo, fltp, 320 kb/s
    Metadata:
      encoder         : Lavc58.97
Input #2, png_pipe, from '/opt/bin/watermark.png':
  Duration: N/A, bitrate: N/A
    Stream #2:0: Video: png, 1 reference frame, rgba(pc), 701x190 [SAR 1521:1521 DAR 701:190], 25 tbr, 25 tbn, 25 tbc
[Parsed_scale_0 @ 0x7341140] w:1920 h:1080 flags:'bilinear' interl:0
Stream mapping:
  Stream #0:0 (h264) -> scale
  Stream #2:0 (png) -> overlay:overlay
  format -> Stream #0:0 (libx264)
  Stream #1:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[h264 @ 0x72d8600] Reinit context to 1280x720, pix_fmt: yuv420p
[Parsed_scale_0 @ 0x733c1c0] w:1920 h:1080 flags:'bilinear' interl:0
[graph 0 input from stream 0:0 @ 0x7669200] w:1280 h:720 pixfmt:yuv420p tb:1/25 fr:25/1 sar:1/1 sws_param:flags=2
[graph 0 input from stream 2:0 @ 0x766a980] w:701 h:190 pixfmt:rgba tb:1/25 fr:25/1 sar:1521/1521 sws_param:flags=2
[auto_scaler_0 @ 0x7670240] w:iw h:ih flags:'bilinear' interl:0
[deinterlace_in_2_0 @ 0x766b680] auto-inserting filter 'auto_scaler_0' between the filter 'graph 0 input from stream 2:0' and the filter 'deinterlace_in_2_0'
[Parsed_scale_0 @ 0x733c1c0] w:1280 h:720 fmt:yuv420p sar:1/1 -> w:1920 h:1080 fmt:yuv420p sar:1/1 flags:0x2
[Parsed_pad_1 @ 0x733ce00] w:1920 h:1080 -> w:1920 h:1080 x:0 y:0 color:0x000000FF
[Parsed_setsar_2 @ 0x733da00] w:1920 h:1080 sar:1/1 dar:16/9 -> sar:1/1 dar:16/9
[auto_scaler_0 @ 0x7670240] w:701 h:190 fmt:rgba sar:1521/1521 -> w:701 h:190 fmt:yuva420p sar:1/1 flags:0x2
[Parsed_overlay_3 @ 0x733e440] main w:1920 h:1080 fmt:yuv420p overlay w:701 h:190 fmt:yuva420p
[Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Selected 1/50 time base
[Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Sync level 2
[libx264 @ 0x72c1c00] using SAR=1/1
[libx264 @ 0x72c1c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x72c1c00] profile Progressive High, level 4.0, 4:2:0, 8-bit
[libx264 @ 0x72c1c00] 264 - core 157 r2969 d4099dd - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=9 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=60 keyint_min=6 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=abr mbtree=1 bitrate=4500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'pipe:':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41isomavc1
    encoder         : Lavf58.20.100
    Stream #0:0: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 4500 kb/s, 30 fps, 1k tbn, 30 tbc (default)
    Metadata:
      encoder         : Lavc58.35.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/4500000 buffer size: 0 vbv_delay: -1
    Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, stereo, fltp, 320 kb/s
    Metadata:
      encoder         : Lavc58.97
frame=   27 fps=0.0 q=32.0 size=     247kB time=00:00:00.03 bitrate=59500.0kbits/s speed=0.0672x
frame=   77 fps= 77 q=27.0 size=    1115kB time=00:00:02.03 bitrate=4478.0kbits/s speed=2.03x
frame=  126 fps= 83 q=25.0 size=    2302kB time=00:00:04.00 bitrate=4712.4kbits/s speed=2.64x
frame=  177 fps= 87 q=26.0 size=    3576kB time=00:00:06.03 bitrate=4854.4kbits/s speed=2.97x
frame=  225 fps= 88 q=25.0 size=    4910kB time=00:00:07.96 bitrate=5047.8kbits/s speed=3.13x
frame=  272 fps= 89 q=27.0 size=    6189kB time=00:00:09.84 bitrate=5147.9kbits/s speed=3.22x
frame=  320 fps= 90 q=27.0 size=    7058kB time=00:00:11.78 bitrate=4907.5kbits/s speed=3.31x
frame=  372 fps= 91 q=26.0 size=    8098kB time=00:00:13.84 bitrate=4791.0kbits/s speed=3.4x


    And that's the end of it. It should continue processing until 00:04:02, since that's my audio's length, but it stops here every time (this is approximately my video's length).

    The relevant code, which works correctly:

    ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + '/tmp/' + s3_source_key + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p1.communicate()
    print(p1.returncode)  # prints 0

    With this code it repeats the video as many times as needed to match the length of the audio.

    Both versions work correctly on my computer.


    This question is almost the same, but in my case FFmpeg is able to access the signed URL.