Advanced search

Media (1)

Keyword: - Tags -/artwork

Other articles (53)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out

  • Possible deployments

    31 January 2010, by

    Two types of deployment are possible, depending on two factors: the intended installation method (standalone or as a farm); and the expected number of daily encodings and amount of traffic.
    Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), and all of this has to be taken into account. This setup is therefore only possible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

On other sites (9219)

  • Using FFmpeg with URL input causes SIGSEGV in AWS Lambda (Python runtime)

    26 March, by Dave94

    I'm trying to implement a video-converting solution on AWS Lambda, following their article Processing user-generated content using AWS Lambda and FFmpeg.
However, when I run my command with subprocess.Popen() it returns -11, which translates to SIGSEGV (segmentation fault).
I've tried processing the video both with the newest (4.3.1) static build from John Van Sickle's site and with the "official" ffmpeg-lambda-layer, but it doesn't seem to matter which one I use; the result is the same.

    


    If I download the video to the Lambda's /tmp directory and use the downloaded file as the input to FFmpeg, it works correctly (with the same parameters). However, I'm trying to avoid this, as the /tmp directory's maximum size is only 512 MB, which is not quite enough for me.
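
    For reference, the download-to-/tmp variant I mention looks roughly like this (a minimal sketch; boto3 is assumed to be available in the Lambda runtime, and the bucket variable name is illustrative):

import boto3

s3 = boto3.client("s3")
local_path = "/tmp/" + s3_source_key  # Lambda's writable scratch space (512 MB by default)
# s3_source_bucket is an illustrative name for the source bucket
s3.download_file(s3_source_bucket, s3_source_key, local_path)
# local_path is then passed to FFmpeg instead of the presigned URL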

    


    The relevant code which returns SIGSEGV:

    


import shlex
import subprocess

# Input 0 is the presigned S3 URL, which FFmpeg streams over HTTPS
ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + s3_source_signed_url + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
command1 = shlex.split(ffmpeg_cmd)
p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p1.communicate()
print(p1.returncode)  # prints -11 (SIGSEGV)


    


    stderr of FFmpeg:

    


    ffmpeg version 4.1.3-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
  configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzvbi --enable-libzimg
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
  libpostproc    55.  3.100 / 55.  3.100
[tcp @ 0x728cc00] Starting connection attempt to 52.219.74.177 port 443
[tcp @ 0x728cc00] Successfully connected to 52.219.74.177 port 443
[h264 @ 0x729b780] Reinit context to 1280x720, pix_fmt: yuv420p
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://bucket.s3.amazonaws.com --> presigned url with 15 min expiration time':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41isomavc1
    creation_time   : 2015-09-02T07:42:42.000000Z
  Duration: 00:00:15.64, start: 0.000000, bitrate: 2640 kb/s
    Stream #0:0(und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, left), 1280x720 [SAR 1:1 DAR 16:9], 2475 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
    Metadata:
      creation_time   : 2015-09-02T07:42:42.000000Z
      handler_name    : L-SMASH Video Handler
      encoder         : AVC Coding
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)
    Metadata:
      creation_time   : 2015-09-02T07:42:42.000000Z
      handler_name    : L-SMASH Audio Handler
[mp3 @ 0x733f340] Skipping 0 bytes of junk at 1344.
Input #1, mp3, from '/opt/bin/audio.mp3':
  Metadata:
    encoded_by      : Logic Pro X
    date            : 2021-01-03
    coding_history  : 
    time_reference  : 158760000
    umid            : 0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004500F9E4
    encoder         : Lavf58.49.100
  Duration: 00:04:01.21, start: 0.025057, bitrate: 320 kb/s
    Stream #1:0: Audio: mp3, 44100 Hz, stereo, fltp, 320 kb/s
    Metadata:
      encoder         : Lavc58.97
Input #2, png_pipe, from '/opt/bin/watermark.png':
  Duration: N/A, bitrate: N/A
    Stream #2:0: Video: png, 1 reference frame, rgba(pc), 701x190 [SAR 1521:1521 DAR 701:190], 25 tbr, 25 tbn, 25 tbc
[Parsed_scale_0 @ 0x7341140] w:1920 h:1080 flags:'bilinear' interl:0
Stream mapping:
  Stream #0:0 (h264) -> scale
  Stream #2:0 (png) -> overlay:overlay
  format -> Stream #0:0 (libx264)
  Stream #1:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[h264 @ 0x72d8600] Reinit context to 1280x720, pix_fmt: yuv420p
[Parsed_scale_0 @ 0x733c1c0] w:1920 h:1080 flags:'bilinear' interl:0
[graph 0 input from stream 0:0 @ 0x7669200] w:1280 h:720 pixfmt:yuv420p tb:1/25 fr:25/1 sar:1/1 sws_param:flags=2
[graph 0 input from stream 2:0 @ 0x766a980] w:701 h:190 pixfmt:rgba tb:1/25 fr:25/1 sar:1521/1521 sws_param:flags=2
[auto_scaler_0 @ 0x7670240] w:iw h:ih flags:'bilinear' interl:0
[deinterlace_in_2_0 @ 0x766b680] auto-inserting filter 'auto_scaler_0' between the filter 'graph 0 input from stream 2:0' and the filter 'deinterlace_in_2_0'
[Parsed_scale_0 @ 0x733c1c0] w:1280 h:720 fmt:yuv420p sar:1/1 -> w:1920 h:1080 fmt:yuv420p sar:1/1 flags:0x2
[Parsed_pad_1 @ 0x733ce00] w:1920 h:1080 -> w:1920 h:1080 x:0 y:0 color:0x000000FF
[Parsed_setsar_2 @ 0x733da00] w:1920 h:1080 sar:1/1 dar:16/9 -> sar:1/1 dar:16/9
[auto_scaler_0 @ 0x7670240] w:701 h:190 fmt:rgba sar:1521/1521 -> w:701 h:190 fmt:yuva420p sar:1/1 flags:0x2
[Parsed_overlay_3 @ 0x733e440] main w:1920 h:1080 fmt:yuv420p overlay w:701 h:190 fmt:yuva420p
[Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Selected 1/50 time base
[Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Sync level 2
[libx264 @ 0x72c1c00] using SAR=1/1
[libx264 @ 0x72c1c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x72c1c00] profile Progressive High, level 4.0, 4:2:0, 8-bit
[libx264 @ 0x72c1c00] 264 - core 157 r2969 d4099dd - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=9 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=60 keyint_min=6 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=abr mbtree=1 bitrate=4500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'pipe:':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41isomavc1
    encoder         : Lavf58.20.100
    Stream #0:0: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 4500 kb/s, 30 fps, 1k tbn, 30 tbc (default)
    Metadata:
      encoder         : Lavc58.35.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/4500000 buffer size: 0 vbv_delay: -1
    Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, stereo, fltp, 320 kb/s
    Metadata:
      encoder         : Lavc58.97
frame=   27 fps=0.0 q=32.0 size=     247kB time=00:00:00.03 bitrate=59500.0kbits/s speed=0.0672x
frame=   77 fps= 77 q=27.0 size=    1115kB time=00:00:02.03 bitrate=4478.0kbits/s speed=2.03x
frame=  126 fps= 83 q=25.0 size=    2302kB time=00:00:04.00 bitrate=4712.4kbits/s speed=2.64x
frame=  177 fps= 87 q=26.0 size=    3576kB time=00:00:06.03 bitrate=4854.4kbits/s speed=2.97x
frame=  225 fps= 88 q=25.0 size=    4910kB time=00:00:07.96 bitrate=5047.8kbits/s speed=3.13x
frame=  272 fps= 89 q=27.0 size=    6189kB time=00:00:09.84 bitrate=5147.9kbits/s speed=3.22x
frame=  320 fps= 90 q=27.0 size=    7058kB time=00:00:11.78 bitrate=4907.5kbits/s speed=3.31x
frame=  372 fps= 91 q=26.0 size=    8098kB time=00:00:13.84 bitrate=4791.0kbits/s speed=3.4x


    


    And that's the end of it. It should keep processing until 00:04:02, since that's the length of my audio, but it stops here every time (this is roughly the length of my video).

    


    The relevant code which works correctly:

    


# Same command, but input 0 is the file previously downloaded to /tmp instead of the presigned URL
ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + '/tmp/' + s3_source_key + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
command1 = shlex.split(ffmpeg_cmd)
p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p1.communicate()
print(p1.returncode)  # prints 0


    


    With this code, the video is looped as many times as needed to match the length of the audio.

    


    Both versions work correctly on my computer.

    


    This question is almost the same but in my case FFmpeg is able to access the signed URL.

    


  • I tried to play audio in an Alexa skill from my S3 bucket; the test tab shows a response, but I can't hear any sound

    19 April 2022, by Siti Mayna

    So I tried to play the audio in my Alexa skill from my S3 bucket. In the test tab a response shows up, but in fact I can't hear any sound. Another fact is that I tried the sample audio from https://developer.amazon.com/en-US/docs/alexa/custom-skills/ask-soundlibrary.html and it worked, so why won't it work when the audio comes from my own S3 bucket?

    


    Notes:

    


    I've tried to test the skill using my mobile phone also.

    


    I've tried to encode the audio using FFmpeg.

    


    I've tried to use Jovo to convert the audio. https://v3.jovo.tech/audio-converter

    


    I don't know how to fix this error.

    


    There is no error message on cloud watch.

    


    Assumptions:
There is some problem with the audio resources, or some additional setup is required to play audio from an S3 bucket, since the sample audio works.

    


    Steps to reproduce:

    Build the interaction model

    Encode the audio to make it Alexa-skill friendly (fulfilling the requirements, such as the sample rate, etc.). I used and tried all of the following; a quick way to verify the result is sketched after the commands:

    A :

    


    ffmpeg -i input.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 16000 -write_xing 0 output.mp3


    


    B :

    


    ffmpeg -i input.mp3 -ac 2 -codec:a libmp3lame -b:a 48k -ar 24000 -write_xing 0 output.mp3


    


    C :

    


    ffmpeg -y -i input.mp3 -ar 16000 -ab 48k -codec:a libmp3lame -ac 1 output.mp3
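
    As mentioned above, here is a small check that the encoded file matches what these commands are trying to produce (MP3 at 48 kb/s with the intended sample rate); this is only a sketch and assumes ffprobe is installed locally and that the file is named output.mp3:

import json
import subprocess

# Ask ffprobe for the stream details of the encoded file, as JSON
probe = subprocess.run(
    ["ffprobe", "-v", "error", "-show_streams", "-of", "json", "output.mp3"],
    capture_output=True, text=True, check=True,
)
stream = json.loads(probe.stdout)["streams"][0]
print(stream["codec_name"], stream["sample_rate"], stream.get("bit_rate"))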


    Upload the audio resources to the S3 bucket. The audio samples are in S3 storage, but none of them produce any sound.
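
    For what it's worth, the upload step itself can be scripted with boto3; this is only a minimal illustrative sketch (the bucket name is a placeholder and the ContentType value is an assumption):

import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "output.mp3",
    "my-skill-media-bucket",                  # placeholder bucket name
    "Media/one-small-step-alexa-24.mp3",      # object key referenced from APLA.json
    ExtraArgs={"ContentType": "audio/mpeg"},  # assumption: serve the file as audio/mpeg
)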

    Use the link and insert it into APLA.json:

    {
      "type": "APLA",
      "version": "0.91",
      "description": "Simple document that generates speech",
      "mainTemplate": {
        "parameters": [
          "payload"
        ],
        "type": "Sequencer",
        "items": [
          {
            "type": "Audio",
            "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
          }
        ]
      }
    }



    


    Note: I changed the source link depending on which audio file I was testing.

    The relevant handler in lambda_function.py:

# Imports assumed by this snippet (standard ASK SDK for Python layout)
import json
import logging
from typing import Any, Dict

import ask_sdk_core.utils as ask_utils
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model.interfaces.alexa.presentation.apla import RenderDocumentDirective
from ask_sdk_model.response import Response

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the APLA JSON document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class LaunchRequestHandler(AbstractRequestHandler):
    """Handler for Skill Launch."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        logger.info("In LaunchRequestHandler")

        speak_output = "Hello World!"
        # .ask("add a reprompt if you want to keep the session open for the user to respond")

        # Speech is commented out; the response only renders the APLA document that plays the audio
        return (
            handler_input.response_builder
                # .speak(speak_output)
                .add_directive(
                    RenderDocumentDirective(
                        token="pagerToken",
                        document=_load_apl_document("APLA.json"),
                        datasources={}
                    )
                )
                .response
        )
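
    The snippet above only shows the handler; in the standard ASK SDK template it is assumed to be registered with a SkillBuilder roughly like this (this wiring is not shown above):

from ask_sdk_core.skill_builder import SkillBuilder

sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())

# Lambda entry point used by the skill
lambda_handler = sb.lambda_handler()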


    Deploy

    Test it

    The result of the test on my end:

[screenshot: The response for testing]

    The JSON response:

    


    {
    "body": {
        "version": "1.0",
        "response": {
            "directives": [
                {
                    "type": "Alexa.Presentation.APLA.RenderDocument",
                    "token": "pagerToken",
                    "document": {
                        "type": "APLA",
                        "version": "0.91",
                        "description": "Simple document that generates speech",
                        "mainTemplate": {
                            "parameters": [
                                "payload"
                            ],
                            "type": "Sequencer",
                            "items": [
                                {
                                    "type": "Audio",
                                    "source": "https://72578561-d9d8-47b4-811c-cafbcbc5ddb9-us-east-1.s3.amazonaws.com/Media/one-small-step-alexa-24.mp3"
                                }
                            ]
                        }
                    },
                    "datasources": {}
                }
            ],
            "type": "_DEFAULT_RESPONSE"
        },
        "sessionAttributes": {},
        "userAgent": "ask-python/1.16.1 Python/3.7.12"
    }
}


    In my CloudWatch logs:
[screenshot: CloudWatch]

  • Maven dependency works on machine but not when packaged in a .war - org.bytedeco.javacpp.avutil

    26 December 2016, by Jake Miller

    I have a web app where you can upload a video and a thumbnail is created for it. It works completely fine on my machine, but as soon as I deploy it to AWS (Amazon Web Services) as a .war file, I run into dependency issues with Maven.

    Here's the exception from the logs:

    Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.bytedeco.javacpp.avutil
       at java.lang.Class.forName0(Native Method) ~[na:1.8.0_101]
       at java.lang.Class.forName(Class.java:348) ~[na:1.8.0_101]
       at org.bytedeco.javacpp.Loader.load(Loader.java:472) ~[javacpp-1.2.1.jar!/:1.2.1]
       at org.bytedeco.javacpp.Loader.load(Loader.java:417) ~[javacpp-1.2.1.jar!/:1.2.1]
       at org.bytedeco.javacpp.avformat$AVFormatContext.<clinit>(avformat.java:2597) ~[ffmpeg-2.8.1-1.1.jar!/:1.2.1]
       at org.bytedeco.javacv.FFmpegFrameGrabber.startUnsafe(FFmpegFrameGrabber.java:391) ~[javacv-1.2.jar!/:1.2]
       at org.bytedeco.javacv.FFmpegFrameGrabber.start(FFmpegFrameGrabber.java:385) ~[javacv-1.2.jar!/:1.2]
       at com.myapp.app.service.ICampaignService.createThumbnail(ICampaignService.java:425) ~[classes!/:0.0.44T-SNAPSHOT]
       at com.myapp.app.service.ICampaignService$$FastClassBySpringCGLIB$$47736265.invoke(<generated>) ~[classes!/:0.0.44T-SNAPSHOT]
       at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720) ~[spring-aop-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) ~[spring-aop-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99) ~[spring-tx-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:280) ~[spring-tx-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96) ~[spring-tx-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655) ~[spring-aop-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at com.myapp.app.service.ICampaignService$$EnhancerBySpringCGLIB$$67e59894.createThumbnail(<generated>) ~[classes!/:0.0.44T-SNAPSHOT]
       at com.myapp.app.controllers.CampaignController.uploadCampaign(CampaignController.java:237) ~[classes!/:0.0.44T-SNAPSHOT]
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_101]
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_101]
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_101]
       at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_101]
       at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221) ~[spring-web-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136) ~[spring-web-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:114) ~[spring-webmvc-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827) ~[spring-webmvc-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738) ~[spring-webmvc-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) ~[spring-webmvc-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:963) ~[spring-webmvc-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       ... 85 common frames omitted

    Here are my Maven dependencies for the FFmpeg wrapper I'm using:

    <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>javacv</artifactId>
       <version>1.3</version>
    </dependency>
    <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>javacpp</artifactId>
       <version>1.3</version>
    </dependency>

    I've tried various different versions but nothing has worked so far. What's strange is that the error says org.bytedeco.javacpp.avutil, yet I have access to the avutil class in my IDE.

    EDIT

    I've also tried everything in this GitHub issue: https://github.com/bytedeco/javacpp-presets/issues/225

    No solution I've found has worked. I've tried various versions and various org.bytedeco dependencies, but they all yield the same error.