
Other articles (12)

  • Definable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, which saves the user from having to configure them manually after changing the site's appearance.
    These image sizes are also available in the specific configuration of MediaSPIP Core. The maximum size of the site logo in pixels, we allow (...)

  • Apache-specific configuration

    4 February 2011, by

    Specific modules
    When configuring Apache, it is advisable to enable certain modules that are not specific to MediaSPIP but help improve performance: mod_deflate and mod_headers so that Apache compresses pages automatically (see this tutorial); mod_expires to correctly manage the expiry of hits (see this tutorial).
    It is also advisable to add Apache support for the MIME type of WebM files, as described in this tutorial.
    Creating a (...)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

On other sites (3354)

  • AWS Lambda function to modify video

    4 February 2017, by Gold Fish

    I want to create a Lambda function that is invoked whenever someone uploads to the S3 bucket. The purpose of the function is to take the uploaded file and, if it is a video file (mp4), create a new file which is a preview of it (using ffmpeg). The Lambda function is written in Node.js.
    I took the code here for reference, but I am doing something wrong, because I get an error saying that no input was specified for SetStartTime:

    // dependencies
    var async = require('async');
    var AWS = require('aws-sdk');
    var util = require('util');
    var ffmpeg = require('fluent-ffmpeg');

    // get reference to S3 client
    var s3 = new AWS.S3();


    exports.handler = function(event, context, callback) {
       // Read options from the event.
       console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
       var srcBucket = event.Records[0].s3.bucket.name;
       // Object key may have spaces or unicode non-ASCII characters.
       var srcKey    =
       decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));  
       var dstBucket = srcBucket;
       var dstKey    = "preview_" + srcKey;

       // Sanity check: validate that source and destination are different buckets.
       if (srcBucket == dstBucket) {
           callback("Source and destination buckets are the same.");
           return;
       }

       // Infer the video type.
       var typeMatch = srcKey.match(/\.([^.]*)$/);
       if (!typeMatch) {
           callback("Could not determine the video type.");
           return;
       }
       var videoType = typeMatch[1];
       if (videoType != "mp4") {
           callback(`Unsupported video type: ${videoType}`);
           return;
       }

       // Download the video from S3, transform, and upload to a different S3 bucket.
       async.waterfall([
           function download(next) {
               // Download the video from S3 into a buffer.
               s3.getObject({
                       Bucket: srcBucket,
                       Key: srcKey
                   },
                   next);
               },
           function transform(response, next) {
               console.log("response.Body:\n", response.Body);
               ffmpeg(response.Body)
                   .setStartTime('00:00:03')
                   .setDuration('10')   //.output('public/videos/test/test.mp4')
                   .toBuffer(videoType, function(err, buffer) {
                       if (err) {
                           next(err);
                       } else {
                           next(null, response.ContentType, buffer);
                       }
                   });
           },
           function upload(contentType, data, next) {
               // Stream the transformed image to a different S3 bucket.
               s3.putObject({
                       Bucket: dstBucket,
                       Key: dstKey,
                       Body: data,
                       ContentType: contentType
                   },
                   next);
               }
           ], function (err) {
               if (err) {
                   console.error(
                       'Unable to modify ' + srcBucket + '/' + srcKey +
                       ' and upload to ' + dstBucket + '/' + dstKey +
                       ' due to an error: ' + err
                   );
               } else {
                   console.log(
                    'Successfully modified ' + srcBucket + '/' + srcKey +
                       ' and uploaded to ' + dstBucket + '/' + dstKey
                   );
               }

               callback(null, "message");
           }
       );
    };

    So what am I doing wrong?
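
    A plausible explanation (a hedged guess, not a confirmed diagnosis): fluent-ffmpeg expects a file path or a readable stream as its input rather than a raw Buffer, and it does not provide a toBuffer() method, so passing response.Body straight in can leave ffmpeg without a usable input. Below is a minimal sketch of the common workaround of going through /tmp; the helper name and paths are made up for illustration, and it assumes an ffmpeg binary is available to the Lambda runtime.

     // A hedged sketch (illustrative names, not the asker's code): write the
     // downloaded S3 object to /tmp, let ffmpeg cut a 10-second preview
     // starting at 00:00:03, then hand the result back as a Buffer so the
     // existing upload step can send it to S3 unchanged.
     var fs = require('fs');
     var ffmpeg = require('fluent-ffmpeg');

     function makePreview(inputBuffer, done) {
         var inputPath = '/tmp/input.mp4';    // Lambda's writable scratch space
         var outputPath = '/tmp/preview.mp4';
         fs.writeFileSync(inputPath, inputBuffer);

         ffmpeg(inputPath)
             .setStartTime('00:00:03')
             .setDuration('10')
             .on('error', function(err) { done(err); })
             .on('end', function() {
                 done(null, fs.readFileSync(outputPath));
             })
             .save(outputPath);               // runs ffmpeg and writes the preview
     }

    In the waterfall above, something along these lines would take the place of the ffmpeg(response.Body)...toBuffer(...) chain, with the resulting buffer passed on to the upload step unchanged.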

  • Maven dependency works on machine but not when packaged in a .war - org.bytedeco.javacpp.avutil

    26 December 2016, by Jake Miller

    I have a web app where you can upload a video and a thumbnail is created for this video. It works completely fine on my machine, but as soon as I deploy to AWS (Amazon Web Services) as a .war file, I experience dependency issues with maven.

    Here's the exception from the logs:

    Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.bytedeco.javacpp.avutil
       at java.lang.Class.forName0(Native Method) ~[na:1.8.0_101]
       at java.lang.Class.forName(Class.java:348) ~[na:1.8.0_101]
       at org.bytedeco.javacpp.Loader.load(Loader.java:472) ~[javacpp-1.2.1.jar!/:1.2.1]
       at org.bytedeco.javacpp.Loader.load(Loader.java:417) ~[javacpp-1.2.1.jar!/:1.2.1]
       at org.bytedeco.javacpp.avformat$AVFormatContext.<clinit>(avformat.java:2597) ~[ffmpeg-2.8.1-1.1.jar!/:1.2.1]
       at org.bytedeco.javacv.FFmpegFrameGrabber.startUnsafe(FFmpegFrameGrabber.java:391) ~[javacv-1.2.jar!/:1.2]
       at org.bytedeco.javacv.FFmpegFrameGrabber.start(FFmpegFrameGrabber.java:385) ~[javacv-1.2.jar!/:1.2]
       at com.myapp.app.service.ICampaignService.createThumbnail(ICampaignService.java:425) ~[classes!/:0.0.44T-SNAPSHOT]
       at com.myapp.app.service.ICampaignService$$FastClassBySpringCGLIB$$47736265.invoke(<generated>) ~[classes!/:0.0.44T-SNAPSHOT]
       at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720) ~[spring-aop-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) ~[spring-aop-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99) ~[spring-tx-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:280) ~[spring-tx-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96) ~[spring-tx-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655) ~[spring-aop-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at com.myapp.app.service.ICampaignService$$EnhancerBySpringCGLIB$$67e59894.createThumbnail(<generated>) ~[classes!/:0.0.44T-SNAPSHOT]
       at com.myapp.app.controllers.CampaignController.uploadCampaign(CampaignController.java:237) ~[classes!/:0.0.44T-SNAPSHOT]
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_101]
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_101]
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_101]
       at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_101]
       at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221) ~[spring-web-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136) ~[spring-web-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:114) ~[spring-webmvc-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827) ~[spring-webmvc-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738) ~[spring-webmvc-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) ~[spring-webmvc-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:963) ~[spring-webmvc-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
       ... 85 common frames omitted

    Here are my Maven dependencies for the ffmpeg wrapper I'm using:

    <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>javacv</artifactId>
       <version>1.3</version>
    </dependency>
    <dependency>
       <groupId>org.bytedeco</groupId>
       <artifactId>javacpp</artifactId>
       <version>1.3</version>
    </dependency>

    I’ve tried various different versions but nothing has worked so far. What’s strange is the error says org.bytedeco.javacpp.avutil but I have access to the avutil class in my IDE.

    EDIT

    I’ve also tried everything at this github post : https://github.com/bytedeco/javacpp-presets/issues/225

    No solution I’ve found has worked. I’ve tried various different versions and various different org.bytedeco dependencies but they all yield the same error.
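
    One hedged possibility worth checking: the packaged .war may only contain the Java wrapper jars, not the native ffmpeg libraries for the server's platform, which typically surfaces as exactly this NoClassDefFoundError on org.bytedeco.javacpp.avutil. Below is a sketch of one way to pull the natives in; the coordinates, version and classifier are assumptions to illustrate the idea, not a verified fix for this project:

    <!-- Hypothetical addition: bundle the native ffmpeg libraries for the
         deployment platform so they end up inside the .war. Adjust the
         version and classifier to match the javacv release actually in use. -->
    <dependency>
       <groupId>org.bytedeco.javacpp-presets</groupId>
       <artifactId>ffmpeg</artifactId>
       <version>3.2.1-1.3</version>
       <classifier>linux-x86_64</classifier>
    </dependency>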

  • ffmpeg stream chrome kiosk mode ubuntu 16.04 server

    21 December 2016, by Raul

    I have a weird out-of-sync issue while using ffmpeg to stream a Chrome browser to YouTube Live from an Ubuntu 16.04 server.

    Issue: the output video streamed to YouTube has audio/video out of sync, sometimes by as much as 3 seconds.

    Current flow:

    1) start pulseaudio - we use something like this to start it:

    pulseaudio --start -vvv --disallow-exit --log-target=syslog --high-priority --exit-idle-time=-1 --daemonize

    2) start Xvfb

    Xvfb :0 -ac -screen 0 1920x1080x24

    3) start Chrome on Linux in kiosk mode

    google-chrome --kiosk --disable-gpu --incognito --no-first-run --disable-java --disable-plugins --disable-translate --disk-cache-size=$((1024 * 1024)) --disk-cache-dir=/tmp/chrome/ --user-data-dir=/tmp/chrome/ --force-device-scale-factor=1 --window-size=1920,1080 --window-position=0,0 LOCATION_URL

    4) start ffmpeg

    ffmpeg -y \
     -thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
     -thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
     -c:v libx264 -pix_fmt yuv420p -c:v libx264 -g 48 -crf 24 -filter:v fps=24 -preset ultrafast -tune zerolatency \
     -c:a aac -strict -2 -channel_layout stereo -ab 96k -ac 2 -flags +global_header \
     -f flv YOUTUBE_LIVE_STREAMING_RTMP

    Note: this is running on an Amazon EC2 instance, meaning there is no sound card, so ALSA and pulseaudio are creating a dummy audio card. However, the latency does not come from there. Logs:

    Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Adjust latency mode enabled, configuring sink latency to half of overall latency.
    Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Requested latency=23.22 ms, Received latency=23.22 ms
    Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Final latency 69.66 ms = 23.22 ms + 2*11.61 ms + 23.22 ms

    At this point, here's what we observed:

    1. if we start ffmpeg right after issuing the command to start Chrome, we see DTS errors from ffmpeg. Audio is out of sync with the video, running 3-5 seconds AHEAD. We also noticed the offset remains the same for the full duration of the stream.

    2. if we start ffmpeg after around 10 seconds, audio and video are almost in sync. We then manually added -itsoffset -0.125 to the ffmpeg command and everything is perfect (placement sketched below).
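
    For reference, -itsoffset applies to the input that follows it, so in a command like the one above it would typically sit just before the input whose timestamps need shifting, presumably the pulse audio input here. A hedged sketch (same flags as above, only the offset added):

     ffmpeg -y \
      -thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
      -itsoffset -0.125 -thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
      ... (remaining options unchanged)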

    Questions:

    1. Why would ffmpeg have so much lag if it's started right after Chrome?
    2. Is starting ffmpeg after 10 seconds (or X seconds) the expected behavior? That is, is it because the system needs to wait for the audio/video signals to be "ready"?
    3. Is there a way to reliably calculate or know when Chrome is fully ready and then start ffmpeg? We found it sometimes takes 5 seconds, sometimes 10, depending on the URL we load.
    4. Besides the DTS errors that ffmpeg throws, is there any other way to know if audio/video is out of sync? Sometimes we have a delay of between 0.5 and 1 second, but ffmpeg does not report anything, and a restart is required to "re-balance" the audio/video inputs and get them back in sync.
    5. Can pulseaudio be the problem in this scenario?

    Thank you

    UPDATE Dec 20

    We were able to do some tricks to force Chrome to start the audio on page load, which forces it to connect to pulseaudio. Doing that, plus adding a 3-second delay before starting ffmpeg, there is no more delay when ffmpeg starts.
    However, our app is a WebRTC app, and we have a STRANGER thing happening: if we start the page with no webcam/audio, then once the webcam/audio is enabled, ffmpeg (while showing no errors) has a delay of 2 seconds or so. If we keep talking, within about 30 seconds at most, that delay is GONE.

    So the new questions are:

    1. Besides the DTS errors that ffmpeg throws, is there any other way to know if audio/video is out of sync? Sometimes we have a delay of between 0.5 and 1 second, but ffmpeg does not report anything.
    2. What could cause the initial audio/video out-of-sync issue and the subsequent catching up?