Advanced search

Media (0)

Keyword: - Tags -/serveur

No media matching your criteria is available on the site.

Other articles (65)

  • Customizing categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

  • Diogene: creating specific form masks for editing content

    26 October 2010, by

    Diogene is one of the SPIP plugins activated by default (as an extension) when MediaSPIP is initialized.
    What this plugin is for
    Creating form masks
    The Diogène plugin makes it possible to create form masks specific to each sector for the three SPIP objects: articles; sections (rubriques); sites
    It thus allows you to define, for a given sector, one form mask per object, adding or removing fields in order to make the form (...)

On other sites (4315)

  • PipedInputStream / PipedOutputStream, ImageIO and ffmpeg

    19 April 2015, by jdevelop

    I have the following code in Scala:

         val pos = new PipedOutputStream()
         val pis = new PipedInputStream(pos)

         Future {
           LOG.trace("Start rendering")
           generateFrames(videoRenderParams.length) {
             img ⇒ ImageIO.write(img, "PNG", pos)
           }
           pos.flush()
           IOUtils.closeQuietly(pos)
           LOG.trace("Finished rendering")
         } onComplete {
           case Success(_) ⇒
             LOG.trace("Complete successfully")
           case Failure(err) ⇒
             LOG.error("Can't render stuff", err)
             IOUtils.closeQuietly(pis)
             IOUtils.closeQuietly(pos)
         }

         val prc = (ffmpegCli #< pis).!(logger)

    The Future simply writes the generated images one by one to the OutputStream, while the ffmpeg process reads them from stdin and converts them to an MP4 file.

    That works pretty well, but for some reason I sometimes get the following stack traces:

    I/O error Pipe closed for process: <input stream>
    java.io.IOException: Pipe closed
       at java.io.PipedInputStream.checkStateForReceive(PipedInputStream.java:260)
       at java.io.PipedInputStream.receive(PipedInputStream.java:226)
       at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
       at scala.sys.process.BasicIO$.loop$1(BasicIO.scala:236)
       at scala.sys.process.BasicIO$.transferFullyImpl(BasicIO.scala:242)
       at scala.sys.process.BasicIO$.transferFully(BasicIO.scala:223)
       at scala.sys.process.ProcessImpl$PipeThread.runloop(ProcessImpl.scala:159)
       at scala.sys.process.ProcessImpl$PipeSource.run(ProcessImpl.scala:179)

    At the same time I'm getting the following error from another stream:

    javax.imageio.IIOException: I/O error writing PNG file!
       at com.sun.imageio.plugins.png.PNGImageWriter.write(PNGImageWriter.java:1168)
       at javax.imageio.ImageWriter.write(ImageWriter.java:615)
       at javax.imageio.ImageIO.doWrite(ImageIO.java:1612)
       at javax.imageio.ImageIO.write(ImageIO.java:1578)
       at

    So it seems the pipe was broken somewhere in between: ffmpeg cannot read the data, and ImageIO cannot write it.

    What is even more interesting: the problem is reproducible only on one particular Linux server (an Amazon box); it works flawlessly on other Linux machines. So I wonder if somebody could point me to the possible causes of this error.

    What I've tried so far:

    • use Oracle JDK 8 and OpenJDK
    • use different versions of FFmpeg

    Nothing has worked so far. A sketch of an alternative way to wire the pipe is shown below.
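    One thing that might explain the machine-to-machine difference is the extra pipe thread that #< introduces: PipedOutputStream.write throws "Pipe closed" as soon as the reading side dies or closes the stream, which is exactly what happens if ffmpeg exits before the last frame has been written. Below is a minimal sketch (not the poster's code) that drops the Piped* pair entirely and feeds frames straight into the process stdin through ProcessIO; it assumes the existing ffmpegCli ProcessBuilder and an iterator of frames standing in for generateFrames.

         import java.io.OutputStream
         import javax.imageio.ImageIO
         import scala.sys.process._
         import scala.util.control.NonFatal

         // Hypothetical helper: ffmpegCli is the poster's ProcessBuilder,
         // frames stands in for whatever generateFrames produces.
         def renderWithFfmpeg(ffmpegCli: ProcessBuilder,
                              frames: Iterator[java.awt.image.BufferedImage]): Int = {
           val io = new ProcessIO(
             // stdin: write PNG frames directly into ffmpeg's stdin, no Piped* streams
             (stdin: OutputStream) => {
               try {
                 frames.foreach(img => ImageIO.write(img, "PNG", stdin))
               } catch {
                 case NonFatal(_) => ()   // ffmpeg closed its stdin early; stop writing
               } finally {
                 stdin.close()            // end-of-input lets ffmpeg finalize the MP4
               }
             },
             // stdout / stderr: drain them so ffmpeg never blocks on a full pipe
             BasicIO.toStdOut,
             BasicIO.toStdErr
           )
           ffmpegCli.run(io).exitValue()  // block until ffmpeg finishes, return its exit code
         }

    With this wiring a premature exit of ffmpeg shows up as a non-zero exit code rather than a "Pipe closed" exception inside the rendering Future, which may make the Amazon-only failure easier to diagnose.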

  • Trying to grab a video stream from an 802W device

    1 June 2015, by brentil

    A group of us in the RC hobby forums started trying to use a device called the 802W; it takes RCA in and rebroadcasts it over a WiFi network that you connect to with an Android or iOS device. These devices are typically used as backup-camera add-on systems for vehicles. We want to use it to do FPV (First Person View) with smartphones instead of buying more expensive FPV goggles.

    802W device example (plenty of clones online)

    http://www.amazon.com/Wireless-Backup-Camera-Transmitter-Android/dp/B00LJPTJSY

    The problem is that you can only connect to it with their WIFI_AVIN or WIFI_AVIN2 application from the app stores, because they don't publish any information about how to grab the stream data. We want to write our own apps that use the stream to present the information better. We've tried using VLC to grab the stream from an Android phone or a Windows PC, but with no success so far. I was hoping someone could look at the Wireshark outputs and understand them better than I do. I "think" it's a UDP multicast being broadcast, but I just don't know enough to be sure. We've tried pointing VLC at network streams directly on the device and at udp://@ style addresses, but part of the issue may also be that we're missing the path of the stream file. One way to sanity-check the multicast guess is sketched at the end of this post.

    Attempting to reverse engineer their code for learning purposes showed that ffmpeg is bundled inside a compiled .so library, which also seems to be where the actual connection code lives; we were unable to dig into it.

    In the images, 192.168.72.33 is my phone and 192.168.72.173 is the 802W device.

    Image of what I believe is a UDP broadcast of the video information.

    This is what the stream turns into when the device connects using the WIFI_AVIN application.
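    As mentioned above, one quick way to sanity-check the multicast guess, independent of VLC, is to join the suspected group and see whether packets arrive at all. Below is a minimal Scala sketch; the group address and port are hypothetical placeholders and would need to be replaced by whatever the Wireshark capture actually shows.

         import java.net.{DatagramPacket, InetAddress, MulticastSocket}

         object MulticastProbe {
           def main(args: Array[String]): Unit = {
             // Hypothetical values: substitute the group/port seen in the Wireshark capture.
             val group = InetAddress.getByName("239.255.0.1")
             val port  = 5004

             val socket = new MulticastSocket(port)
             socket.joinGroup(group)                      // join the suspected multicast group
             val buf = new Array[Byte](65535)
             for (_ <- 1 to 20) {                         // dump the first 20 packets
               val packet = new DatagramPacket(buf, buf.length)
               socket.receive(packet)
               println(s"got ${packet.getLength} bytes from ${packet.getAddress}")
             }
             socket.close()
           }
         }

    If packets do arrive, the first payload bytes usually tell you whether the stream is raw MPEG-TS (0x47 sync bytes every 188 bytes) or RTP-wrapped, which in turn suggests whether a plain udp://@group:port URL or an SDP description is the right thing to hand to VLC or ffmpeg.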

  • Create mp4 thumbnail in node.js

    21 May 2015, by trdavidson

    New to node.js and the AWS framework, so I apologize in advance. I am trying to configure my app's AWS setup to automatically create thumbnails using AWS Lambda. This works great with the example provided by Amazon for regular .jpg images (walkthrough here: https://alestic.com/2014/11/aws-lambda-cli/).

    However, doing the same operation for mp4 files seems exponentially more difficult. After some searching, it appears the way to do this is with the ffmpeg module. The problem is that I do not really understand the response object returned by AWS, and so am not sure how to manipulate it so that ffmpeg can use it.

    Current code:

    // dependencies
    var async = require('async');
    var AWS = require('aws-sdk');
    var gm = require('gm')
               .subClass({ imageMagick: true }); // Enable ImageMagick integration.
    var util = require('util');
    var ffmpeg = require('ffmpeg');
    var stream = require('stream')

    // constants
    var MAX_WIDTH  = 250;
    var MAX_HEIGHT = 250;

    // get reference to S3 client
    var s3 = new AWS.S3();

    exports.handler = function(event, context) {
       // Read options from the event.
       console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
       var srcBucket = event.Records[0].s3.bucket.name;
       // Object key may have spaces or unicode non-ASCII characters.
       var srcKey    =
       decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));  
       var dstBucket = srcBucket + "small";
       var dstKey    = "small-" + srcKey;
    // Sanity check: validate that source and destination are different buckets.
    if (srcBucket == dstBucket) {
       console.error("Destination bucket must not match source bucket.");
       return;
    }

    // Infer the image type.
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
       console.error('unable to infer image type for key ' + srcKey);
       return;
    }
    var imageType = typeMatch[1];
    if (imageType != "mp4" && imageType != "avi") {
       console.log('skipping non-image ' + srcKey);
       return;
    }

    // Download the image from S3, transform, and upload to a different S3 bucket.
    async.waterfall([
       function download(next) {
           // Download the image from S3 into a buffer.

           s3.getObject({
                   Bucket: srcBucket,
                   Key: srcKey
               },
               next);
           },
       function tranform(response, next) {
           var instream = new stream.Readable();
           instream.push(response.Body)
           instream.push(null)

           var outstream = new stream();

           ffmpeg(instream)
           .screenshots({timestamps: 1, size: '200x200'})
           .output('screenshot.png')
           .output(outstream)
           .on('end', function(){
               console.log('screenshots finished processing son!')
           })

           gm(outstream, 'screenshot.png').size(function(err, size) {
               // Infer the scaling factor to avoid stretching the image unnaturally.
               var scalingFactor = Math.min(
                   MAX_WIDTH / size.width,
                   MAX_HEIGHT / size.height
               );
               var width  = scalingFactor * size.width;
               var height = scalingFactor * size.height;

               // Transform the image buffer in memory.
               this.resize(width, height)
                   .toBuffer(imageType, function(err, buffer) {
                       if (err) {
                           next(err);
                       } else {
                           next(null, response.ContentType, buffer);
                       }
                   });
           });
       },
       function upload(contentType, data, next) {
           // Stream the transformed image to a different S3 bucket.
           s3.putObject({
                   Bucket: dstBucket,
                   Key: dstKey,
                   Body: data,
                   ContentType: contentType
               },
               next);
           }
       ], function (err) {
           if (err) {
               console.error(
                   'Unable to resize ' + srcBucket + '/' + srcKey +
                   ' and upload to ' + dstBucket + '/' + dstKey +
                   ' due to an error: ' + err
               );
           } else {
               console.log(
                   'Successfully resized ' + srcBucket + '/' + srcKey +
                   ' and uploaded to ' + dstBucket + '/' + dstKey
               );
           }

           context.done();
       }
    );

    } ;

    Any suggestions are welcome! Thanks