
Other articles (35)
-
No talk of markets, clouds, etc.
10 April 2011
The vocabulary used on this site tries to avoid any reference to the fads that flourish merrily across web 2.0 and in the companies that live off it.
You are therefore invited to avoid terms such as "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creations on the Internet and lets authors keep the greatest possible autonomy.
No "Gold or Premium contract" is therefore planned, no (...) -
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out. -
The plugin: Podcasts
14 July 2010
The problem of podcasting is, once again, a problem that reveals the state of standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly tied to iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free" and is notably backed by Yahoo and the Miro software.
File types supported in the feeds
Apple's format only allows the following types in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
On other sites (4805)
-
Create mp4 thumbnail in node.js
21 May 2015, by trdavidson
I'm new to node.js and the AWS framework, so I apologize in advance. I am trying to configure the AWS side of my app to automatically create thumbnails using AWS Lambda. This works great for regular .jpg images, following the example provided by Amazon (walkthrough here: https://alestic.com/2014/11/aws-lambda-cli/).
However, doing the same for mp4 files seems exponentially more difficult. After some searching, it appears the way to do this is via the ffmpeg module. The problem is that I do not understand the response object returned by AWS at all, and thus am not sure how to manipulate it so that ffmpeg can use it.
Current code:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm')
    .subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');
// Note: the fluent .screenshots() API used below belongs to the
// "fluent-ffmpeg" package, not the "ffmpeg" package.
var ffmpeg = require('fluent-ffmpeg');
var stream = require('stream');

// constants
var MAX_WIDTH = 250;
var MAX_HEIGHT = 250;

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function(event, context) {
    // Read options from the event.
    console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var srcKey =
        decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    var dstBucket = srcBucket + "small";
    var dstKey = "small-" + srcKey;

    // Sanity check: validate that source and destination are different buckets.
    if (srcBucket == dstBucket) {
        console.error("Destination bucket must not match source bucket.");
        return;
    }

    // Infer the file type from the extension.
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        console.error('unable to infer file type for key ' + srcKey);
        return;
    }
    var imageType = typeMatch[1];
    if (imageType != "mp4" && imageType != "avi") {
        console.log('skipping non-video ' + srcKey);
        return;
    }

    // Download the video from S3, transform, and upload to a different S3 bucket.
    async.waterfall([
        function download(next) {
            // Download the object from S3 into a buffer.
            s3.getObject({
                Bucket: srcBucket,
                Key: srcKey
            }, next);
        },
        function transform(response, next) {
            // Wrap the downloaded buffer in a readable stream for ffmpeg.
            var instream = new stream.Readable();
            instream.push(response.Body);
            instream.push(null);
            // "new stream()" is not a constructor; PassThrough is the
            // readable/writable pipe this code seems to want.
            var outstream = new stream.PassThrough();
            ffmpeg(instream)
                .screenshots({timestamps: [1], size: '200x200'}) // timestamps expects an array
                .output('screenshot.png')
                .output(outstream)
                .on('end', function() {
                    console.log('screenshots finished processing son!');
                });
            gm(outstream, 'screenshot.png').size(function(err, size) {
                // Infer the scaling factor to avoid stretching the image unnaturally.
                var scalingFactor = Math.min(
                    MAX_WIDTH / size.width,
                    MAX_HEIGHT / size.height
                );
                var width = scalingFactor * size.width;
                var height = scalingFactor * size.height;
                // Transform the image buffer in memory.
                this.resize(width, height)
                    .toBuffer(imageType, function(err, buffer) {
                        if (err) {
                            next(err);
                        } else {
                            next(null, response.ContentType, buffer);
                        }
                    });
            });
        },
        function upload(contentType, data, next) {
            // Stream the transformed image to a different S3 bucket.
            s3.putObject({
                Bucket: dstBucket,
                Key: dstKey,
                Body: data,
                ContentType: contentType
            }, next);
        }
    ], function (err) {
        if (err) {
            console.error(
                'Unable to resize ' + srcBucket + '/' + srcKey +
                ' and upload to ' + dstBucket + '/' + dstKey +
                ' due to an error: ' + err
            );
        } else {
            console.log(
                'Successfully resized ' + srcBucket + '/' + srcKey +
                ' and uploaded to ' + dstBucket + '/' + dstKey
            );
        }
        context.done();
    });
};
Any suggestions are welcome! Thanks
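One possible direction, sketched below under explicit assumptions: instead of piping the S3 response body through in-memory streams, write the video to Lambda's /tmp scratch space and invoke a standalone ffmpeg binary to grab one frame. Lambda does not ship ffmpeg, so this assumes a static ffmpeg build is bundled at the root of the deployment package; the destination bucket naming mirrors the question's srcBucket + "small" convention. This is a hedged sketch, not a tested solution.

// dependencies
var fs = require('fs');
var path = require('path');
var execFile = require('child_process').execFile;
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = function(event, context) {
    var srcBucket = event.Records[0].s3.bucket.name;
    var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    var videoPath = path.join('/tmp', path.basename(srcKey));
    var thumbPath = path.join('/tmp', 'thumbnail.png');

    s3.getObject({ Bucket: srcBucket, Key: srcKey }, function(err, data) {
        if (err) return context.done(err);
        // data.Body is a Buffer; persist it so ffmpeg has a seekable file.
        fs.writeFileSync(videoPath, data.Body);

        // -ss 1: seek one second in; -frames:v 1: emit a single frame;
        // scale=250:-1 caps the width at 250px and keeps the aspect ratio.
        var args = ['-ss', '1', '-i', videoPath,
                    '-frames:v', '1', '-vf', 'scale=250:-1', '-y', thumbPath];
        execFile('./ffmpeg', args, function(err) { // './ffmpeg' = bundled static binary (assumption)
            if (err) return context.done(err);
            s3.putObject({
                Bucket: srcBucket + 'small', // same naming convention as the question
                Key: 'small-' + srcKey + '.png',
                Body: fs.readFileSync(thumbPath),
                ContentType: 'image/png'
            }, context.done);
        });
    });
};

Working on the filesystem sidesteps the stream plumbing entirely, which matters here because ffmpeg needs a seekable input for many mp4 files (the moov atom can sit at the end of the container), and /tmp is the only writable path inside a Lambda container.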
-
Trying to grab video stream from a 802W device
1 June 2015, by brentil
A group of us on the RC hobby forums started experimenting with a device called the 802W: it takes RCA video in and rebroadcasts it over its own WiFi network, which you join from an Android or iOS device. These devices are typically sold as add-on backup-camera systems for vehicles. We want to use one for FPV (First Person Video/View) with smartphones instead of buying more expensive FPV goggles.
802W device example (plenty of clones online)
http://www.amazon.com/Wireless-Backup-Camera-Transmitter-Android/dp/B00LJPTJSY
The problem is that you can only connect to it with their WIFI_AVIN or WIFI_AVIN2 apps from the app stores, because the vendor does not publish how to grab the stream data. We want to write our own apps that can use the stream to present the information better. We have tried grabbing the stream with VLC from an Android phone and from a Windows PC, with no success so far. I was hoping someone could look at the Wireshark captures and understand them better than I do. I "think" it is a UDP multicast broadcast, but I just don't know enough to be sure. We have also tried pointing VLC at network streams directly on the device and at udp://@ style addresses, but I think part of the issue may be that we are missing the file path of the stream.
Attempting to reverse engineer their code for learning purposes showed that ffmpeg lives inside a compiled .so library, which also seems to be where the actual connection code happens; we were unable to dig into it.
In the images 192.168.72.33 is my phone and 192.168.72.173 is the 802W device.
Image of what I believe is a UDP broadcast of the video information.
This is what the stream turns into when the device connects using the WIFI_AVIN application.
-
PipedInputStream / PipedOutputStream, ImageIO and ffmpeg
19 April 2015, by jdevelop
I have the following code in Scala:
val pos = new PipedOutputStream()
val pis = new PipedInputStream(pos)
Future {
LOG.trace("Start rendering")
generateFrames(videoRenderParams.length) {
img ⇒ ImageIO.write(img, "PNG", pos)
}
pos.flush()
IOUtils.closeQuietly(pos)
LOG.trace("Finished rendering")
} onComplete {
case Success(_) ⇒
LOG.trace("Complete successfully")
case Failure(err) ⇒
LOG.error("Can't render stuff", err)
IOUtils.closeQuietly(pis)
IOUtils.closeQuietly(pos)
}
val prc = (ffmpegCli #< pis).!(logger)

The Future simply writes the generated images one by one to the OutputStream, while the ffmpeg process reads the images from stdin and converts them to an MP4 file.
That works pretty well, but for some reason I sometimes get the following stack traces:
I/O error Pipe closed for process: <input stream>
java.io.IOException: Pipe closed
at java.io.PipedInputStream.checkStateForReceive(PipedInputStream.java:260)
at java.io.PipedInputStream.receive(PipedInputStream.java:226)
at java.io.PipedOutputStream.write(PipedOutputStream.java:149)
at scala.sys.process.BasicIO$.loop$1(BasicIO.scala:236)
at scala.sys.process.BasicIO$.transferFullyImpl(BasicIO.scala:242)
at scala.sys.process.BasicIO$.transferFully(BasicIO.scala:223)
at scala.sys.process.ProcessImpl$PipeThread.runloop(ProcessImpl.scala:159)
at scala.sys.process.ProcessImpl$PipeSource.run(ProcessImpl.scala:179)

At the same time, I get the following error from another stream:
javax.imageio.IIOException: I/O error writing PNG file!
at com.sun.imageio.plugins.png.PNGImageWriter.write(PNGImageWriter.java:1168)
at javax.imageio.ImageWriter.write(ImageWriter.java:615)
at javax.imageio.ImageIO.doWrite(ImageIO.java:1612)
at javax.imageio.ImageIO.write(ImageIO.java:1578)
at (...)

So it seems that the streams get broken somewhere in between, so that ffmpeg cannot read the data and ImageIO cannot write it.
What is even more interesting: the problem is reproducible only on a certain Linux server (Amazon). It works flawlessly on other Linux boxes. So I wonder if somebody could point me to possible causes of this error.
What I've tried so far:
- using Oracle JDK 8 and OpenJDK
- using different versions of FFmpeg
Nothing has worked so far.
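For what it's worth, here is a hedged sketch of one likely culprit and a mitigation, reusing the ffmpegCli, generateFrames, videoRenderParams and logger values from the question. java.io piped streams are strictly single-reader/single-writer: as soon as the reading side is closed or its thread dies, every subsequent write fails with "Pipe closed". So if ffmpeg exits early on one particular box (a different build rejecting a frame, for instance), the renderer's ImageIO.write and the process pump both fail exactly as in the stack traces above. Enlarging the tiny default pipe buffer and starting the consumer before the producer narrows the race:

import java.io.{PipedInputStream, PipedOutputStream}
import javax.imageio.ImageIO
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.sys.process._

val pos = new PipedOutputStream()
// 1 MiB buffer instead of PipedInputStream's 1 KiB default, so a briefly
// stalled ffmpeg does not immediately block the rendering thread.
val pis = new PipedInputStream(pos, 1 << 20)

// Connect the consumer first, so the read side of the pipe is alive
// before the first ImageIO.write lands in it.
val exitCode = Future {
  (ffmpegCli #< pis).!(logger)
}

val rendering = Future {
  try {
    generateFrames(videoRenderParams.length) { img =>
      ImageIO.write(img, "PNG", pos)
    }
  } finally {
    pos.close() // close exactly once: this is ffmpeg's EOF signal
  }
}

If the Amazon box still reproduces the failure, ffmpeg's stderr (captured through logger) should show whether ffmpeg is the side that dies first; that would confirm the early-exit theory rather than a JDK difference.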