
Media (1)
-
Somos millones 1
21 July 2014
Updated: June 2015
Language: French
Type: Video
Other articles (92)
-
The user profile
12 April 2011 — Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" ("edit your profile") link in the navigation is (...)
-
Configuring language support
15 November 2010 — Accessing the configuration and adding supported languages
To enable support for new languages, go to the "Administrer" section of the site.
From there, the navigation menu gives access to a "Gestion des langues" section where support for new languages can be activated.
Each newly added language can still be deactivated as long as no object has been created in that language; once one has, the language is greyed out in the configuration and (...)
-
XMP PHP
13 May 2011 — According to Wikipedia, XMP stands for:
Extensible Metadata Platform (XMP), an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
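To make that teaser concrete (an illustrative sketch, not taken from the article: the title and creator values are placeholders, and dc: is the standard Dublin Core schema that XMP reuses), a minimal XMP packet embeds such metadata as RDF inside the file:

<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <!-- one Description per resource; title and author live in Dublin Core properties -->
    <rdf:Description rdf:about=""
        xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title><rdf:Alt><rdf:li xml:lang="x-default">Example title</rdf:li></rdf:Alt></dc:title>
      <dc:creator><rdf:Seq><rdf:li>Example Author</rdf:li></rdf:Seq></dc:creator>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>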
On other sites (7873)
-
How to convert a JavaScript animation to video on the server side using Node.js?
13 May 2019, by user9964622
I have an app where users can create animations. I want to be able to convert these animations to video on the server side, so users can save and share them (e.g. on YouTube).
Here is what I have so far: an animation built with CreateJS and ffmpegserver.js.
ffmpegserver.js is a simple Node server and library that sends canvas frames to the server and uses FFmpeg to compress the video. It can be used standalone or with CCapture.js.
Test3.html
<body onload="init();">
Simple Tween Demo
<script src="http://localhost:8081/ffmpegserver/CCapture.js"></script>
<script src="http://localhost:8081/ffmpegserver/ffmpegserver.js"></script>
<script src="https://code.createjs.com/1.0.0/createjs.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/tween.js/17.2.0/Tween.js"></script>
<script src='http://stackoverflow.com/feeds/tag/test3.js'></script>
Test3.js
/* eslint-disable eol-last */
/* eslint-disable no-undef */
/* eslint-disable quotes */
var canvas, stage;

function init() {
  var framesPerSecond = 60;
  var numFrames = framesPerSecond * 5; // a 5 second 60fps video
  var frameNum = 0;

  var progressElem = document.getElementById("progress");
  var progressNode = document.createTextNode("");
  progressElem.appendChild(progressNode);

  function onProgress(progress) {
    progressNode.nodeValue = (progress * 100).toFixed(1) + "%";
  }

  // Builds a download link for the finished video once the server returns its URL.
  function showVideoLink(url, size) {
    size = size ? (" [size: " + (size / 1024 / 1024).toFixed(1) + "meg]") : " [unknown size]";
    var a = document.createElement("a");
    a.href = url;
    var filename = url;
    var slashNdx = filename.lastIndexOf("/");
    if (slashNdx >= 0) {
      filename = filename.substr(slashNdx + 1);
    }
    a.download = filename;
    a.appendChild(document.createTextNode("Download"));
    var container = document.getElementById("container").insertBefore(a, progressElem);
  }

  var capturer = new CCapture({
    format: 'ffmpegserver',
    //workersPath: "3rdparty/",
    //format: 'gif',
    //verbose: true,
    framerate: framesPerSecond,
    onProgress: onProgress,
    //extension: ".mp4",
    //codec: "libx264",
  });
  capturer.start();

  canvas = document.getElementById("testCanvas");
  stage = new createjs.Stage(canvas);

  var ball = new createjs.Shape();
  ball.graphics.setStrokeStyle(5, 'round', 'round');
  ball.graphics.beginStroke('#000000');
  ball.graphics.beginFill("#FF0000").drawCircle(0, 0, 50);
  ball.graphics.setStrokeStyle(1, 'round', 'round');
  ball.graphics.beginStroke('#000000');
  ball.graphics.moveTo(0, 0);
  ball.graphics.lineTo(0, 50);
  ball.graphics.endStroke();
  ball.x = 200;
  ball.y = -50;

  createjs.Tween.get(ball, {loop: -1})
    .to({x: ball.x, y: canvas.height - 55, rotation: -360}, 1500, createjs.Ease.bounceOut)
    .wait(1000)
    .to({x: canvas.width - 55, rotation: 360}, 2500, createjs.Ease.bounceOut)
    .wait(1000)
    .to({scaleX: 2, scaleY: 2}, 2500, createjs.Ease.quadOut)
    .wait(1000);

  stage.addChild(ball);
  createjs.Ticker.addEventListener("tick", stage);

  // Capture one canvas frame per rAF tick until numFrames frames have been sent.
  function render() {
    requestAnimationFrame(render);
    capturer.capture(canvas);
    ++frameNum;
    if (frameNum < numFrames) {
      progressNode.nodeValue = "rendered frame# " + frameNum + " of " + numFrames;
    } else if (frameNum === numFrames) {
      capturer.stop();
      capturer.save(showVideoLink);
    }
  }
  render();
}

Everything works fine; you can test it yourself by cloning the repo.
Right now the animation is rendered on the client side; I would like the rendering to happen on the backend instead.
What do I need to change to render this animation on the server side with Node.js? Any help or suggestions will be appreciated.
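One direction that might work (a sketch under stated assumptions, not a tested answer: it assumes the node-canvas package and an ffmpeg binary on the PATH, and the drawing function is a hand-ported stand-in for the CreateJS tween) is to redraw the frames headlessly in Node and pipe them to ffmpeg as a PNG stream:

// Hedged sketch: render frames with node-canvas and pipe PNGs into ffmpeg.
// Assumes `npm install canvas` and an ffmpeg binary on the PATH; a real
// implementation should also respect backpressure on ffmpeg.stdin.
const { createCanvas } = require("canvas");
const { spawn } = require("child_process");

const width = 400, height = 400;
const framesPerSecond = 60;
const numFrames = framesPerSecond * 5; // a 5 second 60fps video

const canvas = createCanvas(width, height);
const ctx = canvas.getContext("2d");

// ffmpeg reads the PNG frames from stdin (image2pipe) and encodes an mp4.
const ffmpeg = spawn("ffmpeg", [
  "-y",
  "-f", "image2pipe",
  "-framerate", String(framesPerSecond),
  "-i", "-",
  "-pix_fmt", "yuv420p",
  "out.mp4",
]);

function drawFrame(frameNum) {
  // Stand-in for the CreateJS tween: the real animation logic would be
  // ported here, since CreateJS expects a browser canvas and DOM.
  ctx.fillStyle = "#ffffff";
  ctx.fillRect(0, 0, width, height);
  const y = Math.min(height - 55, (frameNum / framesPerSecond) * 200);
  ctx.fillStyle = "#FF0000";
  ctx.beginPath();
  ctx.arc(200, y, 50, 0, Math.PI * 2);
  ctx.fill();
}

for (let frameNum = 0; frameNum < numFrames; frameNum++) {
  drawFrame(frameNum);
  ffmpeg.stdin.write(canvas.toBuffer("image/png"));
}
ffmpeg.stdin.end();

The alternative route, if the CreateJS code should run unchanged, is to load the existing page in a headless browser (e.g. Puppeteer) and capture the canvas there; either way the browser-only CCapture/ffmpegserver pair drops out of the pipeline.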
-
How to programmatically read an audio RTP stream using JavaCV and FFmpeg?
21 May 2019, by Chris
I am trying to read an audio RTP stream, produced by ffmpeg on the command line, using JavaCV. I create a DatagramSocket that listens on a specified port but can't get the audio frames.
I have tried different buffer types to play the audio through my speakers, but I get a lot of "Invalid return value 0 for stream protocol" error messages and no audio.
I am running the following command to stream an audio file:
ffmpeg -re -i /some/file.wav -ar 44100 -f mulaw -f rtp rtp://127.0.0.1:7780
And here is an excerpt of my code so far:
public class FrameGrabber implements Runnable {

    private static final TimeUnit SECONDS = TimeUnit.SECONDS;
    private InetAddress ipAddress;
    private DatagramSocket serverSocket;

    public FrameGrabber(Integer port) throws UnknownHostException, SocketException {
        super();
        this.ipAddress = InetAddress.getByName("192.168.44.18");
        serverSocket = new DatagramSocket(port, ipAddress);
    }

    public AudioFormat getAudioFormat() {
        float sampleRate = 44100.0F; // 8000, 11025, 16000, 22050, 44100
        int sampleSizeInBits = 16;   // 8, 16
        int channels = 1;            // 1, 2
        boolean signed = true;
        boolean bigEndian = false;
        return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
    }

    @Override
    public void run() {
        byte[] buffer = new byte[2048];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        DataInputStream dis = new DataInputStream(
                new ByteArrayInputStream(packet.getData(), packet.getOffset(), packet.getLength()));

        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(dis);
        grabber.setFormat("mulaw");
        grabber.setSampleRate((int) getAudioFormat().getSampleRate());
        grabber.setAudioChannels(getAudioFormat().getChannels());

        SourceDataLine soundLine = null;
        try {
            grabber.start();
            if (grabber.getSampleRate() > 0 && grabber.getAudioChannels() > 0) {
                AudioFormat audioFormat = new AudioFormat(grabber.getSampleRate(), 16, grabber.getAudioChannels(), true, true);
                DataLine.Info info = new DataLine.Info(SourceDataLine.class, audioFormat);
                soundLine = (SourceDataLine) AudioSystem.getLine(info);
                soundLine.open(audioFormat);
                soundLine.start();
            }

            ExecutorService executor = Executors.newSingleThreadExecutor();

            while (true) {
                try {
                    serverSocket.receive(packet);
                } catch (IOException e) {
                    e.printStackTrace();
                }

                Frame frame = grabber.grab();
                //if (frame == null) break;
                if (frame != null && frame.samples != null) {
                    ShortBuffer channelSamplesFloatBuffer = (ShortBuffer) frame.samples[0];
                    channelSamplesFloatBuffer.rewind();
                    ByteBuffer outBuffer = ByteBuffer.allocate(channelSamplesFloatBuffer.capacity() * 2);
                    float[] samples = new float[channelSamplesFloatBuffer.capacity()];
                    for (int i = 0; i < channelSamplesFloatBuffer.capacity(); i++) {
                        short val = channelSamplesFloatBuffer.get(i);
                        outBuffer.putShort(val);
                    }
                    if (soundLine == null) return;
                    try {
                        SourceDataLine finalSoundLine = soundLine;
                        executor.submit(() -> {
                            finalSoundLine.write(outBuffer.array(), 0, outBuffer.capacity());
                            outBuffer.clear();
                        }).get();
                    } catch (InterruptedException interruptedException) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
            /*
            executor.shutdownNow();
            executor.awaitTermination(1, SECONDS);
            if (soundLine != null) {
                soundLine.stop();
            }
            grabber.stop();
            grabber.release();
            */
        } catch (ExecutionException ex) {
            System.out.println("ExecutionException");
            ex.printStackTrace();
        } catch (org.bytedeco.javacv.FrameGrabber.Exception ex) {
            System.out.println("FrameGrabberException");
            ex.printStackTrace();
        } catch (LineUnavailableException ex) {
            System.out.println("LineUnavailableException");
            ex.printStackTrace();
        } /* catch (InterruptedException e) {
            System.out.println("InterruptedException");
            e.printStackTrace();
        } */
    }

    public static void main(String[] args) throws SocketException, UnknownHostException {
        Runnable apRunnable = new FrameGrabber(7780);
        Thread ap = new Thread(apRunnable);
        ap.start();
    }
}

At this stage, I am trying to play the audio through my speakers, but I am getting the following logs:
Task :FrameGrabber.main()
Invalid return value 0 for stream protocol
Invalid return value 0 for stream protocol
Input #0, mulaw, from 'java.io.DataInputStream@474e6cea':
  Duration: N/A, bitrate: 352 kb/s
  Stream #0:0: Audio: pcm_mulaw, 44100 Hz, 1 channels, s16, 352 kb/s
Invalid return value 0 for stream protocol
Invalid return value 0 for stream protocol
Invalid return value 0 for stream protocol
Invalid return value 0 for stream protocol
...
What am I doing wrong?
Thanks in advance!
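A hedged note rather than a verified fix: handing FFmpegFrameGrabber raw RTP packets through an InputStream gives FFmpeg no session description to work from, which is one plausible source of the "stream protocol" errors. The common pattern is to describe the stream in a small .sdp file and open that path with the grabber (it also accepts a file name), letting FFmpeg handle the UDP/RTP reception itself; ffmpeg can even write the matching description via its -sdp_file output option. A hand-written sketch for the command above, with the payload type and clock rate as assumptions to verify against ffmpeg's own SDP output:

v=0
o=- 0 0 IN IP4 127.0.0.1
s=ffmpeg mulaw stream
c=IN IP4 127.0.0.1
t=0 0
m=audio 7780 RTP/AVP 97
a=rtpmap:97 PCMU/44100

Opening the file may additionally require whitelisting the file, udp and rtp protocols on the FFmpeg side, since recent builds refuse them by default when the input is a local file.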
-
How to import fluent-ffmpeg in AWS Lambda?
30 January 2020, by Fook
I'm trying to use fluent-ffmpeg in AWS Lambda but cannot get it set up correctly. At the top of my index.js:
import ffmpeg from "fluent-ffmpeg";
But it is always undefined (ffmpeg === undefined). I'm using Serverless and have ffmpeg included as a layer.
serverless.yaml
functions:
  createGifFromVideo:
    handler: src/services/createGifFromVideo/index.handler
    layers:
      - { Ref: FfmpegLambdaLayer }
    events:
      - sns: arn:aws:sns:us-east-1:${self:custom.accountId}:NewVideoPostContentTopic-${self:provider.stage}

layers:
  ffmpeg:
    path: src/layers

package.json
{
  "name": "createGifFromVideo",
  "version": "1.0.0",
  "main": "index.js",
  "license": "MIT",
  "private": true,
  "dependencies": {
    "fluent-ffmpeg": "^2.1.2"
  }
}

The uploaded lambda seems to be constructed correctly from what I can tell. Webpack builds the file with fluent-ffmpeg merged in, and it is linked to the ffmpeg layer.
I can load other packages; it's just fluent-ffmpeg that comes back undefined.
The docs mention passing FFMPEG_PATH and FFPROBE_PATH as environment variables. Are these necessary with a layer? I would be grateful to see a configuration that works.