
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (105)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Adding user-specific information and other changes to author-related behaviour
12 April 2011. The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you modify certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the Champs Extras 2 and Interface pour champs extras plugins.
-
Possibility of deployment as a farm
12 April 2011. MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example: to share set-up costs between several projects or individuals; to deploy a large number of unique sites quickly; and to avoid having to dump every creation into a digital catch-all, as happens on the big general-public platforms scattered across the (...)
On other sites (11019)
-
Problems decoding a gstreamer pipeline into images using node.js
18 September 2023, by JK2018. I have this GStreamer pipeline that launches a test video, encodes it in H.264, and sends it via UDP to localhost port 5000.


gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc tune=zerolatency ! h264parse ! queue ! rtph264pay ! udpsink host=127.0.0.1 port=5000



I run this pipeline in my terminal.


Then I have a minimalistic Node.js server that is supposed to receive the UDP stream coming from the GStreamer pipeline and decode the video into images.
Finally, I emit each image (not a fragment of an image) over a socket.


I have tried several approaches, unsuccessfully.
My first attempt used a Node GStreamer library, with GStreamer decoding the UDP stream and re-encoding the frames as images.


My second attempt used the fluent-ffmpeg library.
Here is the code:


const express = require("express");
const http = require("http");
const socketIO = require("socket.io");
const cors = require("cors");
const dgram = require("dgram");
const ffmpeg = require("fluent-ffmpeg");

const app = express();
const server = http.createServer(app);

const io = socketIO(server, {
  cors: {
    origin: "http://localhost:3001",
    methods: ["GET", "POST"],
  },
});

app.use(
  cors({
    origin: "http://localhost:3001",
    methods: ["GET", "POST"],
  })
);

const udpServer = dgram.createSocket("udp4");

io.on("connection", (socket) => {
  console.log("A client connected");

  socket.on("disconnect", () => {
    console.log("A client disconnected");
  });
});

// Function to decode an H.264 video frame
function decodeH264Video(inputData, callback) {
  const command = ffmpeg()
    .input(inputData)
    .inputFormat("h264")
    .inputOptions(["-c:v h264"])
    .toFormat("image2")
    .on("end", () => {
      console.log("Decoding complete");
    })
    .on("error", (err) => {
      console.error("Error decoding video:", err);
    })
    .pipe();

  callback(command);
}

// Function to convert a decoded video frame to an image (JPEG format)
function convertVideoFrameToImage(decodedData, callback) {
  const imageStream = decodedData.pipe();
  const imageBuffer = [];

  imageStream.on("data", (chunk) => {
    imageBuffer.push(chunk);
  });

  imageStream.on("end", () => {
    const imageData = Buffer.concat(imageBuffer);
    callback(imageData);
  });
}

udpServer.on("message", (message) => {
  // Decode the UDP packet containing H.264 encoded video
  decodeH264Video(message, (decodedVideo) => {
    // Process the decoded video frame and convert it to an image
    convertVideoFrameToImage(decodedVideo, (imageData) => {
      // Send the image data to connected clients
      io.sockets.emit("image", { imageData: imageData.toString("base64") });
    });
  });
});

udpServer.bind(5000);

server.listen(3000, () => {
  console.log("Server is running on port 3000");
});



Any help is more than welcome.
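For reference, here is a sketch of a different approach, an assumption on my part rather than anything from the original post: spawn ffmpeg as a child process, let it consume the RTP session via an SDP description, and have it emit one complete JPEG per frame on stdout. The stream.sdp filename is hypothetical, an ffmpeg binary is assumed to be on the PATH, the io instance from the server above is reused, and the dgram socket would have to be dropped since ffmpeg itself binds port 5000.

// stream.sdp describing the gst-launch pipeline above would contain:
//   v=0
//   m=video 5000 RTP/AVP 96
//   c=IN IP4 127.0.0.1
//   a=rtpmap:96 H264/90000
const { spawn } = require("child_process");

const ff = spawn("ffmpeg", [
  "-protocol_whitelist", "file,udp,rtp",
  "-i", "stream.sdp",   // read the RTP session instead of raw UDP packets
  "-f", "image2pipe",   // stream images rather than writing files
  "-c:v", "mjpeg",      // one complete JPEG per video frame
  "pipe:1",             // write frames to stdout
]);

let buffered = Buffer.alloc(0);
ff.stdout.on("data", (chunk) => {
  buffered = Buffer.concat([buffered, chunk]);
  // Split the MJPEG byte stream on JPEG start (FF D8) / end (FF D9) markers
  // so each emitted payload is a whole image, never a fragment.
  let start = buffered.indexOf(Buffer.from([0xff, 0xd8]));
  let end = buffered.indexOf(Buffer.from([0xff, 0xd9]), start + 2);
  while (start !== -1 && end !== -1) {
    const frame = buffered.subarray(start, end + 2);
    io.sockets.emit("image", { imageData: frame.toString("base64") });
    buffered = buffered.subarray(end + 2);
    start = buffered.indexOf(Buffer.from([0xff, 0xd8]));
    end = buffered.indexOf(Buffer.from([0xff, 0xd9]), start + 2);
  }
});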


-
FFmpeg zoompan animation results in zig-zag pattern [closed]
2 February 2024, by kregus. Please assist, as I am pulling my hair out!


The goal is for a user to specify a zoomPoint in a video as follows:


{
 "scale":4, # Scale factor, here x4
 "offset":0, # Start zooming in after 0ms
 "duration":5000, # Stay zoomed in for 5000ms
 "marginX":80, # Zoom x to 80% of video width 
 "marginY":10 # Zoom y to 10% of video height
}



I am running the following simplified FFmpeg command:


ffmpeg -i "/tmp/input.mp4" -c:v libx264 -r 30 -vf "scale=iw*2:ih*2,zoompan=fps=30:z='if(gte(it,0.0)*lt(it,5.0), min(pzoom+0.2, 4),if(gte(it,5.0)*lt(it,5.5), max(pzoom-0.2, 1), 1))':d=1:s=1512x982:x='(iw - iw/4)*(80/100)':y='(ih - ih/4)*(10/100)'" /tmp/output.mp4



The animation duration is 0.5s, the video framerate is 30fps, and so there are 0.5 / (1/30) = 15 animation frames.

The zoom distance is scale - 1 = 3 in this case, which makes the zoom increment 3 / 15 = 0.2.

This results in the following example video: click here


While the zoom animation ends in the correct position, you will notice it arrives at that position in a zig-zag pattern, where it starts by zooming into the top-right corner, before changing direction towards the correct position.


I cannot seem to figure out how to get it to animate the zoom in a straight line to the specified x/y position.


Any tips are welcome, thanks!
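In case it helps, one suspect, as an assumption on my part rather than anything verified against the clip: zoompan's x and y expressions set the top-left corner of a viewport whose size is iw/zoom by ih/zoom, so hard-coding the final factor (iw - iw/4) anchors the pan for the end state while the viewport is still shrinking, which would drag intermediate frames toward a corner. Evaluating the offset against the current zoom variable instead keeps the target point on a straight path; a sketch of the adjusted command:

ffmpeg -i "/tmp/input.mp4" -c:v libx264 -r 30 -vf "scale=iw*2:ih*2,zoompan=fps=30:z='if(gte(it,0.0)*lt(it,5.0), min(pzoom+0.2, 4),if(gte(it,5.0)*lt(it,5.5), max(pzoom-0.2, 1), 1))':d=1:s=1512x982:x='(iw - iw/zoom)*(80/100)':y='(ih - ih/zoom)*(10/100)'" /tmp/output.mp4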


-
FFmpeg Wasm, error while creating video from canvas
12 October 2023, by NineCattoRules. I'm using ffmpeg.wasm in my Next.js app.


Here are my specs:


"@ffmpeg/ffmpeg": "^0.12.5",
"@ffmpeg/util": "^0.12.0",
"next": "^13.0.6",
"react": "^18.2.0",



I want to simply record a 5s video from a canvas, so I tried:


'use client'

import React, { useEffect, useRef, useState } from 'react';
import { FFmpeg } from '@ffmpeg/ffmpeg';
import { fetchFile } from '@ffmpeg/util';

const CanvasVideoRecorder = () => {
  const canvasRef = useRef(null);
  const videoChunksRef = useRef([]);
  const ffmpegRef = useRef(new FFmpeg({ log: true }));
  const [loaded, setLoaded] = useState(false);
  const [videoUrl, setVideoUrl] = useState(null);

  const load = async () => {
    await ffmpegRef.current.load({
      coreURL: '/js/ffmpeg-core.js',
      wasmURL: '/js/ffmpeg-core.wasm',
    });
    setLoaded(true);
  };

  useEffect(() => {
    const ctx = canvasRef.current.getContext('2d');
    function drawFrame(timestamp) {
      ctx.fillStyle = `rgb(${(Math.sin(timestamp / 500) * 128) + 128}, 0, 0)`;
      ctx.fillRect(0, 0, canvasRef.current.width, canvasRef.current.height);
      requestAnimationFrame(drawFrame);
    }
    requestAnimationFrame(drawFrame);
  }, []);

  const startRecording = async () => {
    const videoStream = canvasRef.current.captureStream(30);
    const videoRecorder = new MediaRecorder(videoStream, { mimeType: 'video/webm' });

    videoRecorder.ondataavailable = (event) => {
      if (event.data.size > 0) {
        videoChunksRef.current.push(event.data);
      }
    };

    videoRecorder.start();
    setTimeout(() => videoRecorder.stop(), 5000);

    videoRecorder.onstop = async () => {
      try {
        await ffmpegRef.current.writeFile('recorded.webm', await fetchFile(new Blob(videoChunksRef.current, { type: 'video/webm' })));

        await ffmpegRef.current.exec('-y', '-i', 'recorded.webm', '-an', '-c:v', 'copy', 'output_copy.webm');

        const data = await ffmpegRef.current.readFile('output_copy.webm');
        const url = URL.createObjectURL(new Blob([data.buffer], { type: 'video/webm' }));

        setVideoUrl(url);
      } catch (error) {
        console.error("Error during processing:", error);
      }
    };
  };

  return (
    <div>
      <canvas ref={canvasRef} width={640} height={480}></canvas>

      {loaded ? (
        <>
          <button onClick={startRecording}>Start Recording</button>
          {videoUrl && <video controls src={videoUrl}></video>}
        </>
      ) : (
        <button onClick={load}>Load FFmpeg</button>
      )}
    </div>
  );
};

export default CanvasVideoRecorder;



I don't know why, but it catches an error:


ErrnoError: FS error



This error occurs when I do this:


await ffmpegRef.current.exec('-y', '-i', 'recorded.webm', '-an', '-c:v', 'copy', 'output_copy.webm');
const data = await ffmpegRef.current.readFile('output_copy.webm');



The recorded.webm file is written correctly and I can read it, and ffmpegRef.current is well defined, so what's wrong with my logic? Why doesn't the exec command work?
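One thing worth checking, as a guess based on the @ffmpeg/ffmpeg 0.12.x API rather than anything in the post: exec() takes the whole command line as a single array of strings, with the second parameter reserved for a timeout. Called variadically as above, only '-y' would reach ffmpeg, the output file would never be created, and the subsequent readFile would then surface exactly this kind of FS error. A minimal sketch of the array form:

await ffmpegRef.current.exec([
  '-y',
  '-i', 'recorded.webm',
  '-an',           // drop audio (the canvas stream has none anyway)
  '-c:v', 'copy',  // remux without re-encoding
  'output_copy.webm',
]);
const data = await ffmpegRef.current.readFile('output_copy.webm');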