
Other articles (60)
-
Installation in farm mode
4 February 2011 — Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real SPIP-specific knowledge since SPIP's usual private area is no longer used.
First of all, you must have installed the same files as the installation (...) -
Retrieving information from the master site when installing an instance
26 November 2010 — Purpose
On the main site, a mutualisation instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one allowed to definitively create the mutualisation instance;
It can therefore be quite useful to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...) -
Use, discuss, criticize
13 April 2011 — Talk to people directly involved in MediaSPIP's development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (10370)
-
How to read frames of a video and write them to another video output using FFmpeg and Node.js
29 December 2023, by Aviato — I am working on a project where I need to process video frames one at a time in Node.js. I aim to avoid storing all frames in memory or on the filesystem due to resource constraints. I plan to use ffmpeg via child processes for video processing.
For testing purposes, I first tried reading a video file and writing its frames to the filesystem:


const { spawn } = require('child_process');

const ffmpegProcess = spawn('ffmpeg', [
 '-i', videoFile,
 'testfolder/%04d.png' // Write frames as numbered PNG files
]);



and the above code works fine; it saves the video frames as PNG files in the filesystem. Now, instead of saving them to the filesystem, I want to read the frames one at a time, run them through an image manipulation library, and then write the final edited frames to another video as output.


I tried this:


const ffmpegProcess = spawn('ffmpeg', [
 '-i', videoFile,
 'pipe:1' // Output frames to stdout
]);

const ffmpegOutputProcess = spawn('ffmpeg', [
 '-i', '-',
 'outputFileName.mp4'
]);

ffmpegProcess.stdout.on('data', (data) => {
 // Process the frame data as needed
 console.log('Received frame data:');
 ffmpegOutputProcess.stdin.write(data)
});

ffmpegProcess.on('close', (code) => {
 if (code !== 0) {
 console.error(`ffmpeg process exited with code ${code}`);
 } else {
 console.log('ffmpeg process successfully completed');
 
 }
});

// Handle errors
ffmpegProcess.on('error', (err) => {
 console.error('Error while spawning ffmpeg:', err);
});



But when I tried the above code, as well as some other modifications to the input and output options of the commands, I ran into the problems below:


- ffmpeg process exited with code 1
- the final output video was corrupted when trying to initialize the filters for the following commands:







const ffmpegProcess = spawn('ffmpeg', [
 '-i', videoFile,
 '-f', 'rawvideo',
 '-pix_fmt', 'rgb24',
 'pipe:1' // Output frames to stdout
]);

const ffmpegOutputCommand = [
 '-f', 'rawvideo',
 '-pix_fmt', 'rgb24',
 '-s', '1920x1080',
 '-r', '30',
 '-i', '-',
 '-c:v', 'libx264',
 '-pix_fmt', 'yuv420p',
 outputFileName
];
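
For reference, a minimal sketch of how these two commands could be wired together in Node.js (the fixed 1920x1080 / 30 fps values are assumptions that must match the actual source, and a direct pipe() is used instead of writing chunk by chunk; untested):

const { spawn } = require('child_process');

// Decoder: turn the input video into raw RGB frames on stdout.
const decoder = spawn('ffmpeg', [
 '-i', videoFile,
 '-f', 'rawvideo',
 '-pix_fmt', 'rgb24',
 'pipe:1'
]);

// Encoder: read raw RGB frames from stdin and encode them with H.264.
// The size and frame rate must match the decoded stream exactly (assumed here).
const encoder = spawn('ffmpeg', [
 '-f', 'rawvideo',
 '-pix_fmt', 'rgb24',
 '-s', '1920x1080',
 '-r', '30',
 '-i', '-',
 '-c:v', 'libx264',
 '-pix_fmt', 'yuv420p',
 '-y', // overwrite the output without prompting (stdin carries frame data)
 outputFileName
]);

// Piping the streams lets Node handle backpressure between the two processes.
decoder.stdout.pipe(encoder.stdin);

// ffmpeg logs go to stderr; surfacing them is the easiest way to debug exit code 1.
decoder.stderr.on('data', (d) => process.stderr.write(d));
encoder.stderr.on('data', (d) => process.stderr.write(d));

encoder.on('close', (code) => {
 console.log(code === 0 ? 'encoding finished' : `encoder exited with code ${code}`);
});

With rgb24, every frame in the raw stream is exactly 1920 * 1080 * 3 bytes, so per-frame editing means collecting stdout into buffers of that size before writing them to the encoder's stdin.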



Thank you so much in advance :)


-
Processing video frame by frame in AWS Lambda with Node.js and FFmpeg [closed]
29 December 2023, by Aviato — I am working on a project where I need to process video frames one at a time in an AWS Lambda function using Node.js. My goal is to avoid storing all frames in memory or on the filesystem due to resource constraints. I plan to use the fluent-ffmpeg library or ffmpeg via child processes for video processing.


In the past, I used OpenCV to process videos and frames without writing the frames to disk or holding all of them in memory at once. But now that I am using Node.js, it is a little harder to set up the equivalent pipeline with ffmpeg, etc.


Here is a small snippet of what I did with OpenCV:


import cv2

cap = cv2.VideoCapture(video_file)

# Take the writer properties from the capture so the output matches the input
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter('output.mp4', fourcc, fps, (width, height))

def generate_frame():
    while cap.isOpened():
        code, frame = cap.read()
        if code:
            yield frame
        else:
            print("completed")
            break

for i, frame in enumerate(generate_frame()):
    # Process (edit) each frame directly and write it to the output video
    out.write(frame)



Additionally, I intend to leverage image processing libraries like Sharp and the Canvas API to edit individual frames before assembling the final video. I am looking for help in handling video frames efficiently within the constraints of AWS Lambda.
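
A minimal sketch of that frame-splitting step, assuming raw rgb24 output from ffmpeg and a known, fixed resolution (sharp is used here only as an example of a per-frame edit; nothing Lambda-specific is shown):

const { spawn } = require('child_process');
const sharp = require('sharp');

const width = 1920;  // assumed; must match the actual video
const height = 1080; // assumed; must match the actual video
const frameSize = width * height * 3; // bytes per rgb24 frame

async function processFrames(videoFile) {
 const decoder = spawn('ffmpeg', [
  '-i', videoFile,
  '-f', 'rawvideo',
  '-pix_fmt', 'rgb24',
  'pipe:1'
 ]);

 // Drain ffmpeg's log output so a full stderr pipe can't stall the process.
 decoder.stderr.resume();

 let pending = Buffer.alloc(0);

 // Readable streams are async-iterable, so frames are handled one at a time
 // without ever buffering the whole video in memory.
 for await (const chunk of decoder.stdout) {
  pending = Buffer.concat([pending, chunk]);
  while (pending.length >= frameSize) {
   const frame = pending.subarray(0, frameSize);
   pending = pending.subarray(frameSize);

   // Example per-frame edit with sharp (raw rgb24 in, raw rgb24 out).
   const edited = await sharp(frame, { raw: { width, height, channels: 3 } })
    .blur(3)
    .raw()
    .toBuffer();

   // `edited` can now be written to a second ffmpeg process that encodes
   // rawvideo from stdin, as in the previous question.
  }
 }
}

Only one frame (plus ffmpeg's own buffers) needs to be resident at a time, which is what should keep this within Lambda's memory limits.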


Any insights, code snippets, or recommendations would be greatly appreciated. Thank you!


-
FPV-Camera Input in Black and White / Loses Color on Conversion with FFmpeg [closed]
22 December 2023, by LyffLyff — So we're working on our end-of-school project: an FPV drone with an analogue camera on it. The plan is to send the video feed to a Raspberry Pi running an RTMP server, from where a phone application can view the camera's live video.


To convert the analogue data from the camera we use a USB 2.0 grabber (this one).


To create the RTMP stream from the converted USB input we use FFmpeg with the following command:




ffmpeg -f v4l2 -input_format yuyv422 -i /dev/video0 -c:v libx264 -crf 20 -preset ultrafast -b:v 2000k -fflags nobuffer -rtmp_live live -f flv rtmp://192.168.8.107:554/live/stream




It works fine, but the main problems at the moment are:


- the video is in black and white (see: B&W Image Stream)
- the stream has a delay of 10-20 s depending on the network






When using the official software provided by the reseller I had the same problem, but as soon as I set PAL/BDGHI in the settings the colour was shown correctly:


Settings and Color Image in official software
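
One setting worth checking (an assumption based on the PAL/BDGHI observation above, not a verified fix) is the V4L2 video standard: if the grabber defaults to NTSC, a PAL signal is usually decoded without colour because the colour subcarrier frequencies differ. The standard can be forced on the device, or directly in the ffmpeg command, for example:

v4l2-ctl -d /dev/video0 --set-standard=pal

ffmpeg -f v4l2 -standard PAL -input_format yuyv422 -i /dev/video0 -c:v libx264 -crf 20 -preset ultrafast -tune zerolatency -b:v 2000k -fflags nobuffer -rtmp_live live -f flv rtmp://192.168.8.107:554/live/stream

Adding -tune zerolatency may also shave a little off the encoder-side delay, although most of a 10-20 s lag typically comes from buffering in the RTMP server and the player.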




Does anyone know which settings are needed to correctly decode the feed from the camera and send the video with colour over RTMP? I don't know if this is the right place to ask, but I'm running out of ideas, and every single decoder I have tried apart from the ones I'm currently using does not work.


Any help is greatly appreciated :)