
Media (1)
-
Map of the Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
Other articles (96)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011. Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it is activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature is operational right away; no separate configuration step is therefore required.
-
APPENDIX: The plugins used specifically for the farm
5 March 2010. To work properly, the central/master site of the farm needs several plugins in addition to those used by the channel sites: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared-hosting instance as soon as users sign up; the verifier plugin, which provides a field-verification API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (10925)
-
How to overlay multiple landscape regions from a single input onto a new portrait video with FFmpeg?
27 August 2023, by 3V1LXD. I have an Electron program that selects multiple regions of a landscape video and lets you rearrange them on a portrait canvas. I'm having trouble building the proper FFmpeg command to create the video. I have this somewhat working: I can export 2 layers, but I can't export if only 1 layer or 3 or more layers are selected.


2 regions of video selected


layers [
 { top: 0, left: 658, width: 576, height: 1080 },
 { top: 262, left: 0, width: 576, height: 324 }
]
newPositions [
 { top: 0, left: 0, width: 576, height: 1080 },
 { top: 0, left: 0, width: 576, height: 324 }
]
filtergraph [0]crop=576:1080:658:0,scale=576:1080[v0];[0]crop=576:324:0:262,scale=576:324[v1];[v0][v1]overlay=0:0:0:0[out]

No error; export successful.



1 region selected


layers [ { top: 0, left: 650, width: 576, height: 1080 } ]
newPositions [ { top: 0, left: 0, width: 576, height: 1080 } ]
filtergraph [0]crop=576:1080:650:0,scale=576:1080[v0];[v0]overlay=0:0[out]

FFmpeg error: [fc#0 @ 000001dd3b6db0c0] Cannot find a matching stream for unlabeled input pad overlay
Error initializing complex filters: Invalid argument



3 regions of video selected


layers [
 { top: 0, left: 641, width: 576, height: 1080 },
 { top: 250, left: 0, width: 576, height: 324 },
 { top: 756, left: 0, width: 576, height: 324 }
]
newPositions [
 { top: 0, left: 0, width: 576, height: 1080 },
 { top: 0, left: 0, width: 576, height: 324 },
 { top: 756, left: 0, width: 576, height: 324 }
]
filtergraph [0]crop=576:1080:641:0,scale=576:1080[v0];[0]crop=576:324:0:250,scale=576:324[v1];[0]crop=576:324:0:756,scale=576:324[v2];[v0][v1][v2]overlay=0:0:0:0:0:756[out]

FFmpeg error: [AVFilterGraph @ 0000018faf2189c0] More input link labels specified for filter 'overlay' than it has inputs: 3 > 2
[AVFilterGraph @ 0000018faf2189c0] Error linking filters

FFmpeg error: Failed to set value '[0]crop=576:1080:698:0,scale=576:1080[v0];[0]crop=576:324:0:264,scale=576:324[v1];[0]crop=576:324:0:756,scale=576:324[v2];[v0][v1][v2]overlay=0:0:0:0:0:0[out]' for option 'filter_complex': Invalid argument
Error parsing global options: Invalid argument



I can't figure out how to construct the proper overlay command. Here is the JS code I'm using in my Electron app.


ipcMain.handle('export-video', async (_event, args) => {
  const { videoFile, outputName, layers, newPositions } = args;
  const ffmpegPath = path.join(__dirname, 'bin', 'ffmpeg');
  const outputDir = checkOutputDir();

  // use same video for each layer as input
  // crop, scale, and position each layer
  // overlay each layer on top of each other

  // export video
  console.log('layers', layers);
  console.log('newPositions', newPositions);

  let filtergraph = '';

  for (let i = 0; i < layers.length; i++) {
    const { top, left, width, height } = layers[i];
    const { width: newWidth, height: newHeight } = newPositions[i];
    const filter = `[0]crop=${width}:${height}:${left}:${top},scale=${newWidth}:${newHeight}[v${i}];`;
    filtergraph += filter;
  }

  for (let i = 0; i < layers.length; i++) {
    const filter = `[v${i}]`;
    filtergraph += filter;
  }

  filtergraph += `overlay=`;
  for (let i = 0; i < layers.length; i++) {
    const { top: newTop, left: newLeft } = newPositions[i];
    const overlay = `${newLeft}:${newTop}:`;
    filtergraph += overlay;
  }

  filtergraph = filtergraph.slice(0, -1); // remove the trailing ':'
  filtergraph += `[out]`;

  console.log('filtergraph', filtergraph);

  const ffmpeg = spawn(ffmpegPath, [
    '-i', videoFile,
    '-filter_complex', filtergraph,
    '-map', '[out]',
    '-c:v', 'libx264',
    '-preset', 'ultrafast',
    '-crf', '18',
    '-y',
    path.join(outputDir, `${outputName}`)
  ]);

  ffmpeg.stdout.on('data', (data) => {
    console.log(`FFmpeg output: ${data}`);
  });

  ffmpeg.stderr.on('data', (data) => {
    console.error(`FFmpeg error: ${data}`);
  });

  ffmpeg.on('close', (code) => {
    console.log(`FFmpeg process exited with code ${code}`);
    // event.reply('ffmpeg-export-done'); // Notify the renderer process
  });
});



Any advice would be helpful; the docs are confusing. Thanks.


Edit
I'm getting closer with this.
Output:


layers [
 { top: 0, left: 677, width: 576, height: 1080 },
 { top: 240, left: 0, width: 576, height: 324 }
]
newPositions [
 { top: 0, left: 0, width: 576, height: 1080 },
 { top: 0, left: 0, width: 576, height: 324 }
]
filtergraph [0]crop=576:1080:677:0,scale=576:1080[v0];[0]crop=576:324:0:240,scale=576:324[v1];[0][v0]overlay=0:0[o0];[o0][v1]overlay=0:0[o1]



ipcMain.handle('export-video', async (_event, args) => {
  const { videoFile, outputName, layers, newPositions } = args;
  const ffmpegPath = path.join(__dirname, 'bin', 'ffmpeg');
  const outputDir = checkOutputDir();

  // use same video for each layer as input
  // crop, scale, and position each layer
  // overlay each layer on top of each other

  // export video
  console.log('layers', layers);
  console.log('newPositions', newPositions);

  let filtergraph = '';

  for (let i = 0; i < layers.length; i++) {
    const { top, left, width, height } = layers[i];
    const { width: newWidth, height: newHeight } = newPositions[i];
    const filter = `[0]crop=${width}:${height}:${left}:${top},scale=${newWidth}:${newHeight}[v${i}];`;
    filtergraph += filter;
  }

  for (let i = 0; i < layers.length; i++) {
    if (i === 0) {
      filtergraph += `[0][v${i}]overlay=`;
    } else {
      filtergraph += `[o${i - 1}][v${i}]overlay=`;
    }
    const { top: newTop, left: newLeft } = newPositions[i];
    let overlay = '';
    if (i !== layers.length - 1) {
      overlay = `${newLeft}:${newTop}[o${i}];`;
    } else {
      overlay = `${newLeft}:${newTop};`;
    }
    filtergraph += overlay;
  }

  filtergraph = filtergraph.slice(0, -1); // remove the trailing ';'
  filtergraph += `[o${layers.length - 1}]`;

  console.log('filtergraph', filtergraph);

  const ffmpeg = spawn(ffmpegPath, [
    '-i', videoFile,
    '-filter_complex', filtergraph,
    '-map', `[o${layers.length - 1}]`,
    '-c:v', 'libx264',
    '-preset', 'ultrafast',
    '-crf', '18',
    '-y',
    path.join(outputDir, `${outputName}`)
  ]);

  ffmpeg.stdout.on('data', (data) => {
    console.log(`FFmpeg output: ${data}`);
  });

  ffmpeg.stderr.on('data', (data) => {
    console.error(`FFmpeg error: ${data}`);
  });

  ffmpeg.on('close', (code) => {
    console.log(`FFmpeg process exited with code ${code}`);
    // event.reply('ffmpeg-export-done'); // Notify the renderer process
  });
});



The problem I'm having now is that it's overlaying the regions over the original input and keeping the landscape dimensions instead of producing a portrait video.
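
One way to avoid compositing over the landscape source is to overlay onto a blank portrait canvas generated by FFmpeg's color source, chaining the overlays pairwise (overlay only ever takes two inputs). The sketch below builds such a filtergraph for the two-region example above; it is written in Python for brevity, and the 576x1080 canvas size and the file names in the trailing comment are assumptions, so treat it as a starting point rather than a drop-in fix.

# Hypothetical sketch: composite the cropped regions onto a blank portrait
# canvas instead of onto the original landscape input. The layers and
# new_positions values are the two-region example above; the canvas size
# and file names are assumptions.
layers = [
    {"top": 0, "left": 677, "width": 576, "height": 1080},
    {"top": 240, "left": 0, "width": 576, "height": 324},
]
new_positions = [
    {"top": 0, "left": 0, "width": 576, "height": 1080},
    {"top": 0, "left": 0, "width": 576, "height": 324},
]
canvas_w, canvas_h = 576, 1080  # assumed portrait output size

parts = []
# One crop+scale chain per region, each reading from the single input [0].
for i, (src, dst) in enumerate(zip(layers, new_positions)):
    parts.append(
        f"[0]crop={src['width']}:{src['height']}:{src['left']}:{src['top']},"
        f"scale={dst['width']}:{dst['height']}[v{i}]"
    )
# A black portrait canvas as the base layer.
parts.append(f"color=c=black:s={canvas_w}x{canvas_h}[base]")
# Chain the overlays pairwise: each step feeds its result into the next one.
# shortest=1 on the first overlay stops the otherwise endless color source
# when the video input ends.
prev = "base"
for i, dst in enumerate(new_positions):
    out = "out" if i == len(new_positions) - 1 else f"o{i}"
    shortest = ":shortest=1" if i == 0 else ""
    parts.append(f"[{prev}][v{i}]overlay=x={dst['left']}:y={dst['top']}{shortest}[{out}]")
    prev = out

filtergraph = ";".join(parts)
print(filtergraph)
# The ffmpeg invocation stays the same shape as in the handler above, e.g.:
#   ffmpeg -i input.mp4 -filter_complex "<filtergraph>" -map "[out]" -c:v libx264 output.mp4

Because the canvas is always available as the base, there is a second input for overlay even with a single region, so one, two, or more selected regions all produce a valid graph.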


-
How to use FFmpeg on Python/Windows 10 with a pipe for screen recording?
20 September 2020, by Trmotta. I'd like to record the screen with ffmpeg, as it seems to be the only tool out there that can record a region of the screen along with the mouse cursor.


The following code was adapted from "i want to display mouse pointer in my recording", but it doesn't work on a Windows 10 (x64) setup (using Python 3.6).


#!/usr/bin/env python3

# ffmpeg -y -pix_fmt bgr0 -f avfoundation -r 20 -t 10 -i 1 -vf scale=w=3840:h=2160 -f rawvideo /dev/null

import sys
import cv2
import time
import subprocess
import numpy as np

w, h = 100, 100

def ffmpegGrab():
    """Generator to read frames from ffmpeg subprocess"""

    # ffmpeg -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 640x480 -show_region 1 -i desktop output.mkv  # CODE THAT ACTUALLY WORKS WITH FFMPEG CLI

    cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -show_region 1 -i desktop -f image2pipe, -pix_fmt bgr24 -vcodec rawvideo -an -sn'

    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
    # out, err = proc.communicate()
    while True:
        frame = proc.stdout.read(w*h*3)
        yield np.frombuffer(frame, dtype=np.uint8).reshape((h, w, 3))

# Get frame generator
gen = ffmpegGrab()

# Get start time
start = time.time()

# Read video frames from ffmpeg in loop
nFrames = 0
while True:
    # Read next frame from ffmpeg
    frame = next(gen)
    nFrames += 1

    cv2.imshow('screenshot', frame)

    if cv2.waitKey(1) == ord("q"):
        break

    fps = nFrames/(time.time()-start)
    print(f'FPS: {fps}')

cv2.destroyAllWindows()
out.release()



By using 'cmd' as stated above, I'll get the following error:


b"ffmpeg version git-2020-08-31-4a11a6f Copyright (c) 2000-2020 the FFmpeg developers\r\n built with gcc 10.2.1 (GCC) 20200805\r\n configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --enable-libsvtav1 --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf\r\n libavutil 56. 58.100 / 56. 58.100\r\n libavcodec 58.101.101 / 58.101.101\r\n libavformat 58. 51.101 / 58. 51.101\r\n libavdevice 58. 11.101 / 58. 11.101\r\n libavfilter 7. 87.100 / 7. 87.100\r\n libswscale 5. 8.100 / 5. 8.100\r\n libswresample 3. 8.100 / 3. 8.100\r\n libpostproc 55. 8.100 / 55. 8.100\r\nTrailing option(s) found in the command: may be ignored.\r\n[gdigrab @ 0000017ab857f100] Capturing whole desktop as 100x100x32 at (10,20)\r\nInput #0, gdigrab, from 'desktop':\r\n Duration: N/A, start: 1599021857.538752, bitrate: 9612 kb/s\r\n Stream #0:0: Video: bmp, bgra, 100x100, 9612 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc\r\n**At least one output file must be specified**\r\n"



This is the content of proc (and also of proc.communicate()). The program crashes right after, when trying to reshape this message into an image of size 100x100.


I do not want to have an output file. I need to use a Python subprocess with a pipe in order to deliver those screen frames directly to my Python code, with no file I/O at all.


If I try the following:


cmd = 'D:/Downloads/ffmpeg-20200831-4a11a6f-win64-static/ffmpeg-20200831-4a11a6f-win64-static/bin/ffmpeg.exe -f gdigrab -framerate 30 -offset_x 10 -offset_y 20 -video_size 100x100 -i desktop -pix_fmt bgr24 -vcodec rawvideo -an -sn -f image2pipe'


proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)



Then 'frame', inside 'while True', is filled with b''.


I tried the following libraries with no success, as I could not find how to capture the mouse cursor, or how to capture the screen at all: https://github.com/abhiTronix/vidgear, https://github.com/kkroening/ffmpeg-python


What am I missing?
Thank you.
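
The error output already names the core issue: no output is specified, so ffmpeg has nowhere to send the frames, and because stderr is redirected into stdout, the log banner is what ends up in frame. The usual pattern is to request rawvideo on stdout by ending the command with '-' and leaving stderr separate. Below is a minimal, untested sketch of that; it assumes ffmpeg is on PATH (the question uses an absolute path) and keeps the question's capture region.

# Minimal sketch (untested): pipe raw bgr24 frames from gdigrab into Python.
# Assumes ffmpeg is on PATH; region and size values mirror the question.
import subprocess
import numpy as np

w, h = 100, 100
cmd = [
    "ffmpeg",
    "-f", "gdigrab", "-framerate", "30",
    "-offset_x", "10", "-offset_y", "20",
    "-video_size", f"{w}x{h}", "-show_region", "1",
    "-i", "desktop",
    # output options go after the input, followed by the output target
    "-pix_fmt", "bgr24", "-f", "rawvideo",
    "-an", "-sn",
    "-",  # write raw frames to stdout instead of a file
]
# stderr is not redirected, so ffmpeg's log output does not get mixed into
# the frame data on stdout (which is what happens with stderr=subprocess.STDOUT).
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)

while True:
    raw = proc.stdout.read(w * h * 3)  # exactly one bgr24 frame
    if len(raw) < w * h * 3:
        break  # ffmpeg exited or the pipe closed
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((h, w, 3))
    # hand `frame` to cv2.imshow() or other processing here

The cv2.imshow() loop and the FPS counter from the question can then consume frame exactly as before.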


-
Museum of Multimedia Software, Part 2
16 August 2010, by Multimedia Mike (Software Museum). This installment includes a bunch of old, discontinued Adobe software as well as some Flash-related multimedia software.
Screen Time for Flash Screen Saver Factory
"Create High Impact Screen Savers Using Macromedia Flash."
Requirements include Windows 3.1, 95 or NT 3.5.1. A 486 computer is required to play the resulting screensavers, which are Flash projectors using Macromedia Flash 3.0.
Monster Interactive Instant GUI 2
Create eye-popping GUIs more easily for use in Flash. Usability experts would argue that this is not a good thing.
Adobe Dimensions 3.0
"The Easy Yet Powerful 3D Rendering Tool." This software was end-of-life’d in late 2004-early 2005 (depending on region).
Adobe ImageStyler
"Instantly add style to your Web site." Wikipedia claims that this product was sold from 1998 to 2000 when it was superseded by Adobe LiveMotion (see below).
Google is able to excavate a link to the Latin American site for Adobe ImageStyler, a page that doesn’t seem to be replicated in any other language.
Adobe LiveMotion
"Professional Web graphics and animation." This is version 1, where the last version was #2, released in 2002.
Adobe Streamline 4.0
"The most powerful way to convert images into line art." This was discontinued in mid-2005.
Adobe SuperATM
"The magic that maintains the look of your documents." This is the oldest item in my collection. A close examination of the back of the box reveals an old Adobe logo. The latest copyright date on the box is 1992.