
Other articles (28)
-
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
-
APPENDIX: The extensions, SPIP plugins of the channels
11 February 2010
A plugin is a functional addition to the main SPIP core. MediaSPIP consists of a deliberate selection of plugins, some pre-existing in the SPIP community and some not, certain of which had to be either created from scratch or extended with new features.
The extensions MediaSPIP needs in order to work
Since version 2.1.0, SPIP has allowed plugins to be added in the extensions/ directory.
"Extensions" are nothing more or less than plugins whose distinguishing feature is that they (...)
-
MediaSPIP Core: Configuration
9 November 2010
By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the skeleton; a page for the configuration of the site's home page; and a page for the configuration of the sections.
It also provides an additional page, shown only when certain plugins are enabled, for controlling the display and the specific features (...)
On other sites (5642)
-
FFMPEG xstack not recognizing inputs
12 August 2020, by Josh

I'm trying to arrange three input videos into a single output video using ffmpeg's xstack. I currently have the operation working with a vstack followed by an hstack, but I would like to combine them into a single xstack for performance.

I've tried copying the syntax from multiple places, such as:

Vertically or horizontally stack (mosaic) several videos using ffmpeg?

My command is as follows:

C:\ffmpeg\bin\ffmpeg.exe -i states_full.mp4 -i title.mp4 -i graphs.mp4" -filter_complex "[0:v] setpts=PTS-STARTPTS, scale=qvga [a0] ; [1:v] setpts=PTS-STARTPTS, scale=qvga [a1] ; [2:v] setpts=PTS-STARTPTS, scale=qvga [a2] ; [a0][a1][a2]xstack=inputs=3:layout=0_0|w0_0|w0_h0[out] " -map "[out]" -c:v libx264 -t '30' -f matroska output.mp4

The command always errors out at the same spot, with the same error message:

'w0_0' is not recognized as an internal or external command,
operable program or batch file.

Some odd behavior is that even when I change the layout section to:

layout=w0_0|0_0|w0_h0

the error message still points at the middle '0_0', which suggests a formatting problem.

This issue is very strange: the vstack and hstack commands still work; only the xstack fails.
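That error message comes from cmd.exe, not from ffmpeg: the stray double quote after graphs.mp4 unbalances the quoting, so the | characters in the layout end up outside quotes and cmd parses them as pipe operators. That is why the token after the first | is reported as an unrecognized command, whichever token it is. One way to sidestep shell quoting entirely is to pass ffmpeg an argument list from a script; a minimal sketch in Python (file names and the qvga scale are taken from the question, and ffmpeg is assumed to be on PATH):

```python
import subprocess

# The whole filtergraph travels as ONE argv element, so no shell ever
# sees the '|' characters inside the xstack layout.
filtergraph = (
    "[0:v]setpts=PTS-STARTPTS,scale=qvga[a0];"
    "[1:v]setpts=PTS-STARTPTS,scale=qvga[a1];"
    "[2:v]setpts=PTS-STARTPTS,scale=qvga[a2];"
    "[a0][a1][a2]xstack=inputs=3:layout=0_0|w0_0|w0_h0[out]"
)

cmd = [
    "ffmpeg",
    "-i", "states_full.mp4",
    "-i", "title.mp4",
    "-i", "graphs.mp4",
    "-filter_complex", filtergraph,
    "-map", "[out]",
    "-c:v", "libx264",
    "-t", "30",
    "-f", "matroska",
    "output.mp4",
]

# subprocess.run(cmd, check=True)  # uncomment to run; needs ffmpeg and the input files
```

In a plain cmd.exe command the fix is simply balanced quotes: drop the stray quote after graphs.mp4 so the whole -filter_complex argument stays inside one quoted string.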


-
FFmpeg: Encoding images with audio [duplicate]
10 February 2015, by Dries

This question is an exact duplicate of:

I'm trying to encode video and audio at the same time using FFmpeg.
At the moment I'm using this command:

ffmpeg -r 24 -pix_fmt rgba -s 1280x720 -f rawvideo -y -i - -f s16le -ac 1 -ar 44100 -i - -acodec pcm_s16le -ac 1 -b:a 320k -ar 44100 -vf vflip -vcodec mpeg1video -qscale 4 -bufsize 500KB -maxrate 5000KB OUTPUT_FILE
At first I had this without the audio information and input. So, without the audio, this works fine:

ffmpeg -r 24 -pix_fmt rgba -s 1280x720 -f rawvideo -y -i - -vf vflip -vcodec mpeg1video -qscale 4 -bufsize 500KB -maxrate 5000KB OUTPUT_FILE
To write the frames to the encoder (still without audio) I'm using the following code:
fwrite(frame, sizeof(unsigned char*) * frameWidth * frameHeight, 1, ffmpeg);
Now, I want to add audio to this encoder (see the first command). I added a new input and the audio information to the command, but I have no idea how, and when, I should write the audio to the encoder (together with the video).
I've tried this by adding the following code:

for (int t = 0; t < 44100/24; ++t)
    fwrite(&audio, sizeof(short int), 1, ffmpeg);

This should write 44100/24 ≈ 1837 samples of audio per video frame to the encoder. It is called right after the fwrite() for the video frame.
This gives weird results, however: the image is not rendered in the right spot, there are green flashes, and there is no audio.
I can’t really figure out how (or when) to write the audio to the encoder.
Thanks in advance!
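Two things in this setup are worth checking. First, both -i - inputs read from standard input, but a process has only one stdin, so ffmpeg cannot separate the interleaved video and audio bytes; the usual workaround is a second (named) pipe or an intermediate audio file. Second, on a 64-bit build sizeof(unsigned char*) is 8, not the 4 bytes of one RGBA pixel, so the video fwrite likely writes twice the intended amount, which would explain the misplaced image and green flashes. Separately, 44100 is not divisible by 24, so a fixed per-frame sample count drifts over time; a small sketch of drift-free bookkeeping (in Python for illustration, not the asker's C):

```python
def samples_for_frame(i, sample_rate=44100, fps=24):
    # Emit enough samples for frame i that the running total always equals
    # floor((i + 1) * sample_rate / fps), so no drift can accumulate.
    return (i + 1) * sample_rate // fps - i * sample_rate // fps

# One second of video (24 frames) gets exactly 44100 samples,
# alternating between 1837 and 1838 per frame.
per_frame = [samples_for_frame(i) for i in range(24)]
```

The same integer arithmetic carries over directly to a C loop counter.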
-
Optimizing conversion of video files to texture-atlas
7 November 2017, by Malu05

I am working on some code that converts a video file into a 1d texture atlas (a single JPG file containing all of the frames stacked on top of each other).
I do this by reading the frames in with FFmpeg's image2pipe, reading the pipe into a 3d numpy array, and then using numpy's concatenate function to stack them on top of each other.
The code itself works, but it is somewhat slow: in this example, 100 frames of a video take roughly 15-20 seconds. Reading the frames into the pipe is fairly fast, but reading from the pipe, converting to an array, and concatenating take some time.
I am not too familiar with ways to optimize numpy array handling, and I expect I am doing something very inefficient here, but I can't seem to spot it. Is there anything I can do to make this process faster?

import subprocess as sp
import numpy
import matplotlib.pyplot as plt  # Only for the test plotting.

FFMPEG_BIN = "ffmpeg"  # Path to the ffmpeg executable

images = 100  # Number of images in the texture atlas
res = [596, 336, 3]  # The output format (width, height, channels)

for x in range(0, images):  # Loop over each frame to read
    # Set up the command for reading 1 frame of the video.
    command = [FFMPEG_BIN,
               '-ss', '%s' % (x),
               '-i', '340114063.mp4',
               '-f', 'image2pipe',
               '-vframes', '1',
               '-vf', 'scale=%s:%s' % (res[0], res[1]),
               '-pix_fmt', 'rgb24',
               '-vcodec', 'rawvideo', '-']
    pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=-1)
    # Read the raw image data from the pipe.
    raw_image = pipe.stdout.read(res[0] * res[1] * res[2])
    # Convert the bytes to a flat numpy array.
    image = numpy.frombuffer(raw_image, dtype='uint8')
    # Clear the pipe.
    pipe.stdout.flush()
    if x > 0:
        # Append the frame below the previous ones to form the 1d texture atlas.
        image = numpy.concatenate((last_image, image), axis=0)
    last_image = image

# Not needed, just for the plotting example.
image = image.reshape((res[1] * images, res[0], res[2]))
imgplot = plt.imshow(image)
plt.savefig('foo.png')
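Two costs dominate here: a fresh ffmpeg process is launched (and the file re-sought with -ss) for every frame, and numpy.concatenate inside the loop re-copies all previously read frames, making the loop quadratic in the frame count. The array side can be fixed by preallocating the atlas once and filling row slices, so each frame is copied exactly once. A sketch, where build_atlas and the synthetic byte strings are illustrative stand-ins for the pipe reads:

```python
import numpy as np

def build_atlas(raw_frames, width, height):
    # Preallocate the final (n*height, width, 3) atlas once, instead of
    # concatenating inside the loop (which re-copies all earlier frames).
    frames = list(raw_frames)
    atlas = np.empty((height * len(frames), width, 3), dtype=np.uint8)
    for i, raw in enumerate(frames):
        atlas[i * height:(i + 1) * height] = np.frombuffer(
            raw, dtype=np.uint8).reshape(height, width, 3)
    return atlas

# Synthetic 2x2 RGB frames standing in for pipe.stdout.read() output:
f0 = bytes([0]) * (2 * 2 * 3)
f1 = bytes([255]) * (2 * 2 * 3)
atlas = build_atlas([f0, f1], width=2, height=2)
```

Reading all frames from a single ffmpeg invocation (no -ss, no -vframes, one read of res[0]*res[1]*res[2] bytes per loop iteration from the same pipe) would remove the per-frame process startup as well.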