
Media (2)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
Other articles (17)
-
Frequent problems
10 March 2010
PHP with safe_mode enabled
One of the main sources of problems stems from the PHP configuration, in particular from safe_mode being enabled.
The solution would be either to disable safe_mode or to place the script in a directory that Apache can access for the site.
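For reference (not part of the original article), disabling it is normally a one-line change in php.ini, safe_mode = Off, followed by a web-server restart; safe_mode was removed entirely in PHP 5.4, so this only concerns older installations.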
-
Authorizations overridden by plugins
27 April 2010
Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
Accepted formats
28 January 2010
The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
As a first step, we (...)
On other sites (3976)
-
Remapping multiple Mp4 videos into a single one with ffmpeg
21 September 2016, by Josep Bosch
I'm interested in remapping multiple (6) MP4 videos into a high-resolution final video according to lookup tables I calculated. The idea is to convert 6 independent videos into a 360° video using an equirectangular projection.
Example of equirectangular video here.
Is there a way to do this remapping with ffmpeg or any other Linux program?
Right now I'm extracting all the frames from the videos, creating the individual equirectangular images, and joining them again into a video. There must be a better way to do this...
UPDATE:
Following Mulyva's suggestion, I first remap each individual video using the remap filter. The parts of the panoramic video that are not covered are treated as chroma-key pixels using:
ffmpeg -i videos/camera1.MP4 -i camera0_map_x_radius5.pgm -i camera0_map_y_radius5.pgm -lavfi remap -qscale 1 out0.MP4
Then I try to overlay them using the chromakey filter:
ffmpeg -i out0.MP4 -i out1.MP4 -filter_complex "[1:v]chromakey=0x12da11:0.2:0.2[chromakey_res];[0:v][chromakey_res]overlay=eof_action=pass[out]" -map "[out]" out.mp4
As you can see, the final result has an undesirable green shadow. Any idea how to remove it?
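One thing worth trying (a suggestion added here, not from the original thread, and the threshold values are guesses) is to tighten the chromakey similarity and blend values and despill the keyed stream before overlaying it:
ffmpeg -i out0.MP4 -i out1.MP4 -filter_complex "[1:v]chromakey=0x12da11:0.1:0.05,despill=type=green[keyed];[0:v][keyed]overlay=eof_action=pass[out]" -map "[out]" out.mp4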
-
create a movie in python [on hold]
22 May 2017, by masoud
I have some files named ('Den_car_resample' + 'sdf' + str(n) + '.dat') where n ranges from 0 to 25. I wrote a code that reads these files and plots the results.
Now I want to create a movie from these plots. At the end of the program I used the avconv command to do that.
Unfortunately, my code creates a movie but it is empty.
I don't know the exact reason, but I think I first have to assign a frame to each plot and then create the movie.
Can anyone please tell me how I can define a frame and also set the bit_rate of the movie?
import sys
import subprocess
import sdf
import numpy as np
import matplotlib.pyplot as plt
import time
import matplotlib.animation as Animation
from matplotlib.font_manager import FontProperties
fp = FontProperties('Symbola')
##################### information from EPOCH input.deck
nx,ny= 1200, 1600
xmin=-100e-6
xmax = 110e-6
ymin = -200e-6
ymax = 200e-6
X =np.linspace(xmin,xmax,nx)
Y =np.linspace(ymin,ymax,ny)
#################
for n in range(0, 25):
    nstr = str(n)
    ######################..... reading Density of carbon
    filename = "Den_car_resample" + '_sdf_' + str(n) + '.dat'
    with open(filename, 'rb') as f:
        data = np.fromfile(f, dtype='float64', count=nx*ny)
    Den_car = np.reshape(data, [ny, nx], order='F')
    Den_car = np.log10(Den_car)
    ###################### Display Carbon density
    fig = plt.imshow(Den_car, extent=[X.min()*1e6, X.max()*1e6, Y.min()*1e6, Y.max()*1e6], vmin=24, vmax=29, cmap='brg', aspect='auto')
    plt.suptitle('Den_car')
    plt.title('sdf ' + str(n) + '; Time= ' + str(n*50) + 'ps', color='green', fontsize=15)
    plt.xlabel('x($\mu$m)')
    plt.ylabel('y($\mu$m)')
    plt.text(-80, -40, 'Den_Carbon', color='red', fontsize=15)
    plt.colorbar()
    plt.savefig('fig%06d.png' % n, bbox_inches='tight')
    plt.pause(.1)
    plt.clf()
    plt.close()
###################### Create movie
subprocess.call("avconv -framerate 1 -i fig%06d.png -c:v libx264 -profile:v high -crf 20".split())
sys.exit()
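One detail that stands out in the pasted code (an observation added here, not from the original post): the avconv command has no output filename, so as written it cannot produce a usable movie. Adding one, and optionally a bitrate, would look something like this, where movie.mp4 is an arbitrary name:
avconv -framerate 1 -i fig%06d.png -c:v libx264 -profile:v high -crf 20 movie.mp4
(-b:v 4000k could be used instead of -crf 20 to set an explicit bitrate.)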
-
Overlay circular video with transparency with maskedmerge
15 July 2017, by cgenco
I have a square video from Snap Spectacles (1088x1088) that I want to overlay on itself, zoomed in and blurred.
Example input frame:
Generated zoomed-in and blurred background:
Desired output:
I think I can do this with ffmpeg’s maskedmerge, but I’m having trouble finding examples.
There’s an example of maskedmerge that merges two videos of the same size and dynamically removes a green screen, and another that merges videos with transparency.
Here's the closest I've been able to get:
ffmpeg -i background.jpg -vf "movie=input.jpg[inner];[in][inner] overlay=#{offset}:0 [out]" -c:a copy output.jpg
tl;dr: given the first two frames, how could I generate the third frame (as video)?
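For what it's worth, maskedmerge takes three inputs of the same resolution and pixel format (base, overlay, mask) and blends the first two using the third as per-pixel weights. A rough sketch under those assumptions, where background.mp4 is the pre-rendered blurred background, padded_square.mp4 is the original video already padded to the background's size, and mask.png is a greyscale circle (white where the sharp video should show through); all three file names are placeholders:
ffmpeg -i background.mp4 -i padded_square.mp4 -loop 1 -i mask.png -filter_complex "[0:v][1:v][2:v]maskedmerge[out]" -map "[out]" -shortest output.mp4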