
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (63)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system, and please help us fix it by providing the following information: the browser you are using, including its exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.3. What's new?
Regarding software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
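Since the excerpt mentions FFprobe for metadata retrieval, here is a minimal sketch of what such retrieval can look like from Python; the probe() helper is illustrative and not part of MediaSPIP, while the flags are standard ffprobe options:

import json
import subprocess

def probe(path):
    # ask ffprobe for container and stream metadata as JSON
    out = subprocess.check_output([
        'ffprobe', '-v', 'quiet',
        '-print_format', 'json',
        '-show_format', '-show_streams',
        path,
    ])
    return json.loads(out)
-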
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
On other sites (8267)
-
TypeError: 'function' object is not subscriptable (Python with ffmpeg)
25 November 2019, by S1mple
import os
import random
import subprocess

clips = []

def clipFinder(CurrentDir, fileType):
    clips.clear()
    for r, d, f in os.walk(CurrentDir):
        for file in f:
            if fileType in file:
                clips.append(r + file)
    random.shuffle(clips)

def removeVods(r):
    for f in clips:
        if 'vod' in clips:
            os.remove(r + f)

def clipString():
    string = 'intermediate'
    clipList = []
    clipNum = 1
    for f in clips:
        clipList.append(string + str(clipNum) + '.ts' + '|')
        clipNum += 1
    string1 = ''.join(clipList)
    string2 = string1[0:len(string1) - 1]
    return string2

def concatFiles():
    clipFinder('***', '.mp4')
    removeVods('***')
    i = 0
    intermediates = []
    for f in clips:
        subprocess.call(['***', '-i', clips[i], '-c', 'copy', '-bsf:v', 'h264_mp4toannexb', '-f', 'mpegts', 'intermediate' + str(i + 1) + '.ts'])
        i += 1
    clipsLength = len(clips)
    subprocess.call['***', '-i', '"concat:' + clipString() + '"', '-c', 'copy', '-bsf:a aac_adtstoasc', 'output.mp4']

I am trying to make a clip concatenator, but the last subprocess call won't run and gives me the error shown in the question.
Problematic code:
subprocess.call['***', '-i', '"concat:' + clipString() + '"', '-c', 'copy', '-bsf:a aac_adtstoasc', 'output.mp4']
All places with *** were paths such as:
/davidscomputer/bin/ffmpeg/
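For anyone hitting the same error: subprocess.call is a function, so it must be called with parentheses; the square brackets try to subscript it, which raises TypeError: 'function' object is not subscriptable. A minimal sketch of the corrected call follows, with 'ffmpeg' standing in for the elided *** path and clipString() taken from the snippet above:

import subprocess

# parentheses call the function; square brackets subscript it
# each option and its value must be a separate list element
subprocess.call([
    'ffmpeg',                          # placeholder for the elided binary path
    '-i', 'concat:' + clipString(),    # no literal quotes needed without a shell
    '-c', 'copy',
    '-bsf:a', 'aac_adtstoasc',         # was one fused string in the question
    'output.mp4',
])
-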
WARN: Tried to pass invalid video frame, marking as broken: Your frame has data type int64, but we require uint8
5 September 2019, by Tavo Diaz
I am doing some Udemy AI courses and came across one that "teaches" a two-dimensional cheetah how to walk. I was doing the exercises on my computer, but it took too much time, so I decided to use Google Cloud to run the code and check the results some hours later. Nevertheless, when I run the code I get the following error: "WARN: Tried to pass invalid video frame, marking as broken: Your frame has data type int64, but we require uint8 (i.e. RGB values from 0-255)". After the code is executed, I look into the folder and don't see any videos (just the meta info).
Some more info (if it helps):
I have 1 CPU (4 GB), an SSD, and Ubuntu 16.04 LTS. I have not tried anything to solve it yet because I don't know what to try. I'm looking for solutions on the web, but have found nothing I could try.
This is the code:
import os
import numpy as np
import gym
from gym import wrappers
import pybullet_envs

class Hp():
    def __init__(self):
        self.nb_steps = 1000
        self.episode_lenght = 1000
        self.learning_rate = 0.02
        self.nb_directions = 32
        self.nb_best_directions = 32
        assert self.nb_best_directions <= self.nb_directions
        self.noise = 0.03
        self.seed = 1
        self.env_name = 'HalfCheetahBulletEnv-v0'

class Normalizer():
    def __init__(self, nb_inputs):
        self.n = np.zeros(nb_inputs)
        self.mean = np.zeros(nb_inputs)
        self.mean_diff = np.zeros(nb_inputs)
        self.var = np.zeros(nb_inputs)

    def observe(self, x):
        self.n += 1.
        last_mean = self.mean.copy()
        self.mean += (x - self.mean) / self.n
        # online numerator update below
        self.mean_diff += (x - last_mean) * (x - self.mean)
        # online computation of the variance below
        self.var = (self.mean_diff / self.n).clip(min=1e-2)

    def normalize(self, inputs):
        obs_mean = self.mean
        obs_std = np.sqrt(self.var)
        return (inputs - obs_mean) / obs_std

class Policy():
    def __init__(self, input_size, output_size):
        self.theta = np.zeros((output_size, input_size))

    def evaluate(self, input, delta=None, direction=None):
        if direction is None:
            return self.theta.dot(input)
        elif direction == 'positive':
            return (self.theta + hp.noise * delta).dot(input)
        else:
            return (self.theta - hp.noise * delta).dot(input)

    def sample_deltas(self):
        return [np.random.randn(*self.theta.shape) for _ in range(hp.nb_directions)]

    def update(self, rollouts, sigma_r):
        step = np.zeros(self.theta.shape)
        for r_pos, r_neg, d in rollouts:
            step += (r_pos - r_neg) * d
        self.theta += hp.learning_rate / (hp.nb_best_directions * sigma_r) * step

def explore(env, normalizer, policy, direction=None, delta=None):
    state = env.reset()
    done = False
    num_plays = 0.
    # this could be an average of the rewards
    sum_rewards = 0
    while not done and num_plays < hp.episode_lenght:
        normalizer.observe(state)
        state = normalizer.normalize(state)
        action = policy.evaluate(state, delta, direction)
        state, reward, done, _ = env.step(action)
        reward = max(min(reward, 1), -1)
        # this could be an average instead
        sum_rewards += reward
        num_plays += 1
    return sum_rewards

def train(env, policy, normalizer, hp):
    for step in range(hp.nb_steps):
        # initialise the delta perturbations and the positive/negative rewards
        deltas = policy.sample_deltas()
        positive_rewards = [0] * hp.nb_directions
        negative_rewards = [0] * hp.nb_directions
        # collect the rewards in the positive direction
        for k in range(hp.nb_directions):
            positive_rewards[k] = explore(env, normalizer, policy, direction='positive', delta=deltas[k])
        # collect the rewards in the negative direction
        for k in range(hp.nb_directions):
            negative_rewards[k] = explore(env, normalizer, policy, direction='negative', delta=deltas[k])
        # gather all rewards to compute the standard deviation
        all_rewards = np.array(positive_rewards + negative_rewards)
        sigma_r = all_rewards.std()
        # sort the rollouts by max(r_pos, r_neg) and select the best direction
        scores = {k: max(r_pos, r_neg) for k, (r_pos, r_neg) in enumerate(zip(positive_rewards, negative_rewards))}
        order = sorted(scores.keys(), key=lambda x: scores[x])[:hp.nb_best_directions]
        rollouts = [(positive_rewards[k], negative_rewards[k], deltas[k]) for k in order]
        # update the policy
        policy.update(rollouts, sigma_r)
        # evaluate the final reward of the policy after the update
        reward_evaluation = explore(env, normalizer, policy)
        print('Step: ', step, 'Distance: ', reward_evaluation)

def mkdir(base, name):
    path = os.path.join(base, name)
    if not os.path.exists(path):
        os.makedirs(path)
    return path

work_dir = mkdir('exp', 'brs')
monitor_dir = mkdir(work_dir, 'monitor')

hp = Hp()
np.random.seed(hp.seed)
env = gym.make(hp.env_name)
env = wrappers.Monitor(env, monitor_dir, force=True)
nb_inputs = env.observation_space.shape[0]
nb_outputs = env.action_space.shape[0]
policy = Policy(nb_inputs, nb_outputs)
normalizer = Normalizer(nb_inputs)
train(env, policy, normalizer, hp)
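A note on the warning itself: gym's Monitor records video by calling render(mode='rgb_array') and expects uint8 RGB frames, while the frames produced here arrive as int64. A minimal sketch of one possible workaround, casting frames to uint8 before the Monitor sees them (the wrapper below is illustrative, not a confirmed fix for this exact setup):

import numpy as np
import gym

class Uint8RenderWrapper(gym.Wrapper):
    # hypothetical wrapper: cast rendered frames to the uint8 RGB
    # format that gym's video recorder requires
    def render(self, mode='rgb_array', **kwargs):
        frame = self.env.render(mode=mode, **kwargs)
        if isinstance(frame, np.ndarray) and frame.dtype != np.uint8:
            frame = np.clip(frame, 0, 255).astype(np.uint8)
        return frame

# usage sketch: wrap the env before handing it to the Monitor
# env = wrappers.Monitor(Uint8RenderWrapper(gym.make(hp.env_name)), monitor_dir, force=True)
-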
Concatenate list of streams with ffmpeg
16 November 2019, by oioioi
I'd like to write Python code that automatically rearranges the scenes of a video. How do I use ffmpeg-python to concatenate a list of n streams/scenes? What is the easiest way to merge those scenes with their audio counterparts? Unfortunately, I couldn't figure out how to do it properly. The status quo:
import os

import scenedetect
from scenedetect.video_manager import VideoManager
from scenedetect.scene_manager import SceneManager
from scenedetect.frame_timecode import FrameTimecode
from scenedetect.stats_manager import StatsManager
from scenedetect.detectors import ContentDetector

import ffmpeg

STATS_FILE_PATH = 'stats.csv'

def main():
    for root, dirs, files in os.walk('material'):
        for file in files:
            file = os.path.join(root, file)
            print(file)
            video_manager = VideoManager([file])
            stats_manager = StatsManager()
            scene_manager = SceneManager(stats_manager)
            scene_manager.add_detector(ContentDetector())
            base_timecode = video_manager.get_base_timecode()
            end_timecode = video_manager.get_duration()
            start_time = base_timecode
            end_time = base_timecode + 100.0
            # end_time = end_timecode[2]
            video_manager.set_duration(start_time=start_time, end_time=end_time)
            video_manager.set_downscale_factor()
            video_manager.start()
            scene_manager.detect_scenes(frame_source=video_manager)
            scene_list = scene_manager.get_scene_list(base_timecode)
            print('List of scenes obtained:')
            for i, scene in enumerate(scene_list):
                print('    Scene %2d: Start %s / Frame %d, End %s / Frame %d' % (
                    i + 1,
                    scene[0].get_timecode(), scene[0].get_frames(),
                    scene[1].get_timecode(), scene[1].get_frames(),))
                start = scene[0].get_frames()
                end = scene[1].get_frames()
                print(start)
                print(end)
                (
                    ffmpeg
                    .input(file)
                    .trim(start_frame=start, end_frame=end)
                    .setpts('PTS-STARTPTS')
                    .output('scene %d.mp4' % (i + 1))
                    .run()
                )
            if stats_manager.is_save_required():
                with open(STATS_FILE_PATH, 'w') as stats_file:
                    stats_manager.save_to_csv(stats_file, base_timecode)

if __name__ == "__main__":
    main()
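One possible answer, as a minimal sketch using ffmpeg-python's concat filter: interleave each scene's video and audio streams and feed them to ffmpeg.concat with v=1, a=1. This assumes the scene files follow the 'scene %d.mp4' naming used above and that each one actually carries an audio stream (the trim-only pipeline above writes video only, so audio would need to be extracted alongside it):

import ffmpeg

n = 3  # number of detected scenes; illustrative value

# the concat filter expects streams interleaved as v1, a1, v2, a2, ...
streams = []
for i in range(n):
    inp = ffmpeg.input('scene %d.mp4' % (i + 1))
    streams.extend([inp.video, inp.audio])

joined = ffmpeg.concat(*streams, v=1, a=1).node
ffmpeg.output(joined[0], joined[1], 'output.mp4').run()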