
Other articles (71)
-
Improving the basic version
13 September 2013
Nice multiple selection
The Chosen plugin improves the usability of multiple-select fields. Compare the two images below.
All it takes is to enable the Chosen plugin (General site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...) -
Emballe médias: what is it for?
4 February 2011, by
This plugin is designed to manage sites that publish documents of all kinds.
It creates "media": a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article; -
Custom menus
14 November 2010, by
MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This lets channel administrators fine-tune the configuration of these menus.
Menus created when the site is initialized
By default, three menus are created automatically when the site is initialized: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)
On other sites (9191)
-
WARN: Tried to pass invalid video frame, marking as broken: Your frame has data type int64, but we require uint8
5 September 2019, by Tavo Diaz
I am doing some Udemy AI courses and came across one that "teaches" a two-dimensional cheetah how to walk. I was doing the exercises on my computer, but it takes too much time, so I decided to run the code on Google Cloud and check the results a few hours later. However, when I run the code I get the following error: "WARN: Tried to pass invalid video frame, marking as broken: Your frame has data type int64, but we require uint8 (i.e. RGB values from 0-255)". After the code has finished, I look into the folder and I don't see any videos (just the meta info).
Some more info (if it helps):
I have 1 CPU (4 GB), SSD, Ubuntu 16.04 LTS. I have not tried anything yet to solve it because I don't know what to try; I am looking for solutions on the web, but have found nothing I could try.
This is the code:
import os
import numpy as np
import gym
from gym import wrappers
import pybullet_envs

# Hyperparameters
class Hp():
    def __init__(self):
        self.nb_steps = 1000
        self.episode_length = 1000
        self.learning_rate = 0.02
        self.nb_directions = 32
        self.nb_best_directions = 32
        assert self.nb_best_directions <= self.nb_directions
        self.noise = 0.03
        self.seed = 1
        self.env_name = 'HalfCheetahBulletEnv-v0'

# Online normalization of the observed states
class Normalizer():
    def __init__(self, nb_inputs):
        self.n = np.zeros(nb_inputs)
        self.mean = np.zeros(nb_inputs)
        self.mean_diff = np.zeros(nb_inputs)
        self.var = np.zeros(nb_inputs)

    def observe(self, x):
        self.n += 1.
        last_mean = self.mean.copy()
        self.mean += (x - self.mean) / self.n
        # online update of the numerator of the variance
        self.mean_diff += (x - last_mean) * (x - self.mean)
        # online computation of the variance, clipped to avoid division by zero
        self.var = (self.mean_diff / self.n).clip(min=1e-2)

    def normalize(self, inputs):
        obs_mean = self.mean
        obs_std = np.sqrt(self.var)
        return (inputs - obs_mean) / obs_std

# Linear policy, optionally perturbed in a positive or negative direction
class Policy():
    def __init__(self, input_size, output_size):
        self.theta = np.zeros((output_size, input_size))

    def evaluate(self, state, delta=None, direction=None):
        if direction is None:
            return self.theta.dot(state)
        elif direction == 'positive':
            return (self.theta + hp.noise * delta).dot(state)
        else:
            return (self.theta - hp.noise * delta).dot(state)

    def sample_deltas(self):
        return [np.random.randn(*self.theta.shape) for _ in range(hp.nb_directions)]

    def update(self, rollouts, sigma_r):
        step = np.zeros(self.theta.shape)
        for r_pos, r_neg, d in rollouts:
            step += (r_pos - r_neg) * d
        self.theta += hp.learning_rate / (hp.nb_best_directions * sigma_r) * step

# Run one episode and return the accumulated (clipped) reward
def explore(env, normalizer, policy, direction=None, delta=None):
    state = env.reset()
    done = False
    num_plays = 0.
    # accumulates the sum of rewards (could also be an average)
    sum_rewards = 0
    while not done and num_plays < hp.episode_length:
        normalizer.observe(state)
        state = normalizer.normalize(state)
        action = policy.evaluate(state, delta, direction)
        state, reward, done, _ = env.step(action)
        reward = max(min(reward, 1), -1)
        sum_rewards += reward
        num_plays += 1
    return sum_rewards

def train(env, policy, normalizer, hp):
    for step in range(hp.nb_steps):
        # initialize the delta perturbations and the positive/negative rewards
        deltas = policy.sample_deltas()
        positive_rewards = [0] * hp.nb_directions
        negative_rewards = [0] * hp.nb_directions
        # collect the rewards in the positive direction
        for k in range(hp.nb_directions):
            positive_rewards[k] = explore(env, normalizer, policy, direction='positive', delta=deltas[k])
        # collect the rewards in the negative direction
        for k in range(hp.nb_directions):
            negative_rewards[k] = explore(env, normalizer, policy, direction='negative', delta=deltas[k])
        # gather all the rewards to compute their standard deviation
        all_rewards = np.array(positive_rewards + negative_rewards)
        sigma_r = all_rewards.std()
        # rank the rollouts by max(r_pos, r_neg), highest first, and keep the best directions
        scores = {k: max(r_pos, r_neg) for k, (r_pos, r_neg) in enumerate(zip(positive_rewards, negative_rewards))}
        order = sorted(scores.keys(), key=lambda x: -scores[x])[:hp.nb_best_directions]
        rollouts = [(positive_rewards[k], negative_rewards[k], deltas[k]) for k in order]
        # update the policy
        policy.update(rollouts, sigma_r)
        # evaluate the policy on a fresh episode after the update
        reward_evaluation = explore(env, normalizer, policy)
        print('Step:', step, 'Reward:', reward_evaluation)

def mkdir(base, name):
    path = os.path.join(base, name)
    if not os.path.exists(path):
        os.makedirs(path)
    return path

work_dir = mkdir('exp', 'brs')
monitor_dir = mkdir(work_dir, 'monitor')

hp = Hp()
np.random.seed(hp.seed)
env = gym.make(hp.env_name)
env = wrappers.Monitor(env, monitor_dir, force=True)
nb_inputs = env.observation_space.shape[0]
nb_outputs = env.action_space.shape[0]
policy = Policy(nb_inputs, nb_outputs)
normalizer = Normalizer(nb_inputs)
train(env, policy, normalizer, hp)
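The warning itself comes from gym's video recorder, which only accepts rendered frames as uint8 RGB arrays; anything else is marked broken and dropped, which is why the monitor folder ends up with meta info but no videos. A minimal workaround sketch, assuming the pybullet environment returns frames with a wider dtype on the headless VM (Uint8FrameWrapper is a hypothetical helper, not part of gym):

import numpy as np
import gym

class Uint8FrameWrapper(gym.Wrapper):
    # Cast frames returned by render(mode='rgb_array') to uint8
    # so the Monitor's video recorder accepts them.
    def render(self, mode='human', **kwargs):
        frame = self.env.render(mode=mode, **kwargs)
        if mode == 'rgb_array' and frame is not None and frame.dtype != np.uint8:
            # clamp to the 0-255 range the recorder expects, then cast
            frame = np.clip(frame, 0, 255).astype(np.uint8)
        return frame

# usage: wrap the env before handing it to the Monitor
# env = Uint8FrameWrapper(gym.make(hp.env_name))
# env = wrappers.Monitor(env, monitor_dir, force=True)
-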
Ubuntu ffmpeg can't use libx264
27 May 2021, by user1315621
Command:

ffprobe rtsp://localhost/myvideo -codec:v libx264 -show_frames -of csv

Output:

Failed to set value 'libx264' for option 'codec:v': Option not found

But libx264 seems to be installed - sudo apt install libx264-dev:

Reading package lists... Done
Building dependency tree
Reading state information... Done
libx264-dev is already the newest version (2:0.152.2854+gite9a5903-2).
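Two things are worth noting here. First, ffprobe analyses streams and does not take encoder options, so -codec:v libx264 is rejected regardless of what is installed; encoding with libx264 is ffmpeg's job. Second, libx264-dev only provides the x264 library and headers; it does not mean the local ffmpeg binary was compiled with --enable-libx264. A minimal sketch to check the latter, which simply searches ffmpeg's encoder list:

import subprocess

# List the encoders compiled into the local ffmpeg binary and look for libx264.
result = subprocess.run(['ffmpeg', '-hide_banner', '-encoders'],
                        capture_output=True, text=True)
if 'libx264' in result.stdout:
    print('this ffmpeg build includes the libx264 encoder')
else:
    print('this ffmpeg build lacks libx264; use a build compiled with --enable-libx264')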


