Advanced search

Media (0)

Word: - Tags - / editorial object

No media matching your criteria is available on the site.

Other articles (111)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • User profiles

    12 April 2011

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in on the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

On other sites (15699)

  • WARN: Tried to pass invalid video frame, marking as broken: Your frame has data type int64, but we require uint8

    5 September 2019, by Tavo Diaz

    I am doing some Udemy AI courses and came across one that "teaches" a two-dimensional cheetah how to walk. I was doing the exercises on my computer, but they take too much time, so I decided to use Google Cloud to run the code and check the results a few hours later. Nevertheless, when I run the code I get the following error: "WARN: Tried to pass invalid video frame, marking as broken: Your frame has data type int64, but we require uint8 (i.e. RGB values from 0-255)".

    After the code has executed, I look into the folder and I don't see any videos (just the meta info).

    Some more info (if it helps):
    I have a 1-CPU (4 GB) SSD Ubuntu 16.04 LTS instance.

    I have not tried anything to solve it yet because I don't know what to try. I am looking for solutions on the web, but have found nothing I could try.

    This is the code (a note on the warning's uint8 requirement follows it):

    import os
    import numpy as np
    import gym
    from gym import wrappers
    import pybullet_envs


    class Hp():
       def __init__(self):
           self.nb_steps = 1000
           self.episode_length = 1000
           self.learning_rate = 0.02
           self.nb_directions = 32
           self.nb_best_directions = 32
           assert self.nb_best_directions <= self.nb_directions
           self.noise = 0.03
           self.seed = 1
           self.env_name = 'HalfCheetahBulletEnv-v0'


    class Normalizer():
       def __init__(self, nb_inputs):
           self.n = np.zeros(nb_inputs)
           self.mean = np.zeros(nb_inputs)
           self.mean_diff = np.zeros(nb_inputs)
           self.var = np.zeros(nb_inputs)

       def observe(self, x):
           self.n += 1.
           last_mean = self.mean.copy()
           self.mean += (x - self.mean) / self.n
           # online update of the variance numerator (Welford's method)
           self.mean_diff += (x - last_mean) * (x - self.mean)
           # online variance, clipped so normalize() never divides by ~zero
           self.var = (self.mean_diff / self.n).clip(min = 1e-2)

       def normalize(self, inputs):
           obs_mean = self.mean
           obs_std = np.sqrt(self.var)
           return (inputs - obs_mean) / obs_std

    class Policy():
       def __init__(self, input_size, output_size):
           self.theta = np.zeros((output_size, input_size))

       def evaluate(self, input, delta = None, direction = None):
           if direction is None:
               return self.theta.dot(input)
           elif direction == 'positive':
               return (self.theta + hp.noise * delta).dot(input)
           else:
               return (self.theta - hp.noise * delta).dot(input)

       def sample_deltas(self):
           return [np.random.randn(*self.theta.shape) for _ in range(hp.nb_directions)]

       def update(self, rollouts, sigma_r):
           step = np.zeros(self.theta.shape)
           for r_pos, r_neg, d in rollouts:
               step += (r_pos - r_neg) * d
           self.theta += hp.learning_rate / (hp.nb_best_directions * sigma_r) * step


    def explore(env, normalizer, policy, direction = None, delta = None):
       state = env.reset()
       done = False
       num_plays = 0.
       # sum of the rewards over the episode (could also be an average)
       sum_rewards = 0
       while not done and num_plays < hp.episode_length:
           normalizer.observe(state)
           state = normalizer.normalize(state)
           action = policy.evaluate(state, delta, direction)
           state, reward, done, _ = env.step(action)
           reward = max(min(reward, 1), -1)
           # accumulate the clipped reward
           sum_rewards += reward
           num_plays += 1
       return sum_rewards

    def train(env, policy, normalizer, hp):
       for step in range(hp.nb_steps):
           # initialise the delta perturbations and the positive/negative rewards
           deltas = policy.sample_deltas()
           positive_rewards = [0] * hp.nb_directions
           negative_rewards = [0] * hp.nb_directions
           # collect the rewards in the positive direction
           for k in range(hp.nb_directions):
               positive_rewards[k] = explore(env, normalizer, policy, direction = 'positive', delta = deltas[k])
           # collect the rewards in the negative direction
           for k in range(hp.nb_directions):
               negative_rewards[k] = explore(env, normalizer, policy, direction = 'negative', delta = deltas[k])
           # gather all the rewards to compute their standard deviation
           all_rewards = np.array(positive_rewards + negative_rewards)
           sigma_r = all_rewards.std()
           # rank the rollouts by max(r_pos, r_neg), keeping the best-scoring directions
           scores = {k:max(r_pos, r_neg) for k, (r_pos, r_neg) in enumerate(zip(positive_rewards, negative_rewards))}
           order = sorted(scores.keys(), key = lambda x: -scores[x])[:hp.nb_best_directions]
           rollouts = [(positive_rewards[k], negative_rewards[k], deltas[k]) for k in order]
           # update the policy
           policy.update(rollouts, sigma_r)
           # evaluate the reward of the policy after the update
           reward_evaluation = explore(env, normalizer, policy)
           print('Step:', step, 'Distance:', reward_evaluation)

    def mkdir(base, name):
       path = os.path.join(base, name)
       if not os.path.exists(path):
           os.makedirs(path)
       return path
    work_dir = mkdir('exp', 'brs')
    monitor_dir = mkdir(work_dir, 'monitor')

    hp = Hp()
    np.random.seed(hp.seed)
    env = gym.make(hp.env_name)
    env = wrappers.Monitor(env, monitor_dir, force = True)
    nb_inputs = env.observation_space.shape[0]
    nb_outputs = env.action_space.shape[0]
    policy = Policy(nb_inputs, nb_outputs)
    normalizer = Normalizer(nb_inputs)
    train(env, policy, normalizer, hp)
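
    A side note on the warning itself: it states exactly what the Monitor wrapper expects, namely uint8 RGB frames with values in 0-255. Below is a minimal, untested sketch of the conversion it is asking for; the frame array is a hypothetical stand-in, not a variable from the code above:

    import numpy as np

    frame = np.zeros((64, 64, 3), dtype = np.int64)   # a frame that arrived as int64
    if frame.dtype != np.uint8:
        # clip to the valid RGB range, then cast to the dtype the recorder requires
        frame = np.clip(frame, 0, 255).astype(np.uint8)
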
  • Unable to play RTSP URL with ffplay but able to play it with VLC

    15 February 2018, by Dims

    I am trying to view my IP camera. I can see it with VLC at the address

    rtsp://192.168.1.168

    Unfortunately, when I enter the same address into ffplay

    ffplay rtsp://192.168.1.168

    I get multiple errors like

    UDP timeout, retrying with TCP 0B f=0/0
    method PAUSE failed: 455 Method Not Valid In This State

    Why does this happen, and how can I overcome it?
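
    One lead worth noting (a suggestion, not verified against this camera): the "UDP timeout, retrying with TCP" message hints that the stream is only reachable over TCP, and ffplay accepts the RTSP demuxer option that forces TCP transport from the start:

    ffplay -rtsp_transport tcp rtsp://192.168.1.168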

  • x86: checkasm: check for or handle missing cleanup after MMX instructions

    11 December 2015, by Janne Grunau

    Not every asm routine is expected to clear the MMX state after returning.
    It is, however, a prerequisite for testing floating-point code in checkasm.
    Annotate functions requiring cleanup with declare_func_emms() and issue
    emms after the call. The remaining functions are checked for having a
    cleared MMX state after return. (A short illustration of emms follows the
    file list below.)

    • tests/checkasm/checkasm.h
    • tests/checkasm/h264pred.c
    • tests/checkasm/h264qpel.c
    • tests/checkasm/x86/checkasm.asm
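
    For background on why the cleanup matters: the MMX registers alias the x87 floating-point register stack, so a routine that has executed MMX instructions must issue emms before floating-point code runs again. A minimal illustration using compiler intrinsics (generic C for an x86 target; illustrative only, not FFmpeg's checkasm code):

    #include <mmintrin.h>

    void mmx_then_float(void)
    {
        __m64 v = _mm_set1_pi16(1); /* any MMX work marks the x87 registers as in use */
        (void)v;
        _mm_empty();                /* emms: releases the x87 state so FP code is safe */
    }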