Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (51)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that led to the problem; and a link to the site / page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (6408)

  • mov: Export spherical information

    2 November 2016, by Vittorio Giovara
    mov: Export spherical information
    

    This implements Spherical Video V1 and V2, as described in the
    spatial-media collection by Google.

    Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>

    • [DBH] Changelog
    • [DBH] libavformat/isom.h
    • [DBH] libavformat/mov.c
    • [DBH] libavformat/version.h
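
    To check that the spherical metadata is actually exported, the stream side data can be inspected with ffprobe. The sketch below is only an illustration: it assumes an FFmpeg build that includes this commit and a hypothetical file named video.mp4, and the exact JSON field names may vary between FFmpeg versions.

     import json
     import subprocess

     # Ask ffprobe for the first video stream as JSON; spherical metadata is
     # reported as stream side data once the demuxer exports it.
     out = subprocess.run(
         ["ffprobe", "-v", "error", "-select_streams", "v:0",
          "-show_streams", "-of", "json", "video.mp4"],
         capture_output=True, text=True, check=True,
     ).stdout

     for stream in json.loads(out).get("streams", []):
         for side_data in stream.get("side_data_list", []):
             # Look for the spherical mapping entry (the exact name may differ by version).
             if "spherical" in side_data.get("side_data_type", "").lower():
                 print(side_data)
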
  • Android - How to pass frames from FFmpeg back to Android

    23 October 2013, by yarin

    This is an architecture question; I am really interested in the answer.

    I am building an app with the following goals:

    1. Record video with an effect in real time (using FFmpeg)

    2. Display the customized video to the user in real time while recording

    So, after one month of work... I decided to remember that goal number 2 is worth thinking about :)
    I have a working skeleton app that records video with an effect in real time,
    but I have to preview these customized frames back to the user.

    My options (and this is my question):

    1. Each frame that passes from onPreviewFrame(byte[] video_frame_data, Camera camera) to FFmpeg through JNI for encoding would be sent back to Android through the same JNI after I apply the effects (I mean: onPreviewFrame -> JNI to FFmpeg -> immediately apply the effect -> send the customized frame back to the Android side for display -> encode the customized frame).

    Advantages: it looks like the easiest option to use.

    Disadvantages: crossing JNI twice, and passing the frame back, could consume time (I really don't know if that is a big price to pay, since it is only a byte array or int array per frame to send to the Android side; a rough estimate of the data volume is sketched after this question).

    2. I heard about OpenGL in the NDK, but I think the surface itself is created on the Android side, so is it really going to be better?
    I would prefer to keep using the surface that I am already using in Java.

    3. Create a video player on FFmpeg to preview each customized frame in real time.

    Thanks for your help; I hope the first solution is feasible and does not consume too much time in terms of real-time processing.
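
    A rough back-of-the-envelope sketch of the data volume involved in option 1, assuming a 1280x720 NV21 preview at 30 fps (the resolution and frame rate are assumptions, not figures from the question):

     # Rough estimate of the data crossing the JNI boundary per second in option 1.
     # Assumed (not from the question): 1280x720 NV21 preview frames at 30 fps.
     width, height, fps = 1280, 720, 30

     # NV21 carries 8-bit luma for every pixel plus 2x2-subsampled interleaved chroma,
     # i.e. 12 bits (1.5 bytes) per pixel.
     bytes_per_frame = int(width * height * 1.5)
     mib_per_second = bytes_per_frame * fps / (1024 * 1024)

     print(bytes_per_frame, "bytes per frame")
     print(round(mib_per_second, 1), "MiB/s per direction across the JNI boundary")

    Whether that copy cost matters in practice depends on the device, so measuring on the target hardware is the only way to settle it.
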

  • WARN: Tried to pass invalid video frame, marking as broken: Your frame has data type int64, but we require uint8

    5 September 2019, by Tavo Diaz

    I am doing some Udemy AI courses and came across one that "teaches" a two-dimensional cheetah how to walk. I was doing the exercises on my computer, but it takes too much time. I decided to use Google Cloud to run the code and see the results some hours later. Nevertheless, when I run the code I get the following error: "WARN: Tried to pass
    invalid video frame, marking as broken: Your frame has data type int64, but we require uint8 (i.e. RGB values from 0-255)".

    After the code is executed, I look into the folder and I don't see any videos (just the meta info).

    Some more info (if it helps):
    I have 1 CPU (4g), an SSD and Ubuntu 16.04 LTS.

    I have not tried anything yet to solve it because I don't know what to try. I'm looking for solutions on the web, but I haven't found anything I could try.

    This is the code

    import os
    import numpy as np
    import gym
    from gym import wrappers
    import pybullet_envs


    class Hp():
       def __init__(self):
            self.nb_steps = 1000
            self.episode_lenght = 1000
            self.learning_rate = 0.02
            self.nb_directions = 32
            self.nb_best_directions = 32
            assert self.nb_best_directions <= self.nb_directions
            self.noise = 0.03
            self.seed = 1
            self.env_name = 'HalfCheetahBulletEnv-v0'


    class Normalizer():
       def __init__(self, nb_inputs):
           self.n = np.zeros(nb_inputs)
           self.mean = np.zeros(nb_inputs)
           self.mean_diff = np.zeros(nb_inputs)
           self.var = np.zeros(nb_inputs)

       def observe(self, x):
           self.n += 1.
           last_mean = self.mean.copy()
           self.mean += (x - self.mean) / self.n
            # below is the online numerator update (for the running variance)
           self.mean_diff += (x - last_mean) * (x - self.mean)
            # below: online computation of the variance
           self.var = (self.mean_diff / self.n).clip(min = 1e-2)  

       def normalize(self, inputs):
           obs_mean = self.mean
           obs_std = np.sqrt(self.var)
           return (inputs - obs_mean) / obs_std

    class Policy():
       def __init__(self, input_size, output_size):
           self.theta = np.zeros((output_size, input_size))

       def evaluate(self, input, delta = None, direction = None):
           if direction is None:
               return self.theta.dot(input)
           elif direction == 'positive':
               return (self.theta + hp.noise * delta).dot(input)
           else:
               return (self.theta - hp.noise * delta).dot(input)

       def sample_deltas(self):
           return [np.random.randn(*self.theta.shape) for _ in range(hp.nb_directions)]

       def update (self, rollouts, sigma_r):
           step = np.zeros(self.theta.shape)
           for r_pos, r_neg, d in rollouts:
               step += (r_pos - r_neg) * d
           self.theta += hp.learning_rate / (hp.nb_best_directions * sigma_r) * step


    def explore(env, normalizer, policy, direction = None, delta = None):
        state = env.reset()
        done = False
        num_plays = 0.
        # below: could be the average of the rewards instead
        sum_rewards = 0
        while not done and num_plays < hp.episode_lenght:
            normalizer.observe(state)
            state = normalizer.normalize(state)
            action = policy.evaluate(state, delta, direction)
            state, reward, done, _ = env.step(action)
            reward = max(min(reward, 1), -1)
            # below: this is where an average would go instead
            sum_rewards += reward
            num_plays += 1
        return sum_rewards

    def train (env, policy, normalizer, hp):
       for step in range(hp.nb_steps):
            # initialise the delta perturbations and the positive/negative rewards
            deltas = policy.sample_deltas()
            positive_rewards = [0] * hp.nb_directions
            negative_rewards = [0] * hp.nb_directions
            # collect the rewards in the positive direction
            for k in range(hp.nb_directions):
                positive_rewards[k] = explore(env, normalizer, policy, direction = 'positive', delta = deltas[k])
            # collect the rewards in the negative direction
            for k in range(hp.nb_directions):
                negative_rewards[k] = explore(env, normalizer, policy, direction = 'negative', delta = deltas[k])
            # gather all the rewards to compute the standard deviation
            all_rewards = np.array(positive_rewards + negative_rewards)
            sigma_r = all_rewards.std()
            # sort the rollouts by max(r_pos, r_neg) and select the best directions
            scores = {k:max(r_pos, r_neg) for k, (r_pos, r_neg) in enumerate(zip(positive_rewards, negative_rewards))}
            order = sorted(scores.keys(), key = lambda x:scores[x])[:hp.nb_best_directions]
            rollouts = [(positive_rewards[k], negative_rewards[k], deltas[k]) for k in order]
            # update the policy
            policy.update (rollouts, sigma_r)
            # evaluate the final reward of the policy after the update
            reward_evaluation = explore (env, normalizer, policy)
            print('Step: ', step, 'Distance: ', reward_evaluation)

    def mkdir(base, name):
       path = os.path.join(base, name)
       if not os.path.exists(path):
           os.makedirs(path)
       return path
    work_dir = mkdir('exp', 'brs')
    monitor_dir = mkdir(work_dir, 'monitor')

    hp = Hp()
    np.random.seed(hp.seed)
    env = gym.make(hp.env_name)
    env = wrappers.Monitor(env, monitor_dir, force = True)
    nb_inputs = env.observation_space.shape[0]
    nb_outputs = env.action_space.shape[0]
    policy = Policy(nb_inputs, nb_outputs)
    normalizer = Normalizer(nb_inputs)
    train(env, policy, normalizer, hp)
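
    The warning comes from the Monitor's video recorder, which expects each rendered frame to be a uint8 RGB array with values in 0-255, while the environment here apparently returns int64 frames. A minimal sketch of one thing that could be tried (this wrapper and where it is applied are assumptions, not a verified fix) is to cast the rendered frames to uint8 before they reach the recorder:

     import numpy as np
     import gym


     class Uint8RenderWrapper(gym.Wrapper):
         """Cast rendered rgb_array frames to uint8 so the Monitor accepts them."""

         def render(self, mode='rgb_array', **kwargs):
             frame = self.env.render(mode=mode, **kwargs)
             # The video recorder requires uint8 RGB values in the 0-255 range.
             if isinstance(frame, np.ndarray) and frame.dtype != np.uint8:
                 frame = np.clip(frame, 0, 255).astype(np.uint8)
             return frame


     # Hypothetical usage: wrap the environment before handing it to the Monitor.
     # env = Uint8RenderWrapper(gym.make(hp.env_name))
     # env = wrappers.Monitor(env, monitor_dir, force = True)
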