
Keyword: Tags / copyleft

Other articles (69)

  • Emballe Médias: Putting documents online simply

    29 October 2010

    The emballe médias plugin was developed primarily for the mediaSPIP distribution, but it is also used in other related projects such as géodiversité.
    Required and compatible plugins
    For this plugin to work, the following other plugins must be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui.
    Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)

  • Automatic backup of SPIP channels

    1 April 2010

    When setting up an open platform, it is important for hosts to have reasonably regular backups available in order to cope with any problem that may arise.
    This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a mysql dump (usable in phpmyadmin), and mes_fichiers_2, which produces a zip archive of the site’s important data (the documents, the elements (...)
    A rough sketch of the same dump-and-zip idea follows below.
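
    The excerpt above describes the approach (a regular MySQL dump of the database plus a zip archive of the site’s files) without showing it. The following is only a rough Python sketch of that dump-and-zip idea, not the Saveauto or mes_fichiers_2 plugins themselves; the paths, database name and credentials are assumptions made up for the example.

        import os
        import subprocess
        import zipfile
        from datetime import datetime

        # Hypothetical settings -- adapt to the real site layout and credentials.
        DB_NAME = "spip"
        DB_USER = "spip"
        SITE_DIRS = ["IMG", "squelettes"]   # important site data: documents, templates
        BACKUP_DIR = "backups"

        def backup_site():
            os.makedirs(BACKUP_DIR, exist_ok=True)
            stamp = datetime.now().strftime("%Y%m%d-%H%M%S")

            # 1. Regular database backup as a MySQL dump (the role Saveauto plays).
            dump_path = os.path.join(BACKUP_DIR, "db-" + stamp + ".sql")
            with open(dump_path, "w") as dump_file:
                subprocess.run(["mysqldump", "-u", DB_USER, DB_NAME],
                               stdout=dump_file, check=True)

            # 2. Zip archive of the important site files (the role mes_fichiers_2 plays).
            zip_path = os.path.join(BACKUP_DIR, "files-" + stamp + ".zip")
            with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as archive:
                for directory in SITE_DIRS:
                    for root, _, files in os.walk(directory):
                        for name in files:
                            archive.write(os.path.join(root, name))

        if __name__ == "__main__":
            backup_site()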

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome the difficulties caused mainly by installing server-side software dependencies, an "all-in-one" installation script written in bash was created to simplify this step on a server running a compatible Linux distribution.
    To use it you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your provider if you do not have these.
    The documentation for using this installation script is available here.
    The code of this (...)

On other sites (10910)

  • avformat: add AV1 RTP depacketizer and packetizer

    26 August 2024, by Chris Hodges
    avformat: add AV1 RTP depacketizer and packetizer
    

    Add an RTP packetizer and depacketizer according to (most
    of) the official AV1 RTP specification. This enables
    streaming via RTSP between two ffmpeg instances and has
    also been tested to work with AV1 RTSP streams via
    GStreamer.

    It also adds the required SDP attributes for AV1.

    AV1 RTP encoding is marked as experimental because the
    specification is still a draft; the amount of debug output
    was reduced and other changes suggested by Tristan applied.

    Added optional code that searches for the sequence
    header to determine the first packet, for broken
    AV1 encoders / parsers.

    Depacketizing now stops on corruption until the next
    keyframe, and a packet is no longer issued prematurely
    when the temporal unit is not yet complete.

    Change-Id: I90f5c5b9d577908a0d713606706b5654fde5f910
    Signed-off-by: Chris Hodges <chrishod@axis.com>
    Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>

    • [DH] libavformat/Makefile
    • [DH] libavformat/demux.c
    • [DH] libavformat/rtp_av1.h
    • [DH] libavformat/rtpdec.c
    • [DH] libavformat/rtpdec_av1.c
    • [DH] libavformat/rtpdec_formats.h
    • [DH] libavformat/rtpenc.c
    • [DH] libavformat/rtpenc.h
    • [DH] libavformat/rtpenc_av1.c
    • [DH] libavformat/sdp.c
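
    The commit above implements the packetizer and depacketizer in C (rtpenc_av1.c and rtpdec_av1.c in the file list). Purely as an illustration of the payload format being handled, here is a small Python sketch of reading the one-byte aggregation header that the public AV1 RTP specification places at the start of each RTP payload; the field names follow that specification, and the sketch is illustrative only, not derived from the ffmpeg code.

        def parse_av1_aggregation_header(payload: bytes) -> dict:
            """Read the AV1 RTP aggregation header byte: |Z|Y| W |N| reserved |."""
            if not payload:
                raise ValueError("empty RTP payload")
            b = payload[0]
            return {
                # Z: the first OBU element continues a fragment from the previous packet
                "continues_fragment": bool(b & 0x80),
                # Y: the last OBU element will be continued in the next packet
                "will_continue": bool(b & 0x40),
                # W: number of OBU elements (0 means each element carries a length field)
                "num_obu_elements": (b >> 4) & 0x03,
                # N: first packet of a coded video sequence (a natural stream-start marker)
                "new_coded_video_sequence": bool(b & 0x08),
            }

        # Example: 0b00011000 -> one OBU element, start of a new coded video sequence.
        print(parse_av1_aggregation_header(bytes([0b00011000])))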
  • Developers and vendors: Want a Matomo Hoodie? Add a tag to the Matomo Open Source Tag Manager and this could be yours!

    7 June 2018, by Matomo Core Team — Community, Development

    The Free Open Source Tag Manager is now available as a public beta on the Matomo Marketplace. Don’t know what a Tag Manager is? Learn more here. In short: it lets you easily manage all your third-party JavaScript and HTML snippets (analytics, ads, social media, remarketing, affiliates, etc.) through a single interface.

    Over the last few months we have worked on building the core of the Matomo Tag Manager, which comes with a great set of features and a large set of pre-configured triggers and variables. However, we currently lack tags.

    This is where we need your help! Together we can build a complete, industry-leading open source tag manager.

    Tag examples include Google AdWords Conversion Tracking, Facebook Buttons, Facebook Pixels, Twitter Universal Website Tags, and LinkedIn Insights.

    Are you a developer who is familiar with JavaScript and keen on adding a tag? Or are you a vendor? Don’t be shy, we appreciate any tags, even analytics-related ones :) We have documented how to develop a new tag here; it is quite easy and straightforward. You may also need to understand a tiny bit of PHP, but you’ll likely be fine even if you don’t (here is an example PHP file and the related JS file).

    As we want to ship the Matomo Tag Manager with as many tags as possible out of the box, we appreciate any new tag additions as a pull request on https://github.com/matomo-org/tag-manager.

    For every contributor who contributes a tag within the next 3 months, we will send out “Matomo Contributor” stickers that cannot be purchased anywhere. As for the top 3 contributors… you’ll receive a Matomo hoodie! Simply send us an email at hello@matomo.org after your tag has been merged. If needed, a draw will decide who gets the hoodies.

    FYI: the Matomo Tag Manager is already prepared to be used in different contexts, and we may possibly generate containers for Android and iOS. If you are keen on building the official Matomo SDKs for any of these mobile platforms, please get in touch.

  • WARN: Tried to pass invalid video frame, marking as broken: Your frame has data type int64, but we require uint8

    5 September 2019, by Tavo Diaz

    I am doing some Udemy AI courses and came across one that "teaches" a two-dimensional cheetah how to walk. I was doing the exercises on my computer, but it takes too much time, so I decided to use Google Cloud to run the code and look at the results some hours later. Nevertheless, when I run the code I get the following error: "WARN: Tried to pass invalid video frame, marking as broken: Your frame has data type int64, but we require uint8 (i.e. RGB values from 0-255)".

    After the code executes, I look into the folder and I don’t see any videos (just the meta info).

    Some more info (if it helps):
    I have 1 CPU (4 GB), an SSD, and Ubuntu 16.04 LTS.

    I have not tried anything yet to solve it because I don’t know what to try. I have been looking for solutions on the web, but found nothing I could try.

    This is the code

    import os
    import numpy as np
    import gym
    from gym import wrappers
    import pybullet_envs


    class Hp():
       def __init__(self):
            self.nb_steps = 1000
            self.episode_length = 1000
            self.learning_rate = 0.02
            self.nb_directions = 32
            self.nb_best_directions = 32
            assert self.nb_best_directions <= self.nb_directions
            self.noise = 0.03
            self.seed = 1
            self.env_name = 'HalfCheetahBulletEnv-v0'


    class Normalizer():
       def __init__(self, nb_inputs):
           self.n = np.zeros(nb_inputs)
           self.mean = np.zeros(nb_inputs)
           self.mean_diff = np.zeros(nb_inputs)
           self.var = np.zeros(nb_inputs)

       def observe(self, x):
           self.n += 1.
           last_mean = self.mean.copy()
           self.mean += (x - self.mean) / self.n
            # below: online update of the numerator
           self.mean_diff += (x - last_mean) * (x - self.mean)
            # below: online computation of the variance
           self.var = (self.mean_diff / self.n).clip(min = 1e-2)  

       def normalize(self, inputs):
           obs_mean = self.mean
           obs_std = np.sqrt(self.var)
           return (inputs - obs_mean) / obs_std

    class Policy():
       def __init__(self, input_size, output_size):
           self.theta = np.zeros((output_size, input_size))

       def evaluate(self, input, delta = None, direction = None):
           if direction is None:
               return self.theta.dot(input)
           elif direction == 'positive':
               return (self.theta + hp.noise * delta).dot(input)
           else:
               return (self.theta - hp.noise * delta).dot(input)

       def sample_deltas(self):
           return [np.random.randn(*self.theta.shape) for _ in range(hp.nb_directions)]

       def update (self, rollouts, sigma_r):
           step = np.zeros(self.theta.shape)
           for r_pos, r_neg, d in rollouts:
               step += (r_pos - r_neg) * d
           self.theta += hp.learning_rate / (hp.nb_best_directions * sigma_r) * step


    def explore(env, normalizer, policy, direction = None, delta = None):
        state = env.reset()
        done = False
        num_plays = 0.
        # below: could instead be an average of the rewards
        sum_rewards = 0
        while not done and num_plays < hp.episode_length:
            normalizer.observe(state)
            state = normalizer.normalize(state)
            action = policy.evaluate(state, delta, direction)
            state, reward, done, _ = env.step(action)
            reward = max(min(reward, 1), -1)
            # below: this is where an average would be used instead
            sum_rewards += reward
            num_plays += 1
        return sum_rewards

    def train (env, policy, normalizer, hp):
       for step in range(hp.nb_steps):
            # initialize the delta perturbations and the positive/negative rewards
            deltas = policy.sample_deltas()
            positive_rewards = [0] * hp.nb_directions
            negative_rewards = [0] * hp.nb_directions
            # get the rewards in the positive direction
            for k in range(hp.nb_directions):
                positive_rewards[k] = explore(env, normalizer, policy, direction = 'positive', delta = deltas[k])
            # get the rewards in the negative direction
            for k in range(hp.nb_directions):
                negative_rewards[k] = explore(env, normalizer, policy, direction = 'negative', delta = deltas[k])
            # gather all the rewards to compute the standard deviation
            all_rewards = np.array(positive_rewards + negative_rewards)
            sigma_r = all_rewards.std()
            # sort the rollouts by max(r_pos, r_neg), highest first, and keep the best directions
            scores = {k: max(r_pos, r_neg) for k, (r_pos, r_neg) in enumerate(zip(positive_rewards, negative_rewards))}
            order = sorted(scores.keys(), key = lambda x: scores[x], reverse = True)[:hp.nb_best_directions]
            rollouts = [(positive_rewards[k], negative_rewards[k], deltas[k]) for k in order]
            # update the policy
            policy.update(rollouts, sigma_r)
            # print the policy's reward after the update
            reward_evaluation = explore(env, normalizer, policy)
            print('Step: ', step, 'Distance: ', reward_evaluation)

    def mkdir(base, name):
       path = os.path.join(base, name)
       if not os.path.exists(path):
           os.makedirs(path)
       return path
    work_dir = mkdir('exp', 'brs')
    monitor_dir = mkdir(work_dir, 'monitor')

    hp = Hp()
    np.random.seed(hp.seed)
    env = gym.make(hp.env_name)
    env = wrappers.Monitor(env, monitor_dir, force = True)
    nb_inputs = env.observation_space.shape[0]
    nb_outputs = env.action_space.shape[0]
    policy = Policy(nb_inputs, nb_outputs)
    normalizer = Normalizer(nb_inputs)
    train(env, policy, normalizer, hp)
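
    The warning quoted in the title means that the Monitor wrapper’s video recorder received frames stored as int64, while it only accepts uint8 RGB values in the 0-255 range. Below is a minimal NumPy sketch of that distinction, using a made-up frame array rather than anything produced by the code above:

        import numpy as np

        # A made-up 2x2 RGB "frame" built from Python ints: the dtype defaults to
        # int64 on most platforms, which is what the video recorder rejects.
        frame_int64 = np.array([[[255, 0, 0], [0, 255, 0]],
                                [[0, 0, 255], [255, 255, 255]]])
        print(frame_int64.dtype)   # int64

        # The recorder expects uint8 RGB values in the range 0-255.
        frame_uint8 = np.clip(frame_int64, 0, 255).astype(np.uint8)
        print(frame_uint8.dtype)   # uint8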