
Other articles (75)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make other manual (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in the standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images for a comparison.
    To use it, simply enable the Chosen plugin (site configuration > plugin management), then configure it (templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (8873)

  • dnn_backend_native_layer_mathunary: add asin support

    18 June 2020, by Ting Fu
    dnn_backend_native_layer_mathunary: add asin support
    

    It can be tested with the model generated by the Python script below:

    import tensorflow as tf
    import numpy as np
    import imageio

    in_img = imageio.imread('input.jpeg')
    in_img = in_img.astype(np.float32)/255.0
    in_data = in_img[np.newaxis, :]

    x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    x1 = tf.asin(x)
    x2 = tf.divide(x1, 3.1416/2) # pi/2
    y = tf.identity(x2, name='dnn_out')

    sess=tf.Session()
    sess.run(tf.global_variables_initializer())

    graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

    print("image_process.pb generated, please use \
    path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

    output = sess.run(y, feed_dict={x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))
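
    Converting and running the generated model might then look like this (a sketch: convert.py is named in the commit message, while the dnn_processing filter options are assumptions based on FFmpeg's native DNN backend of that period):

    python path_to_ffmpeg/tools/python/convert.py image_process.pb
    ffmpeg -i input.jpeg -vf dnn_processing=dnn_backend=native:model=image_process.model:input=dnn_in:output=dnn_out asin_out.jpg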

    Signed-off-by: Ting Fu <ting.fu@intel.com>
    Signed-off-by: Guo Yejun <yejun.guo@intel.com>

    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.h
    • [DH] tools/python/convert_from_tensorflow.py
    • [DH] tools/python/convert_header.py
  • Merge/combine two FFmpeg commands into one

    26 February 2021, by Mayank Thapliyal

    I have been processing videos for a while, using ffmpeg to make my life easy. But there are two commands that I want to combine into a single command:

    Step 1: Split the video into top and bottom halves, then stack them horizontally

    ffmpeg -i usa.mp4 -filter_complex "[0]crop=iw:ih/2:0:0[top];[0]crop=iw:ih/2:0:oh[bottom];[top][bottom]hstack" -preset fast -c:a copy usa$.mp4

    Step 2: Concatenate 3 videos into a single video (the video from Step 1 goes between start.mp4 and end.mp4)

    ffmpeg -i start.mp4 -i usa$.mp4 -i end.mp4 -vsync 2 -filter_complex "[0:v] [0:a] [1:v] [1:a] [2:v] [2:a] concat=n=3:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" usa_.mp4

    Can anyone please combine these into a single command? That would save me a lot of computing time (I guess). A possible combination is sketched below.

    Thanks in advance
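
    One way to combine the two steps (a sketch, untested: it feeds the original usa.mp4 into a single filter graph that does the crop/hstack and then the concat, assuming, as the two-step version already requires, that start.mp4 and end.mp4 are concat-compatible with the stacked clip):

    ffmpeg -i start.mp4 -i usa.mp4 -i end.mp4 -vsync 2 -filter_complex "[1:v]crop=iw:ih/2:0:0[top];[1:v]crop=iw:ih/2:0:oh[bottom];[top][bottom]hstack[stacked];[0:v][0:a][stacked][1:a][2:v][2:a]concat=n=3:v=1:a=1[v][a]" -map "[v]" -map "[a]" -preset fast usa_.mp4

    This skips writing the intermediate usa$.mp4 to disk, which is where the time saving would come from.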

  • dnn_backend_native_layer_mathunary: add atanh support

    29 June 2020, by Ting Fu
    dnn_backend_native_layer_mathunary: add atanh support
    

    It can be tested with the model generated by the Python script below:

    import tensorflow as tf
    import numpy as np
    import imageio

    in_img = imageio.imread('input.jpeg')
    in_img = in_img.astype(np.float32)/255.0
    in_data = in_img[np.newaxis, :]

    x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')

    # Uncomment the block you want to test; keep exactly one x_out assignment live.
    # This commit is about atanh, so the atanh block is left active below.

    # x_sinh_1 = tf.sinh(x)
    # x_out = tf.divide(x_sinh_1, 1.176) # sinh(1.0)

    # x_cosh_1 = tf.cosh(x)
    # x_out = tf.divide(x_cosh_1, 1.55) # cosh(1.0)

    # x_tanh_1 = tf.tanh(x)
    # x_out = tf.divide(x_tanh_1, 0.77) # tanh(1.0)

    # x_asinh_1 = tf.asinh(x)
    # x_out = tf.divide(x_asinh_1, 0.89) # asinh(1.0/1.1)

    # x_acosh_1 = tf.add(x, 1.1)
    # x_acosh_2 = tf.acosh(x_acosh_1) # accepts inputs in (1, inf)
    # x_out = tf.divide(x_acosh_2, 1.4) # acosh(2.1)

    x_atanh_1 = tf.divide(x, 1.1)
    x_atanh_2 = tf.atanh(x_atanh_1) # accepts inputs in (-1, 1)
    x_out = tf.divide(x_atanh_2, 1.55) # atanh(1.0/1.1)

    y = tf.identity(x_out, name='dnn_out') # only the active x_out reaches the output

    sess=tf.Session()
    sess.run(tf.global_variables_initializer())

    graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

    print("image_process.pb generated, please use \
    path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

    output = sess.run(y, feed_dict={x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))
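
    As a quick sanity check, the TensorFlow reference written by the script can be compared against the native backend's result (a sketch; out_native.jpg is a hypothetical filename for the output produced from the converted image_process.model, not something named in the commit):

    import imageio
    import numpy as np

    # reference output written by the TensorFlow script above
    ref = imageio.imread('out.jpg').astype(np.float32) / 255.0
    # hypothetical output produced by FFmpeg's native backend from image_process.model
    nat = imageio.imread('out_native.jpg').astype(np.float32) / 255.0
    # the two should agree closely if the atanh layer is implemented correctly
    print('max abs diff:', float(np.abs(ref - nat).max()))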

    Signed-off-by: Ting Fu <ting.fu@intel.com>

    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.h
    • [DH] tools/python/convert_from_tensorflow.py
    • [DH] tools/python/convert_header.py