
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (64)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, provided your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.
-
Adding notes and captions to images
7 February 2011
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When a media item of type "image" is added, a new button appears above the preview (...)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
On other sites (7887)
-
dnn_backend_native_layer_mathunary: add cos support
6 June 2020, by Ting Fu

dnn_backend_native_layer_mathunary: add cos support

It can be tested with the model generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio

# Read the test image and scale it to float32 values in [0, 1], with a batch dimension.
in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

# Build a tiny graph: scale the input, apply cos, and name the output node 'dnn_out'.
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.multiply(x, 1.5)
x2 = tf.cos(x1)
y = tf.identity(x2, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Freeze the graph and write it out as image_process.pb.
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

# Run the graph once so the result can be compared against FFmpeg's output.
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
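For reference, FFmpeg does not load the generated image_process.pb directly: it first has to be converted to the native model format with the converter named in the script, with an invocation along the lines of "python path_to_ffmpeg/tools/python/convert.py image_process.pb", which should produce image_process.model (the exact arguments may differ between checkouts). The resulting model can then be applied with something like "./ffmpeg -i input.jpeg -vf dnn_processing=dnn_backend=native:model=image_process.model:input=dnn_in:output=dnn_out cos_native.jpg", where dnn_in and dnn_out are the tensor names defined in the script above and cos_native.jpg is just an example output name.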
-
dnn_backend_native_layer_conv2d.c: Add multithread function
6 September 2020, by Xu Jun

dnn_backend_native_layer_conv2d.c: Add multithread function
Use pthread to multithread dnn_execute_layer_conv2d (a simplified sketch of the row-partitioning idea follows this entry).
It can be tested with the command "./ffmpeg_g -i input.png -vf \
format=yuvj420p,dnn_processing=dnn_backend=native:model= \
espcn.model:input=x:output=y:options=conv2d_threads=23 \
-y sr_native.jpg -benchmark"

before patch: utime=11.238s stime=0.005s rtime=11.248s
after patch:  utime=20.817s stime=0.047s rtime=1.051s
on my 3900X 12c24t @4.2GHz

Regarding the increase in utime: Hyper-Threading gives the CPU twice as many logical cores as physical cores, but its compute throughput improves by less than a factor of two, and utime sums the runtime of all logical cores. As a result, using a thread count close to the number of logical cores roughly doubles utime while reducing rtime by less than half on Hyper-Threading CPUs.

Signed-off-by: Xu Jun <xujunzz@sjtu.edu.cn>
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
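The patch follows the common row-partitioning pattern: the output plane is split into horizontal bands and each band is computed by its own pthread, which is why rtime drops sharply while utime (summed across all logical cores) does not. What follows is a minimal, self-contained sketch of that pattern, not the actual FFmpeg implementation; the thread_param struct, the worker function, the fixed NUM_THREADS and the trivial per-pixel kernel are illustrative assumptions standing in for the real convolution in dnn_execute_layer_conv2d.

/* Sketch only: illustrates splitting output rows across pthreads.
 * Not FFmpeg code; names and the trivial "kernel" are placeholders. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define HEIGHT 1080
#define WIDTH  1920
#define NUM_THREADS 4        /* would come from options=conv2d_threads=N */

typedef struct {
    const float *input;      /* input plane */
    float *output;           /* output plane */
    int row_start;           /* first output row handled by this thread */
    int row_end;             /* one past the last output row */
} thread_param;

/* Stand-in for the per-row work done inside the real conv2d layer. */
static void *worker(void *arg)
{
    thread_param *p = arg;
    for (int y = p->row_start; y < p->row_end; y++)
        for (int x = 0; x < WIDTH; x++)
            p->output[y * WIDTH + x] = p->input[y * WIDTH + x] * 0.5f;
    return NULL;
}

int main(void)
{
    float *in  = calloc(HEIGHT * WIDTH, sizeof(*in));
    float *out = calloc(HEIGHT * WIDTH, sizeof(*out));
    pthread_t tid[NUM_THREADS];
    thread_param params[NUM_THREADS];
    int rows_per_thread = (HEIGHT + NUM_THREADS - 1) / NUM_THREADS;

    /* Give each thread a contiguous band of output rows. */
    for (int i = 0; i < NUM_THREADS; i++) {
        params[i].input     = in;
        params[i].output    = out;
        params[i].row_start = i * rows_per_thread;
        params[i].row_end   = params[i].row_start + rows_per_thread;
        if (params[i].row_end > HEIGHT)
            params[i].row_end = HEIGHT;
        pthread_create(&tid[i], NULL, worker, &params[i]);
    }
    /* Wait for all bands to finish before using the output plane. */
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);

    printf("first output sample: %f\n", out[0]);
    free(in);
    free(out);
    return 0;
}

Compiled with -pthread, the sketch only demonstrates the scheduling pattern; the real layer also has to handle padding, strides and channel counts.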
-
dnn_backend_native_layer_mathunary: add sin support
6 June 2020, by Ting Fu

dnn_backend_native_layer_mathunary: add sin support

It can be tested with the model file generated with the python script below:
import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.multiply(x, 3.14)
x2 = tf.sin(x1)
y = tf.identity(x2, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>