
Other articles (77)
-
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
images: png, gif, jpg, bmp and more
audio: MP3, Ogg, Wav and more
video: AVI, MP4, OGV, mpg, mov, wmv and more
text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
User profiles
12 April 2011. Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)
-
No talk of markets, cloud, etc.
10 April 2011. The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish freely around web 2.0 and in the companies that make a living from it.
You are therefore invited to avoid using terms such as "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creative work on the Internet and lets authors keep as much autonomy as possible.
No "Gold or Premium contract" is therefore planned, no (...)
On other sites (9797)
-
arm: vp9itxfm: Skip empty slices in the first pass of idct_idct 16x16 and 32x32
9 January 2017, by Martin Storsjö
This work is sponsored by, and copyright, Google.
Previously all subpartitions except the eob=1 (DC) case ran with the same runtime:

                                        Cortex A7       A8       A9      A53
vp9_inv_dct_dct_16x16_sub16_add_neon:      3188.1   2435.4   2499.0   1969.0
vp9_inv_dct_dct_32x32_sub32_add_neon:     18531.7  16582.3  14207.6  12000.3

By skipping individual 4x16 or 4x32 pixel slices in the first pass, we reduce the runtime of these functions like this:

                                        Cortex A7       A8       A9      A53
vp9_inv_dct_dct_16x16_sub1_add_neon:        274.6    189.5    211.7    235.8
vp9_inv_dct_dct_16x16_sub2_add_neon:       2064.0   1534.8   1719.4   1248.7
vp9_inv_dct_dct_16x16_sub4_add_neon:       2135.0   1477.2   1736.3   1249.5
vp9_inv_dct_dct_16x16_sub8_add_neon:       2446.7   1828.7   1993.6   1494.7
vp9_inv_dct_dct_16x16_sub12_add_neon:      2832.4   2118.3   2266.5   1735.1
vp9_inv_dct_dct_16x16_sub16_add_neon:      3211.7   2475.3   2523.5   1983.1
vp9_inv_dct_dct_32x32_sub1_add_neon:        756.2    456.7    862.0    553.9
vp9_inv_dct_dct_32x32_sub2_add_neon:      10682.2   8190.4   8539.2   6762.5
vp9_inv_dct_dct_32x32_sub4_add_neon:      10813.5   8014.9   8518.3   6762.8
vp9_inv_dct_dct_32x32_sub8_add_neon:      11859.6   9313.0   9347.4   7514.5
vp9_inv_dct_dct_32x32_sub12_add_neon:     12946.6  10752.4  10192.2   8280.2
vp9_inv_dct_dct_32x32_sub16_add_neon:     14074.6  11946.5  11001.4   9008.6
vp9_inv_dct_dct_32x32_sub20_add_neon:     15269.9  13662.7  11816.1   9762.6
vp9_inv_dct_dct_32x32_sub24_add_neon:     16327.9  14940.1  12626.7  10516.0
vp9_inv_dct_dct_32x32_sub28_add_neon:     17462.7  15776.1  13446.2  11264.7
vp9_inv_dct_dct_32x32_sub32_add_neon:     18575.5  17157.0  14249.3  12015.1

I.e. in general a very minor overhead for the full subpartition case due to the additional loads and cmps, but a significant speedup for the cases when we only need to process a small part of the actual input data.

In common VP9 content in a few inspected clips, 70-90% of the non-dc-only 16x16 and 32x32 IDCTs only have nonzero coefficients in the upper left 8x8 or 16x16 subpartitions respectively.

This is cherrypicked from libav commit 9c8bc74c2b40537b0997f646c87c008042d788c2.

Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
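
The optimization rests on the transform being separable: if an input slice contains only zero coefficients, its first-pass output is all zero, so the work for that slice can be skipped outright. Below is a rough, self-contained NumPy sketch of that observation only (not the NEON code; a floating-point DCT stands in for VP9's integer transform, and idct_1d / idct2_skip_empty_slices are made-up illustration names):

import numpy as np

def idct_1d(v):
    # Orthonormal inverse DCT-II of a length-N vector, a stand-in for VP9's
    # integer transform (the real code is hand-written NEON assembly).
    n = len(v)
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[:, None] + 1) * k[None, :] / (2 * n))
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)
    return basis @ (scale * v)

def idct2_skip_empty_slices(coeffs, sub):
    # First pass: 1-D transform down each column. When only the top-left
    # sub x sub corner of `coeffs` is nonzero, every column >= sub is all
    # zero, so its transform is all zero and the work can be skipped.
    n = coeffs.shape[0]
    tmp = np.zeros((n, n))
    for col in range(min(sub, n)):
        tmp[:, col] = idct_1d(coeffs[:, col])
    # Second pass: 1-D transform along every row of the intermediate block.
    return np.array([idct_1d(tmp[row, :]) for row in range(n)])

# Quick self-check: skipping empty slices matches the full transform when
# only an 8x8 corner of a 32x32 coefficient block is nonzero.
rng = np.random.default_rng(0)
coeffs = np.zeros((32, 32))
coeffs[:8, :8] = rng.integers(-100, 100, (8, 8))
assert np.allclose(idct2_skip_empty_slices(coeffs, 8),
                   idct2_skip_empty_slices(coeffs, 32))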
-
Checkasm: assembly testing and benchmarking tool
11 July 2015, by Henrik Gramner
It provides the following features:
* verify correctness by comparing output to the C version.
* detect failure to save and restore clobbered callee-saved registers.
* detect 32-bit parameters being used as if they were 64-bit in x86-64
(the upper halves are not guaranteed to be zero - but in practice
they very often are, which makes those bugs hard to spot otherwise).
* easy benchmarking.

Compile by running 'make checkasm'.
Execute by running 'tests/checkasm/checkasm'.

Optional arguments are '--bench' to run benchmarks for all functions, '--bench=<pattern>' to run benchmarks for all functions that start with <pattern>, and '<integer>' to seed the PRNG for reproducible results.

Contains unit tests for most h264pred functions to get started; more tests can be added afterwards using those as a reference.

Loosely based on code from x264. Currently only supports x86 and x86-64, but additional architectures shouldn't be too much of an obstacle to add.

Note that functions with floating point parameters or floating point return values are not supported. Some compiler-specific features or preprocessor hacks would likely be required to add support for that.

Signed-off-by: Janne Grunau <janne-libav@jannau.net>
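
The workflow the tool automates is easy to mimic: feed the reference and the optimized implementation the same seeded random input, compare their outputs, then time both. The following standalone Python sketch mirrors that flow under those assumptions (the real checkasm is written in C and drives actual assembly; ref_sad and fast_sad are invented stand-ins):

import random
import timeit

def ref_sad(a, b):
    # Reference implementation: sum of absolute differences between blocks.
    return sum(abs(x - y) for x, y in zip(a, b))

def fast_sad(a, b):
    # Stand-in for a hand-optimized version of the same function.
    total = 0
    for x, y in zip(a, b):
        total += x - y if x >= y else y - x
    return total

def checkasm_style_test(seed=1234):
    rng = random.Random(seed)  # seeded PRNG so failures are reproducible
    for _ in range(100):
        a = [rng.randrange(256) for _ in range(256)]
        b = [rng.randrange(256) for _ in range(256)]
        assert fast_sad(a, b) == ref_sad(a, b), "output mismatch"
    # Crude timing comparison, analogous to checkasm's --bench mode.
    print("ref :", timeit.timeit(lambda: ref_sad(a, b), number=2000))
    print("fast:", timeit.timeit(lambda: fast_sad(a, b), number=2000))

checkasm_style_test()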
-
dnn_backend_native_layer_mathunary: add floor support
6 August 2020, by Mingyu Yin
It can be tested with the model generated with the Python script below:
import tensorflow as tf
import os
import numpy as np
import imageio
from tensorflow.python.framework import graph_util

name = 'floor'
pb_file_path = os.getcwd()
if not os.path.exists(pb_file_path + '/{}_savemodel/'.format(name)):
    os.mkdir(pb_file_path + '/{}_savemodel/'.format(name))

with tf.Session(graph=tf.Graph()) as sess:
    in_img = imageio.imread('detection.jpg')
    in_img = in_img.astype(np.float32)
    in_data = in_img[np.newaxis, :]
    input_x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    y_ = tf.math.floor(input_x * 255) / 255
    y = tf.identity(y_, name='dnn_out')
    sess.run(tf.global_variables_initializer())
    # freeze the graph so it can be saved and later converted to the native format
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])

    with tf.gfile.FastGFile(pb_file_path + '/{}_savemodel/model.pb'.format(name), mode='wb') as f:
        f.write(constant_graph.SerializeToString())

    print("model.pb generated, please in ffmpeg path use\n \n \
python tools/python/convert.py {}_savemodel/model.pb --outdir={}_savemodel/ \n \nto generate model.model\n".format(name, name))

    # run the graph once and save the result for visual comparison
    output = sess.run(y, feed_dict={input_x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))

    print("To verify, please ffmpeg path use\n \n \
./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow -f framemd5 {}_savemodel/tensorflow_out.md5\n \
or\n \
./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow {}_savemodel/out_tensorflow.jpg\n \nto generate output result of tensorflow model\n".format(name, name, name, name))

    print("To verify, please ffmpeg path use\n \n \
./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native -f framemd5 {}_savemodel/native_out.md5\n \
or \n \
./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native {}_savemodel/out_native.jpg\n \nto generate output result of native model\n".format(name, name, name, name))

Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>
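
For context, the operation the new native-backend layer has to reproduce is a plain elementwise floor, matching tf.math.floor in the script above; a minimal NumPy sketch of the expected behaviour (not the actual C implementation) is:

import numpy as np

def mathunary_floor(x):
    # Elementwise floor over the input tensor, as the test model applies it.
    return np.floor(np.asarray(x, dtype=np.float32))

print(mathunary_floor([0.2, 1.7, -0.5]))  # -> [ 0.  1. -1.]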