
Other articles (89)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Improvements to the base version

    13 September 2013

    A nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. Compare the two following images.
    To use it, enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin manages sites that publish documents of all types.
    It creates "media" items, namely: a "media" item is a SPIP article created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;

On other sites (15979)

  • Revision eee904c9b9: Adaptive mode search scheduling

    18 September 2014, by Jingning Han

    Changed Paths:
     Modify /vp9/encoder/vp9_encoder.c
     Modify /vp9/encoder/vp9_rd.h
     Modify /vp9/encoder/vp9_rdopt.c
     Modify /vp9/encoder/vp9_speed_features.c
     Modify /vp9/encoder/vp9_speed_features.h

    Adaptive mode search scheduling

    This commit enables an adaptive mode search order scheduling scheme
    in the rate-distortion optimization. It changes the compression
    performance by -0.433% and -0.420% for derf and stdhd respectively.
    It provides speed improvement for speed 3:

    bus CIF 1000 kbps
    24590 b/f, 35.513 dB, 7864 ms ->
    24696 b/f, 35.491 dB, 7408 ms (6% speed-up)

    stockholm 720p 1000 kbps
    8983 b/f, 35.078 dB, 65698 ms ->
    8962 b/f, 35.054 dB, 60298 ms (8%)

    old_town_cross 720p 1000 kbps
    11804 b/f, 35.666 dB, 62492 ms ->
    11778 b/f, 35.609 dB, 56040 ms (10%)

    blue_sky 1080p 1500 kbps
    57173 b/f, 36.179 dB, 77879 ms ->
    57199 b/f, 36.131 dB, 69821 ms (10%)

    pedestrian_area 1080p 2000 kbps
    74241 b/f, 41.105 dB, 144031 ms ->
    74271 b/f, 41.091 dB, 133614 ms (8%)

    Change-Id: Iaad28cbc99399030fc5f9951eb5aa7fa633f320e

  • vaapi_encode_mjpeg: fix bad component id bug

    7 June 2019, by U. Artie Eoff

    The compound literals assigned to "components"
    only exist within the scope of the if/else
    block (thanks Mark Thompson for the better
    explanation).

    Thus, after this if/else block, "components"
    ends up pointing to an arbitrary/undefined
    array. With some compilers and depending on
    optimization settings, these arbitrary values
    may end up being the same value (i.e. 0 with
    GNU GCC 9.x). Unfortunately, the GNU GCC
    compiler, at least, never prints any warnings
    about this.

    This patch fixes this issue by assigning the
    constant arrays to local variables at function
    scope and then pointing "components" to those
    as necessary.

    Fixes #7915

    Signed-off-by: U. Artie Eoff <ullysses.a.eoff@intel.com>

    • [DH] libavcodec/vaapi_encode_mjpeg.c
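
    The lifetime bug described in this commit can be sketched in plain C. The names below are illustrative, not the actual FFmpeg code; the broken pre-patch shape is kept in a comment because executing it would be undefined behaviour:

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Simplified sketch of the compound-literal lifetime bug. */
    static void get_components(int rgb, uint8_t out[3])
    {
        const uint8_t *components;

        /* Pre-patch shape (as a comment, since it is undefined
           behaviour): each compound literal's lifetime ends with its
           enclosing block, so "components" dangles right after the
           if/else:
           if (rgb)
               components = (uint8_t [3]){ 'R', 'G', 'B' };
           else
               components = (uint8_t [3]){ 1, 2, 3 };
        */

        /* Patched shape: arrays declared at function scope stay
           alive until the function returns, so the pointer remains
           valid wherever it is used below. */
        const uint8_t components_rgb[3]  = { 'R', 'G', 'B' };
        const uint8_t components_jfif[3] = { 1, 2, 3 };
        components = rgb ? components_rgb : components_jfif;

        for (int i = 0; i < 3; i++)
            out[i] = components[i];
    }
    ```

    Because the dangling pointer often still yields plausible values under optimization, the broken version can appear to work, which is why no compiler warning was seen.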
  • dnn: add openvino as one of dnn backend

    25 May 2020, by Guo, Yejun

    OpenVINO is a deep learning deployment toolkit, available at
    https://github.com/openvinotoolkit/openvino. It supports CPU, GPU
    and heterogeneous plugins to accelerate deep learning inference.

    Please refer to https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md
    to build openvino (the C library is built at the same time). Add the
    cmake option -DENABLE_MKL_DNN=ON to enable the CPU path. With default
    options on my system, the header files and libraries are installed to
    /usr/local/deployment_tools/inference_engine/.

    To build FFmpeg with openvino, taking my system as an example, run:
    $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/deployment_tools/inference_engine/lib/intel64/:/usr/local/deployment_tools/inference_engine/external/tbb/lib/
    $ ../ffmpeg/configure --enable-libopenvino --extra-cflags=-I/usr/local/deployment_tools/inference_engine/include/ --extra-ldflags=-L/usr/local/deployment_tools/inference_engine/lib/intel64
    $ make

    Here are the features provided by the OpenVINO inference engine:
    - support for more DNN model formats
    It supports TensorFlow, Caffe, ONNX, MXNet and Kaldi by converting them
    into the OpenVINO format with a python script. A torch model
    can first be converted into ONNX and then into the OpenVINO format.

    See the script at https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer/mo.py,
    which also performs some optimization at the model level.

    - optimization at the inference stage
    It optimizes for x86 CPUs with SSE, AVX, etc.

    It also optimizes for Intel GPUs via OpenCL
    (only Intel GPUs are supported, because Intel's OpenCL extensions are used for the optimization).

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
    Signed-off-by: Pedro Arthur <bygrandao@gmail.com>

    • [DH] configure
    • [DH] libavfilter/dnn/Makefile
    • [DH] libavfilter/dnn/dnn_backend_openvino.c
    • [DH] libavfilter/dnn/dnn_backend_openvino.h
    • [DH] libavfilter/dnn/dnn_interface.c
    • [DH] libavfilter/dnn_interface.h
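
    The commit above adds OpenVINO as one selectable backend behind a common interface. A minimal sketch of such a pluggable backend table is shown below; all names are invented for illustration and the real FFmpeg dnn_interface differs:

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical backend selector: each backend fills a vtable of
       function pointers, and callers choose one by enum, mirroring
       how a filter would pick between native, tensorflow and
       openvino paths. */
    typedef enum { BACKEND_NATIVE, BACKEND_TF, BACKEND_OPENVINO } BackendType;

    typedef struct DNNModule {
        const char *name;
        int (*load_model)(const char *path);                  /* 0 on success */
        int (*execute)(const float *in, float *out, size_t n);
    } DNNModule;

    /* Stub OpenVINO backend: load is a no-op, execute copies input. */
    static int ov_load(const char *path) { (void)path; return 0; }
    static int ov_exec(const float *in, float *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = in[i];
        return 0;
    }

    static const DNNModule openvino_module = { "openvino", ov_load, ov_exec };

    static const DNNModule *get_dnn_module(BackendType t)
    {
        switch (t) {
        case BACKEND_OPENVINO: return &openvino_module;
        default:               return NULL; /* other backends omitted */
        }
    }
    ```

    Keeping the per-backend code behind one vtable is what lets a new backend (here, OpenVINO) be added by touching only configure, the Makefile, and one new source file, as the changed-file list above shows.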