
Media (91)

Other articles (7)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (see its documentation for more details).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the skeleton; a page for configuring the site's home page; a page for configuring the sections.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and specific features (...)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (2912)

  • dnn_backend_native_layer_mathunary: add floor support

    6 August 2020, by Mingyu Yin

    It can be tested with a model generated by the Python script below:

    import tensorflow as tf
    import os
    import numpy as np
    import imageio
    from tensorflow.python.framework import graph_util

    name = 'floor'

    pb_file_path = os.getcwd()
    if not os.path.exists(pb_file_path + '/{}_savemodel/'.format(name)):
        os.mkdir(pb_file_path + '/{}_savemodel/'.format(name))

    with tf.Session(graph=tf.Graph()) as sess:
        # Read the test image and add a batch dimension.
        in_img = imageio.imread('detection.jpg')
        in_img = in_img.astype(np.float32)
        in_data = in_img[np.newaxis, :]

        # One-op graph: quantize each pixel value to 1/255 steps with floor.
        input_x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
        y_ = tf.math.floor(input_x * 255) / 255
        y = tf.identity(y_, name='dnn_out')

        sess.run(tf.global_variables_initializer())
        # Freeze the graph so it can be written out as a standalone model.pb.
        constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])

        with tf.gfile.FastGFile(pb_file_path + '/{}_savemodel/model.pb'.format(name), mode='wb') as f:
            f.write(constant_graph.SerializeToString())

        print("model.pb generated, please in ffmpeg path use\n\n"
              "python tools/python/convert.py {}_savemodel/model.pb --outdir={}_savemodel/\n\n"
              "to generate model.model\n".format(name, name))

        output = sess.run(y, feed_dict={input_x: in_data})
        imageio.imsave("out.jpg", np.squeeze(output))

        print("To verify, please in ffmpeg path use\n\n"
              "./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow -f framemd5 {}_savemodel/tensorflow_out.md5\n"
              "or\n"
              "./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.pb:input=dnn_in:output=dnn_out:dnn_backend=tensorflow {}_savemodel/out_tensorflow.jpg\n\n"
              "to generate output result of tensorflow model\n".format(name, name, name, name))

        print("To verify, please in ffmpeg path use\n\n"
              "./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native -f framemd5 {}_savemodel/native_out.md5\n"
              "or\n"
              "./ffmpeg -i detection.jpg -vf format=rgb24,dnn_processing=model={}_savemodel/model.model:input=dnn_in:output=dnn_out:dnn_backend=native {}_savemodel/out_native.jpg\n\n"
              "to generate output result of native model\n".format(name, name, name, name))

    Signed-off-by: Mingyu Yin <mingyu.yin@intel.com>

    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathunary.h
    • [DH] tests/dnn/dnn-layer-mathunary-test.c
    • [DH] tools/python/convert_from_tensorflow.py
    • [DH] tools/python/convert_header.py
  • Simulate packet loss in internet video streaming

    31 December 2015, by james

    I have encoded H.264 video using FFmpeg. I want to apply a packet-loss model, such as the Gilbert-Elliott model, to depict video transmission over an unreliable network. In this thread TV noise is simulated, but noise on the internet is different. So is there a way to simulate packet loss/noise for internet video streaming using FFmpeg or MATLAB?
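
    The Gilbert-Elliott model itself is easy to script outside FFmpeg: a two-state Markov chain (good/bad channel) decides for each packet whether it is dropped. Below is a minimal Python sketch; the transition and loss probabilities, the 188-byte MPEG-TS packet size, and the file names are illustrative assumptions, not measured values.

    import random

    def gilbert_elliott_drops(n_packets, p_gb=0.02, p_bg=0.3,
                              loss_good=0.001, loss_bad=0.5, seed=0):
        # Two-state Markov chain: a 'good' channel with rare losses and a
        # 'bad' channel with bursty losses. Returns one bool per packet.
        rng = random.Random(seed)
        bad = False
        drops = []
        for _ in range(n_packets):
            # Transition between states, then draw a loss in the current state.
            if bad:
                bad = rng.random() >= p_bg   # stay bad unless we recover
            else:
                bad = rng.random() < p_gb    # fall into the bad state
            drops.append(rng.random() < (loss_bad if bad else loss_good))
        return drops

    # Apply the pattern to a file read as fixed-size packets (188 bytes is
    # the MPEG-TS packet size, an assumption about the container) and write
    # only the surviving packets back out.
    PKT = 188
    with open('input.ts', 'rb') as f:
        data = f.read()
    packets = [data[i:i + PKT] for i in range(0, len(data), PKT)]
    with open('lossy.ts', 'wb') as out:
        for pkt, dropped in zip(packets, gilbert_elliott_drops(len(packets))):
            if not dropped:
                out.write(pkt)

    The damaged file (lossy.ts) can then be decoded with ffmpeg or ffplay to inspect the resulting artifacts.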

  • passing additional values to s3 event notification for lambda consumption

    8 September 2017, by user1790300

    I have to write code in react-native that allows a user to upload videos to Amazon S3 to be transcoded for consumption by various devices. For the processing after the upload occurs, I am reviewing two approaches:

    1) I can use Lambda with ffmpeg to handle the transcoding immediately after the upload occurs (my fear here is the amount of time required to transcode the videos and the effect on pricing if it takes a considerable amount of time).

    2) I can have S3 send an SNS message to a REST API after the object-created event occurs, and have the REST API generate a RabbitMQ message that a worker will process, performing the transcoding with ffmpeg.

    Option 1) seems preferable from a completion-time perspective. How concerned should I be about using 1), given how long video transcoding might take, as opposed to option 2)?

    Also, regardless, I need a way to pass additional parameters to Lambda, or along with the SNS message, that would let me associate the uploaded video with the account of the user who uploaded it. Is there a way to pass additional text-based values to S3 that are passed along to Lambda or SNS when the upload completes? As a caveat, I plan to upload the video directly to S3 using the REST layer (see: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html#RESTObjectPUT-responses-examples).
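
    One common way to carry such values (a sketch, not the only answer): attach user-defined metadata to the S3 PUT as x-amz-meta-* headers; S3 stores it with the object, and the Lambda triggered by the event notification can read it back with head_object. The user-id field name and the return shape below are hypothetical.

    import boto3
    from urllib.parse import unquote_plus

    s3 = boto3.client('s3')

    def handler(event, context):
        # The S3 event notification carries the bucket and (URL-encoded) key.
        record = event['Records'][0]
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])

        # User-defined metadata set at upload time (x-amz-meta-user-id: ...)
        # comes back lower-cased, with the 'x-amz-meta-' prefix stripped.
        head = s3.head_object(Bucket=bucket, Key=key)
        user_id = head['Metadata'].get('user-id')  # hypothetical field name

        # ... start or enqueue transcoding for (bucket, key, user_id) ...
        return {'bucket': bucket, 'key': key, 'user_id': user_id}

    On the upload side this is just one extra header on the REST PUT request (x-amz-meta-user-id: 1234), so it works with direct-to-S3 uploads. Note that the S3 event forwarded through SNS does not include object metadata, which is why the consumer fetches it with head_object.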