Advanced search

Media (5)

Keyword: - Tags -/open film making

Other articles (77)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm relies on running, at regular intervals, several repetitive tasks known as Cron tasks.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances in the mutualisation on a regular basis. Combined with a system Cron on the central site of the mutualisation, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)

  • Authorizations overridden by plugins

    27 April 2010

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or later. If needed, contact your MédiaSpip administrator to find out.

On other sites (7990)

  • Create 1 RTSP stream from 3 RTSP streams on Raspberry Pi 3

    29 May 2020, by user3260912

    So this is an interesting question, and I was wondering if anyone might have insights into how I could do this. I currently have 6 IP cameras and run a Java process that opens ffmpeg to rip the RTSP streams from those 6 IP cameras, save an image to RAM, and then use ImageMagick to convert those files to a collage JPG image so I have all IP cameras in one image. That file then updates as rapidly as possible using space in /dev/shm - in reality, about 6 FPS. But it uses 45-50% CPU on 6 cores.
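
    For reference, a minimal Python sketch of that grab-and-montage loop (the camera URLs and file paths here are placeholders, and the exact ffmpeg flags will need tuning for real cameras):

    import subprocess

    # Placeholder RTSP endpoints; substitute the cameras' real URLs.
    CAMS = ["rtsp://192.168.1.%d/stream1" % (10 + i) for i in range(6)]

    # One long-running ffmpeg per camera: keep overwriting a single JPEG
    # in RAM (-update 1 with the image2 muxer) at roughly 6 fps.
    grabbers = [
        subprocess.Popen([
            "ffmpeg", "-nostdin", "-loglevel", "error",
            "-rtsp_transport", "tcp", "-i", url,
            "-vf", "fps=6", "-update", "1", "-y", "/dev/shm/cam%d.jpg" % i,
        ])
        for i, url in enumerate(CAMS)
    ]

    # ImageMagick: tile the six frames into one collage (rerun in a loop
    # to refresh the collage as the per-camera JPEGs are rewritten).
    subprocess.run(
        ["montage"] + ["/dev/shm/cam%d.jpg" % i for i in range(6)]
        + ["-tile", "3x2", "-geometry", "+0+0", "/dev/shm/collage.jpg"],
        check=True,
    )
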
    I'm looking for a way to take some CPU load off my main computer, though. I have 2 Raspberry Pi Model 3Bs and am thinking I could put them to good use. Not sure how the performance would be, but I'm willing to test this.

    What I want to do is this:

    1. Use ffmpeg to pull down images from 3 of the IP camera RTSP streams on each Raspberry Pi into /dev/shm
    2. Using ImageMagick, montage the temp images pulled, into /dev/shm
    3. Create an RTSP stream on each Raspberry Pi from that montaged image in /dev/shm
    4. Use my desktop to pull down the RTSP streams of the collaged images and collage those into the same format I use today (using only 2 RTSP stream threads, instead of 6, to do this)

    Is there a way to set ImageMagick's output image format to mjpeg2, or to have ffmpeg create an RTSP stream off the rapidly updating JPEG image file?
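
    For the streaming step, a commonly suggested sketch (untested here) is to loop-read the constantly rewritten JPEG with ffmpeg's image2 demuxer and publish it as H.264 over RTSP. This assumes an external RTSP server such as MediaMTX is already listening on rtsp://localhost:8554, and whether each loop iteration picks up the updated file contents should be verified on the Pi's ffmpeg build:

    import subprocess

    # Re-read the constantly rewritten collage JPEG at ~6 fps and publish
    # it over RTSP. Assumes an RTSP server (e.g. MediaMTX) is listening on
    # rtsp://localhost:8554; the stream path "collage" is arbitrary.
    subprocess.run([
        "ffmpeg", "-re",
        "-f", "image2", "-loop", "1", "-framerate", "6",
        "-i", "/dev/shm/collage.jpg",
        "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
        "-pix_fmt", "yuv420p",
        "-f", "rtsp", "rtsp://localhost:8554/collage",
    ], check=True)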

  • dnn/native: add native support for divide

    11 April 2020, by Guo, Yejun
    dnn/native: add native support for divide
    

    it can be tested with a model file generated with the Python script below:
    import tensorflow as tf
    import numpy as np
    import imageio

    in_img = imageio.imread('input.jpg')
    in_img = in_img.astype(np.float32)/255.0
    in_data = in_img[np.newaxis, :]

    x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    z1 = 2 / x
    z2 = 1 / z1
    z3 = z2 / 0.25 + 0.3
    z4 = z3 - x * 1.5 - 0.3
    y = tf.identity(z4, name='dnn_out')

    sess=tf.Session()
    sess.run(tf.global_variables_initializer())

    graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

    print("image_process.pb generated, please use \
    path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

    output = sess.run(y, feed_dict={x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>

    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathbinary.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathbinary.h
    • [DH] tools/python/convert_from_tensorflow.py
    • [DH] tools/python/convert_header.py
  • How to create a spectrogram image from an audio file in Python just like FFmpeg does?

    2 May 2020, by hamandishe Mk

    My code:

    import matplotlib.pyplot as plt
    from matplotlib.pyplot import specgram
    import librosa
    import librosa.display
    import numpy as np
    import io
    from PIL import Image

    samples, sample_rate = librosa.load('thabo.wav')
    fig = plt.figure(figsize=[4, 4])
    ax = fig.add_subplot(111)
    ax.axes.get_xaxis().set_visible(False)
    ax.axes.get_yaxis().set_visible(False)
    ax.set_frame_on(False)
    S = librosa.feature.melspectrogram(y=samples, sr=sample_rate)
    librosa.display.specshow(librosa.power_to_db(S, ref=np.max))
    buf = io.BytesIO()
    plt.savefig(buf, bbox_inches='tight', pad_inches=0)

    # plt.close('all')
    buf.seek(0)
    im = Image.open(buf)
    # im = Image.open(buf).convert('L')
    im.show()
    buf.close()

    Spectrogram produced

    Using FFmpeg

    ffmpeg -i thabo.wav -lavfi showspectrumpic=s=224x224:mode=separate:legend=disabled spectrogram.png

    Spectrogram produced

    Please help, I want a spectrogram that is exactly the same as the one produced by FFmpeg, for use with a speech recognition model exported from Google's Teachable Machine (offline recognition).
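
    As a sketch of how to get closer to FFmpeg's picture (not a drop-in equivalent: showspectrumpic defaults to a linear frequency axis rather than the mel scale used above, and the color maps differ), one could render a plain STFT magnitude at exactly 224x224 pixels; the n_fft and hop_length values below are arbitrary starting points:

    import numpy as np
    import librosa
    import matplotlib.pyplot as plt

    samples, sample_rate = librosa.load('thabo.wav', sr=None)

    # Plain STFT magnitude in dB on a linear frequency axis, which is
    # closer to showspectrumpic's default than a mel spectrogram.
    S = np.abs(librosa.stft(samples, n_fft=1024, hop_length=256))
    S_db = librosa.amplitude_to_db(S, ref=np.max)

    # Render at exactly 224x224 pixels with no axes, frame, or padding.
    fig = plt.figure(figsize=(2.24, 2.24), dpi=100)
    ax = fig.add_axes([0, 0, 1, 1])
    ax.axis('off')
    ax.imshow(S_db, origin='lower', aspect='auto')
    fig.savefig('spectrogram_py.png', dpi=100)
    plt.close(fig)

    If a pixel-exact match is required, the most reliable route is simply to invoke the same ffmpeg command from the recognition pipeline and read spectrogram.png back.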
