
Other articles (53)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome difficulties mainly due to installing server-side software dependencies, an "all-in-one" installation script written in bash was created to make this step easier on a server running a compatible Linux distribution.
    You must have SSH access to your server and a root account to use it, since the script installs the dependencies. Contact your provider if you do not have these.
    Documentation on using this installation script is available here.
    The code of this (...)

  • Request to create a channel

    12 March 2010

    Depending on how the platform is configured, the user may have two different ways to request the creation of a channel. The first is at registration time; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the future user must fill in a series of form fields that first of all give the administrators information about (...)

On other sites (5385)

  • dnn/native: add native support for minimum

    26 April 2020, by Guo, Yejun
    dnn/native: add native support for minimum
    

    it can be tested with a model file generated by the Python script below:
    import tensorflow as tf
    import numpy as np
    import imageio

    in_img = imageio.imread('input.jpg')
    in_img = in_img.astype(np.float32)/255.0
    in_data = in_img[np.newaxis, :]

    x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
    x1 = tf.minimum(0.7, x)
    x2 = tf.maximum(x1, 0.4)
    y = tf.identity(x2, name='dnn_out')

    sess=tf.Session()
    sess.run(tf.global_variables_initializer())

    graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
    tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)

    print("image_process.pb generated, please use \
    path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

    output = sess.run(y, feed_dict={x: in_data})
    imageio.imsave("out.jpg", np.squeeze(output))

    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>

    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathbinary.c
    • [DH] libavfilter/dnn/dnn_backend_native_layer_mathbinary.h
    • [DH] tools/python/convert_from_tensorflow.py
    • [DH] tools/python/convert_header.py
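The minimum/maximum pair in the test script above simply clamps each pixel to the range [0.4, 0.7]. As a sanity check of what the generated model should compute, here is a small NumPy sketch (illustrative only, not part of the commit):

```python
import numpy as np

# Simulated normalized image data in [0, 1], shaped like the script's dnn_in.
rng = np.random.default_rng(0)
in_data = rng.random((1, 4, 4, 3), dtype=np.float32)

# tf.minimum(0.7, x) followed by tf.maximum(., 0.4) clamps values to
# [0.4, 0.7], which is exactly np.clip(x, 0.4, 0.7).
step = np.maximum(np.minimum(0.7, in_data), 0.4)
clipped = np.clip(in_data, 0.4, 0.7)

assert np.array_equal(step, clipped)
assert step.min() >= np.float32(0.4) and step.max() <= np.float32(0.7)
print("clamp matches np.clip")
```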
  • 【carrierwave+ffmpeg】can't upload the video with sound

    10 April 2020, by ken akita

    I want to make a "Fake Instagram" Rails application. I made the uploader and I can upload the video (mp4), but there is no sound. I can hear the sound when I play the file on my desktop, but not in my application: the player's speaker icon shows it as muted.


    The code is below. I guess something is wrong with image_uploader.rb or show.html.erb. Note that I first made the ImageUploader to upload images, then added a video posting feature on top of it. Both uploads work, but the video plays without sound. Thank you for your advice!

    &#xA;&#xA;

    *image_uploader.rb

    require 'streamio-ffmpeg'

    class ImageUploader < CarrierWave::Uploader::Base
      include CarrierWave::MiniMagick
      process resize_to_fit: [230, 183]

      storage :file

      def store_dir
        "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
      end

      version :thumb do
        process resize_to_fit: [230, 183]
      end

      version :screenshot do
        process :screenshot
        def full_filename(for_file = model.logo.file)
          "screenshot.jpg"
        end
      end

      def screenshot
        tmpfile = File.join(File.dirname(current_path), "tmpfile")

        File.rename(current_path, tmpfile)

        movie = FFMPEG::Movie.new(tmpfile)
        movie.screenshot(current_path + ".jpg", { resolution: '230x183' }, preserve_aspect_ratio: :width)
        File.rename(current_path + ".jpg", current_path)

        File.delete(tmpfile)
      end

      def extension_whitelist
        %w(jpg jpeg gif png mp4)
      end
    end


    *show.html.erb

    <h1>Stroll</h1>
    <p><%= notice %></p>
    <p></p>
    <% if @stroll[:image] == nil %>
      <%= image_tag "03summer_ver8-l.jpg", width: "230", height: "183" %>
    <% elsif @stroll.image.file.content_type.include?('video/') %>
      <%= link_to @stroll.image_url.to_s do %>
        <%= image_tag(@stroll.image_url(:screenshot).to_s, id: "image", :alt => "screenshot") %>
      <% end %>
    <% else %>
      <%= image_tag(@stroll.image.url) if @stroll.image && @stroll.image.url %>
    <% end %>
    <p>user <%= @stroll.user.name %></p>
    <p>『<%= @stroll.title %>』</p>
    <p>content: <%= @stroll.content %></p>
    <p>tag: <%= @stroll.tag %></p>
    <p>comment: </p>
    <div>
      <%= render partial: 'comments/index', locals: { comments: @comments, stroll: @stroll } %>
    </div>
    <% if user_signed_in? && @stroll.user_id != current_user.id %>
      <%= render partial: 'comments/form', locals: { comment: @comment, stroll: @stroll } %>
    <% end %>
    <% if current_user %>
      <% unless @stroll.user_id == current_user.id %>
        <% if @favorite.present? %>
          <%= link_to 'cancel', favorite_path(id: @favorite.id), method: :delete, class: 'btn btn-info' %>
        <% else %>
          <%= link_to 'Like!', favorites_path(stroll_id: @stroll.id), method: :post, class: 'btn btn-warning' %>
        <% end %>
      <% end %>
    <% end %>
    &emsp;
    <%= link_to "Index", strolls_path %>

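One way to narrow down the missing-audio problem above is to verify that the stored mp4 still contains an audio stream at all, since re-encoding or thumbnail processing can silently drop it. A hedged Python sketch built on ffprobe's JSON output; the has_audio and probe_file helper names are made up for illustration:

```python
import json
import subprocess

def has_audio(ffprobe_json: str) -> bool:
    """Return True if ffprobe's -show_streams JSON lists an audio stream."""
    streams = json.loads(ffprobe_json).get("streams", [])
    return any(s.get("codec_type") == "audio" for s in streams)

def probe_file(path: str) -> str:
    """Run ffprobe on a file and return its JSON stream listing
    (requires ffmpeg/ffprobe to be installed)."""
    return subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout

# Example against a canned ffprobe result (video-only file -> no audio):
sample = '{"streams": [{"codec_type": "video", "codec_name": "h264"}]}'
print(has_audio(sample))  # False
```

If has_audio(probe_file(...)) is False for the stored upload, the audio is lost during processing rather than during playback in the view.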

  • How to create a spectrogram image from an audio file in Python, just like FFmpeg does?

    2 May 2020, by hamandishe Mk

    My code:


    import matplotlib.pyplot as plt
    from matplotlib.pyplot import specgram
    import librosa
    import librosa.display
    import numpy as np
    import io
    from PIL import Image

    samples, sample_rate = librosa.load('thabo.wav')
    fig = plt.figure(figsize=[4, 4])
    ax = fig.add_subplot(111)
    ax.axes.get_xaxis().set_visible(False)
    ax.axes.get_yaxis().set_visible(False)
    ax.set_frame_on(False)
    S = librosa.feature.melspectrogram(y=samples, sr=sample_rate)
    librosa.display.specshow(librosa.power_to_db(S, ref=np.max))
    buf = io.BytesIO()
    plt.savefig(buf, bbox_inches='tight', pad_inches=0)

    # plt.close('all')
    buf.seek(0)
    im = Image.open(buf)
    # im = Image.open(buf).convert('L')
    im.show()
    buf.close()


    Spectrogram produced


    (image omitted)


    Using FFmpeg


    ffmpeg -i thabo.wav -lavfi showspectrumpic=s=224x224:mode=separate:legend=disabled spectrogram.png


    Spectrogram produced


    (image omitted)


    Please help: I want a spectrogram exactly like the one produced by FFmpeg, for offline recognition with a speech recognition model exported from Google's Teachable Machine.

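One likely reason the two images differ is that FFmpeg's showspectrumpic renders a linear-frequency STFT magnitude spectrogram, while the librosa code above produces a mel-scaled one. A minimal NumPy-only sketch of a plain STFT-in-dB spectrogram (the window and hop sizes here are assumptions, not ffmpeg's exact defaults):

```python
import numpy as np

def stft_db(samples, n_fft=512, hop=128):
    """Magnitude spectrogram in dB: Hann-windowed frames -> rFFT -> 20*log10."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(samples) - n_fft) // hop
    frames = np.stack([samples[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))     # (frames, n_fft//2 + 1)
    db = 20 * np.log10(np.maximum(mag, 1e-10))    # clamp to avoid log(0)
    return db.T                                   # (freq, time), image-like

# Demo on a synthetic 440 Hz tone (one second at 8 kHz) instead of thabo.wav:
sr = 8000
t = np.arange(sr) / sr
spec = stft_db(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (257, 59)
```

Rendering this array with a grayscale colormap at a fixed size (e.g. 224x224) gets much closer to the showspectrumpic output than a mel spectrogram, though matching it pixel-for-pixel would also require matching ffmpeg's window, overlap, and color scale.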