
Advanced search
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (53)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Automated installation script of MediaSPIP
25 April 2011. To overcome difficulties stemming mainly from installing the server-side software dependencies, an "all-in-one" installation script written in bash was created to ease this step on a server running a compatible Linux distribution.
You must have SSH access to your server and a root account to use it; the script installs the dependencies. Contact your hosting provider if you do not have these.
The documentation for this installation script is available here.
The code of this (...)
-
Requesting the creation of a channel
12 March 2010. Depending on how the platform is configured, the user may have two different ways to request the creation of a channel: the first at the time of registration, the second after registration, by filling in a request form.
Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...)
On other sites (5385)
-
dnn/native: add native support for minimum
26 April 2020, by Guo, Yejun
It can be tested with a model file generated with the Python script below:
import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.minimum(0.7, x)
x2 = tf.maximum(x1, 0.4)
y = tf.identity(x2, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())

graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
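For reference, the graph above simply clamps each pixel into [0.4, 0.7] (tf.minimum then tf.maximum), so its output is easy to verify with plain NumPy. A minimal sketch, assuming input.jpg and the out.jpg written by the script above:

import numpy as np
import imageio

# tf.minimum(0.7, x) followed by tf.maximum(x1, 0.4) clamps every
# pixel into the range [0.4, 0.7].
img = imageio.imread('input.jpg').astype(np.float32) / 255.0
expected = np.clip(img, 0.4, 0.7)

# Compare with what the graph wrote; differences should stay within
# 8-bit quantization error.
out = imageio.imread('out.jpg').astype(np.float32) / 255.0
print('max abs diff:', np.abs(expected - out).max())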
-
【carrierwave+ffmpeg】can't upload the video with sound
10 April 2020, by ken akita
I want to build a "Fake Instagram" Rails application.
I wrote the uploader and I can upload a video (mp4), but it has no sound.
I can hear the audio when I play the file on my desktop, but not in my application. [screenshot: the player's speaker icon shows no audio]



The code is below. I suspect something is wrong in image_uploader.rb or show.html.erb.
Note that I first wrote ImageUploader to upload images, then added video posting on top of it. Both uploads work, but the video has no sound.
Thank you very much for your advice!



*image_uploader.rb
require 'streamio-ffmpeg'

class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  # Runs on every upload, videos included.
  process resize_to_fit: [230, 183]

  storage :file

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  version :thumb do
    process resize_to_fit: [230, 183]
  end

  version :screenshot do
    process :screenshot

    def full_filename(for_file = model.logo.file)
      "screenshot.jpg"
    end
  end

  # Extracts a single frame from the uploaded video to use as a poster image.
  def screenshot
    tmpfile = File.join(File.dirname(current_path), "tmpfile")
    File.rename(current_path, tmpfile)

    movie = FFMPEG::Movie.new(tmpfile)
    movie.screenshot(current_path + ".jpg", { resolution: '230x183' }, preserve_aspect_ratio: :width)
    File.rename(current_path + ".jpg", current_path)

    File.delete(tmpfile)
  end

  def extension_whitelist
    %w(jpg jpeg gif png mp4)
  end
end




*show.html.erb
<h1>Stroll</h1>
<p><%= notice %></p>
<p></p>
<% if @stroll[:image] == nil %>
  <%= image_tag "03summer_ver8-l.jpg", width: "230", height: "183" %>
<% elsif @stroll.image.file.content_type.include?('video/') %>
  <%= link_to @stroll.image_url.to_s do %>
    <%= image_tag(@stroll.image_url(:screenshot).to_s, id: "image", alt: "screenshot") %>
  <% end %>
<% else %>
  <%= image_tag(@stroll.image.url) if @stroll.image && @stroll.image.url %>
<% end %>
<p>user <%= @stroll.user.name %></p>
<p>『<%= @stroll.title %>』</p>
<p>content: <%= @stroll.content %></p>
<p>tag: <%= @stroll.tag %></p>
<p>comment: </p>
<div>
  <%= render partial: 'comments/index', locals: { comments: @comments, stroll: @stroll } %>
</div>
<% if user_signed_in? && @stroll.user_id != current_user.id %>
  <%= render partial: 'comments/form', locals: { comment: @comment, stroll: @stroll } %>
<% end %>
<% if current_user %>
  <% unless @stroll.user_id == current_user.id %>
    <% if @favorite.present? %>
      <%= link_to 'cancel', favorite_path(id: @favorite.id), method: :delete, class: 'btn btn-info' %>
    <% else %>
      <%= link_to 'Like!', favorites_path(stroll_id: @stroll.id), method: :post, class: 'btn btn-warning' %>
    <% end %>
  <% end %>
<% end %>
&emsp;
<%= link_to "Index", strolls_path %>



-
How to create a spectrogram image from an audio file in Python, just like FFmpeg does?
2 May 2020, by hamandishe Mk
My code:



import matplotlib.pyplot as plt
from matplotlib.pyplot import specgram
import librosa
import librosa.display
import numpy as np
import io
from PIL import Image

samples, sample_rate = librosa.load('thabo.wav')

# Render onto a bare 4x4-inch figure with no axes or frame.
fig = plt.figure(figsize=[4, 4])
ax = fig.add_subplot(111)
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
ax.set_frame_on(False)

# Mel spectrogram in dB, drawn with librosa's display helper.
S = librosa.feature.melspectrogram(y=samples, sr=sample_rate)
librosa.display.specshow(librosa.power_to_db(S, ref=np.max))

# Save the figure into an in-memory buffer and reopen it as a PIL image.
buf = io.BytesIO()
plt.savefig(buf, bbox_inches='tight', pad_inches=0)

# plt.close('all')
buf.seek(0)
im = Image.open(buf)
# im = Image.open(buf).convert('L')
im.show()
buf.close()




[spectrogram image produced by the script above]
Using FFmpeg



ffmpeg -i thabo.wav -lavfi showspectrumpic=s=224x224:mode=separate:legend=disabled spectrogram.png



[spectrogram image produced by FFmpeg]
Please help: I want a spectrogram exactly like the one produced by FFmpeg, for use with a speech-recognition model exported from Google's Teachable Machine (offline recognition).
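A pixel-exact match is unlikely, since showspectrumpic applies its own frequency scale, intensity scaling, and colour map. Writing the STFT magnitudes straight to an image gets much closer than the matplotlib route, because it removes the figure padding and axis whitespace entirely. A sketch, with n_fft and hop_length as illustrative choices:

import numpy as np
import librosa
from PIL import Image

samples, sample_rate = librosa.load('thabo.wav')

# Magnitude spectrogram in dB; rows are frequency bins, columns are frames.
S = np.abs(librosa.stft(samples, n_fft=1024, hop_length=256))
S_db = librosa.amplitude_to_db(S, ref=np.max)

# Normalize to 8-bit grayscale and flip so low frequencies sit at the bottom.
img = (255 * (S_db - S_db.min()) / (S_db.max() - S_db.min())).astype(np.uint8)
img = np.flipud(img)

# Resize to the 224x224 input the Teachable Machine model expects.
Image.fromarray(img).resize((224, 224)).save('spectrogram_py.png')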