
Other articles (66)
-
Participating in its translation
10 April 2011. You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up for the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...) -
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out. -
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (13931)
-
copy .wav audio file settings to new .wav file
18 November 2020, by Jonas. Currently I am working with a speech-to-text translation model that takes a .wav file and turns the audible speech within the audio into a text transcript. The model worked before on .wav audio recordings that were recorded directly. However, I am now trying to do the same with audio that was originally part of a video.


The steps are as follows:


- retrieve a video file from a stream URL through ffmpeg
- strip the .aac audio from the video
- convert the .aac audio to .wav
- save the .wav to S3 for later usage

The ffmpeg command I use is listed below for reference:


rm /tmp/jonas/*
 ffmpeg -i {stream_url} -c copy -bsf:a aac_adtstoasc /tmp/jonas/{filename}.aac
 ffmpeg -i /tmp/jonas/{filename}.aac /tmp/jonas/{filename}.wav
 aws s3 cp /tmp/jonas/{filename}.wav {s3_audio_save_location}



The problem now is that my speech-to-text model no longer works on this audio. I use sox to convert the audio, but sox does not seem to grab the audio, and without sox the model does not work either. This leads me to believe there is a difference in the .wav audio formatting. I would therefore like to know how I can either format the .wav with the same settings as a .wav that does work, or find a way to compare the .wav audio formatting and set the new .wav to the correct format manually through ffmpeg.
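One way to compare the two files is ffprobe, which ships with ffmpeg and prints each stream's codec, sample rate, channel count and sample format. A minimal sketch, assuming the two files are called working.wav and broken.wav (hypothetical names):

# print the audio stream parameters of both files for comparison
ffprobe -hide_banner -select_streams a -show_streams working.wav
ffprobe -hide_banner -select_streams a -show_streams broken.wav

Comparing the codec_name, sample_rate, channels and sample_fmt fields of the two outputs shows exactly which settings differ.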


I tried with the Python exiftool package and found the metadata of the two files:


The metadata of the working .wav file is


The metadata of the .wav file that does not work is


As can be seen, the working .wav file has some settings that differ, which I would like to mimic in the second .wav file; presumably that would make my model work again :)
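If the differences turn out to be in sample rate, channel layout or sample format, ffmpeg can be told to write the .wav with explicit settings instead of inheriting them from the .aac source. A hedged sketch, assuming the working file is 16 kHz mono 16-bit PCM (example values only, since the actual metadata is not reproduced above):

# re-encode the extracted audio with explicit wav parameters:
# -ar sets the sample rate, -ac the channel count, -c:a the PCM codec/sample format
ffmpeg -i /tmp/jonas/{filename}.aac -ar 16000 -ac 1 -c:a pcm_s16le /tmp/jonas/{filename}.wav

The three options map directly to the sample rate, channel and encoding fields reported by exiftool or ffprobe on the working file, so the correct values can simply be copied over.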


With kind regards,
Jonas


-
dnn_backend_native_layer_mathunary: add atan support
18 June 2020, by Ting Fu
dnn_backend_native_layer_mathunary: add atan support
It can be tested with the model generated by the Python script below:
import tensorflow as tf
import numpy as np
import imageio

in_img = imageio.imread('input.jpeg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]

x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
x1 = tf.atan(x)
x2 = tf.divide(x1, 3.1416/4) # pi/4
y = tf.identity(x2, name='dnn_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")

output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))

Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Guo Yejun <yejun.guo@intel.com>
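For reference, once the .pb graph has been converted to the native format with the convert.py tool mentioned in the script, the model can be exercised through ffmpeg's dnn_processing filter. A hedged sketch; the exact convert.py invocation is an assumption, and filter option names may differ between ffmpeg versions:

# convert the TensorFlow graph to the native dnn model format (invocation assumed)
python path_to_ffmpeg/tools/python/convert.py image_process.pb

# run the generated model over an image using the native dnn backend
ffmpeg -i input.jpeg -vf dnn_processing=dnn_backend=native:model=image_process.model:input=dnn_in:output=dnn_out out.jpg
-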
FFmpeg process killed while converting .mov file
10 November 2020, by Vala Khosravi. I'm using FFmpeg to reduce my videos' file size. When I give a .mov file as input with this command:


ffmpeg -i in.mov -c:a copy -crf 20 out.mov



The program starts working and after a while it gets killed. Here are the last lines of the log that I get:


Output #0, mov, to '/home/ubuntu/test.mov':
 Metadata:
 major_brand : qt
 minor_version : 0
 compatible_brands: qt
 com.apple.quicktime.creationdate: 2020-08-29T15:03:17+0430
 com.apple.quicktime.make: Apple
 com.apple.quicktime.model: MacBookPro14,1
 com.apple.quicktime.software: Mac OS X 10.15.1 (19B88)
 encoder : Lavf57.83.100
 Stream #0:0(und): Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 2866x1716 [SAR 1:1 DAR 1433:858], q=-1--1, 60 fps, 15360 tbn, 60 tbc (default)
 Metadata:
 creation_time : 2020-11-10T09:04:43.000000Z
 handler_name : Core Media Data Handler
 encoder : Lavc57.107.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 207 kb/s (default)
 Metadata:
 creation_time : 2020-11-10T09:04:43.000000Z
 handler_name : Core Media Data Handler
Killed 23 fps=3.6 q=0.0 size= 0kB time=00:00:01.34 bitrate= 0.0kbits/s dup=1 drop=0 speed=0.21x



I have tried many different flags for FFmpeg but I still get the same error.


What's the solution?
Here is my input video file
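A plain "Killed" line with no ffmpeg error message usually means the process was terminated from outside, most often by the kernel's OOM killer, which is plausible for a 2866x1716, 60 fps libx264 encode on a small machine. A hedged sketch of how to check for that and reduce memory pressure, under the assumption of a Linux host and memory exhaustion as the cause:

# check whether the kernel's OOM killer terminated the process
dmesg | grep -i -E 'killed process|out of memory'

# retry with fewer encoder threads and a faster preset to lower memory use
ffmpeg -i in.mov -c:a copy -crf 20 -preset fast -threads 2 out.mov

If memory is indeed the limit, scaling the output down (for example with -vf scale=1920:-2) or adding swap on the host would be other ways to keep the encode within the available RAM.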