
Media (1)
- The conservation of net art in museums: the strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (97)
- Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier(), so that visitors are able to edit their information on the authors page
- Contributing to the documentation
10 April 2011
Documentation is one of the most important and most demanding tasks in the development of a technical tool. Any outside contribution on this subject is essential: critique of what already exists; help writing articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; the creation of explanatory screencasts; the translation of the documentation into a new language.
To do so, you can register at (...)
- Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (6031)
- Merge commit 'cd4663dc80323ba64989d0c103d51ad3ee0e9c2f'
12 November 2017, by James Almer
- tf.contrib.signal.stft returns an empty matrix
9 December 2017, by matt-pielat
This is the piece of code I run:
import tensorflow as tf
sess = tf.InteractiveSession()
filename = 'song.mp3' # 30 second mp3 file
SAMPLES_PER_SEC = 44100
audio_binary = tf.read_file(filename)
pcm = tf.contrib.ffmpeg.decode_audio(audio_binary, file_format='mp3',
                                     samples_per_second=SAMPLES_PER_SEC, channel_count=1)
stft = tf.contrib.signal.stft(pcm, frame_length=1024, frame_step=512, fft_length=1024)
sess.close()
The mp3 file is properly decoded, because
print(pcm.eval().shape)
returns:
(1323119, 1)
And there are even some actual non-zero values when I print them with
print(pcm.eval()[1000:1010]):
[[ 0.18793298]
[ 0.16214484]
[ 0.16022217]
[ 0.15918455]
[ 0.16428113]
[ 0.19858395]
[ 0.22861415]
[ 0.2347789 ]
[ 0.22684409]
[ 0.20728172]]
But for some reason
print(stft.eval().shape)
evaluates to:
(1323119, 0, 513) # why the zero dimension?
And therefore
print(stft.eval())
is:
[]
According to this, the second dimension of the
tf.contrib.signal.stft
output is equal to the number of frames. Why are there no frames, though?
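A plausible explanation, with a hedged sketch of a fix (assuming the same song.mp3 file, sample rate, and TF 1.x contrib APIs as above): tf.contrib.signal.stft frames the last axis of its input, and decode_audio returns shape [samples, channels]. With channel_count=1, pcm has shape (1323119, 1), so stft sees 1323119 independent signals of length 1 each; no 1024-sample frame fits in a length-1 signal, which is why the frames dimension is 0 (the 513 is just fft_length / 2 + 1 bins). Transposing so the samples sit on the last axis should produce non-empty frames:
import tensorflow as tf

sess = tf.InteractiveSession()

audio_binary = tf.read_file('song.mp3')
pcm = tf.contrib.ffmpeg.decode_audio(audio_binary, file_format='mp3',
                                     samples_per_second=44100, channel_count=1)

# Put the samples on the last axis: [samples, channels] -> [channels, samples].
signals = tf.transpose(pcm)

stft = tf.contrib.signal.stft(signals, frame_length=1024, frame_step=512,
                              fft_length=1024)

# Expected shape: (1, 2583, 513) = [channels, frames, fft_bins],
# since frames = 1 + (1323119 - 1024) // 512 = 2583.
print(stft.eval().shape)

sess.close()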
- Using ffmpeg to generate a dash manifest, and it cannot be played by dash.js
18 March 2019, by Punkhead
I'm using ffmpeg to encode an incoming stream over the rtmp protocol; the command is as follows:
ffmpeg -re -i rtmp://localhost:1935${StreamPath} -use_timeline 1 \
-use_template 1 -window_size 10 -min_seg_duration 5000 -f dash out.mpd
The manifest looks like this:
<?xml version="1.0" encoding="utf-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" profiles="urn:mpeg:dash:profile:isoff-live:2011" type="static" mediaPresentationDuration="PT1M36.4S" minBufferTime="PT8.3S">
  <ProgramInformation>
  </ProgramInformation>
  <Period start="PT0.0S">
    <AdaptationSet contentType="video" segmentAlignment="true" bitstreamSwitching="true" frameRate="30/1">
      <Representation mimeType="video/mp4" codecs="avc1.640028" width="1920" height="1080" frameRate="30/1">
        <SegmentTemplate timescale="15360" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="4">
          <SegmentTimeline>
            <S t="384000" d="128000" />
            <S d="71680" />
            <S d="128000" r="4" />
            <S d="56832" />
            <S d="128000" />
            <S d="72704" />
          </SegmentTimeline>
        </SegmentTemplate>
      </Representation>
    </AdaptationSet>
    <AdaptationSet contentType="audio" segmentAlignment="true" bitstreamSwitching="true">
      <Representation mimeType="audio/mp4" codecs="mp4a.40.2" bandwidth="128000" audioSamplingRate="44100">
        <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2" />
        <SegmentTemplate timescale="44100" initialization="init-stream$RepresentationID$.m4s" media="chunk-stream$RepresentationID$-$Number%05d$.m4s" startNumber="4">
          <SegmentTimeline>
            <S t="1099755" d="367616" />
            <S d="205824" />
            <S d="367616" r="4" />
            <S d="162816" />
            <S d="367616" />
            <S d="207872" />
          </SegmentTimeline>
        </SegmentTemplate>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
When I try to play it in the dash.js player, an error occurs:
[112] Parsing complete: ( xml2json: 3.50ms, objectiron: 1.76ms, total: 0.00526s) Debug.js:127
[116] SegmentTimeline detected using calculated Live Edge Time Debug.js:127
[118] MediaSource attached to element. Waiting on open... Debug.js:127
[119] Manifest has been refreshed at Tue Jan 02 2018 01:57:35 GMT+0800 [1514829455.1] Debug.js:127
[155] MediaSource is open! Debug.js:127
[156] Duration successfully set to: 96.4 Debug.js:127
[157] Added 0 inline events Debug.js:127
[158] video codec: video/mp4;codecs="avc1.640028" Stream.js:225
Uncaught TypeError: Cannot read property 'type' of null
at z (Stream.js:225)
at C (Stream.js:285)
at D (Stream.js:373)
at E (Stream.js:398)
at Object.d [as activate] (Stream.js:107)
at y (StreamController.js:363)
at MediaSource.c (StreamController.js:342)
Then it fails to play back.
Is it because I didn't set the parameters right in ffmpeg, or is this a bug in dash.js?
I'm really stuck here!
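One detail in the manifest above stands out, offered as a guess rather than a diagnosis: the audio Representation carries bandwidth="128000", but the video Representation has no bandwidth attribute at all, and dash.js derives its bitrate/quality list from Representation@bandwidth, so a missing value could plausibly leave the player with the null media info the stack trace shows. Below is a minimal sketch that flags the problem (the check_mpd helper and the out.mpd path are illustrative, not from the original post):
import xml.etree.ElementTree as ET

MPD_NS = '{urn:mpeg:dash:schema:mpd:2011}'

def check_mpd(path):
    # Flag Representation elements that lack a bandwidth attribute,
    # which players may need in order to build their quality list.
    root = ET.parse(path).getroot()
    for aset in root.iter(MPD_NS + 'AdaptationSet'):
        ctype = aset.get('contentType', '?')
        for rep in aset.iter(MPD_NS + 'Representation'):
            if rep.get('bandwidth') is None:
                print('%s Representation (codecs=%s) has no bandwidth'
                      % (ctype, rep.get('codecs')))

check_mpd('out.mpd')  # here: video Representation (codecs=avc1.640028) has no bandwidth
If that is the cause, giving ffmpeg an explicit video bitrate (for example with -b:v) should let the dash muxer write a bandwidth value for the video Representation; it is also worth trying the same manifest in another player to separate an encoding problem from a dash.js bug.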