
Other articles (27)
-
Customising categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be thought of as a rubrique (section).
For a category-type document, the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a media-type document, the fields not displayed by default are: Descriptif rapide (short description)
It is also in this configuration area that you can specify the (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04
If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
On other sites (9001)
-
Updated (reproducible) - Gaps when recording using MediaRecorder API (audio/webm opus)
25 March 2019, by Jack Juiceson

----- UPDATE HAS BEEN ADDED BELOW -----
I have an issue with the MediaRecorder API (https://www.w3.org/TR/mediastream-recording/#mediarecorder-api).
I am using it to record speech from a web page (Chrome was used in this case) and save it as chunks.
I need to be able to play the audio both while it is being recorded and afterwards, so it is important to keep those chunks. Here is the code which records the data:
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(function(stream) {
    recorder = new MediaRecorder(stream, { mimeType: 'audio/webm; codecs="opus"' })
    recorder.ondataavailable = function(e) {
        // Read the blob from `e.data`, base64-encode it and send it to the server
    }
    recorder.start(1000) // emit a chunk roughly every 1000 ms
})

The issue is that the WebM file I get when I concatenate all the parts is (rarely) corrupted. I can play it as WebM, but when I try to convert it (with ffmpeg) to something else, I get a file with shifted timings.
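The comment above only hints at how each chunk reaches the backend. For reference, here is a minimal sketch of one way to do it, assuming the chunks are base64-encoded and posted as JSON; the '/chunks' endpoint is hypothetical and not part of the original question:

// Illustrative only: base64-encode each recorded chunk and POST it.
recorder.ondataavailable = function(e) {
    var reader = new FileReader();
    reader.onloadend = function() {
        // reader.result is a data URL, something like "data:audio/webm;...;base64,...."
        var base64 = reader.result.split(',').pop();
        fetch('/chunks', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ chunk: base64 })
        });
    };
    reader.readAsDataURL(e.data); // e.data is a Blob holding one WebM fragment
};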
For example, I am trying to convert a file whose duration is 00:36:27.78 to wav, but I get a file with duration 00:36:26.04, which is 1.74 s less. At the beginning of the file the audio is the same, but after about 10 minutes the WebM file plays with a small delay.
After some research, I found out that it also does not play correctly with the browser's MediaSource API, which I use for playing the chunks. I tried two ways of playing those chunks:
When I just merge all the parts into a single blob, it works fine.
When I add them via the sourceBuffer object, there are gaps (I can see them by inspecting the buffered property):

697.196 - 697.528 (330 ms)
996.198 - 996.754 (550 ms)
1597.16 - 1597.531 (370 ms)
1896.893 - 1897.183 (290 ms)

Those gaps add up to 1.55 s and they sit exactly where the desync between the wav and webm files starts. Unfortunately, the file where this is reproducible cannot be shared because it contains a customer's private data, and I have not been able to reproduce the issue on other media yet.
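For context, here is a minimal sketch of that second playback path (MediaSource plus a SourceBuffer) and of how the buffered ranges above can be listed; the variable names and the chunk queue are assumptions, not taken from the original code:

// Illustrative sketch: append recorded chunks through MediaSource/SourceBuffer
// and list the buffered time ranges.
var audio = document.querySelector('audio');
var mediaSource = new MediaSource();
var queue = [];          // ArrayBuffers decoded from the received chunks
var sourceBuffer = null;

audio.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function() {
    sourceBuffer = mediaSource.addSourceBuffer('audio/webm; codecs="opus"');
    sourceBuffer.addEventListener('updateend', appendNext);
    appendNext();
});

function appendNext() {
    if (sourceBuffer && !sourceBuffer.updating && queue.length) {
        sourceBuffer.appendBuffer(queue.shift());
    }
}

function printBufferZones() {
    // A gap shows up as the space between ranges.end(i) and ranges.start(i + 1)
    var ranges = sourceBuffer.buffered;
    for (var i = 0; i < ranges.length; i++) {
        console.log(ranges.start(i).toFixed(3) + ' - ' + ranges.end(i).toFixed(3));
    }
}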
What can be the cause of such an issue?
----- UPDATE -----
I was able to reproduce the issue at https://jsfiddle.net/96uj34nf/4/
To see the problem, click the "Print buffer zones" button and it will display the time ranges. You can see that there are two gaps:

Buffered ranges: 0 - 136.349, 141.388 - 195.439, 197.57 - 198.589
- 136.349 - 141.388
- 195.439 - 197.57

So, as you can see, there are roughly 5-second and 2-second gaps. I would be happy if someone could shed some light on why this is happening or how to avoid it.
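For reference, those two gap sizes follow directly from the buffered ranges above; a small sketch of the arithmetic:

// A gap is the distance between the end of one buffered range and the start of the next.
var ranges = [
    [0, 136.349],
    [141.388, 195.439],
    [197.57, 198.589]
];
for (var i = 0; i < ranges.length - 1; i++) {
    var gap = ranges[i + 1][0] - ranges[i][1];
    console.log('gap ' + (i + 1) + ': ' + gap.toFixed(3) + ' s');
}
// -> gap 1: 5.039 s, gap 2: 2.131 s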
Thank you
-
Error in converting audio file format from ogg to wav [on hold]
9 June 2014, by Sumit Bisht
I am trying to convert an ogg file, created using WebRTC (HTML5 getUserMedia content generated on Firefox) and then transferred and decoded on the server, into a wav file with ffmpeg, but I get this error on the command line when trying to convert:
$ ffmpeg -i 2014-6-5_16-17-54.ogg res1.wav
ffmpeg version 2.0.1 Copyright (c) 2000-2013 the FFmpeg developers
built on May 1 2014 13:12:12 with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-4)
configuration: --enable-gpl --enable-version3 --enable-shared --enable-nonfree --enable-postproc --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid
libavutil 52. 38.100 / 52. 38.100
libavcodec 55. 18.102 / 55. 18.102
libavformat 55. 12.100 / 55. 12.100
libavdevice 55. 3.100 / 55. 3.100
libavfilter 3. 79.101 / 3. 79.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 3.100 / 52. 3.100
Guessed Channel Layout for Input Stream #0.0 : mono
Input #0, ogg, from '2014-6-5_16-17-54.ogg':
Duration: 00:00:01.84, start: 0.000000, bitrate: 18 kb/s
Stream #0:0: Audio: opus, 48000 Hz, mono
Metadata:
ENCODER : Mozilla29.0.1
[graph 0 input from stream 0:0 @ 0x18dca20] Invalid sample format (null)
Error opening filters!

However, I am able to play the file on the server, and with the same command I am able to convert .ogg files generated elsewhere. What might I be missing?
Edit:
Here is the source code that is used to write to the file.

1) During startup - using the getUserMedia API:
navigator.getUserMedia({
    audio: true,
    video: false
}, function(stream) {
    audioStream = RecordRTC(stream, {
        bufferSize: 16384
    });
    audioStream.startRecording();
}, function(error) {
    // error callback required by the legacy getUserMedia signature
    console.error('getUserMedia failed', error);
});

2) During stopping of the recording - extracting the recorded information:
function(audioDataURL) {
    var audioFile = {};
    audioFile = {
        contents: audioDataURL
    };
}
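The snippet above is just the callback body; presumably it is wired to RecordRTC's stop/extract calls. A hedged sketch of how that wiring might look (the RecordRTC method names stopRecording and getDataURL are assumed from its documented API; the sendToServer helper and '/upload' endpoint are hypothetical):

// Illustrative only: stop recording, pull the data URL, and upload it.
function stopAndUpload() {
    audioStream.stopRecording(function() {
        audioStream.getDataURL(function(audioDataURL) {
            var audioFile = {
                contents: audioDataURL  // e.g. "data:audio/ogg;base64,...."
            };
            sendToServer(audioFile);
        });
    });
}

function sendToServer(audioFile) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/upload');
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(audioFile));
}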
On the server end, the following code creates a file from this data:
dataURL = dataURL.split(',').pop(); // dataURL is the audioDataURL as defined above
fileBuffer = new Buffer(dataURL, 'base64');
fs.writeFileSync(filePath, fileBuffer);
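To tie the server side together, here is a minimal Node.js sketch (illustrative, with made-up file paths) that writes the decoded audio and then runs the same ffmpeg conversion from the server. It assumes an ffmpeg build that can decode Opus (for example one configured with --enable-libopus, or a newer ffmpeg with the native Opus decoder); the build shown in the log above has no Opus decoder configured, which may be why the filter graph cannot determine a sample format.

// Illustrative Node.js sketch: save the uploaded audio, then convert it with ffmpeg.
var fs = require('fs');
var execFile = require('child_process').execFile;

function saveAndConvert(dataURL, oggPath, wavPath, callback) {
    var base64 = dataURL.split(',').pop();           // strip the "data:...;base64," prefix
    fs.writeFileSync(oggPath, new Buffer(base64, 'base64'));

    execFile('ffmpeg', ['-i', oggPath, wavPath], function(err, stdout, stderr) {
        callback(err, wavPath);
    });
}

// Example usage (paths are hypothetical):
// saveAndConvert(audioFile.contents, '/tmp/in.ogg', '/tmp/out.wav', function(err) {
//     if (err) console.error('conversion failed:', err);
// });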