
Media (3)
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
-
GetID3 - Additional buttons
9 April 2013, by
Updated: April 2013
Language: French
Type: Image
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
Other articles (77)
-
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011, by
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects / individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
User profiles
12 April 2011, by
Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
On other sites (10522)
-
Reduce HLS latency from +30 seconds
4 June 2014, by Rick
Ubuntu 12.04
nginx 1.2.4
avconv -version
avconv version 0.8.10-4:0.8.10-0ubuntu0.12.04.1, Copyright (c) 2000-2013 the Libav developers
built on Feb 6 2014 20:56:59 with gcc 4.6.3
avconv 0.8.10-4:0.8.10-0ubuntu0.12.04.1
libavutil 51. 22. 2 / 51. 22. 2
libavcodec 53. 35. 0 / 53. 35. 0
libavformat 53. 21. 1 / 53. 21. 1
libavdevice 53. 2. 0 / 53. 2. 0
libavfilter 2. 15. 0 / 2. 15. 0
libswscale 2. 1. 0 / 2. 1. 0
libpostproc 52. 0. 0 / 52. 0. 0

I'm using avconv and nginx to create an HLS stream, but right now my latency is regularly well over 30 seconds. After much reading I am aware that HLS has built-in latency and that 10 seconds is expected and even preferred, but 30 seconds seems quite extreme.
I've seen a lot of discussion on the nginx-rtmp Google group; this thread in particular had a lot of suggestions. I have attempted to solve my issue by reducing hls_fragment and hls_playlist_length, but they haven't had a significant effect.

nginx.conf:
#user nobody;
worker_processes 1;

error_log logs/error.log debug;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8888;
        server_name localhost;

        add_header 'Access-Control-Allow-Origin' "*";

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /tmp;
        }

        # rtmp stat
        location /stat {
            rtmp_stat all;
            rtmp_stat_stylesheet stat.xsl;
        }

        location /stat.xsl {
            # you can move stat.xsl to a different location
            root /usr/build/nginx-rtmp-module;
        }

        # rtmp control
        location /control {
            rtmp_control all;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

rtmp {
    server {
        listen 1935;
        ping 30s;
        notify_method get;

        application myapp {
            live on;

            hls on;
            hls_path /tmp/hls;
            hls_base_url http://x.x.x.x:8888/hls/;
            hls_sync 2ms;
            hls_fragment 2s;

            #hls_variant _low BANDWIDTH=160000;
            #hls_variant _mid BANDWIDTH=320000;
            #hls_variant _hi BANDWIDTH=640000;
        }
    }
}

avconv command:
avconv -r 30 -y -f image2pipe -codec:v mjpeg -i - -f flv -codec:v libx264 -profile:v baseline -preset ultrafast -tune zerolatency -an -f flv rtmp://127.0.0.1:1935/myapp/mystream
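For what it's worth (an assumption added here, not part of the post above): nginx-rtmp can only start a new HLS fragment on a keyframe, so a 2 s hls_fragment only takes effect if the encoder emits a keyframe at least every 2 s, and hls_playlist_length bounds how far behind the live edge a player starts. A minimal sketch of both changes, built on the poster's own command with illustrative values:

# Encoder side: with -r 30, -g 60 -keyint_min 60 forces a keyframe every 2 s,
# so 2 s fragments are actually possible
avconv -r 30 -y -f image2pipe -codec:v mjpeg -i - \
    -f flv -codec:v libx264 -profile:v baseline -preset ultrafast -tune zerolatency \
    -g 60 -keyint_min 60 -an -f flv rtmp://127.0.0.1:1935/myapp/mystream

# nginx-rtmp side: keep the playlist short so players start closer to the live edge
#     hls_fragment 2s;
#     hls_playlist_length 6s;

HLS clients typically buffer around three fragments before starting playback, so even with 2 s fragments the practical floor is still several seconds rather than sub-second latency.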
-
Setting start date in gource
5 June 2014, by rfc1484
I'm trying to run gource from a certain date, but it reports a 'codec not found' error.
This is the command without a start date:
gource -1280x720 --seconds-per-day 1 --stop-at-end --hide filenames --hide files --git-branch master --camera-mode track --output-ppm-stream - | ffmpeg -y -b 3000k -r 25 -f image2pipe -vcodec ppm -i - test.mp4
Which works fine and generates the .mp4 file as expected.
This is the command with a start date:
gource -1280x720 --start-date '2014-01-01' --seconds-per-day 1 --stop-at-end --hide filenames --hide files --git-branch master --camera-mode track --output-ppm-stream - | ffmpeg -y -b 3000k -r 25 -f image2pipe -vcodec ppm -i - test.mp4
Which doesn't work and generates the following error message:
[image2pipe @ 0x78a600] Could not find codec parameters (Video : ppm)
Any suggestions on how to fix this?
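A hedged guess, not drawn from the post itself: with --start-date, gource may wait noticeably longer before writing its first PPM frame (or write none at all if master has no commits after that date), so ffmpeg gives up probing the pipe and reports missing codec parameters. Two checks worth trying, keeping the original pipeline (the probe values below are illustrative):

# Check that the repository actually has commits after the chosen date
git log --oneline --since='2014-01-01' master | head

# Give ffmpeg more data and time to probe the piped PPM stream
# (probesize is in bytes, analyzeduration in microseconds; -b:v moved after
# the input so it applies to the output)
gource -1280x720 --start-date '2014-01-01' --seconds-per-day 1 --stop-at-end \
    --hide filenames --hide files --git-branch master --camera-mode track \
    --output-ppm-stream - | \
    ffmpeg -y -probesize 100000000 -analyzeduration 10000000 \
    -r 25 -f image2pipe -vcodec ppm -i - -b:v 3000k test.mp4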
-
Error in converting audio file format from ogg to wav [on hold]
9 June 2014, by Sumit Bisht
I am trying to convert an Ogg file that was created using WebRTC (HTML5 getUserMedia content generated on Firefox), then transferred to and decoded on the server, into a WAV file through ffmpeg, but I am getting this error on the command line while trying to convert:
$ ffmpeg -i 2014-6-5_16-17-54.ogg res1.wav
ffmpeg version 2.0.1 Copyright (c) 2000-2013 the FFmpeg developers
built on May 1 2014 13:12:12 with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-4)
configuration: --enable-gpl --enable-version3 --enable-shared --enable-nonfree --enable-postproc --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid
libavutil 52. 38.100 / 52. 38.100
libavcodec 55. 18.102 / 55. 18.102
libavformat 55. 12.100 / 55. 12.100
libavdevice 55. 3.100 / 55. 3.100
libavfilter 3. 79.101 / 3. 79.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 3.100 / 52. 3.100
Guessed Channel Layout for Input Stream #0.0 : mono
Input #0, ogg, from '2014-6-5_16-17-54.ogg':
Duration: 00:00:01.84, start: 0.000000, bitrate: 18 kb/s
Stream #0:0: Audio: opus, 48000 Hz, mono
Metadata:
ENCODER : Mozilla29.0.1
[graph 0 input from stream 0:0 @ 0x18dca20] Invalid sample format (null)
Error opening filters!

However, I am able to play the file on the server, and using the same command I am able to convert .ogg files generated elsewhere. What might I be missing?
Edit:
Here's the source code that is used to write to the file:

1) During startup - use the getUserMedia API methods.
navigator.getUserMedia({
    audio: true,
    video: false
}, function(stream) {
    // record the microphone stream with RecordRTC
    audioStream = RecordRTC(stream, {
        bufferSize: 16384
    });
    audioStream.startRecording();
}, function(err) {
    // handle getUserMedia failure
    console.error(err);
});

2) During stopping of the recording - extracting the recorded information.
function(audioDataURL) {
    var audioFile = {};
    audioFile = {
        contents: audioDataURL
    };
    // audioFile is then sent to the server
}

On the server end, the following code creates a file from this data:
dataURL = dataURL.split(',').pop(); // dataURL is the audioDataURL as defined above
fileBuffer = new Buffer(dataURL, 'base64');
fs.writeFileSync(filePath, fileBuffer);
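A hedged reading of the output above, not stated in the original question: the configure line shows no --enable-libopus, and the input stream is Opus (recorded by Mozilla29.0.1), so this ffmpeg 2.0.1 build can identify the stream but has no decoder for it, which surfaces as the 'Invalid sample format (null)' filter error. A sketch of how to confirm and work around this, assuming opus-tools or a libopus-enabled ffmpeg build is available (the file name is the one from the question):

# Confirm whether this build has any Opus decoder at all
ffmpeg -decoders 2>/dev/null | grep -i opus

# Option 1: install or rebuild ffmpeg configured with --enable-libopus,
# then the original command works unchanged
ffmpeg -i 2014-6-5_16-17-54.ogg res1.wav

# Option 2: bypass ffmpeg and decode the Ogg Opus file with opus-tools
opusdec 2014-6-5_16-17-54.ogg res1.wav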