
Media (91)
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
-
Les Miserables
4 June 2012
Updated: February 2013
Language: English
Type: Text
-
Not displaying certain information: home page
23 November 2011
Updated: November 2011
Language: French
Type: Image
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
-
Richard Stallman et la révolution du logiciel libre - Une biographie autorisée (version epub)
28 October 2011
Updated: October 2011
Language: English
Type: Text
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (106)
-
Improving the base version
13 September 2013
A nicer multiple select
The Chosen plugin improves the usability of multiple-select fields; see the two images below for a comparison.
To do this, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)
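As a rough illustration, and not part of the original article: the select[multiple] value entered in the plugin settings is essentially a CSS/jQuery selector, and the enhancement applied on the public site amounts to calling Chosen on those fields. A minimal sketch, assuming jQuery and the Chosen library are already loaded:

jQuery(function ($) {
  // Enhance every multiple-select field with Chosen's searchable widget;
  // this mirrors the select[multiple] value entered in the plugin settings.
  $('select[multiple]').chosen();
});

-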
The plugin: mutualisation management
2 March 2010
The mutualisation management plugin lets you manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure-SPIP solution to replace the older one.
Basic installation
Install the SPIP files on the server.
Then add the "mutualisation" plugin at the root of the site, as described here.
Customise the central mes_options.php file as you wish. As an example, here is the one used by the mediaspip.net platform:
<?php (...)
-
Adding information specific to users, and other author-related behaviour changes
12 April 2011
The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more information).
It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.
On other sites (12145)
-
ffmpeg on fb0 from Nexus Galaxy error: "could not get frame filename number 2"
3 March 2012, by user848106
I use ffmpeg to convert fb0 files from Android devices and produce screenshots. For some reason this does not work with the Nexus Galaxy.
I get this error:
[image2 @ 0000000001E0E350] Could not get frame filename number 2 from pattern 'image.png'
av_interleaved_write_frame(): Invalid argument
Here is the process:
C:\dev\scripts>adb pull /dev/graphics/fb0
3292 KB/s (16777216 bytes in 4.976s)
C:\dev\scripts>ffmpeg -vframes 1 -vcodec rawvideo -f rawvideo -pix_fmt rgb32 -s 720x1080 -i fb0 -f image2 -vcodec png image.png
ffmpeg version N-36635-gceb0dd9 Copyright (c) 2000-2012 the FFmpeg developers
built on Jan 9 2012 17:45:55 with gcc 4.6.2
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
libavutil 51. 34.100 / 51. 34.100
libavcodec 53. 54.100 / 53. 54.100
libavformat 53. 29.100 / 53. 29.100
libavdevice 53. 4.100 / 53. 4.100
libavfilter 2. 58.100 / 2. 58.100
libswscale 2. 1.100 / 2. 1.100
libswresample 0. 6.100 / 0. 6.100
libpostproc 51. 2.100 / 51. 2.100
[rawvideo @ 000000000037D5A0] Estimating duration from bitrate, this may be inaccurate
Input #0, rawvideo, from 'fb0':
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: rawvideo (BGRA / 0x41524742), bgra, 720x1080, 25 tbr, 25 tbn, 25 tbc
[buffer @ 000000000037D420] w:720 h:1080 pixfmt:bgra tb:1/1000000 sar:0/1 sws_param:
Output #0, image2, to 'image.png':
Metadata:
encoder : Lavf53.29.100
Stream #0:0: Video: png, bgra, 720x1080, q=2-31, 200 kb/s, 90k tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo -> png)
Press [q] to stop, [?] for help
[image2 @ 0000000001E0E350] Could not get frame filename number 2 from pattern 'image.png'
av_interleaved_write_frame(): Invalid argument
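A hedged note, not part of the original question: ffmpeg applies per-file options written before -i to that input, so the -vframes 1 above does not take effect on the output (the log shows ffmpeg attempting a second frame), and since the 16 MB fb0 dump holds several 720x1080 BGRA frames' worth of data, the image2 muxer then asks for a filename pattern for frame 2. Moving the frame limit to the output side should avoid that, along the lines of:

ffmpeg -vcodec rawvideo -f rawvideo -pix_fmt rgb32 -s 720x1080 -i fb0 -vframes 1 -f image2 -vcodec png image.png

-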
No output file when converting audio using FFmpeg in Android
18 March 2017, by Sha
I'm trying to convert an m4a audio file to wav using FFmpeg. The code executes fine and gives no error, but I don't see any output file in my directory.
This is what I am executing:
String[] cmd = {"-y", "-i", "/storage/emulated/0/jd.m4a", "-f","wav" ,"/storage/emulated/0/DCIM/Camera/output.wav"};
And this is what gets printed:
03-17 15:40:51.539 10111-10111/io.whispero.soundmerger E/onProgress: ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
03-17 15:40:51.542 10111-10111/io.whispero.soundmerger E/onProgress: built with gcc 4.8 (GCC)
03-17 15:40:51.545 10111-10111/io.whispero.soundmerger E/onProgress: configuration: --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
03-17 15:40:51.547 10111-10111/io.whispero.soundmerger E/onProgress: libavutil 55. 17.103 / 55. 17.103
03-17 15:40:51.552 10111-10111/io.whispero.soundmerger E/onProgress: libavcodec 57. 24.102 / 57. 24.102
03-17 15:40:51.554 10111-10111/io.whispero.soundmerger E/onProgress: libavformat 57. 25.100 / 57. 25.100
03-17 15:40:51.556 10111-10111/io.whispero.soundmerger E/onProgress: libavdevice 57. 0.101 / 57. 0.101
03-17 15:40:51.559 10111-10111/io.whispero.soundmerger E/onProgress: libavfilter 6. 31.100 / 6. 31.100
03-17 15:40:51.561 10111-10111/io.whispero.soundmerger E/onProgress: libswscale 4. 0.100 / 4. 0.100
03-17 15:40:51.562 10111-10111/io.whispero.soundmerger E/onProgress: libswresample 2. 0.101 / 2. 0.101
03-17 15:40:51.564 10111-10111/io.whispero.soundmerger E/onProgress: libpostproc 54. 0.100 / 54. 0.100
03-17 15:40:51.581 10111-10111/io.whispero.soundmerger E/onProgress: Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/jd.m4a':
03-17 15:40:51.585 10111-10111/io.whispero.soundmerger E/onProgress: Metadata:
03-17 15:40:51.587 10111-10111/io.whispero.soundmerger E/onProgress: major_brand : M4A
03-17 15:40:51.589 10111-10111/io.whispero.soundmerger E/onProgress: minor_version : 0
03-17 15:40:51.593 10111-10111/io.whispero.soundmerger E/onProgress: compatible_brands: M4A mp42isom
03-17 15:40:51.595 10111-10111/io.whispero.soundmerger E/onProgress: creation_time : 2017-02-16 10:36:39
03-17 15:40:51.597 10111-10111/io.whispero.soundmerger E/onProgress: Duration: 00:00:03.39, start: 0.000000, bitrate: 82 kb/s
03-17 15:40:51.602 10111-10111/io.whispero.soundmerger E/onProgress: Stream #0:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 16000 Hz, mono, fltp, 24 kb/s (default)
03-17 15:40:51.608 10111-10111/io.whispero.soundmerger E/onProgress: Metadata:
03-17 15:40:51.612 10111-10111/io.whispero.soundmerger E/onProgress: creation_time : 2017-02-16 10:36:39
03-17 15:40:51.614 10111-10111/io.whispero.soundmerger E/onProgress: Output #0, wav, to '/storage/emulated/0/DCIM/Camera/hyder.wav':
03-17 15:40:51.617 10111-10111/io.whispero.soundmerger E/onProgress: Metadata:
03-17 15:40:51.619 10111-10111/io.whispero.soundmerger E/onProgress: major_brand : M4A
03-17 15:40:51.621 10111-10111/io.whispero.soundmerger E/onProgress: minor_version : 0
03-17 15:40:51.623 10111-10111/io.whispero.soundmerger E/onProgress: compatible_brands: M4A mp42isom
03-17 15:40:51.625 10111-10111/io.whispero.soundmerger E/onProgress: ISFT : Lavf57.25.100
03-17 15:40:51.627 10111-10111/io.whispero.soundmerger E/onProgress: Stream #0:0(eng): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s (default)
03-17 15:40:51.629 10111-10111/io.whispero.soundmerger E/onProgress: Metadata:
03-17 15:40:51.631 10111-10111/io.whispero.soundmerger E/onProgress: creation_time : 2017-02-16 10:36:39
03-17 15:40:51.633 10111-10111/io.whispero.soundmerger E/onProgress: encoder : Lavc57.24.102 pcm_s16le
03-17 15:40:51.636 10111-10111/io.whispero.soundmerger E/onProgress: Stream mapping:
03-17 15:40:51.639 10111-10111/io.whispero.soundmerger E/onProgress: Stream #0:0 -> #0:0 (aac (native) -> pcm_s16le (native))
03-17 15:40:51.642 10111-10111/io.whispero.soundmerger E/onProgress: Press [q] to stop, [?] for help
03-17 15:40:51.645 10111-10111/io.whispero.soundmerger E/onProgress: size= 106kB time=00:00:03.39 bitrate= 256.2kbits/s speed= 181x
03-17 15:40:51.647 10111-10111/io.whispero.soundmerger E/onProgress: video:0kB audio:106kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.071860%
03-17 15:40:51.650 10111-10111/io.whispero.soundmerger E/SUCCESS: ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8 (GCC)
configuration: --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
libavutil 55. 17.103 / 55. 17.103
libavcodec 57. 24.102 / 57. 24.102
libavformat 57. 25.100 / 57. 25.100
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 31.100 / 6. 31.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/jd.m4a':
Metadata:
major_brand : M4A
minor_version : 0
compatible_brands: M4A mp42isom
creation_time : 2017-02-16 10:36:39
Duration: 00:00:03.39, start: 0.000000, bitrate: 82 kb/s
Stream #0:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 16000 Hz, mono, fltp, 24 kb/s (default)
Metadata:
creation_time : 2017-02-16 10:36:39
Output #0, wav, to '/storage/emulated/0/DCIM/Camera/hyder.wav':
Metadata:
major_brand : M4A
minor_version : 0
compatible_brands: M4A mp42isom
ISFT : Lavf57.25.100
Stream #0:0(eng): Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s (default)
Metadata:
creation_time : 2017-02-16 10:36:39
encoder : Lavc57.24.102 pcm_s16le
Stream mapping:
Stream #0:0 -> #0:0 (aac (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
size= 106kB time=00:00:03.39 bitrate= 256.2kbits/s speed= 181x
video:0kB audio:106kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.071860%
03-17 15:40:51.653 10111-10111/io.whispero.soundmerger E/onFinish: onFinish
Please help me figure out why I am not seeing any output audio file :(
Thanks.
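A hedged observation, not part of the original post: the command array above writes to /storage/emulated/0/DCIM/Camera/output.wav, while the pasted log reports the output going to /storage/emulated/0/DCIM/Camera/hyder.wav with 106 kB of audio written, so the log seems to come from a run with a different target path. A quick way to check what actually landed on the device is to list the directory over adb:

adb shell ls -l /storage/emulated/0/DCIM/Camera/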
-
RTP packets detected as UDP
28 February 2017, by user3172852
Here is what I am trying to do:
WebRTC endpoint > RTP Endpoint > ffmpeg > RTMP server.
This is what my SDP file looks like.
var cm_offer = "v=0\n" +
"o=- 3641290734 3641290734 IN IP4 127.0.0.1\n" +
"s=nginx\n" +
"c=IN IP4 127.0.0.1\n" +
"t=0 0\n" +
"m=audio 60820 RTP/AVP 0\n" +
"a=rtpmap:0 PCMU/8000\n" +
"a=recvonly\n" +
"m=video 59618 RTP/AVP 101\n" +
"a=rtpmap:101 H264/90000\n" +
"a=recvonly\n";What’s happening is that wireshark can detect the incoming packets at port 59618, but not as RTP packets but UDP packets. I am trying to capture the packets using ffmpeg with the following command :
ubuntu@ip-132-31-40-100:~$ ffmpeg -i udp://127.0.0.1:59618 -vcodec copy stream.mp4
ffmpeg version git-2017-01-22-f1214ad Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
configuration: --extra-libs=-ldl --prefix=/opt/ffmpeg --mandir=/usr/share/man --enable-avresample --disable-debug --enable-nonfree --enable-gpl --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-decoder=amrnb --disable-decoder=amrwb --enable-libpulse --enable-libfreetype --enable-gnutls --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libvorbis --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libspeex --enable-libass --enable-avisynth --enable-libsoxr --enable-libxvid --enable-libvidstab --enable-libwavpack --enable-nvenc
libavutil 55. 44.100 / 55. 44.100
libavcodec 57. 75.100 / 57. 75.100
libavformat 57. 63.100 / 57. 63.100
libavdevice 57. 2.100 / 57. 2.100
libavfilter 6. 69.100 / 6. 69.100
libavresample 3. 2. 0 / 3. 2. 0
libswscale 4. 3.101 / 4. 3.101
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100

All I get is a blinking cursor, and the stream.mp4 file is not written to disk after I exit (Ctrl+C).
So can you help me figure out:
- why Wireshark cannot detect the packets as RTP (I suspect it has something to do with the SDP; see the sketch just after this list), and
- how to handle the SDP answer when the RTP endpoint is pushing to ffmpeg, which doesn't send an answer back?
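A rough sketch, not from the original post: ffmpeg cannot recognise RTP payloads on a bare udp:// input, and Wireshark likewise shows such packets as plain UDP unless it is told to decode them as RTP (right-click a packet > Decode As... > RTP) or has seen the signalling. The usual way to receive a raw RTP stream with ffmpeg is to describe it in a small local SDP file and open that file instead; the name stream.sdp and the exact fields below are assumptions derived from the offer shown above:

v=0
o=- 0 0 IN IP4 127.0.0.1
s=RTP from Kurento
c=IN IP4 127.0.0.1
t=0 0
m=video 59618 RTP/AVP 101
a=rtpmap:101 H264/90000

ffmpeg -protocol_whitelist file,udp,rtp -i stream.sdp -vcodec copy stream.mp4

The audio stream could be added to the same file with its own m=audio line if needed.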
Here is the entire code (the hello-world tutorial, modified):
/*
* (C) Copyright 2014-2015 Kurento (http://kurento.org/)
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
function getopts(args, opts)
{
var result = opts.default || {};
args.replace(
new RegExp("([^?=&]+)(=([^&]*))?", "g"),
function($0, $1, $2, $3) { result[$1] = decodeURI($3); });
return result;
};
var args = getopts(location.search,
{
default:
{
ws_uri: 'wss://' + location.hostname + ':8433/kurento',
ice_servers: undefined
}
});
function setIceCandidateCallbacks(webRtcPeer, webRtcEp, onerror)
{
webRtcPeer.on('icecandidate', function(candidate) {
console.log("Local candidate:",candidate);
candidate = kurentoClient.getComplexType('IceCandidate')(candidate);
webRtcEp.addIceCandidate(candidate, onerror)
});
webRtcEp.on('OnIceCandidate', function(event) {
var candidate = event.candidate;
console.log("Remote candidate:",candidate);
webRtcPeer.addIceCandidate(candidate, onerror);
});
}
function setIceCandidateCallbacks2(webRtcPeer, rtpEp, onerror)
{
webRtcPeer.on('icecandidate', function(candidate) {
console.log("Localr candidate:",candidate);
candidate = kurentoClient.getComplexType('IceCandidate')(candidate);
rtpEp.addIceCandidate(candidate, onerror)
});
}
window.addEventListener('load', function()
{
console = new Console();
var webRtcPeer;
var pipeline;
var webRtcEpt;
var videoInput = document.getElementById('videoInput');
var videoOutput = document.getElementById('videoOutput');
var startButton = document.getElementById("start");
var stopButton = document.getElementById("stop");
startButton.addEventListener("click", function()
{
showSpinner(videoInput, videoOutput);
var options = {
localVideo: videoInput,
remoteVideo: videoOutput
};
if (args.ice_servers) {
console.log("Use ICE servers: " + args.ice_servers);
options.configuration = {
iceServers : JSON.parse(args.ice_servers)
};
} else {
console.log("Use freeice")
}
webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error)
{
if(error) return onError(error)
this.generateOffer(onOffer)
});
function onOffer(error, sdpOffer)
{
if(error) return onError(error)
kurentoClient(args.ws_uri, function(error, client)
{
if(error) return onError(error);
client.create("MediaPipeline", function(error, _pipeline)
{
if(error) return onError(error);
pipeline = _pipeline;
pipeline.create("WebRtcEndpoint", function(error, webRtc){
if(error) return onError(error);
webRtcEpt = webRtc;
setIceCandidateCallbacks(webRtcPeer, webRtc, onError)
webRtc.processOffer(sdpOffer, function(error, sdpAnswer){
if(error) return onError(error);
webRtcPeer.processAnswer(sdpAnswer, onError);
});
webRtc.gatherCandidates(onError);
webRtc.connect(webRtc, function(error){
if(error) return onError(error);
console.log("Loopback established");
});
});
pipeline.create("RtpEndpoint", function(error, rtp){
if(error) return onError(error);
//setIceCandidateCallbacks2(webRtcPeer, rtp, onError)
var cm_offer = "v=0\n" +
"o=- 3641290734 3641290734 IN IP4 127.0.0.1\n" +
"s=nginx\n" +
"c=IN IP4 127.0.0.1\n" +
"t=0 0\n" +
"m=audio 60820 RTP/AVP 0\n" +
"a=rtpmap:0 PCMU/8000\n" +
"a=recvonly\n" +
"m=video 59618 RTP/AVP 101\n" +
"a=rtpmap:101 H264/90000\n" +
"a=recvonly\n";
rtp.processOffer(cm_offer, function(error, cm_sdpAnswer){
if(error) return onError(error);
//webRtcPeer.processAnswer(cm_sdpAnswer, onError);
});
//rtp.gatherCandidates(onError);
webRtcEpt.connect(rtp, function(error){
if(error) return onError(error);
console.log("RTP endpoint connected to webRTC");
});
});
});
});
}
});
stopButton.addEventListener("click", stop);
function stop() {
if (webRtcPeer) {
webRtcPeer.dispose();
webRtcPeer = null;
}
if(pipeline){
pipeline.release();
pipeline = null;
}
hideSpinner(videoInput, videoOutput);
}
function onError(error) {
if(error)
{
console.error(error);
stop();
}
}
})
function showSpinner() {
for (var i = 0; i < arguments.length; i++) {
arguments[i].poster = 'img/transparent-1px.png';
arguments[i].style.background = "center transparent url('img/spinner.gif') no-repeat";
}
}
function hideSpinner() {
for (var i = 0; i < arguments.length; i++) {
arguments[i].src = '';
arguments[i].poster = 'img/webrtc.png';
arguments[i].style.background = '';
}
}
/**
* Lightbox utility (to display media pipeline image in a modal dialog)
*/
$(document).delegate('*[data-toggle="lightbox"]', 'click', function(event) {
event.preventDefault();
$(this).ekkoLightbox();
});