
Other articles (59)
-
Customize by adding your logo, banner, or background image
5 September 2013 — Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
MediaSPIP v0.2
21 June 2013 — MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
-
Making files available
14 April 2011 — By default, when first initialized, MediaSPIP does not let visitors download files, whether originals or the results of their transformation or encoding; it only lets them be viewed.
However, it is possible and easy to let visitors access these documents in various forms.
All of this happens in the skeleton's configuration page: go to the channel's administration area and choose in the navigation (...)
Sur d’autres sites (5736)
-
I received a "Connection refused" error while trying to stream live video over RTMP with FFmpeg
25 September 2020, by Femzy — I am working on a Node.js app that can send a camera stream to third-party platforms, i.e. Facebook and YouTube, using the RTMP protocol. It works well on my localhost, but once I deploy it to the server it only gives me errors. The error I get is shown below.
Here is my code:


server.js




const child_process = require('child_process'); // To be used later for running FFmpeg
const express = require('express');
const http = require('http');
const WebSocketServer = require('ws').Server;

const app = express();
const server = http.createServer(app).listen(4000, () => {
  console.log('Listening...');
});

// Serve static files out of the www directory, where we will put our HTML page
app.use(express.static(__dirname + '/www'));

const wss = new WebSocketServer({
  server: server
});

wss.on('connection', (ws, req) => {
  const rtmpUrl = 'rtmp://a.rtmp.youtube.com/live2/MyStreamId';
  console.log('Target RTMP URL:', rtmpUrl);

  // Launch FFmpeg to handle all appropriate transcoding, muxing, and RTMP.
  // If 'ffmpeg' isn't in your path, specify the full path to the ffmpeg binary.
  const ffmpeg = child_process.spawn('ffmpeg', [
    // Facebook requires an audio track, so we create a silent one here.
    // Remove this line, as well as `-shortest`, if you send audio from the browser.
    //'-f', 'lavfi', '-i', 'anullsrc',

    // FFmpeg will read input video from STDIN
    '-i', '-',

    // Because we're using a generated audio source which never ends,
    // specify that we'll stop at end of other input. Remove this line if you
    // send audio from the browser.
    //'-shortest',

    // If we're encoding H.264 in-browser, we can set the video codec to 'copy'
    // so that we don't waste any CPU and quality with unnecessary transcoding.
    // If the browser doesn't support H.264, set the video codec to 'libx264'
    // or similar to transcode it to H.264 here on the server.
    '-vcodec', 'copy',

    // AAC audio is required for Facebook Live. No browser currently supports
    // encoding AAC, so we must transcode the audio to AAC here on the server.
    '-acodec', 'aac',

    // FLV is the container format used in conjunction with RTMP
    '-f', 'flv',

    // The output RTMP URL.
    // For debugging, you could set this to a filename like 'test.flv', and play
    // the resulting file with VLC. Please also read the security considerations
    // later on in this tutorial.
    rtmpUrl
  ]);

  // If FFmpeg stops for any reason, close the WebSocket connection.
  ffmpeg.on('close', (code, signal) => {
    console.log('FFmpeg child process closed, code ' + code + ', signal ' + signal);
    ws.terminate();
  });

  // Handle STDIN pipe errors by logging to the console.
  // These errors most commonly occur when FFmpeg closes and there is still
  // data to write. If left unhandled, the server will crash.
  ffmpeg.stdin.on('error', (e) => {
    console.log('FFmpeg STDIN Error', e);
  });

  // FFmpeg outputs all of its messages to STDERR. Let's log them to the console.
  ffmpeg.stderr.on('data', (data) => {
    console.log('FFmpeg STDERR:', data.toString());
  });

  // When data comes in from the WebSocket, write it to FFmpeg's STDIN.
  ws.on('message', (msg) => {
    console.log('DATA', msg);
    ffmpeg.stdin.write(msg);
  });

  // If the client disconnects, stop FFmpeg.
  ws.on('close', (e) => {
    ffmpeg.kill('SIGINT');
  });
});







In the server.js file I create a WebSocket server to receive stream data from the client side, then use FFmpeg to forward that stream to YouTube via the RTMP URL.


Here is my client.js code




const ws = new WebSocket('wss://my-websocket-server.com');

ws.addEventListener('open', (e) => {
  console.log('WebSocket Open', e);
  drawVideosToCanvas();
  mediaStream = getMixedVideoStream(); // 30 FPS
  mediaRecorder = new MediaRecorder(mediaStream, {
    mimeType: 'video/webm;codecs=h264',
    //videoBitsPerSecond : 3000000000
    bitsPerSecond: 6000000
  });

  // Send each recorded chunk to the server over the WebSocket.
  mediaRecorder.addEventListener('dataavailable', (e) => {
    ws.send(e.data);
  });

  mediaRecorder.onstop = function () {
    ws.close(); // close the socket when recording stops
    isRecording = false;
    actionBtn.textContent = 'Start Streaming';
    actionBtn.onclick = startRecording;
  };

  mediaRecorder.onstart = function () {
    isRecording = true;
    actionBtn.textContent = 'Stop Streaming';
    actionBtn.onclick = stopRecording;
    screenShareBtn.onclick = startSharing;
    screenShareBtn.disabled = false;
  };
  //mediaRecorder.addEventListener('stop', ws.close.bind(ws));

  mediaRecorder.start(1000); // Start recording, and dump data every second
});







In my client.js file I capture the user's camera and open the WebSocket connection to send the data to the server. Everything works fine on localhost, except when I deploy it to the live server.
I am wondering if there is a bad configuration on the server. The server runs CentOS 7.8 and the app sits behind Apache.
Here is how I configured the virtual host for the WebSocket domain:




ServerName my-websocket.com

 RewriteEngine on
 RewriteCond %{HTTP:Upgrade} websocket [NC]
 RewriteCond %{HTTP:Connection} upgrade [NC]
 RewriteRule .* "ws://127.0.0.1:3000/$1" [P,L]

 ProxyPass "/" "http://127.0.0.1:3000/$1"
 ProxyPassReverse "/" "http://127.0.0.1:3000/$1"
 ProxyRequests off
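
One thing worth verifying, though it is not mentioned in the question: the rewrite-to-ws:// rule above only takes effect if Apache has the relevant proxy modules loaded. A quick hedged check, assuming shell access to the CentOS server:

# List loaded Apache modules; the config above needs mod_rewrite,
# mod_proxy, mod_proxy_http and mod_proxy_wstunnel.
httpd -M | grep -E 'rewrite|proxy'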







I don't know much about server configuration, but I thought the configuration might have something to do with why FFmpeg cannot open a connection to the RTMP endpoint from the server.
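
Since the "Connection refused" comes from the outbound TCP connection FFmpeg itself makes to port 1935, not from Apache (which only proxies the inbound WebSocket), a useful first step is to test raw outbound connectivity from the server. The commands below are a hedged diagnostic sketch, not part of the original question; they assume bash and firewalld on the CentOS box:

# Test raw outbound TCP to YouTube's RTMP ingest port using bash's /dev/tcp:
timeout 5 bash -c 'exec 3<>/dev/tcp/a.rtmp.youtube.com/1935' \
  && echo "port 1935 reachable" \
  || echo "port 1935 blocked or filtered"

# If the test fails, look for outbound firewall rules or a provider-level
# block on port 1935 (common on shared hosting):
sudo firewall-cmd --list-all
sudo iptables -L OUTPUT -n

If the /dev/tcp test also fails, the problem is the host's network policy rather than anything in the Node.js or Apache configuration.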


Here is the error I am getting:




FFmpeg STDERR: Input #0, lavfi, from 'anullsrc':
  Duration:
FFmpeg STDERR: N/A, start: 0.000000, bitrate: 705 kb/s
    Stream #0:0: Audio: pcm_u8, 44100 Hz, stereo, u8, 705 kb/s

DATA <Buffer 1a>
DATA <Buffer 45 df a3 42 86 81 01 f7 f2 04 f3 08 82 88 6d 61 74 72 6f 73 6b 61 87 04 42 85 02 18 53 80 67 ff ... 53991 more bytes>
DATA <Buffer 40 c1 81 00 f0 80 7b 83 3e 3b 07 d6 4e 1c 11 b4 7f cb 5e 68 9b d5 2a e3 06 c6 f3 94 ff 29 16 b2 60 04 ac 37 fb 1a 15 ea 39 a0 cd 02 b8 ... 56206 more bytes>
FFmpeg STDERR: Input #1, matroska,webm, from 'pipe:':
  Metadata:
    encoder :
FFmpeg STDERR: Chrome
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #1:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
    Stream #1:1(eng): Video: h264 (Constrained Baseline), yuv420p(progressive), 1366x768, SAR 1:1 DAR 683:384, 30.30 fps, 30 tbr, 1k tbn, 60 tbc (default)

FFmpeg STDERR: [tcp @ 0xe5fac0] Connection to tcp://a.rtmp.youtube.com:1935 failed (Connection refused), trying next address
[rtmp @ 0xe0fb80] Cannot open connection tcp://a.rtmp.youtube.com:1935

FFmpeg STDERR: rtmp://a.rtmp.youtube.com/live2/mystreamid: Network is unreachable

FFmpeg child process closed, code 1, signal null







I would really appreciate some insight into what may be causing this issue or what I can do to solve it. Thanks in advance.


-
FFMpeg volume goes up on last audio segment [duplicate]
10 September 2020, by Paul — Using the 2020-06-17 FFmpeg build by johnvansickle, I have the following issue:


I am trying to combine one video (without audio) and two audio fragments. Both audio fragments have identical volume. However, when I mix it all together, the last audio fragment is somehow played louder, no matter in which order or how many audio fragments I assemble.


Below is the command I execute against ffmpeg, with line breaks added for readability:


-hide_banner -loglevel warning -y 
-i video.mp4 
-i audio-1.mp3 
-i audio-2.mp3 
-filter_complex [1]adelay=0[s1];[2]adelay=4000[s2];[s1][s2]amix=2[combined] 
-c:v copy dest.mp4
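
For context on why this happens: amix scales each input down by the number of currently active inputs and renormalizes the volume when one of them ends, so once the earlier fragment finishes, the remaining one is scaled back up and sounds louder. A hedged workaround sketch (untested against these exact files) is to stretch amix's documented dropout_transition option so the renormalization ramp never completes within the clip, and to map the streams explicitly:

-hide_banner -loglevel warning -y
-i video.mp4
-i audio-1.mp3
-i audio-2.mp3
-filter_complex "[1]adelay=0[s1];[2]adelay=4000[s2];[s1][s2]amix=inputs=2:dropout_transition=100000[combined]"
-map 0:v -map "[combined]"
-c:v copy dest.mp4

Sufficiently recent FFmpeg builds also accept amix=inputs=2:normalize=0, which disables the input scaling entirely, but that option is not available in every 2020-era build.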



-
MPEG-DASH Livestreaming using ffmpeg, avconv, MP4Box and Dashjs
28 April 2017, by Sushant Mongia — I'm working on delivering live streaming with DASH capabilities. Long story short, it's a very crude testbed setup, so I might be off the mark in some respects. I'm also posting this as a simple setup for the community and for people out there struggling to find a live-streaming-with-DASH tutorial.
Setup:
OS: Ubuntu 16.04
Encoding tools:
ffmpeg: to record a live stream from the desktop webcam in MPEG-2 format
avconv: to convert the MPEG-2 file to MPEG-4
MP4Box: to DASH it, i.e. produce the .mpd, some conf files, the seg_init and the segments
Dash.js: reference client 2.4.1
Server: Apache

Process:
I've written 3 bash scripts, each basically an infinite while loop containing the ffmpeg, avconv and MP4Box commands respectively. I first run the ffmpeg script, which records video from the desktop webcam, and then the avconv script, which kills the ffmpeg command and converts the file from MPEG-2 to MPEG-4. Since the ffmpeg command is in an infinite while loop, it restarts. Then I run the MP4Box command, which DASH-es the avconv command's output. Everything is then served to the Dash.js client, and pretty much the whole setup repeats every 5 seconds.

Commands:
ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 -f mpegts -codec:v mpeg1video -s 640x480 -b:v 1000k -bf 0 livestream
avconv -i livestream out.mp4
MP4Box (run in a loop): MP4Box -dash-live 4000 -fps 24 -frag 6000 -profile dashavc264:live -dynamic -mpd-refresh 5000 -dash-ctx dashtest.txt -time-shift -1 -inter 0 -segment-name output-seg -bs-switching no out.mp4
Problem:
ffmpeg sends a chunk of 5 seconds (due to the sleep command in my avconv bash script), and MP4Box reads that 5-second chunk and loops it. So when the next chunk comes in, newer segments are produced, but the player keeps playing the older segments, typically just the very first few, in a loop.

Questions:
1) Am I missing some core concept here? Are the commands and their respective attributes set with the right parameters and values?
2) I believe there should be a way to pipeline these processes better; should I be looking into writing a Python script, maybe?
Happy to provide more info! Cheers
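
On question 2: rather than keeping three tools in lockstep with sleep-synchronized bash loops, one option worth exploring is to let ffmpeg capture, encode, and segment in a single process using its built-in dash muxer, which removes the file handoff that leaves the player looping old segments. This is a hedged sketch, assuming a more recent FFmpeg (roughly 4.1+) than the 2017-era setup, since some of these dash muxer options did not exist earlier; the output path under the Apache document root is only an example:

ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0
-c:v libx264 -b:v 1000k -bf 0 -g 50 -keyint_min 50
-f dash -seg_duration 4 -window_size 5 -use_template 1 -use_timeline 1
-streaming 1 -remove_at_exit 1
/var/www/html/live/manifest.mpd

The Dash.js client can then point directly at manifest.mpd as served by Apache, and ffmpeg itself keeps the MPD and the sliding window of segments up to date.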