
Other articles (99)
-
Multilang: improving the interface for multilingual blocks
18 February 2011. Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once activated, MediaSPIP init automatically applies a preconfiguration so that the new feature is immediately operational. No separate configuration step is required for this. -
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Managing rights to create and edit objects
8 February 2011. By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;
On other sites (12380)
-
Permissions issue with Python and ffmpeg on a Mac
13 April 2020, by EventHorizon. I am fairly new to Python (4 weeks), and I have been struggling with this all day.



I am using MacOS 10.13, Python 3.7 via Anaconda Navigator 1.9.12 and Spyder 4.0.1.



Somehow (only a noob, remember) I had 2 Anaconda environments. I don't do production code, just research, so I figured I would make life simple and just use the base environment. I deleted the other environment.



I had previously got FFmpeg working and was able to do frame grabs, build mpeg animations, and convert them to gifs for blog posts and such. I had FFmpeg installed in the directories associated with the deleted environment, so it went away.



No worries, I got the git URL, used Terminal to install it in /opt/anaconda3/bin. It's all there and I can run FFmpeg from the Terminal.



My problem: when I attempt to run a module that previously worked fine, I get the following message:



[Errno 13] Permission denied: '/opt/anaconda3/bin/ffmpeg'



In my module I set the default location of FFmpeg: plt.rcParams['animation.ffmpeg_path'] = '/opt/anaconda3/bin/ffmpeg'
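Before blaming subprocess, it is worth confirming that the configured path really points at an executable file. The helper below is added for illustration and is not part of the original module:

```python
import os

FFMPEG_PATH = '/opt/anaconda3/bin/ffmpeg'

def is_runnable(path):
    """True only if path is a regular file with the execute bit set."""
    return os.path.isfile(path) and os.access(path, os.X_OK)

# If this prints False, Popen will raise exactly the Errno 13 above.
print(is_runnable(FFMPEG_PATH))
```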



In my module I have the following lines:



writer = animation.FFMpegWriter(fps=frameRate, metadata=metadata)
writer.setup(fig, "animation.mp4", 100)




This calls matplotlib's 'animation.py', which runs the following:



def setup(self, fig, outfile, dpi=None):
    '''
    Perform setup for writing the movie file.

    Parameters
    ----------
    fig : `~matplotlib.figure.Figure`
        The figure object that contains the information for frames
    outfile : str
        The filename of the resulting movie file
    dpi : int, optional
        The DPI (or resolution) for the file. This controls the size
        in pixels of the resulting movie file. Default is fig.dpi.
    '''
    self.outfile = outfile
    self.fig = fig
    if dpi is None:
        dpi = self.fig.dpi
    self.dpi = dpi
    self._w, self._h = self._adjust_frame_size()

    # Run here so that grab_frame() can write the data to a pipe. This
    # eliminates the need for temp files.
    self._run()

def _run(self):
    # Uses subprocess to call the program for assembling frames into a
    # movie file. *args* returns the sequence of command line arguments
    # from a few configuration options.
    command = self._args()
    _log.info('MovieWriter.run: running command: %s', command)
    PIPE = subprocess.PIPE
    self._proc = subprocess.Popen(
        command, stdin=PIPE, stdout=PIPE, stderr=PIPE,
        creationflags=subprocess_creation_flags)




Everything works fine up to the last line (i.e. 'command' looks like a well-formatted FFmpeg command line, and PIPE returns -1), but subprocess.Popen() bombs out with the error message above.



I have tried changing file permissions, taking a sledgehammer approach and setting everything in /opt/anaconda3/bin/ffmpeg to 777 (read, write, and execute), but that doesn't seem to make any difference. I really am clueless when it comes to Apple's OS, file permissions, etc. Any suggestions?
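One way to narrow this down is to try executing the file directly from Python and see which error actually comes back. The probe helper below is added for illustration, not part of the original module. Note also that cloning FFmpeg's git repository gives you source code; unless it was configured and built, the file at that path may not be an executable binary at all, which would also surface as a permissions-style failure:

```python
import subprocess

def probe(path):
    """Try to execute the binary with -version and report what happened."""
    try:
        out = subprocess.run([path, '-version'], capture_output=True, text=True)
        lines = (out.stdout or out.stderr).splitlines()
        return lines[0] if lines else 'ran, but produced no output'
    except PermissionError:
        return 'PermissionError: execute bit missing (chmod +x) or not a real binary'
    except FileNotFoundError:
        return 'FileNotFoundError: nothing exists at that path'

print(probe('/opt/anaconda3/bin/ffmpeg'))
```

If this reports a version line, the path is fine and the problem lies elsewhere; if it reports PermissionError even after chmod, the file is likely not a compiled binary.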


-
Convert Webrtc track stream to URL (RTSP/UDP/RTP/Http) in Video tag
19 July 2020, by Zeeshan Younis. I am new to WebRTC and have set up a client/server connection: on the client I choose a webcam and post its stream to the server as a track, and on the server I receive that track and assign its stream to a video element's source. Everything works fine so far, but now I want to add AI (artificial intelligence) object detection, and the AI needs the stream exposed as a URL (UDP/RTSP/RTP, etc.). I don't know how to convert the track stream into such a URL.
There are packages such as https://ffmpeg.org/ and RTP-to-WebRTC bridges, but I am using Node.js, Socket.io and WebRTC. Below you can see my client- and server-side code for posting and receiving the stream; I am following this GitHub project: https://github.com/Basscord/webrtc-video-broadcast.
My main concern is exposing the track as a URL for the video tag. Is this possible? Any suggestions or help would be appreciated.


Server.js


This is the Node.js server code:



const express = require("express");
const app = express();

let broadcaster;
const port = 4000;

const http = require("http");
const server = http.createServer(app);

const io = require("socket.io")(server);
app.use(express.static(__dirname + "/public"));

io.sockets.on("error", e => console.log(e));
io.sockets.on("connection", socket => {
  socket.on("broadcaster", () => {
    broadcaster = socket.id;
    socket.broadcast.emit("broadcaster");
  });
  socket.on("watcher", () => {
    socket.to(broadcaster).emit("watcher", socket.id);
  });
  socket.on("offer", (id, message) => {
    socket.to(id).emit("offer", socket.id, message);
  });
  socket.on("answer", (id, message) => {
    socket.to(id).emit("answer", socket.id, message);
  });
  socket.on("candidate", (id, message) => {
    socket.to(id).emit("candidate", socket.id, message);
  });
  socket.on("disconnect", () => {
    socket.to(broadcaster).emit("disconnectPeer", socket.id);
  });
});
server.listen(port, () => console.log(`Server is running on port ${port}`));
Broadcast.js
This is the client code that emits the stream (track):



const peerConnections = {};
const config = {
  iceServers: [
    {
      urls: ["stun:stun.l.google.com:19302"]
    }
  ]
};

const socket = io.connect(window.location.origin);

socket.on("answer", (id, description) => {
  peerConnections[id].setRemoteDescription(description);
});

socket.on("watcher", id => {
  const peerConnection = new RTCPeerConnection(config);
  peerConnections[id] = peerConnection;

  let stream = videoElement.srcObject;
  stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));

  peerConnection.onicecandidate = event => {
    if (event.candidate) {
      socket.emit("candidate", id, event.candidate);
    }
  };

  peerConnection
    .createOffer()
    .then(sdp => peerConnection.setLocalDescription(sdp))
    .then(() => {
      socket.emit("offer", id, peerConnection.localDescription);
    });
});

socket.on("candidate", (id, candidate) => {
  peerConnections[id].addIceCandidate(new RTCIceCandidate(candidate));
});

socket.on("disconnectPeer", id => {
  peerConnections[id].close();
  delete peerConnections[id];
});

window.onunload = window.onbeforeunload = () => {
  socket.close();
};

// Get camera and microphone
const videoElement = document.querySelector("video");
const audioSelect = document.querySelector("select#audioSource");
const videoSelect = document.querySelector("select#videoSource");

audioSelect.onchange = getStream;
videoSelect.onchange = getStream;

getStream()
  .then(getDevices)
  .then(gotDevices);

function getDevices() {
  return navigator.mediaDevices.enumerateDevices();
}

function gotDevices(deviceInfos) {
  window.deviceInfos = deviceInfos;
  for (const deviceInfo of deviceInfos) {
    const option = document.createElement("option");
    option.value = deviceInfo.deviceId;
    if (deviceInfo.kind === "audioinput") {
      option.text = deviceInfo.label || `Microphone ${audioSelect.length + 1}`;
      audioSelect.appendChild(option);
    } else if (deviceInfo.kind === "videoinput") {
      option.text = deviceInfo.label || `Camera ${videoSelect.length + 1}`;
      videoSelect.appendChild(option);
    }
  }
}

function getStream() {
  if (window.stream) {
    window.stream.getTracks().forEach(track => {
      track.stop();
    });
  }
  const audioSource = audioSelect.value;
  const videoSource = videoSelect.value;
  const constraints = {
    audio: { deviceId: audioSource ? { exact: audioSource } : undefined },
    video: { deviceId: videoSource ? { exact: videoSource } : undefined }
  };
  return navigator.mediaDevices
    .getUserMedia(constraints)
    .then(gotStream)
    .catch(handleError);
}

function gotStream(stream) {
  window.stream = stream;
  audioSelect.selectedIndex = [...audioSelect.options].findIndex(
    option => option.text === stream.getAudioTracks()[0].label
  );
  videoSelect.selectedIndex = [...videoSelect.options].findIndex(
    option => option.text === stream.getVideoTracks()[0].label
  );
  videoElement.srcObject = stream;
  socket.emit("broadcaster");
}

function handleError(error) {
  console.error("Error: ", error);
}

RemoteServer.js
This code receives the track and assigns it to the video tag:



let peerConnection;
const config = {
  iceServers: [
    {
      urls: ["stun:stun.l.google.com:19302"]
    }
  ]
};

const socket = io.connect(window.location.origin);
const video = document.querySelector("video");

socket.on("offer", (id, description) => {
  peerConnection = new RTCPeerConnection(config);
  peerConnection
    .setRemoteDescription(description)
    .then(() => peerConnection.createAnswer())
    .then(sdp => peerConnection.setLocalDescription(sdp))
    .then(() => {
      socket.emit("answer", id, peerConnection.localDescription);
    });
  peerConnection.ontrack = event => {
    video.srcObject = event.streams[0];
  };
  peerConnection.onicecandidate = event => {
    if (event.candidate) {
      socket.emit("candidate", id, event.candidate);
    }
  };
});

socket.on("candidate", (id, candidate) => {
  peerConnection
    .addIceCandidate(new RTCIceCandidate(candidate))
    .catch(e => console.error(e));
});

socket.on("connect", () => {
  socket.emit("watcher");
});

socket.on("broadcaster", () => {
  socket.emit("watcher");
});

socket.on("disconnectPeer", () => {
  peerConnection.close();
});

window.onunload = window.onbeforeunload = () => {
  socket.close();
};
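The code above only relays a track between browsers. To hand that media to an AI process as a URL, one common bridge (not part of this code, and a sketch under assumptions) is to forward the received track's RTP packets to a local UDP port and let ffmpeg consume them via an SDP description. Assuming VP8 video forwarded to port 5004 with payload type 96 (both are assumptions), the SDP could look like:

```
v=0
o=- 0 0 IN IP4 127.0.0.1
s=WebRTC bridge
c=IN IP4 127.0.0.1
t=0 0
m=video 5004 RTP/AVP 96
a=rtpmap:96 VP8/90000
```

ffmpeg -protocol_whitelist file,udp,rtp -i bridge.sdp ... could then re-publish the stream in whatever form the detector accepts. How the RTP gets out of the browser/Node pipeline in the first place is the hard part this sketch leaves open; that is what the RTP-to-WebRTC bridges mentioned above are for.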
-
Measuring success for your SEO content
20 March 2020, by Jake Thornton — Uncategorized