
Other articles (47)
-
The plugin: Mutualisation management
2 March 2010 — The Mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its aim is to provide a pure-SPIP solution to replace this older one.
Basic installation
Install the SPIP files on the server.
Then add the "mutualisation" plugin at the root of the site, as described here.
Customise the central mes_options.php file as you wish. As an example, here is the one used by the mediaspip.net platform:
<?php (...)
-
Installation in farm mode
4 February 2011 — Farm ("ferme") mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some understanding of SPIP's mechanisms, unlike the standalone version, which does not really require any specific knowledge, since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as for the installation (...)
-
The plugin: Podcasts
14 July 2010 — The problem of podcasting is, once again, one that reveals the state of standardisation of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly geared towards iTunes, whose spec is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software.
File types supported in the feeds
Apple's format only allows the following types in its feeds: .mp3 audio/mpeg; .m4a audio/x-m4a; .mp4 (...)
On other sites (7620)
-
AppRTC: Google's WebRTC test app and its parameters
23 July 2014, by silvia — If you've been interested in WebRTC and haven't lived under a rock, you will know about Google's open source testing application for WebRTC: AppRTC.
When you go to the site, a new video conferencing room is automatically created for you and you can share the provided URL with somebody else and thus connect (make sure you’re using Google Chrome, Opera or Mozilla Firefox).
We’ve been using this application forever to check whether any issues with our own WebRTC applications are due to network connectivity issues, firewall issues, or browser bugs, in which case AppRTC breaks down, too. Otherwise we’re pretty sure to have to dig deeper into our own code.
Now, AppRTC creates a pretty poor quality video conference, because the browsers use a 640×480 resolution by default. However, there are many query parameters that can be added to the AppRTC URL through which the connection can be manipulated.
Here are my favourite parameters:
- hd=true : turns on high definition, ie. minWidth=1280,minHeight=720
- stereo=true : turns on stereo audio
- debug=loopback : connect to yourself (great to check your own firewalls)
- tt=60 : by default, the channel is closed after 30min – this gives you 60 (max 1440)
For example, here's what a stereo, HD loopback test looks like: https://apprtc.appspot.com/?r=82313387&hd=true&stereo=true&debug=loopback.
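If you generate such URLs a lot, a small helper keeps the query strings tidy. This is just a convenience sketch of my own (the apprtc_url function name and the room id are arbitrary, not anything AppRTC provides):

from urllib.parse import urlencode

APPRTC_BASE = "https://apprtc.appspot.com/"

def apprtc_url(room, **params):
    """Build an AppRTC URL from a room id and any of the query parameters above."""
    query = {"r": room}
    query.update(params)
    return APPRTC_BASE + "?" + urlencode(query)

# the stereo, HD loopback example from above
print(apprtc_url("82313387", hd="true", stereo="true", debug="loopback"))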
These are not the only available parameters, though. Here are some others that you may find interesting for some more in-depth geekery:
- ss=[stunserver] : in case you want to test a different STUN server to the default Google ones
- ts=[turnserver] : in case you want to test a different TURN server to the default Google ones
- tp=[password] : password for the TURN server
- audio=true&video=false : audio-only call
- audio=false : video-only call
- audio=googEchoCancellation=false,googAutoGainControl=true : disable echo cancellation and enable gain control
- audio=googNoiseReduction=true : enable noise reduction (more Google-specific parameters)
- asc=ISAC/16000 : preferred audio send codec is ISAC at 16kHz (use on Android)
- arc=opus/48000 : preferred audio receive codec is opus at 48kHz
- dtls=false : disable datagram transport layer security
- dscp=true : enable DSCP
- ipv6=true : enable IPv6
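Assuming the apprtc_url helper sketched earlier, an audio-only call pointed at your own STUN server could then be requested like this (stun.example.org is a placeholder host; check the exact ss value format against the parameters file linked below):

# audio-only call that exercises a custom STUN server (placeholder host)
print(apprtc_url("myroom", audio="true", video="false", ss="stun.example.org"))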
AppRTC’s source code is available here. And here is the file with the parameters (in case you want to check if they have changed).
Have fun playing with the main and always up-to-date WebRTC application : AppRTC.
UPDATE 12 May 2014
AppRTC now also supports the following bitrate controls :
- arbr=[bitrate] : set audio receive bitrate
- asbr=[bitrate] : set audio send bitrate
- vsbr=[bitrate] : set video send bitrate
- vrbr=[bitrate] : set video receive bitrate
Example usage : https://apprtc.appspot.com/?r=&asbr=128&vsbr=4096&hd=true
-
How to obtain time markers for video splitting using python/OpenCV
10 November 2018, by Bleddyn Raw-Rees — I'm working on my MSc project, which is researching automated deletion of low-value content in digital file stores. I'm specifically looking at the sort of long shots that often occur in natural history filming, whereby a static camera is left rolling in order to capture the rare snow leopard or whatever. These shots may only have some 60s of useful content with perhaps several hours of worthless content either side.
As a first step I have a simple motion detection program from Adrian Rosebrock’s tutorial [http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/#comment-393376]. Next I intend to use FFMPEG to split the video.
What I would like help with is how to get in and out points based on the first and last frames in which motion is detected in the video (one possible approach is sketched after the code below).
Here is the code, should you wish to see it...
# import the necessary packages
import argparse
import datetime
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
    camera = cv2.VideoCapture(0)
    time.sleep(0.25)

# otherwise, we are reading from a video file
else:
    camera = cv2.VideoCapture(args["video"])

# initialize the first frame in the video stream
firstFrame = None

# loop over the frames of the video
while True:
    # grab the current frame and initialize the occupied/unoccupied text
    (grabbed, frame) = camera.read()
    text = "Unoccupied"

    # if the frame could not be grabbed, then we have reached the end of the video
    if not grabbed:
        break

    # resize the frame, convert it to grayscale, and blur it
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # if the first frame is None, initialize it
    if firstFrame is None:
        firstFrame = gray
        continue

    # compute the absolute difference between the current frame and first frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in holes, then find contours
    # on the thresholded image
    # (the three-value return below is OpenCV 3.x; OpenCV 4.x returns only two values)
    thresh = cv2.dilate(thresh, None, iterations=2)
    (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue

        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"

    # draw the text and timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
                (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

    # show the frame and record if the user presses a key
    cv2.imshow("Security Feed", frame)
    cv2.imshow("Thresh", thresh)
    cv2.imshow("Frame Delta", frameDelta)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key is pressed, break from the loop
    if key == ord("q"):
        break

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()
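As for the question itself, here is a minimal sketch of one possible approach: re-run the same frame-differencing test over the whole file without any display, record the timestamps of the first and last frames that contain motion, and hand those to ffmpeg for the split. The helper name motion_bounds, the file name leopard.mp4 and the fallback frame rate are illustrative assumptions, and note that with -c copy ffmpeg will cut on the nearest keyframes rather than at the exact timestamps.

import subprocess
import cv2
import imutils

def motion_bounds(video_path, min_area=500):
    """Return (first, last) timestamps in seconds at which motion is detected."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # assumed fallback if the FPS is unreadable
    firstFrame, first_hit, last_hit, index = None, None, None, 0
    while True:
        grabbed, frame = cap.read()
        if not grabbed:
            break
        # same preprocessing as in the detection code above
        gray = cv2.cvtColor(imutils.resize(frame, width=500), cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)
        if firstFrame is None:
            firstFrame = gray
            index += 1
            continue
        thresh = cv2.threshold(cv2.absdiff(firstFrame, gray), 25, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)
        cnts = imutils.grab_contours(
            cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE))
        if any(cv2.contourArea(c) >= min_area for c in cnts):
            t = index / fps
            if first_hit is None:
                first_hit = t  # first frame with motion
            last_hit = t       # keeps updating until the last frame with motion
        index += 1
    cap.release()
    return first_hit, last_hit

start, end = motion_bounds("leopard.mp4")
if start is not None:
    # split without re-encoding; cuts land on the nearest keyframes
    subprocess.run(["ffmpeg", "-i", "leopard.mp4", "-ss", str(start), "-to", str(end),
                    "-c", "copy", "leopard_trimmed.mp4"])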
-
How to send audio or video as packets over UDP
20 January 2019, by Wei Wen — How can I send part of the video and audio from an MP4 file as packets over UDP from the server? The client will play back the part of the packets it receives.

import java.awt.Dimension;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.math.BigInteger;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.ShortBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import javax.imageio.ImageIO;
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.swing.JTextArea;

import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.FrameGrabber;
import org.bytedeco.javacv.Java2DFrameConverter;

import Enum.EType.ClientState;
import View.SingleDisplayWindow;

import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;

import javax.crypto.BadPaddingException;
import javax.crypto.IllegalBlockSizeException;
import javax.crypto.NoSuchPaddingException;

public class SCon {
    private final static int PORT = 8888;
    private final JTextArea TEXT_AREA;
    private volatile SingleDisplayWindow DISPLAY;

    /////
    private final String BD_USER_NAME, DB_PASSWORD;
    private Database database;

    private boolean isRunning;
    private RSA serverRSA, clientRSA;
    private int keyIndex, typeID = 0;
    private String mediatype = "";
    private ArrayList<SHandler> sHandlers;
    // Note: RSA, SHandler, Database, StreamFile, ciphers[], sessionKeys[] and videoFile
    // belong to the rest of the project and are not shown in this excerpt.

    private FileStreamingThread fileStreamingThread;
    private VideoStreamingThread videoStreamingThread;
    private BroadcastThread broadcastThread;
    private ConnectThread connectThread;

    private volatile static byte[] currentVideoFrame = new byte[0], currentAudioFrame = new byte[0]; // current image / music

    public void run() {
        startServer();
        isRunning = true;

        fileStreamingThread = new FileStreamingThread(videoFile);
        videoStreamingThread = new VideoStreamingThread(videoFile);
        // CountDownLatch latch = new CountDownLatch(1);
        fileStreamingThread.start();
        videoStreamingThread.start();
        // latch.countDown();

        broadcastThread = new BroadcastThread();
        broadcastThread.start();
        connectThread = new ConnectThread();
        connectThread.start();
    }

    public void stop() {
        isRunning = false;
        try {
            new Socket("localhost", PORT);
        } catch (IOException e) {
            e.printStackTrace();
        }
        while (fileStreamingThread.isAlive()) {
        }
        while (broadcastThread.isAlive()) {
        }
        while (connectThread.isAlive()) {
        }
        for (SHandler sHandler : sHandlers) {
            sHandler.connectionClose();
        }
        sHandlers.clear();
        DISPLAY.dispose();
        TEXT_AREA.append("\nServer stop\n");
    }

    private class VideoStreamingThread extends Thread {
        private FFmpegFrameGrabber grabber;      // Used to extract frames from the video file
        private Java2DFrameConverter converter; // Used to convert frames to images
        private int curIndex;                    // Current key index

        public VideoStreamingThread(String video_file) {
            videoFile = video_file;
            grabber = new FFmpegFrameGrabber(videoFile);
            converter = new Java2DFrameConverter();
            try {
                grabber.restart();
            } catch (FrameGrabber.Exception e) {
                e.printStackTrace();
            }
            curIndex = keyIndex;
        }

        public void run() {
            try {
                while (isRunning) {
                    curIndex = keyIndex;
                    Frame frame = null;
                    System.out.println("v1");
                    if ((frame = grabber.grab()) != null) { // Grab next frame from video file
                        if (frame.image != null) { // image frame
                            BufferedImage bi = converter.convert(frame); // convert frame to image

                            // Convert BufferedImage to byte[]
                            ByteArrayOutputStream baos = new ByteArrayOutputStream();
                            ImageIO.write(bi, "jpg", baos);

                            // Encrypt data and store as the current image of byte[] type
                            currentVideoFrame = ciphers[curIndex].doFinal(baos.toByteArray());

                            DISPLAY.setSize(new Dimension(bi.getWidth(), bi.getHeight()));
                            DISPLAY.updateImage(bi); // Display image
                            // Thread.sleep((long) (999 / grabber.getFrameRate()));

                            typeID = 1;
                            mediatype = grabber.getFormat();
                        }
                    } else {
                        grabber.restart(); // Restart when reached end of video
                    }
                }
                grabber.close();
            } catch (IOException e) {
                e.printStackTrace();
            } catch (IllegalBlockSizeException e) {
                e.printStackTrace();
            } catch (BadPaddingException e) {
                e.printStackTrace();
            }
            // catch (InterruptedException e) { e.printStackTrace(); }
        }

        public synchronized int getCurKeyIndex() {
            return curIndex;
        }

        public synchronized void getVideoFile(String video_file) {
            videoFile = video_file;
            grabber = new FFmpegFrameGrabber(video_file);
            converter = new Java2DFrameConverter();
            try {
                grabber.release();
                grabber.restart();
            } catch (FrameGrabber.Exception e) {
                e.printStackTrace();
            }
        }
    }

    private class FileStreamingThread extends Thread {
        // Audio is read by its own FFmpegFrameGrabber, independent of the video thread's grabber
        private FFmpegFrameGrabber grabber; // Used to extract samples from the video file
        private int curIndex;               // Current key index

        public FileStreamingThread(String video_file) {
            videoFile = video_file;
            grabber = new FFmpegFrameGrabber(videoFile);
            try {
                grabber.restart();
            } catch (FrameGrabber.Exception e) {
                e.printStackTrace();
            }
            curIndex = keyIndex;
        }

        public void run() {
            try {
                while (isRunning) {
                    curIndex = keyIndex;
                    Frame frame = null;
                    System.out.println("a2");
                    if ((frame = grabber.grabSamples()) != null) { // Grab next frame from video file
                        if (frame.samples != null) { // audio frame
                            // Encrypt audio
                            ShortBuffer channelSamplesShortBuffer = (ShortBuffer) frame.samples[0];
                            channelSamplesShortBuffer.rewind();
                            ByteBuffer outBuffer = ByteBuffer.allocate(channelSamplesShortBuffer.capacity() * 2);
                            for (int i = 0; i < channelSamplesShortBuffer.capacity(); i++) {
                                short val = channelSamplesShortBuffer.get(i);
                                outBuffer.putShort(val);
                            }
                            AudioFileFormat audiofileFormat = new AudioFileFormat(null, null, typeID);
                            AudioFormat audioFormat = new AudioFormat(44100, 16, 2, true, true);
                            // System.out.println(grabber.getSampleFormat());

                            // Encrypt data and store as the current audio of byte[] type
                            currentAudioFrame = ciphers[curIndex].doFinal(outBuffer.array());

                            DISPLAY.updateAudio(outBuffer.array(), grabber.getFormat()); // Play audio
                            // Thread.sleep((long) (1000 / grabber.getSampleRate()));
                            // Thread.sleep((long) (1000 / grabber.getAudioBitrate()));
                            // System.out.println(grabber.getFormat());
                            // System.out.println("audioInputStream.getFormat() = " + grabber.getFormat());
                            // System.out.println("Sample.length = " + grabber.length);
                            // System.out.println("FrameLength: " + grabber.getFrameLength());
                            // System.out.println("Frame Size: " + grabber.getFrameSize());
                            // System.out.println("SampleSizeInBits: " + grabber.getSampleSizeInBits());
                            // System.out.println("Frame Rate: " + grabber.getFrameRate());
                            // System.out.println("Sample Rate: " + grabber.getSampleRate());
                            // System.out.println("Encoding: " + grabber.getEncoding());
                            // System.out.println("Channels: " + grabber.getChannels());
                            // AudioFormat audioFormat = new AudioFormat(grabber.getSampleRate(), grabber.getAudioBitrate(), grabber.getAudioChannels(), true, true);
                            // DISPLAY.updateAudio(outBuffer.array(), audioFormat); // Play audio
                            outBuffer.clear();

                            typeID = 2;
                            mediatype = grabber.getFormat();
                        }
                    } else {
                        grabber.restart(); // Restart when reached end of video
                    }
                }
                grabber.close();
            } catch (IOException e) {
                e.printStackTrace();
            } catch (IllegalBlockSizeException e) {
                e.printStackTrace();
            } catch (BadPaddingException e) {
                e.printStackTrace();
            }
        }

        public synchronized int getCurKeyIndex() {
            return curIndex;
        }

        public synchronized void getVideoFile(String video_file) {
            videoFile = video_file;
            grabber = new FFmpegFrameGrabber(video_file);
            try {
                grabber.release();
                grabber.restart();
            } catch (FrameGrabber.Exception e) {
                e.printStackTrace();
            }
        }
    }

    public void setVideoFile(String videoFile) {
        this.videoFile = videoFile;
    }

    public void setThreadFile(String video_file) {
        fileStreamingThread.getVideoFile(video_file);
        videoStreamingThread.getVideoFile(video_file);
    }

    private class BroadcastThread extends Thread {
        public void run() {
            while (isRunning) {
                Thread.yield();
                for (int i = 0; i < sHandlers.size(); i++) {
                    if (sHandlers.get(i).getClientState() == ClientState.R) {
                        sHandlers.get(i).setClientState(ClientState.W);
                        BroadcastWorker workerThread = new BroadcastWorker(sHandlers.get(i));
                        workerThread.start();
                    }
                }
            }
        }
    }

    private class BroadcastWorker extends Thread {
        SHandler sHandler = null;

        public BroadcastWorker(SHandler sHandler) {
            this.sHandler = sHandler;
        }

        public void run() {
            try {
                DatagramSocket out = new DatagramSocket(); // used to send UDP packets
                while (sHandler.getClientState() == ClientState.W) {
                    Thread.yield();
                    StreamFile s = new StreamFile(typeID, currentVideoFrame, currentAudioFrame, mediatype);
                    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
                    ObjectOutputStream os = new ObjectOutputStream(outputStream);
                    os.writeObject(s);
                    byte[] data = outputStream.toByteArray();
                    // Create and send UDP packet
                    DatagramPacket videoPacket = new DatagramPacket(data, data.length,
                            sHandler.getClientSocket().getInetAddress(),
                            Integer.parseInt(sHandler.getClientPort()));
                    out.send(videoPacket);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private class ConnectThread extends Thread {
        public void run() {
            TEXT_AREA.append("\nWaiting for clients' connection.....\n");
            try {
                ServerSocket serverSocket = new ServerSocket(PORT);
                Socket clientSocket = null;
                while (isRunning) {
                    clientSocket = serverSocket.accept();
                    if (isRunning) {
                        SHandler sHandler = new SHandler(clientSocket, serverRSA, clientRSA, sessionKeys[keyIndex], TEXT_AREA);
                        sHandler.start();
                        sHandlers.add(sHandler);
                    }
                }
                serverSocket.close();
                if (clientSocket != null) {
                    clientSocket.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

My audio and image are not in sync.