
Media (3)
- GetID3 - File information block
9 April 2013
Updated: May 2013
Language: French
Type: Image
- GetID3 - Additional buttons
9 April 2013
Updated: April 2013
Language: French
Type: Image
- Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
Other articles (95)
- MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable". The zip file provided here only contains the sources of MediaSPIP in its standalone version. To get a working installation, you must manually install all software dependencies on the server. If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
- Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised. After it is activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is immediately operational; no separate configuration step is required for this.
- APPENDIX: The plugins used specifically for the farm
5 March 2010
The central/master site of the farm needs several additional plugins, compared with the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to handle registrations and requests to create a farm instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (6757)
- Jitsi and ffplay
15 June 2014, by Kotkot
I'm playing with Jitsi. I took the examples from the source code and modified them a bit. Here is what I've got. I am trying to play the transmitted stream in VLC, ffplay, or any other player, but I cannot. I run the code with these application parameters:
--local-port-base=5000 --remote-host=localhost --remote-port-base=10000
What am I doing wrong?
package com.company;
/*
* Jitsi, the OpenSource Java VoIP and Instant Messaging client.
*
* Distributable under LGPL license.
* See terms of license at gnu.org.
*/
import org.jitsi.service.libjitsi.LibJitsi;
import org.jitsi.service.neomedia.*;
import org.jitsi.service.neomedia.device.MediaDevice;
import org.jitsi.service.neomedia.format.MediaFormat;
import org.jitsi.service.neomedia.format.MediaFormatFactory;
import java.io.PrintStream;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;
/**
* Implements an example application in the fashion of JMF's AVTransmit2 example
* which demonstrates the use of the <tt>libjitsi</tt> library for the purposes
* of transmitting audio and video via RTP means.
*
* @author Lyubomir Marinov
*/
public class VideoTransmitter {
/**
* The port which is the source of the transmission i.e. from which the
* media is to be transmitted.
*
* @see #LOCAL_PORT_BASE_ARG_NAME
*/
private int localPortBase;
/**
* The <tt>MediaStream</tt> instances initialized by this instance indexed
* by their respective <tt>MediaType</tt> ordinal.
*/
private MediaStream[] mediaStreams;
/**
* The <tt>InetAddress</tt> of the host which is the target of the
* transmission i.e. to which the media is to be transmitted.
*
* @see #REMOTE_HOST_ARG_NAME
*/
private InetAddress remoteAddr;
/**
* The port which is the target of the transmission i.e. to which the media
* is to be transmitted.
*
* @see #REMOTE_PORT_BASE_ARG_NAME
*/
private int remotePortBase;
/**
* Initializes a new <tt>AVTransmit2</tt> instance which is to transmit
* audio and video to a specific host and a specific port.
*
* @param localPortBase the port which is the source of the transmission
* i.e. from which the media is to be transmitted
* @param remoteHost the name of the host which is the target of the
* transmission i.e. to which the media is to be transmitted
* @param remotePortBase the port which is the target of the transmission
* i.e. to which the media is to be transmitted
* @throws Exception if any error arises during the parsing of the specified
* <tt>localPortBase</tt>, <tt>remoteHost</tt> and <tt>remotePortBase</tt>
*/
private VideoTransmitter(
String localPortBase,
String remoteHost, String remotePortBase)
throws Exception {
this.localPortBase
= (localPortBase == null)
? -1
: Integer.valueOf(localPortBase).intValue();
this.remoteAddr = InetAddress.getByName(remoteHost);
this.remotePortBase = Integer.valueOf(remotePortBase).intValue();
}
/**
* Starts the transmission. Returns null if transmission started ok.
* Otherwise it returns a string with the reason why the setup failed.
*/
private String start()
throws Exception {
/*
* Prepare for the start of the transmission i.e. initialize the
* MediaStream instances.
*/
MediaType[] mediaTypes = MediaType.values();
MediaService mediaService = LibJitsi.getMediaService();
int localPort = localPortBase;
int remotePort = remotePortBase;
mediaStreams = new MediaStream[mediaTypes.length];
for (MediaType mediaType : mediaTypes) {
if(mediaType != MediaType.VIDEO) continue;
/*
* The default MediaDevice (for a specific MediaType) is configured
* (by the user of the application via some sort of UI) into the
* ConfigurationService. If there is no ConfigurationService
* instance known to LibJitsi, the first available MediaDevice of
* the specified MediaType will be chosen by MediaService.
*/
MediaDevice device
= mediaService.getMediaDeviceForPartialDesktopStreaming(100,100,100,100);
if (device == null) {
continue;
}
MediaStream mediaStream = mediaService.createMediaStream(device);
// direction
/*
* The AVTransmit2 example sends only and the AVReceive2 receives
* only. In a call, the MediaStream's direction will most commonly
* be set to SENDRECV.
*/
mediaStream.setDirection(MediaDirection.SENDONLY);
// format
String encoding;
double clockRate;
/*
* The AVTransmit2 and AVReceive2 examples use the H.264 video
* codec. Its RTP transmission has no static RTP payload type number
* assigned.
*/
byte dynamicRTPPayloadType;
switch (device.getMediaType()) {
case AUDIO:
encoding = "PCMU";
clockRate = 8000;
/* PCMU has a static RTP payload type number assigned. */
dynamicRTPPayloadType = -1;
break;
case VIDEO:
encoding = "H264";
clockRate = MediaFormatFactory.CLOCK_RATE_NOT_SPECIFIED;
/*
* The dynamic RTP payload type numbers are usually negotiated
* in the signaling functionality.
*/
dynamicRTPPayloadType = 99;
break;
default:
encoding = null;
clockRate = MediaFormatFactory.CLOCK_RATE_NOT_SPECIFIED;
dynamicRTPPayloadType = -1;
}
if (encoding != null) {
MediaFormat format
= mediaService.getFormatFactory().createMediaFormat(
encoding,
clockRate);
/*
* The MediaFormat instances which do not have a static RTP
* payload type number association must be explicitly assigned
* a dynamic RTP payload type number.
*/
if (dynamicRTPPayloadType != -1) {
mediaStream.addDynamicRTPPayloadType(
dynamicRTPPayloadType,
format);
}
mediaStream.setFormat(format);
}
// connector
StreamConnector connector;
if (localPortBase == -1) {
connector = new DefaultStreamConnector();
} else {
int localRTPPort = localPort++;
int localRTCPPort = localPort++;
connector
= new DefaultStreamConnector(
new DatagramSocket(localRTPPort),
new DatagramSocket(localRTCPPort));
}
mediaStream.setConnector(connector);
// target
/*
* The AVTransmit2 and AVReceive2 examples follow the common
* practice that the RTCP port is right after the RTP port.
*/
int remoteRTPPort = remotePort++;
int remoteRTCPPort = remotePort++;
mediaStream.setTarget(
new MediaStreamTarget(
new InetSocketAddress(remoteAddr, remoteRTPPort),
new InetSocketAddress(remoteAddr, remoteRTCPPort)));
// name
/*
* The name is completely optional and it is not being used by the
* MediaStream implementation at this time, it is just remembered so
* that it can be retrieved via MediaStream#getName(). It may be
* integrated with the signaling functionality if necessary.
*/
mediaStream.setName(mediaType.toString());
mediaStreams[mediaType.ordinal()] = mediaStream;
}
/*
* Do start the transmission i.e. start the initialized MediaStream
* instances.
*/
for (MediaStream mediaStream : mediaStreams) {
if (mediaStream != null) {
mediaStream.start();
}
}
return null;
}
/**
* Stops the transmission if already started
*/
private void stop() {
if (mediaStreams != null) {
for (int i = 0; i < mediaStreams.length; i++) {
MediaStream mediaStream = mediaStreams[i];
if (mediaStream != null) {
try {
mediaStream.stop();
} finally {
mediaStream.close();
mediaStreams[i] = null;
}
}
}
mediaStreams = null;
}
}
/**
* The name of the command-line argument which specifies the port from which
* the media is to be transmitted. The command-line argument value will be
* used as the port to transmit the audio RTP from, the next port after it
* will be used to transmit the audio RTCP from. Respectively, the subsequent
* ports will be used to transmit the video RTP and RTCP from.
*/
private static final String LOCAL_PORT_BASE_ARG_NAME
= "--local-port-base=";
/**
* The name of the command-line argument which specifies the name of the
* host to which the media is to be transmitted.
*/
private static final String REMOTE_HOST_ARG_NAME = "--remote-host=";
/**
* The name of the command-line argument which specifies the port to which
* the media is to be transmitted. The command-line argument value will be
* used as the port to transmit the audio RTP to, the next port after it
* will be used to transmit the audio RTCP to. Respectively, the subsequent ports
* will be used to transmit the video RTP and RTCP to.
*/
private static final String REMOTE_PORT_BASE_ARG_NAME
= "--remote-port-base=";
/**
* The list of command-line arguments accepted as valid by the
* <tt>AVTransmit2</tt> application along with their human-readable usage
* descriptions.
*/
private static final String[][] ARGS
= {
{
LOCAL_PORT_BASE_ARG_NAME,
"The port which is the source of the transmission i.e. from"
+ " which the media is to be transmitted. The specified"
+ " value will be used as the port to transmit the audio"
+ " RTP from, the next port after it will be used to"
+ " transmit the audio RTCP from. Respectively, the"
+ " subsequent ports will be used to transmit the video RTP"
+ " and RTCP from."
},
{
REMOTE_HOST_ARG_NAME,
"The name of the host which is the target of the transmission"
+ " i.e. to which the media is to be transmitted"
},
{
REMOTE_PORT_BASE_ARG_NAME,
"The port which is the target of the transmission i.e. to which"
+ " the media is to be transmitted. The specified value"
+ " will be used as the port to transmit the audio RTP to"
+ " the next port after it will be used to transmit the"
+ " audio RTCP to. Respectively, the subsequent ports will"
+ " be used to transmit the video RTP and RTCP to."
}
};
public static void main(String[] args)
throws Exception {
// We need two parameters to do the transmission. For example,
// ant run-example -Drun.example.name=AVTransmit2 -Drun.example.arg.line="--remote-host=127.0.0.1 --remote-port-base=10000"
if (args.length < 2) {
prUsage();
} else {
Map<String, String> argMap = parseCommandLineArgs(args);
LibJitsi.start();
try {
// Create a video transmitter object with the specified params.
VideoTransmitter at
= new VideoTransmitter(
argMap.get(LOCAL_PORT_BASE_ARG_NAME),
argMap.get(REMOTE_HOST_ARG_NAME),
argMap.get(REMOTE_PORT_BASE_ARG_NAME));
// Start the transmission
String result = at.start();
// result will be non-null if there was an error. The return
// value is a String describing the possible error. Print it.
if (result == null) {
System.err.println("Start transmission for 600 seconds...");
// Transmit for 600 seconds and then close the processor
// This is a safeguard when using a capture data source
// so that the capture device will be properly released
// before quitting.
// The right thing to do would be to have a GUI with a
// "Stop" button that would call stop on AVTransmit2
try {
Thread.sleep(600_000);
} catch (InterruptedException ie) {
}
// Stop the transmission
at.stop();
System.err.println("...transmission ended.");
} else {
System.err.println("Error : " + result);
}
} finally {
LibJitsi.stop();
}
}
}
/**
* Parses the arguments specified to the <tt>AVTransmit2</tt> application on
* the command line.
*
* @param args the arguments specified to the <tt>AVTransmit2</tt>
* application on the command line
* @return a <tt>Map</tt> containing the arguments specified to the
* <tt>AVTransmit2</tt> application on the command line in the form of
* name-value associations
*/
static Map<String, String> parseCommandLineArgs(String[] args) {
Map<String, String> argMap = new HashMap<>();
for (String arg : args) {
int keyEndIndex = arg.indexOf('=');
String key;
String value;
if (keyEndIndex == -1) {
key = arg;
value = null;
} else {
key = arg.substring(0, keyEndIndex + 1);
value = arg.substring(keyEndIndex + 1);
}
argMap.put(key, value);
}
return argMap;
}
/**
* Outputs human-readable description about the usage of the
* <tt>AVTransmit2</tt> application and the command-line arguments it
* accepts as valid.
*/
private static void prUsage() {
PrintStream err = System.err;
err.println("Usage: " + VideoTransmitter.class.getName() + " <args>");
err.println("Valid args:");
for (String[] arg : ARGS)
err.println(" " + arg[0] + " " + arg[1]);
}
}
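A side note, not part of the original question: ffplay and VLC normally cannot make sense of a bare RTP stream that uses a dynamic payload type unless they are given a session description. A minimal SDP along these lines (an assumption based on the code above: H264 on dynamic payload type 99, remote port base 10000, everything on localhost; the video RTP lands on the base port because only the video stream is created) could be saved as stream.sdp:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=libjitsi H264 test stream
c=IN IP4 127.0.0.1
t=0 0
m=video 10000 RTP/AVP 99
a=rtpmap:99 H264/90000
Then something like ffplay -protocol_whitelist file,udp,rtp stream.sdp (older builds accept plain ffplay stream.sdp) should bind port 10000 and decode the incoming RTP. Depending on the encoder, the SDP may also need an fmtp line carrying sprop-parameter-sets.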
- Working on images asynchronously
To get my quota on buzzwords for the day, we are going to look at using ZeroMQ and Imagick to create a simple asynchronous image processing system. Why asynchronous? First of all, separating the image handling from interactive PHP scripts allows us to scale the image processing separately from the web heads. For example, we could do the image processing on separate servers which have SSDs attached and more memory. In this example, making the images available to all worker nodes is left to the reader.
Secondly, separating the image processing from a web script can provide a more responsive experience for the user. This doesn't necessarily mean faster, but in a multiple image upload scenario, for instance, it allows the user to do something else on the site while we process the images in the background. This can be beneficial especially in cases where users upload hundreds of images at a time. To achieve a simple distributed image processing infrastructure, we are going to use ZeroMQ for communication between the different components and Imagick to work on the images.
The first part we are going to create is a simple "Worker" process skeleton. Naturally, for a live environment you would want more error handling and possibly pcntl for process control, but for the sake of brevity the example is bare-bones:
<?php

// Endpoints used by the thumbnail worker and the event collector
define('THUMBNAIL_ADDR', 'tcp://127.0.0.1:5000');
define('COLLECTOR_ADDR', 'tcp://127.0.0.1:5001');

class Worker {

    private $in;
    private $out;
    private $commands = array();

    public function __construct($in_addr, $out_addr)
    {
        $context = new ZMQContext();

        // Pull incoming work requests
        $this->in = new ZMQSocket($context, ZMQ::SOCKET_PULL);
        $this->in->bind($in_addr);

        // Push results onwards
        $this->out = new ZMQSocket($context, ZMQ::SOCKET_PUSH);
        $this->out->connect($out_addr);
    }

    public function work() {
        while ($command = $this->in->recvMulti()) {
            if (isset($this->commands[$command[0]])) {
                echo "Received work" . PHP_EOL;

                $callback = $this->commands[$command[0]];

                // Drop the command name and pass the rest to the callback
                array_shift($command);
                $response = call_user_func_array($callback, $command);

                if (is_array($response))
                    $this->out->sendMulti($response);
                else
                    $this->out->send($response);
            } else {
                error_log("There is no registered worker for {$command[0]}");
            }
        }
    }

    public function register($command, $callback)
    {
        $this->commands[$command] = $callback;
    }
}
?>
The Worker class allows us to register commands with callbacks associated with them. In our case the Worker class doesn't actually care or know about the parameters being passed to the actual callback; it just blindly passes them on. We are using two separate sockets in this example, one for incoming work requests and one for passing the results onwards. This allows us to create a simple pipeline by adding more workers into the mix. For example, we could first have a watermark worker, which takes the original image and composites a watermark on it, then passes the file onwards to a thumbnail worker, which creates thumbnails in different sizes and passes the final results to the event collector. A rough sketch of such a watermark worker follows below.
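As an illustrative sketch only (not from the original article): a watermark worker of this kind could reuse the Worker class, composite a watermark.png onto the incoming file and then forward the job to the thumbnail worker. The WATERMARK_ADDR endpoint below is invented for the example:

<?php
include __DIR__ . '/common.php';

// Hypothetical endpoint for this sketch; in practice it would live in common.php
define('WATERMARK_ADDR', 'tcp://127.0.0.1:4999');

// Inbound: watermark requests, outbound: the thumbnail worker
$worker = new Worker(WATERMARK_ADDR, THUMBNAIL_ADDR);

$worker->register('watermark', function ($filename, $width, $height) {
    $im = new Imagick($filename);
    $stamp = new Imagick(__DIR__ . '/watermark.png');

    // Composite the watermark into the bottom-right corner, 10px from the edges
    $im->compositeImage(
        $stamp,
        Imagick::COMPOSITE_OVER,
        $im->getImageWidth() - $stamp->getImageWidth() - 10,
        $im->getImageHeight() - $stamp->getImageHeight() - 10
    );
    $im->writeImage($filename);

    // Forward the job unchanged so the thumbnail worker picks it up next
    return array('thumbnail', $filename, $width, $height);
});

echo "Running watermark worker.." . PHP_EOL;
$worker->work();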
The next part we are going to create is a simple worker script that does the actual thumbnailing of the images:
<?php
include __DIR__ . '/common.php';

// Create the worker, bind the inbound socket to THUMBNAIL_ADDR and connect
// the outbound socket to COLLECTOR_ADDR
$worker = new Worker(THUMBNAIL_ADDR, COLLECTOR_ADDR);

// Register our thumbnail callback, nothing special here
$worker->register('thumbnail', function ($filename, $width, $height) {
    $info = pathinfo($filename);

    $out = sprintf("%s/%s_%dx%d.%s",
                   $info['dirname'],
                   $info['filename'],
                   $width,
                   $height,
                   $info['extension']);

    $status = 1;
    $message = '';

    try {
        $im = new Imagick($filename);
        $im->thumbnailImage($width, $height);
        $im->writeImage($out);
    }
    catch (Exception $e) {
        $status = 0;
        $message = $e->getMessage();
    }

    return array(
        'status'    => $status,
        'filename'  => $filename,
        'thumbnail' => $out,
        'message'   => $message,
    );
});

// Run the worker, will block
echo "Running thumbnail worker.." . PHP_EOL;
$worker->work();
As you can see from the code, the thumbnail worker registers a callback for the 'thumbnail' command. The callback does the thumbnailing based on the input and returns the status, the original filename and the thumbnail filename. We have connected our Worker's outbound socket to the event collector, which will receive the results from the thumbnail worker and do something with them. What that "something" is depends on you: for example, you could push the response into a websocket to show immediate feedback to the user, or store the results in a database.
Our example event collector will just do a var_dump on every event it receives from the thumbnailer (a sketch of a database-backed variant follows after it):
<?php
include __DIR__ . '/common.php';

$socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PULL);
$socket->bind(COLLECTOR_ADDR);

echo "Waiting for events.." . PHP_EOL;
while (($message = $socket->recvMulti())) {
    var_dump($message);
}
?>
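If you would rather persist the results, a database-backed collector could look roughly like the following. This is an illustrative sketch only, not from the original article: it assumes an SQLite file thumbnails.sqlite with a pre-created thumbnails table whose columns match the frame order (status, filename, thumbnail, message) returned by the thumbnail worker.

<?php
include __DIR__ . '/common.php';

// Hypothetical storage: CREATE TABLE thumbnails
//   (status INTEGER, filename TEXT, thumbnail TEXT, message TEXT);
$db = new PDO('sqlite:' . __DIR__ . '/thumbnails.sqlite');
$stmt = $db->prepare(
    'INSERT INTO thumbnails (status, filename, thumbnail, message) VALUES (?, ?, ?, ?)'
);

$socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PULL);
$socket->bind(COLLECTOR_ADDR);

echo "Waiting for events.." . PHP_EOL;
while (($message = $socket->recvMulti())) {
    // Frames arrive in the order the thumbnail worker returned them
    $stmt->execute($message);
}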
The final piece of the puzzle is the client that pumps messages into the pipeline. The client connects to the thumbnail worker and passes on the filename and the desired dimensions:
<?php
include __DIR__ . '/common.php';

$socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PUSH);
$socket->connect(THUMBNAIL_ADDR);

$socket->sendMulti(
    array(
        'thumbnail',
        realpath('./test.jpg'),
        50,
        50,
    )
);
echo "Sent request" . PHP_EOL;
?>
After this, our processing pipeline looks like this: [diagram: client → thumbnail worker → event collector]
Now, if we notice that the thumbnail workers or the event collectors can't keep up with the rate of images we are pushing through, we can start scaling the pipeline by adding more processes on each layer. A ZeroMQ PUSH socket will automatically round-robin between all connected nodes, which makes adding more workers and event collectors simple (a minimal sketch follows below). After adding more workers, the pipeline looks like this: [diagram: client → multiple thumbnail workers → multiple event collectors]
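Purely as an illustration (the second endpoint below, tcp://127.0.0.1:5002, is invented for this sketch and assumes another thumbnail worker has been started and bound to it), the only change needed on the client side is an extra connect call; the PUSH socket then alternates between the two workers:

<?php
include __DIR__ . '/common.php';

$socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PUSH);
$socket->connect(THUMBNAIL_ADDR);
// Hypothetical second worker endpoint; ZeroMQ round-robins requests between the two
$socket->connect('tcp://127.0.0.1:5002');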
Using ZeroMQ also allows us to create more flexible architectures, for example by adding forwarding devices in the middle or request-reply workers. So, the last thing to do is to run our pipeline and see the results:
Let's create our test image first:
$ convert magick:rose test.jpg
From the command line, run the thumbnail script:
$ php thumbnail.php
Running thumbnail worker..
In a separate terminal window, run the event collector:
$ php collector.php
Waiting for events..
And finally run the client to send the thumbnail request:
$ php client.php
Sent request
$
If everything went according to plan, you should now see output along these lines in the event collector window (the full paths are truncated here):
array(4) {
  [0]=>
  string(1) "1"
  [1]=>
  string(56) ".../test.jpg"
  [2]=>
  string(62) ".../test_50x50.jpg"
  [3]=>
  string(0) ""
}
Happy hacking!
- Evolution #3071 (New): Performance of the DATA loop on CSV
9 October 2013, by lrt rln
It seems that the CSV parsing carried out before the data reaches the DATA loop causes serious performance problems. Execution time quickly climbs to 2-3 minutes (with a large set_time_limit) when processing files larger than 10 MB. By comparison, a plain fgetcsv read with a while (($data = fgetcsv($handle, 1000, ","))) loop is about 50 times faster than going through the DATA loop. There is probably room to optimise /inc/csv.php to avoid the fairly systematic PHP crash when looping over files of more than 5,000 lines.
9 octobre 2013, par lrt rlnIl semble que l’analyse CSV avant passage dans la boucle DATA pose de gros soucis de perf. Le temps d’exécution passe rapidement à 2/3 minutes (avec gros set_time_limit) de traitement sur des fichiers >10Mo. A titre comparatif un simple fgetcsv avec un while (($data = fgetcsv($handle, 1000, ",")) est 50 fois plus rapide en exécution que le traitement par la boucle DATA. Il y a sans doute moyen d’optimiser /inc/csv.php pour éviter un plantage assez systèmatique de PHP sur bouclage de fichiers de + 5000 lignes.