
Media (1)
-
Richard Stallman and free software
19 October 2011
Updated: May 2013
Language: French
Type: Text
Other articles (54)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
The zip file provided here contains only the MediaSPIP sources, in standalone form.
For a working installation, all of the software dependencies must be installed manually on the server.
If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)
-
Permissions overridden by plugins
27 April 2010
Mediaspip core: autoriser_auteur_modifier(), so that visitors are able to modify their information on the authors page
On other sites (6074)
-
Revision 29744: Database update
8 July 2009, by kent1@… — Log: Database update
-
Anomalie #2388: multi-step CVT forms and formulaires_editer_objet_charger
1 November 2011, by Severo Lipari
Ah, sorry:
if (is_numeric($id_chat) && intval($id_chat)==$id_chat) $valeurs = formulaires_editer_objet_charger('chat',$id_chat,$id_rubrique,$lier_trad,$retour,$config_fonc,$row,$hidden); else $valeurs = array(); $valeurs['nom'] = ''; $valeurs['race'] = ''; $valeurs['date'] = ''; (...)
-
Multiprocess FATE Revisited
26 June 2010, by Multimedia Mike — FATE Server, Python
I thought I had brainstormed a simple, elegant, multithreaded, deadlock-free refactoring for FATE in a previous post. However, I sort of glossed over the test ordering logic, which I had not yet prototyped. The grim, possibly deadlock-afflicted reality is that the main thread needs to be notified as tests are completed. So the main thread sends test specs through a queue to be executed by n tester threads, and those threads send results to a results aggregator thread. Additionally, the results aggregator will need to send completed test IDs back to the main thread.
But when I step back and look at the graph, I can't rationalize why there should be a separate results aggregator thread. That was added to cut down on deadlock possibilities, since the main thread and the tester threads would not be waiting for data from each other. Now that I've come to terms with the fact that the main thread and the testers need to exchange data in real time, I think I can safely eliminate the results thread. Adding more threads is not the best way to guard against race conditions and deadlocks. Ask xine.
I'm still hung up on the deadlock issue. I have these queues through which the threads communicate. At issue is the fact that they can cause a thread to block when inserting an item if the queue is "full". How full is full? Immaterial; seeking to answer such a question is not how you guard against race conditions. Rather, it seems to me that one side should be doing non-blocking queue operations.
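For concreteness, this is roughly what non-blocking operations look like with Python's multiprocessing queues (a minimal sketch of the idea, not FATE code; put_nowait() and get_nowait() raise the Queue.Full and Queue.Empty exceptions instead of waiting):

import multiprocessing
import Queue   # only needed for the Queue.Full / Queue.Empty exception classes

q = multiprocessing.Queue(maxsize=4)

# Non-blocking insert: give up immediately instead of waiting for free space.
try:
    q.put_nowait("some test spec")
except Queue.Full:
    pass   # no room right now; park the item somewhere and try again later

# Non-blocking retrieval: drain whatever happens to have arrived already.
try:
    while True:
        item = q.get_nowait()
        print "got:", item
except Queue.Empty:
    pass   # nothing (more) waiting at the moment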
This is how I'm planning to revise the logic in the main thread:

test_set = set of all tests to execute
tests_pending = test_set
tests_blocked = empty set
tests_queue = multi-consumer queue to send test specs to tester threads
results_queue = multi-producer queue through which tester threads send results

while there are tests in tests_pending:
  pop a test from test_set
  if test depends on any tests that appear in tests_pending:
    add test to tests_blocked
  else:
    add test to tests_queue in a non-blocking manner
    if tests_queue is full, add test to tests_blocked

  while there are results in the results_queue:
    get a result from results_queue in a non-blocking manner
    remove the corresponding test from tests_pending

  if tests_blocked is non-empty:
    sleep for 1 second
    test_set = tests_blocked
    tests_blocked = empty set
  else:
    insert n shutdown signals, one for each tester thread

  go to the top of the loop and repeat until there are no more tests

while there are results in the results_queue:
  get a result from results_queue in a blocking manner

Not mentioned in the pseudocode (so it doesn't get too verbose) is the logic to check whether a retrieved test result is actually an end-of-thread signal. These are counted, and the whole test process is done when one has been received for each tester thread.
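Rendered as actual Python, that main-thread logic might look something like the sketch below. This is my rough reading of the pseudocode, not the real FATE harness: depends_on() and the None shutdown marker are assumptions, each completed test is assumed to be reported back by its spec, and tests_queue/results_queue are assumed to be multiprocessing.Queue objects shared with the tester processes.

import Queue
import time

def run_main_loop(test_set, depends_on, tests_queue, results_queue, num_testers):
    # tests_queue and results_queue are multiprocessing.Queue objects;
    # depends_on(test) is assumed to return the set of tests that must run first.
    tests_pending = set(test_set)
    while tests_pending:
        tests_blocked = set()
        for test in test_set:
            if depends_on(test) & tests_pending:
                tests_blocked.add(test)            # prerequisites not finished yet
            else:
                try:
                    tests_queue.put_nowait(test)   # hand the spec to a tester
                except Queue.Full:
                    tests_blocked.add(test)        # queue full; retry on the next pass

        # Drain any results that have already come back, without blocking.
        try:
            while True:
                tests_pending.discard(results_queue.get_nowait())
        except Queue.Empty:
            pass

        if tests_blocked:
            time.sleep(1)            # give the testers a chance to make progress
            test_set = tests_blocked
        else:
            break                    # every remaining spec has been queued

    # All specs are queued; tell each tester to shut down.
    for _ in range(num_testers):
        tests_queue.put(None)

    # Collect the remaining results (blocking) until every tester has echoed
    # its end-of-thread signal back through the results queue.
    shutdowns_seen = 0
    while shutdowns_seen < num_testers:
        result = results_queue.get()
        if result is None:
            shutdowns_seen += 1
        else:
            tests_pending.discard(result)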
On the tester thread side, it's safe for them to do blocking test queue retrievals and blocking result queue insertions. The reason for the 1-second delay before resetting tests_blocked and looping again is that I want to guard against the situation where tests A and B are to be run, A depends on B running first, and, while B is running (and happens to be a long encoding test), the main thread is spinning about, obsessively testing whether it's time to insert A into the tests queue.
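The matching tester loop could then be as small as the following sketch; execute_test() is a hypothetical stand-in for actually running a FATE test, and the None marker mirrors the main-loop sketch above.

def execute_test(spec):
    # Stand-in for actually running a FATE test and recording its outcome.
    pass

def tester_loop(tests_queue, results_queue):
    while True:
        spec = tests_queue.get()         # blocking: the worker has nothing else to do
        if spec is None:                 # shutdown signal from the main thread
            results_queue.put(None)      # echo it back so the main thread can count it
            break
        execute_test(spec)
        results_queue.put(spec)          # report completion; blocking insert is safe too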
It all sounds just crazy enough to work. In fact, I coded it up and it does work, sort of. The queue gets blocked pretty quickly. Instead of sleeping, I decided it’s better to perform the put operation using a 1-second timeout.
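With multiprocessing queues that change is tiny: in the main-loop sketch above, the put_nowait() call becomes a timed put that blocks for at most one second before the test is deferred (shown here as a fragment of that sketch, not standalone code):

try:
    tests_queue.put(test, timeout=1)   # wait up to one second for free space
except Queue.Full:
    tests_blocked.add(test)            # still no room: defer the test to the next pass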
Still, I'm paranoid about the precise operation of the IPC queue mechanism at work here. What happens if I try to stuff in a test spec that's a bit too large? Will the module take whatever I give it and serialize it through the queue as soon as it can? I think an impromptu science project is in order.
big-queue.py:

#!/usr/bin/python

import multiprocessing
import Queue

def f(q):
    str = q.get()
    print "reader function got a string of %d characters" % (len(str))

q = multiprocessing.Queue()
p = multiprocessing.Process(target=f, args=(q,))
p.start()

try:
    q.put_nowait('a' * 100000000)
except Queue.Full:
    print "queue full"

$ ./big-queue.py
reader function got a string of 100000000 characters
Since 100 MB doesn’t even make it choke, FATE’s little test specs shouldn’t pose any difficulty.