
Media (3)
-
Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
-
Collections - Quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
Other articles (94)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
-
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
APPENDIX: The plugins used specifically for the farm
5 March 2010, by
The central/master site of the farm needs several additional plugins, compared to the channel sites, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and the requests to create a mutualisation instance made when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (8959)
-
Anomalie #2899: 404 error sends you back to the home page on certain Apache configurations (rég...
7 November 2013, by guytarr
It's an annoying bug, but there are others that are even more annoying. I'm setting the priority back to "normal"; my apologies for the noise.
-
Multiprocess FATE Revisited
26 June 2010, by Multimedia Mike — FATE Server, Python
I thought I had brainstormed a simple, elegant, multithreaded, deadlock-free refactoring for FATE in a previous post. However, I sort of glossed over the test ordering logic which I had not yet prototyped. The grim, possibly deadlock-afflicted reality is that the main thread needs to be notified as tests are completed. So, the main thread sends test specs through a queue to be executed by n tester threads, and those threads send results to a results aggregator thread. Additionally, the results aggregator will need to send completed test IDs back to the main thread.
But when I step back and look at the graph, I can’t rationalize why there should be a separate results aggregator thread. That was added to cut down on deadlock possibilities since the main thread and the tester threads would not be waiting for data from each other. Now that I’ve come to terms with the fact that the main and the testers need to exchange data in realtime, I think I can safely eliminate the result thread. Adding more threads is not the best way to guard against race conditions and deadlocks. Ask xine.
I’m still hung up on the deadlock issue. I have these queues through which the threads communicate. At issue is the fact that they can cause a thread to block when inserting an item if the queue is "full". How full is full? Immaterial; seeking to answer such a question is not how you guard against race conditions. Rather, it seems to me that one side should be doing non-blocking queue operations.
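To make the non-blocking side concrete, here is a minimal sketch of the queue primitives involved, following the Python 2 / Queue-module conventions of the test program later in this post (the tiny maxsize and the item names are made up for illustration):

import time
import multiprocessing
import Queue   # supplies the Full/Empty exceptions raised by multiprocessing queues

q = multiprocessing.Queue(maxsize=2)   # small bound, just to force the Full case

# Non-blocking inserts: a full queue raises Queue.Full instead of blocking.
for item in ['test-1', 'test-2', 'test-3']:
    try:
        q.put_nowait(item)
    except Queue.Full:
        print("queue full, deferring %s" % item)

time.sleep(0.5)   # give the queue's feeder thread a moment to flush the items

# Non-blocking retrievals: an empty queue raises Queue.Empty instead of blocking.
while True:
    try:
        print("got %s" % q.get_nowait())
    except Queue.Empty:
        break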
This is how I’m planning to revise the logic in the main thread:
test_set = set of all tests to execute
tests_pending = test_set
tests_blocked = empty set
tests_queue = multi-consumer queue to send test specs to tester threads
results_queue = multi-producer queue through which tester threads send results

while there are tests in tests_pending:
    pop a test from test_set
    if test depends on any tests that appear in tests_pending:
        add test to tests_blocked
    else:
        add test to tests_queue in a non-blocking manner
        if tests_queue is full, add test to tests_blocked

    while there are results in the results_queue:
        get a result from result_queue in a non-blocking manner
        remove the corresponding test from tests_pending

    if tests_blocked is non-empty:
        sleep for 1 second
        test_set = tests_blocked
        tests_blocked = empty set
    else:
        insert n shutdown signals, one for each thread

    go to the top of the loop and repeat until there are no more tests

while there are results in the results_queue:
    get a result from result_queue in a blocking manner

Not mentioned in the pseudocode (so it doesn’t get too verbose) is the logic to check whether a retrieved test result is actually an end-of-thread signal. These are accounted for, and the whole test process is done when one has been received for each thread.
On the tester thread side, it’s safe for them to do blocking test-queue retrievals and blocking result-queue insertions. The reason for the 1-second delay before resetting tests_blocked and looping again is that I want to guard against the situation where tests A and B are to be run, A depends on B running first, and while B is running (and happens to be a long encoding test), the main thread is spinning about, obsessively testing whether it’s time to insert A into the tests queue.
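A tester worker along these lines would match that description; this is only a sketch, and run_test() and the None shutdown sentinel are hypothetical stand-ins, not FATE's actual code:

def tester(tests_queue, results_queue):
    # Blocking retrievals and insertions are fine here; only the main
    # thread needs to avoid getting stuck waiting on a queue.
    while True:
        test = tests_queue.get()
        if test is None:                   # hypothetical shutdown sentinel
            results_queue.put(None)        # echo it back so the main thread can count exits
            break
        results_queue.put(run_test(test))  # run_test() is a hypothetical helper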
It all sounds just crazy enough to work. In fact, I coded it up and it does work, sort of. The queue gets blocked pretty quickly. Instead of sleeping, I decided it’s better to perform the put operation using a 1-second timeout.
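Expressed with the same names as the pseudocode above, that change just means treating a timed-out put the same as a full queue, something like:

try:
    tests_queue.put(test, timeout=1)   # wait at most one second for a free slot
except Queue.Full:
    tests_blocked.add(test)            # will be retried on a later pass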
Still, I’m paranoid about the precise operation of the IPC queue mechanism at work here. What happens if I try to stuff in a test spec that’s a bit too large? Will the module take whatever I give it and serialize it through the queue as soon as it can? I think an impromptu science project is in order.
big-queue.py:
#!/usr/bin/python

import multiprocessing
import Queue

def f(q):
    str = q.get()
    print "reader function got a string of %d characters" % (len(str))

q = multiprocessing.Queue()
p = multiprocessing.Process(target=f, args=(q,))
p.start()

try:
    q.put_nowait('a' * 100000000)
except Queue.Full:
    print "queue full"
$ ./big-queue.py
reader function got a string of 100000000 characters
Since 100 MB doesn’t even make it choke, FATE’s little test specs shouldn’t pose any difficulty.
-
aviobuf: Increase the default SHORT_SEEK_THRESHOLD to 32 KB
30 October 2020, by Martin Storsjö
aviobuf: Increase the default SHORT_SEEK_THRESHOLD to 32 KB
The previous threshold, 4 KB, maybe was reasonable when it was set
(in 2010), but in today's settings and with typical network speeds
and data sizes, it's pretty small. 32 KB probably is a more reasonable
default now, regardless of input.
This changes the test references for two seek tests.
When using the normal seek function, which boils down to the lseek(2)
function, a seek to an out of bounds position doesn't return an error,
but that condition is only reported when doing the subsequent read
(which returns EOF). When doing more seeks by fast forwarding, the
fact that the seeked to destination is out of bounds is noticed and
reported sooner in these cases.
Signed-off-by: Martin Storsjö <martin@martin.st>
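The lseek(2) behaviour described above is easy to observe from a script. Here is a small sketch in Python, just to illustrate the underlying system calls (the file name is a placeholder; this is not part of the FFmpeg change itself):

import os

fd = os.open('example.bin', os.O_RDONLY)    # any existing file
size = os.fstat(fd).st_size

# Seeking to an out-of-bounds position succeeds and returns the new offset...
print(os.lseek(fd, size + 1000000, os.SEEK_SET))

# ...the problem only becomes visible on the subsequent read, which hits EOF.
print(repr(os.read(fd, 16)))                # prints an empty buffer
os.close(fd)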