
Media (1)
-
Somos millones 1
21 July 2014, by
Updated: June 2015
Language: French
Type: Video
Other articles (55)
-
Installation in farm mode
4 February 2011, by
Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
First of all, you must have installed the same files as the installation (...) -
Libraries and binaries specific to video and audio processing
31 January 2010, by
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries FFMpeg: the main encoder, able to transcode almost any type of video or audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: inspection tools for ogg files; Mediainfo: retrieves information from most video and audio formats;
Complementary and optional binaries flvtool2: (...) -
HTML5 audio and video support
13 April 2011, by
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (3199)
-
Anomalie #2388: multi-step CVT forms and formulaires_editer_objet_charger
1 November 2011, by Severo Lipari
Ah, sorry: if (is_numeric($id_chat) && intval($id_chat)==$id_chat) $valeurs = formulaires_editer_objet_charger('chat', $id_chat, $id_rubrique, $lier_trad, $retour, $config_fonc, $row, $hidden); else $valeurs = array(); $valeurs['nom'] = ''; $valeurs['race'] = ''; $valeurs['date'] = ''; (...)
-
Multiprocess FATE Revisited
26 June 2010, by Multimedia Mike — FATE Server, Python
I thought I had brainstormed a simple, elegant, multithreaded, deadlock-free refactoring for FATE in a previous post. However, I sort of glossed over the test ordering logic, which I had not yet prototyped. The grim, possibly deadlock-afflicted reality is that the main thread needs to be notified as tests are completed. So, the main thread sends test specs through a queue to be executed by n tester threads and those threads send results to a results aggregator thread. Additionally, the results aggregator will need to send completed test IDs back to the main thread.
But when I step back and look at the graph, I can't rationalize why there should be a separate results aggregator thread. That was added to cut down on deadlock possibilities, since the main thread and the tester threads would not be waiting for data from each other. Now that I've come to terms with the fact that the main thread and the testers need to exchange data in real time, I think I can safely eliminate the results thread. Adding more threads is not the best way to guard against race conditions and deadlocks. Ask xine.
I’m still hung up on the deadlock issue. I have these queues through which the threads communicate. At issue is the fact that they can cause a thread to block when inserting an item if the queue is "full". How full is full? Immaterial; seeking to answer such a question is not how you guard against race conditions. Rather, it seems to me that one side should be doing non-blocking queue operations.
This is how I’m planning to revise the logic in the main thread:
test_set = set of all tests to execute
tests_pending = test_set
tests_blocked = empty set
tests_queue = multi-consumer queue to send test specs to tester threads
results_queue = multi-producer queue through which tester threads send results

while there are tests in tests_pending:
    pop a test from test_set
    if test depends on any tests that appear in tests_pending:
        add test to tests_blocked
    else:
        add test to tests_queue in a non-blocking manner
        if tests_queue is full, add test to tests_blocked

    while there are results in the results_queue:
        get a result from results_queue in a non-blocking manner
        remove the corresponding test from tests_pending

    if tests_blocked is non-empty:
        sleep for 1 second
        test_set = tests_blocked
        tests_blocked = empty set
    else:
        insert n shutdown signals, one for each thread

    go to the top of the loop and repeat until there are no more tests

while there are results in the results_queue:
    get a result from results_queue in a blocking manner

Not mentioned in the pseudocode (so it doesn't get too verbose) is the logic that checks whether a retrieved test result is actually an end-of-thread signal. Those are counted, and the whole test process is done when one has been received for each thread.
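For concreteness, here is a rough Python sketch of that main-thread loop. To be clear, this is my illustration and not FATE's actual code: the Test objects (with hypothetical id and deps attributes), the result objects (with a test_id attribute), NUM_TESTERS, and the queue size are all assumptions; the queues could just as well be Queue.Queue instances for plain threads, since both flavors raise the same Full/Empty exceptions. The only point is the non-blocking put/get pattern.

import time
import Queue              # Python 2 stdlib; provides the Full/Empty exceptions
import multiprocessing

NUM_TESTERS   = 4
tests_queue   = multiprocessing.Queue(maxsize=50)    # main thread -> tester threads
results_queue = multiprocessing.Queue()               # tester threads -> main thread

def run_main_loop(all_tests):
    tests_pending = set(t.id for t in all_tests)      # tests not yet finished
    test_set      = list(all_tests)                   # tests not yet dispatched
    tests_blocked = []

    while test_set:
        for test in test_set:
            if any(dep in tests_pending for dep in test.deps):
                tests_blocked.append(test)            # a dependency has not finished yet
            else:
                try:
                    tests_queue.put_nowait(test)      # non-blocking dispatch
                except Queue.Full:
                    tests_blocked.append(test)        # queue full; retry on a later pass

        # drain whatever results have arrived so far, without blocking
        try:
            while True:
                result = results_queue.get_nowait()
                tests_pending.discard(result.test_id)
        except Queue.Empty:
            pass

        if tests_blocked:
            time.sleep(1)                             # give the testers a chance to catch up
        test_set, tests_blocked = tests_blocked, []

    # everything has been dispatched: tell each tester to shut down...
    for _ in range(NUM_TESTERS):
        tests_queue.put(None)

    # ...then collect the remaining results plus one end-of-thread marker per tester
    shutdowns_seen = 0
    while shutdowns_seen < NUM_TESTERS:
        result = results_queue.get()                  # blocking retrieval
        if result is None:
            shutdowns_seen += 1
        else:
            tests_pending.discard(result.test_id)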
On the tester thread side, it's safe for them to do blocking test queue retrievals and blocking result queue insertions. The reason for the 1-second delay before resetting tests_blocked and looping again is that I want to guard against the situation where tests A and B are to be run, A depends on B running first, and while B is running (and happens to be a long encoding test), the main thread is spinning about, obsessively testing whether it's time to insert A into the tests queue.
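A tester's side of the exchange can then stay dead simple. Again, just a sketch, with run_test() standing in for whatever actually executes a test and returns a result object carrying its test_id:

def tester(tests_queue, results_queue):
    while True:
        test = tests_queue.get()            # blocking retrieval is safe here
        if test is None:                    # shutdown signal from the main thread
            results_queue.put(None)         # echo an end-of-thread marker back
            break
        results_queue.put(run_test(test))   # blocking insert is safe here too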
It all sounds just crazy enough to work. In fact, I coded it up and it does work, sort of. The queue gets blocked pretty quickly. Instead of sleeping, I decided it’s better to perform the put operation using a 1-second timeout.
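In terms of the sketch above, that amounts to replacing the sleep-and-retry with something like the following (again my guess at the shape of it, not the actual FATE code):

try:
    tests_queue.put(test, timeout=1)   # block for at most one second
except Queue.Full:
    tests_blocked.append(test)         # still full after a second; retry on the next pass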
Still, I’m paranoid about the precise operation of the IPC queue mechanism at work here. What happens if I try to stuff in a test spec that’s a bit too large? Will the module take whatever I give it and serialize it through the queue as soon as it can? I think an impromptu science project is in order.
big-queue.py:

#!/usr/bin/python

import multiprocessing
import Queue

# reader process: pull one item off the queue and report how big it is
def f(q):
    str = q.get()
    print "reader function got a string of %d characters" % (len(str))

q = multiprocessing.Queue()
p = multiprocessing.Process(target=f, args=(q,))
p.start()
try:
    # attempt a non-blocking put of a 100-million-character string
    q.put_nowait('a' * 100000000)
except Queue.Full:
    print "queue full"

$ ./big-queue.py
reader function got a string of 100000000 characters
Since 100 MB doesn’t even make it choke, FATE’s little test specs shouldn’t pose any difficulty.
-
Optical Drive Value Proposition
28 August 2010, by Multimedia Mike — General
I have the absolute worst luck in the optical drive department. Ever since I started building my own computers in 1995 — close to the beginning of the CD-ROM epoch — I have burned through a staggering number of optical drives. Seriously, especially in the period between about 1995 and 1998, I was going through a new drive every 4-6 months or so. This was also during that CD-ROM speed race where the drive packages kept advertising loftier ‘X’ speed ratings. I didn’t play a lot of CD-ROM games during that timeframe, though I did listen to quite a few audio CDs through the computer.
I use “optical drive” as a general term to describe CD-ROM drives, CD-R/RW drives, DVD-ROM drives, DVD-R/RW drives, and drives capable of doing any combination of reading and writing CDs and DVDs. In my observation, optical media seems to be falling out of favor somewhat, giving way to online digital distribution for things like games and software, as well as flash drives and external hard drives vs. recordable or rewritable media for backup and sneakernet duty. Somewhere along the line, I started to buy computers that didn’t even have optical drives. That’s why I have purchased at least 2 external USB drives (seen in the picture above). I don’t have much confidence that either works correctly. My main desktop until recently, a Mac Mini, has an internal optical drive that grew flaky and unreliable a few months after the unit was purchased.
I just have really rotten luck with optical drives. The most reliable drive in my house is the one on the headless machine that, until recently, was the main workhorse on the FATE farm. The eject switch doesn’t work correctly, so I have to log in remotely, 'sudo eject', walk to the other room, pop in the disc, walk back to the other room, and work with the disc.
Maybe optical media is on its way out, but I still have many hundreds of CD-ROMs. Perhaps I should move forward on this brainstorm to archive all of my optical discs on hard drives (and then think of some data mining experiments, just for the academic appeal) before it’s too late; optical discs don’t last forever.
So if I needed a good optical drive, what should I consider? I’ve always been the type to go cheap, I admit. Many of my optical drives were on the lower end of the cost spectrum, which might have played some role in their rapid replacement. However, I’m not sold on the idea that I’m getting quality just because I’m paying a higher price. That LG unit at the top of the pile up there was relatively pricey and still didn’t fare well in the long (or even medium) term.
Come to think of it, I used to have a ridiculous stockpile of castoff (but somehow still functional) optical drives. So many, in fact, that in 2004 I had a full-size PC tower that I filled with 4 working drives, just because I could. Okay, I admit that there was a period when I had some reliable drives.
That might be an idea, actually: throw together such a computer for heavy-duty archival purposes. I visited Weird Stuff Warehouse today (needed some PC100 RAM for an old machine and they came through) and I think I could put together such a box rather cheaply.
It’s a dirty job, but… well, you know the rest.