
Medias (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (50)
-
Mise à jour de la version 0.1 vers 0.2
24 June 2013. An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed, in favour of (...)
-
Support de tous types de médias
10 April 2011. Unlike many software packages and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other data (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google (...)
-
Supporting all media types
13 April 2011. Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
On other websites (7540)
-
Multiprocess FATE Revisited
26 June 2010, by Multimedia Mike — FATE Server, Python
I thought I had brainstormed a simple, elegant, multithreaded, deadlock-free refactoring for FATE in a previous post. However, I sort of glossed over the test ordering logic, which I had not yet prototyped. The grim, possibly deadlock-afflicted reality is that the main thread needs to be notified as tests are completed. So, the main thread sends test specs through a queue to be executed by n tester threads, and those threads send results to a results aggregator thread. Additionally, the results aggregator will need to send completed test IDs back to the main thread.
But when I step back and look at the graph, I can't rationalize why there should be a separate results aggregator thread. That was added to cut down on deadlock possibilities, since the main thread and the tester threads would not be waiting for data from each other. Now that I've come to terms with the fact that the main thread and the testers need to exchange data in real time, I think I can safely eliminate the results thread. Adding more threads is not the best way to guard against race conditions and deadlocks. Ask xine.
I’m still hung up on the deadlock issue. I have these queues through which the threads communicate. At issue is the fact that they can cause a thread to block when inserting an item if the queue is "full". How full is full? Immaterial; seeking to answer such a question is not how you guard against race conditions. Rather, it seems to me that one side should be doing non-blocking queue operations.
This is how I’m planning to revise the logic in the main thread:
test_set = set of all tests to execute
tests_pending = test_set
tests_blocked = empty set
tests_queue = multi-consumer queue to send test specs to tester threads
results_queue = multi-producer queue through which tester threads send results

while there are tests in tests_pending:
    pop a test from test_set
    if test depends on any tests that appear in tests_pending:
        add test to tests_blocked
    else:
        add test to tests_queue in a non-blocking manner
        if tests_queue is full, add test to tests_blocked

    while there are results in the results_queue:
        get a result from results_queue in a non-blocking manner
        remove the corresponding test from tests_pending

    if tests_blocked is non-empty:
        sleep for 1 second
        test_set = tests_blocked
        tests_blocked = empty set
    else:
        insert n shutdown signals, one for each thread

    go to the top of the loop and repeat until there are no more tests

while there are results in the results_queue:
    get a result from results_queue in a blocking manner

Not mentioned in the pseudocode (so it doesn't get too verbose) is the logic to check whether a retrieved test result is actually an end-of-thread signal. These are counted, and the whole test process is done when one has been received for each thread.
On the tester thread side, it's safe for them to do blocking test queue retrievals and blocking result queue insertions. The reason for the 1-second delay before resetting tests_blocked and looping again is that I want to guard against the situation where tests A and B are to be run, A depends on B running first, and while B is running (and happens to be a long encoding test), the main thread is spinning about, obsessively testing whether it's time to insert A into the tests queue.
It all sounds just crazy enough to work. In fact, I coded it up and it does work, sort of. The queue gets blocked pretty quickly. Instead of sleeping, I decided it’s better to perform the put operation using a 1-second timeout.
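To make that concrete, here is a minimal sketch of how such a main loop could be structured in modern Python 3 (threading plus queue.Queue), with a timed put in place of the sleep and non-blocking drains of the results queue. The test-spec dictionaries, the run_test() stub, the thread count and the queue size are invented for illustration; this is not FATE's actual harness code.

import queue
import threading
import time

def run_test(spec):
    # Stand-in for actually running a FATE test; returns (test_id, passed).
    time.sleep(0.01)
    return (spec["id"], True)

def tester(tests_queue, results_queue):
    # Tester threads may block freely on both queues; only the main
    # thread needs to stay responsive.
    while True:
        spec = tests_queue.get()
        if spec is None:                        # shutdown sentinel
            results_queue.put(None)
            return
        results_queue.put(run_test(spec))

def run_all(all_tests, num_threads=4):
    tests_queue = queue.Queue(maxsize=10)       # bounded, so a put can fail
    results_queue = queue.Queue()
    threads = [threading.Thread(target=tester, args=(tests_queue, results_queue))
               for _ in range(num_threads)]
    for t in threads:
        t.start()

    tests_pending = {spec["id"] for spec in all_tests}
    test_set = list(all_tests)
    while tests_pending:
        blocked = []
        for spec in test_set:
            if any(dep in tests_pending for dep in spec.get("deps", [])):
                blocked.append(spec)            # a dependency has not finished yet
                continue
            try:
                tests_queue.put(spec, timeout=1)    # timed put instead of sleeping
            except queue.Full:
                blocked.append(spec)
        # Drain whatever results are ready without blocking.
        while True:
            try:
                test_id, passed = results_queue.get_nowait()
            except queue.Empty:
                break
            tests_pending.discard(test_id)
        if blocked:
            time.sleep(1)       # give long-running dependencies a chance to finish
        elif tests_pending:
            # Everything is dispatched; wait for one more result before looping.
            test_id, passed = results_queue.get()
            tests_pending.discard(test_id)
        test_set = blocked

    # One shutdown sentinel per tester thread; count the echoes coming back.
    for _ in threads:
        tests_queue.put(None)
    acks = 0
    while acks < num_threads:
        if results_queue.get() is None:
            acks += 1
    for t in threads:
        t.join()

if __name__ == "__main__":
    run_all([{"id": "a"}, {"id": "b", "deps": ["a"]}, {"id": "c"}])

The sentinel bookkeeping at the end mirrors the end-of-thread accounting mentioned above: the run only finishes once one None has come back per tester thread.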
Still, I’m paranoid about the precise operation of the IPC queue mechanism at work here. What happens if I try to stuff in a test spec that’s a bit too large? Will the module take whatever I give it and serialize it through the queue as soon as it can? I think an impromptu science project is in order.
big-queue.py:
#!/usr/bin/python

import multiprocessing
import Queue

def f(q):
    str = q.get()
    print "reader function got a string of %d characters" % (len(str))

q = multiprocessing.Queue()
p = multiprocessing.Process(target=f, args=(q,))
p.start()

try:
    q.put_nowait('a' * 100000000)
except Queue.Full:
    print "queue full"
$ ./big-queue.py
reader function got a string of 100000000 characters
Since 100 MB doesn’t even make it choke, FATE’s little test specs shouldn’t pose any difficulty.
-
Monster Battery Power Revisited
28 May 2010, by Multimedia Mike — Python, Science Projects
So I have this new fat netbook battery and I performed an experiment to determine how long it really lasts. In my last post on the matter, it was suggested that I should rely on the information that gnome-power-manager is giving me. However, I have rarely seen GPM report more than about 2 hours of charge; even on a full battery, it only reports 3h25m, whereas I profiled it as lasting over 5 hours in my typical use. So I started digging to understand how GPM gets its numbers and to determine whether, perhaps, it's not getting accurate data from the system.
I started poking around /proc for the data I wanted. You can learn a lot in /proc as long as you know the right question to ask. I had to remember what the power subsystem is called — ACPI — and this led me to /proc/acpi/battery/BAT0/state which has data such as:
present:                 yes
capacity state:          ok
charging state:          charged
present rate:            unknown
remaining capacity:      100 mAh
present voltage:         8326 mV

"Remaining capacity" rated in mAh is a little odd; I would later determine that this should actually be expressed as a percentage (i.e., 100% charge at the time of this reading). Examining the GPM source code, it seems to determine the remaining battery life as a function of the current CPU load (queried via /proc/stat) and the battery state queried via a facility called DeviceKit. I couldn't immediately find any source code for the latter, but I was able to install a utility called 'devkit-power'. Mostly, it appears to rehash data already found in the above /proc file.
Curiously, the file /proc/acpi/battery/BAT0/info, which displays essential information about the battery, reports the design capacity of my battery as only 4400 mAh which is true for the original battery; the new monster battery is supposed to be 10400 mAh. I can imagine that all of these data points could be conspiring to under-report my remaining battery life.
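As for the other input GPM appears to consult, the instantaneous CPU load, it can be sampled from the aggregate cpu line of /proc/stat by comparing two snapshots of its counters. Here is a minimal Python 3 illustration of that idea; it is not GPM's actual code, and the one-second sample window is arbitrary.

#!/usr/bin/python3
# Illustration only: estimate overall CPU load from two samples of /proc/stat.
# The aggregate line looks like "cpu  user nice system idle iowait irq softirq ...".

import time

def read_cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()
    values = [int(v) for v in fields[1:]]
    idle = values[3] + values[4]    # idle + iowait jiffies
    return idle, sum(values)

idle1, total1 = read_cpu_times()
time.sleep(1)                       # arbitrary sample window
idle2, total2 = read_cpu_times()

busy = 1.0 - (idle2 - idle1) / (total2 - total1)
print("CPU load over the last second: %.1f%%" % (busy * 100))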
Science project: Repeat the previous power-related science project but also parse and track the remaining capacity and present voltage fields from the battery state proc file.
Let's skip straight to the results, which are consistent with my last set of results in terms of longevity.
So there is definitely something strange going on with the reporting: the 4400 mAh battery reports discharging at a linear rate, while the 10400 mAh battery reports a precipitous dropoff after 60%.
Another curious item is that my script broke at first when there was 20% power remaining which, as you can imagine, is a really annoying time to discover such a bug. At that point, the "time to empty" reported by devkit-power jumped from 0 seconds to 20 hours (the first state change observed for that field).
Here's my script, this time elevated from a Bash script to Python. It requires xdotool and devkit-power to be installed (both should be available in your distro's package manager).
#!/usr/bin/python

import commands
import random
import sys
import time

XDOTOOL = "/usr/bin/xdotool"
BATTERY_STATE = "/proc/acpi/battery/BAT0/state"
DEVKIT_POWER = "/usr/bin/devkit-power -i /org/freedesktop/DeviceKit/Power/devices/battery_BAT0"

print "count, unixtime, proc_remaining_capacity, proc_present_voltage, devkit_percentage, devkit_voltage"

count = 0
while 1:
    commands.getstatusoutput("%s mousemove %d %d" % (XDOTOOL, random.randrange(0, 800), random.randrange(0, 480)))

    battery_state = open(BATTERY_STATE).read().splitlines()
    for line in battery_state:
        if line.startswith("remaining capacity:"):
            proc_remaining_capacity = int(line.lstrip("remaining capacity: ").rstrip("mAh"))
        elif line.startswith("present voltage:"):
            proc_present_voltage = int(line.lstrip("present voltage: ").rstrip("mV"))

    devkit_state = commands.getoutput(DEVKIT_POWER).splitlines()
    for line in devkit_state:
        line = line.strip()
        if line.startswith("percentage:"):
            devkit_percentage = int(line.lstrip("percentage:").rstrip('%'))
        elif line.startswith("voltage:"):
            devkit_voltage = float(line.lstrip("voltage:").rstrip('V')) * 1000

    print "%d, %d, %d, %d, %d, %d" % (count, time.time(), proc_remaining_capacity, proc_present_voltage, devkit_percentage, devkit_voltage)
    sys.stdout.flush()
    time.sleep(60)
    count += 1
-
Ubuntu Lucid script: x264
4 March 2012
In the installation script's log I get this error:

Downloading, compiling and installing x264
Initialized empty Git repository in /usr/local/src/x264/.git/
Package x264 was not found in the pkg-config search path.
Perhaps you should add the directory containing `x264.pc'
to the PKG_CONFIG_PATH environment variable
No package 'x264' found

If I type "echo $PKG_CONFIG_PATH" I get an empty line.
I suppose what follows is related:

Makefile:3: config.mak: No such file or directory
cat: config.h: No such file or directory
./configure
Found yasm 0.8.0.2194
Minimum version is yasm-1.0.0
If you really want to compile without asm, configure with --disable-asm.
make: *** [config.mak] Error 1
Found yasm 0.8.0.2194
Minimum version is yasm-1.0.0
If you really want to compile without asm, configure with --disable-asm.