
Other articles (80)
Updating from version 0.1 to 0.2
June 24, 2013. An explanation of the notable changes made in moving from MediaSPIP version 0.1 to version 0.3. What's new?
At the level of software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
Customising by adding a logo, a banner or a background image
September 5, 2013. Some themes take three customisation elements into account: adding a logo; adding a banner; adding a background image.
Writing a news item
June 21, 2013. Present changes to your MediaSPIP site, or news about your projects, through the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the form used to create a news item.
News item creation form: for a document of the news type, the default fields are: publication date (customise the publication date) (...)
On other sites (13551)
Multiprocess FATE Revisited
June 26, 2010, by Multimedia Mike — FATE Server, Python
I thought I had brainstormed a simple, elegant, multithreaded, deadlock-free refactoring for FATE in a previous post. However, I sort of glossed over the test ordering logic, which I had not yet prototyped. The grim, possibly deadlock-afflicted reality is that the main thread needs to be notified as tests are completed. So the main thread sends test specs through a queue to be executed by n tester threads, and those threads send results to a results aggregator thread. Additionally, the results aggregator will need to send completed test IDs back to the main thread.
But when I step back and look at the graph, I can't rationalize why there should be a separate results aggregator thread. That was added to cut down on deadlock possibilities, since the main thread and the tester threads would not be waiting for data from each other. Now that I've come to terms with the fact that the main thread and the testers need to exchange data in real time, I think I can safely eliminate the results thread. Adding more threads is not the best way to guard against race conditions and deadlocks. Ask xine.
I'm still hung up on the deadlock issue. I have these queues through which the threads communicate. At issue is the fact that they can cause a thread to block when inserting an item if the queue is "full". How full is full? Immaterial; seeking to answer such a question is not how you guard against race conditions. Rather, it seems to me that one side should be doing non-blocking queue operations.
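For concreteness, this is roughly what the non-blocking variants look like on Python's multiprocessing queues (a minimal Python 2-style sketch to match the rest of this post; the Full and Empty exception classes come from the standard Queue module):

import multiprocessing
import Queue   # supplies the Full and Empty exception classes

q = multiprocessing.Queue(maxsize=4)   # a deliberately small queue

# producer side: refuse to block if the queue is full
try:
    q.put_nowait("some test spec")
except Queue.Full:
    pass   # set the item aside and try again on a later pass

# consumer side: refuse to block if nothing has arrived yet
try:
    item = q.get_nowait()
except Queue.Empty:
    item = None   # no result ready; carry on with other work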
This is how I'm planning to revise the logic in the main thread:
test_set = set of all tests to execute
tests_pending = test_set
tests_blocked = empty set
tests_queue = multi-consumer queue to send test specs to tester threads
results_queue = multi-producer queue through which tester threads send results

while there are tests in tests_pending:
    pop a test from test_set
    if test depends on any tests that appear in tests_pending:
        add test to tests_blocked
    else:
        add test to tests_queue in a non-blocking manner
        if tests_queue is full, add test to tests_blocked

    while there are results in the results_queue:
        get a result from results_queue in a non-blocking manner
        remove the corresponding test from tests_pending

    if tests_blocked is non-empty:
        sleep for 1 second
        test_set = tests_blocked
        tests_blocked = empty set
    else:
        insert n shutdown signals, one for each thread

    go to the top of the loop and repeat until there are no more tests

while there are results in the results_queue:
    get a result from results_queue in a blocking manner

Not mentioned in the pseudocode (so it doesn't get too verbose) is the logic to check whether a retrieved test result is actually an end-of-thread signal. These are counted, and the whole test process is done when one has been received for each thread.
On the tester thread side, it's safe for them to do blocking test queue retrievals and blocking result queue insertions. The reason for the 1-second delay before resetting tests_blocked and looping again is that I want to guard against the situation where tests A and B are to be run, A depends on B running first, and while B is running (and happens to be a long encoding test), the main thread is spinning about, obsessively testing whether it's time to insert A into the tests queue.
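On that basis, a tester worker might look something like this minimal sketch (just an illustration, not the actual FATE code; run_test() and the None shutdown sentinel are stand-ins):

def run_test(spec):
    # stand-in for the real FATE test runner; imagine it builds the command
    # line from the spec, runs it, and returns the completed test's ID
    return spec

def tester(tests_queue, results_queue):
    # Blocking operations are fine on this side: wait for a spec, run it,
    # push the result back, and quit when the shutdown sentinel shows up.
    while True:
        spec = tests_queue.get()            # blocks until something arrives
        if spec is None:                    # hypothetical shutdown sentinel
            results_queue.put(None)         # report end-of-thread to the main loop
            break
        results_queue.put(run_test(spec))   # blocks only if the results queue is full

# each tester would be launched with something like:
#   multiprocessing.Process(target=tester, args=(tests_queue, results_queue)).start()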
It all sounds just crazy enough to work. In fact, I coded it up and it does work, sort of. The queue gets blocked pretty quickly. Instead of sleeping, I decided it’s better to perform the put operation using a 1-second timeout.
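Putting the revised plan together, the main thread's dispatch loop might look roughly like this (a sketch only, in the same Python 2 style as the scripts here; the depends_on mapping, the None sentinel, and the idea that a result is just its test ID are assumptions for illustration):

import Queue   # Full and Empty exception classes

def dispatch(all_tests, depends_on, tests_queue, results_queue, num_testers):
    # all_tests: iterable of test IDs
    # depends_on: dict mapping a test ID to the set of test IDs it waits on
    tests_pending = set(all_tests)     # tests whose results have not come back
    test_set = set(all_tests)          # tests not yet handed to a tester
    while test_set:
        tests_blocked = set()
        for test in test_set:
            if depends_on.get(test, set()) & tests_pending:
                tests_blocked.add(test)        # a prerequisite has not finished yet
            else:
                try:
                    # wait at most 1 second instead of a separate sleep
                    tests_queue.put(test, timeout=1)
                except Queue.Full:
                    tests_blocked.add(test)    # queue full; retry on the next pass
        # drain whatever results are already available, without blocking
        while True:
            try:
                result = results_queue.get_nowait()
            except Queue.Empty:
                break
            tests_pending.discard(result)      # assume a result is just its test ID
        test_set = tests_blocked
    for _ in range(num_testers):
        tests_queue.put(None)                  # one shutdown sentinel per tester
    # collect the stragglers, blocking, until every sentinel has come back
    # (results_queue is assumed to have no maxsize, so testers never block on it)
    sentinels = 0
    while sentinels < num_testers:
        result = results_queue.get()
        if result is None:
            sentinels += 1
        else:
            tests_pending.discard(result)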
Still, I'm paranoid about the precise operation of the IPC queue mechanism at work here. What happens if I try to stuff in a test spec that's a bit too large? Will the module take whatever I give it and serialize it through the queue as soon as it can? I think an impromptu science project is in order.
big-queue.py:

#!/usr/bin/python

import multiprocessing
import Queue

def f(q):
    # child process: pull one item off the queue and report its size
    str = q.get()
    print "reader function got a string of %d characters" % (len(str))

q = multiprocessing.Queue()
p = multiprocessing.Process(target=f, args=(q,))
p.start()
try:
    # try to push roughly 100 MB through the queue without blocking
    q.put_nowait('a' * 100000000)
except Queue.Full:
    print "queue full"

$ ./big-queue.py
reader function got a string of 100000000 characters
Since 100 MB doesn’t even make it choke, FATE’s little test specs shouldn’t pose any difficulty.
FFmpeg and Code Coverage Tools
August 21, 2010, by Multimedia Mike — FATE Server, Python
Code coverage tools likely occupy the same niche as profiling tools: tools that you're supposed to use somewhere during the software engineering process but probably never quite get around to, usually because you're too busy adding features or fixing bugs. But there may come a day when you wish to learn how much of your code is actually being exercised in normal production use. For example, the team charged with continuously testing the FFmpeg project would be curious to know how much code is being exercised, especially since many of the FATE test specs explicitly claim to be "exercising XYZ subsystem".
The primary GNU code coverage tool is called gcov and is probably already on your GNU-based development system. I set out to determine how much FFmpeg source code is exercised while running the full FATE suite. I ran into some problems when trying to use gcov on a project-wide scale. I spackled around those holes with some very ad-hoc solutions. I’m sure I was just overlooking some more obvious solutions about which you all will be happy to enlighten me.
Results
I've learned to cut to the chase earlier in blog posts (results first, methods second). With that, here are the results I produced from this experiment. This Google spreadsheet contains 3 sheets: the first contains code coverage stats for a bunch of FFmpeg C files, sorted first by percent coverage (ascending), then by number of lines (descending), thus highlighting which files have the most uncovered code (ffserver.c currently tops that chart). The second sheet has files for which no stats were generated. The third sheet has "problems". These files were rejected by my ad-hoc script.
Here's a link to the data in CSV if you want to play with it yourself.
Using gcov with FFmpeg
To instrument a program for gcov analysis, compile and link the target program with the -fprofile-arcs and -ftest-coverage options. These need to be applied at both the compile and link stages, so in the case of FFmpeg, configure with:

./configure \
  --extra-cflags="-fprofile-arcs -ftest-coverage" \
  --extra-ldflags="-fprofile-arcs -ftest-coverage"
The building process results in a bunch of .gcno files which pertain to code coverage. After running the program as normal, a bunch of .gcda files are generated. To get coverage statistics from these files, run 'gcov sourcefile.c'. This will print some basic statistics as well as generate a corresponding .gcov file with more detailed information about exactly which lines have been executed, and how many times. Be advised that the source file must either live in the same directory from which gcov is invoked, or else the directory containing the corresponding object (and .gcno/.gcda) files must be passed to gcov via the '-o, --object-directory' option.
Resetting Statistics
Statistics in the .gcda files are cumulative. Should you wish to reset the statistics, running this in the build directory should suffice:

find . -name "*.gcda" | xargs rm -f
Getting Project-Wide Data
As mentioned, I had to get a little creative here to get a big picture of FFmpeg code coverage. After building FFmpeg with the code coverage options and running FATE, I swept over every C file with this loop:

for file in `find . -name "*.c"`; do
  echo "*****" $file
  gcov -o `dirname $file` `basename $file`
done > ffmpeg-code-coverage.txt 2>&1
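If you would rather drive the sweep from Python, a rough equivalent of the shell loop might look like this (a sketch only, assuming it is run from the top of an in-tree FFmpeg build so the .gcno/.gcda files sit next to the sources):

import os
import subprocess

with open("ffmpeg-code-coverage.txt", "w") as log:
    for root, dirs, files in os.walk("."):
        for name in files:
            if not name.endswith(".c"):
                continue
            path = os.path.join(root, name)
            log.write("***** %s\n" % path)
            log.flush()   # keep the marker ahead of gcov's own output
            # -o points gcov at the directory that holds the object files
            subprocess.call(["gcov", "-o", root, path],
                            stdout=log, stderr=subprocess.STDOUT)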
After that, I ran the ffmpeg-code-coverage.txt file through a custom Python script to print out the 3 CSV files that I later dumped into the Google Spreadsheet.
Further Work
I'm sure there are better ways to do this, and I'm sure you all will let me know what they are. But I have to get the ball rolling somehow.
There's also TestCocoon. I'd like to try that program and see if it addresses some of gcov's shortcomings (assuming they are indeed shortcomings rather than oversights).
Source for the script: process-gcov-slop.py

#!/usr/bin/python

import re

# the log produced by the gcov sweep above
lines = open("ffmpeg-code-coverage.txt").read().splitlines()

no_coverage = ""
coverage = "filename, % covered, total lines\n"
problems = ""

# matches gcov's summary line, e.g. "Lines executed:NN.NN% of NN"
stats_exp = re.compile('Lines executed:(\d+\.\d+)% of (\d+)')

for i in xrange(len(lines)):
    line = lines[i]
    if line.startswith("***** "):
        # the sweep prefixed each file's gcov output with "***** ./path/to/file.c"
        filename = line[line.find('./')+2:]
        i += 1
        # (reassigning i does not advance the outer loop; the skipped lines are
        # simply revisited later and ignored unless they start with "***** ")
        if lines[i].find(":cannot open graph file") != -1:
            # no .gcno file was produced for this source file
            no_coverage += filename + '\n'
        else:
            # advance to the chunk of gcov output that mentions this file
            while lines[i].find(filename) == -1 and not lines[i].startswith("***** "):
                i += 1
            try:
                (percent, total_lines) = stats_exp.findall(lines[i+1])[0]
                coverage += filename + ', ' + percent + ', ' + total_lines + '\n'
            except IndexError:
                # the summary line was not where this script expected it
                problems += filename + '\n'

open("no_coverage.csv", 'w').write(no_coverage)
open("coverage.csv", 'w').write(coverage)
open("problems.csv", 'w').write(problems)
2 GB Should Be Enough For Me
August 31, 2010, by Multimedia Mike — General
My new EeePC 1201PN netbook has 2 GB of RAM. Call me shortsighted, but I feel like "that ought to be enough for me". I'm not trying to claim that it ought to be enough for everyone. I am, however, questioning the utility of swap space for those skilled in the art of computing.
Technology marches on: this ancient 128 MB RAM module is larger than my digital camera's battery charger… and I just realized that comparison doesn't make any sense.
Does anyone else have this issue? It has gotten to the point where I deliberately disable swap partitions on the Linux desktops I use ('swapoff -a') and try not to allocate a swap partition at install time. I'm encountering Linux installers that seem to be making it tougher to do this, essentially pleading with you to create a swap partition: "Seriously, you might need 8 total gigabytes of virtual memory one day." I'm of the opinion that if 2 GB of physical memory isn't enough for my normal operation, I might need to re-examine my processes.
In the course of my normal computer usage (which is definitely not normal by the standard of a normal computer user), swap space is just another way for the software to screw things up behind the scenes. In this case, the mistake is performance-related, as the software makes poor decisions about what needs to be kept in RAM.
And then there are the netbook-oriented Linux distributions that insisted upon setting aside half a gigabyte of my Eee PC 701's already constrained 4 gigabytes of on-board flash memory as swap, never offering the choice to opt out of swap space during installation. Earmarking flash memory for swap space is generally regarded as exceptionally poor form. To be fair, I don't know that SSDs have been all that prevalent in netbooks since the very earliest units of the netbook epoch.
Am I alone in this? Does anyone else prefer to keep all of their memory physical in this day and age?