Advanced search


Keyword: - Tags -/documentation

Other articles (110)

  • Automatic MediaSPIP installation script

    25 April 2011, by

    To work around installation difficulties, mainly due to server-side software dependencies, an all-in-one bash installation script was created to ease this step on a server running a compatible Linux distribution.
    You must have SSH access to your server and a "root" account in order to use it, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation on how to use the installation script (...)

  • Adding user-specific information and other changes to author-related behavior

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviors (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.

  • What exactly does this script do?

    18 January 2011, by

    This script is written in bash, so it can easily be used on any server.
    It is only compatible with a specific list of distributions (see List of compatible distributions).
    Installing MediaSPIP's dependencies
    Its main role is to install all of the software dependencies needed on the server side, namely:
    The basic tools needed to install the rest of the dependencies
    The development tools: build-essential (via APT from the official repositories); (...)

On other sites (4994)

  • Multiprocess FATE Revisited

    26 June 2010, by Multimedia Mike — FATE Server, Python

    I thought I had brainstormed a simple, elegant, multithreaded, deadlock-free refactoring for FATE in a previous post. However, I sort of glossed over the test ordering logic which I had not yet prototyped. The grim, possibly deadlock-afflicted reality is that the main thread needs to be notified as tests are completed. So, the main thread sends test specs through a queue to be executed by n tester threads and those threads send results to a results aggregator thread. Additionally, the results aggregator will need to send completed test IDs back to the main thread.

    [graph: main thread → tests queue → tester threads → results queue → results aggregator → main thread]

    But when I step back and look at the graph, I can’t rationalize why there should be a separate results aggregator thread. That was added to cut down on deadlock possibilities, since the main thread and the tester threads would not be waiting for data from each other. Now that I’ve come to terms with the fact that the main thread and the testers need to exchange data in real time, I think I can safely eliminate the results thread. Adding more threads is not the best way to guard against race conditions and deadlocks. Ask xine.

    I’m still hung up on the deadlock issue. I have these queues through which the threads communicate. At issue is the fact that they can cause a thread to block when inserting an item if the queue is "full". How full is full? Immaterial; seeking to answer such a question is not how you guard against race conditions. Rather, it seems to me that one side should be doing non-blocking queue operations.
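
    For concreteness, here is a minimal sketch of what non-blocking operations on such queues look like with Python’s multiprocessing module (the queue names and the bound of 8 are made-up placeholders, not FATE’s actual code); put_nowait() raises Queue.Full instead of blocking, and get_nowait() raises Queue.Empty:

    import multiprocessing
    import Queue   # the Full/Empty exceptions live here, even for multiprocessing queues

    tests_queue = multiprocessing.Queue(8)    # main -> testers, bounded
    results_queue = multiprocessing.Queue()   # testers -> main

    try:
        tests_queue.put_nowait("some-test-spec")  # returns immediately, full or not
    except Queue.Full:
        pass  # no room right now; defer the test instead of blocking

    try:
        result = results_queue.get_nowait()  # likewise returns immediately
    except Queue.Empty:
        result = None  # no finished tests yet; carry on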

    This is how I’m planning to revise the logic in the main thread:

    test_set = set of all tests to execute
    tests_pending = test_set
    tests_blocked = empty set
    tests_queue = multi-consumer queue to send test specs to tester threads
    results_queue = multi-producer queue through which tester threads send results

    while there are tests in tests_pending:
      pop a test from test_set
      if test depends on any tests that appear in tests_pending:
        add test to tests_blocked
      else:
        add test to tests_queue in a non-blocking manner
        if tests_queue is full, add test to tests_blocked

      while there are results in the results_queue:
        get a result from results_queue in a non-blocking manner
        remove the corresponding test from tests_pending

      if tests_blocked is non-empty:
        sleep for 1 second
        test_set = tests_blocked
        tests_blocked = empty set
      else:
        insert n shutdown signals, one for each thread

      go to the top of the loop and repeat until there are no more tests

    while there are results in the results_queue:
      get a result from results_queue in a blocking manner

    Not mentioned in the pseudocode (so it doesn’t get too verbose) is the logic to check whether a retrieved test result is actually an end-of-thread signal. These are tallied, and the whole test process is done when one has been received for each thread.

    On the tester thread side, it’s safe for them to do blocking test-queue retrievals and blocking result-queue insertions. The reason for the 1-second delay before resetting tests_blocked and looping again is that I want to guard against the situation where tests A and B are to be run, A depends on B running first, and, while B is running (and happens to be a long encoding test), the main thread is spinning about, obsessively testing whether it’s time to insert A into the tests queue.
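
    To make that concrete, here is one way the pseudocode might translate into Python. This is only a sketch of my reading of the logic above, not FATE’s actual code: depends_on(), END_OF_THREAD, and the parameter names are stand-ins I made up.

    import Queue
    import time

    END_OF_THREAD = None  # sentinel: shutdown signal going out, end-of-thread ack coming back

    def main_loop(test_set, depends_on, num_testers, tests_queue, results_queue):
        # depends_on(test) -> set of tests that must run first (hypothetical helper)
        tests_pending = set(test_set)
        tests_blocked = set()
        threads_done = 0

        while tests_pending:
            # schedule whatever is schedulable, never blocking
            while test_set:
                test = test_set.pop()
                if depends_on(test) & tests_pending:
                    tests_blocked.add(test)
                else:
                    try:
                        tests_queue.put_nowait(test)
                    except Queue.Full:
                        tests_blocked.add(test)

            # drain any results that are already available, never blocking
            while True:
                try:
                    result = results_queue.get_nowait()
                except Queue.Empty:
                    break
                if result is END_OF_THREAD:
                    threads_done += 1
                else:
                    tests_pending.discard(result)

            if tests_blocked:
                time.sleep(1)  # a prerequisite may be a long test; don't spin
                test_set = tests_blocked
                tests_blocked = set()
            else:
                # everything left is already queued; append one shutdown
                # signal per tester thread behind the queued tests
                for _ in range(num_testers):
                    tests_queue.put(END_OF_THREAD)
                break

        # collect the remaining results, blocking now that there is
        # nothing left to schedule, until every tester checks out
        while threads_done < num_testers:
            result = results_queue.get()
            if result is END_OF_THREAD:
                threads_done += 1
            else:
                tests_pending.discard(result)

    A tester thread is then just a loop of blocking get() calls on tests_queue that echoes END_OF_THREAD back through results_queue before exiting.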

    It all sounds just crazy enough to work. In fact, I coded it up and it does work, sort of. The queue gets blocked pretty quickly. Instead of sleeping, I decided it’s better to perform the put operation using a 1-second timeout.
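
    In queue terms that change is tiny; a sketch, reusing the made-up names from above:

    try:
        tests_queue.put(test, timeout=1)  # wait up to 1 second for a free slot
    except Queue.Full:
        tests_blocked.add(test)  # still full after a second; defer the test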

    Still, I’m paranoid about the precise operation of the IPC queue mechanism at work here. What happens if I try to stuff in a test spec that’s a bit too large? Will the module take whatever I give it and serialize it through the queue as soon as it can? I think an impromptu science project is in order.

    big-queue.py:

    PYTHON:
    #!/usr/bin/python

    import multiprocessing
    import Queue

    def f(q):
        # reader process: block until a string arrives, then report its size
        s = q.get()
        print "reader function got a string of %d characters" % (len(s))

    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=f, args=(q,))
    p.start()
    try:
        # non-blocking put of a 100 MB string
        q.put_nowait('a' * 100000000)
    except Queue.Full:
        print "queue full"

    $ ./big-queue.py
    reader function got a string of 100000000 characters

    Since 100 MB doesn’t even make it choke, FATE’s little test specs shouldn’t pose any difficulty.

  • Resurrecting SCD

    6 August 2010, by Multimedia Mike — Reverse Engineering

    When I became interested in reverse engineering all the way back in 2000, the first Win32 disassembler I stumbled across was simply called "Win32 Program Disassembler", authored by one Sang Cho. I took to calling it ’scd’ for Sang Cho’s Disassembler. The original program versions and source code are still available for download. I remember being able to compile v0.23 of the source code with gcc under Unix; 0.25 is a no-go due to its extensive reliance on the Win32 development environment.

    I recently wanted to use scd again but had some trouble compiling. As was the case the first time I tried compiling the source code a decade ago, it’s necessary to transform line endings from DOS -> Unix using ’dos2unix’ (I see that this has been renamed to/replaced by ’fromdos’ on Ubuntu).

    Beyond this, it seems that there are some C constructs that used to be valid but are now rejected by gcc. The first looks like this:

    C:
    return (int) c = *(PBYTE)((int)lpFile + vCodeOffset);

    Ahem, "error: lvalue required as left operand of assignment". Removing the "(int)" before the ’c’ makes the problem go away. It’s a strange way to write a return statement in general. However, ’c’ is a global variable that is apparently part of some YACC/BISON-type output.

    The second issue arises when a switch block has a default label with no code inside it. Current gcc doesn’t like that; it’s necessary to provide at least a break statement after the default label.

    Finally, the program turns out not to be 64-bit safe. It is necessary to compile it in 32-bit mode (compile and link with the ’-m32’ flag, or build on a 32-bit system). The static 32-bit binary should run fine under a 64-bit kernel.

    Alternatively: what are some other Win32 disassemblers that work under Linux?

  • VP8 Documentation and Test Vector Contributions

    14 October 2010, by noreply@blogger.com (John Luther)

    Janne Salonen of the WebM team in Oulu, Finland (formerly On2 Finland) has added a tabular description of the VP8 syntax to the VP8 Bitstream Guide. The new annex provides a concise reference for the elements in the bitstream and, we hope, will make implementing and testing VP8 decoders easier. The updated document and source can be downloaded from our documentation page.

    We’re working on more improvements to the bitstream guide and invite other community members to help. As with the VP8 code, we gladly give attribution credit to documentation contributors and have added an AUTHORS file to the bitstream-guide Git repository.

    New VP8 Test Vectors

    The Oulu team has also produced some new VP8 test vectors. We analyzed a large set of WebM videos and produced two important corner use cases. The first produces the worst-case memory bandwidth (i.e., lots of global motion, all fractional motion vectors). The second produces the worst-case boolean decoder bin rate over dozens of consecutive frames. These vectors have been added to the VP8 test repository. Our team will consider other corner cases in the next batch of streams we add to the repository.

    Aki Kuusela is Hantro Embedded Engineering Manager at Google.