Advanced search

Media (0)

Keyword: - Tags - /configuration

No media matching your criteria is available on this site.

Other articles (69)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database, named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • Making files available

    14 April 2011, by

    By default, when it is initialized, MediaSPIP does not allow visitors to download files, whether they are the originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is possible and easy to let visitors access these documents, in various forms.
    All of this happens on the template configuration page. You need to go to the channel’s administration area and choose, in the navigation, (...)

On other sites (7871)

  • FFmpeg and Code Coverage Tools

    21 August 2010, by Multimedia Mike — FATE Server, Python

    Code coverage tools likely occupy the same niche as profiling tools: tools that you’re supposed to use somewhere during the software engineering process but probably never quite get around to, usually because you’re too busy adding features or fixing bugs. But there may come a day when you wish to learn how much of your code is actually being exercised in normal production use. For example, the team charged with continuously testing the FFmpeg project would be curious to know how much code is being exercised, especially since many of the FATE test specs explicitly claim to be "exercising XYZ subsystem".

    The primary GNU code coverage tool is called gcov and is probably already on your GNU-based development system. I set out to determine how much FFmpeg source code is exercised while running the full FATE suite. I ran into some problems when trying to use gcov on a project-wide scale. I spackled around those holes with some very ad-hoc solutions. I’m sure I was just overlooking some more obvious solutions about which you all will be happy to enlighten me.

    Results
    I’ve learned to cut to the chase earlier in blog posts (results first, methods second). With that, here are the results I produced from this experiment. This Google spreadsheet contains 3 sheets: the first contains code coverage stats for a bunch of FFmpeg C files sorted first by percent coverage (ascending), then by number of lines (descending), thus highlighting which files have the most uncovered code (ffserver.c currently tops that chart). The second sheet has files for which no stats were generated. The third sheet has "problems". These files were rejected by my ad-hoc script.

    Here’s a link to the data in CSV if you want to play with it yourself.
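
    For anyone who wants to reproduce the spreadsheet’s ordering locally, here is a small sketch (my own addition, not from the original post) that sorts the coverage.csv produced by the script at the end of this article by percent covered (ascending) and then by total lines (descending):

      # skip the "filename, % covered, total lines" header, then sort
      # numerically on column 2 (ascending) and column 3 (descending)
      tail -n +2 coverage.csv | sort -t',' -k2,2n -k3,3nr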

    Using gcov with FFmpeg
    To instrument a program for gcov analysis, compile and link the target program with the -fprofile-arcs and -ftest-coverage options. These need to be applied at both the compile and link stages, so in the case of FFmpeg, configure with:

      ./configure \
        --extra-cflags="-fprofile-arcs -ftest-coverage" \
        --extra-ldflags="-fprofile-arcs -ftest-coverage"
    

    The building process results in a bunch of .gcno files which pertain to code coverage. After running the program as normal, a bunch of .gcda files are generated. To get coverage statistics from these files, run 'gcov sourcefile.c'. This will print some basic statistics as well as generate a corresponding .gcov file with more detailed information about exactly which lines have been executed, and how many times.

    Be advised that the source file must either live in the same directory from which gcov is invoked, or the directory containing the corresponding object and coverage data files must be given to gcov via the '-o, --object-directory' option.
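
    To make that concrete, a single-file run from the FFmpeg build root might look like this sketch (the file path is only an illustration, not taken from the post):

      # -o points gcov at the directory holding the matching .gcno/.gcda
      # files (the source file's own directory for an in-tree build);
      # gcov prints a "Lines executed" summary and writes h264.c.gcov
      # into the current directory
      gcov -o libavcodec libavcodec/h264.c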

    Resetting Statistics
    Statistics in the .gcda files are cumulative. Should you wish to reset the statistics, doing this in the build directory should suffice:

      find . -name "*.gcda" | xargs rm -f
    

    Getting Project-Wide Data
    As mentioned, I had to get a little creative here to get a big picture of FFmpeg code coverage. After building FFmpeg with the code coverage options and running FATE, I gathered the per-file gcov output with this shell loop:

      for file in `find . -name "*.c"`
      do
        echo "*****" $file
        gcov -o `dirname $file` `basename $file`
      done > ffmpeg-code-coverage.txt 2>&1
    

    After that, I ran the ffmpeg-code-coverage.txt file through a custom Python script to print out the 3 CSV files that I later dumped into the Google Spreadsheet.

    Further Work
    I’m sure there are better ways to do this, and I’m sure you all will let me know what they are. But I have to get the ball rolling somehow.

    There’s also TestCocoon. I’d like to try that program and see if it addresses some of gcov’s shortcomings (assuming they are indeed shortcomings rather than oversights).

    Source for script: process-gcov-slop.py

      #!/usr/bin/python

      import re

      # parse the aggregated gcov output produced by the shell loop above
      lines = open("ffmpeg-code-coverage.txt").read().splitlines()
      no_coverage = ""
      coverage = "filename, % covered, total lines\n"
      problems = ""

      # gcov summary lines look like "Lines executed:12.34% of 567"
      stats_exp = re.compile('Lines executed:(\d+\.\d+)% of (\d+)')
      for i in xrange(len(lines)):
        line = lines[i]
        # "***** ./path/file.c" markers were emitted by the shell loop
        if line.startswith("***** "):
          filename = line[line.find('./')+2:]
          i += 1
          if lines[i].find(":cannot open graph file") != -1:
            # gcov could not find a .gcno file for this source file
            no_coverage += filename + '\n'
          else:
            # advance to the stats block that mentions this file
            while lines[i].find(filename) == -1 and not lines[i].startswith("***** "):
              i += 1
            try:
              (percent, total_lines) = stats_exp.findall(lines[i+1])[0]
              coverage += filename + ', ' + percent + ', ' + total_lines + '\n'
            except IndexError:
              problems += filename + '\n'

      open("no_coverage.csv", 'w').write(no_coverage)
      open("coverage.csv", 'w').write(coverage)
      open("problems.csv", 'w').write(problems)

  • mobileFlash string/code clean-up

    21 August 2010, by Scott Schiller

    m script/soundmanager2-jsmin.js
    m script/soundmanager2-nodebug-jsmin.js
    m script/soundmanager2.js
    mobileFlash string/code clean-up

  • Homepage design, doc updates, code cleanup

    20 August 2010, by Scott Schiller

    m .gitignore
    + demo/_image/discogs.gif
    + demo/_image/mixcrate.png
    + demo/_image/opera.png
    - demo/demo
    m demo/index.css
    m doc/getstarted/index.html
    m doc/resources/index.html
    m index.html
    Homepage design, doc updates, code cleanup