Other articles (96)

  • Selection of projects using MediaSPIP

    29 April 2011, by

    The examples cited below are representative of specific uses of MediaSPIP in certain projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
    The MediaSPIP farm @ Infini
    The Infini association runs activities covering public reception, internet access points, training, the management of innovative projects in the field of information and communication technologies, and website hosting. It plays a unique role in this area (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an extra plugin that is not activated by default when MediaSPIP is initialized.
    Once it has been activated, MediaSPIP init automatically puts a preconfiguration in place so that the new functionality is immediately operational. There is therefore no mandatory configuration step for this.

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each one can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable through the form-template management; adding notes to articles; adding captions and annotations to images;

On other sites (5594)

  • FFmpeg and Code Coverage Tools

    21 August 2010, by Multimedia Mike — FATE Server, Python

    Code coverage tools likely occupy the same niche as profiling tools: tools that you’re supposed to use somewhere during the software engineering process but probably never quite get around to, usually because you’re too busy adding features or fixing bugs. But there may come a day when you wish to learn how much of your code is actually being exercised in normal production use. For example, the team charged with continuously testing the FFmpeg project would be curious to know how much code is being exercised, especially since many of the FATE test specs explicitly claim to be "exercising XYZ subsystem".

    The primary GNU code coverage tool is called gcov and is probably already on your GNU-based development system. I set out to determine how much FFmpeg source code is exercised while running the full FATE suite. I ran into some problems when trying to use gcov on a project-wide scale. I spackled around those holes with some very ad-hoc solutions. I’m sure I was just overlooking some more obvious solutions about which you all will be happy to enlighten me.

    Results
    I’ve learned to cut to the chase earlier in blog posts (results first, methods second). With that, here are the results I produced from this experiment. This Google spreadsheet contains 3 sheets: the first contains code coverage stats for a bunch of FFmpeg C files sorted first by percent coverage (ascending), then by number of lines (descending), thus highlighting which files have the most uncovered code (ffserver.c currently tops that chart). The second sheet has files for which no stats were generated. The third sheet has "problems". These files were rejected by my ad-hoc script.

    Here’s a link to the data in CSV if you want to play with it yourself.

    Using gcov with FFmpeg
    To instrument a program for gcov analysis, compile and link the target program with the -fprofile-arcs and -ftest-coverage options. These need to be applied at both the compile and link stages, so in the case of FFmpeg, configure with:

      ./configure \
        --extra-cflags="-fprofile-arcs -ftest-coverage" \
        --extra-ldflags="-fprofile-arcs -ftest-coverage"
    

    The building process results in a bunch of .gcno files which pertain to code coverage. After running the program as normal, a bunch of .gcda files are generated. To get coverage statistics from these files, run 'gcov sourcefile.c'. This will print some basic statistics as well as generate a corresponding .gcov file with more detailed information about exactly which lines have been executed, and how many times.
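
    To make that workflow concrete, here is a minimal sketch on a toy source file outside of FFmpeg; the file name and contents below are hypothetical and only meant to illustrate the .gcno/.gcda/.gcov cycle described above.

    C:
      /* coverage_demo.c -- toy example, not part of FFmpeg.
       *
       * Compile and link with the coverage options (writes coverage_demo.gcno):
       *   gcc -fprofile-arcs -ftest-coverage coverage_demo.c -o coverage_demo
       * Run the program as normal (writes coverage_demo.gcda):
       *   ./coverage_demo 5
       * Ask gcov for statistics (prints a summary and writes coverage_demo.c.gcov
       * with per-line execution counts):
       *   gcov coverage_demo.c
       */
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char *argv[])
      {
          int n = argc > 1 ? atoi(argv[1]) : 0;

          if (n > 0)
              printf("positive\n");     /* covered only when run with a positive argument */
          else
              printf("non-positive\n");

          return 0;
      }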

    Be advised that gcov looks for the corresponding .gcno/.gcda files in the current directory by default; either invoke it from the directory that contains them, or point it at that directory with the '-o, --object-directory' option.

    Resetting Statistics
    Statistics in the .gcda files are cumulative. Should you wish to reset the statistics, running this in the build directory should suffice:

      find . -name "*.gcda" | xargs rm -f
    

    Getting Project-Wide Data
    As mentioned, I had to get a little creative here to get a big picture of FFmpeg code coverage. After building FFmpeg with the code coverage options and running FATE, I ran this loop over every C file in the tree:

     for file in `find . -name "*.c"`
     do
       echo "*****" $file
       gcov -o `dirname $file` `basename $file`
     done > ffmpeg-code-coverage.txt 2>&1
    

    After that, I ran the ffmpeg-code-coverage.txt file through a custom Python script to print out the 3 CSV files that I later dumped into the Google Spreadsheet.

    Further Work
    I’m sure there are better ways to do this, and I’m sure you all will let me know what they are. But I have to get the ball rolling somehow.

    There’s also TestCocoon. I’d like to try that program and see if it addresses some of gcov’s shortcomings (assuming they are indeed shortcomings rather than oversights).

    Source for the script: process-gcov-slop.py

    PYTHON:
      #!/usr/bin/python

      import re

      lines = open("ffmpeg-code-coverage.txt").read().splitlines()
      no_coverage = ""
      coverage = "filename, % covered, total lines\n"
      problems = ""

      # gcov summary lines look like: "Lines executed:87.50% of 16"
      stats_exp = re.compile(r'Lines executed:(\d+\.\d+)% of (\d+)')
      for i in xrange(len(lines)):
          line = lines[i]
          if line.startswith("***** "):
              # "***** ./path/to/file.c" markers were emitted by the shell loop above
              filename = line[line.find('./')+2:]
              i += 1
              if lines[i].find(":cannot open graph file") != -1:
                  # no .gcno data was generated for this source file
                  no_coverage += filename + '\n'
              else:
                  # skip ahead to the gcov output block for this file
                  while lines[i].find(filename) == -1 and not lines[i].startswith("***** "):
                      i += 1
                  try:
                      (percent, total_lines) = stats_exp.findall(lines[i+1])[0]
                      coverage += filename + ', ' + percent + ', ' + total_lines + '\n'
                  except IndexError:
                      problems += filename + '\n'

      open("no_coverage.csv", 'w').write(no_coverage)
      open("coverage.csv", 'w').write(coverage)
      open("problems.csv", 'w').write(problems)

  • Resurrecting SCD

    6 August 2010, by Multimedia Mike — Reverse Engineering

    When I became interested in reverse engineering all the way back in 2000, the first Win32 disassembler I stumbled across was simply called "Win32 Program Disassembler", authored by one Sang Cho. I took to calling it ’scd’ for Sang Cho’s Disassembler. The original program versions and source code are still available for download. I remember being able to compile v0.23 of the source code with gcc under Unix; v0.25 is a no-go due to its extensive reliance on the Win32 development environment.

    I recently wanted to use scd again but had some trouble compiling. As was the case the first time I tried compiling the source code a decade ago, it’s necessary to transform line endings from DOS -> Unix using ’dos2unix’ (I see that this has been renamed to/replaced by ’fromdos’ on Ubuntu).

    Beyond this, it seems that there are some C constructs that used to be valid but are now rejected by gcc. The first looks like this:

    C:
      return (int) c = *(PBYTE)((int)lpFile + vCodeOffset);

    Ahem, "error : lvalue required as left operand of assignment". Removing the "(int)" before the ’c’ makes the problem go away. It’s a strange way to write a return statement in general. However, ’c’ is a global variable that is apparently part of some YACC/BISON-type output.

    The second issue arises when a switch block has a default label with no code after it. Current gcc doesn’t like that. It’s necessary to at least provide a break statement after the default label.
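
    A minimal, compilable sketch of that fix (illustrative only, not taken from the scd source):

    C:
      #include <stdio.h>

      int main(void)
      {
          int type = 2;

          switch (type) {
          case 1:
              puts("one");
              break;
          default:
              /* With nothing at all after "default:", gcc reports an error,
               * since a label must be followed by a statement; a bare break
               * is enough to make the block compile. */
              break;
          }
          return 0;
      }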

    Finally, the program turns out to not be 64-bit safe. It is necessary to compile it in 32-bit mode (compile and link with the ’-m32’ flag or build on a 32-bit system). The static 32-bit binary should run fine under a 64-bit kernel.

    Alternatively: what are some other Win32 disassemblers that work under Linux?

  • Brute Force Dimensional Analysis

    15 July 2010, by Multimedia Mike — Game Hacking, Python

    I was poking at the data files of a really bad (is there any other kind?) interactive movie video game known simply by one letter: D. The Sega Saturn version of the game is comprised primarily of Sega FILM/CPK files, about which I wrote the book. The second most prolific file type bears the extension ’.dg2’. Cursory examination of sample files revealed an apparently headerless format. Many of the video files are 288x144 in resolution. Multiplying that width by that height and then doubling it (as in, 2 bytes/pixel) yields 82944, which happens to be the size of a number of these DG2 files. Now, if only I had a tool that could take a suspected raw RGB file and convert it to a more standard image format.

    Here’s the FFmpeg conversion recipe I used:

     ffmpeg -f rawvideo -pix_fmt rgb555 -s 288x144 -i raw_file -y output.png
    

    So that covers the files that are suspected to be 288x144 in dimension. But what about other file sizes? My brute force approach was to try all possible dimensions that would yield a particular file size. The Python code for performing this operation is listed at the end of this post.

    It’s interesting to view the progression as the script tries the different sizes:

    [images: the same raw file converted at a series of candidate resolutions]

    That ’D’ is supposed to be red. So right away, we see that rgb555(le) is not the correct input format. Annoyingly, FFmpeg cannot handle rgb555be as a raw input format. But this little project worked well enough as a proof of concept.

    If you want to toy around with these files (and I know you do), I have uploaded a selection at: http://multimedia.cx/dg2/.

    Here is my quick Python script for converting one of these files to every acceptable resolution.

    work-out-resolution.py:

    PYTHON:
      #!/usr/bin/python

      import commands
      import math
      import os
      import sys

      FFMPEG = "/path/to/ffmpeg"

      def convert_file(width, height, filename):
          # render the raw rgb555 data as a PNG at the candidate resolution
          outfile = "%s-%dx%d.png" % (filename, width, height)
          command = "%s -f rawvideo -pix_fmt rgb555 -s %dx%d -i %s -y %s" % (FFMPEG, width, height, filename, outfile)
          commands.getstatusoutput(command)

      if len(sys.argv) < 2:
          print "USAGE: work-out-resolution.py <file>"
          sys.exit(1)

      filename = sys.argv[1]
      if not os.path.exists(filename):
          print filename + " does not exist"
          sys.exit(1)

      # 2 bytes per pixel, so the total pixel count is half the file size
      filesize = os.path.getsize(filename) / 2

      # try every divisor pair (i, filesize/i) as a candidate width x height
      limit = int(math.sqrt(filesize)) + 1
      for i in xrange(1, limit):
          if filesize % i == 0 and (filesize & 1) == 0:
              convert_file(i, filesize / i, filename)
              convert_file(filesize / i, i, filename)