Advanced search

Media (0)

Word: - Tags -/diogene

No media matching your criteria is available on this site.

Other articles (36)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and is announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Making files available

    14 April 2011, by

    By default, upon initialization, MediaSPIP does not allow visitors to download files, whether originals or the results of their transformation or encoding. It only allows them to be viewed.
    However, it is possible and easy to give visitors access to these documents, in various forms.
    All of this is handled on the skeleton configuration page. You need to go to the channel’s administration area and choose, in the navigation (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To obtain a working installation, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

On other sites (6811)

  • Basic Video Palette Conversion

    20 August 2011, by Multimedia Mike — General, Python

    How do you take a 24-bit RGB image and convert it to an 8-bit paletted image for the purpose of compression using a codec that requires 8-bit input images? Seems simple enough and that’s what I’m tackling in this post.

    Ask FFmpeg/Libav To Do It
    Ideally, FFmpeg / Libav should be able to handle this automatically. Indeed, FFmpeg used to be able to, at least at the time I wrote this post about ZMBV and was unhappy with FFmpeg’s default results. Somewhere along the line, FFmpeg and Libav lost the ability to do this. I suspect it got removed during some swscale refactoring.

    Still, there’s no telling if the old system would have computed palettes correctly for QuickTime files.

    Distance Approach
    When I started writing my SMC video encoder, I needed to convert RGB (from PNG files) to PAL8 colorspace. The path of least resistance was to match the pixels in the input image to the default 256-color palette that QuickTime assumes (and is hardcoded into FFmpeg/Libav).

    How to perform the matching? Find the palette entry that is closest to a given input pixel, where "closest" is the minimum distance as computed by the usual distance formula (square root of the sum of the squares of the diffs of all the components).



    That means for each pixel in an image, check the pixel against 256 palette entries (early termination is possible if an acceptable threshold is met). As you might imagine, this can be a bit time-consuming. I wondered about a faster approach...
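
    A minimal sketch of this brute-force matching, assuming the palette is handed in as a list of (R, G, B) tuples (the function name and the optional early-termination threshold are illustrative, not the encoder’s actual code):

      import math

      def nearest_palette_index(r, g, b, palette, threshold=0):
          # compare the input pixel against every palette entry and keep
          # the entry with the smallest Euclidean distance
          best_index = 0
          best_dist = float('inf')
          for i, (pr, pg, pb) in enumerate(palette):
              dist = math.sqrt((pr - r) ** 2 + (pg - g) ** 2 + (pb - b) ** 2)
              if dist < best_dist:
                  best_dist = dist
                  best_index = i
                  if best_dist <= threshold:
                      # early termination once an acceptable match is found
                      break
          return best_index

    Since the square root is monotonic, comparing squared distances gives the same ordering and skips the sqrt entirely, which is what the verification script further down does.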

    Lookup Table
    I think this is the approach that FFmpeg used to use, but I went and derived it for myself after studying the default QuickTime palette table. There’s a pattern there — all of the RGB entries are composed of combinations of 6 values — 0x00, 0x33, 0x66, 0x99, 0xCC, and 0xFF. If you mix and match these for red, green, and blue values, you come up with 6 * 6 * 6 = 216 different colors. This happens to be identical to the web-safe color palette.

    The first (0th) entry in the table is (FF, FF, FF), followed by (FF, FF, CC), (FF, FF, 99), and on down to (FF, FF, 00), at which point the green component gets knocked down a step and the next color is (FF, CC, FF). The first 36 palette entries in the table all have a red component of 0xFF. Thus, if an input RGB pixel has a red color closest to 0xFF, it must map to one of those first 36 entries.

    I created a table which maps component values 0..255 to level indices 5..0. Each of the R, G, and B components of an input pixel is used to index into this table, deriving 3 indices ri, gi, and bi. Finally, the index into the palette table is given by:

      index = ri * 36 + gi * 6 + bi
    

    For example, the pixel (0xFE, 0xFE, 0x01) would yield ri, gi, and bi values of 0, 0, and 5. Therefore:

      index = 0 * 36 + 0 * 6 + 5
    

    The palette index is 5, which maps to color (0xFF, 0xFF, 0x00).
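
    As a rough sketch (not the encoder’s actual code), building the 216-entry palette and the component-to-level table, then doing the lookup, might look like this; the level boundaries are the ones used in the verification script further down, and rgb_to_pal8 is just an illustrative name:

      # the six component levels of the web-safe cube, in palette order
      color_steps = [0xFF, 0xCC, 0x99, 0x66, 0x33, 0x00]
      palette = [(r, g, b) for r in color_steps
                           for g in color_steps
                           for b in color_steps]

      # map a component value 0..255 to a level index 5..0
      pal8_table = [5] * 26 + [4] * 51 + [3] * 51 + [2] * 51 + [1] * 51 + [0] * 26

      def rgb_to_pal8(r, g, b):
          ri = pal8_table[r]
          gi = pal8_table[g]
          bi = pal8_table[b]
          return ri * 36 + gi * 6 + bi

      # the worked example from the text
      assert rgb_to_pal8(0xFE, 0xFE, 0x01) == 5
      assert palette[5] == (0xFF, 0xFF, 0x00)

    This replaces the 256-entry search with three table lookups and a couple of multiply-adds per pixel.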

    Validation
    So I was pretty pleased with myself for coming up with that. Now, ideally, swapping out one algorithm for another in my SMC encoder should yield identical results. That wasn’t the case, initially.

    One problem is that the regulation QuickTime palette actually has 40 more entries above and beyond the typical 216-entry color cube (rounding out the grand total of 256 colors). Thus, using the distance approach with the full default table provides for a little more accuracy.

    However, there still seems to be a problem. Let’s check our old standby, the Big Buck Bunny logo image:

    [Image: Distance approach using the full 256-color QuickTime default palette]

    [Image: Distance approach using the 216-color palette]

    [Image: Table lookup approach using the 216-color palette]

    I can’t quite account for that big red splotch there. That’s the most notable difference between images 1 and 2 and the only visible difference between images 2 and 3.

    To prove to myself that the distance approach is equivalent to the table approach, I wrote a Python script to iterate through all possible RGB combinations and verify the equivalence. If you’re not up on your base 2 math, that’s 2^24 or 16,777,216 colors to run through. I used Python’s multiprocessing module to great effect and really maximized a Core i7 CPU with 8 hardware threads.

    So I’m confident that the palette conversion techniques are sound. The red spot is probably attributable to a bug in my WIP SMC encoder.

    Source Code
    Update August 23, 2011: Here’s the Python code I used for proving equivalence between the 2 approaches. In terms of leveraging multiple CPUs, it’s possibly the best program I have written to date.

    PYTHON:

    #!/usr/bin/python

    from multiprocessing import Pool

    palette = []
    pal8_table = []

    def process_r(r):
        counts = []
        for i in xrange(216):
            counts.append(0)

        print "r = %d" % (r)
        for g in xrange(256):
            for b in xrange(256):
                # brute-force search for the nearest palette entry (squared distance)
                min_dsqrd = 0xFFFFFFFF
                best_index = 0
                for i in xrange(len(palette)):
                    dr = palette[i][0] - r
                    dg = palette[i][1] - g
                    db = palette[i][2] - b
                    dsqrd = dr * dr + dg * dg + db * db
                    if dsqrd < min_dsqrd:
                        min_dsqrd = dsqrd
                        best_index = i
                counts[best_index] += 1

                # check if the distance approach deviates from the table-based
                # approach (look up the original r, g, b components in the table)
                ri = pal8_table[r]
                gi = pal8_table[g]
                bi = pal8_table[b]
                table_index = ri * 36 + gi * 6 + bi
                if table_index != best_index:
                    print "(0x%02X 0x%02X 0x%02X): distance index = %d, table index = %d" % (r, g, b, best_index, table_index)

        return counts

    if __name__ == '__main__':
        counts = []
        for i in xrange(216):
            counts.append(0)

        # initialize reference palette
        color_steps = [0xFF, 0xCC, 0x99, 0x66, 0x33, 0x00]
        for r in color_steps:
            for g in color_steps:
                for b in color_steps:
                    palette.append([r, g, b])

        # initialize palette conversion table (component value -> level index)
        for i in range(0, 26):
            pal8_table.append(5)
        for i in range(26, 77):
            pal8_table.append(4)
        for i in range(77, 128):
            pal8_table.append(3)
        for i in range(128, 179):
            pal8_table.append(2)
        for i in range(179, 230):
            pal8_table.append(1)
        for i in range(230, 256):
            pal8_table.append(0)

        # create a pool of worker processes and break up the overall job
        pool = Pool()
        it = pool.imap_unordered(process_r, range(256))
        try:
            while 1:
                partial_counts = it.next()
                for i in xrange(216):
                    counts[i] += partial_counts[i]
        except StopIteration:
            pass

        print "index, count, red, green, blue"
        for i in xrange(len(counts)):
            print "%d, %d, %d, %d, %d" % (i, counts[i], palette[i][0], palette[i][1], palette[i][2])

  • FFmpeg and Code Coverage Tools

    21 August 2010, by Multimedia Mike — FATE Server, Python

    Code coverage tools likely occupy the same niche as profiling tools: tools that you’re supposed to use somewhere during the software engineering process but probably never quite get around to, usually because you’re too busy adding features or fixing bugs. But there may come a day when you wish to learn how much of your code is actually being exercised in normal production use. For example, the team charged with continuously testing the FFmpeg project would be curious to know how much code is being exercised, especially since many of the FATE test specs explicitly claim to be "exercising XYZ subsystem".

    The primary GNU code coverage tool is called gcov and is probably already on your GNU-based development system. I set out to determine how much FFmpeg source code is exercised while running the full FATE suite. I ran into some problems when trying to use gcov on a project-wide scale. I spackled around those holes with some very ad-hoc solutions. I’m sure I was just overlooking some more obvious solutions about which you all will be happy to enlighten me.

    Results
    I’ve learned to cut to the chase earlier in blog posts (results first, methods second). With that, here are the results I produced from this experiment. This Google spreadsheet contains 3 sheets: the first contains code coverage stats for a bunch of FFmpeg C files, sorted first by percent coverage (ascending), then by number of lines (descending), thus highlighting which files have the most uncovered code (ffserver.c currently tops that chart). The second sheet has files for which no stats were generated. The third sheet has "problems". These files were rejected by my ad-hoc script.

    Here’s a link to the data in CSV if you want to play with it yourself.

    Using gcov with FFmpeg
    To instrument a program for gcov analysis, compile and link the target program with the -fprofile-arcs and -ftest-coverage options. These need to be applied at both the compile and link stages, so in the case of FFmpeg, configure with:

      ./configure \
        --extra-cflags="-fprofile-arcs -ftest-coverage" \
        --extra-ldflags="-fprofile-arcs -ftest-coverage"
    

    The building process results in a bunch of .gcno files which pertain to code coverage. After running the program as normal, a bunch of .gcda files are generated. To get coverage statistics from these files, run 'gcov sourcefile.c'. This will print some basic statistics as well as generate a corresponding .gcov file with more detailed information about exactly which lines have been executed, and how many times.

    Be advised that the source file must either live in the same directory from which gcov is invoked, or else the path to the source must be given to gcov via the '-o, --object-directory' option.

    Resetting Statistics
    Statistics in the .gcda files are cumulative. Should you wish to reset the statistics, doing this in the build directory should suffice:

      find . -name "*.gcda" | xargs rm -f
    

    Getting Project-Wide Data
    As mentioned, I had to get a little creative here to get a big picture of FFmpeg code coverage. After building FFmpeg with the code coverage options and running FATE, I ran this loop over every C file:

    for file in `find . -name "*.c"`
    do
      echo "*****" $file
      gcov -o `dirname $file` `basename $file`
    done > ffmpeg-code-coverage.txt 2>&1
    

    After that, I ran the ffmpeg-code-coverage.txt file through a custom Python script to print out the 3 CSV files that I later dumped into the Google Spreadsheet.

    Further Work
    I’m sure there are better ways to do this, and I’m sure you all will let me know what they are. But I have to get the ball rolling somehow.

    There’s also TestCocoon. I’d like to try that program and see if it addresses some of gcov’s shortcomings (assuming they are indeed shortcomings rather than oversights).

    Source for script: process-gcov-slop.py

    PYTHON:

    #!/usr/bin/python

    import re

    lines = open("ffmpeg-code-coverage.txt").read().splitlines()
    no_coverage = ""
    coverage = "filename, % covered, total lines\n"
    problems = ""

    stats_exp = re.compile(r'Lines executed:(\d+\.\d+)% of (\d+)')
    for i in xrange(len(lines)):
        line = lines[i]
        if line.startswith("***** "):
            filename = line[line.find('./') + 2:]
            i += 1
            if lines[i].find(":cannot open graph file") != -1:
                no_coverage += filename + '\n'
            else:
                while lines[i].find(filename) == -1 and not lines[i].startswith("***** "):
                    i += 1
                try:
                    (percent, total_lines) = stats_exp.findall(lines[i + 1])[0]
                    coverage += filename + ', ' + percent + ', ' + total_lines + '\n'
                except IndexError:
                    problems += filename + '\n'

    open("no_coverage.csv", 'w').write(no_coverage)
    open("coverage.csv", 'w').write(coverage)
    open("problems.csv", 'w').write(problems)

  • Monster Battery Power Revisited

    28 May 2010, by Multimedia Mike — Python, Science Projects

    So I have this new fat netbook battery and I performed an experiment to determine how long it really lasts. In my last post on the matter, it was suggested that I should rely on the information that gnome-power-manager is giving me. However, I have rarely seen GPM report more than about 2 hours of charge; even on a full battery, it only reports 3h25m when I profiled it as lasting over 5 hours in my typical use. So I started digging to understand how GPM gets its numbers and determine if, perhaps, it’s not getting accurate data from the system.

    I started poking around /proc for the data I wanted. You can learn a lot in /proc as long as you know the right question to ask. I had to remember what the power subsystem is called — ACPI — and this led me to /proc/acpi/battery/BAT0/state, which has data such as:

    present:                 yes
    capacity state:          ok
    charging state:          charged
    present rate:            unknown
    remaining capacity:      100 mAh
    present voltage:         8326 mV

    "Remaining capacity" rated in mAh is a little odd; I would later determine that this should actually be expressed as a percentage (i.e., 100% charge at the time of this reading). Examining the GPM source code, it seems to determine remaining battery life as a function of the current CPU load (queried via /proc/stat) and the battery state queried via a facility called devicekit. I couldn’t immediately find any source code for the latter, but I was able to install a utility called 'devkit-power'. Mostly, it appears to rehash data already found in the above /proc file.

    Curiously, the file /proc/acpi/battery/BAT0/info, which displays essential information about the battery, reports the design capacity of my battery as only 4400 mAh, which is true for the original battery; the new monster battery is supposed to be 10400 mAh. I can imagine that all of these data points could be conspiring to under-report my remaining battery life.

    Science project: Repeat the previous power-related science project, but also parse and track the remaining capacity and present voltage fields from the battery state proc file.

    Let’s skip straight to the results (which are consistent with my last set of results in terms of longevity):



    So there is definitely something strange going on with the reporting— the 4400 mAh battery reports discharge at a linear rate while the 10400 mAh battery reports precipitous dropoff after 60%.

    Another curious item is that my script broke at first when there was 20% power remaining which, as you can imagine, is a really annoying time to discover such a bug. At that point, the "time to empty" reported by devkit-power jumped from 0 seconds to 20 hours (the first state change observed for that field).

    Here’s my script, this time elevated from Bash script to Python. It requires xdotool and devkit-power to be installed (both should be available in the package manager for a distro).

    PYTHON:

    #!/usr/bin/python

    import commands
    import random
    import sys
    import time

    XDOTOOL = "/usr/bin/xdotool"
    BATTERY_STATE = "/proc/acpi/battery/BAT0/state"
    DEVKIT_POWER = "/usr/bin/devkit-power -i /org/freedesktop/DeviceKit/Power/devices/battery_BAT0"

    print "count, unixtime, proc_remaining_capacity, proc_present_voltage, devkit_percentage, devkit_voltage"

    count = 0
    while 1:
        # simulate user activity by moving the mouse pointer
        commands.getstatusoutput("%s mousemove %d %d" % (XDOTOOL, random.randrange(0, 800), random.randrange(0, 480)))
        # sample the ACPI battery state from /proc
        battery_state = open(BATTERY_STATE).read().splitlines()
        for line in battery_state:
            if line.startswith("remaining capacity:"):
                proc_remaining_capacity = int(line.lstrip("remaining capacity: ").rstrip("mAh"))
            elif line.startswith("present voltage:"):
                proc_present_voltage = int(line.lstrip("present voltage: ").rstrip("mV"))
        # sample the same battery via devkit-power
        devkit_state = commands.getoutput(DEVKIT_POWER).splitlines()
        for line in devkit_state:
            line = line.strip()
            if line.startswith("percentage:"):
                devkit_percentage = int(line.lstrip("percentage: ").rstrip('%'))
            elif line.startswith("voltage:"):
                devkit_voltage = float(line.lstrip("voltage: ").rstrip('V')) * 1000
        print "%d, %d, %d, %d, %d, %d" % (count, time.time(), proc_remaining_capacity, proc_present_voltage, devkit_percentage, devkit_voltage)
        sys.stdout.flush()
        time.sleep(60)
        count += 1