
Media (91)

Other articles (28)

  • Change the publication date

    21 June 2013, by

    How do you change the publication date of a media item?
    You first need to add a "Publication date" field to the appropriate form mask:
    Administer > Form mask configuration > Select "A media item"
    In the "Fields to add" section, check "Publication date"
    Click Save at the bottom of the page

  • Contribute to translation

    13 April 2011

    You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into other languages, which allows it to spread to new linguistic communities.
    To do this, we use the SPIP translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

On other sites (7040)

  • 2 GB Should Be Enough For Me

    31 August 2010, by Multimedia Mike — General

    My new EeePC 1201PN netbook has 2 GB of RAM. Call me shortsighted but I feel like “that ought to be enough for me”. I’m not trying to claim that it ought to be enough for everyone. I am, however, questioning the utility of swap space for those skilled in the art of computing.



    Technology marches on: This ancient 128 MB RAM module is larger than my digital camera’s battery charger… and I just realized that comparison doesn’t make any sense

    Does anyone else have this issue? It has gotten to the point where I deliberately disable swap partitions on Linux desktops I’m using ('swapoff -a'), and try not to allocate a swap partition during install time. I’m encountering Linux installers that seem to be making it tougher to do this, essentially pleading with you to create a swap partition: "Seriously, you might need 8 total gigabytes of virtual memory one day." I’m of the opinion that if 2 GB of physical memory isn’t enough for my normal operation, I might need to re-examine my processes.
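
    For what it’s worth, here is a minimal sketch of how one might do that on a typical Linux box; 'swapoff -a' is the command mentioned above, and commenting out the swap entry in /etc/fstab is a common way to keep swap off across reboots. Adjust to your own setup before copying anything:

      # turn off all active swap for the current session
      sudo swapoff -a

      # keep it off after a reboot by commenting out swap entries in /etc/fstab
      # (GNU sed; makes a .bak copy first -- check the result before rebooting)
      sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab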

    In the course of my normal computer usage (which is definitely not normal by the standard of a normal computer user), swap space is just another way for the software to screw things up behind the scenes. In this case, the mistake is performance-related as the software makes poor decisions about what needs to be kept in RAM.

    And then there are the netbook-oriented Linux distributions that insisted upon setting aside 1/2 gigabyte of the already constrained 4 gigabytes of my Eee PC 701’s on-board flash memory as swap, never offering the choice to opt out of swap space during installation. Earmarking flash memory for swap space is generally regarded as exceptionally poor form. To be fair, I don’t know that SSD has been all that prevalent in netbooks since the very earliest units in the netbook epoch.

    Am I alone in this? Does anyone else prefer to keep all of their memory physical in this day and age?

  • FFmpeg and Code Coverage Tools

    21 August 2010, by Multimedia Mike — FATE Server, Python

    Code coverage tools likely occupy the same niche as profiling tools: tools that you’re supposed to use somewhere during the software engineering process but probably never quite get around to, usually because you’re too busy adding features or fixing bugs. But there may come a day when you wish to learn how much of your code is actually being exercised in normal production use. For example, the team charged with continuously testing the FFmpeg project would be curious to know how much code is being exercised, especially since many of the FATE test specs explicitly claim to be "exercising XYZ subsystem".

    The primary GNU code coverage tool is called gcov and is probably already on your GNU-based development system. I set out to determine how much FFmpeg source code is exercised while running the full FATE suite. I ran into some problems when trying to use gcov on a project-wide scale. I spackled around those holes with some very ad-hoc solutions. I’m sure I was just overlooking some more obvious solutions about which you all will be happy to enlighten me.

    Results
    I’ve learned to cut to the chase earlier in blog posts (results first, methods second). With that, here are the results I produced from this experiment. This Google spreadsheet contains 3 sheets: the first contains code coverage stats for a bunch of FFmpeg C files sorted first by percent coverage (ascending), then by number of lines (descending), thus highlighting which files have the most uncovered code (ffserver.c currently tops that chart). The second sheet has files for which no stats were generated. The third sheet has "problems". These files were rejected by my ad-hoc script.

    Here’s a link to the data in CSV if you want to play with it yourself.
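
    If you would rather poke at the numbers without a spreadsheet, something along these lines should reproduce that ordering from the coverage.csv written by the script at the end of this post (a quick sketch; the field numbers assume its "filename, % covered, total lines" layout):

      # skip the header row, then sort by % covered (ascending) and total lines (descending)
      tail -n +2 coverage.csv | sort -t, -k2,2n -k3,3nr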

    Using gcov with FFmpeg
    To instrument a program for gcov analysis, compile and link the target program with the -fprofile-arcs and -ftest-coverage options. These need to be applied at both the compile and link stages, so in the case of FFmpeg, configure with:

      ./configure \
        --extra-cflags="-fprofile-arcs -ftest-coverage" \
        --extra-ldflags="-fprofile-arcs -ftest-coverage"
    

    The building process results in a bunch of .gcno files which pertain to code coverage. After running the program as normal, a bunch of .gcda files are generated. To get coverage statistics from these files, run 'gcov sourcefile.c'. This will print some basic statistics as well as generate a corresponding .gcov file with more detailed information about exactly which lines have been executed, and how many times.

    Be advised that the source file must either live in the same directory from which gcov is invoked, or else the directory containing the corresponding .gcno/.gcda files must be given to gcov via the '-o, --object-directory' option.
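
    To make that concrete, here is roughly what a single-file run looks like (a sketch; libavcodec/utils.c is just an example file, and the exact summary wording may vary between gcc versions):

      # from inside the directory that holds utils.c plus its .gcno/.gcda files:
      cd libavcodec
      gcov utils.c
      # gcov prints a summary like "Lines executed:NN.NN% of NNN" and writes utils.c.gcov

      # or stay at the top of the tree and name the object directory explicitly,
      # which is the form the loop below uses:
      gcov -o libavcodec utils.c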

    Resetting Statistics
    Statistics in the .gcda files are cumulative. Should you wish to reset the statistics, doing this in the build directory should suffice:

      find . -name "*.gcda" | xargs rm -f
    

    Getting Project-Wide Data
    As mentioned, I had to get a little creative here to get the big picture of FFmpeg code coverage. After building FFmpeg with the code coverage options and running FATE, I gathered the per-file gcov reports with this shell loop:

    for file in `find . -name "*.c"`; do
      echo "*****" $file
      gcov -o `dirname $file` `basename $file`
    done > ffmpeg-code-coverage.txt 2>&1
    

    After that, I ran the ffmpeg-code-coverage.txt file through a custom Python script to print out the 3 CSV files that I later dumped into the Google Spreadsheet.

    Further Work
    I’m sure there are better ways to do this, and I’m sure you all will let me know what they are. But I have to get the ball rolling somehow.

    There’s also TestCocoon. I’d like to try that program and see if it addresses some of gcov’s shortcomings (assuming they are indeed shortcomings rather than oversights).

    Source for script: process-gcov-slop.py

    #!/usr/bin/python

    import re

    # parse the combined gcov output captured by the shell loop above
    lines = open("ffmpeg-code-coverage.txt").read().splitlines()
    no_coverage = ""
    coverage = "filename, % covered, total lines\n"
    problems = ""

    stats_exp = re.compile(r'Lines executed:(\d+\.\d+)% of (\d+)')
    for i in xrange(len(lines)):
        line = lines[i]
        if line.startswith("***** "):
            filename = line[line.find('./')+2:]
            i += 1
            if lines[i].find(":cannot open graph file") != -1:
                # gcov had no data for this file
                no_coverage += filename + '\n'
            else:
                # skip ahead to the block of gcov output for this file
                while lines[i].find(filename) == -1 and not lines[i].startswith("***** "):
                    i += 1
                try:
                    (percent, total_lines) = stats_exp.findall(lines[i+1])[0]
                    coverage += filename + ', ' + percent + ', ' + total_lines + '\n'
                except IndexError:
                    problems += filename + '\n'

    open("no_coverage.csv", 'w').write(no_coverage)
    open("coverage.csv", 'w').write(coverage)
    open("problems.csv", 'w').write(problems)

  • IJG swings again, and misses

    1 February 2010, by Mans — Multimedia

    Earlier this month the IJG unleashed version 8 of its ubiquitous libjpeg library on the world. Eager to try out the “major breakthrough in image coding technology” promised in the README file accompanying v7, I downloaded the release. A glance at the README file suggests something major indeed is afoot:

    Version 8.0 is the first release of a new generation JPEG standard to overcome the limitations of the original JPEG specification.

    The text also hints at the existence of a document detailing these marvellous new features, and a Google search later a copy has found its way onto my monitor. As I read, however, my state of mind shifts from an initial excited curiosity, through bewilderment and disbelief, finally arriving at pure merriment.

    Already on the first page it becomes clear that no new JPEG standard in fact exists. All we have is an unsolicited proposal sent to the ITU-T by members of the IJG. Realising that even the most brilliant of inventions must start off as mere proposals, I carry on reading. The summary informs me that I am about to witness the introduction of three extensions to the T.81 JPEG format:

    1. An alternative coefficient scan sequence for DCT coefficient serialization
    2. A SmartScale extension in the Start-Of-Scan (SOS) marker segment
    3. A Frame Offset definition in or in addition to the Start-Of-Frame (SOF) marker segment

    Together these three extensions will, it is promised, “bring DCT based JPEG back to the forefront of state-of-the-art image coding technologies.”

    Alternative scan

    The first of the proposed extensions introduces an alternative DCT coefficient scan sequence to be used in place of the zigzag scan employed in most block transform based codecs.

    Alternative scan sequence

    The advantage of this scan would be that, combined with the existing progressive mode, it simplifies decoding of an initial low-resolution image which is enhanced through subsequent passes. The author of the document calls this scheme “image-pyramid/hierarchical multi-resolution coding.” It is not immediately obvious to me how this constitutes even a small advance in image coding technology.

    At this point I am beginning to suspect that our friend from the IJG has been trapped in a half-world between interlaced GIF images transmitted down noisy phone lines and today’s inferno of SVC, MVC, and other buzzwords.

    (Not so) SmartScale

    Disguised behind this camel-cased moniker we encounter a method which, we are told, will provide better image quality at high compression ratios. The author has combined two well-known (to us) properties in a (to him) clever way.

    The first property concerns the perceived impact of different types of distortion in an image. When encoding with JPEG, as the quantiser is increased, the decoded image becomes ever more blocky. At a certain point, a better subjective visual quality can be achieved by down-sampling the image before encoding it, thus allowing a lower quantiser to be used. If the decoded image is scaled back up to the original size, the unpleasant, blocky appearance is replaced with a smooth blur.

    The second property belongs to the DCT where, as we all know, the top-left (DC) coefficient is the average of the entire block, its neighbours represent the lowest frequency components etc. A top-left-aligned subset of the coefficient block thus represents a low-resolution version of the full block in the spatial domain.
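
    To put a number on “the average”: for the 8×8 DCT-II used by JPEG (the standard definition, not something taken from the proposal), the DC term works out to a scaled block mean,

      F(0,0) = \frac{1}{4}\,C(0)^2 \sum_{x=0}^{7}\sum_{y=0}^{7} f(x,y)
             = \frac{1}{8} \sum_{x,y} f(x,y)
             = 8\,\bar{f},
      \qquad C(0) = \tfrac{1}{\sqrt{2}},

    i.e. the DC coefficient is eight times the block’s mean sample value \bar{f}.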

    In his flash of genius, our hero came up with the idea of using the DCT for down-scaling the image. Unfortunately, he appears to possess precious little knowledge of sampling theory and human visual perception. Any block-based resampling will inevitably produce sharp artefacts along the block edges. The human visual system is particularly sensitive to sharp edges, so this is one of the most unwanted types of distortion in an encoded image.

    Despite the obvious flaws in this approach, I decided to give it a try. After all, the software is already written, allowing downscaling by factors from 8/8 down to 8/16.

    Using a 1280×720 test image, I encoded it with each of the nine scaling options, from unity to half size, each time adjusting the quality parameter for a final encoded file size of no more than 200000 bytes. The following table presents the encoded file size, the libjpeg quality parameter used, and the SSIM metric for each of the images.

    Scale   Size (bytes)   Quality   SSIM
    8/8     198462         59        0.940
    8/9     196337         70        0.936
    8/10    196133         79        0.934
    8/11    197179         84        0.927
    8/12    193872         89        0.915
    8/13    197153         92        0.914
    8/14    188334         94        0.899
    8/15    198911         96        0.886
    8/16    197190         97        0.869

    Although the smaller images allowed a higher quality setting to be used, the SSIM value drops significantly. Numbers may of course be misleading, but the images below speak for themselves. These are cut-outs from the full image, the original on the left, unscaled JPEG-compressed in the middle, and JPEG with 8/16 scaling to the right.

    Looking at these images, I do not need to hesitate before picking the JPEG variant I prefer.

    Frame offset

    The third and final extension proposed is quite simple and also quite pointless: a top-left cropping to be applied to the decoded image. The alleged utility of this feature would be to enable lossless cropping of a JPEG image. In a typical image workflow, however, JPEG is only used for the final published version, so the need for this feature appears quite far-fetched.

    The grand finale

    Throughout the text, the author makes references to “the fundamental DCT property for image representation.” In his own words:

    This property was found by the author during implementation of the new DCT scaling features and is after his belief one of the most important discoveries in digital image coding after releasing the JPEG standard in 1992.

    The secret is to be revealed in an annex to the main text. This annex quotes in full a post by the author to the comp.dsp Usenet group in a thread with the subject “why DCT”. Reading the entire thread proves quite amusing. A few excerpts follow.

    The actual reason is much simpler, and therefore apparently very difficult to recognize by complicated-thinking people.

    Here is the explanation:

    What are people doing when they have a bunch of images and want a quick preview? They use thumbnails! What are thumbnails? Thumbnails are small downscaled versions of the original image! If you want more details of the image, you can zoom in stepwise by enlarging (upscaling) the image.

    So with proper understanding of the fundamental DCT property, the MPEG folks could make their videos more scalable, but, as in the case of JPEG, they are unable to recognize this simple but basic property, unfortunately, and pursue rather inferior approaches in actual developments.

    These are just phrases, and they don’t explain anything. But this is typical for the current state in this field: The relevant people ignore and deny the true reasons, and thus they turn in a circle and no progress is being made.

    However, there are dark forces in action today which ignore and deny any fruitful advances in this field. That is the reason that we didn’t see any progress in JPEG for more than a decade, and as long as those forces dominate, we will see more confusion and less enlightenment. The truth is always simple, and the DCT *is* simple, but this fact is suppressed by established people who don’t want to lose their dubious position.

    I believe a trip to the Total Perspective Vortex may be in order. Perhaps his tin-foil hat will save him.