
Other articles (8)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites for publishing documents of all types online.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only a single document can be linked to a so-called "media" article;

  • Support for all types of media

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Possibility of deployment as a farm

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share the setup costs between several projects / individuals; to deploy a multitude of unique sites quickly; to avoid having to dump all creations into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)

On other sites (2472)

  • Perspective transformations

    9 June 2010, by Mikko Koppanen — Imagick, PHP stuff

    Finally (after a long break) I managed to force myself to update the PHP documentation, and this time it was the distortImage code example. Things have been hectic lately, but that does not quite explain the 6 months(?) break between this and the previous post. As a matter of fact there is no excuse for such a long silence, so I will try to update this blog a bit more often from now on.

    Back in the day I used to blog the examples and update the documentation if I remembered, but I am trying to fix this bad habit. Most of the latest examples have been added to the manual. In the case of the last two examples, I updated the documentation first and then blogged on the subject.

    I took some time to actually understand perspective transformations properly, using the excellent ImageMagick examples (mainly created by Anthony Thyssen) as a reference. The basic idea of perspective distortion seems simple: distort the control points to new locations. Grabbing the syntax for Imagick was easy: an array of control point pairs in the form of:

    array(source_x, source_y, dest_x, dest_y ... )

    The following example uses the built-in checkerboard pattern to demonstrate perspective distortion:

    <?php
    /* Create a new Imagick object */
    $im = new Imagick();

    /* Create a new checkerboard pattern */
    $im->newPseudoImage(100, 100, "pattern:checkerboard");

    /* Set the image format to png */
    $im->setImageFormat('png');

    /* Fill the background area with transparent */
    $im->setImageVirtualPixelMethod(Imagick::VIRTUALPIXELMETHOD_TRANSPARENT);

    /* Activate the matte (alpha) channel */
    $im->setImageMatte(true);

    /* Control points for the distortion: pairs of (source_x, source_y, dest_x, dest_y) */
    $controlPoints = array(10, 10,
                           10, 5,

                           10, $im->getImageHeight() - 20,
                           10, $im->getImageHeight() - 5,

                           $im->getImageWidth() - 10, 10,
                           $im->getImageWidth() - 10, 20,

                           $im->getImageWidth() - 10, $im->getImageHeight() - 10,
                           $im->getImageWidth() - 10, $im->getImageHeight() - 30);

    /* Perform the perspective distortion; the last argument lets the canvas grow to fit the result */
    $im->distortImage(Imagick::DISTORTION_PERSPECTIVE, $controlPoints, true);

    /* Output the image */
    header("Content-Type: image/png");
    echo $im;
    ?>

    Here is the source image:
    (image: checkerboard pattern, before distortion)

    And the result:
    (image: the distorted checkerboard, after)

  • IJG swings again, and misses

    1 February 2010, by Mans — Multimedia

    Earlier this month the IJG unleashed version 8 of its ubiquitous libjpeg library on the world. Eager to try out the “major breakthrough in image coding technology” promised in the README file accompanying v7, I downloaded the release. A glance at the README file suggests something major indeed is afoot:

    Version 8.0 is the first release of a new generation JPEG standard to overcome the limitations of the original JPEG specification.

    The text also hints at the existence of a document detailing these marvellous new features, and a Google search later a copy has found its way onto my monitor. As I read, however, my state of mind shifts from an initial excited curiosity, through bewilderment and disbelief, finally arriving at pure merriment.

    Already on the first page it becomes clear that no new JPEG standard in fact exists. All we have is an unsolicited proposal sent to the ITU-T by members of the IJG. Realising that even the most brilliant of inventions must start off as mere proposals, I carry on reading. The summary informs me that I am about to witness the introduction of three extensions to the T.81 JPEG format:

    1. An alternative coefficient scan sequence for DCT coefficient serialization
    2. A SmartScale extension in the Start-Of-Scan (SOS) marker segment
    3. A Frame Offset definition in or in addition to the Start-Of-Frame (SOF) marker segment

    Together these three extensions will, it is promised, “bring DCT based JPEG back to the forefront of state-of-the-art image coding technologies.”

    Alternative scan

    The first of the proposed extensions introduces an alternative DCT coefficient scan sequence to be used in place of the zigzag scan employed in most block transform based codecs.

    (figure: the alternative scan sequence)

    The advantage of this scan would be that, combined with the existing progressive mode, it simplifies decoding of an initial low-resolution image which is enhanced through subsequent passes. The author of the document calls this scheme “image-pyramid/hierarchical multi-resolution coding.” It is not immediately obvious to me how this constitutes even a small advance in image coding technology.
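
    To make the notion of a scan order concrete, here is a small sketch in C. It is my own illustration, not code from the proposal or from libjpeg: it builds the conventional zigzag order used by JPEG, and a hypothetical "resolution-ordered" scan in which every prefix of the scan covers a complete top-left sub-block. The exact sequence proposed to the ITU-T may differ in its details.

    #include <stdio.h>

    #define N 8

    /* Conventional JPEG zigzag scan: walk the anti-diagonals, alternating direction. */
    static void build_zigzag(int scan[N * N])
    {
        int idx = 0;
        for (int d = 0; d < 2 * N - 1; d++) {
            if (d & 1) {            /* odd diagonals run top-right to bottom-left */
                int r = d < N ? 0 : d - N + 1;
                int c = d - r;
                for (; r < N && c >= 0; r++, c--)
                    scan[idx++] = r * N + c;
            } else {                /* even diagonals run bottom-left to top-right */
                int c = d < N ? 0 : d - N + 1;
                int r = d - c;
                for (; c < N && r >= 0; c++, r--)
                    scan[idx++] = r * N + c;
            }
        }
    }

    /* Hypothetical "resolution-ordered" scan: all coefficients of the top-left
     * k x k sub-block come before anything outside it, so each prefix of the
     * scan is a complete low-resolution version of the block. */
    static void build_pyramid(int scan[N * N])
    {
        int idx = 0;
        for (int level = 0; level < N; level++)
            for (int r = 0; r <= level; r++)
                for (int c = 0; c <= level; c++)
                    if (r == level || c == level)
                        scan[idx++] = r * N + c;
    }

    int main(void)
    {
        int zz[N * N], pyr[N * N];
        build_zigzag(zz);
        build_pyramid(pyr);

        /* Print the first few (row, column) positions of each scan for comparison. */
        printf("zigzag : ");
        for (int i = 0; i < 10; i++)
            printf("(%d,%d) ", zz[i] / N, zz[i] % N);
        printf("...\npyramid: ");
        for (int i = 0; i < 10; i++)
            printf("(%d,%d) ", pyr[i] / N, pyr[i] % N);
        printf("...\n");
        return 0;
    }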

    At this point I am beginning to suspect that our friend from the IJG has been trapped in a half-world between interlaced GIF images transmitted down noisy phone lines and today’s inferno of SVC, MVC, and other buzzwords.

    (Not so) SmartScale

    Disguised behind this camel-cased moniker we encounter a method which, we are told, will provide better image quality at high compression ratios. The author has combined two well-known (to us) properties in a (to him) clever way.

    The first property concerns the perceived impact of different types of distortion in an image. When encoding with JPEG, as the quantiser is increased, the decoded image becomes ever more blocky. At a certain point, a better subjective visual quality can be achieved by down-sampling the image before encoding it, thus allowing a lower quantiser to be used. If the decoded image is scaled back up to the original size, the unpleasant, blocky appearance is replaced with a smooth blur.

    The second property belongs to the DCT where, as we all know, the top-left (DC) coefficient is the average of the entire block, its neighbours represent the lowest frequency components etc. A top-left-aligned subset of the coefficient block thus represents a low-resolution version of the full block in the spatial domain.
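
    To make this property concrete, here is a minimal 1D sketch in C. It is my own illustration, not code from libjpeg or the proposal: take the 8-point DCT of a row, keep only the first four (lowest-frequency) coefficients, rescale, and apply a 4-point inverse DCT; the result is a half-resolution version of the row. The 2D case follows by applying the same idea separably to the rows and columns of each 8×8 block.

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Orthonormal DCT-II of an n-point signal. */
    static void dct(const double *x, double *X, int n)
    {
        for (int k = 0; k < n; k++) {
            double s = 0.0;
            for (int i = 0; i < n; i++)
                s += x[i] * cos(M_PI * (2 * i + 1) * k / (2.0 * n));
            X[k] = s * sqrt((k == 0 ? 1.0 : 2.0) / n);
        }
    }

    /* Orthonormal inverse DCT (DCT-III) of an n-point spectrum. */
    static void idct(const double *X, double *x, int n)
    {
        for (int i = 0; i < n; i++) {
            double s = 0.0;
            for (int k = 0; k < n; k++)
                s += X[k] * sqrt((k == 0 ? 1.0 : 2.0) / n)
                          * cos(M_PI * (2 * i + 1) * k / (2.0 * n));
            x[i] = s;
        }
    }

    int main(void)
    {
        /* One row of an 8x8 block. */
        double row[8] = { 10, 20, 40, 80, 80, 40, 20, 10 };
        double coef[8], low[4], half[4];

        dct(row, coef, 8);

        /* Keep only the 4 lowest-frequency coefficients and renormalise
         * by sqrt(4/8) so amplitudes are preserved. */
        for (int k = 0; k < 4; k++)
            low[k] = coef[k] * sqrt(4.0 / 8.0);

        idct(low, half, 4);   /* 4-sample, half-resolution version of the row */

        for (int i = 0; i < 4; i++)
            printf("%6.2f ", half[i]);
        printf("\n");
        return 0;
    }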

    In his flash of genius, our hero came up with the idea of using the DCT for down-scaling the image. Unfortunately, he appears to possess precious little knowledge of sampling theory and human visual perception. Any block-based resampling will inevitably produce sharp artefacts along the block edges. The human visual system is particularly sensitive to sharp edges, so this is one of the most unwanted types of distortion in an encoded image.

    Despite the obvious flaws in this approach, I decided to give it a try. After all, the software is already written, allowing downscaling by factors from 8/8 down to 8/16.

    Using a 1280×720 test image, I encoded it with each of the nine scaling options, from unity to half size, each time adjusting the quality parameter for a final encoded file size of no more than 200000 bytes. The following table presents the encoded file size, the libjpeg quality parameter used, and the SSIM metric for each of the images.

    Scale  Size (bytes)  Quality  SSIM
    8/8    198462        59       0.940
    8/9    196337        70       0.936
    8/10   196133        79       0.934
    8/11   197179        84       0.927
    8/12   193872        89       0.915
    8/13   197153        92       0.914
    8/14   188334        94       0.899
    8/15   198911        96       0.886
    8/16   197190        97       0.869

    Although the smaller images allowed a higher quality setting to be used, the SSIM value drops significantly. Numbers may of course be misleading, but the images below speak for themselves. These are cut-outs from the full image, the original on the left, unscaled JPEG-compressed in the middle, and JPEG with 8/16 scaling to the right.

    Looking at these images, I do not need to hesitate before picking the JPEG variant I prefer.

    Frame offset

    The third and final extension proposed is quite simple and also quite pointless: a top-left cropping to be applied to the decoded image. The alleged utility of this feature would be to enable lossless cropping of a JPEG image. In a typical image workflow, however, JPEG is only used for the final published version, so the need for this feature appears quite far-fetched.

    The grand finale

    Throughout the text, the author makes references to “the fundamental DCT property for image representation.” In his own words:

    This property was found by the author during implementation of the new DCT scaling features and is after his belief one of the most important discoveries in digital image coding after releasing the JPEG standard in 1992.

    The secret is to be revealed in an annex to the main text. This annex quotes in full a post by the author to the comp.dsp Usenet group, in a thread with the subject “why DCT.” Reading the entire thread proves quite amusing. A few excerpts follow.

    The actual reason is much simpler, and therefore apparently very difficult to recognize by complicated-thinking people.

    Here is the explanation:

    What are people doing when they have a bunch of images and want a quick preview? They use thumbnails! What are thumbnails? Thumbnails are small downscaled versions of the original image! If you want more details of the image, you can zoom in stepwise by enlarging (upscaling) the image.

    So with proper understanding of the fundamental DCT property, the MPEG folks could make their videos more scalable, but, as in the case of JPEG, they are unable to recognize this simple but basic property, unfortunately, and pursue rather inferior approaches in actual developments.

    These are just phrases, and they don’t explain anything. But this is typical for the current state in this field : The relevant people ignore and deny the true reasons, and thus they turn in a circle and no progress is being made.

    However, there are dark forces in action today which ignore and deny any fruitful advances in this field. That is the reason that we didn’t see any progress in JPEG for more than a decade, and as long as those forces dominate, we will see more confusion and less enlightenment. The truth is always simple, and the DCT *is* simple, but this fact is suppressed by established people who don’t want to lose their dubious position.

    I believe a trip to the Total Perspective Vortex may be in order. Perhaps his tin-foil hat will save him.

  • ARM compiler shoot-out, round 2

    20 August 2009, by Mans — ARM, Compilers

    Sample files referenced in the test: http://samples.ffmpeg.org/V-codecs/h264/cathedral-beta2-400extra-crop-avc.mp4 http://samples.ffmpeg.org/V-codecs/h264/NeroAVC.mp4 http://samples.ffmpeg.org/V-codecs/h264/indiana_jones_4-tlr3_h640w.mov http://samples.ffmpeg.org/MPEG-4/NeroRecodeSample-MP4/NeroRecodeSample.mp4 http://samples.ffmpeg.org/A-codecs/MP3/Silent_Light.mp3 http://samples.ffmpeg.org/A-codecs/vorbis/Lumme-Badloop.ogg

    In my recent test of ARM compilers, I had to leave out Texas Instruments’ compiler since it failed to build FFmpeg. Since then, the TI compiler team has been busy fixing bugs, and a snapshot I was given to test was able to build enough of a somewhat patched FFmpeg that I can now present round two in this shoot-out.

    The contenders this time were the fastest GCC variant from round one, ARM RVCT, and newcomer TI TMS470. With the same rules as last time, the exact versions and optimisation options were as follows:

    • CodeSourcery GCC 2009q1 (based on 4.3.3), -mfpu=neon -mfloat-abi=softfp -mcpu=cortex-a8 -std=c99 -fomit-frame-pointer -O3 -fno-math-errno -fno-signed-zeros -fno-tree-vectorize
    • ARM RVCT 4.0 Build 591, -mfpu=neon -mfloat-abi=softfp -mcpu=cortex-a8 -std=c99 -fomit-frame-pointer -O3 -fno-math-errno -fno-signed-zeros
    • TI TMS470 4.7.0-a9229, --float_support=vfpv3 -mv=7a8 -O3 -mf=5


    To keep things fair, I left the vectoriser off with the TI compiler as well. The table below lists the decoding times for the sample files, this time normalised against the participating GCC compiler. Remember, smaller numbers are better. Also keep in mind that this test was done with a development snapshot of TMS470, not an approved release.

    Sample name       Codec        Code type       GCC   RVCT  TI
    cathedral         H.264 CABAC  integer         1.00  0.95  1.02
    NeroAVC           H.264 CABAC  integer         1.00  0.96  1.05
    indiana_jones_4   H.264 CAVLC  integer         1.00  0.92  1.02
    NeroRecodeSample  MPEG-4 ASP   integer         1.00  1.01  1.08
    Silent_Light      MP3          64-bit integer  1.00  0.48  0.72
    When_I_Grow_Up    FLAC         integer         1.00  0.87  0.93
    Lumme-Badloop     Vorbis       float           1.00  0.94  1.05
    Canyon            AC-3         float           1.00  0.88  1.01
    lotr              DTS          float           1.00  1.00  1.08

    Overall, the TI TMS470 compiler comes off slightly worse than GCC. In two cases, however, it was significantly better than GCC, but not as good as RVCT. Incidentally, those were also the ones where RVCT scored the biggest win over GCC.

    My conclusions from this test are twofold:

    • ARM’s own compiler is very hard to beat. They do seem to know how their chips work.
    • GCC is incredibly bad at 64-bit arithmetic on 32-bit machines (a small sketch of the kind of code involved follows below).
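
    To illustrate the second point, here is a small sketch, my own example rather than code taken from FFmpeg, of the kind of 32×32 to 64-bit multiply-accumulate that dominates the fixed-point code behind the "64-bit integer" MP3 row above. A good ARM compiler should turn the inner operation into roughly a single SMLAL instruction; differences in how close the compilers get are one plausible explanation for the 0.48 versus 1.00 gap in that row.

    #include <stdint.h>
    #include <stdio.h>

    /* A 32x32 -> 64-bit multiply-accumulate, the basic operation of a
     * fixed-point synthesis filterbank.  On ARM this can map to a single
     * SMLAL instruction; how close a compiler gets to that is what a
     * benchmark like the one above exposes. */
    static inline int64_t mac64(int64_t acc, int32_t a, int32_t b)
    {
        return acc + (int64_t)a * b;
    }

    int main(void)
    {
        int32_t coef[8]   = { 4096, -8192, 12288, -16384, 16384, -12288, 8192, -4096 };
        int32_t sample[8] = { 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000 };
        int64_t sum = 0;

        /* Accumulate the products in a 64-bit register. */
        for (int i = 0; i < 8; i++)
            sum = mac64(sum, coef[i], sample[i]);

        printf("%lld\n", (long long)sum);
        return 0;
    }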

    The logical next step is to test these compilers with vectorisation enabled. FFmpeg should offer plenty of opportunities for this feature to shine. Unfortunately, that test will have to wait until the RVCT vectoriser is fixed. The current release does not compile FFmpeg with vectorisation enabled.