
Other articles (63)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page specific to the general configuration of the skeletons; a page specific to the configuration of the site's home page; a page specific to the configuration of the sections;
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and plugin-specific features (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors can edit their information on the authors page

On other sites (10204)

  • IJG swings again, and misses

    1 February 2010, by Mans — Multimedia

    Earlier this month the IJG unleashed version 8 of its ubiquitous libjpeg library on the world. Eager to try out the “major breakthrough in image coding technology” promised in the README file accompanying v7, I downloaded the release. A glance at the README file suggests something major indeed is afoot:

    Version 8.0 is the first release of a new generation JPEG standard to overcome the limitations of the original JPEG specification.

    The text also hints at the existence of a document detailing these marvellous new features, and a Google search later a copy has found its way onto my monitor. As I read, however, my state of mind shifts from an initial excited curiosity, through bewilderment and disbelief, finally arriving at pure merriment.

    Already on the first page it becomes clear that no new JPEG standard in fact exists. All we have is an unsolicited proposal sent to the ITU-T by members of the IJG. Realising that even the most brilliant of inventions must start off as mere proposals, I carry on reading. The summary informs me that I am about to witness the introduction of three extensions to the T.81 JPEG format:

    1. An alternative coefficient scan sequence for DCT coefficient serialization
    2. A SmartScale extension in the Start-Of-Scan (SOS) marker segment
    3. A Frame Offset definition in or in addition to the Start-Of-Frame (SOF) marker segment

    Together these three extensions will, it is promised, “bring DCT based JPEG back to the forefront of state-of-the-art image coding technologies.”

    Alternative scan

    The first of the proposed extensions introduces an alternative DCT coefficient scan sequence to be used in place of the zigzag scan employed in most block transform based codecs.

    Alternative scan sequence

    The advantage of this scan would be that, combined with the existing progressive mode, it simplifies decoding of an initial low-resolution image which is enhanced through subsequent passes. The author of the document calls this scheme “image-pyramid/hierarchical multi-resolution coding.” It is not immediately obvious to me how this constitutes even a small advance in image coding technology.
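
    For reference, this is the scan the proposal wants to replace. The conventional zigzag order serializes an 8x8 coefficient block from low to high frequency; the table below is the standard T.81 sequence (common knowledge, not taken from the proposal):

      /* The standard T.81 zigzag scan order: position i of the serialized
         stream takes coefficient zigzag[i] of the row-major 8x8 block. */
      static const unsigned char zigzag[64] = {
           0,  1,  8, 16,  9,  2,  3, 10,
          17, 24, 32, 25, 18, 11,  4,  5,
          12, 19, 26, 33, 40, 48, 41, 34,
          27, 20, 13,  6,  7, 14, 21, 28,
          35, 42, 49, 56, 57, 50, 43, 36,
          29, 22, 15, 23, 30, 37, 44, 51,
          58, 59, 52, 45, 38, 31, 39, 46,
          53, 60, 61, 54, 47, 55, 62, 63
      };

      /* Serialize one block of DCT coefficients in zigzag order. */
      static void scan_block(const short block[64], short out[64])
      {
          for (int i = 0; i < 64; i++)
              out[i] = block[zigzag[i]];
      }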

    At this point I am beginning to suspect that our friend from the IJG has been trapped in a half-world between interlaced GIF images transmitted down noisy phone lines and today’s inferno of SVC, MVC, and other buzzwords.

    (Not so) SmartScale

    Disguised behind this camel-cased moniker we encounter a method which, we are told, will provide better image quality at high compression ratios. The author has combined two well-known (to us) properties in a (to him) clever way.

    The first property concerns the perceived impact of different types of distortion in an image. When encoding with JPEG, as the quantiser is increased, the decoded image becomes ever more blocky. At a certain point, a better subjective visual quality can be achieved by down-sampling the image before encoding it, thus allowing a lower quantiser to be used. If the decoded image is scaled back up to the original size, the unpleasant, blocky appearance is replaced with a smooth blur.

    The second property belongs to the DCT where, as we all know, the top-left (DC) coefficient is the average of the entire block, its neighbours represent the lowest frequency components etc. A top-left-aligned subset of the coefficient block thus represents a low-resolution version of the full block in the spatial domain.
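
    To make that property concrete, here is a minimal numerical sketch of my own (nothing from the proposal): take the orthonormal 2-D DCT of an 8x8 block, keep only the top-left 4x4 coefficients, and inverse-transform them with a 4-point DCT. Up to a fixed rescaling between the two normalizations, the result is a half-resolution version of the block.

      #include <math.h>
      #include <stdio.h>

      #define PI 3.14159265358979323846

      static double c(int k, int n) { return k ? sqrt(2.0 / n) : sqrt(1.0 / n); }

      /* Naive orthonormal 2-D DCT-II of an n x n block; O(n^4), fine for a demo. */
      static void dct2(int n, const double *in, double *out)
      {
          for (int u = 0; u < n; u++)
              for (int v = 0; v < n; v++) {
                  double s = 0.0;
                  for (int y = 0; y < n; y++)
                      for (int x = 0; x < n; x++)
                          s += in[y * n + x]
                             * cos((2 * y + 1) * u * PI / (2 * n))
                             * cos((2 * x + 1) * v * PI / (2 * n));
                  out[u * n + v] = c(u, n) * c(v, n) * s;
              }
      }

      /* Matching inverse transform. */
      static void idct2(int n, const double *in, double *out)
      {
          for (int y = 0; y < n; y++)
              for (int x = 0; x < n; x++) {
                  double s = 0.0;
                  for (int u = 0; u < n; u++)
                      for (int v = 0; v < n; v++)
                          s += c(u, n) * c(v, n) * in[u * n + v]
                             * cos((2 * y + 1) * u * PI / (2 * n))
                             * cos((2 * x + 1) * v * PI / (2 * n));
                  out[y * n + x] = s;
              }
      }

      int main(void)
      {
          double block[64], coef[64], low[16], half[16];

          /* Sample data: a smooth gradient. */
          for (int y = 0; y < 8; y++)
              for (int x = 0; x < 8; x++)
                  block[y * 8 + x] = 16.0 * y + 2.0 * x;

          dct2(8, block, coef);

          /* Keep only the top-left (lowest-frequency) 4x4 coefficients... */
          for (int y = 0; y < 4; y++)
              for (int x = 0; x < 4; x++)
                  low[y * 4 + x] = coef[y * 8 + x];

          /* ...and inverse-transform at the smaller size. The factor 4/8
             compensates for the different 8- and 4-point normalizations. */
          idct2(4, low, half);
          for (int i = 0; i < 16; i++)
              half[i] *= 4.0 / 8.0;

          for (int y = 0; y < 4; y++) {
              for (int x = 0; x < 4; x++)
                  printf("%7.2f", half[y * 4 + x]);
              printf("\n");
          }
          return 0;
      }

    On a single smooth block this works nicely; the trouble described next comes from doing it independently for every 8x8 block of an image.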

    In his flash of genius, our hero came up with the idea of using the DCT for down-scaling the image. Unfortunately, he appears to possess precious little knowledge of sampling theory and human visual perception. Any block-based resampling will inevitably produce sharp artefacts along the block edges. The human visual system is particularly sensitive to sharp edges, so this is one of the most unwanted types of distortion in an encoded image.

    Despite the obvious flaws in this approach, I decided to give it a try. After all, the software is already written, allowing downscaling by factors of 8/8 through 8/16.

    I encoded a 1280×720 test image with each of the nine scaling options, from unity down to half size, each time adjusting the quality parameter so that the final encoded file size was no more than 200000 bytes. The following table presents the encoded file size, the libjpeg quality parameter used, and the SSIM metric for each of the images.

    Scale   Size (bytes)   Quality   SSIM
    8/8     198462         59        0.940
    8/9     196337         70        0.936
    8/10    196133         79        0.934
    8/11    197179         84        0.927
    8/12    193872         89        0.915
    8/13    197153         92        0.914
    8/14    188334         94        0.899
    8/15    198911         96        0.886
    8/16    197190         97        0.869

    Although the smaller images allowed a higher quality setting to be used, the SSIM value drops significantly. Numbers may of course be misleading, but the images below speak for themselves. These are cut-outs from the full image, the original on the left, unscaled JPEG-compressed in the middle, and JPEG with 8/16 scaling to the right.

    Looking at these images, I do not need to hesitate before picking the JPEG variant I prefer.

    Frame offset

    The third and final extension proposed is quite simple and also quite pointless: a top-left cropping to be applied to the decoded image. The alleged utility of this feature would be to enable lossless cropping of a JPEG image. In a typical image workflow, however, JPEG is only used for the final published version, so the need for this feature appears quite far-fetched.

    The grand finale

    Throughout the text, the author makes references to “the fundamental DCT property for image representation.” In his own words:

    This property was found by the author during implementation of the new DCT scaling features and is after his belief one of the most important discoveries in digital image coding after releasing the JPEG standard in 1992.

    The secret is to be revealed in an annex to the main text. This annex quotes in full a post by the author to the comp.dsp Usenet group in a thread with the subject “why DCT”. Reading the entire thread proves quite amusing. A few excerpts follow.

    The actual reason is much simpler, and therefore apparently very difficult to recognize by complicated-thinking people.

    Here is the explanation:

    What are people doing when they have a bunch of images and want a quick preview? They use thumbnails! What are thumbnails? Thumbnails are small downscaled versions of the original image! If you want more details of the image, you can zoom in stepwise by enlarging (upscaling) the image.

    So with proper understanding of the fundamental DCT property, the MPEG folks could make their videos more scalable, but, as in the case of JPEG, they are unable to recognize this simple but basic property, unfortunately, and pursue rather inferior approaches in actual developments.

    These are just phrases, and they don’t explain anything. But this is typical for the current state in this field: The relevant people ignore and deny the true reasons, and thus they turn in a circle and no progress is being made.

    However, there are dark forces in action today which ignore and deny any fruitful advances in this field. That is the reason that we didn’t see any progress in JPEG for more than a decade, and as long as those forces dominate, we will see more confusion and less enlightenment. The truth is always simple, and the DCT *is* simple, but this fact is suppressed by established people who don’t want to lose their dubious position.

    I believe a trip to the Total Perspective Vortex may be in order. Perhaps his tin-foil hat will save him.

  • Linux Media Player Survey Circa 2001

    2 September 2010, by Multimedia Mike — General

    Here’s a document I scavenged from my archives. It was dated September 1, 2001 and I now publish it 9 years later. It serves as sort of a time capsule for the state of media player programs at the time. Looking back on this list, I can’t understand why I couldn’t find MPlayer while I was conducting this survey, especially since MPlayer is the project I eventually started to work for a few months after writing this piece.

    For a little context, I had been studying multimedia concepts and tech for a year and was itching to get my hands dirty with practical multimedia coding. But I wanted to tackle what I perceived as unsolved problems, like playback of proprietary codecs. I didn’t want to have to build a new media playback framework just to start working on my problems. So I surveyed the players available to see which ones I could plug into and use as a testbed for implementing new decoders.

    Regarding Real Player, I wrote: “We’re trying to move away from the proprietary, closed-source ‘solutions’.” Heh. Was I really an insufferable open source idealist back in the day?

    Anyway, here’s the text with some “Where are they now?” commentary [in brackets]:


    Towards an All-Inclusive Media Playing Solution for Linux

    I don’t feel that the media playing solutions for Linux set their sights high enough, even though they do tend to be quite ambitious.

    I want to create a media player for Linux that can open a file, figure out what type of file it is (AVI, MOV, etc.), determine the compression algorithms used to encode the audio and video chunks inside (MPEG, Cinepak, Sorenson, etc.) and replay the file using the best audio, video, and CPU facilities available on the computer.

    Video and audio playback is a solved problem on Linux; I don’t wish to solve that problem again. The problem that isn’t solved is reliance on proprietary multimedia solutions through some kind of WINE-like layer in order to decode compressed multimedia files.

    Survey of Linux solutions for decoding proprietary multimedia
    updated 2001-09-01

    AVI Player for XMMS
    This is based on Avifile. All the same advantages and limitations apply.
    [Top Google hit is a Freshmeat page that doesn’t indicate activity since 2001-2002.]

    Avifile
    This player does a great job at taking apart AVI and ASF files and then feeding the compressed chunks of multimedia data through to the binary Win32 decoders.

    The program is written in C++ and I’m not very good at interpreting that kind of code. But I’m learning all over again. Examining the object hierarchy, it appears that the designers had the foresight to include native support for decoders that are compiled into the program from source code. However, closer examination reveals that there is support for ONE source decoder and that’s the “decoder” for uncompressed data. Still, I tried to manipulate this routine to accept and decode data from other codecs, but no dice. It’s really confounding. The program always crashes when I feed non-uncompressed data through the source decoder.
    [Lives at http://avifile.sourceforge.net/ ; not updated since 2006.]

    Real Player
    There’s not much to do with this since it is closed source and proprietary. Even though there is a plugin architecture, that’s not satisfactory. We’re trying to move away from the proprietary, closed-source “solutions”.
    [Still kickin’ with version 11.]

    XAnim
    This is a well-established Unix media player. To his credit, the author does as well as he can with the resources he has. In other words, he supports the non-proprietary video codecs well, and even has support for some proprietary video codecs through binary-only decoders.

    The source code is extremely difficult to work with as the author chose to use the X coding format, which I’ve never seen used anywhere else except for X header files. The infrastructure for extending the program and supporting other codecs and file formats is there, I suppose, but I would have to wrap my head around the coding style. Maybe I can learn to work past that. The other thing that bothers me about this program is the decoding approach: it seems that each video decoder includes routines to decompress the multimedia data into every conceivable RGB and YUV output format. This seems backwards to me; it seems better to have one decoder function that decodes the data into the native format it was compressed in (e.g., YV12 for MPEG data) and then passes that data to another layer of the program that is in charge of presenting the data and converting it if necessary. This layer would encompass highly optimized software conversion routines, including special CPU-specific instructions (e.g., MMX and SSE), and would eliminate the need to duplicate those routines in every individual decoder. But I’m getting ahead of myself.
    [This one was pretty much dead before I made this survey, the most recent update being in 1999. Still, we owe it much respect as the granddaddy of Unix multimedia playback programs.]

    Xine
    This seems like a promising program. It was originally designed to play MPEGs from DVDs. It can also play MPEG files on a hard drive and utilizes the Xv extensions for hardware YUV playback. It’s also supposed to play AVI files using the same technique as Avifile but I have never, ever gotten it to work. If an AVI file has both video and sound, the binary video decoder can’t decode any frames. If the AVI file has video and no sound, the program gets confused and crashes, as far as I can tell.

    Still, it’s promising, and I’ve been trying to work around these crashes. It doesn’t yet have the type of modularization I’d like to see. Right now, it is tailored to suit MPEG playback, and AVI playback is an afterthought. Still, it appears to have a generalized interface for dropping in new file demultiplexers.

    I tried to extend the program to support source decoders by rewriting w32codec.c from scratch. I’m not having a smooth time of it so far. I’m able to perform some manipulations on the output window. However, I can’t get the program to deal with an RGB image format. It has trouble allocating an RGB surface with XvShmCreateImage(). This isn’t surprising, per my limited knowledge of X, which is that Xv applies to YUV images, though it could conceivably apply to RGB images as well. Anyway, the program should be able to fall back on regular RGB pixmaps if that Xv call fails.

    Right now, this program is looking the most promising. It will take some work to extend the underlying infrastructure, but it seems doable since I know C quite well and can understand the flow of this program, as opposed to Avifile and its C++. The C code also compiles about 10 times faster.
    [My home project for many years after a brief flirtation with MPlayer. It is still alive ; its latest release was just a month ago.]

    XMovie
    This library is a Quicktime movie player. I haven’t looked at it too extensively yet, but I do remember looking at it at one point and reading the documentation that said it doesn’t support key frames. Still, I should examine it again since they released a new version recently.
    [Heroine Virtual still puts out some software but XMovie has not been updated since 2005.]

    XMPS
    This program compiles for me, but doesn’t do much else. It can play an MP3 file. I have been able to get MPEG movies to play through it, but it refuses to show the full video frame, constricting it to a small window (obviously a bug).
    [This project is hosted on SourceForge and is listed with a registration date of 2003, well after this survey was made. So the project obviously lived elsewhere in 2001. Meanwhile, it doesn’t look like any files ever made it to SF for hosting.]

    XTheater
    I can’t even get this program to compile. It’s supposed to be an MPEG player based on SMPEG. As such, it probably doesn’t hold much promise for being easily extended into a general media player.
    [Last updated in 2002.]

    GMerlin
    I can’t get this to compile yet. I have a bug report in to the dev group.
    [Updated consistently in the last 9 years. Last update was in February of this year. I can’t find any record of my bug report, though.]

  • Creating A Lossless SMC Encoder

    26 April 2011, by Multimedia Mike — General

    Look, I can’t explain how or why I come up with this stuff. For some reason, I thought it would be interesting to write a new encoder for the Apple SMC video codec. I can’t even remember why. I just sat down the other day, started writing, and now I have a lossless SMC encoder that I’m not sure what to do with. Maybe this is to be my new thing — writing encoders for marginal multimedia formats.

    Introduction
    SMC, a.k.a. the Apple Graphics Codec, is a vector quantizer (a lossy method), but I decided to attack it from the angle of lossless encoding. SMC operates on 4x4 blocks in an 8-bit paletted colorspace. Each 4x4 block can be encoded with 1, 2, 4, 8, or 16 colors. Blocks can also be skipped (copied from the previous frame) or copied from blocks rendered immediately prior within the same frame.
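
    As a mental model, the block modes just described might be captured like this (my own sketch; of the opcode values, only one-color 0x60 and 16-color 0xE0 are confirmed later in this post, so none are assumed here):

      /* Block coding modes per the format description above. */
      typedef enum {
          SMC_SKIP,        /* copy the block from the previous frame             */
          SMC_COPY_PRIOR,  /* copy from block(s) rendered just before, same frame */
          SMC_1_COLOR,     /* whole block is a single palette index              */
          SMC_2_COLOR,
          SMC_4_COLOR,
          SMC_8_COLOR,
          SMC_16_COLOR     /* one palette index per pixel                        */
      } SMCMode;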

    Step 1: Validating Infrastructure
    The goal of this step is to encode the most braindead SMC frame possible and see if FFmpeg/libav’s QuickTime muxer can create a valid file. I think the simplest frame would be one in which each vector is encoded with the single-color mode, starting with color 0 and incrementing through the palette.

    Status: Successful. The only ‘trick’ was to set avctx->bits_per_coded_sample to 8. (For fun, this can also be set to 40 (8 | 0x20) to specify a grayscale palette.)
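
    For the curious, the generator is roughly the following sketch (my reconstruction from the description above, not the actual patch; I am assuming the low nibble of the 0x60 opcode carries the run length minus one, in line with the run limits discussed in Step 9, and I am ignoring any frame or atom headers):

      #include <stdint.h>

      /* Emit the most braindead SMC payload possible: every 4x4 block coded
         in one-color mode, cycling through the palette. Returns bytes written. */
      static int smc_encode_braindead(uint8_t *buf, int width, int height)
      {
          uint8_t *p = buf;
          int nblocks = (width / 4) * (height / 4);

          for (int i = 0; i < nblocks; i++) {
              *p++ = 0x60;      /* one-color block, run of 1 (low nibble = run - 1) */
              *p++ = i & 0xFF;  /* palette index 0, 1, 2, ... wrapping at 256 */
          }
          return p - buf;
      }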



    Step 2: Preprocessing
    The video frames will arrive at the encoder as 32-bit RGB. These will need to be converted to a paletted colorspace before encoding. I don’t want to use FFmpeg’s default dithering approach as this will result in a substantial loss of quality as described in this post. I would rather maintain a palette built from observed colors throughout successive frames. If the total number of unique observed colors ever exceeds 256, error out.
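
    A sketch of that plan, i.e. what I would like to do rather than what the next paragraph settles for:

      #include <stdint.h>

      #define MAX_PALETTE 256

      static uint32_t palette[MAX_PALETTE];  /* observed 0xRRGGBB values */
      static int palette_size = 0;

      /* Return the palette index of an observed color, adding it on first
         sight; return -1 once more than 256 unique colors have appeared. */
      static int palette_index(uint32_t rgb)
      {
          for (int i = 0; i < palette_size; i++)
              if (palette[i] == rgb)
                  return i;
          if (palette_size == MAX_PALETTE)
              return -1;   /* too many colors: the encoder errors out */
          palette[palette_size] = rgb;
          return palette_size++;
      }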

    That’s what I would like to do. However, I noticed that FFmpeg/libav’s QuickTime muxer has never taken into account the possibility of encoding palettes. The path of least resistance in this case is to dither the input to match QuickTime’s default 8-bit palette (if a paletted QuickTime file does not specify a palette, a default 1-, 2-, 4-, or 8-bit palette is selected).

    Status: Successful, if slow. I definitely need to optimize this step later.

    Step 3: Most Naive Encoding
    The most basic encoding is to "encode" each block as a 16-color block. This will actually result in a slightly larger frame size than a raw encoding since each 4x4 block will be prepended by a byte opcode (0xE0 in this case) to indicate encoding mode. This should demonstrate that the encoder is functioning at the most basic level.
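
    The corresponding code is about as simple as an encoder gets (a sketch; 0xE0 is the 16-color opcode, and I assume a run length of one block in its low bits):

      #include <stdint.h>

      /* "Encode" one 4x4 block in 16-color mode: 17 bytes out for 16 bytes
         of paletted pixels, hence the slightly-larger-than-raw frame size. */
      static uint8_t *encode_block_16color(uint8_t *p, const uint8_t pixels[16])
      {
          *p++ = 0xE0;              /* 16-color mode, run of one block */
          for (int i = 0; i < 16; i++)
              *p++ = pixels[i];     /* one palette index per pixel */
          return p;
      }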

    Status: Successful. Try not to laugh too hard at the Big Buck Bunny dithered to an 8-bit palette:



    Step 4: Better Representation
    It seems to me that encoding this format (losslessly) will entail performing vector operations on lots of 16-element (4x4-pixel) vectors. These could be done on the frame as-is, but it strikes me as more efficient and perhaps less error prone to rearrange the input images into a vector of vectors (or array of arrays if you prefer):

      0  1  2  3  w ...
      4  5  6  7  x ...
      8  9  A  B  y ...
      C  D  E  F  z ...
    
      0 : [0 1 2 3 4 5 6 7 8 9 A B C D E F]
      1 : [...]
    

    Status: Successful.
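
    In code, the rearrangement can be as simple as this sketch, matching the diagram above:

      #include <stdint.h>

      /* Rearrange a paletted frame into one 16-byte vector per 4x4 block,
         in left-to-right, top-to-bottom block order. Assumes the width and
         height are multiples of 4. */
      static void frame_to_vectors(const uint8_t *frame, int width, int height,
                                   uint8_t (*vectors)[16])
      {
          int n = 0;
          for (int by = 0; by < height; by += 4)
              for (int bx = 0; bx < width; bx += 4, n++)
                  for (int y = 0; y < 4; y++)
                      for (int x = 0; x < 4; x++)
                          vectors[n][y * 4 + x] = frame[(by + y) * width + bx + x];
      }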

    Step 5: Add Interframe Skip Codes
    Time to add a bit of brainpower to the proceedings: On non-keyframes, compare the current vector to the vector at the same position from the previous frame.

    Test this by encoding a pair of identical frames. Ideally, all codes should be skip codes.
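
    With the frames already in vector form, the comparison is a one-liner (sketch):

      #include <stdint.h>
      #include <string.h>

      /* On a non-keyframe, a block is skippable when its vector is identical
         to the co-located vector of the previous frame. */
      static int block_is_skippable(const uint8_t cur[16], const uint8_t prev[16])
      {
          return memcmp(cur, prev, 16) == 0;
      }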

    Status: Successful, though my vector matching function could probably be improved.

    Step 6: Analyze Blocks For Optimal Color Coding
    This is where things get potentially interesting, algorithmically. At least, I need to figure out (or look up) an algorithm to count the unique elements in a vector.

    Naive algorithm (i.e., first thing I can think of; a C rendering follows the list):

    • initialize a count variable to 0
    • initialize an array of 256 flags to false
    • for each 8-bit element in vector:
      • if flags[element] is false, set flags[element] to true and increment count
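
    Rendered in C:

      #include <stdint.h>

      /* Count the unique palette indices in one 16-element block vector,
         exactly as the naive algorithm above describes. */
      static int count_distinct(const uint8_t vector[16])
      {
          uint8_t seen[256] = { 0 };
          int count = 0;

          for (int i = 0; i < 16; i++) {
              if (!seen[vector[i]]) {
                  seen[vector[i]] = 1;
                  count++;
              }
          }
          return count;
      }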

    Status: Successful. Here is the distribution for the 640x360 Big Buck Bunny title (the number of blocks having 1 through 16 distinct colors, respectively):

    1194 4636 4113 2140 1138 568 325 154 80 36 9 5 2 0 0 0

    Or, in pretty graph form, demonstrating that vectors with few distinct elements dominate :



    Step 7: Encode Monochrome Blocks
    At this point, the structure is starting to come together pretty well. This phase involves encoding a 0x60 opcode and a palette index when the count_distinct() function returns 1.

    Status: Absolutely no problem.

    Step 8: Encode 2-, 4-, and 8-color Modes
    This step is a little more involved. This is where SMC’s 2-, 4-, and 8-color circular palette caches come into play. E.g., when the first 2-color block is encoded, the pair of colors it uses will be inserted into entry 0 of the 2-color cache. During the next 2-color block encoding, if the block uses a pair of colors that already occurs in the cache, the encoding can reference that cache entry. Otherwise, it adds the pair to the next available cache entry, looping back around to 0 as necessary.

    I think I should modify the count_distinct() function to also return a 16-byte array containing a sorted list of the palette indices used in the vector. The color-pair cache will contain 256 16-bit ints, with 32-bit ints for the quads and 64-bit ints for the octets. This will allow a slightly faster linear cache search.
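
    A sketch of the 2-color case under those assumptions (colors arrive pre-sorted and are packed into 16-bit ints for the linear search; a used-entry count is tracked so that an all-zero slot is never mistaken for the pair (0, 0)):

      #include <stdint.h>

      static uint16_t pair_cache[256];  /* packed (high, low) color pairs */
      static int pair_cache_used = 0;   /* valid entries so far           */
      static int pair_cache_next = 0;   /* circular insertion position    */

      /* Return the cache index holding this sorted color pair, or -1. */
      static int pair_cache_find(uint8_t c0, uint8_t c1)
      {
          uint16_t key = (uint16_t)(c0 << 8 | c1);
          for (int i = 0; i < pair_cache_used; i++)
              if (pair_cache[i] == key)
                  return i;
          return -1;
      }

      /* Insert a new pair at the next slot, wrapping around as necessary. */
      static int pair_cache_add(uint8_t c0, uint8_t c1)
      {
          int idx = pair_cache_next;
          pair_cache[idx] = (uint16_t)(c0 << 8 | c1);
          pair_cache_next = (pair_cache_next + 1) & 0xFF;
          if (pair_cache_used < 256)
              pair_cache_used++;
          return idx;
      }

    On a hit the encoder references the cache entry; on a miss it adds the pair and emits the colors literally.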

    Status: The 2-color encoding wasn’t too much trouble and I was able to adapt it to the 4-color mode pretty quickly afterward. I’m still having trouble with the insane 8-color coding mode, though. So that’s commented out for the time being.

    Step 9: Run Encoding and Putting It All Together
    For each frame, convert the input pixels to a paletted format via one method or another (match to default QuickTime palette for first pass). Then, preprocess each vector to determine the minimum number of elements that can be used to represent it, storing the sorted list of distinct colors in a separate array. The number of elements can either be 0 (only for interframes and indicates a skip block), 1, 2, 4, 8, or 16. Also during this phase, for each vector after the first, test if the vector is the same as the previous vector. If it is, denote this fact in the preprocessed encoding (set the high bit of the element count number).

    Finally, pack it into the bytestream. Iterate through the element count array and search for the longest runs of elements that are encoded with the same mode (up to 256 for skip modes, up to 16 for other modes). If the high bit of an element count is set, that indicates that a copy mode can be encoded. Look for the longest run of element counts with the high bit set and encode a copy mode.
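
    The run-finding loop might look like this sketch (emit_run() and MODE_SKIP are hypothetical stand-ins for the real opcode writer and mode tag; the high-bit copy-run case is left out for brevity):

      /* Hypothetical mode tag and bytestream writer, for illustration only. */
      enum { MODE_SKIP = 0 };
      void emit_run(int mode, int run, int first_vector);

      /* Walk the per-vector mode array and emit one opcode per run of
         identical modes, respecting the format's run-length caps. */
      static void pack_modes(const uint8_t *modes, int nvectors)
      {
          int i = 0;
          while (i < nvectors) {
              int cap = (modes[i] == MODE_SKIP) ? 256 : 16;
              int run = 1;
              while (i + run < nvectors && run < cap && modes[i + run] == modes[i])
                  run++;
              emit_run(modes[i], run, i);
              i += run;
          }
      }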

    Status: In-process. Will finish this as motivation strikes.