
Other articles (54)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)
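
    As a loose illustration only (SPIPmotion itself is a SPIP plugin built on PHP and SQL, not C), the fields named in this excerpt could be pictured as the record below; the remaining columns are truncated in the excerpt.

        /* Hypothetical sketch of one spip_spipmotion_attentes row, limited to
         * the fields named above; the real table is declared by the plugin in
         * SQL and contains further columns not shown here. */
        struct spipmotion_attente {
            long id_spipmotion_attente; /* unique numeric id of the task to process      */
            long id_document;           /* numeric id of the original document to encode */
            long id_objet;              /* id of the object the encoded file attaches to */
            char objet[32];             /* type of that object, e.g. "article"           */
            /* ... further fields elided in the excerpt ... */
        };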

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

On other sites (9261)

  • VP8 Codec Optimization Update

    16 June 2010, by noreply@blogger.com (John Luther), inside webm

    Since WebM launched in May, the team has been working hard to make the VP8 video codec faster. Our community members have contributed improvements, but there’s more work to be done in some interesting areas related to performance (more on those below).


    Encoder


    The VP8 encoder is ripe for speed optimizations. Scott LaVarnway’s efforts in writing an x86 assembly version of the quantizer will help significantly toward this goal, as the quantizer is called many times while the encoder decides how much detail from the image will be transmitted.
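
    To make the quantizer’s role concrete, here is a minimal generic scalar-quantization sketch in C. It is not VP8’s quantizer (the real one operates on blocks of transform coefficients with per-coefficient rounding and zero-bin logic); it only illustrates the hot per-coefficient loop that assembly versions target.

        #include <stdlib.h>

        /* Generic scalar quantizer (illustrative only, not libvpx code): each
         * transform coefficient is divided by a step size; the decoder later
         * multiplies back, losing the remainder. */
        static void quantize_block(const int *coeff, int *qcoeff, int *dqcoeff,
                                   int n, int step)
        {
            for (int i = 0; i < n; i++) {
                int sign = coeff[i] < 0 ? -1 : 1;
                int mag  = abs(coeff[i]);
                int q    = mag / step;        /* level written to the bitstream */

                qcoeff[i]  = sign * q;
                dqcoeff[i] = sign * q * step; /* what the decoder reconstructs */
            }
        }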

    For those of you eager to get involved, one piece of low-hanging fruit is writing a SIMD version of the ARNR temporal filtering code. Also, much of the assembly code only makes use of the SSE2 instruction set, and there are surely newer extensions that could be exploited. There is also redundant code to remove and other general cleanup to do (Yaowu Xu has submitted some changes for these).
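
    For a sense of why the ARNR temporal filter is a natural SIMD candidate, here is a hedged scalar sketch of the general idea: a weighted average of co-located pixels across frames. The real libvpx filter chooses its weights differently; the point is that every pixel is independent, which is exactly what SSE2- or NEON-style SIMD exploits.

        #include <stdint.h>

        /* Illustrative scalar temporal filter (not the actual ARNR code):
         * accumulate a weighted sum of the same pixel position across several
         * frames, then normalize.  Weights are assumed positive. */
        static void temporal_filter_sketch(const uint8_t *frames[], const int *weights,
                                           int num_frames, int num_pixels, uint8_t *out)
        {
            for (int p = 0; p < num_pixels; p++) {
                int acc = 0, wsum = 0;
                for (int f = 0; f < num_frames; f++) {
                    acc  += weights[f] * frames[f][p];
                    wsum += weights[f];
                }
                out[p] = (uint8_t)((acc + wsum / 2) / wsum); /* rounded weighted mean */
            }
        }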

    At a higher level, there is room to explore alternative motion search strategies in the encoder. Eventually the motion search could be decoupled entirely, allowing motion fields to be calculated elsewhere (for example, on a graphics processor).
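
    As background for what such strategies would replace, below is a hedged sketch of the simplest possible block-matching search: an exhaustive sum-of-absolute-differences (SAD) scan over a small window. VP8’s encoder actually uses faster diamond and refinement searches, but the data-parallel shape of a loop like this is what makes computing motion fields elsewhere, such as on a GPU, plausible.

        #include <limits.h>
        #include <stdint.h>
        #include <stdlib.h>

        /* SAD between a 16x16 block in the source frame and a candidate block
         * in the reference frame (stride = frame width in bytes). */
        static int sad16x16(const uint8_t *src, const uint8_t *ref, int stride)
        {
            int sad = 0;
            for (int y = 0; y < 16; y++)
                for (int x = 0; x < 16; x++)
                    sad += abs(src[y * stride + x] - ref[y * stride + x]);
            return sad;
        }

        /* Exhaustive +/-range search; the caller must keep candidates inside
         * the reference frame.  Real encoders use smarter search patterns. */
        static void full_search(const uint8_t *src, const uint8_t *ref, int stride,
                                int range, int *best_mx, int *best_my)
        {
            int best = INT_MAX;
            *best_mx = *best_my = 0;
            for (int my = -range; my <= range; my++)
                for (int mx = -range; mx <= range; mx++) {
                    int sad = sad16x16(src, ref + my * stride + mx, stride);
                    if (sad < best) {
                        best = sad;
                        *best_mx = mx;
                        *best_my = my;
                    }
                }
        }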

    Decoder


    Decoder optimizations can bring higher resolutions and smoother playback to less powerful hardware.

    Jeff Muizelaar has submitted some changes which combine the IDCT and the summation with the predicted block into a single function. This avoids storing the intermediate result, reducing memory transfers and cache pollution. It changes the assembly code in a fundamental way, so we will need to sync the other platforms up or switch them to a generic C implementation and accept the performance regression. Johann Koenig is working on implementing this change for ARM processors, and we’ll merge these changes into the mainline soon.
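
    The shape of that change can be sketched as follows (a hedged illustration with a placeholder transform, not the actual libvpx IDCT): instead of writing the inverse transform into a temporary block and then adding it to the predictor, the add and clamp happen as each value is produced, so the intermediate buffer never exists.

        #include <stdint.h>

        static uint8_t clamp_u8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

        /* Placeholder per-pixel "inverse transform": the real VP8 4x4 IDCT is a
         * pair of butterfly passes; any per-coefficient computation is enough
         * to show the fusion idea. */
        static int inverse_transform_px(const int16_t *coeffs, int x, int y)
        {
            return coeffs[y * 4 + x];   /* identity stand-in for the real IDCT */
        }

        /* Unfused: produce the whole residual block in a temporary, then add it
         * to the prediction.  The temporary costs stores, loads and cache space. */
        void reconstruct_separate(const int16_t *coeffs, uint8_t *pred, int stride)
        {
            int16_t tmp[16];
            for (int y = 0; y < 4; y++)
                for (int x = 0; x < 4; x++)
                    tmp[y * 4 + x] = (int16_t)inverse_transform_px(coeffs, x, y);
            for (int y = 0; y < 4; y++)
                for (int x = 0; x < 4; x++)
                    pred[y * stride + x] =
                        clamp_u8(pred[y * stride + x] + tmp[y * 4 + x]);
        }

        /* Fused: add each residual to the predictor as soon as it is computed,
         * so the intermediate block never goes through memory. */
        void reconstruct_fused(const int16_t *coeffs, uint8_t *pred, int stride)
        {
            for (int y = 0; y < 4; y++)
                for (int x = 0; x < 4; x++)
                    pred[y * stride + x] =
                        clamp_u8(pred[y * stride + x] + inverse_transform_px(coeffs, x, y));
        }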

    In addition, Tim Terriberry is attacking a different method of bounds checking on the "bool decoder." The bool decoder is performance-critical, as it is called several times for each bit in the input stream. The current code handles this check with a simple clamp in the innermost loops and a less-frequent copy into a circular buffer. This can be expensive at higher data rates. Tim’s patch removes the circular buffer, but uses a more complex clamp in the innermost loops. These inner loops have historically been troublesome on embedded platforms.
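
    For readers unfamiliar with it, the bool decoder is a binary arithmetic (range) decoder. The sketch below is a simplified generic version modeled on the structure described in the VP8 bitstream guide, not the optimized libvpx code that these patches modify; the bounds check lives in the byte-refill step that runs inside the per-bit renormalization loop.

        #include <stddef.h>
        #include <stdint.h>

        typedef struct {
            const uint8_t *input;   /* next compressed byte */
            size_t input_len;       /* bytes remaining */
            uint32_t value;         /* current window of the coded stream */
            uint32_t range;         /* current interval width, kept in [128, 255] */
            int bit_count;          /* bits consumed since the last byte refill */
        } bool_decoder;

        /* Bounds-checked byte fetch: past the end of the buffer, feed zeros.
         * Because this sits on the per-bit path, how the check is implemented
         * (simple clamp vs. circular-buffer copy) has a real performance cost. */
        static uint8_t get_byte(bool_decoder *d)
        {
            if (d->input_len == 0)
                return 0;
            d->input_len--;
            return *d->input++;
        }

        void bool_init(bool_decoder *d, const uint8_t *buf, size_t len)
        {
            d->input = buf;
            d->input_len = len;
            d->value = (uint32_t)get_byte(d) << 8;
            d->value |= get_byte(d);
            d->range = 255;
            d->bit_count = 0;
        }

        /* Decode one bit whose probability of being zero is prob/256. */
        int bool_get(bool_decoder *d, int prob)
        {
            uint32_t split = 1 + (((d->range - 1) * (uint32_t)prob) >> 8);
            uint32_t big_split = split << 8;
            int bit;

            if (d->value >= big_split) {   /* a one was coded */
                bit = 1;
                d->value -= big_split;
                d->range -= split;
            } else {                       /* a zero was coded */
                bit = 0;
                d->range = split;
            }

            while (d->range < 128) {       /* renormalize: the innermost loop */
                d->value <<= 1;
                d->range <<= 1;
                if (++d->bit_count == 8) {
                    d->bit_count = 0;
                    d->value |= get_byte(d);
                }
            }
            return bit;
        }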

    To contribute to these efforts, I’ve started working on rewriting higher-level parts of the decoder. I believe there is an opportunity to improve performance by paying better attention to data locality and cache layout, and reducing memory bus traffic in general. Another area I plan to explore is improving utilization in the multi-threaded decoder by separating the bitstream decoding from the rest of the image reconstruction, using work units larger than a single macroblock, and not tying functionality to a specific thread. To get involved in these areas, subscribe to the codec-devel mailing list and provide feedback on the code as it’s written.
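
    One hedged way to picture that split is a producer/consumer arrangement: one thread parses the bitstream into row-sized work units, and whichever worker is free reconstructs the next unit. The sketch below uses POSIX threads and only illustrates the structure; it is not libvpx’s threading code, and it omits full-queue handling.

        #include <pthread.h>
        #include <stdio.h>

        #define MAX_ROWS 64

        /* A work unit covering one macroblock row: larger than a single
         * macroblock, and not tied to any particular thread. */
        typedef struct { int row; /* plus parsed modes, coefficients, ... */ } work_unit;

        typedef struct {
            work_unit units[MAX_ROWS];
            int head, tail, done;
            pthread_mutex_t lock;
            pthread_cond_t  ready;
        } work_queue;

        static work_queue q = { .lock  = PTHREAD_MUTEX_INITIALIZER,
                                .ready = PTHREAD_COND_INITIALIZER };

        /* Producer: the bitstream-decoding thread pushes each finished row. */
        void push_row(int row)
        {
            pthread_mutex_lock(&q.lock);
            q.units[q.tail++ % MAX_ROWS] = (work_unit){ .row = row };
            pthread_cond_signal(&q.ready);
            pthread_mutex_unlock(&q.lock);
        }

        /* Called by the producer after the last row has been pushed. */
        void finish(void)
        {
            pthread_mutex_lock(&q.lock);
            q.done = 1;
            pthread_cond_broadcast(&q.ready);
            pthread_mutex_unlock(&q.lock);
        }

        /* Consumers: any reconstruction worker pops the next available row. */
        void *reconstruct_worker(void *arg)
        {
            (void)arg;
            for (;;) {
                pthread_mutex_lock(&q.lock);
                while (q.head == q.tail && !q.done)
                    pthread_cond_wait(&q.ready, &q.lock);
                if (q.head == q.tail && q.done) {
                    pthread_mutex_unlock(&q.lock);
                    return NULL;
                }
                work_unit u = q.units[q.head++ % MAX_ROWS];
                pthread_mutex_unlock(&q.lock);

                printf("reconstructing row %d\n", u.row); /* stand-in for real work */
            }
        }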

    Embedded Processors


    We want to optimize multiple platforms, not just desktops. Fritz Koenig has already started looking at the performance of VP8 on the Intel Atom platform. This platform needs some attention, as we wrote our current x86 assembly code with an out-of-order processor in mind. Since Atom is an in-order processor (much like the original Pentium), the instruction scheduling of all of the x86 assembly code needs to be reexamined. One option we’re looking at is scheduling the code for the Atom processor and seeing if that impacts the performance on other x86 platforms such as the Via C3 and AMD Geode. This is shaping up to be a lot of work, but doing it would provide us with an opportunity to tighten up our assembly code.
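
    A loose C-level illustration of the scheduling issue (in libvpx the real work happens in assembly): an in-order core cannot reorder around a long dependency chain at run time, so the code itself has to keep independent operations in flight, for example by splitting one accumulator into several.

        #include <stddef.h>
        #include <stdint.h>

        /* One long dependency chain: each multiply-accumulate waits on the
         * previous one, which an in-order pipeline cannot hide. */
        uint32_t weighted_sum_serial(const uint8_t *p, const uint8_t *w, size_t n)
        {
            uint32_t s = 0;
            for (size_t i = 0; i < n; i++)
                s += (uint32_t)p[i] * w[i];
            return s;
        }

        /* The same computation with four independent accumulators: arranging
         * the instruction stream so independent work overlaps is, in miniature,
         * what rescheduling the assembly for an in-order core amounts to. */
        uint32_t weighted_sum_interleaved(const uint8_t *p, const uint8_t *w, size_t n)
        {
            uint32_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
            size_t i = 0;
            for (; i + 4 <= n; i += 4) {
                s0 += (uint32_t)p[i]     * w[i];
                s1 += (uint32_t)p[i + 1] * w[i + 1];
                s2 += (uint32_t)p[i + 2] * w[i + 2];
                s3 += (uint32_t)p[i + 3] * w[i + 3];
            }
            for (; i < n; i++)
                s0 += (uint32_t)p[i] * w[i];
            return s0 + s1 + s2 + s3;
        }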

    These issues, along with wanting to make better use of the larger register file on x86_64, may reignite every assembly programmer’s (least?) favorite debate: whether or not to use intrinsics. Yunqing Wang has been experimenting with this a bit, but initial results aren’t promising. If you have experience in dealing with a lot of assembly code across several similar-but-kinda-different platforms, these maintainability issues might be familiar to you. I hope you’ll share your thoughts and experiences on the codec-devel mailing list.
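
    For readers who have not used them, intrinsics let SIMD instructions be written as C function calls, leaving register allocation and scheduling to the compiler. The snippet below is a generic SSE2 example (byte-wise averaging of two buffers), not code from libvpx; the maintainability argument is that it compiles anywhere SSE2 is available, while the counter-argument is the loss of control over exactly what the compiler emits.

        #include <emmintrin.h>  /* SSE2 intrinsics */
        #include <stddef.h>
        #include <stdint.h>

        /* Average two pixel buffers 16 bytes at a time via the PAVGB
         * instruction's intrinsic, with a scalar tail for the leftover bytes. */
        void average_rows_sse2(const uint8_t *a, const uint8_t *b,
                               uint8_t *dst, size_t n)
        {
            size_t i = 0;
            for (; i + 16 <= n; i += 16) {
                __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
                __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
                _mm_storeu_si128((__m128i *)(dst + i), _mm_avg_epu8(va, vb));
            }
            for (; i < n; i++)
                dst[i] = (uint8_t)((a[i] + b[i] + 1) >> 1);  /* same rounding */
        }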

    Optimizing codecs is an iterative (some would say never-ending) process, so stay tuned for more posts on the progress we’re making, and by all means, start hacking yourself.

    It’s exciting to see that we’re starting to get substantial code contributions from developers outside of Google, and I look forward to more as WebM grows into a strong community effort.

    John Koleszar is a software engineer at Google.

  • Revision 17746: Streamline and simplify the registration process:

    22 April 2011, by cedric -

    Offer a link in the email that lets users confirm their registration and their email address in one step, and logs them in automatically right away. The token creation/verification/deletion code is factored together with the password process (...)

  • Evolution #4417: Increase the length of the password required to create a new author

    19 December 2019, by jean marie grall

    Just seen on Seenthis:

    It’s Time to Kill Your Eight-Character Password
    It’s time to throw away any passwords of eight characters or less and replace them with much longer passwords — let’s say at least 12 characters.
    https://www.tomsguide.com/us/8-character-password-dead,news-29429.html

    In my mind, it’s mostly about requiring a mix of letters and digits and avoiding chocolat or basket as a password :)
    As for the length, that is indeed open to discussion. I don’t know what others do...