Other articles (80)

  • User profiles

    12 April 2011

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administrer" section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" section where support for new languages can be activated.
    Each newly added language can still be deactivated as long as no object has been created in that language; once one has, the language is greyed out in the configuration and (...)

  • XMP PHP

    13 May 2011

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it handles a set of dynamic tags for use in the Semantic Web.
    XMP makes it possible to store, as an XML document, information about a file: its title, author, history (...)
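
    As a rough, hedged illustration of that idea (not part of the original article, which is truncated above), the Python sketch below scans a file for an embedded <x:xmpmeta> packet and reads the Dublin Core title and creator with the standard library. The file name is made up, and real-world extractors handle more encodings and namespaces than this.

        import xml.etree.ElementTree as ET

        # Namespaces commonly found inside an XMP packet (RDF + Dublin Core).
        NS = {
            "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
            "dc": "http://purl.org/dc/elements/1.1/",
        }

        def read_xmp_title_and_author(path):
            """Scan a file for an embedded <x:xmpmeta> packet and return the
            Dublin Core title and creator, if present."""
            data = open(path, "rb").read()
            start = data.find(b"<x:xmpmeta")
            end = data.find(b"</x:xmpmeta>")
            if start == -1 or end == -1:
                return None, None
            packet = data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", "replace")
            root = ET.fromstring(packet)
            # dc:title is an rdf:Alt of language alternatives; dc:creator an rdf:Seq.
            title = root.find(".//dc:title/rdf:Alt/rdf:li", NS)
            creator = root.find(".//dc:creator/rdf:Seq/rdf:li", NS)
            return (title.text if title is not None else None,
                    creator.text if creator is not None else None)

        print(read_xmp_title_and_author("photo.jpg"))  # hypothetical file name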

On other sites (5260)

  • Podcast Producer 2 : Where I’m at

    27 August 2009

    If all of this scheduled publishing stuff has worked, you’ll find a handful of posts about Podcast Producer 2 below.

    I just took all of the notes I’ve collected while working on a PCP2 project over the last month or so, and wrote them up. It’s entirely possible that there are all sorts of things I’m missing or misunderstanding. Now that the NDA has expired, hopefully some more folks will go public with their own discoveries.

    So, if you’re coming across these posts and you know things I don’t about this strange world of Ruby and media, please let me know in the comments!

  • Sony Launches Less Useful Z5U

    18 November 2009

    Sony today announced the NXCAM, an AVCHD-based "professional" camera which bears a striking resemblance to the EX1 and Z5U.

    You get 1080p Exmor CMOS chips (presumably 1/3"?), and it records AVCHD to the highly popular (sarcasm) Memory Stick media.

    Pricing hasn’t been announced, but presumably it’ll be in the $4000 range like the Z5U. I’ll be curious to see how this shakes out in the market.

  • Revised FATE Test Spec System

    9 June 2010, by Multimedia Mike — FATE Server

    FATE involves some database tables that define the test specifications. Like everything else in FATE, the concept could use some improvement. After I prototyped an improved, multithreaded testing client, the next logical revision seemed to be the test spec system.

    History
    The test spec system has been handled by a single table that includes an FFmpeg command line (with a few possible modifiers thrown in), an integer ID, a human-friendly ID, a description, the expected command line return code, the expected command output, a maximum runtime, and a Boolean to indicate whether the test is to be considered active.
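
    To picture that table concretely, here is a minimal sketch of one test spec as a Python structure; the field names are my guesses from the description above, not the actual column names.

        from dataclasses import dataclass

        @dataclass
        class TestSpec:
            """One row of the (hypothetically named) test spec table."""
            id: int                  # integer ID
            name: str                # human-friendly ID
            description: str
            command: str             # FFmpeg command line, possibly with modifiers
            expected_status: int     # expected command line return code
            expected_stdout: str     # expected command output
            max_runtime: float       # maximum runtime, in seconds
            active: bool             # whether the test is considered active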

    Adjunct to this test database is a large corpus of test media named the FATE suite.

    At first, the FATE testing script queried the test specs directly from the server’s MySQL database before every build/test cycle. I soon realized this was ludicrously inefficient since the test specs don’t change that often. So I cached the tests in a static file to be retrieved via HTTP, first in Python’s "pickled" (serialized) format, then in an SQLite database.
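
    A minimal sketch of that static-cache idea, with an invented URL (the post doesn’t give the real server layout): fetch the serialized spec file over HTTP once per cycle instead of hitting the database.

        import pickle
        import urllib.request

        # Hypothetical location of the cached spec file.
        SPEC_CACHE_URL = "http://fate.example.com/test-specs.pickle"

        def fetch_test_specs():
            # One HTTP fetch per build/test cycle replaces the per-cycle MySQL queries.
            # (pickle is shown only because it was the first cache format used;
            # never unpickle data from a server you don't control.)
            with urllib.request.urlopen(SPEC_CACHE_URL) as resp:
                return pickle.load(resp)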

    Planned Upgrades
    There are 2 major features I would like to build into the system going forward:

    1. The ability to version the entire suite so that it’s possible to test old branches of FFmpeg
    2. Another database field to indicate which, if any, other test specs must be executed before this spec can be executed

    I think I will take this opportunity to switch the test cache serialization format to JSON. I switched from Python pickling to SQLite because the latter was more portable between languages; JSON has that same benefit. Further, working with JSON data doesn’t require a round trip to disk (i.e., to generate an SQLite database for sending via HTTP, it first has to be written to disk; a database can be created and manipulated entirely in memory, but its bytes can’t then be fetched).
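
    To make the JSON point concrete, here is a small sketch of my own (not from the post) that reuses the TestSpec structure sketched earlier and builds the whole cache in memory.

        import json
        from dataclasses import asdict

        def serialize_specs(specs):
            # Produces the cache as JSON bytes entirely in memory, ready to hand
            # to the web server; no temporary SQLite file on disk is needed.
            return json.dumps([asdict(s) for s in specs]).encode("utf-8")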

    Things To Research

    • Pondering how version control systems operate and what they have to teach regarding how to version this data (including the question of whether I can just use an existing version control mechanism instead of creating my own system)
    • Efficient caching mechanism
    • Tagging test specs for alternate purposes such as longevity testing
    • Learn about web form programming in the 21st century so that it’s not quite as painful to maintain the system.

    Preliminary Versioning Concept
    Here is one approach I am thinking of: create test groups. Each test spec is assigned to at least one test group. I can think of at least 2 groups: functional (the base test set in existence that validates functionality) and profiling (the projected test set that will be used for ongoing performance and memory profiling). The web frontend will allow for the creation of labels that will apply to a single group. Doing so will apply that label to all active tests in the group.
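
    A minimal sketch of how those groups and labels might be modelled; the post only describes the concept, so the names and the in-memory storage here are assumptions.

        # Each spec belongs to at least one group; a label snapshots a group's active specs.
        groups = {
            "functional": {1, 2, 3},   # base set that validates functionality
            "profiling": {2, 3},       # projected performance/memory profiling set
        }
        active = {1: True, 2: True, 3: False}
        labels = {}                     # label name -> frozen snapshot of spec IDs

        def create_label(label, group):
            # Applying a label to a group records the group's currently active
            # specs, so an old FFmpeg branch can later be tested against that set.
            labels[label] = frozenset(t for t in groups[group] if active[t])

        create_label("release-0.6", "functional")   # hypothetical label name
        print(labels["release-0.6"])                # frozenset({1, 2})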