Media (1)

Keyword: - Tags -/ipad

Other articles (44)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, as long as your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • No talk of "market", "cloud", etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish
    so freely on the web 2.0 scene and in the companies that live off it.
    You are therefore invited to avoid using the terms "Brand", "Cloud", "Market", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages
    the sharing of creative work on the Internet and lets authors keep as much autonomy as possible.
    No "Gold or Premium contract" is therefore planned, no (...)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the "champs extras 2" and "Interface pour champs extras" plugins.

On other sites (8173)

  • Revision 37267: increment from the input if needed

    14 April 2010, by joseph@… — Log

  • Adventures in Unicode

    29 November 2012, by Multimedia Mike — Programming, php, Python, sqlite3, unicode

    Tangential to multimedia hacking is proper metadata handling. Recently, I have taken an interest in processing a large corpus of multimedia files that are likely to contain metadata strings which do not fall into the lower ASCII set. This is significant because the lower ASCII set intersects perfectly with my own programming comfort zone. Indeed, all of my programming life, I have insisted on covering my ears and loudly asserting “LA LA LA LA LA! ALL TEXT EVERYWHERE IS ASCII!” I suspect I’m not alone in this.

    Thus, I took this as an opportunity to conquer my longstanding fear of Unicode. I developed a self-learning course comprised of a series of exercises which add up to this diagram:

    [Diagram: the processing pipeline, binary file → Python → SQLite3 → PHP → web page]

    Part 1: Understanding Text Encoding
    Python 2 has plain byte strings by default, and it also has Unicode strings; the latter are prefixed with the letter ‘u’. This is what ‘ö’ looks like encoded in each type:

    >>> 'ö', u'ö'
    ('\xc3\xb6', u'\xf6')

    A large part of my frustration with Unicode comes from Python yelling at me about UnicodeDecodeErrors and an inability to handle the number 0xc3 for some reason. This usually happens when I’m trying to wrap my head around an unrelated problem and don’t care to get sidetracked by text encoding issues. However, when I studied the above output, I finally understood where the 0xc3 comes from, even though I didn’t yet understand what the encoding represents exactly.

    I can see from assorted tables that ‘ö’ is character 0xF6 in various encodings (in Unicode and Latin-1), so u'\xf6' makes sense. But what does '\xc3\xb6' mean? It’s my style to excavate straight down to the lowest levels, and I wanted to understand exactly how characters are represented in memory. The UTF-8 encoding tables inform us that any Unicode code point above 0x7F but less than 0x800 will be encoded with 2 bytes:

     110xxxxx 10xxxxxx
    

    Applying this pattern to the \xc3\xb6 encoding:

                hex : 0xc3      0xb6
               bits : 11000011  10110110
     important bits : ---00011  --110110
          assembled : 00011110110
         code point : 0xf6
    

    I was elated when I drew that out and made the connection. Maybe I’m the last programmer to figure this stuff out. But I’m still happy that I actually understand those Python errors pertaining to the number 0xc3 and that I won’t have to apply canned solutions without understanding the core problem.
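
    To double-check that arithmetic mechanically, here is a minimal sketch in the same Python 2 REPL (my addition, not from the original post): masking the lead byte with 0x1f keeps its five payload bits, masking the continuation byte with 0x3f keeps its six, and a shift reassembles the code point.

    >>> b1, b2 = 0xc3, 0xb6                             # the two UTF-8 bytes of 'ö'
    >>> code_point = ((b1 & 0x1f) << 6) | (b2 & 0x3f)   # 00011 ++ 110110
    >>> hex(code_point)
    '0xf6'
    >>> print unichr(code_point)
    ö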

    I’m cheating on this part of the exercise just a little bit since the diagram implied that the Unicode text needs to come from a binary file. I’ll return to that in a bit. For now, I’ll just contrive the following Unicode string from the Python REPL:

    >>> u = u'Üñìçôđé'
    >>> u
    u'\xdc\xf1\xec\xe7\xf4\u0111\xe9'

    Part 2: From Python To SQLite3
    The next step is to see what happens when I use Python’s SQLite3 module to dump the string into a new database. Will the Unicode encoding be preserved on disk? What will UTF-8 look like on disk anyway?

    >>> import sqlite3
    >>> conn = sqlite3.connect('unicode.db')
    >>> conn.execute("CREATE TABLE t (t text)")
    >>> conn.execute("INSERT INTO t VALUES (?)", (u, ))
    >>> conn.commit()
    >>> conn.close()

    Next, I manually view the resulting database file (unicode.db) using a hex editor and look for strings. Here we go:

    000007F0   02 29 C3 9C  C3 B1 C3 AC  C3 A7 C3 B4  C4 91 C3 A9
    

    Look at that! It’s just like the \xc3\xb6 encoding we see in the regular Python strings.
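
    As a quick cross-check (my addition, not part of the original post), encoding the same string to UTF-8 back in the REPL reproduces that byte run exactly:

    >>> u = u'\xdc\xf1\xec\xe7\xf4\u0111\xe9'   # Üñìçôđé, from Part 1
    >>> u.encode('utf-8')
    '\xc3\x9c\xc3\xb1\xc3\xac\xc3\xa7\xc3\xb4\xc4\x91\xc3\xa9'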

    Part 3: From SQLite3 To A Web Page Via PHP
    Finally, use PHP (love it or hate it, but it’s what’s most convenient on my hosting provider) to query the string from the database and display it on a web page, completing the outlined processing pipeline.

    <?php
    $dbh = new PDO("sqlite:unicode.db");
    foreach ($dbh->query("SELECT t from t") as $row)
        $unicode_string = $row['t'];
    ?>

    <html>
    <head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head>
    <body><h1><?= $unicode_string ?></h1></body>
    </html>

    I tested the foregoing PHP script on 3 separate browsers that I had handy (Firefox, Internet Explorer, and Chrome):

    [Screenshot: the string ‘Üñìçôđé’ rendered correctly in all three browsers]

    I’d say that counts as success! It’s important to note that the “meta http-equiv” tag is absolutely necessary. Omit it and you’ll see something like this:

    [Screenshot: the same string rendered as Latin-1 mojibake when the charset declaration is omitted]

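
    The same mojibake is easy to reproduce in the Python 2 REPL (my sketch, not from the original post) by decoding UTF-8 bytes as if they were Latin-1:

    >>> print '\xc3\xb6'.decode('latin-1')   # UTF-8 bytes of 'ö', misread as Latin-1
    Ã¶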
    Since we know what the UTF-8 stream looks like, it’s pretty obvious how the mapping is operating here: 0xc3 and 0xc4 correspond to ‘Ã’ and ‘Ä’, respectively. That is the mapping of the encoding named ISO/IEC 8859-1, a.k.a. Latin-1. Speaking of which…

    Part 4: Converting Binary Data To Unicode
    At the start of the experiment, I was trying to extract metadata strings from these binary multimedia files and I noticed characters like our friend ‘ö’ from above. In the bytestream, this was represented simply with 0xf6. I mistakenly believed that this was the on-disk representation of UTF-8. Wrong. Turns out it’s Latin-1.

    However, I still need to solve the problem of transforming such strings into Unicode to be shoved through the pipeline diagrammed above. For this experiment, I created a 9-byte file with the Latin-1 string ‘Üñìçôdé’ couched by 0's, to simulate yanking a string out of a binary file. Here’s unicode.file:

    00000000   00 DC F1 EC  E7 F4 64 E9  00         ......d..
    

    (Aside: this experiment uses a plain ‘d’ since the ‘đ’ with a bar through it doesn’t occur in Latin-1; it shows up all over the place in Vietnamese, at least.)

    I’ve been mashing around Python code via the REPL, trying to get this string into a Unicode-friendly format. This is a successful method but it’s probably not the best:

    >>> import struct
    >>> f = open('unicode.file', 'r').read()
    >>> u = u''
    >>> for c in struct.unpack("B"*7, f[1:8]):
    ...     u += unichr(c)
    ...
    >>> u
    u'\xdc\xf1\xec\xe7\xf4d\xe9'
    >>> print u
    Üñìçôdé
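
    Since the bytes turned out to be Latin-1, the built-in codecs can do the same job in one step. A minimal alternative sketch (my addition, using the same unicode.file):

    >>> u = open('unicode.file', 'r').read()[1:8].decode('latin-1')
    >>> print u
    Üñìçôdé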

    Conclusion
    Dealing with text encoding matters reminds me of dealing with integer endianness concerns. When you’re just dealing with one system, you probably don’t need to think too much about it because the system is usually handling everything consistently underneath the covers.

    However, when the data leaves one system and will be interpreted by another system, that’s when a programmer needs to be cognizant of matters such as integer endianness or text encoding.
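
    To make the parallel concrete (my sketch, not from the original post): the same integer leaves a machine as different byte streams depending on the declared byte order, just as the same code point leaves as different byte streams depending on the declared encoding.

    >>> import struct
    >>> struct.pack('<H', 0xf6), struct.pack('>H', 0xf6)     # little- vs. big-endian
    ('\xf6\x00', '\x00\xf6')
    >>> u'\xf6'.encode('utf-8'), u'\xf6'.encode('latin-1')   # UTF-8 vs. Latin-1
    ('\xc3\xb6', '\xf6')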

  • Anomalie #3991: CSS compression and base64 error

    29 August 2017, by tcharlss (*´_ゝ`)

    The offending line is here: https://zone.spip.org/trac/spip-zone/browser/_core_/plugins/compresseur/inc/compresseur_minifier.php#L100

    // zero is zero, whatever the unit (except for %, which breaks @keyframes, cf. https://core.spip.net/issues/3128)
    $contenu = preg_replace("/([^0-9.]0)(em|px|pt)/ms", "$1", $contenu);
    

    It looks for a zero preceded by any character other than a digit or a period.
    As a result, it can match inside data URIs:

    @font-face { font-family: 'spip'; src: url("data:application/font-woff;base64,abc0pxyz"); }
    

    To avoid this problem, we could specify exactly which characters may precede the zero for it to be treated as a unit. The possibilities are:

    1) a colon

    font-size:0px;
    

    2) one or more spaces

    font-size: 0px;
    font-size: calc(10px + 0px);
    

    3) an opening parenthesis, in the case of calc()

    font-size: calc(0px);
    

    4) Other units

    Note that there are also quite a few other units that the current regex does not take into account: https://www.w3schools.com/cssref/css_units.asp

    rem ex pc
    vh vw vmin vmax 
    cm mm in
    ch 
    

    Which gives, in the end, the following regex, one that leaves my data URIs alone (a quick side-by-side check follows the code):

    $contenu = preg_replace("/((?::|\s+|\()0)(em|px|pt|rem|ex|pc|vh|vw|vmin|vmax|cm|mm|in|ch)/ms", "$1", $contenu);
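
    As a quick side-by-side check of the old and new patterns, here is a small sketch (my addition; it uses Python's re module, where this particular pattern behaves the same as in preg_replace):

    import re

    # A data URI whose base64 payload happens to contain "0px", plus a real zero length.
    css = 'src:url("data:application/font-woff;base64,abc0pxyz"); font-size: 0px;'

    old = re.sub(r'([^0-9.]0)(em|px|pt)', r'\1', css)
    new = re.sub(r'((?::|\s+|\()0)(em|px|pt|rem|ex|pc|vh|vw|vmin|vmax|cm|mm|in|ch)', r'\1', css)

    print old   # the base64 payload is mangled: ...base64,abc0yz...
    print new   # the data URI is untouched, while "font-size: 0px" is still minified to "font-size: 0"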