
Other articles (48)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources in the standalone version.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...) -
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
- the browser you are using, including the exact version
- as precise an explanation as possible of the problem
- if possible, the steps taken resulting in the problem
- a link to the site / page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
Libraries and software specific to media
10 December 2010
For correct and optimal operation, several things need to be taken into consideration.
It is important, after installing apache2, mysql and php5, to install the other required software, whose installation is described in the related links. A set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio, in order to support as many file types as possible. See this tutorial; FFMpeg with the maximum number of decoders and (...)
On other sites (8999)
-
Understanding the VP8 Token Tree
7 June 2010, by Multimedia Mike — VP8
I got tripped up on another part of the VP8 decoding process today. So I drew a picture to help myself understand it. Then I went back and read David Conrad's comment on my last post regarding my difficulty understanding the VP8 spec and saw that he ran into the same problem. Since we both experienced the same hindrance in trying to sort out this matter, I thought I may as well publish the picture I drew.
VP8 defines various trees for decoding different syntax elements. There is one tree for decoding the tokens and it is expressed in the VP8 spec as follows:
C:

const tree_index coef_tree [2 * (num_dct_tokens - 1)] =
{
    -dct_eob, 2,                /* eob = "0" */
     -DCT_0, 4,                 /* 0 = "10" */
      -DCT_1, 6,                /* 1 = "110" */
       8, 12,
        -DCT_2, 10,             /* 2 = "11100" */
         -DCT_3, -DCT_4,        /* 3 = "111010", 4 = "111011" */
        14, 16,
         -dct_cat1, -dct_cat2,  /* cat1 = "111100", cat2 = "111101" */
        18, 20,
         -dct_cat3, -dct_cat4,  /* cat3 = "1111100", cat4 = "1111101" */
         -dct_cat5, -dct_cat6   /* cat5 = "1111110", cat6 = "1111111" */
};
Here is what the table looks like when you make a tree out of it:
The catch is that it makes no sense for an end-of-block (EOB) token to follow a 0 token since EOB already indicates that the remainder of the coefficients should be 0 anyway. Thus, the spec states that, "decoding of certain DCT coefficients may skip the first branch, whose preceding coefficient is a DCT_0." I confess, I didn’t understand what "skip the first branch" meant until I drew the tree.
For those wondering why it might be sub-optimal (clarity-wise) for a spec to simply regurgitate vast chunks of C code, this makes a decent case. As you can see, the spec makes certain assumptions about how a binary tree should be organized in a static array (node n points to elements n*2 and n*2+1 as its branches; leaves are either negative or 0). This is the second method I have seen; another piece of code (not the VP8 spec) had the nodes in the first half of the array and pointed to leaves in the second half. There must be other arrangements.
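To make that flat-array layout concrete, here is a rough Python sketch of the traversal; it is my own illustration rather than code from the spec, and read_bool stands in for the boolean decoder, returning the next decoded bit:

# My illustration, not code from the VP8 spec: walking a flat-array token tree.
# tree is a list laid out like coef_tree above; read_bool() yields the next bit (0 or 1).
def decode_token(tree, read_bool):
    i = 0
    while True:
        i = tree[i + read_bool()]   # the bit selects the left or right entry of the pair
        if i <= 0:                  # leaves are stored as zero or negative values
            return -i               # negate to recover the token id

Walking the bits 1, 1, 0 against coef_tree moves i from 0 to 2 to 4 and then hits the leaf -DCT_1, matching the "110" pattern in the comments above.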
-
ffmpeg set auto height
19 July 2013, by conmen
I've got the following command, which I use to generate a video thumbnail:
escapeshellcmd("/usr/local/bin/ffmpeg -ss " . ceil($time) . " -i '" . $videoPath . "' -f image2 -vframes 1 -s 150x110 " . $tFilePath)
I want to know: is it possible to generate the image with an automatic height instead of a fixed width x height?
Thanks.
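One commonly cited approach (not taken from this thread, so treat it as a sketch): ffmpeg's scale video filter accepts -1 for one dimension to preserve the aspect ratio, so replacing the fixed -s 150x110 with a scale filter along these lines would produce a 150-pixel-wide thumbnail whose height is computed automatically (the seek time 10, input.mp4 and thumb.jpg are placeholders):

# sketch, not from the original thread; 10, input.mp4 and thumb.jpg are placeholders
/usr/local/bin/ffmpeg -ss 10 -i input.mp4 -f image2 -vframes 1 -vf scale=150:-1 thumb.jpg

Some encoders insist on even dimensions, in which case scale=150:-2 is the usual variant.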
-
Adventures in Unicode
Tangential to multimedia hacking is proper metadata handling. Recently, I have gathered an interest in processing a large corpus of multimedia files which are likely to contain metadata strings which do not fall into the lower ASCII set. This is significant because the lower ASCII set intersects perfectly with my own programming comfort zone. Indeed, all of my programming life, I have insisted on covering my ears and loudly asserting “LA LA LA LA LA! ALL TEXT EVERYWHERE IS ASCII!” I suspect I'm not alone in this.
Thus, I took this as an opportunity to conquer my longstanding fear of Unicode. I developed a self-learning course comprised of a series of exercises which add up to this diagram:
Part 1: Understanding Text Encoding
Python has regular strings by default and then it has Unicode strings. The latter are prefixed by the letter ‘u’. This is what ‘ö’ looks like encoded in each type:

>>> 'ö', u'ö'
('\xc3\xb6', u'\xf6')
A large part of my frustration with Unicode comes from Python yelling at me about UnicodeDecodeErrors and an inability to handle the number 0xc3 for some reason. This usually comes when I’m trying to wrap my head around an unrelated problem and don’t care to get sidetracked by text encoding issues. However, when I studied the above output, I finally understood where the 0xc3 comes from. I just didn’t understand what the encoding represents exactly.
I can see from assorted tables that ‘ö’ is character 0xF6 in various encodings (in Unicode and Latin-1), so u'\xf6' makes sense. But what does '\xc3\xb6' mean? It's my style to excavate straight down to the lowest levels, and I wanted to understand exactly how characters are represented in memory. The UTF-8 encoding tables inform us that any Unicode code point above 0x7F but less than 0x800 will be encoded with 2 bytes:
110xxxxx 10xxxxxx
Applying this pattern to the \xc3\xb6 encoding:

hex:            0xc3      0xb6
bits:           11000011  10110110
important bits: ---00011  --110110
assembled:      00011110110
code point:     0xf6
I was elated when I drew that out and made the connection. Maybe I’m the last programmer to figure this stuff out. But I’m still happy that I actually understand those Python errors pertaining to the number 0xc3 and that I won’t have to apply canned solutions without understanding the core problem.
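As a quick sanity check of that bit arithmetic (my addition, assuming the same Python 2 REPL used throughout the post), the round trip between the two representations and the mask-and-shift both land on 0xf6:

>>> # my addition, not from the original post
>>> '\xc3\xb6'.decode('utf-8')
u'\xf6'
>>> u'\xf6'.encode('utf-8')
'\xc3\xb6'
>>> hex(((0xc3 & 0x1f) << 6) | (0xb6 & 0x3f))
'0xf6'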
I'm cheating on this part of this exercise just a little bit since the diagram implied that the Unicode text needs to come from a binary file. I'll return to that in a bit. For now, I'll just contrive the following Unicode string from the Python REPL:

>>> u = u'Üñìçôđé'
>>> u
u'\xdc\xf1\xec\xe7\xf4\u0111\xe9'
Part 2: From Python To SQLite3
The next step is to see what happens when I use Python's SQLite3 module to dump the string into a new database. Will the Unicode encoding be preserved on disk? What will UTF-8 look like on disk anyway?

>>> import sqlite3
>>> conn = sqlite3.connect('unicode.db')
>>> conn.execute("CREATE TABLE t (t text)")
>>> conn.execute("INSERT INTO t VALUES (?)", (u, ))
>>> conn.commit()
>>> conn.close()
Next, I manually view the resulting database file (unicode.db) using a hex editor and look for strings. Here we go:
000007F0 02 29 C3 9C C3 B1 C3 AC C3 A7 C3 B4 C4 91 C3 A9
Look at that! It's just like the \xc3\xb6 encoding we see in the regular Python strings.
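As a cross-check that I am adding here (it is not in the original post), encoding the string as UTF-8 in the same Python 2 session produces exactly those bytes, and querying the row back returns the original Unicode string:

>>> # my addition: verify the on-disk bytes and the round trip
>>> u.encode('utf-8')
'\xc3\x9c\xc3\xb1\xc3\xac\xc3\xa7\xc3\xb4\xc4\x91\xc3\xa9'
>>> sqlite3.connect('unicode.db').execute("SELECT t FROM t").fetchone()
(u'\xdc\xf1\xec\xe7\xf4\u0111\xe9',)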
Part 3: From SQLite3 To A Web Page Via PHP
Finally, use PHP (love it or hate it, but it's what's most convenient on my hosting provider) to query the string from the database and display it on a web page, completing the outlined processing pipeline.

<?php
$dbh = new PDO("sqlite:unicode.db");
// grab the single row; the empty-bodied foreach leaves $row set after the loop
foreach ($dbh->query("SELECT t from t") as $row);
$unicode_string = $row['t'];
?>

<html>
<head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head>
<body><h1><?= $unicode_string ?></h1></body>
</html>
I tested the foregoing PHP script on 3 separate browsers that I had handy (Firefox, Internet Explorer, and Chrome):
I'd say that counts as success! It's important to note that the “meta http-equiv” tag is absolutely necessary. Omit it and see something like this:
Since we know what the UTF-8 stream looks like, it's pretty obvious how the mapping is operating here: 0xc3 and 0xc4 correspond to ‘Ã’ and ‘Ä’, respectively. This corresponds to an encoding named ISO/IEC 8859-1, a.k.a. Latin-1. Speaking of which…
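To illustrate that mapping with a snippet of my own (not from the original post), decoding the UTF-8 bytes as Latin-1 in the same Python 2 REPL reproduces exactly this kind of mojibake for our friend ‘ö’:

>>> # my addition: a Latin-1 view of the UTF-8 bytes
>>> u'ö'.encode('utf-8').decode('latin-1')
u'\xc3\xb6'
>>> print u'ö'.encode('utf-8').decode('latin-1')
Ã¶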
Part 4: Converting Binary Data To Unicode
At the start of the experiment, I was trying to extract metadata strings from these binary multimedia files and I noticed characters like our friend ‘ö’ from above. In the bytestream, this was represented simply with 0xf6. I mistakenly believed that this was the on-disk representation of UTF-8. Wrong. Turns out it's Latin-1.
However, I still need to solve the problem of transforming such strings into Unicode to be shoved through the pipeline diagrammed above. For this experiment, I created a 9-byte file with the Latin-1 string ‘Üñìçôdé’ couched by 0s, to simulate yanking a string out of a binary file. Here's unicode.file:
00000000 00 DC F1 EC E7 F4 64 E9 00 ......d..
(Aside: this experiment uses plain ‘d’ since the ‘đ’ with a bar through it doesn't occur in Latin-1; it shows up all over the place in Vietnamese, at least.)
I've been mashing around Python code via the REPL, trying to get this string into a Unicode-friendly format. This is a successful method but it's probably not the best:

>>> import struct
>>> f = open('unicode.file', 'r').read()
>>> u = u''
>>> for c in struct.unpack("B"*7, f[1:8]):
...     u += unichr(c)
...
>>> u
u'\xdc\xf1\xec\xe7\xf4d\xe9'
>>> print u
Üñìçôdé
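For what it's worth, a shorter route (my addition, and it assumes the bytes really are Latin-1) is to let the codec machinery do the mapping, since Latin-1 maps every byte 0x00–0xFF straight to the code point of the same value:

>>> # my addition: decode the slice directly instead of looping with unichr()
>>> f[1:8].decode('latin-1')
u'\xdc\xf1\xec\xe7\xf4d\xe9'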
Conclusion
Dealing with text encoding matters reminds me of dealing with integer endianness concerns. When you're just dealing with one system, you probably don't need to think too much about it because the system is usually handling everything consistently underneath the covers. However, when the data leaves one system and will be interpreted by another system, that's when a programmer needs to be cognizant of matters such as integer endianness or text encoding.