
Adding C64 SID Music
1 November 2012, by Multimedia Mike — General
I have been working for a while on adding support for SID files — the music format for the Commodore 64 — to the game music website. I feel a bit out of my element since I'm not that familiar with the C64. But why should I let that slow me down? Allow me to go through the steps I have previously outlined in order to make this happen.
I need to know what picture should represent the system in the search results page. The foregoing picture should be fine, but I’m getting way ahead of myself.
Phase 1 is finding adequate player software. The most venerable contender in this arena is libsidplay, or so I first thought. It turns out that there's libsidplay (originally hosted at Geocities, apparently, and no longer on the net) and also libsidplay2. Both are kind of old (libsidplay2 was last updated in 2004). I tried to compile libsidplay2 and the C++ didn't agree with the current version of g++.
However, a recent effort named libsidplayfp is carrying on the SID emulation tradition. It works rather well, notwithstanding the fact that compiling the entire library has a habit of apparently hanging the Linux VM where I develop this stuff.
Phase 2 is to develop a testbench app around the playback library. With the help of the libsidplayfp library maintainers, I accomplished this. The testbench app consistently requires about 15% of a single core of a fairly powerful Core i7. So I look forward to recommendations that I port that playback library to pure JavaScript.
Phase 3 is to plug into the web player. I haven't worked on this yet. I'm confident that this will work since phase 2 worked (plus, I have a plan to combine phases 2 and 3).
One interesting issue that has arisen is that proper operation of libsidplayfp requires that 3 C64 ROM files be present (the, ahem, KERNAL, BASIC interpreter, and character generator). While these are copyrighted ROMs, they are easily obtainable on the internet. The goal of my project is to eliminate as much friction as possible for enjoying these old tunes. To that end, I will just bake the ROM files directly into the player.
Phase 4 is collecting a SID song corpus. This is the simplest part of the whole process thanks to the remarkable curation efforts of the High Voltage SID Collection (HVSC). Anyone can download a giant archive of every known SID file. So that’s a done deal.
Or is it? One small issue is that I was hoping that the first iteration of my game music website would focus on, well, game music. A lot of the music in the HVSC consists of original compositions or comes from demos. The way the archive is organized makes it difficult to automatically discern whether a particular SID file comes from a game or not.
Phase 5 is munging the metadata. The good news here is that the files have the metadata built in. The not-so-great news is that there isn’t quite as much as I might like. Each file is tagged with title, author, and publisher/copyright. If there is more than one song in a file, they all have the same metadata. Fortunately, if I can import them all into my game music database, there is an opportunity to add a lot more metadata.
Further, there is no play length metadata for these files. This means I will need to set each to a default length like 2 minutes and do something like I did before in order to automatically determine if any songs terminate sooner.
Oddly, the issue I’m most concerned about is character encoding. This is the first project for which I’m making certain that I understand character encoding since I can’t reasonably get away with assuming that everything is ASCII. So far, based on the random sampling of SID files I have checked, there is a good chance of encountering metadata strings with characters that are not in the lower ASCII set. From what I have observed, these characters map to Unicode code points. So I finally get to learn about manipulating strings in such a way that it preserves the character encoding. At the very least, I need Python to rip the strings out of the binary SID files and make sure the Unicode remains intact while being inserted into an SQLite3 database.
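To make that last step concrete, here is a rough sketch of the kind of extraction I have in mind. It assumes the standard PSID/RSID header layout (32-byte name, author, and released fields at offsets 0x16, 0x36 and 0x56), assumes the strings are Latin-1 encoded, and uses a made-up database schema and file name; none of this is final code.

# Sketch: rip the metadata strings out of a SID file and insert them
# into SQLite3 as Unicode. The 32-byte name/author/released fields sit
# at offsets 0x16/0x36/0x56 in the PSID header; decoding them as
# Latin-1 is an assumption on my part.
import sqlite3

def read_sid_metadata(path):
    header = open(path, 'rb').read(0x76)
    fields = []
    for offset in (0x16, 0x36, 0x56):            # name, author, released
        raw = header[offset:offset + 32].rstrip('\x00')
        fields.append(raw.decode('latin-1'))     # byte string -> Unicode
    return fields

conn = sqlite3.connect('sidmusic.db')            # hypothetical database
conn.execute('CREATE TABLE IF NOT EXISTS songs (name TEXT, author TEXT, released TEXT)')
conn.execute('INSERT INTO songs VALUES (?, ?, ?)',
             tuple(read_sid_metadata('example.sid')))  # hypothetical input file
conn.commit()
conn.close()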
Adventures in Unicode
Tangential to multimedia hacking is proper metadata handling. Recently, I have gathered an interest in processing a large corpus of multimedia files which are likely to contain metadata strings that do not fall into the lower ASCII set. This is significant because the lower ASCII set intersects perfectly with my own programming comfort zone. Indeed, all of my programming life, I have insisted on covering my ears and loudly asserting “LA LA LA LA LA! ALL TEXT EVERYWHERE IS ASCII!” I suspect I'm not alone in this.
Thus, I took this as an opportunity to conquer my longstanding fear of Unicode. I developed a self-learning course comprised of a series of exercises which add up to the following pipeline: pull a Unicode string out of a binary file with Python, store it in an SQLite3 database, and display it on a web page via PHP.
Part 1: Understanding Text Encoding
Python has regular strings by default and then it has Unicode strings. The latter are prefixed by the letter 'u'. This is what 'ö' looks like encoded in each type:

>>> 'ö', u'ö'
('\xc3\xb6', u'\xf6')
A large part of my frustration with Unicode comes from Python yelling at me about UnicodeDecodeErrors and an inability to handle the number 0xc3 for some reason. This usually comes when I’m trying to wrap my head around an unrelated problem and don’t care to get sidetracked by text encoding issues. However, when I studied the above output, I finally understood where the 0xc3 comes from. I just didn’t understand what the encoding represents exactly.
I can see from assorted tables that 'ö' is character 0xF6 in various encodings (in Unicode and Latin-1), so u'\xf6' makes sense. But what does '\xc3\xb6' mean? It's my style to excavate straight down to the lowest levels, and I wanted to understand exactly how characters are represented in memory. The UTF-8 encoding tables inform us that any Unicode code point above 0x7F but less than 0x800 will be encoded with 2 bytes:
110xxxxx 10xxxxxx
Applying this pattern to the \xc3\xb6 encoding:

hex:            0xc3     0xb6
bits:           11000011 10110110
important bits: ---00011 --110110
assembled:      00011110110
code point:     0xf6
I was elated when I drew that out and made the connection. Maybe I’m the last programmer to figure this stuff out. But I’m still happy that I actually understand those Python errors pertaining to the number 0xc3 and that I won’t have to apply canned solutions without understanding the core problem.
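As a quick sanity check of my own, the same masking and reassembly can be done at the REPL: keep the low 5 bits of the first byte and the low 6 bits of the second, then splice them together.

>>> first, second = 0xc3, 0xb6
>>> hex(((first & 0x1f) << 6) | (second & 0x3f))
'0xf6'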
I'm cheating on this part of the exercise just a little bit, since the outlined pipeline implies that the Unicode text needs to come from a binary file. I'll return to that in a bit. For now, I'll just contrive the following Unicode string from the Python REPL:
>>> u = u'Üñìçôđé'
>>> u
u'\xdc\xf1\xec\xe7\xf4\u0111\xe9'
Part 2: From Python To SQLite3
The next step is to see what happens when I use Python's SQLite3 module to dump the string into a new database. Will the Unicode encoding be preserved on disk? What will UTF-8 look like on disk anyway?

>>> import sqlite3
>>> conn = sqlite3.connect('unicode.db')
>>> conn.execute("CREATE TABLE t (t text)")
>>> conn.execute("INSERT INTO t VALUES (?)", (u, ))
>>> conn.commit()
>>> conn.close()
Next, I manually view the resulting database file (unicode.db) using a hex editor and look for strings. Here we go:

000007F0 02 29 C3 9C C3 B1 C3 AC C3 A7 C3 B4 C4 91 C3 A9

Look at that! It's just like the \xc3\xb6 encoding we saw in the regular Python string: the text is sitting on disk as UTF-8.
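Just to close the loop on this part (a quick check of my own, not in the original write-up): reading the value back through the sqlite3 module hands it over as a Unicode string identical to what went in.

>>> conn = sqlite3.connect('unicode.db')
>>> conn.execute("SELECT t FROM t").fetchone()[0]
u'\xdc\xf1\xec\xe7\xf4\u0111\xe9'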
Part 3: From SQLite3 To A Web Page Via PHP
Finally, use PHP (love it or hate it, but it's what's most convenient on my hosting provider) to query the string from the database and display it on a web page, completing the outlined processing pipeline:

<?php
$dbh = new PDO("sqlite:unicode.db");
foreach ($dbh->query("SELECT t from t") as $row);
$unicode_string = $row['t'];
?>

<html>
<head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head>
<body><h1><?=$unicode_string?></h1></body>
</html>
I tested the foregoing PHP script on 3 separate browsers that I had handy (Firefox, Internet Explorer, and Chrome), and the string rendered correctly in each of them.
I'd say that counts as success! It's important to note that the "meta http-equiv" tag is absolutely necessary. Omit it and the browser falls back to a legacy 8-bit encoding, rendering each raw UTF-8 byte as its own character.
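That effect is easy to reproduce from the Python REPL (my own illustration, using the simpler 'ö' from Part 1 so that every byte lands on a printable Latin-1 character; it assumes a UTF-8 terminal):

>>> print u'ö'.encode('utf-8').decode('latin-1')
Ã¶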
Since we know what the UTF-8 stream looks like, it's pretty obvious how the mapping is operating here: 0xc3 and 0xc4 correspond to 'Ã' and 'Ä', respectively. This corresponds to an encoding named ISO/IEC 8859-1, a.k.a. Latin-1. Speaking of which…
Part 4: Converting Binary Data To Unicode
At the start of the experiment, I was trying to extract metadata strings from these binary multimedia files and I noticed characters like our friend 'ö' from above. In the bytestream, this was represented simply with 0xf6. I mistakenly believed that this was the on-disk representation of UTF-8. Wrong. Turns out it's Latin-1. However, I still need to solve the problem of transforming such strings into Unicode so they can be shoved through the pipeline outlined above. For this experiment, I created a 9-byte file with the Latin-1 string 'Üñìçôdé' couched by 0 bytes, to simulate yanking a string out of a binary file. Here's unicode.file:
00000000 00 DC F1 EC E7 F4 64 E9 00 ......d..
(Aside: this experiment uses a plain 'd' since the 'đ' with a bar through it doesn't occur in Latin-1; it shows up all over the place in Vietnamese, at least.)
I've been mashing around Python code via the REPL, trying to get this string into a Unicode-friendly format. This is a successful method, but it's probably not the best:

>>> import struct
>>> f = open('unicode.file', 'r').read()
>>> u = u''
>>> for c in struct.unpack("B"*7, f[1:8]):
...     u += unichr(c)
...
>>> u
u'\xdc\xf1\xec\xe7\xf4d\xe9'
>>> print u
Üñìçôdé
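A more direct route to the same result (an aside of mine, not part of the original exercise): Latin-1 bytes map one-to-one onto the first 256 Unicode code points, so a straight decode of the sliced bytes does the same job as the unichr() loop.

>>> f = open('unicode.file', 'r').read()
>>> f[1:8].decode('latin-1')
u'\xdc\xf1\xec\xe7\xf4d\xe9'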
Conclusion
Dealing with text encoding matters reminds me of dealing with integer endianness concerns. When you're just dealing with one system, you probably don't need to think too much about it because the system usually handles everything consistently under the covers. However, when the data leaves one system and will be interpreted by another system, that's when a programmer needs to be cognizant of matters such as integer endianness or text encoding.