
Media (1)
- Bee video in portrait orientation
  14 May 2011
  Updated: February 2012
  Language: French
  Type: video
Other articles (51)
- Publishing on MediaSPIP
  13 June 2013
  Can I post content from an iPad tablet?
  Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.
- HTML5 audio and video support
  10 April 2011
  MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
  For older browsers, the Flowplayer Flash player is used instead.
  The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
  These technologies make it possible to deliver video and sound both on conventional computers (...)
- From upload to final video [standalone version]
  31 January 2010
  The path of an audio or video document through SPIPMotion is divided into three distinct stages.
  Upload and retrieval of information about the source video
  First, a SPIP article must be created and the "source" video document attached to it.
  When this document is attached to the article, two actions beyond the normal behavior are executed: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (8567)
- Long Overdue MediaWiki Upgrade
5 February 2014, by Multimedia Mike — General

What do I do? What do I do?
This library book is 42 years overdue!
I admit that it's mine, yet I can't pay the fine,
Should I turn it in or should I hide it again?
What do I do? What do I do?

I internalized the foregoing paean to the perils of procrastination by Shel Silverstein in my formative years. It's probably why I've never paid a single cent in late fees in my entire life.
However, I have been woefully negligent as the steward of the MediaWiki software that drives the world-famous MultimediaWiki, the internet's central repository of obscure technical knowledge related to multimedia. It is currently running version 1.6 of the software. The latest version is 1.22.
The Story So Far
According to my records, I first set up the wiki late in 2005. I don't know which MediaWiki release I was using at the time. I probably conducted a few upgrades in the early days, but that fell by the wayside perhaps in 2007: my web host stopped allowing shell access, and the MediaWiki upgrade process pretty much requires running a PHP script from a command line. Upgrade time came around and I put off the project. Weeks turned into months turned into years until, according to some notes, the wiki abruptly stopped working in July 2011. Suddenly, there were PHP errors about "Namespace" being a reserved word.

While I finally laid out a plan to upgrade the wiki after all these years, I eventually found that the problem had been caused by my web host's upgrade from PHP 5.2 -> 5.3. I also learned of a small number of code changes that made the problem go away, thus kicking the can down the road once more.
Then a new problem showed up last week. I think it might be related to a new version of PHP again. This time, a few other things on my site broke, and I learned that my webhost now allows me to select a PHP version to use (with the version then set to “auto”, which didn’t yield much information). Rolling back to an earlier version of PHP might have solved the problem easily.
But NO! I made the determination that this goes no further. I want this wiki upgraded.
The Arduous Upgrade Path
There are two general upgrade paths I can think of:
- Upgrade in place on the server
- Upgrade offline and put the site back on the server
Approach #1 is problematic since I don’t have direct shell access, though I considered using something like PHP Shell. Approach #2 involves getting the entire set of wiki files and a backup of the MySQL tables. This is workable since I keep automated backups of these items anyway.
In fairly short order, I was able to set up a working copy of the MultimediaWiki hosted on a local Linux machine. Now what's the move? The MediaWiki software I'm running is 1.6.10. The very latest, as of this upgrade project, is 1.22.2. I suppose it's way too much to hope that the software will upgrade cleanly from 1.6.x straight to 1.22.x, but I guess it's worth a shot…
HA! No chance. Okay, next idea: march through the various versions and upgrade each in turn. MediaWiki keeps all of its historic releases online, all the way back to the 1.3 lineage. I reasoned that the latest release of each lineage should upgrade cleanly from anything in the previous lineage; e.g., 1.6.10 should upgrade cleanly to 1.7.3 (the last of the 1.7 series). This seemed like a workable strategy. So I downloaded the latest of each series, unpacked it, copied all the wiki files over the working installation, and ran 'php update.php' in the maintenance/ directory.
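In shell terms, one pass through that strategy might look roughly like the following; the version list continues through each series up to 1.22, and the URL pattern and paths are illustrative rather than a record of the exact commands:

```bash
WIKI=/path/to/wiki   # local working installation (path illustrative)

for ver in 1.7.3 1.8.5 1.9.6; do   # ...and so on, the last release of each series
  series=${ver%.*}
  wget "https://releases.wikimedia.org/mediawiki/$series/mediawiki-$ver.tar.gz"
  tar xzf "mediawiki-$ver.tar.gz"
  cp -R "mediawiki-$ver"/* "$WIKI"/            # overlay the new files
  (cd "$WIKI/maintenance" && php update.php)   # run the schema updater
done
```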
The process is tedious and not without its obstacles. I consider this penance for my years of wiki neglect. First, I ran into the "PHP Parse error: syntax error, unexpected T_NAMESPACE, expecting T_STRING" issue, the same one I saw years ago after the web host transitioned from PHP 5.2 -> 5.3. I could have solved this by editing assorted files and changing "Namespace" -> "MWNamespace" (which is what MediaWiki itself did by version 1.13). But I would prefer not to.
Instead, I downloaded the source for PHP 5.2 and compiled it in a separate directory, then called ‘/path/to/php/5.2/bin/php update.php’. Problem solved.
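Presumably something like this, though the exact configure flags depend on which extensions the old maintenance scripts need (paths illustrative):

```bash
# Build a private PHP 5.2 without touching the system PHP.
tar xzf php-5.2.17.tar.gz && cd php-5.2.17
./configure --prefix="$HOME/php-5.2" --with-mysql
make && make install

# Then run the updater with the old interpreter:
cd /path/to/wiki/maintenance
"$HOME/php-5.2/bin/php" update.php
```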
The next problem was that a bunch of the database update scripts specify "Type=InnoDB". This isn't supported by modern MySQL databases; now it's "Engine=InnoDB". A quick search & replace at the command line fixed this for 1.6.x… and 1.7.x… and 1.8 through 1.12. Finally, at 1.13, it was no longer necessary. As a bonus, at 1.13, I was able to test the installation, since Namespace had been renamed to MWNamespace. I would later learn that the table type modifications probably could have been avoided by changing "$wgDBmysql4 = true;" to "$wgDBmysql5 = true;" somewhere in LocalSettings.php.
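The search & replace itself is a one-liner along these lines, assuming the offending statements live in the updater's .sql files (the actual casing in the files may differ):

```bash
grep -rl 'Type=InnoDB' maintenance/ | xargs sed -i 's/Type=InnoDB/Engine=InnoDB/g'
```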
Command line upgrading worked smoothly up through the 1.18 series, when I hit a new fatal error:

PHP Fatal error: Call to a member function addMessages() on a non-object in /mnt/sdb1/archive/wiki/extensions/Cite.php on line 68

The best I could do was comment out that line. I hope that doesn't break anything important.
In the home stretch, the very last transition (1.21 -> 1.22) failed:

PHP Fatal error: Cannot redeclare wfProfileIn() (previously declared in /mnt/sdb1/archive/wiki/includes/profiler/Profiler.php:33) in /mnt/sdb1/archive/wiki/includes/ProfilerStub.php on line 25
Apparently, this problem has arisen occasionally since 1.18. I found a way around it thanks to this page: I deleted the file StartProfiler.php. Who am I to argue?
Upon completing the transition to 1.22, the wiki didn't look correct: the pictures weren't showing up. The solution was to fix the temporary directory via LocalSettings.php.
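My guess is the fix amounts to pointing MediaWiki's $wgTmpDirectory setting at a writable location, something like this (the path is illustrative):

```bash
# Append a temp-directory setting to the wiki config.
echo "\$wgTmpDirectory = '/path/to/wiki/images/tmp';" >> LocalSettings.php
```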
Back To Production
Okay, it all works again! Locally, that is. How to get it back to the server? My first idea was, knowing that this upgrade process could succeed, to step through the upgrade process again but tell the update.php scripts to access the database tables on multimedia.cx. This seemed to be working for a while, even though the database update phase often took 4-5 minutes. However, the transition from 1.8.5 -> 1.9.6 took 75 minutes and then timed out. According to my notes, "This isn't going to work."

The new process (sketched as shell commands after the list):
- Dump the database tables from the local database.
- Create a new database remotely (melanson_wiki_ng).
- Import the dump into melanson_wiki_ng.
- Move the index.php file out of the wiki files directory temporarily (or rename).
- Modify the LocalSettings.php to talk to the new database.
- Perform an lftp mirror operation to send all the files up to the server.
- Send the index.php file and hope beyond hope that everything magically works.
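As shell commands, that process sketches out roughly as follows; host names, user names, and paths are placeholders (only melanson_wiki_ng comes from my notes):

```bash
# 1. Dump the tables from the local database.
mysqldump -u wikiuser -p wikidb > wiki_dump.sql

# 2-3. After creating melanson_wiki_ng remotely, load the dump into it;
#      --compress squeezes the 110+ MB dump over the wire.
mysql --compress -h mysql.example.com -u remoteuser -p melanson_wiki_ng < wiki_dump.sql

# 4. Park index.php so nobody hits a half-uploaded wiki.
mv wiki/index.php wiki/index.php.hold

# 5. Point LocalSettings.php at the new database: $wgDBname = 'melanson_wiki_ng';

# 6. Mirror the files up to the server.
lftp -e 'mirror -R wiki/ public_html/wiki/; quit' ftp.example.com

# 7. Put index.php back in place and send it up last.
```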
And that's the story of how the updated MultimediaWiki came back online. Despite the database dump file being over 110 MB, it took MySQL only 1m45s to transmit it all to the remote server (let's hear it for the '--compress' option). For comparison, inserting the tables back into a fresh local database took 1m07s.
When the MultimediaWiki was first live again, it loaded, but ever so slowly. This is when I finally looked into optimization and found that I was lacking any caching. So as a bonus, the MultimediaWiki should be much faster now.
Going Forward
For all I know, I did everything described here in the hardest way possible. But at least I got it done. Unless I learn of a better process, future upgrades will probably look similar to this.

Additionally, I should probably take some time to figure out what new features are part of the standard MediaWiki distribution nowadays.
- Adding A New System To The Game Music Website
1 August 2012, by Multimedia Mike — General

At first, I was planning to just make a little website where users could install a Chrome browser extension and play music from old 8-bit NES games. But, like many software projects, the goal sort of ballooned. I created a website where users can easily play old video game music. It doesn't cover too many systems yet, but I have had individual requests to add just about every system you can think of.
The craziest part is that I know it’s possible to represent most of the systems. Eventually, it would be great to reach Chipamp parity (a combination plugin for Winamp that packages together plugins for many of these chiptunes). But there is a process to all of this. I have taken to defining a number of phases that are required to get a new system covered.
Phase 0 informally involves marveling at the obscurity of some of the console systems for which chiptune collections have evolved. WonderSwan? Sharp X68000? PC-88? I may be viewing this through a terribly Ameri-centric lens. I've at least heard of the ZX Spectrum and the Amstrad CPC even if I've never seen either.
No matter. The goal is to get all their chiptunes cataloged and playable.
Phase 1: Finding A Player
The first step is to find a bit of open source code that can play a particular format. If it's a library that can handle many formats, like Game Music Emu or the Audio Overload SDK, even better (probably). The specific open source license isn't a big concern for me. I'm almost certain that some of the libraries SaltyGME currently mixes are somehow incompatible, license-wise. I'll worry about it when I encounter someone who A) cares, and B) is in a position to do something about it. Historical preservation comes first, and these software libraries aren't getting any younger (I'm finding some that haven't been touched in a decade).

Phase 2: Test Program
The next phase is to create a basic test bench program that feeds a music file into the library, generates a buffer of audio, and shoves it out to the speakers via PulseAudio's simple API (people like to rip on PulseAudio, but its simple API really lives up to its name and requires pages less boilerplate code to play a few samples than ALSA does).
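As an aside, a test bench that simply writes its PCM to stdout can be auditioned with no PulseAudio code at all by piping into pacat; this is a swapped-in shortcut rather than how the real test benches work, and 'testbench' is a hypothetical program:

```bash
# Play raw 16-bit little-endian stereo PCM from a hypothetical test bench.
./testbench song.nsf | pacat --format=s16le --rate=44100 --channels=2
```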
Phase 3: Plug Into Web Player

After successfully creating the test bench and understanding exactly which source files need to be built, the next phase is to hook it up to the main SaltyGME program via the ad-hoc plugin API I developed. This API requires that a player backend can, at the very least, initialize itself from a buffer of bytes and generate audio samples into an array of 16-bit numbers. The API also provides functions for managing files with multiple tracks and for toggling individual voices/channels if the library supports such a feature. Having the test bench application written beforehand usually smooths out this step.

But really, I'm just getting started.
Phase 4: Collecting A Song Corpus
Then there is the matter of staging a collection of songs for a given system. It seems like it would just be a matter of finding a large collection of songs for a given format, downloading them in bulk, and mirroring them. Honestly, that’s the easy part. People who are interested in this stuff have been lovingly curating massive collections of these songs for years (see SNESmusic.org for one of the best examples, and they also host a torrent of all their music for really quick and easy hoarding).
In my drive to make this game music website more useful for normal people, the goal is to extract as much metadata as possible to make searching better, and to package the data so that it's as convenient as possible for users. Whenever I seek to add a new format to the collection, this is the phase where I invariably find that I have to fundamentally modify some of the assumptions I originally made in the player.

First, there were the NES Sound Format (NSF) files, the original format I wanted to play. These are files that have any number of songs packed into a single file. Playback libraries expose APIs to jump to individual tracks, so the player was designed around that. Game Boy GBS files also fall into this category but present a different challenge vis-à-vis metadata, addressed in the next phase.
Then, there were the SPC files. Each SPC file is its own song and multiple SPC files are commonly bundled as RAR files. Not wanting to deal with RAR, or any format where I interacted with a general compression API to pull a few files out, I created a custom resource format (inspired by so many I have studied and documented) and compressed it with a simpler compression API. I also had to modify some of the player’s assumptions to deal with this archive format. Genesis VGMs, bundled either in .zip or .7z, followed the same model as SPC in RAR.
Then it was suggested that I attempt to bring SaltyGME closer to feature parity with Chipamp, rather than just being a Chrome browser frontend for Game Music Emu. When I studied the Portable Sound Format (PSF), I realized it didn’t fit into the player model I already had. PSF uses a sort of shared library model for code execution and I developed another resource archive format to cope with it. So that covers quite a few formats.
One more architecture challenge arose when I started to study one of the prevailing metadata formats, explained in the next phase.
Phase 5: Metadata
Finally, for the collections to really be useful, I need to harvest that juicy metadata for search and presentation.

I have created a series of programs and scripts to scrape metadata out of these music files and store it all in a database that drives the website and search engine. I recognize that it's no good to have a large corpus of songs with minimal metadata, so while importing bulk quantities of music, the scripts harshly reject songs that have too little metadata.
Again, challenges abound. One of the biggest challenges I'm facing is the peculiar quasi-freeform metadata format that has grown up around .m3u, which takes a form similar to:
```
#################################################################
#
# GRADIUS2
# (c) KONAMI  by Furukawa Motoaki, IKACHAN
#
#################################################################
nemesis2.kss::KSS,62,[Nemesis2] (Opening),2:23,,0
nemesis2.kss::KSS,61,[Nemesis2] (Start),7,,0
nemesis2.kss::KSS,43,[Nemesis2] (Air Battle),34,0-
nemesis2.kss::KSS,44,[Nemesis2] (1st. BGM),51,0-
[...]
```
A lot of file formats (including the Game Boy GBS format mentioned earlier) store their metadata separately using this format. I have some ideas about tools I can use to help me process this data, but I'm pretty sure each one will require some manual intervention.
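A first pass at such a tool might be as simple as an awk one-liner that splits out the fields, with the field meanings inferred from the sample above; given how quasi-freeform these files are, this would only be a starting point:

```bash
# Extract file, format, track number, title, and length, skipping the
# decorative comment header. Titles containing commas would need the
# manual intervention mentioned above.
grep -v '^#' nemesis2.m3u | awk -F',' '{
  split($1, a, "::")
  printf "file=%s fmt=%s track=%s title=%s length=%s\n", a[1], a[2], $2, $3, $4
}'
```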
As alluded to in phase 4, .m3u presents another architectural challenge: notice the second field in the CSV-like .m3u data. That's a track number. A player can't expect every track in a bundled chiptune file to be valid, nor to be in any particular order. Thus, I needed to alter the architecture once more to take this into account. However, instead of modifying the SaltyGME player, I simply extended the metadata database to include a playback order which, by default, is the same as the track order but can also accommodate this new wrinkle. This also has the bonus of providing a facility to exclude certain tracks from playback, which comes in handy for the many PSF archives that include files that only provide support for other files and aren't meant to be played on their own.
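In database terms, the extension amounts to something like the following sketch; the table and column names are invented for illustration:

```bash
mysql game_music_db <<'SQL'
-- playback_order defaults to the file's own track order; NULL marks
-- tracks excluded from playback (e.g. PSF support files).
ALTER TABLE tracks ADD COLUMN playback_order INT NULL;
UPDATE tracks SET playback_order = track_number;
UPDATE tracks SET playback_order = NULL WHERE is_support_file = 1;
SQL
```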
Bright Side
The reward for all of this effort is that the data lands in a proper database in the end. None of it goes back into the chiptune files themselves. This makes further modification easier, as all of the data that is indexed and presented on the site comes from the database. Somewhere down the road, I should probably create an API for accessing this metadata.
- Call for Speakers: Share Your Voice at MatomoCamp 2025!
10 July, by Alex Carmona

MatomoCamp is back for its 2025 edition — and the Call for Speakers is now open until July 31st!
As proud sponsors of this unique, community-driven event, we're excited to invite experts, enthusiasts, and curious minds to contribute to MatomoCamp 2025, the annual online gathering dedicated to web analytics, open source, digital privacy, and of course — Matomo.

MatomoCamp is the premier free online conference for the Matomo Analytics community in Europe, sponsored by Matomo Analytics. Taking place online on Wednesday 26 November 2025 from 9:30 AM to 5:30 PM CET, the event brings together professionals passionate about ethical analytics, data privacy, and building a better web.
Whether you’re a long-time user, developer, marketer, researcher, or just someone with a fresh perspective on ethical analytics, your voice belongs at MatomoCamp. Last year, we welcomed over 1,000 attendees from around the world. This year, we’re aiming even higher with an expanded programme designed to serve everyone from analytics beginners to enterprise architects.
What is MatomoCamp?
MatomoCamp is a free, fully online event bringing together the global Matomo community. Across a full day of talks, panels, demos, and workshops, participants explore:
- Web analytics & measurement
- Digital marketing & SEO
- Open source projects & collaboration
- Privacy-first data strategies
- Case studies, experiments, and more
With sessions in English, French, German, and beyond, MatomoCamp aims to make digital analytics more accessible, ethical, and transparent for everyone.
MatomoCamp returns this November 2025 with an exciting new vision. For the first time, we’re expanding beyond purely technical content to welcome speakers from all backgrounds who can contribute insights on analytics, privacy, marketing, and the ethical web. Whether you’re a developer, marketer, analyst, or privacy advocate, we want to hear from you.
Who can apply?
Anyone! We're looking for people from all backgrounds who want to share:
- A practical tip or use case with Matomo
- Insights into digital analytics, privacy, or open source
- A success (or failure!) story from your own journey
- A workshop or hands-on demonstration
- A bold opinion on the future of web data
- A behind-the-scenes look at your work or research
You don’t have to be a professional speaker. We welcome first-time speakers, underrepresented voices, and community members who want to share something valuable — no matter how niche or broad.
What makes a great talk?
There's no one right formula. But here's what works well:
- Real-world experience
- Specific, actionable insights
- A clear structure (15–45 minutes)
- Something you care deeply about
- A story only you can tell
And remember — this isn’t just about Matomo. Topics that touch on ethical analytics, open source values, or digital sovereignty are more than welcome.
Key info
- Where: Online (free & open to all)
- When: Wednesday 26 November 2025, 9:30 AM to 5:30 PM CET
- Languages: English, French, German (other languages welcome!)
- Deadline to apply: July 31, 2025
- Submit your talk here
Why does this matter?
At Matomo, we believe that data should empower, not exploit. MatomoCamp is more than just an event — it’s a celebration of what’s possible when communities come together to build a better digital future.
As sponsors, we’re proud to support this independent, open source event. But more importantly, we want to amplify your voice — because every perspective shared brings us closer to a more ethical, transparent, and inclusive analytics ecosystem.
Have a story to share?
Don't overthink it. If it matters to you, it will matter to someone else. Apply to speak before July 31st and join us at MatomoCamp 2025!
Let’s build the future of analytics — together.