
Media (2)
-
Valkaama DVD Label
4 October 2011, by
Updated: February 2013
Language: English
Type: Image
-
Podcasting Legal guide
16 May 2011, by
Updated: May 2011
Language: English
Type: Text
Other articles (99)
-
Improvements to the basic version
13 September 2013
A nicer multiple select
The Chosen plugin improves the usability of multiple-select form fields. See the two images below for a comparison.
To do so, simply activate the Chosen plugin (General site configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)
-
Emballe médias: what is it for?
4 February 2011
This plugin is designed to manage sites that publish documents of all types.
It creates "media", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a "media" article;
-
Specific configuration for PHP5
4 February 2011
PHP5 is required; you can install it by following this dedicated tutorial.
It is recommended to disable safe_mode at first; however, if it is configured correctly and the required binaries are accessible, MediaSPIP should work correctly with safe_mode enabled.
Specific modules
Certain specific PHP modules must be installed, via your distribution's package manager or manually: php5-mysql for connectivity with the (...)
On other sites (12825)
-
The neutering of Google Code-In 2011
Posting this from the Google Summer of Code Mentor Summit, at a session about Google Code-In!
Google Code-In is the most innovative open-source program I’ve ever seen. It provided a way for students who had never done open source — or never even done programming — to get involved in open source work. It made it easy for people who weren’t sure of their ability, who didn’t know whether they could do open source, to get involved and realize that yes, they too could do amazing work — whether code useful to millions of people, documentation to make the code useful, translations to make it accessible, and more. Hundreds of students had a great experience, learned new things, and many stayed around in open source projects afterwards because they enjoyed it so much!
x264 benefitted greatly from Google Code-In. Most of the high bit depth assembly code was written through GCI — literally man-weeks of work by a professional developer, done by high-schoolers who had never written assembly before! Furthermore, we got loads of bugs fixed in ffmpeg/libav, a regression test tool, and more. And best of all, we gained a new developer: Daniel Kang, who is now a student at MIT, an x264 and libav developer, and has gotten paid work applying the skills he learned in Google Code-In!
Some students in GCI complained about the system being “unfair”. Task difficulties were inconsistent and there were many ways to game the system to get lots of points. Some people complained about Daniel — he was completing a staggering number of tasks, so they must be too easy. Yet many of the other students considered these tasks too hard. I mean, I’m asking high school students to write hundreds of lines of complicated assembly code in one of the world’s most complicated instruction sets, and optimize it to meet extremely strict code-review standards! Of course, there may have been valid complaints about other projects: I did hear from many students talking about gaming the system and finding the easiest, most “profitable” tasks. Though, with the payout capped at $500, the only prize for gaming the system is a high rank on the points list.
According to people at the session, in an effort to make GCI more “fair”, Google has decided to change the system. There are two big changes they’re making.
Firstly, Google is requiring projects to submit tasks on only two dates: the start, and the halfway point. But in Google Code-In, we certainly had no idea at the start what types of tasks would be the most popular — or new ideas that came up over time. Often students would come up with ideas for tasks, which we could then add! A waterfall-style plan-everything-in-advance model does not work for real-world coding. The halfway point addition may solve this somewhat, but this is still going to dramatically reduce the number of ideas that can be proposed as tasks.
Secondly, Google is requiring projects to submit at least 5 tasks in each of eight categories just to apply: quality assurance, translation, documentation, coding, outreach, training, user interface, and research. For large projects like Gnome, this is easy: they can certainly come up with 5 in each area for such a large, general project. But for a small, focused project, some of these categories are often completely irrelevant. This rules out a huge number of smaller projects that just don’t have relevant work in all these categories. x264 may be saved here: as we work under the Videolan umbrella, we’ll likely be able to fudge enough tasks from Videolan to cover the gaps. But hundreds of other organizations are going to be out of luck. It would make more sense to require, say, 5 out of the 8 categories, to allow some flexibility while still encouraging interesting non-coding tasks.
For example, what’s “user interface” for a software library with a stable API, say, a libc? Can you make 5 tasks out of it that are actually useful?
If x264 applied on its own, could you come up with 5 real, meaningful tasks in each category for it? It might be possible, but it’d require a lot of stretching.
How many smaller or more-focused projects do you think are going to give up and not apply because of this?
Is GCI supposed to be something for everyone, or just for Gnome, KDE, and other megaprojects?
-
ffmpeg in opencv 2.4
27 August 2012, by Josh Elsdon
I am using OpenCV, but it is very picky about the video types it will open. I have version 2.4 of OpenCV and it apparently has FFmpeg support built in, yet it does not open any movie files correctly: obj.grab() returns false and retrieve(frame, 0) returns NULL. So it seems to me that FFmpeg is not working correctly. Is there something I need to do to turn it on? From internet searches, people seem to suggest that from 2.4 onward this is a non-issue (though apparently not). Any help would be appreciated; all the other threads seem to stop just short of the answer for me.
(Windows 7, Visual Studio 2010, OpenCV 2.4)
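For context, a minimal sketch of the kind of check being described, using the OpenCV 2.4 C++ API; the file name is a placeholder, and the point is only that isOpened()/grab()/retrieve() report whether the FFmpeg backend is actually decoding anything:

    // Minimal check of whether OpenCV's FFmpeg backend opens and decodes a file.
    // "sample.avi" is a placeholder path; link against opencv_core and opencv_highgui.
    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main()
    {
        cv::VideoCapture cap("sample.avi");
        if (!cap.isOpened()) {
            std::cerr << "VideoCapture could not open the file "
                         "(FFmpeg backend missing or container unsupported)\n";
            return 1;
        }

        cv::Mat frame;
        int frames = 0;
        while (cap.grab() && cap.retrieve(frame))  // both return false once decoding fails
            ++frames;

        std::cout << "decoded " << frames << " frames\n";
        return 0;
    }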
-
Re-solving My Search Engine Problem
14 years ago, I created a web database of 8-bit Nintendo Entertainment System games. To make it useful, I developed a very primitive search feature.
A few months ago, I decided to create a web database of video game music. To make it useful, I knew it would need to have a search feature. I realized I needed to solve the exact same problem again.
Requirements
The last time I solved this problem, I came up with an excruciatingly naïve idea. Hey, it worked. I really didn’t want to deploy the same solution again because it felt so silly the first time. Surely there are many better ways to solve it now? Many different workable software solutions that do all the hard work for me?
The first time I attacked this, it was 1998 and hosting resources were scarce. On my primary web host I was able to put static HTML pages, perhaps with server side includes. The web host also offered dynamic scripting capabilities via something called htmlscript (a.k.a. MIVA Script). I had a secondary web host in my ISP which allowed me to host conventional CGI scripts on a Unix host, so that’s where I hosted the search function (Perl CGI script accessing a key/value data store file).
Nowadays, the sky’s the limit. Any type of technology you want to deploy should be tractable. Still, a key requirement was that I didn’t want to pay for additional hosting resources for this silly little side project. That leaves me with the options my current shared web hosting plan allows, which include such advanced features as PHP, Perl and Python scripts. I can also access MySQL.
Candidates
There are a lot of mature software packages out there which can index and search data and be plugged into a website. But a lot of them would be unworkable on my web hosting plan due to language or library package limitations. Further, a lot of them feel like overkill. At the most basic level, all I really want to do is map a series of video game titles to URLs in a website.
Based on my research, Lucene seems to hold a fair amount of mindshare as an open source indexing and search solution. But I was unsure of my ability to run it on my hosting plan. I think MySQL does some kind of full text search, so I could have probably made a solution around that. Again, it just feels like way more power than I need for this project.
I used Swish-e once about 3 years ago for a little project. I wasn’t confident of my ability to run that on my server either. It has a Perl API but it requires custom modules.
My quest for a search solution grew deep enough that I started perusing a textbook on information retrieval techniques in preparation for possibly writing my own solution from scratch. However, in doing so, I figured out how I might subvert an existing solution to do what I want.
Back to Swish-e
Again, all I wanted to do was pull data out of a database and map that data to a URL in a website. Reading the Swish-e documentation, I learned that the software supports a mode specifically tailored for this. Rather than asking Swish-e to index a series of document files living on disk, you can specify a script for Swish-e to run and the script will generate what appears to be a set of phantom documents for Swish-e to index.
When I ’add’ a game music file to the game music website, I have scripts that scrape the metadata (game title, system, song titles, composers, company, copyright, the original file name on disk, even the ripper/dumper who extracted the chiptune in the first place) and store it all in an SQLite database. When it’s time to update the database, another script systematically generates a series of pseudo-documents that spell out the metadata for each game and prefix each document with a path name. Searching for a term in the index returns a list of paths that contain the search term. Thus, it makes sense for that path to be a site URL.
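To illustrate the shape of those pseudo-documents, here is a minimal sketch of such a generator. The Path-Name/Content-Length header convention is Swish-e’s documented "prog" input format as best I can tell, and the record, path, and field names below are placeholders rather than anything from the real database:

    // Minimal sketch of a "prog"-mode document generator for Swish-e.
    // A real version would iterate over rows in the SQLite metadata database;
    // the single hard-coded record below only shows the output format.
    #include <cstdio>
    #include <cstring>

    static void emit_pseudo_document(const char *site_path, const char *body)
    {
        // Swish-e's prog input method reads these headers, a blank line,
        // then exactly Content-Length bytes of document body.
        std::printf("Path-Name: %s\n", site_path);
        std::printf("Content-Length: %zu\n\n", std::strlen(body));
        std::fputs(body, stdout);
    }

    int main()
    {
        emit_pseudo_document(
            "/music/example_game.html",          // becomes the search-result URL
            "Example Game Title\n"
            "System: NES\n"
            "Composer: Example Composer\n"
            "Original file: Example Game (1990)(Example Co)\n");
        return 0;
    }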
But what about a web script which can search this Swish-e index? That’s when I noticed Swish-e’s C API and came up with a crazy idea: write the CGI script directly in C. It feels like sheer madness (or at least the height of software insecurity) to write a CGI script directly in C in this day and age. But it works (with the help of cgic for input processing), just as long as I statically link the search script with libswish-e.a (and libz.a). The web host is an x86 machine, after all.
I’m not proud of what I did here— I’m proud of how little I had to do here. The searching CGI script is all of about 30 lines of C code. The one annoyance I experienced while writing it is that I had to consult the Swish-e source code to learn how to get my search results (the "swishdocpath" key — or any other key — for SwishResultPropertyStr() is not documented). Also, the C program just does the simplest job possible, only querying the term in the index and returning the results in plaintext, in order of relevance, to the client-side JavaScript code which requested them. JavaScript gets the job of sorting and grouping the results for presentation.
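To give a sense of its shape, here is a hedged sketch of such a script, not the actual code: SwishResultPropertyStr() and the "swishdocpath" property come straight from the description above, while the other Swish-e calls (SwishInit, SwishQuery, SwishNextResult, SwishClose), the cgic calls, the "q" parameter name, and the index path are assumptions to be verified against the respective headers.

    /* Sketch of a ~30-line search CGI in plain C, statically linked against
     * libswish-e.a and libz.a as described above. cgic supplies main() and
     * calls cgiMain(). Entry points other than SwishResultPropertyStr() are
     * assumed and should be checked against swish-e.h. */
    #include <stdio.h>
    #include "cgic.h"      /* cgic input-processing library */
    #include "swish-e.h"   /* Swish-e C API */

    int cgiMain(void)
    {
        char query[256];
        SW_HANDLE  swish;
        SW_RESULTS results;
        SW_RESULT  result;

        /* Pull the search term out of the request (e.g. ?q=castlevania). */
        cgiFormString("q", query, sizeof(query));
        cgiHeaderContentType("text/plain");

        swish = SwishInit("index.swish-e");          /* placeholder index path */
        results = SwishQuery(swish, query);

        /* One document path per line, already in relevance order; the
         * client-side JavaScript sorts and groups them for presentation. */
        if (results != NULL)
            while ((result = SwishNextResult(results)) != NULL)
                fprintf(cgiOut, "%s\n",
                        SwishResultPropertyStr(result, "swishdocpath"));

        SwishClose(swish);
        return 0;
    }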
Tuning the Search
Almost immediately, I noticed that the search engine could not find one of my favorite SNES games, U.N. Squadron. That’s because all of its associated metadata names Area 88, the game’s original title. Thus, I had to modify the metadata database to allow attaching somewhat free-form tags to games in order to compensate. In this case, an alias title would show up in the game’s pseudo-document.
Roman numerals are still a thorn in my side, just as they were 14 years ago in my original iteration. I dealt with it back then by converting all numbers to Roman numerals during the indexing and searching processes. I’m not willing to do that for this case and I’m still looking for a good solution.
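For what it’s worth, that old trick boils down to canonicalizing numbers the same way on both the indexing side and the query side, so that a query like "Final Fantasy 3" and an indexed title "Final Fantasy III" reduce to the same token. A minimal sketch of such a conversion helper; the helper name and example values are illustrative, not from the original code:

    // Sketch of the old normalization trick: rewrite Arabic numerals as Roman
    // numerals in both the indexed text and the query, so "3" and "III" match.
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    static std::string to_roman(int n)
    {
        static const std::vector<std::pair<int, const char *>> table = {
            {1000, "M"}, {900, "CM"}, {500, "D"}, {400, "CD"},
            {100,  "C"}, {90,  "XC"}, {50,  "L"}, {40,  "XL"},
            {10,   "X"}, {9,   "IX"}, {5,   "V"}, {4,   "IV"}, {1, "I"}};
        std::string out;
        for (const auto &entry : table)
            while (n >= entry.first) { out += entry.second; n -= entry.first; }
        return out;
    }

    int main()
    {
        std::cout << to_roman(3)  << "\n";  // III
        std::cout << to_roman(14) << "\n";  // XIV
        return 0;
    }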
Another annoying problem deals with Mega Man, a popular franchise. The proper spelling is 2 words, but it’s common for people to mash it into one word, Megaman (see also: Spider-Man, Spiderman, Spider Man). The index doesn’t gracefully deal with that and I have some hacks in place to cope for the time being.
Positive Results
I’m pleased with the results so far, and so are the users I have heard from. I know one user expressed amazement that a search for Castlevania turned up Akumajou Densetsu, the Japanese version of Castlevania III: Dracula’s Curse. This didn’t surprise me because I manually added a hint for that mapping. (BTW, if you are a fan of Castlevania III, definitely check out the Akumajou Densetsu soundtrack, which has an upgraded version of the same soundtrack using special audio channels.)
I was a little more surprised when a user announced that searching for ’probotector’ correctly turned up Contra: Hard Corps. I looked into why this was. It turns out that the original chiptune filename was extremely descriptive: "Contra - Hard Corps [Probotector] (1994-08-08)(Konami)". The filenames themselves often carry a bunch of useful metadata, which is why it’s important to index those as well.
And of course, many rippers, dumpers, and taggers have labored for over a decade to lovingly tag these songs with as much composer information as possible, which all gets indexed. The search engine gets a lot of compliments for its ability to find many songs written by favorite composers.