Media (1)

Keyword: - Tags -/bug

Other articles (28)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • APPENDIX: Extensions, the channels’ SPIP plugins

    11 February 2010

    A plugin is a functional addition to the main SPIP core. MediaSPIP consists of a deliberate selection of plugins that may or may not have previously existed in the SPIP community; some of them had to be created from scratch, while others required added functionality.
    The extensions MediaSPIP needs in order to work
    Since version 2.1.0, SPIP has allowed plugins to be added to the extensions/ directory.
    The "extensions" are neither more nor less than plugins whose particularity is that they (...)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the skeleton; a page for configuring the site’s home page; a page for configuring the sectors;
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling the display and the specific features (...)

On other sites (5642)

  • Re-solving My Search Engine Problem

    28 July 2012, by Multimedia Mike — General, swish-e

    14 years ago, I created a web database of 8-bit Nintendo Entertainment System games. To make it useful, I developed a very primitive search feature.

    A few months ago, I decided to create a web database of video game music. To make it useful, I knew it would need to have a search feature. I realized I needed to solve the exact same problem again.

    Requirements
    The last time I solved this problem, I came up with an excruciatingly naïve idea. Hey, it worked. I really didn’t want to deploy the same solution again because it felt so silly the first time. Surely there are many better ways to solve it now? Many different workable software solutions that do all the hard work for me?

    The first time I attacked this, it was 1998 and hosting resources were scarce. On my primary web host I was able to put static HTML pages, perhaps with server side includes. The web host also offered dynamic scripting capabilities via something called htmlscript (a.k.a. MIVA Script). I had a secondary web host in my ISP which allowed me to host conventional CGI scripts on a Unix host, so that’s where I hosted the search function (Perl CGI script accessing a key/value data store file).

    Nowadays, sky’s the limit. Any type of technology you want to deploy should be tractable. Still, a key requirement was that I didn’t want to pay for additional hosting resources for this silly little side project. That leaves me with options that my current shared web hosting plan allows, which includes such advanced features as PHP, Perl and Python scripts. I can also access MySQL.

    Candidates
    There are a lot of mature software packages out there which can index and search data and be plugged into a website. But a lot of them would be unworkable on my web hosting plan due to language or library package limitations. Further, a lot of them feel like overkill. At the most basic level, all I really want to do is map a series of video game titles to URLs in a website.

    Based on my research, Lucene seems to hold a fair amount of mindshare as an open source indexing and search solution. But I was unsure of my ability to run it on my hosting plan. I think MySQL does some kind of full text search, so I could have probably made a solution around that. Again, it just feels like way more power than I need for this project.

    I used Swish-e once about 3 years ago for a little project. I wasn’t confident of my ability to run that on my server either. It has a Perl API but it requires custom modules.

    My quest for a search solution grew deep enough that I started perusing a textbook on information retrieval techniques in preparation for possibly writing my own solution from scratch. However, in doing so, I figured out how I might subvert an existing solution to do what I want.

    Back to Swish-e
    Again, all I wanted to do was pull data out of a database and map that data to a URL in a website. Reading the Swish-e documentation, I learned that the software supports a mode specifically tailored for this. Rather than asking Swish-e to index a series of document files living on disk, you can specify a script for Swish-e to run and the script will generate what appears to be a set of phantom documents for Swish-e to index.

    When I ’add’ a game music file to the game music website, I have scripts that scrape the metadata (game title, system, song titles, composers, company, copyright, the original file name on disk, even the ripper/dumper who extracted the chiptune in the first place) and store it all in an SQLite database. When it’s time to update the database, another script systematically generates a series of pseudo-documents that spell out the metadata for each game and prefixes each document with a path name. Searching for a term in the index returns a list of paths that contain the search term. Thus, it makes sense for that path to be a site URL.
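
    A rough sketch of what such a generator script could look like follows. This is purely illustrative, not the site’s actual code: the database file, table and column names, and the /game/<id> URL scheme are invented for the example; only the Path-Name/Content-Length framing comes from Swish-e’s “prog” document-source protocol.

    ```js
    #!/usr/bin/env node
    // Illustrative sketch: emit pseudo-documents for Swish-e's "prog" document
    // source from an SQLite metadata database. Schema and URLs are made up.
    const Database = require('better-sqlite3'); // assumed npm dependency

    const db = new Database('gamemusic.sqlite');
    const rows = db.prepare(
      'SELECT id, title, system, composers, company, original_filename FROM games'
    ).all();

    for (const row of rows) {
      // One pseudo-document per game: just the metadata terms worth indexing.
      const body = [row.title, row.system, row.composers, row.company,
                    row.original_filename].filter(Boolean).join('\n') + '\n';
      // Path-Name is what a search hit reports back, so make it the site URL
      // the result should point to; Content-Length tells Swish-e where this
      // pseudo-document ends.
      process.stdout.write(`Path-Name: /game/${row.id}\n`);
      process.stdout.write(`Content-Length: ${Buffer.byteLength(body, 'utf8')}\n\n`);
      process.stdout.write(body);
    }
    ```

    Swish-e is then pointed at a program like this, instead of at files on disk, whenever the index is rebuilt.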

    But what about a web script which can search this Swish-e index? That’s when I noticed Swish-e’s C API and came up with a crazy idea: write the CGI script directly in C. It feels like sheer madness (or at least the height of software insecurity) to write a CGI script directly in C in this day and age. But it works (with the help of cgic for input processing), just as long as I statically link the search script with libswish-e.a (and libz.a). The web host is an x86 machine, after all.

    I’m not proud of what I did here— I’m proud of how little I had to do here. The searching CGI script is all of about 30 lines of C code. The one annoyance I experienced while writing it is that I had to consult the Swish-e source code to learn how to get my search results (the "swishdocpath" key — or any other key — for SwishResultPropertyStr() is not documented). Also, the C program just does the simplest job possible, only querying the term in the index and returning the results in plaintext, in order of relevance, to the client-side JavaScript code which requested them. JavaScript gets the job of sorting and grouping the results for presentation.

    Tuning the Search
    Almost immediately, I noticed that the search engine could not find one of my favorite SNES games, U.N. Squadron. That’s because all of its associated metadata names Area 88, the game’s original title. Thus, I had to modify the metadata database to allow attaching somewhat free-form tags to games in order to compensate. In this case, an alias title would show up in the game’s pseudo-document.

    Roman numerals are still a thorn in my side, just as they were 14 years ago in my original iteration. I dealt with it back then by converting all numbers to Roman numerals during the index and searching processes. I’m not willing to do that for this case and I’m still looking for a good solution.

    Another annoying problem deals with Mega Man, a popular franchise. The proper spelling is 2 words but it’s common for people to mash it into one word, Megaman (see also: Spider-Man, Spiderman, Spider Man). The index doesn’t gracefully deal with that and I have some hacks in place to cope for the time being.
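
    For illustration only (the post doesn’t say what the actual hacks are), one cheap workaround at pseudo-document generation time would be to also emit mashed-together variants of multi-word titles, so that a one-word query like “Megaman” still finds “Mega Man”:

    ```js
    // Hypothetical helper for a generator script: extra index terms formed by
    // joining adjacent words of a multi-word title.
    function mashedVariants(title) {
      const words = title.split(/\s+/).filter(Boolean);
      const variants = [];
      for (let i = 0; i + 1 < words.length; i++) {
        variants.push(words[i] + words[i + 1]); // "Mega Man 2" -> "MegaMan", "Man2"
      }
      if (words.length > 2) {
        variants.push(words.join(''));          // fully mashed form: "MegaMan2"
      }
      return variants;
    }
    ```

    Appending these variants to each game’s pseudo-document would let the mashed spellings match without changing Swish-e itself.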

    Positive Results
    I’m pleased with the results so far, and so are the users I have heard from. I know one user expressed amazement that a search for Castlevania turned up Akumajou Densetsu, the Japanese version of Castlevania III: Dracula’s Curse. This didn’t surprise me because I manually added a hint for that mapping. (BTW, if you are a fan of Castlevania III, definitely check out the Akumajou Densetsu soundtrack, which has an upgraded version of the same soundtrack using special audio channels.)

    I was a little more surprised when a user announced that searching for ’probotector’ correctly turned up Contra: Hard Corps. I looked into why this was. It turns out that the original chiptune filename was extremely descriptive: "Contra - Hard Corps [Probotector] (1994-08-08)(Konami)". The filenames themselves often carry a bunch of useful metadata, which is why it’s important to index those as well.

    And of course, many rippers, dumpers, and taggers have labored for over a decade to lovingly tag these songs with as much composer information as possible, which all gets indexed. The search engine gets a lot of compliments for its ability to find many songs written by favorite composers.

  • Progress with rtc.io

    12 August 2014, by silvia

    At the end of July, I gave a presentation about WebRTC and rtc.io at the WDCNZ Web Dev Conference in beautiful Wellington, NZ.

    Putting that talk together reminded me about how far we have come in the last year both with the progress of WebRTC, its standards and browser implementations, as well as with our own small team at NICTA and our rtc.io WebRTC toolbox.

    Slide 5 of the WDCNZ presentation

    One of the most exciting opportunities is still under-exploited : the data channel. When I talked about the above slide and pointed out Bananabread, PeerCDN, Copay, PubNub and also later WebTorrent, that’s where I really started to get Web Developers excited about WebRTC. They can totally see the shift in paradigm to peer-to-peer applications away from the Server-based architecture of the current Web.

    Many were also excited to learn more about rtc.io, our own npm-modules-based approach to a JavaScript API for WebRTC.

    We believe that the World of JavaScript has reached a critical stage where we can no longer code by copy-and-paste of JavaScript snippets from all over the Web universe. We need a more structured module reuse approach to JavaScript. Node with JavaScript on the back end really only motivated this development. However, we’ve needed it for a long time on the front end, too. One big library (jquery, anyone?) that does everything that anyone could ever need on the front end isn’t going to work any longer with the amount of functionality that we now expect Web applications to support. Just look at the insane growth of npm compared to other module collections:

    Packages per day across popular platforms (Shamelessly copied from: http://blog.nodejitsu.com/npm-innovation-through-modularity/)

    For those that – like myself – found it difficult to understand how to tap into the sheer power of npm modules as a front-end developer, simply use browserify. npm modules are prepared following the CommonJS module definition spec. Browserify works natively with that and “compiles” all the dependencies of an npm module into a single bundle.js file that you can use on the front end through a script tag as you would in plain HTML. You can learn more about browserify and module definitions and how to use browserify.
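
    As a minimal sketch of that workflow (the file names here are arbitrary): write ordinary CommonJS modules, then let browserify resolve the require() calls into a single file the browser can load.

    ```js
    // hello.js -- an ordinary CommonJS module, the same format npm packages use
    module.exports = function hello(name) {
      return 'Hello, ' + name + '!';
    };

    // main.js -- front-end entry point that require()s it just like Node code
    var hello = require('./hello');
    document.body.textContent = hello('WebRTC');

    // Bundle it for the browser (shell):
    //   npm install -g browserify
    //   browserify main.js -o bundle.js
    // and load the result with a plain script tag:
    //   <script src="bundle.js"></script>
    ```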

    For those of you not quite ready to dive in with browserify, we have prepared the rtc module, which exposes the most commonly used packages of rtc.io through an “RTC” object from a browserified JavaScript file. You can also directly download the JavaScript file from GitHub.

    Using rtc.io rtc JS library

    So, I hope you enjoy rtc.io and I hope you enjoy my slides and large collection of interesting links inside the deck, and of course: enjoy WebRTC! Thanks to Damon, Jeff, Cathy, Pete and Nathan – you’re an awesome team!

    On a side note, I was really excited to meet the author of browserify, James Halliday (@substack), at WDCNZ, whose talk on “building your own tools” seemed to take me back to the times when everything was done on the command line. I think James is using Node and the Web in a way that would appeal to a Linux kernel developer. Fascinating!!

    The post Progress with rtc.io first appeared on ginger’s thoughts.

  • Date and segment comparison feature

    31 October 2019, by Matomo Core Team — Analytics Tips, Development

    Get a clearer picture with the date and segment comparison feature

    What can you do with it? What are the benefits?

    Make informed decisions faster by easily comparing different segments and dates with each other.

    Compare report data for multiple segments next to each other

    Segment comparison feature

    Directly compare the behaviour of visitors from different segments, e.g. customers with accounts vs. customers without accounts. Segment comparisons are a powerful way to compare different audiences; learn which ones perform better; and see in what ways their actions differ.

    Compare report data for two time periods next to each other

    Comparing date ranges

    See how your website performs compared to the previous month, week, or year, including trends over those periods. Say your business always picks up at the same times within a year, or there’s a sag in business over this year and the last for every user segment except one.

    By being able to compare date ranges, you can get a quick overview of trends and period-to-period performance. Has a campaign worked better in September than in October? Get an instant look with a side-by-side comparison in Matomo.

    What is it capable of?

    It lets you ask the question, “What is different?”

    If you look at reports, you’ll only see how people behave overall, and if you look at specific segments, you’ll only see how they behave at face value. If you compare data together, however, you’ll quickly see what makes each group unique. This data is still there when you don’t use the comparison feature; it’s just buried. Comparing data highlights discrepancies and leads to important questions and answers.

    For example, perhaps some class of users has very low engagement on a specific day compared to the rest of your visitors, and perhaps those users are responsible for an outsized proportion of churn.

    Who could benefit from it, and why?

    Everyone can benefit from using it (and probably should use it). It’s yours to experiment with! You shouldn’t feel restricted to comparing only the current and previous periods, or feel you need specific questions before you start comparing. Follow your instincts and see what pops out when data from different segments is laid out next to each other.

    Where can you find it in Matomo?

    • Segment comparison is activated by the new icon in the segment selector
    Segment comparison feature
    • Date comparison can be found in the calendar section of Matomo
    Date comparison feature
    • The list of active comparisons is visible at the top of the page for all pages that support comparison
    • Comparisons are visible in every report that supports comparing data, and reports that do not support it will display a message saying so

    How do you use it?

    • To compare segments, click the icon in the segment selector
    • To compare periods, click the ‘compare’ checkbox in the period selector, then select what period you want to compare it against in the dropdown (previous period, previous year, or a custom range)
    • When comparisons are active, view your reports as normal

    Take it away!

    The comparison feature is a new tool in Matomo 3.12.0 that highlights discrepancies and differences in data, which can lead to more clarity and understanding, so we’d encourage everyone to use it.

    Try it out today in your Matomo and see the power behind this new data comparison mode!