
How Funnel for Piwik Analytics enriches your Piwik experience giving you ultimate insights and debugging capabilities
13 January 2017, by InnoCraft — Community

No matter what type of website or app you have, whether you are trying to get your users to sign up for something or to sell products, there is a certain number of steps your visitors have to go through. At every step you lose visitors, and therefore potential revenue and conversions. It is therefore critical to know whether your visitors actually follow those steps in your website or app, where you lose them, and where they may get confused. By defining a funnel, you can improve your conversion rates, sales and revenue, because you can determine exactly where you lose your visitors on the way to converting a goal or making a sale.
A Funnel defines a series of steps that you expect your visitors to take on their way to converting a goal. Funnels, a premium feature for Piwik developed by InnoCraft, lets you create funnels to get the data you need to improve your websites and mobile apps. Learn more about Funnel.
In this blog post we will cover the reports the Funnel plugin provides. The next blog post shows you how to configure and validate your funnel in Piwik.
Integration in Goal reports
At Piwik and InnoCraft, we usually start by looking into our goal reports. Funnel integrates directly into each goal reporting page, giving you a quick overview of how your funnel is doing. This saves us a lot of time as we don’t have to look into each funnel page separately, and it takes maybe an additional second to keep an eye on our funnels. By clicking on the headline or the “View funnel report” link, you can go directly to the funnel report for more detail if you notice a spike in the evolution of the conversions or conversion rate.
Getting an overall Funnel overview
Next we usually go to the “Funnel Overview” page, which shows a list of all activated funnels and their performance over time. You will find the look familiar as it is similar to the “Goals Overview” page. If we find something unusual there, for example a spike, we usually click directly on the headline of the funnel to go to the detailed funnel report. You can also choose a funnel from the left reporting menu or search for a funnel by entering the shortcut “f”.
Viewing a funnel report
A funnel reporting page looks very similar to a Goal reporting page. It starts with an evolution graph and sparklines showing you the performance of your funnel over time.
In the evolution graph you can select the metrics you want to plot. We usually keep an eye on the funnel conversion rate and the number of “Funnel entries” or “Funnel conversions”. The conversion rate alone does not show you how your funnel is performing. Imagine the rate is stable at around 20% and you might think everything is all right; but if the number of visitors that take part in your funnel goes down, you might have a problem, as the number of funnel conversions actually decreases even though the rate stays the same. So we recommend not looking at the conversion rate alone. The report will remember the metrics you plot each time you open it, so you don’t have to re-select them over and over again.
The funnel overview
The funnel overview gives you more details about the funnel- and goal-related conversion metrics, so you don’t have to switch between the goal and funnel reports and can compare them easily.
When you analyze a funnel report, you might not always remember how the funnel is configured. Even though you specify names for each step, you sometimes need to know on which pages a certain step will be activated. By clicking on the funnel summary link you can quickly look into the funnel configuration and also see all important metrics at a glance in a simple table, without having to scroll.
You might also notice the Visitor Log link which will show you all actions for all visitors that have entered this funnel. This lets you really understand how your visitors navigate through your website and how they proceeded, exited or converted your funnel on a visitor level.
The Funnel visualization
Below the funnel overview you can visually see where your visitors entered, proceeded, converted and exited your funnel. We kept the UI clean so you can focus on the important things.
Most tools only give you the pages where visitors entered your funnel, but we do better and also show you the list of external referrers visitors used to enter your funnel directly (marketing campaigns, search engines or other websites). Also, we show not only the top 5 pages but up to 100 pages and 50 referrers (more can be configured if needed). When you hover over a row, you will see not only the number of hits but also the percentage each row contributed to the entries. Here you want to understand how your visitors enter your funnel and, based on the data, maybe invest in successful referrers, campaigns and pages. If the pages or referrers you expect to see there don’t show up, your users might not understand the path you had in mind for them.
Next you may notice how many visits have gone through each step, in this case 3487 visits. The green and red bar lets you quickly identify how many of your visitors proceeded to the next step (green) compared to how many exited the funnel at this step (red). Ideally, most of the bar is green and not red, indicating that more visitors proceed to the next step than exit.
Now the next feature is really valuable. When you hover over the step title or the number of visits, you will notice that two icons appear:
Those two little icons are really powerful and give you even more insight to really dig into the data. The left icon shows you the visitor log, listing all actions of each visitor that has participated in this particular funnel step. This means that for each step you get to see all the details and actions of each visitor, letting you really debug and understand problems in your funnel.
At InnoCraft, we understand that plain numbers are often not so valuable. Only when you look at the evolution over time and put the numbers in relation to something else can you really understand how your website is doing. The icon on the right lets you do exactly that: it lets you view the row evolution for each funnel step. We are sure you will enjoy this feature. It lets you explore how each funnel step is doing over time, for example the number of entries for a step, or how many visitors proceeded to the next step from there. Ideally you want to see the “Proceeded Rate” increase over time, meaning more and more visitors actually proceed to the next step instead of exiting.
We are sure you will really love those features that give you just those extra insights that other tools don’t give you.
On the right you can find out where your visitors went if they did not proceed any further in the funnel. This lets you better understand why they left the funnel.
At the end of the funnel report you will again find the number of conversions and the conversion rate. Here we recommend looking into the visitor log when you hover over the name of the last step, as you can analyze in detail how each visitor converted this funnel.
Applying segments
Funnels lets you apply any Piwik segment to the funnel report, allowing you to slice your visitors and multiplying the value you get out of Funnels. For example, you may want to apply a segment and analyze the funnel for visitors that visited your website or mobile app for the first time versus returning visitors. Sometimes it may be interesting to see how visitors from different countries go through your funnel; the possibilities are endless. We really recommend taking advantage of segments to understand your different target groups even better.
The plugin also adds some new segments to your Piwik, letting you segment any Piwik report by visitors that have participated in a funnel or in a particular funnel step. For example, you could go to the “Visitors => Locations” report and apply a segment for your funnel to see which countries participated or converted most in your funnel.
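Since any Piwik report can be fetched over the HTTP Reporting API with an optional segment parameter, segmented funnel data can also be queried programmatically. Below is a minimal JavaScript sketch; the generic parameters (module, method, idSite, period, date, format, segment, token_auth) are standard Reporting API parameters, but the method name “Funnels.getFunnelFlow” is a hypothetical stand-in, so check the Funnel HTTP API documentation for the real method and segment names.

    // Sketch: fetching a (hypothetical) Funnel report with a segment applied.
    var params = {
      module: 'API',
      method: 'Funnels.getFunnelFlow',  // hypothetical method name
      idSite: '1',
      period: 'day',
      date: 'yesterday',
      format: 'JSON',
      segment: 'countryCode==FR',       // any Piwik segment definition works
      token_auth: 'anonymous'           // use a real token for private data
    };
    var query = Object.keys(params).map(function (k) {
      return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
    }).join('&');
    fetch('https://piwik.example.org/index.php?' + query)
      .then(function (response) { return response.json(); })
      .then(function (report) { console.log(report); });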
Widgets, Scheduled Reports, and more.
This is not where the fun ends. Funnels defines new widgets that you can add to your dashboard or export to a third-party website. You can set up scheduled reports to receive the funnel report automatically via email or SMS, or download the report to share it with your colleagues. It also works very well with Custom Alerts, and you can view the funnel report in the Piwik Mobile app. You can manage funnels via the HTTP API and also fetch all funnel reports via the HTTP Reporting API. The plugin is so nicely integrated into Piwik that we will need a few more blog posts to show you all the ways Funnels advances your Piwik experience and how it lets you dig into all the data so you can increase your conversions and sales.
How to get Funnels and related features
You can get Funnels on the Piwik Marketplace. If you want to learn more about Funnels, you might also be interested in the Funnel User Guide and the Funnel FAQ.
Similar to Funnels, we also offer Users Flow, which lets you visualize the flow of your users and visitors across several interactions.
Developing A Shader-Based Video Codec
22 June 2013, by Multimedia Mike — Outlandish Brainstorms

Early last month, this thing called ORBX.js was in the news. It ostensibly has something to do with streaming video and codec technology, which naturally catches my interest. The hype was kicked off by Mozilla honcho Brendan Eich when he posted an article asserting that HD video decoding could be entirely performed in JavaScript. We’ve seen this kind of thing before using Broadway, an H.264 decoder implemented entirely in JS. But that exposes some very obvious limitations (notably CPU usage).
But this new video codec promises 1080p HD playback directly in JavaScript, which is a lofty claim. How could it possibly do this? I got the impression that performance was achieved using WebGL, an extension which allows JavaScript access to accelerated 3D graphics hardware. Browsing through the conversations surrounding the ORBX.js announcement, I found this confirmation from Eich himself:
You’re right that WebGL does heavy lifting.
As of this writing, ORBX.js remains some kind of private tech demo. If there were a public demo available, it would necessarily be easy to reverse engineer the downloadable JavaScript decoder.
But the announcement was enough to make me wonder how it could be possible to create a video codec which effectively leverages 3D hardware.
Prior Art
In theorizing about this, it continually occurs to me that I can’t possibly be the first person to attempt to do this (or the ORBX.js people, for that matter). In googling on the matter, I found various forums and Q&A posts where people asked if it were possible to, e.g., accelerate JPEG decoding and presentation using 3D hardware, with no answers. I also found a blog post which describes a plan to use 3D hardware to accelerate VP8 video decoding. It was a project done under the banner of Google’s Summer of Code in 2011, though I’m not sure which open source group mentored the effort. The project did not end up producing the shader-based VP8 codec originally chartered, but mentions that “The ‘client side’ of the VP8 VDPAU implementation is working and is currently being reviewed by the libvdpau maintainers.” I’m not sure what that means. Perhaps it includes modifications to the public API that supports VP8, but is waiting for the underlying hardware to actually implement VP8 decoding blocks in hardware.

What’s So Hard About This?
Video decoding is a computationally intensive task. GPUs are known to be really awesome at chewing through computationally intensive tasks. So why aren’t GPUs a natural fit for decoding video codecs?

Generally, it boils down to parallelism, or lack of opportunities thereof. GPUs are really good at doing the exact same operations over lots of data at once. The problem is that decoding compressed video usually requires multiple phases that cannot be parallelized, and the individual phases often cannot be parallelized. In strictly mathematical terms, a compressed data stream will need to be decoded by applying a function f(x) over each data element, x_0 .. x_n. However, the function relies on having applied the function to the previous data element, i.e.:

f(x_n) = f(f(x_(n-1)))
What happens when you try to parallelize such an algorithm? Temporal rifts in the space/time continuum, if you’re in a Star Trek episode. If you’re in the real world, you’ll get incorrect, unusable data, as the parallel computation is seeded with a bunch of invalid data at multiple points (which is illustrated in some of the pictures in the aforementioned blog post about accelerated VP8).
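To make the serial dependency concrete, here is a toy JavaScript sketch of JPEG-style reverse DC prediction (discussed further below): each output value depends on the previous output value, so no slice of the loop can be handed to a parallel shader invocation.

    // Each decoded DC value = previous decoded DC value + stored delta.
    // Iteration i cannot begin until iteration i-1 has finished.
    function reverseDcPrediction(deltas) {
      var dc = new Array(deltas.length);
      var predictor = 0;
      for (var i = 0; i < deltas.length; i++) {
        predictor += deltas[i];  // depends on the previous step's result
        dc[i] = predictor;
      }
      return dc;
    }
    // reverseDcPrediction([5, -2, 3]) -> [5, 3, 6]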
Example: JPEG
Let’s take a very general look at the various stages involved in decoding the ubiquitous JPEG format:
What are the opportunities to parallelize these various phases?
- Huffman decoding (run-length decoding and zig-zag reordering are assumed to be rolled into this phase): not many opportunities for parallelizing the various Huffman formats out there, including this one. Decoding most Huffman streams is necessarily a sequential operation. I once hypothesized that it would be possible to engineer a codec to achieve some parallelism during the entropy decoding phase, and later found that On2’s VP8 codec employs such a scheme. However, such a scheme is unlikely to break down to the fine level that WebGL would require.
- Reverse DC prediction: JPEG — and many other codecs — doesn’t store full DC coefficients. It stores differences between successive DC coefficients. Reversing this process can’t be parallelized. See the discussion in the previous section.
- Dequantize coefficients: This could be heavily parallelized. It should be noted that software decoders often don’t dequantize all coefficients: many coefficients are 0, and dequantizing them is a wasted multiplication operation. Thus, this phase is sometimes rolled into the Huffman decoding phase.
- Invert discrete cosine transform: This seems like it could be highly parallelizable. I will be exploring this further in this post.
- Convert YUV -> RGB for final display: This is a well-established use case for 3D acceleration (a minimal sketch follows this list).
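For reference, here is a minimal sketch of that last phase as a WebGL fragment shader, assuming the Y, U, and V planes have been uploaded as 3 separate single-channel textures and using the common BT.601 full-range conversion factors. Every pixel is converted independently of every other pixel, which is exactly why this phase maps so well to the GPU.

    var yuvToRgbFragmentShader = [
      'precision mediump float;',
      'varying vec2 vTexCoord;',
      'uniform sampler2D uPlaneY;',
      'uniform sampler2D uPlaneU;',
      'uniform sampler2D uPlaneV;',
      'void main() {',
      '  float y = texture2D(uPlaneY, vTexCoord).r;',
      '  float u = texture2D(uPlaneU, vTexCoord).r - 0.5;',
      '  float v = texture2D(uPlaneV, vTexCoord).r - 0.5;',
      '  gl_FragColor = vec4(y + 1.402 * v,',
      '                      y - 0.344 * u - 0.714 * v,',
      '                      y + 1.772 * u,',
      '                      1.0);',
      '}'
    ].join('\n');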
Crash Course in 3D Shaders and Humility
So I wanted to see if I could accelerate some parts of JPEG decoding using something called shaders. I made an effort to understand 3D programming and its associated math throughout the 1990s, but 3D technology left me behind a very long time ago while I got mixed up in this multimedia stuff. So I plowed through a few books concerning WebGL (thanks to my new Safari Books Online subscription). After I learned enough about WebGL/JS to be dangerous, and just enough about shader programming to be absolutely lethal, I set out to try my hand at optimizing IDCT using shaders.

Here’s my extremely high level (and probably hopelessly naive) view of the modern GPU shader programming model:
The WebGL program written in JavaScript drives the show. It sends a set of vertices into the WebGL system, and each vertex is processed through a vertex shader. Then, each pixel that falls within a set of vertices is sent through a fragment shader to compute the final pixel attributes (R, G, B, and alpha value). Another consideration is textures: this is data that the program uploads to GPU memory and which can be accessed programmatically by the shaders.
These shaders (vertex and fragment) are key to the GPU’s programmability. How are they programmed? Using a special C-like shading language. Thought I: “C-like language? I know C! I should be able to master this in short order!” So I charged forward with my assumptions and proceeded to get smacked down repeatedly by the overall programming paradigm. I came to recognize this as a variation of the scientific method: develop a hypothesis, in my case a mental model of how the system works; develop an experiment (short program) to prove or disprove the model; realize something fundamental that I was overlooking; formulate a new hypothesis and repeat.
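For concreteness, here is the bare-bones JavaScript boilerplate that drives this model, sketched under the assumption that vertexSource and fragmentSource strings exist elsewhere (older browsers may need ‘experimental-webgl’ as the context name).

    var canvas = document.createElement('canvas');
    var gl = canvas.getContext('webgl');

    // Compile one shader stage and surface any compiler errors.
    function compile(type, source) {
      var shader = gl.createShader(type);
      gl.shaderSource(shader, source);
      gl.compileShader(shader);
      if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        throw new Error(gl.getShaderInfoLog(shader));
      }
      return shader;
    }

    // Link the two stages into a program and make it active.
    var program = gl.createProgram();
    gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
    gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
    gl.linkProgram(program);
    gl.useProgram(program);
    // From here, JS uploads vertex buffers and textures, then calls
    // gl.drawArrays(); each vertex runs through the vertex shader and each
    // covered pixel runs through the fragment shader.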
First Approach: Vertex Workhorse
My first pitch goes like this:
- Upload DCT coefficients to GPU memory in the form of textures
- Program a vertex mesh that encapsulates 16×16 macroblocks
- Distribute the IDCT effort among multiple vertex shaders
- Pass transformed Y, U, and V blocks to fragment shader which will convert the samples to RGB
So the idea is that decoding of 16×16 macroblocks is parallelized. A macroblock embodies 6 blocks:
It would be nice to process one of these 6 blocks in each vertex. But that means drawing a square with 6 vertices. How do you do that? I eventually realized that drawing a square with 6 vertices is the recommended method for drawing a square on 3D hardware: use 2 triangles, each with 3 vertices (0, 1, 2; 3, 4, 5):
A vertex shader knows which (x, y) coordinates it has been assigned, so it could figure out which sections of coefficients it needs to access within the textures. But how would a vertex shader know which of the 6 blocks it should process? Solution: misappropriate the vertex’s z coordinate. It’s not used for anything else in this case.
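A hypothetical sketch of what that vertex data might look like: two triangles (vertices 0, 1, 2 and 3, 4, 5) covering the square, with the otherwise-unused z coordinate carrying the block index 0..5 (the attribute name ‘aPosition’ is illustrative).

    var vertices = new Float32Array([
      //  x,    y,    z (block index)
       -1.0, -1.0,  0.0,
        1.0, -1.0,  1.0,
       -1.0,  1.0,  2.0,
       -1.0,  1.0,  3.0,
        1.0, -1.0,  4.0,
        1.0,  1.0,  5.0
    ]);
    var buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
    var aPosition = gl.getAttribLocation(program, 'aPosition');
    gl.enableVertexAttribArray(aPosition);
    gl.vertexAttribPointer(aPosition, 3, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLES, 0, 6);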
So I set all of that up. Then I hit a new roadblock: how to get the reconstructed Y, U, and V samples transported to the fragment shader? I have found that communicating between shaders is quite difficult. Texture memory? WebGL doesn’t allow shaders to write back to texture memory; shaders can only read it. The standard way to communicate data from a vertex shader to a fragment shader is to declare variables as “varying”. Up until this point, I knew about varying variables, but there was something I didn’t quite understand about them and it nagged at me: if 3 different executions of a vertex shader set 3 different values to a varying variable, what value is passed to the fragment shader?
It turns out that the varying variable varies, which means that the GPU passes interpolated values to each fragment shader invocation. This completely destroys this idea.
Second Idea : Vertex Workhorse, Take 2
The revised pitch is to work around the interpolation issue by just having each vertex shader invocation perform all 6 block transforms. That seems like a lot of redundancy. However, I figured out that I can draw a square with only 4 vertices by arranging them in an ‘N’ pattern and asking WebGL to draw a TRIANGLE_STRIP instead of TRIANGLES. Now it’s only doing 4x the extra work, not 6x. GPUs are supposed to be great at this type of work, so it shouldn’t matter, right?

I wired up an experiment and then ran into a new problem: while I was able to transform a block (or at least pretend to), and load up a varying array (that wouldn’t vary since all vertex shaders wrote the same values) to transmit to the fragment shader, the fragment shader can’t access specific values within the varying block. To clarify, a WebGL shader can use a constant value — or a value that can be evaluated as a constant at compile time — to index into arrays; a WebGL shader cannot compute an index into an array. Per my reading, this is a WebGL security consideration and the limitation may not be present in other OpenGL(-ES) implementations.
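For completeness, here is a sketch of the 4-vertex ‘N’ arrangement mentioned above, reusing the aPosition attribute from the earlier sketch; a TRIANGLE_STRIP shares an edge between the two triangles, so the same square now costs 4 vertex shader invocations instead of 6.

    // Zig-zag ('N') vertex order: bottom-left, top-left, bottom-right,
    // top-right. The strip emits triangles (0,1,2) and (1,2,3).
    var strip = new Float32Array([
      -1.0, -1.0,
      -1.0,  1.0,
       1.0, -1.0,
       1.0,  1.0
    ]);
    gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
    gl.bufferData(gl.ARRAY_BUFFER, strip, gl.STATIC_DRAW);
    gl.vertexAttribPointer(aPosition, 2, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);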
Not Giving Up Yet: Choking The Fragment Shader
You might want to be sitting down for this pitch:
- Vertex shader only interpolates texture coordinates to transmit to the fragment shader
- Fragment shader performs IDCT for a single Y sample, U sample, and V sample
- Fragment shader converts YUV -> RGB
Seems straightforward enough. However, that step concerning IDCT for Y, U, and V entails a gargantuan number of operations. When computing the IDCT for an entire block of samples, it’s possible to leverage a lot of redundancy in the math, which equates to far fewer overall operations. If you absolutely have to compute each sample individually, an 8×8 block requires 64 multiplication/accumulation (MAC) operations per sample. For 3 color planes, and including a few extra multiplications involved in the RGB conversion, that tallies up to about 200 MACs per pixel. Then there’s the fact that this approach means 4x redundant operations on the color planes.
It’s crazy, but I just want to see if it can be done. My approach is to pre-compute a pile of IDCT constants in the JavaScript and transmit them to the fragment shader via uniform variables. As a first-order optimization, the IDCT constants are formatted as 4-element vectors. This allows computing 16 dot products rather than 64 individual multiplication/addition operations. Ideally, GPU hardware executes the dot products faster (and there is also the possibility of lining these calculations up as matrices).
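Here is roughly what that looks like in the fragment shader, sketched under the constraints described so far: the 64 basis values for this one sample arrive as 16 vec4 uniforms, the block’s 64 coefficients sit 4-per-texel in an RGBA texture, and fetchCoeffQuad() is a hypothetical helper that maps the loop index to the right texel. Indexing a uniform array with the loop counter is allowed here because the loop bounds are compile-time constants.

    var idctFragmentSnippet = [
      'precision highp float;',
      'uniform vec4 uIdctConst[16];  // 64 precomputed basis values, packed',
      'uniform sampler2D uCoeffTex;  // DCT coefficients, 4 per RGBA texel',
      'varying vec2 vBlockCoord;',
      'vec4 fetchCoeffQuad(vec2 blockCoord, int i);  // hypothetical helper',
      '',
      'float idctSample() {',
      '  float s = 0.0;',
      '  for (int i = 0; i < 16; i++) {',
      '    s += dot(fetchCoeffQuad(vBlockCoord, i), uIdctConst[i]);',
      '  }',
      '  return s;',
      '}'
    ].join('\n');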
I can report that I actually got a sample correctly transformed using this approach. Just one sample, though. Then I ran into some new problems:
Problem #1: Computing sample #1 vs. sample #0 requires a different table of 64 IDCT constants. Okay, so create a long table of 64 × 64 IDCT constants. However, this suffers from the same problem as seen in the previous approach: I can’t dynamically compute the index into this array. What’s the alternative? Maintain 64 separate named arrays and implement 64 branches, when branching of any kind is ill-advised in shader programming to begin with? I started to go down this path until I ran into…
Problem #2: Shaders can only be so large. 64 × 64 floats (4 bytes each) requires 16 kbytes of data, and this well exceeds the amount of shader storage that I can assume is allowed. That brings this path of exploration to a screeching halt.
Further Brainstorming
I suppose I could forgo pre-computing the constants and directly compute the IDCT for each sample, which would entail lots more multiplications as well as 128 cosine calculations per sample (384 considering all 3 color planes). I’m a little stuck with the transform idea right now. Maybe there are some other transforms I could try.

Another idea would be vector quantization. What little ORBX.js literature is available indicates that there is a method to allow real-time streaming, but that it requires GPU assistance to yield enough horsepower to make it feasible. When I think of such severe asymmetry between compression and decompression, my mind drifts towards VQ algorithms. As I come to understand the benefits and limitations of GPU acceleration, I think I can envision a way that something similar to SVQ1, with its copious, hierarchical vector tables stored as textures, could be implemented using shaders.
So far, this all pertains to intra-coded video frames. What about opportunities for inter-coded frames? The only approach that I can envision here is to use WebGL’s readPixels() function to fetch the rasterized frame out of the GPU, and then upload it again as a new texture which a new frame processing pipeline could reference. Whether this idea is plausible would require some profiling.
Using interframes in such a manner seems to imply that the entire codec would need to operate in RGB space and not YUV.
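A sketch of that round trip, assuming width and height hold the frame dimensions; both calls are plain WebGL 1.0 API, and whether the GPU-to-CPU-to-GPU copy is fast enough is precisely the profiling question.

    // Pull the rasterized frame (RGBA) back out of the GPU...
    var pixels = new Uint8Array(width * height * 4);
    gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

    // ...and re-upload it as a texture that the next frame's pipeline can
    // reference for prediction. Note the data is RGB(A) at this point,
    // which is why the whole codec would have to work in RGB, not YUV.
    var referenceTexture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, referenceTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, pixels);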
Conclusions
The people behind ORBX.js have apparently figured out a way to create a shader-based video codec. I have yet to even begin to reason out a plausible approach. However, I’m glad I did this exercise since I have finally broken through my ignorance regarding modern GPU shader programming. It’s nice to have a topic like multimedia that allows me a jumping-off point to explore other areas.