Advanced search

Media (1)

Keyword: - Tags -/punk

Other articles (57)

  • Customise by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes on your MediaSPIP site or news about your projects on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news item creation form.
    News item creation form: for a document of the news item type, the fields offered by default are: publication date (customise the publication date) (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

On other sites (10783)

  • 5 Top Google Optimize Alternatives to Consider

    17 March 2023, by Erin — Analytics Tips

    Google Optimize is a popular conversion rate optimization (CRO) tool from Alphabet (parent company of Google). With it, you can run A/B, multivariate, and redirect tests to figure out which web page designs perform best. 

    Google Optimize seamlessly integrates with Google Analytics (GA). It also has a free tier. So many marketers chose it as their default A/B testing tool…until recently. 

    Google will sunset Google Optimize by 30 September 2023

    Starting from this date, Google will no longer support Optimize and Optimize 360 (the premium edition). Any experiments still active after this date will be paused automatically, and you’ll no longer have access to your historical records (unless you export them in advance).

    The better news is that you still have time to find a Google Optimize alternative — and this post will help you with that. 

    Disclaimer: Please note that the information provided in this blog post is for general informational purposes only and is not intended to provide legal advice. Every situation is unique and requires a specific legal analysis. If you have any questions regarding the legal implications of any matter, please consult with your legal team or seek advice from a qualified legal professional.

    Best Google Optimize Alternatives 

    Google Optimize was among the first free A/B testing apps. But as with any product, it has some disadvantages. 

    Data updates happen every 24 hours, not in real time. A free account caps the number of experiments: you cannot run more than 5 experiments at a time or implement more than 16 combinations for multivariate testing (MVT). The premium version (Optimize 360) has fewer usage constraints, but it costs north of $150K per year.

    Google Optimize has native integration with GA (of course), so you can review all the CRO data without switching apps. But Optimize doesn’t work well with Google Analytics alternatives, which many choose to use for privacy-friendly user tracking, higher data accuracy and GDPR compliance. 

    At the same time, many other conversion rate optimization (CRO) tools have emerged, often boasting better accuracy and more competitive features than Google Optimize.

    Here are 5 alternative A/B testing apps worth considering.

    Adobe Target 

    Adobe Target Homepage

    Adobe Target is an advanced personalization platform for optimising user and marketing experiences on digital properties. It uses machine learning algorithms to deliver dynamic content, personalised promotions and custom browsing experiences to visitors based on their behaviour and demographic data. 

    Adobe Target also provides A/B testing and multivariate testing (MVT) capabilities to help marketers test and refine their digital experiences.

    Key features:

    • Visual experience builder for A/B tests setup and replication 
    • Full factorial multivariate tests and multi-armed bandit testing
    • Omnichannel personalisation across web properties 
    • Multiple audience segmentation and targeting options 
    • Personalised content, media and product recommendations 
    • Advanced customer intelligence (in conjunction with other Adobe products)

    Pros

    • Convenient A/B test design tool 
    • Accurate MVT and MAB results
    • Powerful segmentation capabilities 
    • Access to extra behavioural analytics 
    • One-click personalisation activation 
    • Supports rules-based, location-based and contextual personalisation
    • Robust omnichannel analytics in conjunction with other Adobe products 

    Cons 

    • Requires an Adobe Marketing Cloud subscription 
    • No free trial or freemium tier 
    • More complex product setup and configuration 
    • Steep learning curve for new users 

    Price: On-demand.

    Adobe Target is sold as part of Adobe Marketing Cloud. Licence costs vary, based on selected subscriptions and the number of users, but are typically above $10K.

    Google Optimize vs Adobe Target: The Verdict

    Google Optimize comes with a free tier, unlike Adobe Target. It provides you with a basic builder for A/B and MVT tests, but none of the personalisation tools Adobe has. Because of their ease of use and low price, Google Optimize and similar alternatives are better suited for small to medium-sized businesses doing baseline CRO for funnel optimisation.

    Adobe Target pulls you into the vast Adobe marketing ecosystem, offering comprehensive customer behaviour analytics, machine-learning-driven website optimisation, dynamic content recommendations, product personalisation and extensive reporting. The app is better suited for larger enterprises with a significant investment in digital marketing.

    Matomo A/B Testing

    Matomo A/B testing page

    Matomo A/B Testing is a CRO tool integrated into Matomo. All Matomo Cloud users get instant access to it, while On-Premise (free) Matomo users can purchase A/B testing as a plugin.

    With Matomo A/B Testing, you can create multiple variations of a web or mobile page and test them with different segments of your audience. Matomo also doesn’t have any strict experiment caps, unlike Google Optimize.

    You can split-test multiple creative variants for on-site assets such as buttons, slogans, titles, call-to-actions, image positions and more. You can even benchmark the performance of two (or more!) completely different homepage designs, for instance.
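
    If you are curious what setting up such a variant test looks like in practice, here is a minimal, illustrative sketch in the style of the Matomo A/B Testing JavaScript API. The experiment and variation names are invented for this example, and option keys can differ between plugin versions, so treat it as a sketch rather than copy-paste code.

    ```ts
    // Illustrative sketch only: a client-side experiment in the style of the
    // Matomo A/B Testing JS API. Experiment and variation names are invented,
    // and option keys may differ between plugin versions; check the plugin docs.
    declare const _paq: unknown[]; // Matomo tracker queue provided by matomo.js

    _paq.push(['AbTesting::create', {
      name: 'HomepageHeadline',    // hypothetical experiment name
      percentage: 100,             // share of traffic entering the test
      variations: [
        {
          name: 'original',
          activate: () => { /* control: leave the page untouched */ },
        },
        {
          name: 'shortHeadline',
          activate: () => {
            const h1 = document.querySelector('h1');
            if (h1) { h1.textContent = 'Try it free today'; }
          },
        },
      ],
    }]);
    ```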

    With us, you can compliantly and ethically collect historical user data about any visitor who has entered any of the active tests, and monitor their entire customer journey. You can also leverage Matomo A/B Testing data as part of multi-touch attribution modelling to determine which channels bring the best leads and which assets drive them towards conversion.

     

    Since Matomo A/B Testing is part of our analytics platform, it works well with other features such as goal tracking, heatmaps, user session recordings and more. 

    Key features

    • Run experiments for web, mobile, email and digital campaigns 
    • Convenient A/B test design interface 
    • One-click experiment scheduling 
    • Integration with historic visitor profiles
    • Near real-time conversion tracking 
    • Apply segmentation to Matomo reports 
    • Easy creative variation sharing via a URL 

    Pros

    • High data accuracy with no reporting gaps 
    • Monitor the evolution of your success metrics for each variation
    • Embed experiments across multiple digital channels 
    • Set a custom confidence threshold for winning variations 
    • No compromises on user privacy 
    • Free 21-day trial available (for Matomo Cloud) and free 30-day plugin trial (for Matomo On-Premise)

    Cons

    • No on-site personalisation tools available 
    • Configuration requires some coding experience 

    Price: Matomo A/B Testing is included in the monthly Cloud plan (starting at €19 per month). On-Premise users can buy this functionality as a plugin (starting at €199/year).

    Google Optimize vs Matomo A/B Testing: The Verdict

    Matomo offers the same types of A/B testing features as Google Optimize (and some extras !), but without any usage caps. Unlike Matomo, Google Optimize doesn’t support A/B tests for mobile apps. You can access some content testing features for Android Apps via Firebase, but this requires another subscription. 

    Matomo lets you run A/B experiments across web and mobile properties, plus desktop apps, email campaigns and digital ads. Also, Matomo has higher conversion data accuracy, thanks to our privacy-focused method for collecting website analytics.

    When using Matomo in most EU markets, you’re legally exempt from showing a cookie consent banner, meaning you can collect richer insights for each experiment and make data-driven decisions. Nearly 40% of global consumers reject cookie consent banners, so with most other tools you won’t be getting the full picture of your traffic.

    Optimizely 

    Optimizely homepage

    Optimizely is a conversion optimisation platform that offers several competitive products, each under a separate subscription. These include a flexible content management system (CMS), a content marketing platform, a web A/B testing app, a mobile feature testing product and two eCommerce-specific website management products.

    The Web Experimentation app allows you to optimise every customer touchpoint by scheduling unlimited split or multivariate tests and tracking conversions across all your projects from the same app. Apart from websites, this subscription also supports experiments for single-page applications. But if you want more advanced mobile app testing features, you’ll have to purchase another product: Feature Experimentation.

    Key features:

    • Intuitive experiment design tool 
    • Cross-browser testing and experiment preview 
    • Multi-page funnel tests design 
    • Behavioural and geo-targeting 
    • Exit/bounce rate tracking
    • Custom audience builder for experiments
    • Comprehensive reporting 

    Pros

    • Unlimited number of concurrent experiments 
    • Upload your audience data for test optimisation 
    • Dynamic content personalisation available on a higher tier 
    • Pre-made integrations with popular heatmap and analytics tools 
    • Supports segmentation by device, campaign type, traffic sources or referrer 

    Cons

    • You need a separate subscription for mobile CRO 
    • Free trial not available, pricing on-demand 
    • Multiple licences and subscriptions may be required 
    • Doesn’t support A/B tests for emails 

    Price: Available on-demand.

    The Web Experimentation tool has three subscription tiers (Grow, Accelerate and Scale), with different features included.

    Google Optimize vs Optimizely: The Verdict

    Optimizely is a strong contender for Google Optimize alternative as it offers more advanced audience targeting and segmentation options. You can target users by IP address, cookies, traffic sources, device type, browser, language, location or a custom utm_campaign parameter.

    Similar to Matomo A/B testing, Optimizely doesn’t limit the number of projects or concurrent experiments you can do. But you have to immediately sign an annual contract (no monthly plans are available). Pricing also varies based on the number of processed impressions (more experiments = a higher annual bill). An annual licence can cost $63,700 for 10 million impressions on average, according to an independent estimate. 

    Visual Website Optimizer (VWO) 

    VWO is another popular experimentation platform, supporting web, mobile and server-side A/B testing and personalisation campaigns.

    Similar to others, VWO offers a drag-and-drop visual editor for creating campaign variants. You don’t need design or coding knowledge to create tests. Once you’re all set, the app will benchmark your experiment performance against expected conversion rates, report on differences in conversion rate and point towards the best-performing creative. 

    Similar to Optimizely, VWO also offers web/mobile app optimisation as a separate subscription. Apart from testing visual page elements, you can also run in-app experiments throughout the product stack to locate new revenue opportunities. For example, you can test in-app subscription flows, search algorithms or navigation flows to improve product UX. 

    Key features:

    • Multivariate and multi-arm bandit tests 
    • Multi-step (funnel) split tests 
    • Collaborative experiment tracking dashboard 
    • Target users by different attributes (URL, device, geo-data) 
    • Personal library of creative elements 
    • Funnel analytics, session records, and heatmaps available 

    Pros

    • Free starter plan is available (similar to Google Optimize)
    • Simple tracking code installation and easy code editor
    • Offers online reporting dashboards and report downloads 
    • Slice-and-dice reports by different audience dimensions
    • No impact on website/app loading speed and performance 

    Cons

    • Multivariate testing is only available on a higher-tier plan 
    • Annual contract required, despite monthly billing 
    • Mobile app A/B split tests require another licence 
    • Requires ongoing user training 

    Price: Free limited plan available.

    Then from $356/month, billed annually. 

    Google Optimize vs VWO: The Verdict

    The free plan on VWO is very similar to Google Optimize. You get access to A/B testing and split URL testing features for websites only. The visual editing tool is relatively simple — and you can use URL or device targeting. 

    Free VWO reports, however, lack the advertised depth in terms of behavioural or funnel-based reporting. In-depth insights are available only to premium users. Extra advertised features like heatmaps, form analytics and session recordings require yet another subscription. With Matomo Cloud, you get all three of these together with A/B testing. 

    ConvertFlow 

    ConvertFlow Homepage

    ConvertFlow markets itself as a funnel optimisation app for eCommerce and SaaS companies. It meshes lead generation tools with some CRO workflows. 

    With ConvertFlow, you can effortlessly design opt-in forms, pop-ups, quizzes and even entire landing pages using pre-made web elements and a visual builder. Afterwards, you can put all of these assets to a “field test” via the ConvertFlow CRO platform. Select among pre-made templates or create custom variants for split or multivariate testing. You can customise tests based on URLs, cookie data and user geolocation among other factors. 

    Similar to Adobe Target, ConvertFlow also allows you to run tests targeted at specific customer segments in your CRM. The app has native integrations with HubSpot and Salesforce, so this feature is easy to enable. ConvertFlow also offers advanced targeting and segmentation options, based on user on-site behaviour, demographics data or known interests.

    Key features:

    • Create and test landing pages, quizzes, surveys, pop-ups and other lead-gen assets
    • All-in-one funnel builder for creating demand-generation campaigns 
    • Campaign personalisation, based on on-site activity 
    • Re-usable dynamic visitor segments for targeting 
    • Multi-step funnel design and customisation 
    • Embedded forms for split testing CTAs on existing pages 

    Pros

    • Allows controlling the traffic split for each variant to get objective results 
    • Pre-made integration with Google Analytics and Google Tag Manager 
    • Conversion and funnel reports, available for each variant 
    • Access to a library with 300+ conversion campaign templates
    • Apply progressive visitor profiling to dynamically adjust user experiences 

    Cons

    • Each plan covers only 10K views; each extra 10K views costs another $20/month
    • Only one website allowed per account (except for Teams plan) 
    • Doesn’t support experiments in mobile apps
    • Not all CRO features are available on the Pro plan.

    Price: Access to CRO features costs from $300/month on the Pro plan. Subscription costs also increase based on the total number of monthly views.

    Google Optimize vs ConvertFlow: The Verdict

    ConvertFlow is equally convenient to use in conjunction with Google Analytics as Google Optimize is. But the similarities end there, since ConvertFlow combines funnel design features with CRO tools.

    With ConvertFlow, you can run more advanced experiments and apply more targeting criteria than with Google Optimize. You can observe user behaviour and conversion rates across multi-step CTA forms and page funnels, plus benefit from first-touch attribution reporting without switching apps. 

    Though ConvertFlow has a free plan, it doesn’t include access to CRO features, meaning it’s not a free alternative to Google Optimize.

    Comparison of the Top 5 Google Optimize Alternatives

    | Feature | Google Optimize | Adobe Target | Matomo A/B Testing | Optimizely | VWO | ConvertFlow |
    |---|---|---|---|---|---|---|
    | Supported channels | Web | Web, mobile, social media, email | Web, mobile, email, digital campaigns | Websites & mobile apps | Websites, web and mobile apps | Websites and mobile apps |
    | A/B testing | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
    | Easy GA integration | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ |
    | Integrations with other web analytics apps | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ |
    | Audience segmentation | Basic | Advanced | Advanced | Advanced | Advanced | Advanced |
    | Geo-targeting | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ |
    | Behavioural targeting | Basic | Advanced | Advanced | Advanced | Advanced | Advanced |
    | Heatmaps | ✗ | ✗ | ✓ (no extra cost with Matomo Cloud) | ~ (via integrations) | ~ (requires another subscription) | ✗ |
    | Session recordings | ✗ | ✗ | ✓ (no extra cost with Matomo Cloud) | ✗ | ~ (requires another subscription) | ✗ |
    | Multivariate testing (MVT) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
    | Dynamic personalisation | ✗ | ✓ | ✗ | ✓ | ~ (only on higher account tiers) | ~ (only on the highest account tiers) |
    | Product recommendations | ✗ | ✓ | ✗ | ~ (requires another subscription) | ~ (requires another subscription) | ✓ |
    | Support | Self-help desk on the free tier | Email, live chat, phone | Email, self-help guides and user forum | Knowledge base, online tickets, user community | Self-help guides, email, phone | Knowledge base, email and live chat |
    | Price | Freemium | On-demand | From €19/month (Cloud); from €199/year as an On-Premise plugin | On-demand | Freemium; from $365/month | From $300/month |

    Conclusion 

    Google Optimize has served marketers well for over five years. But as the company has decided to move on, so should you.

    Other A/B testing tools like Matomo, Optimizely or VWO offer better funnel analytics and split testing capabilities without any usage caps. Also, tools like Adobe Target, Optimizely and VWO offer advanced content personalisation based on aggregate analytics. However, they also come with much higher subscription costs.

    Matomo is a robust, compliant and cost-effective alternative to Google Optimize. Our tool allows you to schedule campaigns across all digital mediums (and even desktop apps!) without a (...)

  • How Funnel for Piwik Analytics enriches your Piwik experience giving you ultimate insights and debugging capabilities

    13 January 2017, by InnoCraft — Community

    No matter what type of website or app you have, whether you are trying to get your users to sign up for something or sell products, there is a certain number of steps your visitors have to go through. At every step you lose visitors, and therefore potential revenue and conversions. It is therefore critical to know where your visitors actually follow those steps on your website or app, where you lose them and where they may get confused. By defining a funnel, you can improve your conversion rates, sales and revenue, as you can determine exactly where you lose your visitors on the way to converting a goal or a sale.

    A Funnel defines a series of steps that you expect your visitors to take on their way to converting a goal. Funnels, a premium feature for Piwik developed by InnoCraft, lets you create funnels to get the data you need to improve your websites and mobile apps. Learn more about Funnel.

    In this blog post we will cover the reports the Funnel plugin provides. The next blog post shows you how to configure and validate your funnel in Piwik.

    Integration in Goal reports

    At Piwik and InnoCraft, we usually start by looking into our goal reports. Funnels integrates directly into each goal reporting page, giving you a quick overview of how your funnel is doing. This saves us a lot of time, as we don’t have to look into each funnel page separately, and it only takes maybe an additional second to keep an eye on our funnels. By clicking on the headline or the “View funnel report” link, you can go directly to the funnel report to get a more detailed view if you notice any spike in the evolution of the conversions or conversion rate.

    Getting an overall Funnel overview

    Next we usually go to the “Funnel Overview” page, which shows a list of all activated funnels and their performance over time. You will find the look familiar, as it is similar to the “Goals Overview” page. If we find something unusual there, for example any spikes, we usually click directly on the headline of the funnel to go to the detailed funnel report. You can also choose a funnel from the left reporting menu or search for a funnel by entering the shortcut “f”.

    Viewing a funnel report

    A funnel reporting page looks very similar to a Goal reporting page. It starts with an evolution graph and sparklines showing you the performance of your funnel over time.

    In the evolution graph you can select the metrics you want to plot. We usually keep an eye on the funnel conversion rate and the number of “Funnel entries” or “Funnel conversions”. The conversion rate alone does not show you how your funnel is performing. Imagine the rate is stable at around 20% and you might think everything is alright, but if the number of visitors that take part in your funnel goes down, you might have a problem, as the number of funnel conversions actually decreases even though the rate is the same (for example, 1,000 entries at a 20% rate means 200 conversions, while 500 entries at the same rate means only 100). So we recommend not looking at the conversion rate alone. The report will remember the metrics you want to plot each time you open it, so you don’t have to re-select them over and over again.

    The funnel overview

    In the funnel overview we give you more details about the funnel and goal-related conversion metrics, so you don’t have to switch between the goal and funnel reports and can compare them easily.

    When you analyze a funnel report, you might not always remember how the funnel is configured. Even though you specify names for each step you sometimes need to know on which pages a certain step will be activated. By clicking on the funnel summary link you can quickly look into the funnel configuration and also see all important metrics at a glance in a simple table without having to scroll.

    You might also notice the Visitor Log link which will show you all actions for all visitors that have entered this funnel. This lets you really understand how your visitors navigate through your website and how they proceeded, exited or converted your funnel on a visitor level.

    The Funnel visualization

    Below the funnel overview you can visually see where your visitors entered, proceeded, converted and exited your funnel. We kept the UI clean so you can focus on the important things.

    Most tools only give you the pages where visitors entered your funnel, but we go further and also show you the list of external referrers through which visitors entered your funnel directly (marketing campaigns, search engines or other websites). Also, we do not show only the top 5 pages, but up to 100 pages and 50 referrers (more can be configured if needed). When you hover over a row, you will see not only the number of hits but also the percentage each row has contributed to the entries. Here you want to understand how your visitors enter your funnel and, based on the data, perhaps invest in successful referrers, campaigns and pages. If the pages or referrers you expect to see there don’t show up, your users might not understand the path you had in mind for them.

    Next you may notice how many visits have gone through each step, in this case 3487 visits. The green and red bar lets you quickly identify how many of your visitors have proceeded to the next step (green) compared to how many have exited the funnel at this step (red). Ideally, most of the bar is green and not red, indicating that more visitors proceed to the next step than exit.

    Now the next feature is really valuable. When you hover over the step title or the number of visits, you will notice that two icons appear:

    Those two little icons are really powerful and give you even more insights so you can really dig into all the data. The left icon shows you the visitor log, listing all actions of each visitor that participated in this particular funnel step. This means that for each step you get to see all the details and actions of each visitor, which lets you really debug and understand problems in your funnel.

    At InnoCraft, we understand that plain numbers are often not so valuable. Only when you look at the evolution over time and put the numbers in relation to something else can you really understand how your website is doing. The icon on the right lets you do exactly this: it lets you view the row evolution for each funnel step. We are sure you will enjoy this feature. It lets you explore how each funnel step is doing over time, for example the number of entries for a step, or how many visitors proceeded to the next step from there. Here you ideally want to see the “Proceeded Rate” increase over time, meaning more and more visitors actually proceed to the next step instead of exiting.

    We are sure you will really love those features that give you just those extra insights that other tools don’t give you.

    On the right you can find out where your visitors went to, if they did not proceed any further in the funnel. This lets you better understand why they left the funnel and did not proceed any further.

    At the end of the funnel report you will again find the number of conversions and the conversion rate. Here we recommend looking into the visitor log when you hover over the name of the last step, as you can analyze in detail how each visitor converted this funnel.

    Applying segments

    Funnels lets you apply any Piwik segment to the funnel report, allowing you to slice and dice your visitors and multiplying the value you get out of Funnels. For example, you may want to apply a segment and analyze the funnel for visitors that have visited your website or mobile app for the first time vs. recurring visitors. Sometimes it may be interesting to see how visitors from different countries go through your funnel; the possibilities are endless. We really recommend taking advantage of segments to understand your different target groups even better.

    The plugin also adds some new segments to your Piwik letting you segment any Piwik report by visitors that have participated in a funnel or participated in a particular funnel step. For example you could go to the “Visitors => Locations” report and apply a segment for your funnel to see which countries have participated or converted most in your funnel.

    Widgets, Scheduled Reports, and more.

    This is not where the fun ends. Funnels defines new widgets that you can add to your dashboard or export into a third-party website. You can set up scheduled reports to receive the funnel report automatically via email or SMS, or download the report to share it with your colleagues. It also works very well with Custom Alerts, and you can view the funnel report in the Piwik Mobile app. You can manage funnels via the HTTP API and also fetch all funnel reports via the HTTP Reporting API. The plugin is so nicely integrated into Piwik that we will need some more blog posts to show you all the ways Funnels advances your Piwik experience and how it lets you dig into all the data so you can increase your conversions and sales based on it.
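
    As an illustration of the HTTP Reporting API route, here is a hedged sketch of what fetching a funnel report could look like. The general URL shape (module=API, method, idSite, period, date, format, token_auth) is the standard Piwik Reporting API pattern; the specific method and parameter names used for the Funnels plugin below are assumptions, so check the plugin’s API reference for the real ones.

    ```ts
    // Hedged sketch of pulling a funnel report through Piwik's HTTP Reporting API.
    // The URL pattern (module=API, method, idSite, period, date, format, token_auth)
    // is the standard Reporting API shape; the method and idFunnel parameter names
    // below are assumptions made for illustration.
    async function fetchFunnelReport(baseUrl: string, idSite: number,
                                     idFunnel: number, tokenAuth: string) {
      const params = new URLSearchParams({
        module: 'API',
        method: 'Funnels.getFunnelFlow', // assumed method name; see the plugin's API reference
        idSite: String(idSite),
        idFunnel: String(idFunnel),      // assumed parameter name
        period: 'day',
        date: 'yesterday',
        format: 'JSON',
        token_auth: tokenAuth,
      });
      const response = await fetch(`${baseUrl}/index.php?${params.toString()}`);
      return response.json();            // parsed funnel report
    }
    ```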

    How to get Funnels and related features

    You can get Funnels on the Piwik Marketplace. If you want to learn more about Funnels you might be also interested in the Funnel User Guide and the Funnel FAQ.

    Similar to Funnels, we also offer Users Flow, which lets you visualize the flow of your users and visitors across several interactions.

  • Developing A Shader-Based Video Codec

    22 June 2013, by Multimedia Mike — Outlandish Brainstorms

    Early last month, this thing called ORBX.js was in the news. It ostensibly has something to do with streaming video and codec technology, which naturally catches my interest. The hype was kicked off by Mozilla honcho Brendan Eich when he posted an article asserting that HD video decoding could be entirely performed in JavaScript. We’ve seen this kind of thing before using Broadway, an H.264 decoder implemented entirely in JS. But that exposes some very obvious limitations (notably CPU usage).

    But this new video codec promises 1080p HD playback directly in JavaScript, which is a lofty claim. How could it possibly do this? I got the impression that performance was achieved using WebGL, an extension which allows JavaScript access to accelerated 3D graphics hardware. Browsing through the conversations surrounding the ORBX.js announcement, I found this confirmation from Eich himself:

    You’re right that WebGL does heavy lifting.

    As of this writing, ORBX.js remains some kind of private tech demo. If there were a public demo available, it would necessarily be easy to reverse engineer the downloadable JavaScript decoder.

    But the announcement was enough to make me wonder how it could be possible to create a video codec which effectively leverages 3D hardware.

    Prior Art
    In theorizing about this, it continually occurs to me that I can’t possibly be the first person to attempt to do this (or the ORBX.js people, for that matter). In googling on the matter, I found various forums and Q&A posts where people asked if it were possible to, e.g., accelerate JPEG decoding and presentation using 3D hardware, with no answers. I also found a blog post which describes a plan to use 3D hardware to accelerate VP8 video decoding. It was a project done under the banner of Google’s Summer of Code in 2011, though I’m not sure which open source group mentored the effort. The project did not end up producing the shader-based VP8 codec originally chartered but mentions that “The ‘client side’ of the VP8 VDPAU implementation is working and is currently being reviewed by the libvdpau maintainers.” I’m not sure what that means. Perhaps it includes modifications to the public API that supports VP8, but is waiting for the underlying hardware to actually implement VP8 decoding blocks in hardware.

    What’s So Hard About This ?
    Video decoding is a computationally intensive task. GPUs are known to be really awesome at chewing through computationally intensive tasks. So why aren’t GPUs a natural fit for decoding video codecs?

    Generally, it boils down to parallelism, or lack of opportunities thereof. GPUs are really good at doing the exact same operations over lots of data at once. The problem is that decoding compressed video usually requires multiple phases that cannot be parallelized, and the individual phases often cannot be parallelized. In strictly mathematical terms, a compressed data stream will need to be decoded by applying a function f(x) over each data element, x_0 … x_n. However, the function relies on having applied the function to the previous data element, i.e.:

    f(x_n) = f(f(x_{n-1}))
    

    What happens when you try to parallelize such an algorithm? Temporal rifts in the space/time continuum, if you’re in a Star Trek episode. If you’re in the real world, you’ll get incorrect, unusable data as the parallel computation is seeded with a bunch of invalid data at multiple points (which is illustrated in some of the pictures in the aforementioned blog post about accelerated VP8).

    Example: JPEG
    Let’s take a very general look at the various stages involved in decoding the ubiquitous JPEG format:


    High level JPEG decoding flow

    What are the opportunities to parallelize these various phases?

    • Huffman decoding (run-length decoding and zig-zag reordering are assumed to be rolled into this phase): there are not many opportunities for parallelizing the various Huffman formats out there, including this one. Decoding most Huffman streams is necessarily a sequential operation. I once hypothesized that it would be possible to engineer a codec to achieve some parallelism during the entropy decoding phase, and later found that On2’s VP8 codec employs such a scheme. However, such a scheme is unlikely to break the work down to the fine granularity that WebGL would require.
    • Reverse DC prediction: JPEG (and many other codecs) doesn’t store full DC coefficients. It stores differences between successive DC coefficients. Reversing this process can’t be parallelized; see the discussion in the previous section and the sketch after this list.
    • Dequantize coefficients: this could be heavily parallelized. It should be noted that software decoders often don’t dequantize all coefficients; many coefficients are 0 and it’s a waste of a multiplication operation to dequantize them. Thus, this phase is sometimes rolled into the Huffman decoding phase.
    • Invert discrete cosine transform: this seems like it could be highly parallelizable. I will be exploring this further in this post.
    • Convert YUV -> RGB for final display: this is a well-established use case for 3D acceleration.
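
    To make the contrast between these phases concrete, here is a toy sketch (plain TypeScript rather than shader code, and not from any actual decoder) of the two cases: reversing DC prediction is a running sum whose every step depends on the previous one, while dequantization touches each coefficient independently.

    ```ts
    // Toy illustration (not from any real decoder) of the contrast described above.

    // Reversing DC prediction is a running sum: each reconstructed DC value depends
    // on the previous one, so the loop cannot be split across parallel invocations.
    function reverseDcPrediction(dcDiffs: Int32Array): Int32Array {
      const dc = new Int32Array(dcDiffs.length);
      let prev = 0;
      for (let i = 0; i < dcDiffs.length; i++) {
        prev += dcDiffs[i]; // depends on the previous result
        dc[i] = prev;
      }
      return dc;
    }

    // Dequantization touches each coefficient independently, so every element could
    // in principle be handled by a separate parallel invocation (e.g. a shader).
    function dequantize(coeffs: Int32Array, quantTable: Int32Array): Int32Array {
      const out = new Int32Array(coeffs.length);
      for (let i = 0; i < coeffs.length; i++) {
        out[i] = coeffs[i] * quantTable[i % 64]; // assumes a 64-entry table per block
      }
      return out;
    }
    ```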

    Crash Course in 3D Shaders and Humility
    So I wanted to see if I could accelerate some parts of JPEG decoding using something called shaders. I made an effort to understand 3D programming and its associated math throughout the 1990s but 3D technology left me behind a very long time ago while I got mixed up in this multimedia stuff. So I plowed through a few books concerning WebGL (thanks to my new Safari Books Online subscription). After I learned enough about WebGL/JS to be dangerous and just enough about shader programming to be absolutely lethal, I set out to try my hand at optimizing IDCT using shaders.

    Here’s my extremely high level (and probably hopelessly naive) view of the modern GPU shader programming model:


    Basic WebGL rendering pipeline

    The WebGL program written in JavaScript drives the show. It sends a set of vertices into the WebGL system and each vertex is processed through a vertex shader. Then, each pixel that falls within a set of vertices is sent through a fragment shader to compute the final pixel attributes (R, G, B, and alpha value). Another consideration is textures: data that the program uploads to GPU memory and that can be accessed programmatically by the shaders.
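
    For readers who have never seen this pipeline in code, here is a minimal, generic sketch of the two shader stages just described: a vertex shader that forwards a texture coordinate as a varying, and a fragment shader that samples a texture (the data uploaded from JavaScript) to produce the final RGBA value. This is boilerplate for illustration, not code from ORBX.js or from my experiments.

    ```ts
    // Generic illustration of the two programmable stages described above.
    // Vertex shader: runs once per vertex, forwards a texture coordinate.
    const vertexSrc = `
      attribute vec2 aPosition;
      attribute vec2 aTexCoord;
      varying vec2 vTexCoord;        // interpolated on its way to the fragment stage
      void main() {
        vTexCoord = aTexCoord;
        gl_Position = vec4(aPosition, 0.0, 1.0);
      }`;

    // Fragment shader: runs once per covered pixel, samples data uploaded as a texture.
    const fragmentSrc = `
      precision mediump float;
      uniform sampler2D uTexture;    // data uploaded from JavaScript
      varying vec2 vTexCoord;
      void main() {
        gl_FragColor = texture2D(uTexture, vTexCoord);
      }`;
    ```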

    These shaders (vertex and fragment) are key to the GPU’s programmability. How are they programmed? Using a special C-like shading language. Thought I: “C-like language? I know C! I should be able to master this in short order!” So I charged forward with my assumptions and proceeded to get smacked down repeatedly by the overall programming paradigm. I came to recognize this as a variation of the scientific method: develop a hypothesis (in my case, a mental model of how the system works); devise an experiment (a short program) to prove or disprove the model; realize something fundamental that I was overlooking; formulate a new hypothesis and repeat.

    First Approach : Vertex Workhorse
    My first pitch goes like this:

    • Upload DCT coefficients to GPU memory in the form of textures
    • Program a vertex mesh that encapsulates 16×16 macroblocks
    • Distribute the IDCT effort among multiple vertex shaders
    • Pass transformed Y, U, and V blocks to fragment shader which will convert the samples to RGB

    So the idea is that decoding of 16×16 macroblocks is parallelized. A macroblock embodies 6 blocks:


    JPEG macroblocks

    It would be nice to process one of these 6 blocks in each vertex. But that means drawing a square with 6 vertices. How do you do that? I eventually realized that drawing a square with 6 vertices is the recommended method for drawing a square on 3D hardware: using 2 triangles, each with 3 vertices (0, 1, 2; 3, 4, 5):


    2 triangles make a square

    A vertex shader knows which (x, y) coordinates it has been assigned, so it could figure out which sections of coefficients it needs to access within the textures. But how would a vertex shader know which of the 6 blocks it should process? Solution: misappropriate the vertex’s z coordinate. It’s not used for anything else in this case.

    So I set all of that up. Then I hit a new roadblock: how to get the reconstructed Y, U, and V samples transported to the fragment shader? I have found that communicating between shaders is quite difficult. Texture memory? WebGL doesn’t allow shaders to write back to texture memory; shaders can only read it. The standard way to communicate data from a vertex shader to a fragment shader is to declare variables as “varying”. Up until this point, I knew about varying variables but there was something I didn’t quite understand about them and it nagged at me: if 3 different executions of a vertex shader set 3 different values to a varying variable, what value is passed to the fragment shader?

    It turns out that the varying variable varies, which means that the GPU passes interpolated values to each fragment shader invocation. This completely destroys this idea.

    Second Idea : Vertex Workhorse, Take 2
    The revised pitch is to work around the interpolation issue by just having each vertex shader invocation perform all 6 block transforms. That seems like a lot of redundant work. However, I figured out that I can draw a square with only 4 vertices by arranging them in an ‘N’ pattern and asking WebGL to draw a TRIANGLE_STRIP instead of TRIANGLES, as shown in the sketch below. Now it’s only doing 4x the extra work, not 6x. GPUs are supposed to be great at this type of work, so it shouldn’t matter, right?
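
    Here is a small sketch of that 4-vertex arrangement, assuming a WebGL 1 context and an already-linked program whose position attribute location is passed in; the vertex order traces the ‘N’ pattern so TRIANGLE_STRIP covers the whole quad with two triangles.

    ```ts
    // Sketch: a full-clip-space quad from 4 vertices in an 'N' order, drawn as a
    // TRIANGLE_STRIP (triangles 0-1-2 and 1-2-3) instead of 6 vertices as TRIANGLES.
    function drawQuad(gl: WebGLRenderingContext, aPositionLocation: number): void {
      const verts = new Float32Array([
        -1, -1, // bottom-left
         1, -1, // bottom-right
        -1,  1, // top-left
         1,  1, // top-right
      ]);
      gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer()); // new buffer each call, for brevity
      gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);
      gl.enableVertexAttribArray(aPositionLocation);
      gl.vertexAttribPointer(aPositionLocation, 2, gl.FLOAT, false, 0, 0);
      gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
    }
    ```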

    I wired up an experiment and then ran into a new problem: while I was able to transform a block (or at least pretend to), and load up a varying array (that wouldn’t vary since all vertex shaders wrote the same values) to transmit to the fragment shader, the fragment shader can’t access specific values within the varying block. To clarify, a WebGL shader can use a constant value (or a value that can be evaluated as a constant at compile time) to index into arrays; a WebGL shader cannot compute an index into an array. Per my reading, this is a WebGL security consideration and the limitation may not be present in other OpenGL(-ES) implementations.
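
    A tiny illustration of the indexing restriction, as I understand it, in GLSL ES 1.0 terms (the array and variable names are made up):

    ```ts
    // GLSL ES 1.0 fragment shader, names invented for illustration: a constant
    // array index compiles, while an index computed at run time is the kind of
    // expression the WebGL shader validator rejects.
    const indexingExample = `
      precision mediump float;
      uniform float table[8];
      varying float vSelector;                // computed per fragment
      void main() {
        float ok = table[3];                  // constant index: allowed
        // float bad = table[int(vSelector)]; // dynamic index: rejected
        gl_FragColor = vec4(ok, 0.0, 0.0, 1.0);
      }`;
    ```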

    Not Giving Up Yet : Choking The Fragment Shader
    You might want to be sitting down for this pitch:

    • Vertex shader only interpolates texture coordinates to transmit to fragment shader
    • Fragment shader performs IDCT for a single Y sample, U sample, and V sample
    • Fragment shader converts YUV -> RGB

    Seems straightforward enough. However, that step concerning IDCT for Y, U, and V entails a gargantuan number of operations. When computing the IDCT for an entire block of samples, it’s possible to leverage a lot of redundancy in the math, which equates to far fewer overall operations. If you absolutely have to compute each sample individually, an 8×8 block requires 64 multiplication/accumulation (MAC) operations per sample. For 3 color planes, and including a few extra multiplications involved in the RGB conversion, that tallies up to about 200 MACs per pixel. Then there’s the fact that this approach means 4x redundant operations on the color planes.

    It’s crazy, but I just want to see if it can be done. My approach is to pre-compute a pile of IDCT constants in the JavaScript and transmit them to the fragment shader via uniform variables. For a first order optimization, the IDCT constants are formatted as 4-element vectors. This allows computing 16 dot products rather than 64 individual multiplication/addition operations. Ideally, GPU hardware executes the dot products faster (and there is also the possibility of lining these calculations up as matrices).
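
    To show what “64 MACs per sample, grouped into 16 dot products” means, here is a CPU-side TypeScript sketch of the same arithmetic using the standard JPEG 8×8 inverse DCT basis; it is a reference for the math only, not the shader code itself.

    ```ts
    // Reference sketch of the arithmetic (standard JPEG 8x8 inverse DCT), grouping
    // the 64 multiply/accumulates for one output sample into 16 vec4-style dot
    // products. This is CPU-side TypeScript for clarity, not the actual shader.
    function idctBasis(x: number, y: number): Float32Array {
      const k = new Float32Array(64);
      const c = (f: number) => (f === 0 ? Math.SQRT1_2 : 1);
      for (let v = 0; v < 8; v++) {
        for (let u = 0; u < 8; u++) {
          k[v * 8 + u] = 0.25 * c(u) * c(v) *
            Math.cos(((2 * x + 1) * u * Math.PI) / 16) *
            Math.cos(((2 * y + 1) * v * Math.PI) / 16);
        }
      }
      return k;
    }

    const dot4 = (a: Float32Array, b: Float32Array, o: number): number =>
      a[o] * b[o] + a[o + 1] * b[o + 1] + a[o + 2] * b[o + 2] + a[o + 3] * b[o + 3];

    // One reconstructed sample at (x, y): 16 dot products over 64 dequantized
    // coefficients stored row-major (index v * 8 + u), i.e. 64 MACs total.
    function idctSample(coeffs: Float32Array, x: number, y: number): number {
      const k = idctBasis(x, y); // a different 64-constant table per sample position
      let s = 0;
      for (let o = 0; o < 64; o += 4) { s += dot4(coeffs, k, o); }
      return s;
    }
    ```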

    I can report that I actually got a sample correctly transformed using this approach. Just one sample, though. Then I ran into some new problems:

    Problem #1: Computing sample #1 vs. sample #0 requires a different table of 64 IDCT constants. Okay, so create a long table of 64 * 64 IDCT constants. However, this suffers from the same problem as seen in the previous approach: I can’t dynamically compute the index into this array. What’s the alternative? Maintain 64 separate named arrays and implement 64 branches, when branching of any kind is ill-advised in shader programming to begin with? I started to go down this path until I ran into…

    Problem #2: Shaders can only be so large. 64 * 64 floats (4 bytes each) requires 16 kbytes of data and this well exceeds the amount of shader storage that I can assume is allowed. That brings this path of exploration to a screeching halt.

    Further Brainstorming
    I suppose I could forgo pre-computing the constants and directly compute the IDCT for each sample which would entail lots more multiplications as well as 128 cosine calculations per sample (384 considering all 3 color planes). I’m a little stuck with the transform idea right now. Maybe there are some other transforms I could try.

    Another idea would be vector quantization. What little ORBX.js literature is available indicates that there is a method to allow real-time streaming but that it requires GPU assistance to yield enough horsepower to make it feasible. When I think of such severe asymmetry between compression and decompression, my mind drifts towards VQ algorithms. As I come to understand the benefits and limitations of GPU acceleration, I think I can envision a way that something similar to SVQ1, with its copious, hierarchical vector tables stored as textures, could be implemented using shaders.

    So far, this all pertains to intra-coded video frames. What about opportunities for inter-coded frames? The only approach that I can envision here is to use WebGL’s readPixels() function to fetch the rasterized frame out of the GPU, and then upload it again as a new texture which a new frame processing pipeline could reference. Whether this idea is plausible would require some profiling.
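
    A sketch of what that round trip might look like with the WebGL 1 API (the function and parameter names here are generic placeholders for the surrounding pipeline):

    ```ts
    // Sketch of the read-back idea: pull the rasterized frame out of the GPU, then
    // re-upload it as a texture that the next frame's pipeline can sample as a
    // reference. Whether this round trip is fast enough is the part needing profiling.
    function recycleFrame(gl: WebGLRenderingContext, width: number, height: number,
                          refTexture: WebGLTexture): void {
      const pixels = new Uint8Array(width * height * 4);
      gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels); // GPU -> CPU
      gl.bindTexture(gl.TEXTURE_2D, refTexture);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                    gl.RGBA, gl.UNSIGNED_BYTE, pixels);                      // CPU -> GPU
    }
    ```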

    Using interframes in such a manner seems to imply that the entire codec would need to operate in RGB space and not YUV.

    Conclusions
    The people behind ORBX.js have apparently figured out a way to create a shader-based video codec. I have yet to even begin to reason out a plausible approach. However, I’m glad I did this exercise since I have finally broken through my ignorance regarding modern GPU shader programming. It’s nice to have a topic like multimedia that allows me a jumping-off point to explore other areas.