Advanced search

Media (91)

Other articles (51)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • MediaSPIP Init and Diogène: MediaSPIP publication types

    11 November 2010

    When a MediaSPIP site is installed, the MediaSPIP Init plugin performs a number of operations, the main one being to create four main sections in the site and five form templates for Diogène.
    These four main sections (also called sectors) are: Medias; Sites; Editos; Actualités.
    For each of these sections, a form template of the same name is created. For the “Medias” section, a second “category” template is created, making it possible to add (...)

  • Changing your graphic theme

    22 February 2011

    The graphic theme does not change the actual layout of the elements on the page; it only modifies their appearance.
    The placement can indeed be changed, but this change is purely visual and does not affect the semantic representation of the page.
    Changing the graphic theme in use
    To change the graphic theme in use, the zen-garden plugin must be enabled on the site.
    You then simply go to the configuration area of the (...)

On other sites (5092)

  • Open Media Developers Track at OVC 2011

    11 October 2011, by silvia

    The Open Video Conference that took place on 10-12 September was so overwhelming, I’ve still not been able to catch my breath! It was a dense three days for me, even though I only focused on the technology sessions of the conference and utterly missed out on all the policy and content discussions.

    Roughly 60 people participated in the Open Media Software (OMS) developers track. This was an amazing group of people capable and willing to shape the future of video technology on the Web:

    • HTML5 video developers from Apple, Google, Opera, and Mozilla (though we missed the NZ folks),
    • codec developers from WebM, Xiph, and MPEG,
    • Web video developers from YouTube, JWPlayer, Kaltura, VideoJS, PopcornJS, etc.,
    • content publishers from Wikipedia, Internet Archive, YouTube, Netflix, etc.,
    • open source tool developers from FFmpeg, gstreamer, flumotion, VideoLAN, PiTiVi, etc.,
    • and many more.

    To provide a summary of all the discussions would be impossible, so I just want to share the key take-aways that I had from the main sessions.

    WebRTC: Realtime Communications and HTML5

    Tim Terriberry (Mozilla), Serge Lachapelle (Google) and Ethan Hugg (CISCO) moderated this session together (slides). There are activities both at the W3C and at the IETF – the ones at the IETF are supposed to focus on protocols, while the W3C ones focus on HTML5 extensions.

    The current proposal of a PeerConnection API has been implemented in WebKit/Chrome as open source. It is expected that Firefox will have an add-on by Q1 next year. It enables video conferencing, including media capture, media encoding, signal processing (echo cancellation, etc.), secure transmission, and data stream exchange.
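
    The proposal has since evolved into the standardised RTCPeerConnection API. As a rough sketch of what such an API enables, here is a loopback connection using today’s standardised names rather than the 2011 proposal: both peers live in the same page, so the signalling channel shrinks to direct function calls.

    // Minimal loopback sketch with the modern, standardised RTCPeerConnection
    // API (the 2011 PeerConnection proposal differed); illustrative only.
    async function loopback(localVideo: HTMLVideoElement, remoteVideo: HTMLVideoElement) {
      // Capture camera and microphone.
      const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
      localVideo.srcObject = stream;

      const pcA = new RTCPeerConnection();
      const pcB = new RTCPeerConnection();

      // In a real application, candidates and the offer/answer travel over a
      // signalling channel; here both peers are in the same page.
      pcA.onicecandidate = (e) => { if (e.candidate) pcB.addIceCandidate(e.candidate); };
      pcB.onicecandidate = (e) => { if (e.candidate) pcA.addIceCandidate(e.candidate); };
      pcB.ontrack = (e) => { remoteVideo.srcObject = e.streams[0]; };

      stream.getTracks().forEach((track) => pcA.addTrack(track, stream));

      const offer = await pcA.createOffer();
      await pcA.setLocalDescription(offer);
      await pcB.setRemoteDescription(offer);
      const answer = await pcB.createAnswer();
      await pcB.setLocalDescription(answer);
      await pcA.setRemoteDescription(answer);
    }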

    Current discussions are around the signalling protocol and whether SIP needs to be required by the standard. Further, the codec question is under discussion, with the question of whether to mandate VP8 and Opus, since transcoding gateways are not desirable. Another question is how to measure the quality of the connection and how to report errors so as to allow adaptation.

    What always amazes me around RTC is the sheer number of specialised protocols that seem to be required to implement this. WebRTC does not disappoint: in fact, the question was asked whether there could be a lighter alternative than to re-use dozens of years of protocol development – is it over-engineered? Can desktop players connect to a WebRTC session?

    We are already in a second or third revision of this part of the HTML5 specification and yet it seems the requirements are still being collected. I’m quietly confident that everything is being done to make the lives of Web developers easier, but it sure looks like a huge task.

    The Missing Link: Flash to HTML5

    Zohar Babin (Kaltura) and I moderated this session, and I must admit that it was the biggest eye-opener for me amongst all the sessions. There was a large number of Flash developers present in the room, and that was great, because sometimes we just don’t listen enough to lessons learnt in the past.

    This session gave me one of those aha-moments: in the form of the Flash appendBytes() API function.

    The appendBytes() function allows a Flash developer to take a byteArray out of a connected video resource and do something with it – such as feed it to a video for display. When I heard that Web developers want that functionality for JavaScript and the video element, too, I instinctively rejected the idea, wondering why on earth a Web developer would want to touch encoded video bytes – why not leave that to the browser?

    But as it turns out, this is actually a really powerful enabler of functionality. For example, you can use it to:

    • display mid-roll video ads as part of the same video element,
    • sequence playlists of videos into the same video element,
    • implement DVR functionality (high-speed seeking),
    • do mash-ups,
    • do video editing,
    • implement adaptive streaming.

    This totally blew my mind and I am now completely supportive of having such a function in HTML5. Together with media fragment URIs you could even leave all the header download management for resources to the Web browser and just request time ranges from a video through an appendBytes() function. This would be easier on the Web developer than having to deal with byte ranges and making sure that appropriate decoding pipelines are set up.
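
    This capability later landed in browsers as Media Source Extensions. As an illustrative sketch, using today’s standardised MediaSource API rather than Flash’s appendBytes() (the URLs and the codec string are made-up assumptions), here is how several clips, e.g. content plus a mid-roll ad, could be spliced into a single video element:

    // Sketch: splice several WebM clips into one <video> via Media Source
    // Extensions. URLs and codec string are made up for illustration.
    const video = document.querySelector('video') as HTMLVideoElement;
    const mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', async () => {
      const buffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8, vorbis"');
      buffer.mode = 'sequence'; // appended media plays back-to-back
      for (const url of ['/part1.webm', '/midroll-ad.webm', '/part2.webm']) {
        const bytes = await (await fetch(url)).arrayBuffer();
        buffer.appendBuffer(bytes);
        // Appends are asynchronous; wait for each one before the next.
        await new Promise((resolve) =>
          buffer.addEventListener('updateend', resolve, { once: true })
        );
      }
      mediaSource.endOfStream();
    });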

    Standards for Video Accessibility

    Philip Jagenstedt (Opera) and I moderated this session. We focused on the HTML5 track element and the WebVTT file format. Many issues were identified that will still require work.

    One particular topic was to find a standard means of rendering the UI for caption, subtitle, and description selection – for example, what icons should be used to indicate that subtitles or captions are available? While this is not part of the HTML5 specification, it’s still important to get this right across browsers, since otherwise users will get confused by diverging interfaces.

    Chaptering was discussed and a particular need to allow URLs to directly point at chapters was expressed. I suggested the use of named Media Fragment URLs.

    The use of WebVTT for descriptions for the blind was also discussed. A suggestion was made to use the voice tag <v> to allow for “styling” (i.e. selection) of the screen reader voice.

    Finally, multitrack audio or video resources were also discussed and the @mediagroup attribute was explained. A question was asked about how to identify the language used in different alternative dubs. This is an issue because @srclang exists only on text tracks, not on audio or video, so it’s a missing feature for the multitrack API.

    Beyond this session, there was also a breakout session on WebVTT and the track element. As a consequence, a number of bugs were registered in the W3C bug tracker.

    WebM: Testing, Metrics and New features

    This session was moderated by John Luther and John Koleszar, both of the WebM Project. They started off with a presentation on current work on WebM, which includes quality testing and improvements, and encoder speed improvement. Then they moved on to questions about how to involve the community more.

    The community criticised the scarcity of communication about what is happening around WebM. More sharing of information was requested, including a move to using open Google+ hangouts instead of Google-internal video conferences. More use of the public bug tracker can also help include the community better.

    Another pain point for the community was that code is introduced and removed without much feedback. It was requested that a peer review process be introduced. It was also requested that example code snippets be published when new features are announced so others can replicate the claims.

    This all indicates to me that the WebM project is becoming increasingly open, but that there is still a lot to learn.

    Standards for HTTP Adaptive Streaming

    This session was moderated by Frank Galligan and Aaron Colwell (Google), and Mark Watson (Netflix).

    Mark started off by giving us an introduction to MPEG DASH, the MPEG file format for HTTP adaptive streaming. MPEG has just finalized the format and he was able to show us some examples. DASH is XML-based and thus rather verbose. It covers all eventualities of what parameters could be switched during transmission, which makes it very broad; these include trick modes (e.g. for fast forwarding), 3D, multi-view and multitrack content.

    MPEG has defined profiles – one for live streaming, which requires chunking of the files on the server, and one for on-demand, which requires keyframe alignment of the files. There are clear specifications for how to do these with MPEG. Such profiles would need to be created for WebM and Ogg Theora, too, to make DASH universally applicable.

    Further, the Web case needs a more restrictive adaptation approach, since the video element’s API already accounts for some of the features that DASH provides for desktop applications. So, a Web-specific profile of DASH would be required.

    Then Aaron introduced us to the MediaSource API and in particular the webkitSourceAppend() extension that he has been experimenting with. It is essentially an implementation of the appendBytes() function of Flash, which the Web developers had been asking for just a few sessions earlier. This was likely the biggest announcement of OVC, alas a quiet and technically-focused one.

    Aaron explained that he had been trying to find a way to implement HTTP adaptive streaming into WebKit in a way in which it could be standardised. While doing so, he also came across other requirements around such chunked video handling, in particular around dynamic ad insertion, live streaming, DVR functionality (fast forward), constrained video editing, and mashups. While trying to sort out all these requirements, it became clear that it would be very difficult to implement strategies for stream switching, buffering and delivery of video chunks into the browser when so many different and likely contradictory requirements exist. Also, once an approach is implemented and specified for the browser, it becomes very difficult to innovate on it.

    Instead, the easiest way to solve it right now and learn about what would be necessary to implement into the browser would be to actually allow Web developers to queue up a chunk of encoded video into a video element for decoding and display. Thus, the webkitSourceAppend() function was born (specification).

    The proposed extension to the HTMLMediaElement is as follows:

    partial interface HTMLMediaElement {
      // URL passed to src attribute to enable the media source logic.
      readonly attribute [URL] DOMString webkitMediaSourceURL;

      bool webkitSourceAppend(in Uint8Array data);

      // end of stream status codes.
      const unsigned short EOS_NO_ERROR = 0;
      const unsigned short EOS_NETWORK_ERR = 1;
      const unsigned short EOS_DECODE_ERR = 2;

      void webkitSourceEndOfStream(in unsigned short status);

      // states
      const unsigned short SOURCE_CLOSED = 0;
      const unsigned short SOURCE_OPEN = 1;
      const unsigned short SOURCE_ENDED = 2;

      readonly attribute unsigned short webkitSourceState;
    };

    The code is already checked into WebKit, but commented out behind a command-line compiler flag.
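
    For illustration, here is a hypothetical usage sketch based solely on the interface excerpt above (the experimental API’s event and state handling is not shown in the excerpt, so it is omitted, and the chunk URLs are invented):

    // Hypothetical usage of the proposed extension; names come from the
    // interface above, everything else is an assumption.
    type MediaSourceElement = HTMLMediaElement & {
      webkitMediaSourceURL: string;
      webkitSourceAppend(data: Uint8Array): boolean;
      webkitSourceEndOfStream(status: number): void;
      readonly webkitSourceState: number;
    };

    const media = document.querySelector('video') as unknown as MediaSourceElement;

    // Point src at the special media-source URL to enable the source logic.
    media.src = media.webkitMediaSourceURL;

    async function appendChunks(urls: string[]): Promise<void> {
      for (const url of urls) {
        const chunk = new Uint8Array(await (await fetch(url)).arrayBuffer());
        media.webkitSourceAppend(chunk); // hand encoded bytes to the decoder
      }
      media.webkitSourceEndOfStream(0); // EOS_NO_ERROR
    }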

    Frank then stepped forward to show how webkitSourceAppend() can be used to implement HTTP adaptive streaming. His example uses WebM – there are no examples with MPEG or Ogg yet.

    The chunks in Frank’s demo were 150 video frames (6.25 s) of video and 5 s of audio. Stream switching only switched the video, since audio data is much lower bandwidth and more important to retain at high quality. Switching was done on multiplexed files.

    Every chunk requires an XHR range request – this could be optimised if the connections were kept open per adaptation. Seeking works, too, but since decoding requires download of a whole chunk, seeking latency is determined by the time it takes to download and decode that chunk.

    Similar to DASH, when using this approach for live streaming, the server has to produce one file per chunk, since byte range requests are not possible on a continuously growing file.

    Frank did not use DASH as the manifest format for his HTTP adaptive streaming demo, but instead used a hacked-up custom XML format. It would be possible to use JSON or any other format, too.
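
    To make the switching logic concrete, here is a generic sketch of the kind of heuristic described (not Frank’s actual demo code): time each chunk request to estimate bandwidth, then pick the highest representation that comfortably fits.

    // Generic stream-switching heuristic; representations are assumed to be
    // sorted by ascending bitrate, and all names here are made up.
    interface Representation {
      bitrateKbps: number;
      chunkUrls: string[];
    }

    // Fetch one chunk and measure the throughput of the request.
    async function fetchChunk(url: string): Promise<{ data: Uint8Array; kbps: number }> {
      const start = performance.now();
      const response = await fetch(url); // one request per chunk, as in the demo
      const data = new Uint8Array(await response.arrayBuffer());
      const seconds = (performance.now() - start) / 1000;
      return { data, kbps: (data.byteLength * 8) / 1000 / seconds };
    }

    // Pick the highest bitrate that fits within ~80% of measured bandwidth.
    function pickRepresentation(reps: Representation[], measuredKbps: number): Representation {
      const affordable = reps.filter((r) => r.bitrateKbps <= measuredKbps * 0.8);
      return affordable.length > 0 ? affordable[affordable.length - 1] : reps[0];
    }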

    After this session, I was actually completely blown away by the possibilities that such a simple API extension allows. If I wasn’t sold on the idea of an appendBytes() function in the earlier session, this one completely changed my mind. While I still believe we need to standardise an HTTP adaptive streaming file format that all browsers will support for all codecs, and I still believe that a native implementation for support of such a file format is necessary, I also believe that this approach of webkitSourceAppend() is what HTML needs – and maybe it needs it faster than native HTTP adaptive streaming support.

    Standards for Browser Video Playback Metrics

    This session was moderated by Zachary Ozer and Pablo Schklowsky (JWPlayer). Their motivation for the topic was, in fact, also HTTP adaptive streaming. Once you leave the decisions about when to do stream switching to JavaScript (through a function such as webkitSourceAppend()), you have to expose stream metrics to the JS developer so they can make informed decisions. The other use case is, of course, monitoring the quality of video delivery for reporting to the provider, who may then decide to change their delivery environment.

    The discussion found that we really care about metrics on three different levels:

    • measuring the network performance (bandwidth)
    • measuring the decoding pipeline performance
    • measuring the display quality

    In the end, it seemed that work previously done by Steve Lacey on a proposal for video metrics was generally acceptable, except for the playbackJitter metric, which may be too aggregate to mean much.

    Device Inputs / A/V in the Browser

    I didn’t actually attend this session, held by Anant Narayanan (Mozilla), but from what I heard, the discussion focused on how to manage permissions for access to the video camera, microphone and screen, e.g. when multiple applications (tabs) want access or when the same site wants access in a different session. This may apply to real-time communication with screen sharing, but also to photo sharing, video upload, or canvas access to devices, e.g. for time-lapse photography.

    Open Video Editors

    This was another session that I wasn’t able to attend, but I believe the creation of good open source video editing software and similar video creation software is really crucial to giving video a broader user appeal.

    Jeff Fortin (PiTiVi) moderated this session and I was fascinated to later see his analysis of the lifecycle of open source video editors. It is shocking to see how many people/projects have tried to create an open source video editor and how many have stopped their project. It is likely that the creation of a video editor is such a complex challenge that it requires a larger and more committed open source project – single people will just run out of steam too quickly. This may be comparable to the creation of a Web browser (see the size of the Mozilla project) or a text processing system (see the size of the OpenOffice project).

    Jeff also mentioned the need to create open video editor standards around playlist file formats etc. Possibly the Open Video Alliance could help. In any case, something has to be done in this space – maybe this would be a good topic to focus next year’s OVC on?

    Monday’s Breakout Groups

    The conference ended officially on Sunday night, but we had a third day of discussions / hackday at the wonderful New York Law School venue. We had collected issues of interest during the two previous days and organised the breakout groups in the morning (Schedule).

    In the Content Protection/DRM session, Mark Watson from Netflix explained how their API works and that they believe all we need in browsers is a secure way to exchange keys and an indicator of which protection scheme is used – the actual protection scheme would not be implemented by the browser, but be provided by the underlying system (media framework/operating system). I think that until somebody actually implements something in a browser fork and shows how this can be done, we won’t have much progress. In my understanding, we may also need to disable part of the video API for encrypted content, because otherwise you can always e.g. grab frames from the video element into canvas and save them from there.

    In the Playlists and Gapless Playback session, there was massive brainstorming about what new cool things can be done with the video element in browsers if playback between snippets can be made seamless. Further discussions were about standard playlist file formats (such as XSPF, MRSS or M3U), media fragment URIs in playlists for mashups, and the need to expose track metadata for HTML5 media elements.

    What more can I say? It was an amazing three days, and the complexity of problems that we’re dealing with is a tribute to how far HTML5 and open video have already come, and exciting news for the kind of applications that will be possible (both professional and community) once we’ve solved the problems of today. It will be exciting to see what progress we will have made by next year’s conference.

    Thanks go to Google for sponsoring my trip to OVC.

    UPDATE: We actually have a mailing list for open media developers who are interested in these and similar topics – do join at http://lists.annodex.net/cgi-bin/mailman/listinfo/foms.

  • How to increase conversions to meet your business goals

    8 September 2020, by Joselyn Khor — Analytics Tips, Marketing

    Through optimizing your messaging, content, or your page layouts, you can increase conversions by getting your visitors through a clear pathway to achieve your business goals.

    Conversion Rate Optimization

    When we talk about optimizing websites to improve and increase conversions, we’re really talking about conversion rate optimization (CRO).

    CRO is the process of learning what the most valuable content/aspect of your website is and how best to optimize it for your visitors to increase its chance to convert. It typically involves generating ideas for elements on your site or app that can be improved, learning which pathways visitors are most likely to take to conversion, and then validating those assumptions through A/B testing and multivariate testing to transform learning into actionable insights.

    Conversion Rate

    The conversion rate is expressed as a percentage, and the goal for any business should be to increase the percentage of conversions for any given goal. For example, if in February a website had 200 newsletter sign-ups from 1,000 visitors to its sign-up page, that is a conversion rate of 20%. CRO should be used to increase the sign-up rate from 20% to 25%, and then eventually from 25% to 30%, and so on.
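
    As a trivial sketch of that arithmetic:

    // Conversion rate as a percentage of visitors who complete the goal.
    function conversionRate(conversions: number, visitors: number): number {
      return (conversions / visitors) * 100;
    }

    conversionRate(200, 1000); // => 20 (the February sign-up rate above)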

    CRO cheat sheet

    You need to consider your website or business’ objectives (bigger picture) as well as your website goals (smaller achievements). Whatever the aim of your website, it’s crucial for this to be your starting point. Figure out what you want your website to do and what you want visitors to get from it. When you do that, you’ll know what conversions to focus on.
    • Define your business/website’s objectives. Do you want the website to drive sales? Is the website a hub to raise awareness for a charity? Do you want to increase readership for your news site?
    • Define what your conversion goals are. This helps you narrow your focus so you follow a path to meet your overall objectives. By defining these, you clarify for yourself the next actions you should take, such as wanting to funnel users through to a sign-up landing page. Then you’ll need to optimize and test your sign-up landing page. If conversions are low, then tweak it and measure the results until you find you’ve increased conversion rates.
    • Conversion goals can include:
      • Purchases in your ecommerce store
      • eBook downloads
      • Sign-ups to your mailing list
      • Visitors successfully filling in a contact form
    • Figure out what your Key Performance Indicators (KPIs) are and the metrics you need to focus on to achieve them.

    1. Set goals

    “Make Sure Goals Are Clearly Understood. To prove the value of an analytics-focused company, any project you take on needs to have clear goals. If you don’t have a goal in mind you’ll fail. Everyone involved in the project needs to be aligned around the goals.”

    - Lean Analytics : Use Data to Build a Better Startup Faster

    A goal is the measure of a successful action that you want your visitors to take. The more goals you track, the more you can learn about behavioural changes as you implement and modify paths that lead to conversions over time.

    Matomo goal feature

    You’ll understand which channels and campaigns (SEO, PPC, newsletter, blogging etc.) are converting the best for your business, which cities/countries are most popular, what devices are working and how engaged your visitors are before converting.

    2. Set up Heatmaps

    This is vital to show how your visitors are engaging with your website, blog pages, signup and sales pages. If you want to learn how your visitors really engage with your website to increase conversions, Heatmaps lets you see the results visually without any guesswork.

    Matomo's heatmaps feature

    By showing where your visitors try to click, move the mouse or how far down they’re scrolling on each page, you can effortlessly discover how your visitors truly engage with your most important web pages. Rather than guessing, rely on facts to prove if the changes you make actually improve your website or not.

    How to improve conversion rates with Heatmaps:

    • If you’ve got important information that will sell your service/product or bring you loyal followers, make sure it’s in the hot zones as shown in your heatmaps.
    • Try to rearrange parts of your pages to see if that increases engagement.
    • Make it easy for people to take important actions by having the CTA above-the-fold where 100% of visitors see it. Make sure you don’t clutter this section with too many messages or actions.
    • You can also identify areas to add links, as heatmaps show where people want to click.
    • Find what content is most popular on the page.

    3. Session Recordings

    This is a conversion research technique where you learn what your users are trying to do and make sure your website is optimized to give them what they want. With Session Recordings you can playback all the interactions your visitors took on your website, such as clicks, mouse movements, scrolls, resizes, form interactions and page changes in a video. Truly understand how real visitors are using your website and what experiences they’re having.

    Also, by understanding what’s working, you’re increasing the usability of your website. Session Recordings allow you to identify problem areas as well as where users are getting stuck.

    Session Recordings

    How to improve conversion rates with Session Recordings: For example, on a product landing page, you might see a visitor highlighting specific words and pasting them into the search box. From this you can observe what they’re trying to find and what they’re actually interested in. As you tweak the page to ensure what the visitor wants can be easily found, you’re taking steps to increase the chance of more conversions.

    4. A/B Testing

    Test anything and test anywhere to increase your conversions. Grow your website by comparing different versions of your landing pages to determine what works best for your users. Subtle tweaks across different versions of your landing pages can have a significant impact on converting incoming traffic.

    Matomo's a/b testing feature

    The changes for each landing page could be:

    • A different headline
    • Less copy vs more copy
    • Different calls-to-action
    • Colour schemes, forms, fonts, links, or testimonials
    • Or, it could be an entirely different page layout altogether.

    The idea is to see if either page A or page B (or C or D) was most successful in getting your visitors to the next step in the conversion funnel.
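
    As a generic illustration of the mechanics (this is not Matomo’s internal implementation), variants are often assigned deterministically by hashing a stable visitor id, so a returning visitor always sees the same version:

    // Deterministic A/B assignment sketch: a stable visitor id always maps
    // to the same variant.
    function assignVariant(visitorId: string, variants: string[]): string {
      let hash = 0;
      for (const ch of visitorId) {
        hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
      }
      return variants[hash % variants.length];
    }

    assignVariant('visitor-42', ['A', 'B']); // stable across visits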

    How Matomo used A/B Testing: For our sign-up page we tested three different CTAs and found how phrasing words differently could help improve conversion rates. Both “Start improving your websites” and “Start converting more users now” were stronger CTAs and converted 7% more than “Start my free 30-day trial”.

    5. Form Analytics

    Form Analytics gives you powerful insights into how your visitors interact with your forms (like cart, sign-up and checkout forms).

    Form Analytics

    Online forms can come in thousands of different variations. They’re an area of your website that, if not done right, could mean missing out on converting a large portion of your visitors. Rely on facts when you change your forms.

    How to improve conversion rates with Form Analytics: Prove whether your form performs better when you change it, and by how much. This lets you consistently increase form submission rates (conversions) on your website, which is crucial to the success of your business.

    6. Funnels

    At a glance, you will learn the steps (actions, events and pages) your users go through to reach the desired outcomes you want them to achieve, whether it’s a sale, a sign-up or any other goal you have defined.

    Funnels feature

    Looking at the entire conversion funnel and focusing on usability, you’ll be able to identify where your visitors are having problems, where they don’t understand the flow of your webpages, and which obstacles get in the way of your users reaching that end goal.

    How to improve conversion rates with Funnels: Learn what makes your visitors take action (or what stops them) as they progress to the next step in the conversion funnel. At each step, you’ll discover what content/layout resonates with your visitors, and you can optimize your website to have the greatest impact on your business.

    7. Behaviour

    This is one of the most important features to help you optimize your website for conversions. Learning visitor behaviour is a driving force to increase conversions. How? It lets you identify where you could be taking action to increase conversions. You get to learn first-hand what content or feature on your site is or isn’t working for your visitors.

    Behaviour feature

    Engagement is essential to help increase conversion rates. If your visitors aren’t interested in the content on your site, then there’s very little chance they’ll be interested in what you have to offer.

    How to improve conversion rates with Behaviour: Start by reducing bounce rates on important pages, testing messaging on your most popular entry pages, and testing your highest exit pages to reduce the number of visitors leaving the site. Use Users Flow and Transitions to learn the pathways visitors take, and to see whether those pathways lead to conversions or whether the journeys are currently long or go in odd directions. Discover how your visitors are responding to your content. The happier your visitors are to stay on your site, the more likely they are to move through the journey that helps you achieve the goals you’ve set for your site.

    Do privacy-focused industries need conversion optimization?

    For industries that place extra emphasis on privacy and security, Matomo is a complete analytics tool that can cater for all your needs. You get the full benefits of a web analytics and conversion optimization platform as well as peace of mind knowing Matomo places emphasis on security/privacy and adheres strictly to GDPR.

    If you operate in a data-sensitive industry, such as government, healthcare, finance or education, you can rest assured knowing your users’ privacy is respected and that you will have 100% data ownership.

    Other conversion optimization metrics in Matomo to look at:

    Get a good indication that your conversion optimization efforts are working by knowing where to look, and this starts with going through the metrics in your analytics. Below we list how you can make a start.

    “Best” metrics are hard to determine so you’ll need to ask yourself what you want your site to do. How do you want your users to behave, or what kind of customer journey do you want them to have?

    You can start with:

    • Decrease abandonment rate
    • Decrease bounce rate
    • Increase interactions per visit
    • Reduce exit rates on pages that significantly contribute to visitors leaving your site
    • Constantly test and learn what content resonates with your visitors
    • Look to advance more users through each stage of the conversion funnel
    • Improve your forms to increase submission rates
    • Always improve the conversion rate % for your goals, e.g. if you currently have a 5% conversion rate for selling a product, aim for 10%; if 30% of your visitors are downloading your e-book, then aim for 40%, then 50% and so on.

    Through optimizing your messaging, content or your page layouts, you will increase conversions by getting your visitors through a clear pathway to meet your website’s goal.

  • Four Trends Shaping the Future of Analytics in Banking

    27 November 2024, by Daniel Crough — Banking and Financial Services

    While retail banking revenues have been growing in recent years, trends like rising financial crimes and capital required for generative AI and ML tech pose significant risks and increase operating costs across the financial industry, according to McKinsey’s State of Retail Banking report.


    Today’s financial institutions are focused on harnessing AI and advanced analytics to make their data work for them. To be up to the task, analytics solutions must allow banks to give consumers the convenient, personalised experiences they want while respecting their privacy.


    In this article, we’ll explore some of the big trends shaping the future of analytics in banking and finance. We’ll also look at how banks use data and technology to cut costs and personalise customer experiences.

    So, let’s get into it.

    Graph showing average age of IT applications in insurance (18 years)

    This doesn’t just represent a security risk; it also impacts usability for both customers and employees. Does any of the following sound familiar?

    • Only specific senior employees know how to navigate the software to generate custom reports or use its more advanced features.
    • Customer complaints about your site’s usability or online banking experience are routine.
    • Onboarding employees takes much longer than necessary because of convoluted systems.
    • Teams and departments experience ‘data siloing,’ meaning that not everyone can access the data they need.

    These are warning signs that IT systems are ready for a review. Anyone thinking, “If it’s not broken, why fix it?” should consider that legacy systems can also present data security risks. As more countries introduce regulations to protect customer privacy, staying ahead of the curve is increasingly important to avoid penalties and litigation.

    And regulations aren’t the only trends impacting the future of financial institutions’ IT and analytics.

    4 trends shaping the future of analytics in banking

    New regulations and new technology have changed the landscape of analytics in banking.

    New privacy regulations impact banks globally

    The first major international example was the advent of GDPR, which went into effect in the EU in 2018. But a lot has happened since. New privacy regulations and restrictions around AI continue to roll out.

    • The European Artificial Intelligence Act (EU AI Act), which was held up as the world’s first comprehensive legislation on AI, took effect on 31 July 2024.
    • In Europe’s federated data initiative, Gaia-X’s planned cloud infrastructure will provide for more secure, transparent, and trustworthy data storage and processing.
    • The revised Payment Services Directive (PSD2) makes payments more secure and strengthens protections for European businesses and consumers, aiming to create a more integrated and efficient payments market.

    But even businesses that don’t have customers in Europe aren’t safe. Consumer privacy is a hot-button issue globally.

    For example, the California Consumer Privacy Act (CCPA), which took effect in January 2020, impacts the financial services industry more than any other. Case in point: 34% of CCPA-related cases filed in 2022 were related to the financial sector.

    California’s privacy regulations were the first in the US, but other states are following closely behind. On 1 July 2024, new privacy laws went into effect in Florida, Oregon, and Texas, giving people more control over their data.

    Share of CCPA cases in the financial industry in 2022 (34%)

    One typical issue for companies in the banking industry is that the privacy measures around user data collected from their website are much more lax than those in their online banking system.

    It’s better to proactively invest in a privacy-centric analytics platform before you get tangled up in a lawsuit and have to pay a fine (and are forced to change your system anyway). 

    And regulatory compliance isn’t the only bonus of an ethical analytics solution. The right alternative can unlock key customer insights that can help you improve the user experience.

    The demand for personalised banking services

    At the same time, consumers expect an increasingly streamlined, personalised experience from financial institutions. 86% of bank employees say personalisation is a clear priority for the company. But 63% described resources as limited or only available after demonstrating clear business cases.

    McKinsey’s The data and analytics edge in corporate and commercial banking points out how advanced analytics are empowering frontline bank employees to give customers more personalised experiences at every stage:

    • Pre-meeting/meeting prep: Using advanced analytics to assess customer potential, recommend products, and identify the prospects most likely to convert
    • Meetings/negotiation: Applying advanced models to support price negotiations, run what-if scenarios, and price multiple products simultaneously
    • Post-meeting/tracking: Using advanced models to identify behaviours that lead to high performance and to improve forecast accuracy and sales execution

    Today’s banks must deliver the personalisation that drives customer satisfaction and engagement to outperform their competitors.

    The rise of AI and its role in banking

    With AI and machine learning technologies becoming more powerful and accessible, financial institutions around the world are already reaping the rewards.

    McKinsey estimates that AI in banking could add $200 billion to $340 billion annually across the global banking sector through productivity gains. Typical applications include:

    • Credit card fraud prevention: Algorithms analyse usage to flag and block fraudulent transactions.
    • More accurate forecasting: AI-based tools can analyse a broader spectrum of data points and forecast more accurately.
    • Better risk assessment and modelling: More advanced analytics and predictive models help avoid extending credit to high-risk customers.
    • Predictive analytics: Helps spot the clients most likely to churn.
    • Gen-AI assistants: Instantly analyse customer profiles and apply predictive models to suggest the next best actions.

    Considering these market trends, let’s discuss how you can move your bank into the future.

    Using analytics to minimise risk and establish a competitive edge 

    With the right approach, you can leverage analytics and AI to help future-proof your bank against changing customer expectations, increased fraud, and new regulations.

    Use machine learning to prevent fraud

    Every year, more consumers are victims of credit and debit card fraud. Debit card skimming cases nearly doubled in the US in 2023. The last thing you want as a bank is to put your customer in a situation where a criminal has spent their money.

    This not only leads to a horrible customer experience but also creates a lot of internal work and additional costs. Thankfully, machine learning can help identify suspicious activity and stop transactions before they go through. For example, Mastercard’s fraud prevention model has improved fraud detection rates by 20–300%.

    A credit card fraud detection robot
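
    As a toy sketch of the idea (real fraud models, like the Mastercard system mentioned above, are far more sophisticated), a transaction can be flagged when its amount deviates strongly from the customer’s history:

    // Toy anomaly check: flag a transaction whose amount is an outlier
    // relative to the customer's past amounts (simple z-score test).
    function isSuspicious(pastAmounts: number[], amount: number, threshold = 3): boolean {
      const mean = pastAmounts.reduce((sum, a) => sum + a, 0) / pastAmounts.length;
      const variance = pastAmounts.reduce((sum, a) => sum + (a - mean) ** 2, 0) / pastAmounts.length;
      const stdDev = Math.sqrt(variance) || 1; // guard against zero variance
      return Math.abs(amount - mean) / stdDev > threshold;
    }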

    Implementing a solution like this (or partnering with credit card companies who use it) may be a way to reduce risk and improve customer trust.

    Foresee and avoid future issues with AI-powered risk management

    Regardless of what type of financial products organisations offer, AI can be an enormous tool. Here are just a few ways in which it can mitigate financial risk in the future:

    • Predictive analytics can evaluate risk exposure and allow for more informed decisions about whether to approve commercial loan applications.
    • With better credit risk modelling, banks can avoid extending personal loans to customers most likely to default.
    • Investment banks (or individual traders or financial analysts) can use AI- and ML-based systems to monitor market and trading activity more effectively.

    Those are just a few examples that barely scratch the surface. Many other AI-based applications and analytics use cases exist across all industries and market segments.

    Protect customer privacy while still getting detailed analytics

    New regulations and increasing consumer privacy concerns don’t mean banks and financial institutions should forego website analytics altogether. Its insights into performance and customer behaviour are simply too valuable. And without customer interaction data, you’ll only know something’s wrong if someone complains.

    Fortunately, it doesn’t have to be one or the other. The right financial analytics solution can give you the data and insights needed without compromising privacy while complying with regulations like GDPR and CCPA.

    That way, you can track usage patterns and improve site performance and content quality based on accurate data — without compromising privacy. Reliable, precise analytics are crucial for any bank that’s serious about user experience.

    Use A/B testing and other tools to improve digital customer experiences

    Personalised digital experiences can be key differentiators in banking and finance when done well. But there’s stiff competition. In 2023, 40% of bank customers rated their bank’s online and mobile experience as excellent. 

    Improving digital experiences for users while respecting their privacy means going above and beyond a basic web analytics tool like Google Analytics. Invest in a platform with features like A/B tests and user session analysis for deeper insights into user behaviour.

    Diagram of an A/B test with 4 visitors divided into two groups shown different options

    Behavioural analytics are crucial to understanding customer interactions. By identifying points of friction and drop-off points, you can make digital experiences smoother and more engaging.

    Matomo offers all this and is a great GDPR-compliant alternative to Google Analytics for banks and financial institutions.

    Of course, this can be challenging. This is why taking an ethical and privacy-centric approach to analytics can be a key competitive edge for banks. Prioritising data security and privacy will attract other like-minded, ethically conscious consumers and boost customer loyalty.

    Get privacy-friendly web analytics suitable for banking & finance with Matomo

    Improving digital experiences for today’s customers requires a solid web analytics platform that prioritises data privacy and accurate analytics. And choosing the wrong one could even mean ending up in legal trouble or scrambling to reconstruct your entire analytics setup.

    Matomo provides privacy-friendly analytics with 100% data accuracy (no sampling), advanced privacy controls and the ability to run A/B tests and user session analysis within the same platform (limiting risk and minimising costs). 

    It’s easy to get started with Matomo. Users can access clear, easy-to-understand metrics and plenty of pre-made reports that deliver valuable insights from day one. Form usage reports can help banks and fintechs identify potential issues with broken links or technical glitches and reveal clues on improving UX in the short term.

    Over one million websites, including some of the world’s top banks and financial institutions, use Matomo for their analytics.

    Start your 21-day free trial to see why, or book a demo with one of our analytics experts.