Advanced search

Media (0)

Word: - Tags -/diogene

No media matching your criteria is available on the site.

Other articles (73)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, in the navigation menu, you can reach a "Language management" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. Once that happens, it becomes greyed out in the configuration and (...)

  • Accepted formats

    28 January 2010, by

    The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To start with, we (...)
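
    As a rough illustration of how the two commands above can be used in practice (the codec and encoder names below are just examples, not a claim about any particular installation):

    # Filter the full lists for a specific codec or container:
    ffmpeg -codecs | grep -i h264
    ffmpeg -formats | grep -i flv
    # Show the options of a single encoder (name given as an example):
    ffmpeg -h encoder=libx264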

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites for publishing documents of all types.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a so-called "media" article;

On other sites (7243)

  • Introducing the BigQuery & Data Warehouse Export feature

    30 January, by Matomo Core Team

    Matomo is built on a simple truth: your data belongs to you, and you should have complete control over it. That’s why we’re excited to launch our new BigQuery & Data Warehouse Export feature for Matomo Cloud, giving you even more ways to work with your analytics data.

    Until now, getting raw data from Matomo Cloud required APIs and custom scripts, or waiting for engineering help.  

    Our new BigQuery & Data Warehouse Export feature removes those barriers. You can now access your raw, unaggregated data and schedule regular exports straight to your data warehouse. 

    The feature works with all major data warehouses including (but not limited to):

    • Google BigQuery 
    • Amazon Redshift 
    • Snowflake 
    • Azure Synapse Analytics 
    • Apache Hive 
    • Teradata 

    You can schedule exports, combine your Matomo data with other data sources in your data warehouse, and easily query data with SQL-like queries. 

    Direct raw data access for greater data portability 

    Waiting for engineering support can delay your work. Managing API connections and writing scripts can be time-consuming. This keeps you from focusing on what you do best—analysing data. 

    Screenshot: BigQuery create-table menu

    With the BigQuery & Data Warehouse Export feature, you get direct access to your raw Matomo data without the technical setup. So, you can spend more time analysing data and finding insights that matter. 

    Bringing your data together 

    Answering business questions often requires data from multiple sources. A single customer interaction might span your CRM, web analytics, sales systems, and more. Piecing this data together manually is time-consuming—what starts as a seemingly simple question from stakeholders can turn into hours of work collecting and comparing data across different tools. 

    This feature lets you combine your Matomo data with data from other business systems in your data warehouse. Instead of switching between tools or manually comparing spreadsheets, you can analyse all your data in one place to better understand how customers interact with your business. 

    Easy, custom analysis with SQL-like queries 

    Standard, pre-built reports often don’t address the specific, detailed questions that analysts need to answer.  

    When you use the BigQuery & Data Warehouse Export feature, you can use SQL-like queries in your data warehouse to do detailed, customised analysis. This flexibility allows you to explore your data in depth and uncover specific insights that aren’t possible with pre-built reports. 

    Here is an example of how you might use a SQL-like query to compare the behaviours of paying vs. non-paying users:

    				
    SELECT
      custom_dimension_value AS user_type,  -- Assuming 'user_type' is stored in a custom dimension
      COUNT(*) AS total_visits,
      AVG(visit_total_time) AS avg_duration,
      SUM(conversion.revenue) AS total_spent
    FROM
      `your_project.your_dataset.matomo_log_visit` AS visit
    LEFT JOIN
      `your_project.your_dataset.matomo_log_conversion` AS conversion
    ON
      visit.idvisit = conversion.idvisit
    GROUP BY
      custom_dimension_value;

    This query helps you compare metrics such as the number of visits, average session duration, and total amount spent between paying and non-paying users. It provides a full view of behavioural differences between these groups. 
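
    If the export lands in Google BigQuery, a query like the one above can also be run non-interactively with Google's bq command-line tool; a minimal sketch, with the project and dataset names being the same placeholders as in the example above:

    # Run a standard-SQL query against the exported Matomo tables.
    bq query --use_legacy_sql=false \
      'SELECT custom_dimension_value AS user_type, COUNT(*) AS total_visits
       FROM `your_project.your_dataset.matomo_log_visit`
       GROUP BY custom_dimension_value'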

    Advanced data manipulation and visualisation 

    When you need to create detailed reports or dive deep into data analysis, working within the constraints of a fixed user interface (UI) can limit your ability to draw insights. 

    Exporting your Matomo data to a data warehouse like BigQuery provides greater flexibility for in-depth manipulation and advanced visualisations, enabling you to uncover deeper insights and tailor your reports more effectively. 

    Getting started 

    To set up data warehouse exports in your Matomo:

    1. Go to System Admin (cog icon in the top right corner) 
    2. Select ‘Export’ from the left-hand menu 
    3. Choose ‘BigQuery & Data Warehouse’ 

    You’ll find detailed instructions in our data warehouse exports guide.

    Please note, enabling this feature will cost an additional 10% of your current subscription. You can view the exact cost by following the steps above. 

    New to Matomo? Start your 21-day free trial now (no credit card required), or request a demo.

  • Adventures In NAS

    1 January, by Multimedia Mike — General

    In my post last year about my out-of-control single-board computer (SBC) collection, which included my meager network attached storage (NAS) solution, I noted that:

    I find that a lot of my fellow nerds massively overengineer their homelab NAS setups. I’ll explore this in a future post. For my part, people tend to find my homelab NAS solution slightly underengineered.

    So here I am, exploring this in a future post. I’ve been in the home NAS game a long time, but have never had very elaborate solutions for such. For my part, I tend to take an obsessively reductionist view of what constitutes a NAS: any small computer with a pool of storage and a network connection, running the Linux operating system and the Samba file sharing service.


    Simple hard drive and ethernet cable

    Many home users prefer to buy turnkey boxes, usually ones that allow you to install hard drives yourself, and then configure the box and its services with a friendly UI. My fellow weird computer nerds often buy cast-off enterprise hardware and set up more resilient, over-engineered solutions, as long as they have strategies to mitigate the noise and dissipate the heat, and don’t mind the electricity bills.

    If it works, awesome! As an old hand at this, I am rather stuck in my ways, however, preferring to do my own stunts, both with the hardware and software solutions.

    My History With Home NAS Setups
    In 1998, I bought myself a new computer — a beige box tower PC, as was the style at the time. This was when normal people only had one computer at most. It ran Windows, but I was curious about this new thing called “Linux” and learned to dual boot that. Later that year, it dawned on me that nothing prevented me from buying a second ugly beige box PC and running Linux exclusively on it. Further, it could be a headless Linux box, connected by ethernet, and I could consolidate files into a single place using this file sharing software named Samba.

    I remember it being fairly onerous to get Samba working in those days, and the internet was not nearly as helpful back then. I recall that the thing that blocked me for a while was needing to know that I had to specify an entry for the Samba server machine in the LMHOSTS (Lanman hosts) file on the Windows 95 machine.

    However, after I cracked that code, I have pretty much always had some kind of ad-hoc home NAS setup, often combined with a headless Linux development box.

    In the early 2000s, I built a new beige box PC for a file server, with a new hard disk, and a coworker tutored me on setting up a (P)ATA UDMA 133 (or was it 150? anyway, it was (P)ATA’s last hurrah before SATA conquered all) expansion card and I remember profiling that the attached hard drive worked at a full 21 MBytes/s reading. It was pretty slick. Except I hadn’t really thought things through. You see, I had a hand-me-down ethernet hub cast-off from my job at the time which I wanted to use. It was a 100 Mbps repeater hub, not a switch, so the catch was that all connected machines had to be capable of 100 Mbps. So, after getting all of my machines (3 at the time) upgraded to support 10/100 ethernet (the old off-brand PowerPC running Linux was the biggest challenge), I profiled transfers and realized that the best this repeater hub could achieve was about 3.6 MBytes/s. For a long time after that, I just assumed that was the upper limit of what a 100 Mbps network could achieve. Obviously, I now know that the upper limit ought to be around 11.2 MBytes/s and if I had gamed out that fact in advance, I would have realized it didn’t make sense to care about super-fast (for the time) disk performance.

    At this time, I was doing a lot of development for MPlayer/xine/FFmpeg. I stored all of my multimedia material on this NAS. I remember being confused when I was working with Y4M data, which is raw frames, which is lots of data. xine, which employed a pre-buffering strategy, would play fine for a few seconds and then stutter. Eventually, I reasoned out that the files I was working with had a data rate about twice what my awful repeater hub supported, which is probably the first time I came to really understand and respect streaming speeds and their implications for multimedia playback.

    Smaller Solutions
    For a period, I didn’t have a NAS. Then I got an Apple AirPort Extreme, which I noticed had a USB port. So I bought a dual drive brick to plug into it and used that for a time. Later (2009), I had this thing called the MSI Wind Nettop which is the only PC I’ve ever seen that can use a CompactFlash (CF) card for a boot drive. So I did just that, and installed a large drive so it could function as a NAS, as well as a headless dev box. I’m still amazed at what a low-power I/O beast this thing is, at least when compared to all the ARM SoCs I have tried in the intervening 1.5 decades. I’ve had spinning hard drives in this thing that could read at 160 MBytes/s (‘dd’ method) and have no trouble saturating the gigabit link at 112 MBytes/s, all with its early Intel Atom CPU.

    Around 2015, I wanted a more capable headless dev box and discovered Intel’s line of NUCs. I got one of the fat models that can hold a conventional 2.5″ spinning drive in addition to the M.2 SATA SSD and I was off and running. That served me fine for a few years, until I got into the ARM SBC scene. One major limitation here is that 2.5″ drives aren’t available in nearly the capacities that make a NAS solution attractive.

    Current Solution
    My current NAS solution, chronicled in my last SBC post, is the ODroid-HC2, a highly compact ARM SoC with an integrated USB3-SATA bridge so that a SATA drive can be connected directly to it:


    ODROID-HC2 NAS


    I tend to be weirdly proficient at recalling dates, so I’m surprised that I can’t recall when I ordered this and put it into service. But I’m pretty sure it was circa 2018. It’s only equipped with an 8 TB drive now, but I seem to recall that it started out with only a 4 TB drive. I think I upgraded to the 8 TB drive early in the pandemic in 2020, when ISPs were implementing temporary data cap amnesty and I was doing what a r/DataHoarder does.

    The HC2 has served me well, even though it has a number of shortcomings for a hardware set chartered for NAS:

    1. While it has a gigabit ethernet port, it’s documented that it never really exceeds about 70 MBytes/s, due to the SoC’s limitations
    2. The specific ARM chip (Samsung Exynos 5422; more than a decade old as of this writing) lacks cryptography instructions, slowing down encryption if that’s your thing (e.g., LUKS)
    3. While the SoC supports USB3, that block is tied up for the SATA interface; the remaining USB port is only capable of USB2 speeds
    4. 32-bit ARM, which prevented me from running certain bits of software I wanted to try (like Minio)
    5. Only 1 drive, so no possibility for RAID (again, if that’s your thing)

    I also love to brag on the HC2’s power usage: I once profiled the unit for a month using a Kill-A-Watt under normal usage (with the drive spinning only when in active use), and the unit consumed 4.5 kWh… in an entire month.

    New Solution
    Enter the ODroid-HC4 (I purchased mine from Ameridroid but Hardkernel works with numerous distributors):


    ODroid-HC4 with an SSD and a conventional drive


    I ordered this earlier in the year, and after many months of procrastinating and obsessing over the best approach to take with its general usage, I finally have it in service as my new NAS. Comparing point by point with the HC2:

    1. The gigabit ethernet runs at full speed (though a few things on my network run at 2.5 GbE now, so I guess I’ll always be behind)
    2. The ARM chip (Amlogic S905X3) has AES cryptography acceleration and handles all the LUKS stuff without breaking a sweat; “cryptsetup benchmark” reports between 500 and 600 MBytes/s on all the AES variants
    3. The USB port is still only USB2, so no improvement there
    4. 64-bit ARM, which means I can run Minio to simulate block storage in a local dev environment for some larger projects I would like to undertake
    5. Supports 2 drives, if RAID is your thing

    How I Set It Up
    How to set up the drive configuration? As should be apparent from the photo above, I opted for an SSD (500 GB) for speed, paired with a conventional spinning HDD (18 TB) for sheer capacity. I’m not particularly trusting of RAID. I’ve watched it fail too many times, on systems that I don’t even manage, not to mention the aforementioned RAID brick that I had attached to the Apple AirPort Extreme.

    I had long been planning to use bcache, the block caching interface for Linux, which can use the SSD as a speedy cache in front of the more capacious disk. There is also LVM cache, which is supposed to achieve something similar. And then I had to evaluate the trade-offs in whether I wanted write-back, write-through, or write-around configurations.
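
    For reference, a minimal sketch of the kind of bcache arrangement being weighed here, with placeholder device names; as the next paragraph explains, it ended up not being necessary:

    # Format the SSD partition as the cache device and the HDD partition as
    # the backing device, attaching them in one step (placeholder devices).
    make-bcache -C /dev/sda1 -B /dev/sdb1
    # Choose among the caching policies mentioned above.
    echo writeback > /sys/block/bcache0/bcache/cache_mode
    # The combined device is then formatted and mounted like any other.
    mkfs.ext4 /dev/bcache0
    mount /dev/bcache0 /mnt/nas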

    This was all predicated on the assumption that the spinning drive would not be able to saturate the gigabit connection. When I got around to setting up the hardware and trying some basic tests, I found that the conventional HDD had no trouble keeping up with the gigabit data rate, both reading and writing, somewhat obviating the need for SSD acceleration using any elaborate caching mechanisms.

    Maybe that’s because I sprung for the WD Red Pro series this time, rather than the Red Plus? I’m guessing that conventional drives do deteriorate over the years. I’ll find out.

    For the operating system, I stuck with my newest favorite Linux distro: DietPi. While HardKernel (parent of ODroid) makes images for the HC units, I had also used DietPi for the HC2 for the past few years, as it tends to stay more up to date.

    Then I rsync’d my data from HC2 -> HC4. It was only about 6.5 TB of total data but it took days as this WD Red Plus drive is only capable of reading at around 10 MBytes/s these days. Painful.

    For file sharing, I’m pretty sure most normal folks have nice web UIs in their NAS boxes which allow them to easily configure and monitor the shares. I know there are such applications I could set up. But I’ve been doing this so long, I just do a bare bones setup through the terminal. I installed regular Samba and then brought over my smb.conf file from the HC2. One by one, I tested that each of the old shares was activated on the new NAS and deactivated on the old NAS. I also set up a new share for the SSD. I guess that will just serve as a fast I/O scratch space on the NAS.
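
    For the curious, a "bare bones" share could look something like the following; the share name, path, and user here are hypothetical, not the actual smb.conf contents:

    # Hypothetical stanza appended to /etc/samba/smb.conf for the SSD scratch share:
    #   [ssd-scratch]
    #      path = /mnt/ssd
    #      read only = no
    #      valid users = mike
    # After editing, sanity-check the config and reload Samba:
    testparm -s
    systemctl restart smbd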

    The conventional drive spins up and down. That’s annoying when I’m actively working on something but manage not to hit the drive for like 5 minutes and then an application blocks while the drive wakes up. I suppose I could set it up so that it is always running. However, I micro-manage this with a custom bash script I wrote a long time ago which logs into the NAS and runs the “date” command every 2 minutes, appending the output to a file. As a bonus, it also prints data rate up/down stats every 5 seconds. The spinning file (“nas-main/zz-keep-spinning/keep-spinning.txt”) has never been cleared and has nearly a quarter million lines. I suppose that implies that it has kept the drive spinning for 1/2 million minutes which works out to around 347 total days. I should compare that against the drive’s SMART stats, if I can remember how. The earliest timestamp in the file is from March 2018, so I know the HC2 NAS has been in service at least that long.
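
    A rough reconstruction of what such a keep-spinning script could look like (the hostname, file path, and network interface are placeholders; the original script is not reproduced here):

    #!/usr/bin/env bash
    # Poke the NAS drive every 2 minutes so it never spins down, and print
    # crude up/down rates for the local interface every 5 seconds in between.
    NAS=nas-main                                          # placeholder hostname
    SPIN=/srv/share/zz-keep-spinning/keep-spinning.txt    # placeholder path on the NAS
    IF=eth0                                               # placeholder local interface
    while true; do
        ssh "$NAS" "date >> $SPIN"
        for _ in $(seq 24); do                            # 24 x 5 s = 2 minutes
            rx1=$(cat /sys/class/net/$IF/statistics/rx_bytes)
            tx1=$(cat /sys/class/net/$IF/statistics/tx_bytes)
            sleep 5
            rx2=$(cat /sys/class/net/$IF/statistics/rx_bytes)
            tx2=$(cat /sys/class/net/$IF/statistics/tx_bytes)
            echo "down $(( (rx2-rx1)/5/1024 )) KiB/s   up $(( (tx2-tx1)/5/1024 )) KiB/s"
        done
    done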

    For tasks, vintage cron still does everything I could need. In this case, that means reaching out to websites (like this one) and automatically backing up static files.

    I also have to have a special script for starting up. Fortunately, I was able to bring this over from the HC2 and tweak it. The data disks (though not the boot disk) are encrypted. Those need to be unlocked, and only then is it safe for the Samba and Minio services to start up. So one script does all that heavy lifting in the rare case of a reboot (this is the type of system that’s well worth having on a reliable UPS).
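
    A heavily simplified sketch of that boot-time sequence, assuming the disks are opened with cryptsetup and that Samba and Minio run as systemd services (device names, mapper names, and mount points are placeholders):

    #!/usr/bin/env bash
    set -e
    # Unlock the encrypted data disks (interactive passphrase prompts).
    cryptsetup open /dev/sda1 nas-ssd
    cryptsetup open /dev/sdb1 nas-hdd
    # Mount them where the shares expect to find them.
    mount /dev/mapper/nas-ssd /mnt/ssd
    mount /dev/mapper/nas-hdd /mnt/hdd
    # Only once the data is in place is it safe to bring up the services.
    systemctl start smbd
    systemctl start minio    # assumes a 'minio' systemd unit exists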

    Further Work
    I need to figure out how to use the OLED display on the NAS, and how to make it show something more useful than the current time and date, which is what it does in its default configuration with HardKernel’s own Linux distro. With DietPi, it does nothing by default. I’m thinking it should be able to show the percent usage of each of the 2 drives, at a minimum.
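
    Gathering those numbers is the easy half; a tiny sketch of that part (mount points are placeholders, and actually pushing the text to the HC4's OLED will depend on whichever display utility ends up being used):

    #!/usr/bin/env bash
    # Print percent-used for the two data drives; feed this to the OLED later.
    for mnt in /mnt/ssd /mnt/hdd; do
        pct=$(df --output=pcent "$mnt" | tail -n 1 | tr -d ' %')
        printf '%s: %s%% used\n' "$mnt" "$pct"
    done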

    I also need to establish a more responsible backup regimen. I’m way too lazy about this. Fortunately, I reason that I can keep the original HC2 in service, repurposed to accept backups from the main NAS. Again, I’m sort of micro-managing this since a huge amount of data isn’t worth backing up (remember the whole DataHoarder bit), but the most important stuff will be shipped off.

    The post Adventures In NAS first appeared on Breaking Eggs And Making Omelettes.

  • How HSBC and ING are transforming banking with AI

    9 November 2024, by Daniel Crough — Banking and Financial Services, Featured Banking Content

    We recently partnered with FinTech Futures to produce an exciting webinar discussing how analytics leaders from two global banks are using AI to protect customers, streamline operations, and support environmental goals.

    Watch the on-demand webinar: Advancing analytics maturity.


    Meet the expert panel

    Roshini Johri heads ESG Analytics at HSBC, where she leads AI and remote sensing applications supporting the bank’s net zero goals. Her expertise spans climate tech and financial services, with a focus on scalable analytics solutions.

     

    Marco Li Mandri leads Advanced Analytics Strategy at ING, where he focuses on delivering high-impact solutions and strengthening analytics foundations. His background combines analytics, KYC operations, and AI strategy.

     

    Carmen Soini Tourres works as a Web Analyst Consultant at Matomo, helping financial organisations optimise their digital presence whilst maintaining privacy compliance.

     

    Key findings from the webinar

    The discussion highlighted four essential elements for advancing analytics capabilities:

    1. Strong data foundations matter most

    “It doesn’t matter how good the AI model is. It is garbage in, garbage out,”

    Johri explained. Banks need robust data governance that works across different regulatory environments.

    2. Transform rather than tweak

    Li Mandri emphasised the need to reconsider entire processes:

    “We try to look at the banking domain and processes and try to re-imagine how they should be done with AI.”

    3. Bridge technical and business understanding

    Both leaders stressed the value of analytics translators who understand both technology and business needs.

    “We’re investing in this layer we call product leads,”

    Li Mandri explained. These roles combine technical knowledge with business acumen – a rare but vital skill set.

    4. Consider production costs early

    Moving from proof-of-concept to production requires careful planning. As Johri noted:

    “The scale of doing things in production is quite massive and often doesn’t get accounted for in the cost.”

    This includes:

    • Ongoing monitoring requirements
    • Maintenance needs
    • Regulatory compliance checks
    • Regular model updates

    Real-world applications

    ING’s approach demonstrates how banks can transform their operations through thoughtful AI implementation. Li Mandri shared several areas where the bank has successfully deployed analytics solutions, each benefiting both the bank and its customers.

    Customer experience enhancement

    The bank’s implementation of AI-powered instant loan processing shows how analytics can transform traditional banking.

    “We know AI can make loans instant for the customer, that’s great. Clicking one button and adding a loan, that really changes things,”

    Li Mandri explained. This goes beyond automation – it represents a fundamental shift in how banks serve their customers.

    The system analyses customer data to make rapid lending decisions while maintaining strong risk assessment standards. For customers, this means no more lengthy waiting periods or complex applications. For the bank, it means more efficient resource use and better risk management.

    The bank also uses AI to personalise customer communications.

    “We’re using that to make certain campaigns more personalised, having a certain tone of voice,”

    noted Li Mandri. This particularly resonates with younger customers who expect relevant, personalised interactions from their bank.

    Operational efficiency transformation

    ING’s approach to Know Your Customer (KYC) processes shows how AI can transform resource-heavy operations.

    “KYC is a big area of cost for the bank. So we see massive value there, a lot of scale,”

    Li Mandri explained. The bank developed an AI-powered system that:

    • Automates document verification
    • Flags potential compliance issues for human review
    • Maintains consistent standards across jurisdictions
    • Reduces processing time while improving accuracy

    This implementation required careful consideration of regulations across different markets. The bank developed monitoring systems to ensure their AI models maintain high accuracy while meeting compliance standards.

    In the back office, ING uses AI to extract and process data from various documents, significantly reducing manual work. This automation lets staff focus on complex tasks requiring human judgment.

    Sustainable finance initiatives

    ING’s commitment to sustainable banking has driven innovative uses of AI in environmental assessment.

    “We have this ambition to be a sustainable bank. If you want to be a sustainable finance customer, that requires a lot of work to understand who the company is, always comparing against its peers.”

    The bank developed AI models that:

    • Analyse company sustainability metrics
    • Compare environmental performance against industry benchmarks
    • Assess transition plans for high-emission industries
    • Monitor ongoing compliance with sustainability commitments

    This system helps staff evaluate the environmental impact of potential deals quickly and accurately.

    “We are using AI there to help our frontline process customers to see how green that deal might be and then use that as a decision point,”

    Li Mandri noted.

    HSBC’s innovative approach

    Under Johri’s leadership, HSBC has developed several groundbreaking uses of AI and analytics, particularly in environmental monitoring and operational efficiency. Their work shows how banks can use advanced technology to address complex global challenges while meeting regulatory requirements.

    Environmental monitoring through advanced technology

    HSBC uses computer vision and satellite imagery analysis to measure environmental impact with new precision.

    “This is another big research area where we look at satellite images and we do what is called remote sensing, which is the study of a remote area,”

    Johri explained.

    The system provides several key capabilities:

    • Analysis of forest coverage and deforestation rates
    • Assessment of biodiversity impact in specific regions
    • Monitoring of environmental changes over time
    • Measurement of environmental risk in lending portfolios

    “We can look at distant images of forest areas and understand how much percentage deforestation is being caused in that area, and we can then measure our biodiversity impact more accurately,”

    Johri noted. This technology enables HSBC to:

    • Make informed lending decisions
    • Monitor environmental commitments of borrowers
    • Support sustainability-linked lending programmes
    • Provide accurate environmental impact reporting

    Transforming document analysis

    HSBC is tackling one of banking’s most time-consuming challenges: processing vast amounts of documentation.

    “Can we reduce the onus of human having to go and read 200 pages of sustainability reports each time to extract answers?”

    Johri asked. Their solution combines several AI technologies to make this process more efficient while maintaining accuracy.

    The bank’s approach includes:

    • Natural language processing to understand complex documents
    • Machine learning models to extract relevant information
    • Validation systems to ensure accuracy
    • Integration with existing compliance frameworks

    “We’re exploring solutions to improve our reporting, but we need to do it in a safe, robust and transparent way.”

    This careful balance between efficiency and accuracy exemplifies HSBC’s approach to AI.

    Building future-ready analytics capabilities

    Both banks emphasise that successful analytics requires a comprehensive, long-term approach. Their experiences highlight several critical considerations for financial institutions looking to advance their analytics capabilities.

    Developing clear governance frameworks

    “Understanding your AI risk appetite is crucial because banking is a highly regulated environment,”

    Johri emphasised. Banks need to establish governance structures that:

    • Define acceptable uses for AI
    • Establish monitoring and control mechanisms
    • Ensure compliance with evolving regulations
    • Maintain transparency in AI decision-making

    Creating solutions that scale

    Li Mandri stressed the importance of building systems that grow with the organisation:

    “When you try to prototype a model, you have to take care about the data safety, ethical consideration, you have to identify a way to monitor that model. You need model standard governance.”

    Successful scaling requires:

    • Standard approaches to model development
    • Clear evaluation frameworks
    • Simple processes for model updates
    • Strong monitoring systems
    • Regular performance reviews

    Investing in people and skills

    Both leaders highlighted how important skilled people are to analytics success.

    “Having a good hiring strategy as well as creating that data literacy is really important,”

    Johri noted. Banks need to:

    • Develop comprehensive training programmes
    • Create clear career paths for analytics professionals
    • Foster collaboration between technical and business teams
    • Build internal expertise in emerging technologies

    Planning for the future

    Looking ahead, both banks are preparing for increased regulation and growing demands for transparency. Key focus areas include:

    • Adapting to new privacy regulations
    • Making AI decisions more explainable
    • Improving data quality and governance
    • Strengthening cybersecurity measures

    Practical steps for financial institutions

    The experiences shared by HSBC and ING provide valuable insights for financial institutions at any stage of their analytics journey. Their successes and challenges outline a clear path forward.

    Key steps for success

    Financial institutions looking to enhance their analytics capabilities should:

    1. Start with strong foundations
      • Invest in clear data governance frameworks
      • Set data quality standards
      • Build thorough documentation processes
      • Create transparent data tracking
    2. Think strategically about AI implementation
      • Focus on transformative rather than small changes
      • Consider the full costs of AI projects
      • Build solutions that can grow
      • Balance innovation with risk management
    3. Invest in people and processes
      • Develop internal analytics expertise
      • Create clear paths for career growth
      • Foster collaboration between technical and business teams
      • Build a culture of data literacy
    4. Plan for scale
      • Establish monitoring systems
      • Create governance frameworks
      • Develop standard approaches to model development
      • Stay flexible for future regulatory changes

    Learn more

    Want to hear more insights from these industry leaders? Watch the complete webinar recording on demand. You’ll learn:

    • Detailed technical insights from both banks
    • Extended Q&A with the speakers
    • Additional case studies and examples
    • Practical implementation advice
     
     
