Artificial Intelligence, Machine Learning and Data in Financial Services | Chapter 3 | Part 2

AI/ML and big data in finance: benefits and effects on the business models and activities of financial sector participants

2.3. Credit intermediation and assessment of creditworthiness 

Banks and fintech lenders increasingly rely on AI-based models and big data to evaluate the creditworthiness of prospective borrowers and to make underwriting decisions, both fundamental functions of finance. In credit scoring, machine learning models can predict borrower defaults with greater accuracy than standard statistical models such as logistic regression, especially when limited information is available (Bank of Italy, 2019; Albanesi and Vamossy, 2019). Financial institutions also use AI-based systems to detect fraud and to analyse interconnections between borrowers, allowing them to manage their lending portfolios more effectively.
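As an illustration of the kind of comparison behind such findings, the minimal sketch below benchmarks a logistic regression against a gradient-boosted model on synthetic default data. The dataset, features, and AUC metric are illustrative assumptions, not a reproduction of the cited studies.

```python
# Minimal sketch: comparing a standard logistic regression against a
# gradient-boosted model on a synthetic default-prediction task.
# All data here is randomly generated for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for borrower features (income, utilisation, history, ...)
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)  # ~10% defaults
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```

On data with non-linear feature interactions, the boosted model typically edges out the linear baseline; how far the gap generalises to real credit portfolios is exactly what the cited studies examine.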

Box 2.3. AI for fraud detection, AML/CFT screening and monitoring

The combination of artificial intelligence (AI) and big data has proved helpful in detecting fraudulent activity at financial institutions and FinTech lenders. It is used for client onboarding, conducting know-your-customer (KYC) checks, screening for anti-money laundering (AML) and countering the financing of terrorism (CFT) purposes during onboarding and ongoing customer due diligence, and identifying suspicious activity during continuous monitoring.

AI can be extremely helpful in detecting fraudulent activity and abnormal transactions by using image recognition software, risk models, and other advanced techniques. For instance, AI can identify fraudulent use of customers’ personal information, misrepresentation of products or services, and other scams. AI can also reduce the number of false positives (valid transactions that are wrongly rejected), leading to higher client satisfaction.
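As a rough illustration of the monitoring idea, the sketch below flags abnormal transactions with an off-the-shelf unsupervised detector. The features, distributions, and contamination rate are assumptions for demonstration, not a production fraud system.

```python
# Minimal sketch: flagging abnormal transactions with an unsupervised
# anomaly detector. Feature names and thresholds are illustrative
# assumptions, not a production fraud system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic transactions: [amount, hour_of_day, merchant_risk_score]
normal = rng.normal(loc=[50, 14, 0.2], scale=[30, 4, 0.1], size=(2000, 3))
fraud = rng.normal(loc=[900, 3, 0.8], scale=[200, 1, 0.1], size=(20, 3))
transactions = np.vstack([normal, fraud])

# contamination sets the expected share of anomalies; tuning it trades
# off missed fraud against false positives (wrongly rejected transactions)
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(transactions)  # -1 = anomalous, 1 = normal
print(f"flagged {np.sum(labels == -1)} of {len(transactions)} transactions")
```

The contamination parameter makes the false-positive trade-off explicit: setting it too high rejects more valid transactions, too low lets more fraud through.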

In Japan, a proof-of-concept project evaluated the feasibility and effectiveness of AI for AML/CFT on a shared platform. The project used an AI-based system for transaction screening and monitoring, trained on suspicious transaction reports previously filed by various financial institutions. The system helped compliance personnel identify suspicious transactions and triage transaction-screening results against sanctions lists (New Energy and Industrial Technology Development Organization, 2021).

Financial institutions rely on fraud detection capabilities to prevent financial crime. However, AI-based applications can also be used to circumvent these capabilities: AI-generated fraudulent images can now be indistinguishable from genuine photographs, posing significant challenges to authentication and verification functions within financial services (US Treasury, 2018).

The availability of big data and advanced AI-based analytics has transformed the way credit risk is evaluated. AI-powered credit scoring models combine conventional credit information with big data not directly associated with creditworthiness, such as social media data, digital footprints, and transactional data made available through Open Banking initiatives. These models can provide a more accurate and comprehensive credit risk assessment, enabling lenders to make better-informed credit decisions.

Using AI models in credit scoring can significantly reduce underwriting costs while enabling creditworthiness analysis of clients with limited credit history (so-called ‘thin files’). This can extend credit to viable companies that cannot demonstrate their viability through historical performance data or tangible collateral, enhancing access to credit and supporting the real economy by alleviating constraints on SME financing. Recent empirical analysis suggests it could also reduce the need for collateral by reducing the information asymmetries prevalent in credit markets (BIS, 2020).

Alternative scoring methods enabled by AI-based credit scoring can also improve credit approval rates for parts of the population that have historically been left behind, such as near-prime clients or borrowers in underbanked regions, promoting financial inclusion and enhancing access to credit for these groups.

However, it is important to note that AI-based credit scoring models remain untested over longer credit cycles or in a market downturn, and the empirical evidence on whether ML-driven methods advance financial inclusion is mixed. Some analysis suggests that the use of ML models for credit risk assessment results in cheaper access to credit only for majority ethnic groups (Fuster et al., 2017), while other work finds that lending decision rules based on ML predictions help reduce racial bias in the consumer loan market (Dobbie et al., 2018).

2.3.1. AI/ML-based credit scoring, transparency and fairness in lending

Despite the advantages of AI/ML-based models, such as speed, efficiency, and the ability to risk-score applicants without credit histories, there is a risk of disparate impact in credit outcomes and a potential for discriminatory or unfair lending (US Treasury, 2016). These models also face challenges related to the quality of the data used and the lack of transparency or explainability of the model, issues common to other applications of AI in finance.

Machine learning models created with good intentions can unintentionally generate biased conclusions and discriminate against certain groups of people based on race, gender, ethnicity, or religion (White & Case, 2017). Poorly designed and controlled AI/ML models can worsen or reinforce existing biases while making discrimination in credit allocation even harder to detect (Brookings, 2020).

As with any model used in financial services, the risk of ‘garbage in, garbage out’ exists in AI/ML-based models for risk scoring and beyond. Inadequate data may include poorly labelled or inaccurate data, data that reflects underlying human prejudices, or incomplete data (S&P, 2019). A neutral ML model trained on inadequate data risks producing inaccurate results even when later fed ‘good’ data. Equally, a well-trained neural network fed inadequate data will produce questionable output despite the quality of the underlying algorithm. This, combined with the lack of explainability of ML models, makes it harder to detect inappropriate use of data, or the use of unsuitable data, in AI-based applications.
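The ‘garbage in, garbage out’ point is easy to demonstrate. In the minimal sketch below, the same model is trained once on clean labels and once on partially mislabeled labels, then evaluated on clean test data; the synthetic dataset and 30% noise rate are illustrative assumptions.

```python
# Minimal sketch of 'garbage in, garbage out': the same model trained
# on clean vs. partially mislabeled data, evaluated on clean data.
# All data is synthetic and for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
y_noisy = y_tr.copy()
flip = rng.random(len(y_noisy)) < 0.3        # mislabel 30% of training data
y_noisy[flip] = 1 - y_noisy[flip]

for label, targets in [("clean labels", y_tr), ("30% mislabeled", y_noisy)]:
    model = LogisticRegression(max_iter=1000).fit(X_tr, targets)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"trained on {label}: clean-test AUC = {auc:.3f}")
```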

It is crucial to use high-quality, suitable data when making decisions: poor or inadequate data can lead to incorrect or biased decision-making. Unfair or discriminatory scoring may not be intentional on the part of the organisation deploying the model. Nevertheless, algorithms may combine data points that seem neutral on the surface and treat them as proxies for immutable attributes such as gender or race, circumventing existing non-discrimination laws (Hurley, 2017). For example, a credit officer might be careful not to include gender-based information as model input, yet the model may still infer gender from transaction activity and use that inference in the credit assessment, thereby violating the law. Biases may also be present in the input data itself: since the model trains on data from external sources that may already embed certain biases, it perpetuates historical biases.
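One common safeguard against such proxy effects is a leakage test: if the protected attribute can be predicted accurately from the candidate inputs, those inputs jointly encode it. A minimal sketch follows, with hypothetical features and a synthetic correlation; the 0.5 baseline comparison is the standard interpretation of AUC, but the exact pass/fail threshold would be a policy choice.

```python
# Minimal sketch of a proxy check: if a protected attribute can be
# predicted accurately from the candidate model inputs, those inputs
# may act as proxies even though the attribute itself is excluded.
# All data and feature names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
gender = rng.integers(0, 2, n)                 # protected attribute (held out)
# A transaction-pattern feature that correlates with the protected attribute
spend_profile = gender * 1.5 + rng.normal(size=n)
income = rng.normal(50, 10, n)                 # a genuinely neutral feature
X = np.column_stack([spend_profile, income])

X_tr, X_te, g_tr, g_te = train_test_split(X, gender, random_state=0)
probe = LogisticRegression().fit(X_tr, g_tr)   # try to recover the attribute
auc = roc_auc_score(g_te, probe.predict_proba(X_te)[:, 1])
print(f"protected-attribute AUC from 'neutral' inputs: {auc:.2f}")
# An AUC well above 0.5 signals leakage: the inputs jointly encode gender.
```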

AI-powered models, including ML-based models, can raise transparency issues because of their lack of explainability: it can be challenging to comprehend, follow, or replicate the decision-making process (see Section 3.4). Lending decisions in particular demand high accountability from lenders, who must be able to explain the basis on which credit is denied. Without such explanations, consumers cannot identify and contest unfair credit decisions, and have little insight into what steps they could take to improve their credit rating.
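For comparison, interpretable models allow straightforward ‘reason codes’. The sketch below decomposes a linear model’s score for a hypothetical denied applicant; the feature names and data are illustrative assumptions, and such a direct decomposition is precisely what opaque ML models lack.

```python
# Minimal sketch: generating simple 'reason codes' for a denied applicant
# from a linear credit model, where each feature's contribution is its
# coefficient times the applicant's deviation from the training mean.
# Feature names and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["credit_utilisation", "payment_delinquencies", "account_age_years"]
X = rng.normal(size=(1000, 3))
# In this toy data, higher utilisation/delinquency lowers repayment odds
y = (X @ np.array([-1.0, -1.5, 0.8]) + rng.normal(size=1000)) > 0

model = LogisticRegression().fit(X, y)
applicant = np.array([2.0, 1.5, -0.5])        # a hypothetical denied applicant
contrib = model.coef_[0] * (applicant - X.mean(axis=0))
for name, c in sorted(zip(features, contrib), key=lambda t: t[1]):
    print(f"{name}: contribution {c:+.2f}")
# The most negative contributions are candidate reasons for the denial.
```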

Regulations in developed economies ensure that specific data points are excluded from credit risk analysis: US regulations, for instance, prohibit the use of race or zip-code data, while UK regulations prohibit the use of protected-category data. Rules promoting anti-discrimination principles, such as the US fair lending laws, exist in many jurisdictions, and regulators worldwide are considering the bias and discrimination risks that AI/ML and algorithms can pose (White & Case, 2017).

In some jurisdictions, evidence of disparate treatment, such as lower average credit limits for members of protected groups than for other groups, is regarded as discrimination irrespective of whether there was any intention to discriminate. To mitigate such risks, it is necessary to have auditing mechanisms that verify model outcomes against baseline datasets, to test scoring systems for fairness and accuracy (Citron and Pasquale, 2014), and to establish governance frameworks for AI-enabled products and services that assign accountability to the human component of the project, among other measures.
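A minimal sketch of such an outcome audit is shown below: it compares approval rates and true-positive rates across groups on synthetic data. The decision threshold, the injected bias shift, and the choice of metrics are illustrative assumptions rather than any regulatory standard.

```python
# Minimal sketch of an outcome audit: comparing approval rates and
# true-positive rates across groups. Thresholds and data are
# illustrative assumptions, not a regulatory standard.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 = reference, 1 = protected group
creditworthy = rng.random(n) < 0.7       # ground-truth repayment outcome
# Hypothetical model scores with a small group-dependent shift (a bias)
score = creditworthy * 0.5 + rng.random(n) * 0.5 - group * 0.05
approved = score > 0.55

for g in (0, 1):
    mask = group == g
    approval_rate = approved[mask].mean()
    tpr = approved[mask & creditworthy].mean()   # equal-opportunity check
    print(f"group {g}: approval rate {approval_rate:.2%}, TPR {tpr:.2%}")
# Material gaps between groups would trigger review under a
# disparate-impact style test, regardless of intent.
```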

2.3.2. BigTech and financial services 

As BigTech companies increasingly use customer data to power AI models for financial services, concerns have arisen over data privacy and the potential exploitation of personal data for commercial gain (DAF/CMF(2019)29/REV1). Such practices could lead to discrimination against customers, including unfair availability and pricing of credit.

Access to customer data gives BigTech companies a significant competitive edge over traditional financial services providers, further strengthened by their use of AI to offer innovative, personalised, and more efficient services. However, BigTech dominance in certain market segments may lead to excessive market concentration and increase the market’s dependence on a small group of BigTech players. Such a situation could have systemic implications, depending on the size and scope of these players. It also raises concerns for financial consumers, who may not receive the same range of product options, pricing, or advice that traditional financial services providers offer, and for supervisors, who may face difficulties accessing and reviewing these firms’ activities.

The increasing concentration of key players in the AI industry poses a significant risk of anti-competitive behaviour. The emergence of a few dominant players in the market for AI solutions and for services incorporating AI technologies (such as cloud computing providers) is already being observed in some parts of the world (ACPR, 2018). This trend can create challenges for competition, especially given BigTech players’ privileged position with respect to customer data: these firms can leverage their data advantage to build monopolistic positions, hindering the entry of smaller players into the market and enabling effective price discrimination.

In late 2020, the European Union and the United Kingdom put forward regulatory proposals for digital markets; the EU proposal, known as the Digital Markets Act, aims to establish a framework governing large digital platforms designated as ‘gatekeepers’, a category that includes BigTech. The proposals aim to ensure fair and open digital markets while mitigating some of the risks associated with gatekeepers. They would oblige gatekeepers to provide business users with access to the data generated by their activities and to facilitate data portability, and would prohibit gatekeepers from using data obtained from business users to compete with them, addressing dual-role risks. The proposals also address self-preferencing, parity, and ranking requirements to ensure that a gatekeeper’s own services receive no favourable treatment over those of third parties.
