AI, Machine Learning and Big Data in Financial Services | Chapter 5

4. Policy responses and implications 

4.1. Recent policy movement around AI and finance

AI has had a significant impact across many domains of financial services, and policymakers have recognized the potential risks associated with its use. Consequently, AI regulation has become a priority in recent years. In May 2019, the OECD adopted its Principles on AI, the first intergovernmental standard on AI, established by governments to promote responsible and trustworthy AI. The Principles were developed with the input of an expert group drawn from various sectors. The OECD AI Principles place a strong focus on inclusive and sustainable growth, making them highly relevant to the global finance industry. 

AI Principles

In 2020, the European Commission released a White Paper setting out policy and regulatory options for an ‘ecosystem of excellence and trust’ in AI. The paper lists a range of measures to support the development and adoption of AI across the EU’s economy and public administration, and proposes a future regulatory framework for AI. It also examines the safety and liability aspects of AI. At the implementation level, the European Commission has initiated various projects, including the Infinitech consortium’s pilot projects, which aim to lower the barriers to AI-driven innovation, boost regulatory compliance, and encourage investment in the sector.

AI applications in finance at the European level

In 2019, the IOSCO Board identified AI and ML as a key priority, and in 2020 IOSCO issued a consultation report on the use of AI and ML by market intermediaries and asset managers. The report proposed six measures to assist IOSCO members in creating appropriate regulatory frameworks to supervise market intermediaries and asset managers that use such technologies, as set out in Box 4.3.

use of AI and ML by market intermediaries and asset managers

At the national level, governments have made efforts to discuss and regulate the use of AI in the financial industry. For example, the French ACPR established a task force in 2018 that brought together professionals from the financial sector and public authorities to discuss the opportunities and risks associated with AI in finance, as well as the challenges supervisors face. Similarly, in 2019 the Bank of England and the Financial Conduct Authority announced the AI Public-Private Forum to explore similar issues. In 2019, the Russian Federation enacted a National Strategy for the development of AI, followed in 2020 by a Concept for the regulation of AI technologies and robotics. In 2021, a Federal Law on Experimental Digital Innovation Regimes was passed, empowering the Bank of Russia to approve the launch of regulatory sandboxes, including for projects deploying AI solutions in finance. In Moscow, a five-year regulatory sandbox for the implementation of AI was launched in July 2020 under a dedicated federal law.

On March 31, 2021, the Office of the Comptroller of the Currency, the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau, and the National Credit Union Administration issued a joint Request for Information and comment on the use of artificial intelligence (AI), including machine learning, by financial institutions. The consultation highlights the benefits and risks associated with AI in finance, particularly around explainability, data use, and dynamic updating, and seeks input on questions related to these issues, including cybersecurity risks, fair lending considerations, and oversight of third parties.

On April 21, 2021, the European Commission proposed a regulation aiming to address the risks of AI and introduce unified rules on its use across various industries. The proposal suggests the establishment of a European AI Board to oversee the regulation. The regulation covers various areas, but the most stringent requirements apply to high-risk AI applications, such as creditworthiness assessment. For such high-risk AI, specific risk and quality management systems are required, along with conformity assessment, high-quality and representative data, error-free data, complete record-keeping, and transparency about an AI application’s use and operation. The proposed rules also mandate human oversight by qualified personnel, the use of kill switches, and explicit human confirmation of decision-making. Additionally, the system must ensure accuracy, robustness, and security; provide for post-market monitoring; notify the regulator of serious incidents; and be registered in a public register.

4.2. Policy considerations 

AI has brought about significant benefits to the financial services industry. Customers and market participants can now enjoy enhanced quality of services and improved efficiency provided by financial service providers. However, the use of AI-powered applications in finance presents new challenges, particularly in terms of explainability. Additionally, it amplifies existing risks that are present in financial markets, especially those related to data management and usage.

Policymakers and regulators have a crucial responsibility to ensure that the use of AI in finance is consistent with the promotion of financial stability, safeguarding financial consumers, and advancing market integrity and competition. It is vital to identify and mitigate potential risks that may arise from the deployment of AI techniques and to encourage the use of responsible AI. In some situations, existing regulatory and supervisory requirements may need to be clarified or modified to address any perceived incompatibilities with AI applications.

When implementing regulatory and supervisory requirements for AI techniques, it is crucial to take into account the context and the potential impact on consumers. Applying them in a thoughtful and proportionate manner can promote the use of AI while avoiding unnecessary restrictions that would stifle innovation.

It is crucial for policymakers to prioritize strengthening data governance by financial sector firms in order to protect consumers in AI applications in finance. Risks concerning data management, such as data privacy, confidentiality, data concentration, and their potential impact on competitive dynamics in the market, need to be taken into account, as do the risks of unintended bias and discrimination against specific population segments and of data drift. The importance of data cannot be overstated, not only for training, testing, and validating machine learning models, but also for determining whether those models retain their predictive power in extreme situations.

Policymakers could consider implementing specific requirements or best practices for data management when using AI-based techniques. This could include ensuring high data quality, using an adequate dataset depending on the intended use of the AI model, and implementing safeguards to avoid potential biases. One effective best practice could be to check model results against baseline datasets and other tests to mitigate the risks of discrimination, especially against protected classes. To reduce potential biases, it could be helpful to validate the appropriateness of the variables used by the model. Additionally, tools could be developed to monitor and correct for any conceptual drift. Authorities may also want to consider introducing requirements for greater transparency and opt-out options for using personal data.
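To make the idea of checking model results against baseline datasets concrete, the sketch below computes a Population Stability Index (PSI), a common industry measure of drift between a model's development-time score distribution and the live one. The function name and the thresholds quoted in the docstring are illustrative industry conventions, not requirements drawn from any regulation.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare a live score distribution ("actual") against the baseline
    used at model development time ("expected").

    A common industry rule of thumb (not a regulatory threshold):
    PSI < 0.10 stable, 0.10-0.25 moderate shift, > 0.25 material drift
    that should trigger a model review.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    psi = 0.0
    for i in range(bins):
        left = lo + i * width
        right = hi if i == bins - 1 else left + width

        def share(values):
            # Last bucket is closed on the right so the maximum is counted.
            if i == bins - 1:
                n = sum(1 for v in values if left <= v <= right)
            else:
                n = sum(1 for v in values if left <= v < right)
            return max(n / len(values), 1e-6)  # floor avoids log(0)

        e, a = share(expected), share(actual)
        psi += (a - e) * math.log(a / e)
    return psi
```

A supervisor-facing workflow might compute this on every scoring batch and log the result alongside the model version, so that drift above the review threshold leaves an audit trail.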

Policymakers could consider implementing disclosure requirements around the use of AI techniques in providing financial services. This would ensure financial consumers are informed about the use of AI techniques in delivering a product and about potential interaction with an AI system instead of a human being. By providing clear information about the AI system’s capabilities and limitations, consumers can make informed choices among competing products. In addition, authorities could consider introducing suitability requirements for AI-driven financial services similar to those applicable to the sale of investment products. Such requirements would help financial service providers better assess whether prospective clients have a solid understanding of how AI affects product delivery.

The limited transparency and explainability of many advanced AI-based ML models is a pressing issue. It is difficult to reconcile with existing laws and regulations, as well as with financial service providers’ internal governance, risk management, and control frameworks. The lack of explainability limits users’ ability to understand how their models operate within the market and how they may contribute to market shocks. It can also amplify systemic risks such as pro-cyclicality, convergence, and heightened market volatility through simultaneous large-scale purchases and sales, particularly when third-party standardized models are used. Moreover, users’ inability to adjust their strategies in times of stress can exacerbate market volatility and bouts of illiquidity, worsening flash-crash events. 

As the use of AI in financial services becomes more prevalent, regulators must grapple with the challenge of reconciling the lack of explainability in AI with existing laws and regulations. Financial services firms may need to update and adjust their model governance and risk management frameworks to address this challenge. Supervisors may need to shift their focus from documenting the development process to assessing model behaviour and outcomes, and explore more technical ways of managing risk, such as adversarial model stress testing or outcome-based metrics (Gensler and Bailey, 2020).
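As an illustration of what an outcome-based metric of the kind mentioned above might look like, the sketch below nudges each input feature up and down and measures how often the model's decision flips; a model whose decisions flip under tiny perturbations is fragile in exactly the way supervisors worry about. The toy scoring rule and the 5% noise level are illustrative assumptions, not a method prescribed by any authority.

```python
def decision_stability(model, x, noise=0.05):
    """Outcome-based robustness check: perturb each feature up and down
    by `noise` (one at a time) and report the share of perturbations
    that leave the model's decision unchanged (1.0 = fully stable)."""
    base = model(x)
    unchanged, total = 0, 0
    for i in range(len(x)):
        for sign in (-1, 1):
            perturbed = list(x)
            perturbed[i] = x[i] * (1 + sign * noise)
            unchanged += model(perturbed) == base
            total += 1
    return unchanged / total

# A toy scoring rule standing in for an opaque ML credit model.
def toy_model(features):
    return "approve" if sum(features) > 1.0 else "decline"
```

Applied to applicants far from the decision boundary the score is 1.0, while applicants sitting near the boundary score lower, flagging decisions that hinge on measurement noise rather than substance.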

Despite efforts to enhance the transparency of AI, many users and supervisors still distrust AI applications because of their limited explainability. While improving explainability is commonly seen as a way to build trust, additional measures may be necessary to ensure that decisions based on machine learning models function as intended. 

Policymakers could consider requiring clear model governance frameworks and individual accountability to build trust in AI-driven systems. Financial services providers may need to establish explicit governance frameworks that designate clear lines of responsibility for the development and oversight of AI-based systems throughout their lifecycle, from development to deployment, strengthening existing arrangements for AI-related operations. Internal model governance frameworks may also need to be adjusted to better capture risks arising from the use of AI and to include consumer outcomes, along with an evaluation of whether and how such outcomes are achieved using AI technologies. Proper documentation and audit trails of these processes can assist supervisors in overseeing such activities.

Financial firms need to provide increased assurance about the robustness and resilience of AI models as policymakers seek to prevent the build-up of systemic risks. This will help gain trust in AI applications in finance. To prevent systemic threats and vulnerabilities that may arise in times of stress, the performance of models may need to be tested in extreme market conditions. Introducing automatic control mechanisms, such as kill switches that trigger alerts or switch-off models in times of stress, can assist in mitigating risks. However, it can also expose the firm to new operational risks. Back-up plans, models, and processes should be in place to ensure business continuity in case the models fail or act unexpectedly. Additionally, regulators could consider add-ons or minimum buffers if banks were to determine risk weights or capital based on AI algorithms (Gensler and Bailey, 2020).
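A minimal sketch of such an automatic control mechanism, assuming realised volatility as the trigger indicator (the class name, threshold, and fallback action are all illustrative): once tripped, the switch latches and the model stays offline until a human explicitly resets it, reflecting the human-oversight expectations discussed above.

```python
class KillSwitchModel:
    """Wrap a trading/scoring model with a simple automatic control:
    if a live risk indicator (here, realised volatility) breaches a
    threshold, the wrapper stops forwarding the model's output, falls
    back to a safe default action, and records an alert."""

    def __init__(self, model, vol_threshold, fallback="hold"):
        self.model = model
        self.vol_threshold = vol_threshold
        self.fallback = fallback
        self.tripped = False
        self.alerts = []

    def __call__(self, features, realised_vol):
        if realised_vol > self.vol_threshold:
            self.tripped = True
            self.alerts.append(f"kill switch tripped at vol={realised_vol:.3f}")
        if self.tripped:  # latched: stays off until a human resets it
            return self.fallback
        return self.model(features)

    def reset(self):
        """Explicit human sign-off is required to re-enable the model."""
        self.tripped = False
```

The latching behaviour is the operational risk trade-off the paragraph notes: the fallback keeps the firm safe during stress, but a business-continuity plan is still needed for however long the model stays switched off.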

It’s important to have frameworks for appropriate training, retraining, and rigorous testing of AI models to ensure that ML model-based decision-making is operating as intended and in compliance with applicable rules and regulations. The datasets used for training must be large enough to capture non-linear relationships and tail events in the data, even if synthetic, to improve the reliability of such models in times of unpredicted crisis. Continuous testing of ML models is indispensable to identify and correct model drifts, and ongoing monitoring and validation of AI models are also necessary. 

The ongoing monitoring and validation of AI models are essential for risk management, and regulators should promote these practices as the most effective way to improve model resilience and to prevent and address model drift. Standardized procedures for monitoring and validation could help enhance model resilience and identify whether a model needs adjustment, redevelopment, or replacement. It is crucial to separate model validation, approvals, and sign-offs from the development process, and to document them thoroughly for supervisory purposes. The frequency of testing and validation may vary depending on the complexity of the model and the materiality of the decisions it makes.

It is essential to prioritize human decision-making in high-value use cases, such as lending decisions that significantly affect consumers. To build trust in AI systems, authorities could introduce processes that allow customers to challenge the outcome of AI models and seek redress. The GDPR is an example of such policies, providing individuals with the right to obtain human intervention and contest algorithmic decisions. Additionally, clear communication from the official sector that sets expectations can further increase confidence in AI applications in finance.  
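One way such a challenge-and-redress process might be wired into a decision pipeline is sketched below: borderline scores and customer-contested outcomes bypass automation and are queued for human review, in the spirit of the GDPR right to human intervention. The function name, threshold, band, and return labels are illustrative assumptions.

```python
def route_decision(score, threshold=0.5, band=0.05, contested=False):
    """Route a model decision: clear-cut scores are automated, while
    borderline scores or customer-contested outcomes are escalated to
    a human reviewer (mirroring GDPR-style rights to contest)."""
    if contested or abs(score - threshold) < band:
        return "human_review"
    return "approve" if score >= threshold else "decline"
```

Routing borderline cases to humans by default also gives supervisors a natural audit point: the share of decisions escalated, and their outcomes, can be reported alongside the model's performance metrics.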

Policymakers have an essential role in supporting innovation in the finance sector while ensuring that financial consumers and investors are protected and that the markets around such products and services remain fair, orderly, and transparent. They should consider the increased technical complexity of AI and whether resources will need to be deployed to keep pace with technological advances. Investment in research can help resolve some of the issues around the explainability and unintended consequences of AI techniques. Investment in skills for finance sector participants and policymakers will enable them to follow technological advances and maintain a multidisciplinary dialogue at the operational, regulatory, and supervisory levels. Closer cooperation between IT staff and more traditional finance experts could be one way to manage the trade-off between model predictability and explainability and to respond to legal and regulatory requirements for auditability and transparency. Building bridges between disciplines that work in silos, such as deep learning and symbolic approaches, may be needed to improve explainability in AI-based methods. Enforcement authorities, in particular, may need to be technically capable of inspecting AI-based systems and empowered to intervene when required, while also enjoying the benefits of this technology by deploying AI in RegTech/SupTech applications.

The financial industry must balance innovation with consumer protection and market transparency, and policymakers play a crucial role in achieving this balance. As AI becomes increasingly prevalent in the sector, policymakers should consider strengthening their defences against potential risks associated with AI. Clear communication about AI adoption and the safeguards in place can help build trust and encourage the use of innovative techniques. It’s essential to maintain a multidisciplinary dialogue between policymakers and the industry, both locally and globally, given the ease of cross-border financial services.
