AI, Machine Learning and Big Data in Financial Services | Chapter 4 | Part 3


Emerging risks from the use of AI/ML/Big Data and possible risk mitigation tools (Chapter 4) Part 3

3.6. Governance of AI systems and accountability

Solid governance arrangements and transparent accountability mechanisms are crucial when deploying AI models in high-value decision-making scenarios such as credit access and investment portfolio allocation. The entities involved in developing, deploying, or operating AI systems must take responsibility for their proper functioning (OECD, 2019). Additionally, as a safeguard, human oversight may be necessary from the product design phase and throughout the lifecycle of AI products and systems (European Commission, 2020).

Currently, financial market participants using AI rely on existing governance and oversight arrangements for these technologies, as AI-based algorithms are not considered fundamentally different from conventional ones. Existing governance frameworks applicable to models can form the basis for frameworks developed or adapted for AI activity, given that many of the considerations and risks associated with AI also apply to other types of models. Explicit governance frameworks that designate clear lines of responsibility for developing and overseeing AI-based systems throughout their lifecycle, from development to deployment, could further strengthen existing arrangements for AI-related operations. Internal governance frameworks could include minimum standards or best practice guidelines and approaches for implementing such policies. Internal model committees set the model governance standards and processes that financial service providers follow for model building, documentation, and validation for any model, including AI-driven ML models.

I completely agree that existing model governance processes are challenging to apply to AI models, which change frequently and whose individual versions may be short-lived. Retaining the data and code needed to reproduce inputs and outputs as of a past date seems like a viable way to mitigate this problem. However, the non-deterministic nature of many ML models makes it difficult to guarantee that the same model will be reproduced even from the same input data. More work is needed to develop effective governance frameworks for AI models.
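To make the reproducibility point more concrete, below is a minimal Python sketch of one way to retain the evidence needed to replicate a past model run: fixing the random seed and recording a hash of the exact training data alongside the model parameters. The model choice, function names and the fields of the audit record are illustrative assumptions rather than a prescribed approach, and residual non-determinism (e.g. from hardware or parallelism) may still prevent bit-identical results.

```python
# Minimal sketch: pinning seeds and hashing training artefacts so that a model
# run can be replicated and audited at a later date. All names are illustrative.
import hashlib
import json
import random

import numpy as np


def dataset_fingerprint(X: np.ndarray, y: np.ndarray) -> str:
    """Hash the exact training data so auditors can verify which snapshot was used."""
    h = hashlib.sha256()
    h.update(np.ascontiguousarray(X).tobytes())
    h.update(np.ascontiguousarray(y).tobytes())
    return h.hexdigest()


def train_reproducibly(X, y, seed: int = 42):
    """Fix every source of randomness we control; residual non-determinism
    (e.g. GPU kernels, thread scheduling) may still prevent identical models."""
    random.seed(seed)
    np.random.seed(seed)
    from sklearn.ensemble import GradientBoostingClassifier
    model = GradientBoostingClassifier(random_state=seed)
    model.fit(X, y)
    audit_record = {
        "seed": seed,
        "data_sha256": dataset_fingerprint(X, y),
        "model_params": model.get_params(),
    }
    return model, audit_record


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    model, record = train_reproducibly(X, y)
    print(json.dumps({k: record[k] for k in ("seed", "data_sha256")}, indent=2))
```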

It is essential to consider the intended outcomes for consumers when developing a governance framework for AI technologies. This framework should also include an assessment of whether and how these outcomes are achieved. In more complex deep learning models, there may be concerns about ultimate control of the model, since AI could potentially behave in a way that contradicts consumer interests, such as producing biased results in credit underwriting, as mentioned earlier. Moreover, the autonomous behaviour of some AI systems during their lifetime may entail significant product changes that affect safety, necessitating a new risk assessment (European Commission, 2020).

Box 3.4. ML model governance and model committees 

Financial service providers usually follow similar model governance processes for building, documenting, and validating machine learning (ML) models as they do for traditional statistical models.

Financial institutions must follow model governance best practices when building statistical models for credit and consumer finance decisions. This involves using appropriate datasets, ensuring that certain types of data are not used in the model (for example, to avoid discrimination), rigorous testing and validation, and ensuring consistency of production input data with the data used to build the model. Documentation and audit trails are essential for deployment decisions, design, and production.

It is essential to monitor models so that they do not produce results that may indicate discriminatory treatment. A crucial aspect is that it should be possible to understand why the model generated a specific output. This is where model governance frameworks come into play, ensuring the models are continuously monitored.
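As an illustration of the kind of monitoring check described above, the following Python sketch compares model approval rates across groups and flags a large disparity for review. The column names, score threshold and the "80% rule" alert level are illustrative assumptions, not a recommended standard.

```python
# Minimal sketch of one fairness-monitoring check: comparing model approval
# rates across a protected attribute to flag potentially discriminatory
# outcomes. Thresholds and column names are illustrative.
import pandas as pd


def approval_rate_by_group(scores: pd.Series, group: pd.Series, threshold: float = 0.5) -> pd.Series:
    """Share of applicants approved (score above threshold) in each group."""
    approved = scores >= threshold
    return approved.groupby(group).mean()


def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    return rates.min() / rates.max()


if __name__ == "__main__":
    data = pd.DataFrame({
        "score": [0.9, 0.4, 0.7, 0.2, 0.8, 0.3],
        "group": ["A", "A", "B", "B", "A", "B"],
    })
    rates = approval_rate_by_group(data["score"], data["group"])
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # the '80% rule' is used here purely as an illustrative alert level
        print("ALERT: review model for potentially discriminatory treatment")
```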

Financial services firms set up model governance committees or review boards to oversee the design, approval, and implementation of model governance processes. Model validation is carried out using holdout datasets as part of such processes. Other standard procedures include monitoring the stability of inputs, outputs, and parameters. Such internal committees are expected to become more prevalent with the broader adoption of AI by financial firms, and their roles and competencies may need to be upgraded to accommodate the complexities introduced by AI-based models. For instance, the frequency and methods of validation for AI-based models may need to differ from those applied to linear models.
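One common way to monitor the stability of model inputs, as mentioned above, is the Population Stability Index (PSI), which compares the distribution of a feature in production against the distribution in the data used to build the model. The Python sketch below, including its bucketing scheme and the 0.25 alert level, is an illustrative assumption rather than a prescribed method.

```python
# Minimal sketch of input-stability monitoring using the Population Stability
# Index (PSI): production data is compared against the model-building data.
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """PSI between the training ('expected') and production ('actual') distributions."""
    edges = np.quantile(expected, np.linspace(0, 1, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch values outside the training range
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    training_feature = rng.normal(0.0, 1.0, 10_000)
    production_feature = rng.normal(0.6, 1.3, 10_000)   # drifted distribution
    psi = population_stability_index(training_feature, production_feature)
    print(f"PSI = {psi:.3f}")
    if psi > 0.25:  # 0.25 is a commonly used, but here purely illustrative, alert level
        print("ALERT: significant input drift - escalate to the model committee")
```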

AI is playing an increasingly important role in the RegTech industry. Financial services companies are implementing AI-powered solutions to improve their model governance. They are enhancing the automated processes that monitor and control the data consumed by the models in production and improving the automatic monitoring of model outputs.

The ultimate responsibility, oversight and accountability over AI-based systems lie with the executive and board levels of management of the financial services provider, who have to establish an organisation-wide approach to model risk control and ensure that the level of model risk stays within their risk appetite. Other functions, such as engineers/programmers or data analysts, who have previously not been central to supervisory review, may be subject to increased scrutiny commensurate with their growing role in deploying AI-based financial products and services. Accountability for AI-related systems may therefore need to extend beyond senior managers and the Board to the hands-on professionals responsible for programming and developing the models and to those using them to deliver customer services, at a minimum at the internal risk management level, since the responsibility for explaining such models to senior managers and the Board lies with these technical functions. It is also worth noting that some jurisdictions may require a third-party audit to validate that the model performs according to its intended purpose. Strong governance also includes documentation of model development and validation.

3.6.1. Outsourcing and third-party providers 

Outsourcing AI techniques to third parties raises challenges for competitive dynamics (concentration risk) and gives rise to systemic vulnerabilities related to an increased risk of convergence.

Possible risks associated with relying on certain third-party providers include increased concentration in areas such as data collection and management (e.g. dataset providers), technology (e.g. third-party model providers), and infrastructure (e.g. cloud providers). As AI models and techniques become more commoditised through cloud adoption, there is a greater risk of dependency on outsourced solutions, which can pose new challenges for competitive dynamics and lead to potentially oligopolistic market structures in these services. 

The use of third-party models can create risks of convergence at the firm and systemic levels, especially if there is a lack of diversity in the third-party models available in the market. This may result in herding and liquidity issues during times of stress when liquidity is most needed. The impact of these risks is likely to be further amplified by the reduced warehousing capacity of traditional market-makers, who would otherwise stabilise markets by providing ample liquidity during periods of market stress through active market-making. Smaller institutions are more susceptible to the risk of herding as they are more likely to rely on third-party providers to outsource ML model creation and management, thus lacking the in-house expertise to understand and govern such models fully.

It’s important to note that outsourcing AI techniques or enabling technologies and infrastructure can pose challenges regarding accountability and concentration risks. To manage these risks effectively, it’s crucial to have proper governance arrangements and contractual modalities, as with any other type of outsourced service. Additionally, finance providers should have the necessary skills to perform due diligence on, and audit, the services provided by third parties. It’s worth noting that over-reliance on outsourcing can increase the risk of service disruption, which could have a potential systemic impact on the markets. To mitigate this, businesses should have contingency and security plans in place so they can continue to function as usual in the event of a disruption.

3.7. Regulatory considerations, fragmentation and potential incompatibility with existing regulatory requirements 

Although many countries have dedicated AI strategies (OECD, 2019), only a few jurisdictions currently have requirements that specifically target AI-based algorithms and models. Generally, regulation and supervision of ML applications are based on overarching requirements for systems and controls (IOSCO, 2020). This involves rigorous testing of the algorithms used before they are deployed in the market and continuous monitoring of their performance throughout their lifecycle.
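The pre-deployment testing and lifecycle monitoring described above are often operationalised as a "champion-challenger" validation gate, in which a new model is only promoted if it clears a documented performance bar. The Python sketch below shows one possible form of such a gate; the performance floor, models and data are illustrative assumptions, not a regulatory requirement.

```python
# Minimal sketch of a pre-deployment validation gate: a challenger model is
# promoted only if it clears a minimum holdout performance bar and does not
# underperform the current champion. All thresholds and models are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

MIN_AUC = 0.70  # illustrative internal performance floor


def promote_challenger(champion, challenger, X_holdout, y_holdout) -> bool:
    """Return True only if the challenger passes the documented deployment gate."""
    champ_auc = roc_auc_score(y_holdout, champion.predict_proba(X_holdout)[:, 1])
    chall_auc = roc_auc_score(y_holdout, challenger.predict_proba(X_holdout)[:, 1])
    print(f"champion AUC={champ_auc:.3f}, challenger AUC={chall_auc:.3f}")
    return chall_auc >= MIN_AUC and chall_auc >= champ_auc


if __name__ == "__main__":
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.3, random_state=0)
    champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    challenger = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    decision = promote_challenger(champion, challenger, X_hold, y_hold)
    print("deploy challenger" if decision else "keep champion; document the rejection")
```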

The point made in the text is quite relevant. While most jurisdictions apply a technology-neutral approach to regulate financial market products, the increasing complexity of some innovative use cases in finance may challenge this approach. Moreover, the regulatory regimes in the financial sector may fall short in addressing the systemic risks posed by a potential broad adoption of AI techniques such as deep learning, given the depth of technological advances in this area. This is a topic that requires attention and careful consideration moving forward. 

I completely agree. Ensuring that AI techniques are compatible with existing legal and regulatory requirements is essential. However, some advanced AI techniques may not meet these requirements, which could lead to potential incompatibility issues. One such issue could be the lack of transparency and explainability of some ML models, which makes it difficult to understand how they work. Additionally, continuously adapting deep learning models could pose challenges in terms of compliance. Another area of concern is data collection and management. While the EU GDPR framework for data protection imposes limits on how long individual data may be stored, firms may need to keep records of the datasets used to train algorithms for audit purposes. This could be particularly challenging given the practical implications and costs of retaining such large datasets.

Some governments, particularly the EU, have recognised the need to update or clarify legislation in certain areas, such as liability, to ensure practical application and enforcement. This is due to the lack of transparency in AI systems, which makes it challenging to identify and prove breaches of laws, including legal provisions that protect fundamental rights, determine liability, and meet the requirements for compensation claims. Over the medium term, regulators and supervisors may have to adjust regulations and supervisory methods to accommodate the new realities introduced by the implementation of AI, such as concentration and outsourcing.

Industry participants have noted a potential risk of fragmentation of the regulatory landscape for AI at the national, international, and sectoral levels, highlighting the need for greater consistency so that these techniques can function across borders, according to a report by the Bank of England and the FCA. Many AI principles, guidance documents, and best practices have been published recently, alongside existing regulations applicable to AI models and systems. While these are valuable in addressing potential risks, there are differing views on their practical value and on the difficulty of translating such principles into effective practical guidance, for example through real-life examples.

I completely agree with you. The availability of standardised AI tools can make it easier for non-regulated entities to offer investment advisory or other services without the necessary certification or licensing. This could lead to non-compliant practices and regulatory arbitrage. In addition, big tech companies with access to large datasets from their primary activities could also take advantage of this situation. It’s essential to ensure proper regulations and certifications are in place to prevent such practices and protect consumers.

3.8. Employment risks and the question of skills

Financial services providers and regulators must possess the technical knowledge to operate and inspect AI-based systems and take appropriate action when necessary. The lack of adequate skills is a potential source of vulnerability on both the industry and the regulatory/supervisory sides, and could result in employment issues in the financial sector. Integrating AI and big data in finance requires specific skill sets that only a few financial practitioners currently have. Companies must invest in developing human capital with the necessary skills to extract value from these technologies and take advantage of vast amounts of unstructured data sources in order to fully capitalise on the potential of AI-based models and tools.

From an industry perspective, AI deployment requires a team of professionals with a combination of scientific expertise in AI, computer science skills such as programming and coding, and financial sector knowledge.

It’s interesting how roles for specialists in IT or finance have been separated in today’s financial market. However, as financial institutions increasingly adopt AI, there will be a growing demand for experts who combine finance knowledge with computer science expertise. Compliance professionals and risk managers must understand how AI techniques and models work so they can audit, oversee, challenge, and approve their use. In the same vein, senior managers responsible for such practices should be able to comprehend and follow their development and implementation.

The impact of AI and ML on the financial industry is expected to have both positive and negative effects on employment. On the one hand, there will be high demand for skilled employees in areas such as AI methods, advanced mathematics, software engineering, and data science. On the other hand, there is a concern that these technologies may lead to significant job losses across the industry, as predicted by executives of financial services firms (Noonan, 2018; US Treasury, 2018). However, it is expected that financial market practitioners and risk management experts will gradually gain experience and expertise in AI over the medium term, as AI models will coexist with traditional models until AI becomes more mainstream.

Over-reliance on fully automated AI-based systems could lead to a higher risk of service disruption with a potential systemic impact on the markets. Should such systems suffer technical or other failures, financial service providers should be prepared to substitute the automated AI systems with well-trained humans who can act as a safety net and prevent disturbance in the markets. These considerations are expected to become increasingly important as AI deployment becomes ubiquitous across markets.

I completely agree with the statement that skills and technical expertise are becoming increasingly important, especially from a regulatory and supervisory perspective. Financial sector regulators and supervisors must stay updated with the latest technology and enhance their skills to supervise AI-based applications in finance effectively. Enforcement authorities must also be technically capable of inspecting AI-based systems and have the power to intervene when required. This is essential to ensure the proper functioning of the financial sector. Moreover, the upskilling of policymakers can enable them to expand their use of AI in RegTech and SupTech, a crucial area of application of innovation in the official sector. 

I completely agree. AI should be viewed as a tool that enhances our capabilities as humans rather than replacing them. By combining the strengths of both humans and AI, we can achieve better results and decision-making. However, we still need to maintain some level of human oversight to ensure that any risks or vulnerabilities that arise from using AI are kept to a minimum. To make this possible, we need to identify the points at which humans and AI can work together most effectively, creating a ‘human in the loop’ approach. This will be critical for implementing a successful combined ‘man and machine’ approach. 
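A simple way to picture the 'human in the loop' approach is a decision policy that automates only high-confidence cases and refers borderline ones to a human reviewer, as in the Python sketch below; the confidence band used is an illustrative assumption.

```python
# Minimal sketch of a 'human in the loop' arrangement: the model decides
# automatically only when its score is clearly high or low, and defers
# borderline cases to a human reviewer. The band limits are illustrative.
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str        # "approve", "decline" or "refer_to_human"
    model_score: float


def decide_with_human_in_the_loop(score: float,
                                  approve_above: float = 0.8,
                                  decline_below: float = 0.2) -> Decision:
    """Automate only clear-cut cases; route uncertain ones to human review."""
    if score >= approve_above:
        return Decision("approve", score)
    if score <= decline_below:
        return Decision("decline", score)
    return Decision("refer_to_human", score)


if __name__ == "__main__":
    for s in (0.93, 0.55, 0.07):
        print(decide_with_human_in_the_loop(s))
```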
