Executive Summary of AI, Machine Learning and Big Data in Finance | Chapter 1

Artificial intelligence (AI) in finance 

Machine-based systems with varying levels of autonomy are called Artificial Intelligence (AI) systems. They can make predictions, recommendations or decisions for a given set of human-defined objectives. AI techniques increasingly use massive amounts of alternative data sources and data analytics, known as ‘big data’. These gigantic datasets feed machine learning (ML) models, which automatically learn from the data and improve predictability and performance through experience without being programmed to do so by humans.

The digitalization trend, already underway before the COVID-19 pandemic, has been accelerated and intensified by the crisis. This includes the increasing use of AI, and global spending on AI is predicted to double from USD 50 billion in 2020 to more than USD 110 billion in 2024 (IDC, 2020). AI is being more commonly adopted in areas like asset management, algorithmic trading, credit underwriting, and blockchain-based financial services, thanks to the abundance of available data and the increased affordability of computing capacity.

The use of AI in finance is predicted to offer significant competitive benefits for financial firms in two main areas: (a) by improving the firms’ efficiency through cost reduction and productivity enhancement, leading to higher profitability (such as improved decision-making processes, automated execution, gains from better risk management and regulatory compliance, optimization of back-office and other processes); and (b) by enhancing the quality of financial products and services offered to consumers (such as new product offerings and high customization of products and services). This competitive advantage can, in turn, benefit financial consumers by increasing the quality and variety of products, providing personalization, and reducing costs. 

Why is the deployment of AI in finance relevant to policymakers?

Using AI in finance can create or increase financial and nonfinancial risks, which may raise consumer and investor protection concerns. The application of AI can amplify the risks that may affect the safety and stability of a financial institution due to the lack of transparency and interpretability of AI models. This can lead to potential pro-cyclicality and systemic risks in the markets. The complexity of AI techniques and their dynamic adaptability can create issues in understanding how they generate results, posing a challenge to existing financial supervision and internal governance frameworks. Additionally, it may even challenge the technology-neutral approach to policymaking. While many risks associated with AI in finance are not unique to AI, its use can magnify such vulnerabilities. AI, particularly in consumer protection, poses risks of biased, unfair or discriminatory results and data management and usage concerns. Advanced AI applications with a high level of autonomy can present a more significant potential for risks. 

Figure 1. Relevant problems and risks arising from the deployment of AI in finance

Source: OECD staff illustration.

How is AI impacting parts of the financial markets? 

Asset management and buy-side activities in the market involve applying AI techniques for asset allocation, stock selection, risk management and operational workflow optimization. Learning models can identify signals and underlying relationships in big data, enabling better decision-making. However, AI techniques may be more prevalent among prominent asset managers or institutional investors with the necessary resources to invest in such technologies.
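
To make the idea of 'learning signals from big data' concrete, the following minimal sketch fits a tree-based model to a synthetic panel of candidate features and inspects which of them appear informative for next-period returns. The features, the random-forest choice and all parameters are illustrative assumptions, not a description of any firm's actual process.

```python
# Minimal sketch: fitting a learning model to detect return signals in a
# panel of candidate features (illustrative only; features and model choice
# are assumptions, not a reference to any firm's actual process).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_obs, n_features = 2_000, 20
X = rng.normal(size=(n_obs, n_features))  # candidate signals (momentum, value, sentiment, ...)
y = 0.1 * X[:, 0] - 0.05 * X[:, 3] + rng.normal(scale=1.0, size=n_obs)  # next-period returns

model = RandomForestRegressor(n_estimators=200, max_depth=4, random_state=0)
for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(X):
    model.fit(X[train_idx], y[train_idx])
    mse = mean_squared_error(y[test_idx], model.predict(X[test_idx]))
    print(f"out-of-sample MSE: {mse:.3f}")

# Feature importances suggest which candidate signals carry information.
print(model.feature_importances_.round(3))
```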

Regarding trading, AI introduces an additional layer of complexity to traditional algorithmic trading. AI algorithms learn from data inputs and continuously evolve into computer-programmed algorithms that can identify and execute trades without human intervention. In highly digital markets, such as equities and FX markets, AI algorithms can improve liquidity management and the execution of large orders with minimal market impact. They do this by dynamically optimizing order size and execution based on market conditions. In addition, traders can use AI for risk management and order flow management to streamline execution and increase efficiency.
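
As an illustration of dynamic order sizing, the sketch below slices a large parent order by participating in a fraction of recently traded volume and scaling down when volatility is elevated. The participation rate, volatility reference and overall rule are assumptions for exposition, not a production execution algorithm.

```python
# Minimal sketch of dynamic order sizing for executing a large parent order.
# The participation rate and volatility adjustment are illustrative
# assumptions, not a description of any production execution algorithm.
def next_child_order(remaining_qty: float,
                     recent_volume: float,
                     recent_volatility: float,
                     base_participation: float = 0.10,
                     vol_reference: float = 0.02) -> float:
    """Size the next slice of a parent order from observed market conditions."""
    # Participate in a fraction of recently traded volume...
    size = base_participation * recent_volume
    # ...but scale down when volatility is elevated to limit market impact.
    if recent_volatility > vol_reference:
        size *= vol_reference / recent_volatility
    return min(size, remaining_qty)

# Example: 100,000 shares left, 50,000 shares traded in the last interval,
# realised volatility of 4% versus a 2% reference.
print(next_child_order(100_000, 50_000, 0.04))  # -> 2500.0
```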

As with traditional models and algorithms, the use of the same machine learning models by many finance practitioners could lead to herding behaviour and one-sided markets. This, in turn, could pose risks to the liquidity and stability of the system, especially during times of stress. While AI-driven algorithmic trading can increase liquidity in normal times, it can also cause convergence and, consequently, bouts of illiquidity during periods of stress, even leading to flash crashes. Large simultaneous sales or purchases can increase market volatility, creating new sources of vulnerability. Convergence of trading strategies also increases the risk of cyber-attacks, as it becomes easier for cybercriminals to influence agents acting similarly. These risks are present in all forms of algorithmic trading, but AI amplifies them because such models can learn and adjust to evolving conditions fully autonomously. For instance, AI models can identify signals and understand the impact of herding, changing their behaviour and learning to front-run based on the earliest signs. The complexity and difficulty of explaining and reproducing the decision-making mechanisms of AI algorithms and models make it challenging to mitigate these risks.

Using AI techniques in trading can potentially increase illegal practices aimed at manipulating the markets. It could also make it more challenging for supervisors to detect such patterns, particularly if machines collude. This is because self-learning and deep-learning AI models can recognize and adapt their behaviour to other market participants or AI models. They can potentially reach a collusive outcome without human intervention, and the user may be unaware of it.

Impact of AI on business models and activity in the financial sector

Implementing AI models in lending could lower the cost of credit underwriting and make it easier for lenders to offer loans to people with limited credit history. This can promote financial inclusion by extending credit facilities to individuals with a ‘thin file.’ The use of AI can improve the efficiency of data processing for assessing the creditworthiness of borrowers, enhance the underwriting decision-making process, and enable better lending portfolio management. It can also provide credit ratings to clients with limited credit history, thus supporting the financing of small and medium-sized enterprises (SMEs) in the real economy. This, in turn, can potentially promote financial inclusion among the underbanked population.
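
A minimal sketch of what ML-based credit scoring on alternative data might look like is shown below; it trains a gradient-boosting classifier on synthetic 'thin-file' features and evaluates ranking power with AUC. The feature meanings and data are hypothetical.

```python
# Minimal sketch of ML-based credit scoring on alternative data
# (hypothetical feature names; not a production underwriting model).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5_000
X = np.column_stack([
    rng.normal(size=n),  # e.g. utility-payment regularity
    rng.normal(size=n),  # e.g. cash-flow volatility from transaction data
    rng.normal(size=n),  # e.g. tenure with current employer
])
# Synthetic default flag loosely tied to the first two features.
p_default = 1 / (1 + np.exp(-(-1.5 - 0.8 * X[:, 0] + 0.6 * X[:, 1])))
y = rng.binomial(1, p_default)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]  # estimated probability of default
print("AUC:", round(roc_auc_score(y_te, scores), 3))
```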

Despite their enormous potential, the use of AI-based models with poor quality or inadequate data, particularly data concerning gender or race, in lending can lead to risks of disparate impact on credit outcomes and the potential for biased, discriminatory or unfair lending practices. Apart from inadvertently creating or perpetuating biases, models driven by AI make it even more difficult to identify discrimination in credit allocation, and the model outputs are hard to interpret and communicate to declined potential borrowers. These challenges are further compounded in credit extended by BigTech firms that leverage their access to vast sets of customer data, raising questions about possible anti-competitive behaviour and market concentration in the technology aspect of the service provision (such as the cloud).
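
One simple fairness test that lenders or supervisors might apply is a disparate-impact check comparing approval rates across groups; the sketch below uses the common 'four-fifths' threshold as an illustrative convention, not a regulatory prescription, and the group labels and decisions are synthetic.

```python
# Minimal sketch of a disparate-impact check on model approval decisions.
# The 0.8 threshold (the "four-fifths rule") and group labels are
# illustrative assumptions, not a regulatory prescription.
import numpy as np

def disparate_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of approval rates between the protected group and the reference group."""
    rate_protected = approved[group == "protected"].mean()
    rate_reference = approved[group == "reference"].mean()
    return rate_protected / rate_reference

approved = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 0])  # synthetic approval decisions
group = np.array(["protected"] * 5 + ["reference"] * 5)

ratio = disparate_impact_ratio(approved, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential disparate impact - review model and data before deployment")
```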

Integrating AI techniques in blockchain-based finance can improve the efficiency gains in DLT-based systems and enhance the capabilities of smart contracts. By incorporating AI, smart contracts can become more autonomous, enabling the code to be adjusted dynamically based on market conditions. However, implementing AI in DLT systems also introduces significant challenges, such as difficulties in supervising networks and systems that operate on opaque AI models and a lack of interpretability of AI decision-making mechanisms. Presently, AI is primarily used for risk management of smart contracts, to detect flaws in the code. It is worth noting that smart contracts have been around for a long time and rely on simple software code without any ties to AI techniques. Therefore, the advantages of using AI in DLT systems remain mainly theoretical, and further research and development are needed to realize its potential.

Decentralized finance (DeFi) could potentially benefit from artificial intelligence (AI) in the future. With the help of AI, DeFi could enable automatic credit scoring based on users’ online data, facilitate investment advisory and trading services based on financial data, and even offer insurance underwriting. In theory, AI-based smart contracts that can learn independently and adjust dynamically without human intervention could create fully autonomous chains.

However, it’s important to note that AI-based systems don’t necessarily solve the problem of poor quality or inadequate data inputs observed in blockchain-based systems. This can lead to significant risks for investors, market integrity, and the system’s stability, depending on the size of the DeFi market. While AI may help replace off-chain third-party information providers by performing inference directly on-chain, it may also amplify risks in DeFi markets.

This adds complexity to autonomous DeFi networks that are already challenging to supervise, given that they don’t have single regulatory access points or governance frameworks that allow for accountability and compliance with oversight frameworks.

Key overarching risks and challenges and possible mitigating actions

The incorporation of AI in the financial sector has the potential to magnify the existing risks in financial markets. This is due to the autonomous nature of AI systems that can learn and dynamically adapt to evolving market conditions. Consequently, this technology could pose new challenges and risks that must be addressed. Substandard or inadequately analyzed data can produce biased and discriminatory results, which can ultimately harm financial consumers. The investment requirements of AI techniques can result in concentration risks and related competition issues, leading to dependency on a few prominent players. The absence of adequate model governance that considers the specific nature of AI and the lack of clear accountability frameworks can lead to market integrity and compliance risks. There are also risks associated with oversight and supervisory mechanisms, which may require adjustments for this new technology. The novel risks emerging from the use of AI relate to the unintended consequences of AI-based models and systems for market stability and integrity. The difficulty of explaining how AI-based models generate results also poses significant risks. Increased use of AI in finance could lead to increased market interconnectedness. At the same time, several operational risks related to such techniques could threaten the financial system’s resilience in times of stress.

Incorporating big data in AI-powered applications could pose a significant source of financial and nonfinancial challenges and risks associated with data quality, data privacy and confidentiality, cyber security, and fairness considerations. Depending on their application, AI techniques can mitigate discrimination based on human interactions or exacerbate biases, unjust treatment, and discrimination in the financial sector. Prejudices and discrimination in AI can arise from poor quality or inadequate data in machine learning models or inadvertently through inference and proxies (for example, inferring gender from purchasing activity data). In addition to financial consumer protection considerations, employing big data and ML models raises the potential for competition issues, such as high concentration amongst market providers in specific markets or increased risks of tacit collusion.
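
One way to test whether protected classes can be inferred from other attributes, as mentioned above, is to train an auxiliary classifier to predict the protected attribute from the remaining inputs; a high out-of-sample AUC indicates proxy leakage. The sketch below uses synthetic data, and the feature meanings are illustrative assumptions.

```python
# Minimal sketch of a proxy-leakage test: can a protected attribute be
# predicted from the other model inputs? High out-of-sample AUC suggests
# that proxies for the protected class are present in the data.
# (Synthetic data; feature meanings are illustrative assumptions.)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 3_000
protected = rng.binomial(1, 0.5, size=n)                   # e.g. gender (not used by the credit model)
purchase_mix = protected + rng.normal(scale=0.8, size=n)   # spending pattern correlated with it
income = rng.normal(size=n)                                # unrelated feature
X_other = np.column_stack([purchase_mix, income])

auc = cross_val_score(LogisticRegression(max_iter=1000), X_other, protected,
                      cv=5, scoring="roc_auc").mean()
print(f"mean AUC for predicting the protected attribute: {auc:.2f}")
# An AUC well above 0.5 indicates the remaining features act as proxies,
# so removing the protected attribute alone does not prevent inference.
```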

The challenges of machine learning (ML) models are widely acknowledged, particularly surrounding the issue of explainability. This concept refers to the difficulty in understanding how and why ML models generate results and is associated with various risks. When opaque models are used widely, there could be unintended consequences, particularly if users and supervisors cannot anticipate how the ML models’ actions could negatively affect the markets. When companies intentionally withhold transparency to protect their competitive advantage, it exacerbates the problem of explainability. This raises concerns related to the supervision of AI algorithms and ML models, as well as the ability of users to adjust their strategies during periods of poor performance or stress.

The lack of explainability in AI models concerns financial service providers, as it is incompatible with existing laws, regulations, internal governance, risk management, and control frameworks. The inability of users to understand the impact of their models on markets, along with the amplified systemic risks related to pro-cyclicality, limits the ability of users to adjust their strategies in times of stress. This can lead to increased market volatility and bouts of illiquidity during periods of acute stress, which can exacerbate flash crash events. The complexity of AI models poses a significant challenge for users, as it demands a level of technical literacy that is not widely available. This mismatch between the complexity of AI models and the demands of human-scale reasoning and interpretation that fit human cognition is a significant challenge. Furthermore, the transparency and auditability of such models in many financial services use cases pose regulatory challenges. Therefore, the explainability of AI models must be addressed to ensure that they comply with laws and regulations, internal governance, risk management, and control frameworks and to mitigate the risks of market shocks and systemic risks related to pro-cyclicality.

AI-powered models are being increasingly used in financial markets by practitioners. However, such models must be explainable to better comprehend their behaviour in normal market conditions and in times of stress and to manage associated risks. There is a difference in opinion regarding the level of explainability that can be achieved in AI-driven models, depending on the type of AI employed. Therefore, it is imperative to strike a delicate balance between the interpretability of the model and its predictive performance. Introducing disclosure requirements around AI-powered models and processes could help mitigate challenges associated with explainability and provide more comfort to consumers using AI-driven services. It would also help to build trust among consumers and enhance the credibility of such services.
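
For illustration, a model-agnostic, post-hoc explanation technique such as permutation feature importance can give a first indication of which inputs drive a model's predictions. The sketch below is one possible tool among many (surrogate models, Shapley-value methods, etc.) applied to synthetic data, not a prescribed standard.

```python
# Minimal sketch of a model-agnostic, post-hoc explanation: permutation
# feature importance on a fitted classifier. One illustrative technique
# among many, applied here to synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(2_000, 4))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=2_000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
model = RandomForestClassifier(n_estimators=200, random_state=3).fit(X_tr, y_tr)

# Shuffle each feature in turn and measure the drop in held-out accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=3)
for name, mean_drop in zip(["feature_0", "feature_1", "feature_2", "feature_3"],
                           result.importances_mean):
    print(f"{name}: accuracy drop when shuffled = {mean_drop:.3f}")
```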

Assessing and managing potential risks is imperative to ensure AI systems’ smooth functioning. The durability of these systems can be reinforced through meticulous training and retraining of ML models, using datasets of sufficient size to capture non-linear relationships and tail events within the data, including synthetic ones. The ongoing monitoring, testing, and validation of AI models throughout their lifecycles, tailored to their intended purpose, is indispensable in identifying and rectifying model drifts, which can arise in the form of concept or data drifts and potentially impair the model’s predictive power. Model drifts frequently emerge when tail events, such as the COVID-19 crisis, cause discontinuity in datasets; such drifts are practically challenging to overcome, as the events cannot be reflected in the data used to train the model. Human judgment remains critical at all stages of AI deployment, from dataset input to model output evaluation, and can prevent the risk of interpreting meaningless correlations observed from patterns in activity as causal relationships. Automated control mechanisms or ‘kill switches’ may be employed as a last line of defence to rapidly shut down AI-based systems if they fail to function according to their intended purpose, but such mechanisms themselves create operational risk and do not, on their own, ensure the resilience of the business when the financial system is under stress.
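
As an example of ongoing monitoring for data drift, the sketch below computes the population stability index (PSI) between a feature's training distribution and its live distribution; the 0.1 and 0.25 thresholds are common rules of thumb used here as illustrative assumptions, and the data are synthetic.

```python
# Minimal sketch of drift monitoring with the population stability index (PSI)
# between a feature's training distribution and its live distribution.
# The 0.1 / 0.25 thresholds are common rules of thumb, used here as
# illustrative assumptions only.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a recent sample of the same feature."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf                 # cover the full range
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)                    # avoid division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(4)
training_sample = rng.normal(0.0, 1.0, size=10_000)
live_sample = rng.normal(0.4, 1.2, size=2_000)            # shifted distribution, e.g. after a tail event

psi = population_stability_index(training_sample, live_sample)
print(f"PSI = {psi:.3f}")
if psi > 0.25:
    print("significant drift - consider retraining or recalibrating the model")
elif psi > 0.10:
    print("moderate drift - monitor closely")
```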

Introducing explicit governance frameworks that delineate clear lines of responsibility around AI-based systems throughout their lifecycle, from development to deployment, could augment the existing model governance arrangements. Financial services providers’ internal governance committees or model review boards are responsible for defining model governance standards and processes for model building, documentation, and validation for any model type. With the broader adoption of AI by financial firms, such boards are expected to become more prevalent, with a possible ‘upgrading’ of their roles and competencies and of the processes involved to account for the complexities introduced by AI-based models, such as the frequency of model validation.

In the context of high-stakes decision-making processes such as access to credit, precise accountability mechanisms are becoming increasingly crucial for AI models. Risks arise when AI techniques are outsourced to third parties, posing challenges to accountability and competitive dynamics. Concentration and dependency risks are some of the hazards arising from outsourcing AI models or infrastructure. Furthermore, outsourcing could increase the risk of convergence in trading strategies, which could in turn trigger herding behaviour. The possibility that a large part of the market is affected simultaneously could lead to bouts of illiquidity in times of stress.

The increasing complexity of innovative AI applications in finance may challenge the technology-neutral approach many jurisdictions adopt to regulate financial market products. Advanced AI techniques, such as deep learning models, may lead to potential inconsistencies with existing legal and regulatory frameworks due to their lack of explainability and adaptability. Additionally, there is a risk of fragmentation of the regulatory landscape of AI at national, international, and sectoral levels. 

As AI applications become more ubiquitous in the finance industry, there will be a growing need to enhance skill sets and effectively manage emerging risks. However, adopting AI may negatively affect employment, with potential job losses across the industry presenting significant challenges. Therefore, it is essential to develop strategies to mitigate these risks and ensure that the financial sector can leverage AI’s potential without compromising its workforce’s well-being. 

Artificial intelligence (AI) in finance has sparked concerns over the possibility of machines replacing humans. However, viewing AI as a complementary tool that enhances human abilities rather than a replacement is essential. By combining the strengths of both humans and machines, AI can provide valuable insights and inform human decision-making processes. This approach ensures that accountability and control remain in the hands of humans, who are ultimately responsible for the final decision. In particular, it may be necessary to prioritize human decision-making in scenarios that involve higher-value choices, such as lending. By doing so, the benefits of AI can be fully realized while maintaining safeguards to protect against potential risks. Ultimately, the key is to balance humans and machines, leveraging both strengths to achieve optimal results.

Policy considerations 

Policymakers and regulators have a critical role in ensuring that the use of artificial intelligence (AI) in the finance industry aligns with the regulatory objectives of promoting financial stability, safeguarding financial consumers, and fostering market integrity and competition. As such, it is incumbent upon policymakers to consider supporting innovative AI solutions within the sector while ensuring that financial consumers and investors are adequately protected and that markets remain fair, orderly, and transparent. To this end, it is necessary to identify and mitigate emerging risks associated with deploying AI techniques to promote the use of responsible AI. Additionally, existing regulatory and supervisory requirements may require clarification and, in some cases, adjustment to address any perceived incompatibilities with AI applications. In conclusion, implementing AI in the finance industry presents challenges that require careful consideration and proactive regulatory measures. Policymakers and regulators must remain vigilant in upholding regulatory objectives while fostering innovation within the sector.

Regulatory and supervisory requirements for AI techniques should be evaluated in a contextual and proportional framework. The criticality of the application and potential impact on consumer outcomes and market functioning should be considered. This approach can promote the use of AI while still encouraging innovation. However, proportionality should not compromise fundamental prudential and stability safeguards or the protection of investors and financial consumers, which are critical mandates of policymakers.  

Policymakers need to focus on improving data governance by financial sector companies to strengthen consumer protection in AI applications related to finance. This can be achieved by implementing specific requirements or best practices for data management in AI-based techniques. These practices should address concerns such as data quality, dataset adequacy based on the intended use of the AI model, and safeguards that ensure the model is robust enough to avoid potential biases. Best practices such as appropriate sense checking of model results against baseline datasets should be adopted to mitigate discrimination risks. Additionally, other tests based on whether protected classes can be inferred from other attributes in the data may be helpful. Authorities should consider implementing requirements for additional transparency over the use of personal data, along with opt-out options for using such data. 

Policymakers must establish disclosure requirements regarding the use of AI techniques in the provision of financial services. Customers should be informed about the application of AI techniques in product delivery and the possibility of interacting with an AI system instead of a human being. This will enable them to make informed decisions when comparing various products. Disclosure statements should provide clear information about the AI system’s capabilities and limitations. To ensure prospective clients understand how AI affects product delivery, authorities should consider introducing suitability requirements for AI-driven financial services.

Regulators are faced with a significant challenge when considering the implementation of artificial intelligence (AI) in financial services, as the perceived incompatibility of the lack of explainability with existing laws and regulations raises concerns. In response, financial services firms may need to update and adjust the currently applicable frameworks for model governance and risk management to address these challenges. It may be beneficial for regulators to shift their supervisory focus from documentation of the development process and the process by which the model arrives at its prediction to model behaviour and outcomes. More technical ways of managing risk, such as adversarial model stress testing or outcome-based metrics, could be explored for this purpose. This shift in focus can help overcome the perceived incompatibility of the lack of explainability in AI with existing laws and regulations. Therefore, it is recommended that regulators consider these options and adjust the supervisory framework accordingly to ensure the successful integration of AI in financial services.

Policymakers should also require clear model governance frameworks and the attribution of accountability to help build trust in AI-driven systems. Financial services providers could put explicit governance frameworks in place, designating clear lines of responsibility for developing and overseeing AI-based systems throughout their lifecycle, from development to deployment. This will help strengthen existing arrangements for the operation of such systems.

As the use of AI technologies becomes increasingly prevalent in the financial industry, financial firms must provide greater assurance regarding the robustness and resilience of their AI models. Such a step is essential to help policymakers prevent the buildup of systemic risks and to instil confidence in AI applications in finance. To achieve this, it is recommended that AI models undergo rigorous testing in extreme market conditions to prevent potential vulnerabilities and systemic risks that may arise during times of stress. Introducing automatic control mechanisms, such as kill switches that trigger alerts or switch off models in times of stress, could assist in mitigating risk. However, it is necessary to recognize that such mechanisms may expose firms to new operational risks. In addition, it is essential to have backup plans, models, and processes in place to ensure business continuity in case the AI models fail or act unexpectedly. Regulators may also consider implementing add-ons or minimum buffers if banks were to determine risk weights or capital based on AI algorithms. This would help mitigate the risk of systemic failures and the associated consequences. By adhering to such practices, financial firms can ensure greater trust in AI technologies and help prevent the buildup of systemic risks in the financial industry.
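
A hedged sketch of what such an automated control or 'kill switch' might look like is shown below: model outputs are checked against predefined sanity bounds, and the model is disabled in favour of a fallback process after repeated breaches. The bounds, breach count and fallback are assumptions for illustration, not a prescribed control framework.

```python
# Minimal sketch of an automated control ("kill switch") around a model:
# if outputs breach predefined sanity bounds, the model is disabled and a
# fallback process takes over. Bounds, alerting, and the fallback are
# illustrative assumptions, not a prescribed control framework.
from dataclasses import dataclass, field

@dataclass
class ModelGuard:
    lower: float
    upper: float
    max_breaches: int = 3
    breaches: int = field(default=0)
    active: bool = field(default=True)

    def check(self, prediction: float) -> bool:
        """Return True if the prediction may be used; otherwise count a breach."""
        if not self.active:
            return False
        if not (self.lower <= prediction <= self.upper):
            self.breaches += 1
            if self.breaches >= self.max_breaches:
                self.active = False  # kill switch: stop using the model
                print("ALERT: model disabled, falling back to manual/legacy process")
            return False
        return True

guard = ModelGuard(lower=0.0, upper=1.0)
for p in [0.4, 1.7, -0.2, 5.0, 0.5]:   # e.g. probabilities from a stressed model
    usable = guard.check(p)
    print(f"prediction {p}: {'use' if usable else 'reject'}")
```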

Regulators could enhance the resilience of AI models by promoting ongoing monitoring and validation of such models. This could be one of the most effective ways to prevent and address model drifts and ensure model resilience. Standardized monitoring and verification could be implemented to improve model resilience and to identify whether the model requires adjustment, redevelopment, or replacement. Documenting the model validation, necessary approvals, and sign-offs separately from the model development for supervisory purposes is essential. The frequency of testing and confirmation should be defined based on the complexity of the model and the significance of the decisions made by the model. 

Appropriate emphasis should also be placed on the importance of human involvement in decision-making for higher-value use cases, such as lending decisions, which significantly affect consumers. Authorities should consider introducing processes that allow customers to challenge the outcome of AI models and seek redress, which could also help build trust in such systems. The GDPR is an example of such a policy, as it provides individuals with the right to obtain human intervention and to express their point of view if they wish to challenge the decision made by an algorithm (EU, 2016).

Policymakers should consider the growing technical complexity of AI and assess whether resources need to be dedicated to keep up with technological advancements. AI has a transformative impact on certain financial market activities and presents new risks, so it has become a key policy priority in recent years. To address this, allocating research and skills development resources for finance sector participants and enforcement authorities is essential.

Policymakers are crucial in supporting innovation within the financial sector while protecting financial consumers and investors. Maintaining fair, orderly, transparent markets around financial products and services is also essential. Policymakers should consider strengthening their existing defences against risks that may emerge or be exacerbated by the use of AI. To promote the adoption of innovative techniques, clear communication should be established about the use of AI and the safeguards in place to protect the system and its users. In addition, since financial services can be easily provided across borders, policymakers should engage in a multidisciplinary dialogue with the industry at national and international levels.
