BoE/PRA/FCA Discussion Paper (DP5/22 & DP22/4) “Artificial Intelligence and Machine Learning”: Innovate Finance Response

About Innovate Finance
Innovate Finance is the independent industry body that represents and advances the global FinTech community in the UK. Innovate Finance’s mission is to accelerate the UK's leading role in the financial services sector by directly supporting the next generation of technology-led innovators.
The UK FinTech sector covers businesses from seed-stage start-ups to global financial institutions that embrace digital solutions, playing a critical role in technological change across the financial services industry. FinTech has grown strongly since the Global Financial Crisis of 2007/08, which led to mistrust in traditional banks and coincided with an explosion in smartphone use, widespread adoption of apps, the advent of blockchain technology, and significant investment in FinTech start-ups.
FinTech is synonymous with delivering transparency, innovation and inclusivity to financial services. As well as creating new businesses and new jobs, it has fundamentally improved the ways in which consumers and businesses, especially small and medium-sized enterprises (SMEs), access financial services.
Introduction
Innovate Finance welcomes the opportunity to respond to the regulators’ joint discussion paper on Artificial Intelligence (AI) and Machine Learning (ML) (FCA DP22/4 and BoE DP5/22). In reviewing this discussion paper, we have engaged with a wide cross-section of our membership base, including scale-up banks and Regulatory Technology (RegTech) providers. We are grateful to Baker McKenzie for their invaluable support in preparing this response.
Responsible use of AI has the potential to revolutionise financial services for the benefit of UK consumers and businesses. Innovate Finance members’ use cases demonstrate that AI enables firms to create solutions that address tier one priorities for the financial services sector, which align with Government and regulators’ aims, including tackling economic crime and supporting the transition to a Net Zero economy.
If the UK is to remain a global AI superpower and realise the societal and economic benefits that the technology can deliver, it will be important for the regulators and Government to take a thoughtful, joined-up approach to the development of the regulatory and legislative framework for AI. We summarise some of our key recommendations below, and we explore these in more detail in the main body of our response:
- Develop a common taxonomy and/or common set of principles for AI, followed by sectoral rules (where required) and guidance. This will provide certainty and consistency to the industry, and in particular to the FinTech industry which often operates on a cross-sectoral basis given its characteristics.
- In creating a common taxonomy and common set of principles for AI, policymakers should avoid blanket high-risk versus low-risk categorisations and should instead set out principles for identifying and assessing the risks posed by a particular use case in its context.
- The financial services sector, which is already highly regulated compared to others, should be held to the same standards as other sectors, and not to higher standards unless that is merited for specific use cases, which would need more detailed, specific consideration.
- Our members consider that the existing body of regulation and legislation that currently applies to AI is comprehensive, and we would caution against a drive to overregulate the technology and its many uses. However, we think that there is merit in issuing tailored guidance addressing how firms can ensure that they comply with the rules and principles where AI is being employed.
- In relation to firms’ AI governance arrangements, we are not currently persuaded that there should be a new function holder or prescribed responsibility allocated to AI. We think that a better approach is likely to be for Senior Managers and Certification Regime (SMCR) responsibility for AI to be allocated to the most appropriate Senior Management Function (SMF) holder depending on its usage, and to fall under that person's Statement of Responsibilities. Outside of the governance requirements mandated by the SMCR, better practice is for AI governance to be, in practical terms, a cross-team effort, with a range of stakeholders involved across data science, technology, legal, compliance and business units.
- Our members would welcome regulators issuing guidance around how SMFs discharge their responsibilities where AI is used in the business line or function for which they are responsible.
We would be pleased to discuss our response in more detail with FCA, PRA and BoE colleagues and/or facilitate a roundtable with our members.
Discussion Paper questions and responses
Supervisory authorities’ objectives and remits
Q1: Would a sectoral regulatory definition of AI, included in the supervisory authorities’ rulebooks to underpin specific rules and regulatory requirements, help UK financial services firms adopt AI safely and responsibly? If so, what should the definition be?
Q2: Are there equally effective approaches to support the safe and responsible adoption of AI that do not rely on a definition? If so, what are they and which approaches are most suitable for UK financial services?
Our members consider that a common taxonomy is very important in establishing a common language with which financial institutions and regulators can communicate about facets of AI, and in facilitating better risk identification and management. In our view, any approach to setting a common definition of AI will need cross-sectoral coordinated action, perhaps under the auspices of the Digital Regulation Cooperation Forum (DRCF). A coordinated definition and/or common set of principles, followed by sectoral rules and guidance, would provide certainty and consistency to the industry, and in particular to the FinTech industry, which often operates on a cross-sectoral basis. One good example of this kind of coordination is the digital identity trust framework developed by what was previously the Department for Digital, Culture, Media and Sport (DCMS), which sets standards cross-industry and is designed to instil a uniform level of confidence across a number of different sectors.
We consider that the guiding principle should be the avoidance of overregulation. The financial services regulators should take inspiration from their ongoing move from granular, compliance-based regulation to agile, outcomes-based regulation supported by guiding principles. Regulators should avoid blanket high-risk versus low-risk categorisations, such as those taken by the European Commission and the Council of the EU in their positions on the proposed EU Artificial Intelligence Act, under which creditworthiness assessment and insurance risk assessment and pricing are high-risk AI systems by definition. Instead, regulators should set out principles for identifying and assessing the risks posed by a particular use case in its context (for example, is the decision or action based significantly on the output of an AI model, or is that output one of several data points that might be considered?), together with guidance on risk-management actions that are proportionate to the risks posed, building in agility to remain as future-proofed as possible. Illustrative examples, which have been employed in regulatory guidance to great effect, would also be welcomed. From a practical point of view, guiding principles will help to strengthen the effectiveness of third-party audits, where independent experts can apply regulatory principles against the background of their industry expertise to analyse and mitigate risk more effectively, rather than simply ensuring adherence to tick-box risk regulation.
Benefits, risks, and harms of AI
Q3: Which potential benefits and risks should supervisory authorities prioritise?
Q4: How are the benefits and risks likely to change as the technology evolves?
Q5: Are there any novel challenges specific to the use of AI within financial services that are not covered in this DP?
Q6: How could the use of AI impact groups sharing protected characteristics? Also, how can any such impacts be mitigated by either firms and/or the supervisory authorities?
Q7: What metrics are most relevant when assessing the benefits and risks of AI in financial services, including as part of an approach that focuses on outcomes?
The use of AI is not inherently risky, and it is important that the functionality and use of AI are judged on a case-by-case basis. The performance of AI, including the potential risk of bias, will depend on a number of factors, such as the quality of the data, and this will be specific to the systems involved. The regulators should give careful thought to the relationship between the Consumer Duty and the risks arising from the use of AI; guidance on the consumer outcomes as they relate to the use of AI would be welcomed. We consider that the regulators should, as they formulate their innovation-friendly approach to AI, ensure that they focus on the benefits of AI as much as the risks. Examples of some of these benefits include:
- AI can be used to develop dynamic models whose structure or parameters adapt during deployment in response to new data, providing benefits over more conventional static models, which are slower to react (a minimal sketch of such an adaptive model appears after this list).
- AI can reduce operational costs, making financial products more affordable and accessible. For example, AI can improve access to lending products by opening up new areas of the market previously deemed too risky or too poorly understood to be targeted through conventional processes. AI can also help to improve the customer experience by providing customers with better explanations of why they may be refused credit (including information they can then use to improve their credit rating or financial literacy).
- AI has the potential to improve risk profiling capabilities, which could enhance suitability and affordability decisions, particularly for loans and consumer credit (with the potential to improve financial inclusion and better address issues relating to vulnerability).
- AI may provide opportunities to develop more bespoke pricing solutions due to autonomous assessments of data sets.
- AI can facilitate the faster detection of fraud, financial crime and other compliance issues, helping to relieve resourcing pressures and reduce anti-money laundering (AML) and other compliance backlogs. In particular, AI allows far more detailed verification of identity documents to ensure validity. Further, AI can significantly improve the consumer experience through, for example, remote identity verification, which obviates the need to visit a bank branch physically to open an account. This reduces the operational cost of identity verification for challenger banks, bringing demonstrable benefits both for the bank (lower costs and better efficiency) and for customers (for example, shorter waiting times when opening an account, and more effective recovery processes when, for instance, a customer is locked out of a bank's internet banking app or interface). It also has the benefit of improving competition in the banking space, particularly by levelling the playing field for digital challengers. We would note in this regard the positive treatment by the Dutch courts of challenger bank Bunq's use of AI/ML in AML screening.
- AI can help to enhance investment strategies.
- AI has also been employed by investment banks and financial services firms in the US to comply with the Dodd-Frank Act. The US Treasury requires certain US financial institutions to maintain detailed records of their qualified financial contracts (QFCs) and to be capable of providing this information to the relevant primary financial regulatory agencies within 24 hours of receiving a request. One of our members, a software services provider, used AI to help a major US investment bank automate document processing for QFC recordkeeping purposes with high accuracy.
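To illustrate the first point in the list above, the sketch below shows, in purely illustrative terms, what an adaptive model looks like in code: a simple online logistic regression whose parameters update in deployment as each new labelled observation arrives, in contrast to a static model fitted once and left unchanged. The class name, features and learning rate are our own assumptions, not any member's production system.

```python
import numpy as np

class OnlineLogisticModel:
    """Illustrative online logistic regression: weights adapt in deployment."""

    def __init__(self, n_features: int, lr: float = 0.05):
        self.w = np.zeros(n_features)  # parameters, updated continuously
        self.b = 0.0
        self.lr = lr                   # learning rate (illustrative value)

    def predict_proba(self, x: np.ndarray) -> float:
        # Sigmoid of the linear score gives a probability-like risk score.
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def update(self, x: np.ndarray, y: int) -> None:
        # One stochastic-gradient step on the log-loss for a single new
        # observation; a static model would skip this step entirely.
        error = self.predict_proba(x) - y
        self.w -= self.lr * error * x
        self.b -= self.lr * error

# Hypothetical usage: score a new case, then adapt once the outcome is known.
model = OnlineLogisticModel(n_features=3)
x_new = np.array([0.2, 1.5, -0.7])   # made-up feature vector
score = model.predict_proba(x_new)   # prediction with current parameters
model.update(x_new, y=1)             # model reacts to the confirmed label
```

Members' systems are of course far more sophisticated, but the update step captures the essential difference from a static model: the model continues to learn after deployment.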
The mechanics of AI, and the reach of the benefits described above, are best understood in "real world" settings. Our members would be delighted to offer a workshop for the regulators to demonstrate how some of these benefits work in practice.
Regulation
Q8: Are there any other legal requirements or guidance that you consider to be relevant to AI?
Q9: Are there any regulatory barriers to the safe and responsible adoption of AI in UK financial services that the supervisory authorities should be aware of, particularly in relation to rules and guidance for which the supervisory authorities have primary responsibility?
Q10: How could current regulation be clarified with respect to AI?
Q11: How could current regulation be simplified, strengthened and/or extended to better encompass AI and address potential risks and harms?
Q12: Are existing firm governance structures sufficient to encompass AI, and if not, how could they be changed or adapted?
Q13: Could creating a new Prescribed Responsibility for AI to be allocated to a Senior Management Function (SMF) be helpful to enhancing effective governance of AI, and why?
Q14: Would further guidance on how to interpret the ‘reasonable steps’ element of the SM&CR in an AI context be helpful?
Q15: Are there any components of data regulation that are not sufficient to identify, manage, monitor and control the risks associated with AI models? Would there be value in a unified approach to data governance and/or risk management or improvements to the supervisory authorities’ data definitions or taxonomies?
Q16: In relation to the risks identified in Chapter 3, is there more that the supervisory authorities can do to promote safe and beneficial innovation in AI?
Q17: Which existing industry standards (if any) are useful when developing, deploying, and/or using AI? Could any particular standards support the safe and responsible adoption of AI in UK financial services?
Q18: Are there approaches to AI regulation elsewhere or elements of approaches elsewhere that you think would be worth replicating in the UK to support the supervisory authorities’ objectives?
Q19: Are there any specific elements or approaches to apply or avoid to facilitate effective competition in the UK financial services sector?
As a general starting point, it is important that the financial services regulators move in lockstep with the policy work going on elsewhere in Government, to ensure cross-sectoral alignment and certainty for the industry. We consider that the starting point should be that the financial services sector, which is already highly regulated compared to others, is held to the same standards as other sectors, and not to higher standards unless that is merited for specific use cases, which would need more detailed, specific consideration. We would also urge the regulators to be mindful that those in the sector operating cross-border, especially large financial institutions, will be subject to a range of laws outside the UK which will impact their AI governance strategy; greater harmonisation with other jurisdictions will help reduce fragmentation and operational complexity when implementing risk management and governance frameworks.
We do not foresee insurmountable barriers to AI adoption in the current regulatory rulebooks, as they take a technology-neutral approach, and as such we think wholesale changes are unlikely to be necessary. Our members would support a continuation of the regulators' technology-neutral approach as rulemaking develops. It is important that rulemaking proceeds on this principle in order to support the Government's stated aims of fostering innovation and competition in UK financial services. We would also urge the regulators to continue their efforts to make their rulebooks machine-readable so as to support RegTech solutions, and to further expand their work on making rules machine-executable.
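As a hedged illustration of what machine-executable rules could enable, the sketch below shows a rule encoded as structured data being evaluated automatically by a RegTech tool against a firm's records. The JSON structure, field names and the rule itself are entirely our own invention, not any regulator's actual format.

```python
import json

# A hypothetical rule expressed as data rather than prose, so that software
# can evaluate it directly. The schema here is illustrative only.
rule_json = """
{
  "rule_id": "EXAMPLE-1.2.3",
  "description": "Client money must be reconciled at least daily",
  "field": "hours_since_last_reconciliation",
  "operator": "<=",
  "threshold": 24
}
"""

OPERATORS = {"<=": lambda v, t: v <= t, ">=": lambda v, t: v >= t}

def check(rule: dict, record: dict) -> bool:
    """Return True if the record complies with the encoded rule."""
    return OPERATORS[rule["operator"]](record[rule["field"]], rule["threshold"])

rule = json.loads(rule_json)
print(check(rule, {"hours_since_last_reconciliation": 12}))  # True: compliant
```

The design point is that once a rule is expressed this way, compliance checks can be automated, versioned and tested like any other software artefact, which is precisely what machine-executable rulebooks would make possible at scale.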
However, we think that there is merit in issuing tailored guidance on how firms can ensure that they comply with the rules and principles where AI is being employed. For example, our FinTech challenger bank members would welcome more guidance from regulators on what they consider to be “high risk” or “low risk” when employing AI to make decisions on credit and lending. This is likely to be particularly helpful for smaller firms looking to enter the market, reducing barriers to entry and contributing to competition. Tailored guidance of this kind is also aligned with the approach that the Financial Conduct Authority (FCA) has demonstrated in its Innovation Pathways service, which has provided invaluable support to firms launching automated advice and guidance. The FCA should consider taking the learnings from this service to develop broader industry guidance, using service-specific case studies and use case examples it has encountered in supporting firms.
Turning to governance, we are not currently persuaded that there should be a new function holder or prescribed responsibility allocated to AI. There is already an overarching responsibility for managing all, or substantially all, of a firm's technology allocated to the chief operations function (SMF 24). The use cases for AI will, as the Discussion Paper demonstrates, run across many different elements of a firm's business, including the delivery of services (for example, via the provision of automated advice), internal risk modelling, and so on. We think that a better approach is likely to be for SMCR responsibility for AI to be allocated to the most appropriate SMF holder depending on its usage, and to fall under that person's Statement of Responsibilities. In some cases this may be the chief operations function: given that AI forms a subset of technology, it may be appropriate to allocate responsibility for AI to the chief operations function in firms that have that role.
The regulators will also need to consider how appropriate it might be for a single SMF with a particular purpose – say, the chief operations function insofar as it is responsible for Information and Communications Technology (ICT) implementation across business lines – to bear responsibility for the impact that the deployment of AI might have on customer outcomes, where responsibility for those customer outcomes would have been allocated to a different SMF in the absence of AI deployment. Outside of the governance requirements mandated by the SMCR, better practice is for AI governance to be, in practical terms, a cross-team effort, with a range of stakeholders involved across data science, technology, legal, compliance and business units. More broadly, we would urge the regulators to take a prudent approach and refrain from imposing more burdensome governance controls on the use of AI in the already highly regulated financial services sector, given the multiple additional layers of governance regulation to which financial services providers are already subject and the need for cross-sectorally aligned principles.
Our members do see the benefit of the regulators issuing guidance on how SMFs discharge their responsibilities where AI is used in the business line or function for which they are responsible. This would help to provide a starting point as to how SMFs in these roles should act at the point of launching AI technology and when monitoring or overseeing its use. For example, additional guidance could be included in the Code of Conduct sourcebook (COCON) of the FCA Handbook against the various Conduct Rules / Senior Manager Conduct Rules. We think it will be particularly important for any guidance issued to also cover the "reasonable steps" element.
Turning to general standards, our members would find it useful to have guidance on good practice in data access and data quality. A significant issue with the deployment of AI, particularly when it comes to mitigating discrimination and bias, is access to data (and the quality of that data). AI outcomes are only as good as the quality of the data used to inform the system, and without access to good data, good outcomes are unachievable. One of the key questions for the regulators to consider in devising and issuing their guidance will be the extent to which successful data access can be achieved without stifling innovation in the industry or dampening competition; FinTechs in particular may not have the same access to data sets as large financial institutions. On the related issue of discrimination and bias, we think it is important that the regulators have a good grasp of what discrimination and bias in AI look like from the practitioner's perspective. To that end, our members would be happy to participate in learning roundtables or workshops with the regulators to engage further on this issue.
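By way of illustration of that practitioner's perspective, the sketch below computes one simple bias indicator, the demographic parity difference: the gap in approval rates between two groups. The data and the choice of metric are our own assumptions for illustration, not a recommended standard.

```python
import numpy as np

def demographic_parity_difference(decisions: np.ndarray,
                                  group: np.ndarray) -> float:
    """Absolute gap in approval rates between two groups.

    decisions: 1 = approved, 0 = declined; group: 0/1 group membership.
    """
    rate_a = decisions[group == 0].mean()  # approval rate, group 0
    rate_b = decisions[group == 1].mean()  # approval rate, group 1
    return abs(rate_a - rate_b)

# Hypothetical lending decisions for ten applicants in two groups.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(decisions, group))  # ~0.2 on this data
```

A large gap on a metric like this is a prompt for investigation rather than proof of unlawful discrimination; in practice practitioners monitor several such metrics, since no single measure captures all forms of bias.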
Finally, we consider that sustainability and Environmental, Social and Governance (ESG) objectives should be considered in any set of standards or guidance. On the one hand, the computing power – and infrastructure investment – required for AI can be significant, especially in relation to explainability and to mitigating the effects of discrimination and bias, though this will depend on the AI system itself: smaller systems using smaller data sets will require less computing power. On the other hand, AI systems are going to play a critical role in ensuring that the financial services sector can meet its ESG objectives (for example, through the deployment of AI to underpin carbon reduction systems), helping the industry and the UK to meet Net Zero ambitions. We think it would be helpful for any standards or guidance incorporating sustainability to address how firms can appropriately balance this ESG push and pull.
[ENDS]