December 2023
Artificial intelligence (AI) has the potential to transform all sectors, a prospect highlighted by PwC’s ‘Global Artificial Intelligence’ study, which estimates that AI could contribute up to $15.7 trillion to the global economy by 2030. The financial services sector in particular is already reaping the benefits of AI, in the form of enhanced efficiency, improved decision-making, and the customisation of products and services.
However, the rapid development of AI brings risks, leading to increased global regulatory scrutiny. In our 2019 ‘AI in financial services’ paper, we emphasised key regulatory themes and considerations for the sector, which remain highly relevant today. But as the power of AI has developed, so has the regulatory focus. This is evidenced by the recent AI Safety Summit, held on 1-2 November 2023, and a range of communications from financial services regulators in the UK and globally.
This regulatory evolution is in part driven by the emerging debate over the risks of generative AI and artificial general intelligence, which have captured the attention of governments globally. This discussion covers not only existing risks, but also systemic and longer-term risks related to market resilience, large-scale misinformation, and loss of control over powerful AI systems. These concerns led to the AI Safety Summit and are increasingly shaping the regulatory narrative.
Firms need to assess the rapid evolution of AI, understand what it means for their business model and strategy, and consider the impact of regulation on their adoption journey. With both AI and its regulation evolving, what comprehensive approach should firms adopt to manage the risks and leverage the opportunities effectively?
AI is quickly becoming more than just a supportive tool; it is shaping up to be a significant catalyst for transformation. However, advanced AI systems amplify issues related to data privacy and data quality, security, accuracy, autonomy, and complexity. Explaining how these models function, or how they arrive at their outputs, can be challenging, increasing risks such as bias, herd behaviour and data drift.
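To make one of these risks concrete: data drift is typically monitored by comparing a model input’s live distribution against its training baseline. The sketch below is purely illustrative, using the population stability index on hypothetical data, with a commonly cited (but not regulatory) review threshold of 0.2.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's live distribution against its training baseline.

    A PSI above ~0.2 is a common (illustrative, not regulatory) trigger for review.
    """
    # Fix bin edges on the baseline so both samples are compared on the same grid.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against log(0) on empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical 'income' feature: training baseline versus live data.
baseline = np.random.default_rng(0).normal(50_000, 10_000, 5_000)
live = np.random.default_rng(1).normal(55_000, 12_000, 5_000)
if population_stability_index(baseline, live) > 0.2:
    print("Data drift detected: escalate for model review.")
```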
Firms are grappling with these challenges while recognising the benefits that AI could bring to their organisations and customers. The latest ‘Machine learning in UK financial services’ survey from the Bank of England and FCA revealed that 72% of firms are using or developing machine learning applications, with firms anticipating a 3.5-fold increase in applications by 2026.
The use cases for AI are growing across financial services organisations. For example, the ‘Artificial Intelligence: Challenges and Opportunities for Compliance’ report produced by PwC for the Association for Financial Markets in Europe showed significant appetite within compliance functions to explore AI use cases, in pursuit of operational efficiencies that enable a greater focus on value-add activities and business partnering. The report also found that firms are mindful of the need for robust risk, control and governance structures around the use of AI.
The swift advancement of AI has prompted UK authorities to reassess the existing regulatory framework for AI. The Government proposed a principles-based framework in the ‘AI regulation: a pro-innovation approach’ white paper, published in March 2023. It is set to be implemented by the regulators and overseen by a central monitoring function. The Government has also initiated work to explore how copyright law should be applied, develop assurance techniques, and advance research into advanced AI models through a new safety institute.
Within this broader context, the financial services regulators explored the implications of AI in a discussion paper (DP5/22) in October 2022. They stressed that a range of regulatory frameworks are already in place to support the safe and responsible adoption of AI. These include the ‘Model risk management principles for banks’ published by the PRA in a supervisory statement, the operational resilience framework, and the Senior Managers & Certification Regime. The FCA also sees the Consumer Duty as a key instrument, and its intersection with AI was noted at the FCA Board meeting in July 2023. In September 2023, the FCA’s Chief Executive, Nikhil Rathi, emphasised the regulator's focus on AI's impact on inclusion in a speech at PwC Glasgow.
The regulators published a feedback statement on 26 October 2023 after reviewing the responses to DP5/22. Most respondents favoured a technology-neutral, principles-based, and outcomes-driven approach. Rather than new regulation, which risks creating outdated rules, respondents favoured practical guidance in a range of areas requiring clarification. These include data management, data risks related to bias and fairness, consumer outcomes, accountability, third-party involvement, and model risk management.
The feedback statement did not present any new policies, but the regulators may consider providing new guidance based on the areas highlighted above and the principles outlined in the Government’s AI white paper. In terms of risks, the impact on consumers, financial inclusion and market functioning may be of particular concern to the regulators. We may see further engagement from the regulators after the Government publishes its response to the AI white paper, expected by the end of the year.
UK firms will also be impacted by recent global AI regulatory initiatives such as the G7 Hiroshima AI Process and the AI Safety Summit. The Bletchley Declaration, signed by the countries represented at the AI Safety Summit, signals a growing consensus on AI risks and the actions to be taken by relevant stakeholders, including putting in place risk classifications, evaluation metrics and safety protocols. This movement towards international convergence is also seen in the activities of the Council of Europe, the Global Partnership on AI, the Organisation for Economic Co-operation and Development, the United Nations, and other international standard-setting bodies.
However, the regulatory environment remains fragmented. Various jurisdictions are adopting their own distinct regulatory frameworks for AI. In the USA, this encompasses an Executive Order issued on 30 October 2023, as well as policy work in the Senate, including the introduction of a bipartisan bill on 15 November 2023. The European Union is similarly advancing its own approach with the upcoming AI Act. Likewise, China has designed laws specific to AI. The emergence of different AI regulatory frameworks may increase compliance costs and complexity for firms operating across multiple jurisdictions.
Particularly notable is the contrast between the EU's detailed regulatory approach, especially for 'high-risk' applications such as creditworthiness assessments, and the UK's focus on outcomes. The EU's more prescriptive approach offers greater clarity for firms but could constrain innovation or become outdated, while the UK's approach allows greater flexibility but demands greater judgement from firms on how to meet regulatory expectations.
Preparing for change is critical for firms wanting to gain a competitive advantage in a complex landscape. As explored in a recent report by PwC’s Strategy&, AI, and particularly generative AI, presents a significant opportunity for the financial services sector. And whilst the technological and regulatory frameworks evolve, firms should embed robust governance and controls throughout the AI lifecycle, as laid out in PwC’s Responsible AI framework.
Firms are likely to face questions from the regulators on how responsibilities are allocated, and reasonable steps taken, within the Senior Managers Regime. Securing the right skills and experience, at both Board level and throughout the organisation, is essential for supporting innovation, yet can be challenging to achieve. The PwC 2023 Trust Survey indicates that only 35% of executives plan to enhance AI system governance in the coming year, underscoring the need for firms to re-evaluate their current strategies.
Ensuring AI does not result in discriminatory or poor outcomes for consumers is understandably a key focus for the FCA. By establishing a tailored AI, data and ethics strategy, firms can take steps to prevent and mitigate risks stemming from bias. These may include focusing on responsible data sourcing and establishing clear definitions of fairness to monitor and support fair outcomes for customers.
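What a ‘clear definition of fairness’ means in monitoring terms can be made concrete. The sketch below computes one candidate metric, the gap in approval rates between two customer groups; the metric choice, data and tolerance are all illustrative assumptions, and the appropriate definition will depend on the product and the firm’s own policy.

```python
import numpy as np

def demographic_parity_difference(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in approval rates between two customer groups.

    One of several candidate fairness definitions; which is appropriate
    depends on the product and the firm's own fairness policy.
    """
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return abs(float(rate_a - rate_b))

# Hypothetical monitoring check: 1 = approved, 0 = declined.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
gap = demographic_parity_difference(decisions, group)
if gap > 0.1:  # illustrative tolerance, to be set by the firm's own policy
    print(f"Approval-rate gap of {gap:.0%} exceeds tolerance: investigate for bias.")
```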
To support these objectives, model risk management is an essential step, and one that will be subject to further regulatory scrutiny. Developing frameworks that cover model risk classification and validation, and updating them for technological change, will be crucial. This is particularly important as the use of AI developed by third-party suppliers may increase. Firms will need to fully consider the implications for operational resilience and outsourcing requirements. An interesting dynamic will be the extent to which the regulators and HM Treasury bring AI providers into scope of the upcoming critical third parties regime.
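On the model risk classification point above, the sketch below illustrates the kind of tiering step such a framework might formalise, reflecting materiality, customer impact and third-party sourcing. The criteria, labels and field names are hypothetical assumptions for illustration, not PRA requirements.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Inventory entry for a model, including AI sourced from third parties."""
    name: str
    materiality: str          # "high" | "medium" | "low" financial/customer impact
    customer_facing: bool
    third_party: bool         # supplied or hosted by an external vendor

def risk_tier(m: ModelRecord) -> str:
    """Assign a validation tier; the criteria here are illustrative only."""
    if m.materiality == "high" or (m.customer_facing and m.third_party):
        return "Tier 1: independent validation before use, annual revalidation"
    if m.materiality == "medium" or m.customer_facing or m.third_party:
        return "Tier 2: validation before use, periodic review"
    return "Tier 3: lightweight review and ongoing monitoring"

# Example: a vendor-supplied, customer-facing model lands in the top tier.
print(risk_tier(ModelRecord("credit-scoring-model", "high", True, True)))
```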
To maximise the long-term benefits of AI and protect the interests of both firms and their customers, it is essential for firms to proactively update their AI strategies and governance models. This approach should align with the latest technological advancements and adhere to current and evolving regulatory frameworks. Such strategic alignment should ensure a responsible, ethical and legally compliant utilisation of AI, paving the way for sustainable growth and innovation in the financial services sector.
Peter El Khoury
Head of Banking Prudential Regulation & FS Digital Partner, PwC United Kingdom
Tel: +44 (0)7872 005506