At a glance

Regulators outline AI priorities for next 12 months

  • Insight
  • 12 minute read
  • April 2024

The FCA, Bank of England and PRA (the regulators) detailed their plans for the regulation of artificial intelligence (AI) in updates published on 22 April 2024.

The updates were requested by the Government in its response to an earlier white paper on AI, published on 6 February 2024. 

The regulators highlight the work they have already conducted to identify and manage AI risks. They also outline existing requirements and areas of focus for the next 12 months.

What does this mean?

In their updates, the regulators map the Government’s principles to existing sectoral requirements and outline their approach to managing AI risks and opportunities. They also lay out their areas of focus and planned activities for the next 12 months.

Regulatory approach and Government’s principles

The Government has laid out five principles for the regulators to interpret and apply within their remit: 

  • safety, security and robustness
  • appropriate transparency and explainability
  • fairness
  • accountability and governance
  • contestability and redress.

The regulators indicate that the current regulatory framework appropriately supports the delivery of the benefits of AI, while addressing the risks, in line with the principles set out by the Government.

They reaffirm their technology-neutral approach, focusing on outcomes rather than prescribing specific requirements. However, the regulators will adapt to market and technology changes and consider issuing guidance and/or using other regulatory tools if needed.

Areas of focus

The regulators outline their current areas of focus based on industry feedback received in response to their discussion paper and the AI Public-Private Forum (AIPPF), including:

  • Governance: Compliance with measures and governance structures as outlined in existing rules, including the Consumer Duty, the model risk management (MRM) supervisory statement SS1/23, and the Senior Managers Regime (SMR).
  • Data management: Addressing challenges related to the fragmentation of data regulation.
  • Model risk management: Enforcement of SS1/23. The PRA will consider at a later stage whether the scope should be broadened to include insurers and banks currently outside it.
  • Operational resilience and third party risk: Assessing the impact of AI on operational resilience and third party risk frameworks. Considering AI services as part of the upcoming Critical Third Parties regime.
  • Competition: Assessing risks arising from the concentration of third-party technology services and the broader impact of Big Tech in financial services.
  • Consumer outcomes: Monitoring the impact of AI on consumers and enforcing existing rules, particularly the Consumer Duty.

Planned work in the next 12 months

Over the next 12 months, the regulators will remain focused on exploratory initiatives and the implementation of existing regulations. Key planned activities include:

  • Conducting the third edition of the machine learning survey.
  • Monitoring the macroeconomic effects of AI on financial markets, with a particular focus on financial stability, market integrity, and cybersecurity. These analyses will be reviewed by the Financial Policy Committee (FPC).
  • Collaborating with the PSR to examine the integration of AI into payment systems and services.
  • Expanding efforts on emerging technologies such as quantum computing and addressing challenges posed by Big Tech in the financial sector.
  • Contributing to joint research on the cross-sector adoption of generative AI and deepfake technologies with the Digital Regulation Cooperation Forum (DRCF).
  • Participating in a new cross-sectoral advisory pilot launched by the DRCF.
  • Assessing opportunities for piloting new types of regulatory engagement and testing environments, such as AI sandboxes.
  • Exploring the formation of a follow-up industry consortium to the AIPPF.
  • Publishing a consultation paper on the SMR in June 2024.

International cooperation and standards

The updates recognise the role of international cooperation and standards to prevent regulatory divergence and mitigate risks.

The regulators highlight ongoing engagement with international organisations including: the International Organization of Securities Commissions; the Financial Stability Board (FSB); the Organisation for Economic Co-operation and Development; and the Global Financial Innovation Network.

What do firms need to do?

Review and update governance structures and controls frameworks, aligning with the requirements of the SMR, Consumer Duty, and MRM statement.

Develop testing and validation frameworks to address AI developments, supporting explainability and bias detection.

Strengthen operational resilience and third party risk management, particularly by identifying and addressing risks to important business services.

The regulators are not planning to develop an overarching framework for AI regulation in the short term. However, AI remains a key area of focus, and further work will be conducted in the coming months. They also stress that AI is already covered by a broad range of existing regulations with which firms must comply.

Firms should be aware of both UK and international developments. Despite ongoing engagement with international stakeholders, there is a risk of regulatory divergence.

Firms need to establish clear lines of accountability and ensure appropriate governance for their use cases, particularly to meet requirements outlined in the SMR, Consumer Duty, and SS1/23 for applicable firms.

Additionally, firms need to review and update their controls and validation frameworks, especially as they adopt more advanced models such as generative AI. This will support broader compliance efforts, including the Consumer Duty regime.

As firms prepare for large-scale AI/generative AI deployment, they must also ensure robust governance and proper data management guardrails are in place.

Strengthening operational resilience and effectively managing third-party risks are also essential. This includes identifying and mitigating risks to important business services.

“The complexity of AI models may require a greater focus on the testing, validation and explainability of AI models as well as strong accountability principles.”

Nikhil Rathi, Chief Executive, FCA

Next steps

The regulators will conduct exploratory work in the coming months to inform their decision on whether to provide further clarification or guidance to firms. This includes a survey on machine learning and a consultation on the SMR in June 2024.

Contacts

Leigh Bates

UK FS Data Analytics Leader, London, PwC United Kingdom


Peter El Khoury

Head of Banking Prudential Regulation & FS Digital Partner, PwC United Kingdom

+44 (0)7872 005506


Conor MacManus

Director, London, PwC United Kingdom

+44 (0)7718 979428


Hugo Rousseau

Manager, PwC United Kingdom

+44 (0)7484 059376

