At a glance

Government sets out next steps for regulators on AI

  • Insight
  • 12 minute read
  • February 2024

On 6 February 2024, the Government provided an update on artificial intelligence (AI) policy and outlined the next steps following its March 2023 AI white paper.

It confirmed the adoption of a principles-based approach as proposed in the white paper, emphasising the crucial role of regulators, including the Financial Conduct Authority (FCA) and the Bank of England (BoE), in developing a context-specific approach to AI regulation based on five high-level principles.

The Government also published the first phase of guidance to the regulators for implementing the AI framework. The next steps include refining this guidance with feedback from regulators and firms in phase two, aiming for a release by the summer, followed by developing joint tools and guidance across regulatory remits in phase three.

What does this mean?

The Government has reaffirmed its strategy for AI regulation, emphasising broad support for its approach and noting progress on actions proposed in the AI white paper, including establishing a central function to oversee policy implementation.

Opting for a principles-based, context-specific approach, the Government aims for an agile regulatory response to the rapid evolution of AI technologies, assigning regulators a pivotal role in designing appropriate rules and guidelines.

While current plans do not include immediate legislation for the regulation of AI, the Government indicates that future legislative action may become necessary as AI technologies continue to advance and risks become better understood.

Actions for regulators

The FCA and BoE will publish an update by 30 April 2024 on their strategic AI regulation approach. This update should detail:

  • steps taken in alignment with the white paper's expectations
  • analysis of AI-related risks and actions they are taking forward to mitigate these
  • evaluation of current capabilities and requirements to address AI risks, and actions they are taking to ensure they have the right structures and skills in place
  • planned activities over the next 12 months.

The regulators will also need to reflect on the Government’s guidance, which sets out the following actions for them to consider across the five key principles:

  • Safety, security and robustness: assess and communicate safety risks, and require AI developers and deployers within their remit to conduct risk assessments and adopt mitigations.
  • Appropriate transparency and explainability: encourage firms to incorporate transparency and explainability into the design and use of AI systems.
  • Fairness: advance and clarify fairness in AI outcomes within regulatory scopes, ensure AI design and usage align with these standards, and emphasise collaboration on fairness tools and guidance across overlapping regulatory domains.
  • Accountability and governance: set defined compliance and best practice standards for AI supply chain actors, evaluate the applicability of existing accountability measures to AI contexts, and enhance accountability through transparency and explainability.
  • Contestability and redress: encourage AI developers and deployers to guide users on contesting AI decisions, emphasising that transparency and explainability are essential for challenging outcomes and understanding redress options.

The Government also states that regulators should consider mapping the standards that could help AI developers and deployers understand the principles in practice.

Regulators to develop new capabilities

Acknowledging the need for context-specific regulatory support, the Government has pledged a £10m fund to help regulators develop the tools and research needed to monitor AI risks and opportunities effectively, which may include new technical tools for examining AI systems.

Advanced AI and international dynamics

The Government will explore further the impact of new advanced AI models, including highly capable general-purpose AI. This work will be conducted as part of the AI Safety Institute and with international partners, including through the AI Safety Summits. 

Recognising the limitations of voluntary measures, the Government indicates that regulatory intervention is likely to be needed, with requirements tiered according to compute and capability benchmarks.

The Government has reiterated its commitment to strengthening the governance of AI on an international scale, encompassing both advanced AI technologies and the broader application of AI.

What do firms need to do?

  • Assess the impact of AI and generative AI (GenAI) on business and society by identifying and prioritising key use cases, then scaling strategically to enhance benefits.
  • Collaborate with regulators and align AI practices with the Government’s principles to foster innovation within a responsible framework.
  • Invest in developing capabilities for compliance and innovation, including technical tools, staff upskilling, and best practices.

The Government’s approach to AI policy is aligned with the approach that the financial services regulators have adopted and the themes they have been prioritising. However, the Government’s proposals may signal a shift in the level of supervisory scrutiny and regulatory activity on AI. The regulators have already undertaken a number of information gathering exercises and firms should be prepared for further supervisory initiatives.    

Further regulatory clarity should support adoption across the sector. However, firms should already be ensuring that their deployment of AI is consistent with obligations under regulations such as the Consumer Duty, and that responsibility for AI is embedded within the Senior Managers Regime (SMR).

Firms exploring and deploying AI and GenAI should evaluate their use cases, selecting and prioritising those that offer the greatest value to their business, while also considering the wider impact on customers and society and meeting regulatory expectations.

Firms that invest in governance and compliance frameworks, as well as innovation capabilities, will be best placed to realise sustainable value from AI. 

Firms operating globally face an increasingly complex regulatory landscape, with the AI Act in the EU and emerging expectations in the US and other major jurisdictions. Navigating this fragmentation will require adaptable global frameworks and a tech-enabled approach to regulatory tracking and compliance.

“Financial services firms must proactively adapt to an evolving AI regulatory landscape by aligning AI strategies with regulatory principles. Enhancing compliance to safely innovate at pace, underpinned by trust and the responsible use of AI, is critical to managing evolving AI risks.”

Leigh Bates
Partner, PwC

Next steps

The FCA and BoE will publish an outline of their strategic approach to AI regulation by 30 April 2024. This should include a forward look of plans and activities over the coming 12 months.

Contacts

Leigh Bates

UK FS Data Analytics Leader, London, PwC United Kingdom

Peter El Khoury

Head of Banking Prudential Regulation & FS Digital Partner, PwC United Kingdom

+44 (0)7872 005506

Chris Heys

Partner, PwC United Kingdom

+44 (0)7715 034667

Balaji Krishnamurthy

Partner, PwC United Kingdom

+44 (0)7590 352503

Conor MacManus

Director, London, PwC United Kingdom

+44 (0)7718 979428

Fabrice Ciais

Director, AI, PwC United Kingdom

+44 (0)7843 334241
