EU AI Act

The impact and necessary response

The EU Artificial Intelligence Act is a cross-sector framework for regulating AI systems. What actions can UK firms take now to comply with the requirements?

What is the EU AI Act?

The EU Artificial Intelligence Act (AIA) is a new cross-sector legislative framework for regulating AI systems in the European Union (EU). It sets harmonised rules for the use of AI technologies, including generative and general-purpose AI. The Act takes a risk-based approach, categorising AI systems by their potential risks to health, safety, and fundamental rights, and imposing obligations accordingly.

Why is the EU AI Act relevant to UK firms?

The AIA’s scope is extra-territorial, meaning UK businesses that develop or deploy AI systems for the EU market fall under its regulation. UK entities are in scope as providers when they release AI systems through EU subsidiaries, and remain in scope even where their models are not deployed in the EU if the models’ outputs are intended to be used in the EU.

While other regions are developing their own AI regulations, the EU AI Act is a global reference point. The UK government acknowledges that the challenges posed by AI technologies will ultimately require legislative action in every country, but it currently relies on existing laws and frameworks, judging that more time is needed to understand the risks, opportunities, and appropriate regulatory responses. There is broad agreement on AI risks and principles across jurisdictions, but regulatory divergence remains a potential challenge for firms.

When will the Act be enforced?

Timelines for compliance have been established for different risk classifications. Organisations must proactively classify and assess the risks of their AI systems in the coming months to avoid penalties and reputational damage.

For example, a firm still deploying a prohibited AI system, such as one that infers emotions in the workplace, six months after the Act enters into force could face fines of up to EUR 35,000,000 or 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. Such a system would have to be withdrawn from the European market, or redesigned so that it no longer meets the prohibition criteria defined by the EU AI Act.
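
The ‘whichever is higher’ rule is simply a maximum of the fixed amount and the turnover percentage. As a minimal sketch (the turnover figure is hypothetical):

    # Ceiling of the fine for prohibited AI practices under the EU AI Act:
    # EUR 35m or 7% of total worldwide annual turnover for the preceding
    # financial year, whichever is higher.
    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

    # Hypothetical firm with EUR 600m turnover: 7% is EUR 42m, which
    # exceeds the EUR 35m floor, so EUR 42m is the ceiling.
    print(f"EUR {max_fine_eur(600_000_000):,.0f}")  # EUR 42,000,000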

Our view for UK firms

UK firms must act now to comply with the EU AI Act’s requirements. While the majority of the obligations, including those for most high-risk systems, will apply 24 months after the Act enters into force, some provisions apply earlier or later than that milestone. For example, prohibitions on certain AI systems will apply from February 2025, six months after entry into force, and requirements for general-purpose AI will apply from August 2025.

Actions for firms:

  • Assess the impact on compliance: evaluate the impact of the AIA on existing compliance frameworks, particularly for cross-border functions.
  • Classify AI systems and develop an inventory of AI assets: categorise AI systems according to the risk classifications and keep a dynamic inventory based on the EU taxonomy (a minimal sketch of such an inventory follows this list).
  • Identify prohibited AI systems and take action immediately: determine and address AI systems that will be prohibited from February 2025.
  • Review AI governance model: update governance frameworks to align with the AIA.
  • Establish a risk management framework: develop and implement comprehensive risk management, testing, and validation procedures.
  • Strengthen data governance: ensure robust data management practices.
  • Ensure adequate controls: confirm that proper controls are in place, especially for advanced AI systems.
  • Map interdependencies: understand internal and external dependencies related to AI systems, and build trust with cloud and third-party platform providers to ensure a shared vision and clearly defined roles and responsibilities for implementing adequate controls and safeguards.
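
To make the classification and inventory actions concrete, the sketch below shows one possible shape for a dynamic AI inventory. The risk tiers mirror the Act’s categories, but the record fields, scope test, and example entries are illustrative assumptions rather than a prescribed schema:

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class RiskTier(Enum):
        # Risk classifications under the EU AI Act.
        PROHIBITED = "prohibited"   # banned practices (Article 5)
        HIGH = "high"               # e.g. Annex III use cases
        LIMITED = "limited"         # transparency obligations only
        MINIMAL = "minimal"         # no specific obligations

    @dataclass
    class AIAsset:
        # One entry in the firm's AI inventory; fields are illustrative.
        name: str
        business_owner: str
        risk_tier: RiskTier
        placed_on_eu_market: bool
        outputs_used_in_eu: bool
        last_reviewed: date

        @property
        def in_aia_scope(self) -> bool:
            # Extra-territorial scope: placed on the EU market, or
            # outputs intended to be used in the EU.
            return self.placed_on_eu_market or self.outputs_used_in_eu

    inventory = [
        AIAsset("CV screening model", "HR", RiskTier.HIGH, True, True, date(2024, 6, 1)),
        AIAsset("Workplace emotion inference", "HR", RiskTier.PROHIBITED, False, True, date(2024, 5, 1)),
    ]

    # Surface anything in scope that must be withdrawn or redesigned first.
    urgent = [a.name for a in inventory
              if a.in_aia_scope and a.risk_tier is RiskTier.PROHIBITED]
    print(urgent)  # ['Workplace emotion inference']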

The AIA will likely present new compliance challenges but also offers an opportunity to align AI development and deployment with strategic priorities. Proactively addressing these challenges can enhance innovation capabilities, ensure ethical AI practices globally, and strengthen competitive advantage.

Question & Answer

What is the UK’s regulatory stance? Will the obligations be different from or inconsistent with the EU AI Act?

The UK Government has established five principles for regulators, which broadly align with the AIA:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The regulators have indicated that these principles already align well with current regulations, facilitating risk identification and mitigation. They maintain a technology-neutral, outcomes-driven approach but will respond to market and technological changes. The regulators are also actively exploring potential gaps, such as in the interpretation of copyright law, data protection, and impacts on security, fairness, and competition.

UK regulators may issue new guidance and rules if they identify regulatory issues. Firms should consider taking the key actions outlined in ‘Our view for UK firms’ to navigate the evolving international regulatory landscape. However, firms should note that the UK's approach may evolve, and compliance with the AIA does not guarantee compliance with UK regulations.

How is AI defined in the EU AI Act, and what technologies are covered?

The EU AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment. Such systems infer from the inputs they receive how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The definition rests on key characteristics that distinguish AI systems from simpler traditional software systems or programming approaches, and it does not cover systems based solely on rules defined by natural persons to execute operations automatically. This ensures broad coverage of AI technologies while clearly differentiating them from basic data processing systems that lack inferential capability.

Organisations should also understand other definitions and conditions of use for their AI systems. For instance, AI systems used in high-risk applications that do not materially influence decision-making or pose a significant risk of harm may not be classified as high risk. To benefit from this exemption, AI systems must fulfil specific conditions, including performing only narrow procedural tasks.

I am a UK-based entity developing a General Purpose AI model. What would be my obligations?

The regulation applies if the AI model is placed on the EU market, put into service in the EU, or if its outputs are intended to be used in the EU. Firms also need to identify whether the model is categorised as a model with systemic risk, or whether the systems built on it are high risk, as this significantly affects the compliance burden. Additionally, there are specific exemptions for open-source models, provided these models do not pose systemic risks.

If the model falls within the scope and firms are considered providers, they need to comply with a range of obligations, including:

  • Draw up and maintain up-to-date technical documentation, including details of the model’s training and testing processes and the results of its evaluation.
  • Create, keep current, and make available information and documentation to providers who intend to integrate the general-purpose AI model into their systems. This documentation must enable providers to understand the capabilities and limitations of the AI model and comply with their regulatory obligations.
  • Implement a policy to comply with Union law on copyright.
  • Make publicly available a detailed summary of the content used for training the AI model.

In addition, providers of general-purpose AI models with systemic risk need to comply with further obligations, including:

  • Perform model evaluation, including conducting and documenting adversarial testing to identify and mitigate systemic risks.
  • Assess and mitigate possible systemic risks, including their sources, that may stem from the development, placing on the market, or use of general-purpose AI models with systemic risk.
  • Keep track of, document, and report serious incidents and possible corrective measures.
  • Ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and its physical infrastructure.

It is also important for firms to understand that if they use a third-party general-purpose model, they may themselves be considered the provider of the model if they change its intended purpose or make substantial modifications to it.

I am a UK-based firm developing a high-risk system which will be used by subsidiaries in the EU. Am I a provider or a deployer?

A UK firm can be subject to the EU AI Act both as a provider and as a deployer. For example, a UK-based company that develops high-risk AI systems to be put into service in the EU, such as within a subsidiary, must comply with the requirements as a provider, while the subsidiary assumes the deployer’s responsibilities.

Firms need to strategically consider how the boundary between provider and deployer impacts their functions, especially on a cross-border basis. This includes understanding compliance requirements in both roles and ensuring seamless coordination between the UK headquarters and EU subsidiaries to meet regulatory standards.

Contact us

Chris Oxborough

Lead for Responsible AI, PwC United Kingdom

Tel: +44 (0)7711 473199

Fabrice Ciais

Director, AI, PwC United Kingdom

Tel: +44 (0)7843 334241
