The EU Artificial Intelligence Act (AIA) is a new cross-sector legislative framework for regulating AI systems in the European Union (EU). It sets harmonised rules for the use of AI technologies, including generative and general-purpose AI. The Act uses a risk-based approach, categorising AI systems by their potential risks to health, safety, and fundamental rights, and imposing specific obligations accordingly.
The AIA’s scope is extra-territorial, meaning UK businesses that develop or deploy AI systems for the EU market fall under its regulation. UK entities are within scope as providers when they release AI systems through EU subsidiaries, and also when their models are not deployed in the EU but their outputs are intended to be used in the EU.
While other regions are developing their own AI regulations, the EU AI Act is a global reference point. The UK government acknowledges that the challenges posed by AI technologies will ultimately require legislative action in every country. For now, however, it relies on existing laws and frameworks, on the basis that more time is needed to understand the risks, opportunities, and appropriate regulatory responses. There is broad agreement on AI risks and principles across jurisdictions, but regulatory divergence remains a potential challenge for firms.
Timelines for compliance have been established for different risk classifications. Organisations must proactively classify and assess the risks of their AI systems in the coming months to avoid penalties and reputational damage.
For example, consider a firm that, six months after the Act takes effect, is still deploying an AI system deemed prohibited, such as one that infers emotions in the workplace. That firm could face fines of up to EUR 35,000,000 or 7% of its total worldwide annual turnover from the previous financial year, whichever is higher. Such a system would have to be removed from the European market, or redesigned so that it no longer meets the prohibited criteria as defined by the EU AI Act.
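As a rough illustration, the penalty ceiling described above is simply the greater of the fixed amount and the turnover percentage. A minimal sketch (the turnover figure below is hypothetical, chosen only to show the turnover-based cap dominating):

```python
def max_penalty_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for deploying a prohibited AI system
    under the EU AI Act: the higher of EUR 35,000,000 or 7% of total
    worldwide annual turnover for the preceding financial year."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# Hypothetical firm with EUR 2bn turnover: 7% (EUR 140m) exceeds the fixed cap
print(max_penalty_eur(2_000_000_000))  # 140000000.0

# Hypothetical smaller firm with EUR 100m turnover: the EUR 35m floor applies
print(max_penalty_eur(100_000_000))  # 35000000.0
```

Note that this is the statutory ceiling only; the actual fine imposed would depend on the circumstances of the infringement.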
UK firms must act now to comply with the EU AI Act’s requirements. While the majority of the obligations, including those for most high-risk systems, will apply 24 months after the Act enters into force, some provisions apply before and after that milestone. For example, prohibitions on certain AI systems apply six months after entry into force, and requirements for general-purpose AI apply twelve months after.
Actions for firms:
The AIA will likely present new compliance challenges but also offers an opportunity to align AI development and deployment with strategic priorities. Proactively addressing these challenges can enhance innovation capabilities, ensure ethical AI practices globally, and strengthen competitive advantage.
The UK Government has established five principles for regulators, which broadly align with the AIA:
The regulators have indicated that these principles already align well with current regulations, facilitating risk identification and mitigation. They maintain a technology-neutral, outcomes-driven approach but will respond to market and technological changes. The regulators are actively exploring potential gaps, such as in the interpretation of copyright law, data protection, and impacts on security, fairness, and competition.
UK regulators may issue new guidance and rules if they identify regulatory issues. Firms should consider taking the key actions outlined in ‘Our view for UK firms’ to navigate the evolving international regulatory landscape. However, firms should note that the UK's approach may evolve, and compliance with the AIA does not guarantee compliance with UK regulations.
The EU AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy and adaptiveness after deployment. These systems can infer from inputs to generate outputs like predictions, content, recommendations, or decisions, influencing physical or virtual environments.
The definition is based on key characteristics that distinguish AI systems from simpler traditional software systems or programming approaches, and it does not cover systems based solely on rules defined by natural persons to automatically execute operations. This ensures broad coverage of AI technologies while clearly differentiating them from basic data processing systems that lack inferential capabilities.
Organisations should also understand other definitions and conditions of use for their AI systems. For instance, AI systems used in high-risk applications that do not significantly influence decision-making or pose a significant risk of harm may not be considered high risk. To benefit from this exemption, AI systems need to fulfil specific conditions, including performing only narrow procedural tasks.
The regulation applies if the AI model is placed on the EU market, put into service in the EU, or if its outputs are intended to be used in the EU. Firms also need to identify whether the model is categorised as a model with systemic risks or high risk, as this significantly impacts compliance. Additionally, there are specific exemptions for open-source models, provided these models do not pose systemic risks.
If the model falls within the scope and firms are considered providers, they need to comply with a range of obligations, including:
In addition, providers of general-purpose AI models with systemic risk need to comply with further obligations, including:
It is also important for firms to understand that if they use a third-party general-purpose model, they may themselves be considered the provider of the model if they change its intended purpose or make substantial modifications to it.
A UK firm can be subject to the EU AI Act both as a provider and a deployer.
For example, a UK-based company that develops high-risk AI systems to be put into service in the EU, such as within a subsidiary, must comply with the requirements as a provider. The subsidiary will need to assume the deployer's responsibilities.
Firms need to strategically consider how the boundary between provider and deployer impacts their functions, especially on a cross-border basis. This includes understanding compliance requirements in both roles and ensuring seamless coordination between the UK headquarters and EU subsidiaries to meet regulatory standards.