The regulation applies if the AI model is placed on the EU market, put into service in the EU, or if its outputs are intended to be used in the EU. Firms also need to establish whether the model is categorised as a model with systemic risk or as high risk, since this categorisation significantly affects the obligations that apply. There are also specific exemptions for open-source models, provided they do not pose systemic risks.
If the model falls within scope and firms are considered providers, they must comply with a range of obligations, including:
- Draw up and keep up to date technical documentation of the model, including its training and testing process and the results of its evaluation (a minimal sketch of such a record appears after this list).
- Draw up, keep up to date, and make available information and documentation to downstream providers who intend to integrate the general-purpose AI model into their AI systems. This documentation must enable those providers to understand the capabilities and limitations of the model and to comply with their own regulatory obligations.
- Implement a policy to comply with Union law on copyright.
- Make publicly available a detailed summary of the content used for training the AI model.
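In practice, these documentation duties are easier to discharge as a structured, versioned record than as free-form documents. The Python sketch below shows one possible shape for such a record; all class and field names (e.g. `GpaiTechnicalDocumentation`, `TrainingDataSummary`) are illustrative assumptions, not a schema prescribed by the regulation.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative record of the technical documentation a provider must
# draw up and keep up to date. Field names are assumptions for this
# sketch, not terms mandated by the AI Act.
@dataclass
class TrainingDataSummary:
    sources: list[str]          # e.g. licensed corpora, public-domain texts
    copyright_policy_url: str   # link to the Union-copyright compliance policy
    public_summary_url: str     # publicly available training-content summary

@dataclass
class GpaiTechnicalDocumentation:
    model_name: str
    version: str
    last_updated: date
    training_process: str                 # description of training and testing
    evaluation_results: dict[str, float]  # benchmark name -> score
    capabilities: list[str]               # for downstream providers
    limitations: list[str]                # known failure modes, out-of-scope uses
    training_data: TrainingDataSummary

# Example instantiation with placeholder values.
doc = GpaiTechnicalDocumentation(
    model_name="example-gpai",
    version="1.2.0",
    last_updated=date(2025, 1, 15),
    training_process="Pre-trained on text corpora; fine-tuned and red-teamed.",
    evaluation_results={"helpfulness_benchmark": 0.87},
    capabilities=["text generation", "summarisation"],
    limitations=["may hallucinate facts", "not evaluated for medical advice"],
    training_data=TrainingDataSummary(
        sources=["licensed news archive", "public-domain books"],
        copyright_policy_url="https://example.com/copyright-policy",
        public_summary_url="https://example.com/training-data-summary",
    ),
)
```

Keeping the record machine-readable makes it straightforward to version it alongside the model and to generate both the downstream-provider documentation and the public training-content summary from a single source.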
In addition, providers of general-purpose AI models with systemic risk need to comply with further obligations, including:
- Perform model evaluation, including conducting and documenting adversarial testing to identify and mitigate systemic risks.
- Assess and mitigate possible systemic risks, including their sources, that may stem from the development, placing on the market, or use of general-purpose AI models with systemic risk.
- Keep track of, document, and report serious incidents and possible corrective measures (see the incident-register sketch after this list).
- Ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and its physical infrastructure.
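The incident-handling obligation implies an auditable log of what happened, what was done about it, and whether it was reported. The sketch below is one minimal way to structure such a register in Python; the status values and field names (e.g. `IncidentRegister`, `mark_reported`) are assumptions for illustration, not terms defined by the AI Act.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

# Illustrative serious-incident register for a general-purpose AI model
# with systemic risk. Names and statuses are assumptions, not AI Act terms.
class IncidentStatus(Enum):
    OPEN = "open"
    MITIGATED = "mitigated"
    REPORTED = "reported_to_authority"

@dataclass
class SeriousIncident:
    incident_id: str
    detected_at: datetime
    description: str                 # what happened and how it was detected
    affected_component: str          # e.g. model endpoint, serving infrastructure
    corrective_measures: list[str] = field(default_factory=list)
    status: IncidentStatus = IncidentStatus.OPEN

@dataclass
class IncidentRegister:
    incidents: list[SeriousIncident] = field(default_factory=list)

    def record(self, incident: SeriousIncident) -> None:
        """Keep track of a new serious incident."""
        self.incidents.append(incident)

    def mark_reported(self, incident_id: str) -> None:
        """Document that the incident was reported to the relevant authority."""
        for inc in self.incidents:
            if inc.incident_id == incident_id:
                inc.status = IncidentStatus.REPORTED

# Example usage with placeholder values.
register = IncidentRegister()
register.record(SeriousIncident(
    incident_id="INC-2025-001",
    detected_at=datetime(2025, 3, 2, 14, 30),
    description="Model produced unsafe output despite safety filters.",
    affected_component="public inference API",
    corrective_measures=["tightened output filter", "retrained refusal classifier"],
))
register.mark_reported("INC-2025-001")
```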
Firms should also understand that if they use a third-party general-purpose AI model, they may themselves be deemed its provider if they change the model's intended purpose or substantially modify it.