As the regulatory landscape surrounding the EU Artificial Intelligence Act (“AI Act”) continues to evolve, organisations developing or integrating AI technologies should turn their attention to a critical and rapidly developing area: General-Purpose AI (GPAI) — and more specifically, GPAI models considered to present systemic risk.
In Part I of this series, we covered the provisions already in force as of February 2025 and the obligations taking effect in August 2025, and introduced the concept of GPAI. In this instalment, we examine the criteria under which a GPAI model is classified as systemic and the implications for providers and businesses.
When Is a GPAI Model Considered “Systemic”?
Chapter V of the AI Act (Articles 51–55) establishes a dedicated compliance regime for GPAI models with systemic risk.
A GPAI model is deemed “systemic” if it possesses “high-impact capabilities”, that is, capabilities whose scale, sophistication, or reach could have a significant impact on the internal market. By default, the AI Act presumes high-impact capabilities where the cumulative compute used to train the model, measured in floating-point operations, exceeds 10²⁵ FLOPs. Models trained at this scale have the potential to significantly affect public safety, health, fundamental rights, and societal or economic stability.
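For a sense of scale, the sketch below illustrates how a provider might estimate training compute against the 10²⁵ FLOPs threshold. It is purely illustrative: the 6 × parameters × training tokens approximation is a common engineering heuristic for dense transformer training, not a measurement method prescribed by the AI Act, and all figures used are hypothetical.

```python
# Illustrative sketch only, not a method prescribed by the AI Act.
# Estimates training compute using the common heuristic
# FLOPs ~= 6 * parameters * training tokens, then compares the result
# to the Article 51 presumption threshold of 10^25 FLOPs.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training-compute presumption

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total floating-point operations used in training."""
    return 6 * parameters * training_tokens

def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the 10^25 FLOPs threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 500-billion-parameter model trained on 10 trillion tokens.
flops = estimated_training_flops(500e9, 10e12)
print(f"Estimated compute: {flops:.1e} FLOPs")                      # 3.0e+25 FLOPs
print("Presumed high-impact:", presumed_high_impact(500e9, 10e12))  # True
```

On this heuristic, a model of that hypothetical size would comfortably exceed the threshold, while most smaller commercial models would fall well below it; the presumption can also be challenged or extended through the Commission processes described below.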
Importantly, the European Commission, in consultation with its scientific panel of experts, may formally designate additional models as systemic, even where they fall below the compute threshold, based on the broader criteria set out in Annex XIII, including but not limited to performance, usage, and risk profile.
Notification and Designation Process
The AI Act requires providers of GPAI models that meet the high-impact capabilities criterion to notify the European Commission without delay, and in any event within two weeks of the criterion being met or of the provider becoming aware that it will be met. Providers must furnish the Commission with all information necessary to assess and justify a systemic-risk designation. The Commission is also empowered, where it becomes aware of a GPAI model presenting systemic risk for which no notification has been received, to designate that model as one of systemic risk on its own initiative.
Notably, the AI Act allows providers to submit, alongside the notification, arguments that the model, despite having high-impact capabilities, does not present systemic risk due to its specific characteristics. The Commission will then assess whether those arguments are sufficiently substantiated to demonstrate that the model’s specific characteristics do not give rise to systemic risk.
Key Obligations for Systemic GPAI Providers
The AI Act imposes enhanced obligations on providers of GPAI models with systemic risk, beyond the general requirements for all GPAI models under Article 53. These enhanced obligations include the following (an illustrative sketch appears after the list):
- Model evaluation and testing: Providers must conduct and document assessments of safety, performance, and systemic risks — both prior to deployment and on an ongoing basis.
- Adversarial robustness and cybersecurity: Measures must be in place to protect against misuse, manipulation, or unintended behaviours.
- Incident monitoring and reporting: Any serious incidents or emerging risks must be reported to the EU AI Office and relevant national authorities.
- Compute and energy transparency: Providers must disclose information about the computing resources and energy consumption associated with training and deploying the model.
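By way of illustration only, a provider’s compliance or engineering team might track these obligation areas internally in a simple structure such as the following sketch; all names, statuses, and fields are our own invention and are not drawn from the AI Act.

```python
# Illustrative internal tracker for the enhanced obligation areas summarised
# above. Field names and statuses are hypothetical, not AI Act terminology.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    EVIDENCED = "evidenced"  # supporting documentation ready for regulators

@dataclass
class ObligationArea:
    name: str
    description: str
    status: Status = Status.NOT_STARTED
    evidence: list[str] = field(default_factory=list)  # e.g. report references

OBLIGATION_AREAS = [
    ObligationArea("model_evaluation", "Safety, performance and systemic-risk evaluation, pre-deployment and ongoing"),
    ObligationArea("adversarial_robustness", "Protections against misuse, manipulation and unintended behaviours"),
    ObligationArea("incident_reporting", "Serious-incident monitoring and reporting to the AI Office and national authorities"),
    ObligationArea("compute_energy", "Disclosure of training and deployment compute and energy consumption"),
]

for area in OBLIGATION_AREAS:
    print(f"{area.name}: {area.status.value}")
```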
These requirements are designed to ensure that models with the potential to impact critical systems or democratic processes are built and deployed with the highest levels of transparency and accountability.
Latest Updates
European Commission Guidance – July 18, 2025
On July 18, 2025, the European Commission released detailed implementation guidance to assist providers of GPAI models in understanding and preparing for the systemic-risk obligations. Key updates include:
- Compliance timelines: The systemic-risk provisions apply from August 2, 2025, with Commission enforcement beginning on August 2, 2026.
- Clarified expectations: The guidance emphasizes robust model evaluation, comprehensive risk assessments, incident tracking, and compute/energy disclosures.
- Named providers: Firms such as OpenAI, Google, Meta, Anthropic, and Mistral were specifically cited as likely to be subject to the new requirements based on the scale and capabilities of their models.
The GPAI Code of Practice: A Voluntary Yet Strategic Step
To support the implementation of these provisions, the EU finalised its GPAI Code of Practice on July 10, 2025. Although voluntary, the Code offers detailed guidance on how GPAI developers can meet their obligations ahead of formal enforcement. It covers essential themes such as transparency, safety, security, fundamental rights impact assessments, and intellectual property safeguards.
Major AI developers including OpenAI have already signed the Code, while others such as Microsoft are expected to follow. However, not all tech players are on board—Meta, for instance, has declined to sign, voicing concerns about overregulation and innovation constraints.
This marks a significant milestone in the EU’s push to regulate frontier AI technologies while ensuring alignment with democratic values, security, and environmental sustainability.
If you’d like support in navigating these upcoming requirements or assessing your exposure to systemic AI risk, our team is here to help.
For additional information, please contact our team:
Andria Koukounis, Partner, Andria.Koukounis@cylaw.ey.com
Nicholas Yiasemis, Manager, Nicholas.Yiasemis@cylaw.ey.com
Thekla Sorokkou, Senior, Thekla.Sorokkou@cylaw.ey.com