What is a GPAI Model?
Article 3(63) defines a general-purpose AI model as an AI model that:
- Is trained on large amounts of data using self-supervision at scale
- Displays significant generality
- Is capable of competently performing a wide range of distinct tasks
- Can be integrated into a variety of downstream systems or applications
The definition does not require a specific architecture. It captures large language models, multimodal models, and other foundation models whose defining characteristic is versatility: the ability to be applied across many use cases.
What is a GPAI System?
A GPAI system is an AI system based on a GPAI model that can be used for many different purposes — whether as provided by the original developer or when integrated into another AI system. Consumer-facing products like ChatGPT are GPAI systems built on GPAI models.
Obligations for All GPAI Providers
All providers placing GPAI models on the EU market must comply with the following, from 2 August 2025:
Technical Documentation (Article 53)
Providers must draw up and maintain up-to-date technical documentation covering:
- General description of the model including intended purpose
- Information about training methodology and processes
- Training data: source, scope, what types of data were used
- Compute used for training
- Model capabilities: performance benchmarks, known limitations
- Measures taken to reduce risks to health, safety, fundamental rights, and the environment
Copyright Compliance Information (Article 53)
GPAI providers must put in place a policy to respect copyright law across the EU, in particular to identify and comply with rights reservations expressed using machine-readable means under Article 4(3) of the Copyright in the Digital Single Market Directive.
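One common machine-readable signal for such a rights reservation is a robots.txt directive blocking AI training crawlers (other signals, such as dedicated TDM opt-out protocols, also exist). As a minimal illustrative sketch using Python's standard library — the crawler name `ExampleAIBot` is hypothetical:

```python
# Sketch: honouring a machine-readable rights reservation expressed via
# robots.txt. The user-agent "ExampleAIBot" is a hypothetical AI crawler;
# robots.txt is one common opt-out signal, not the only one recognised
# under Article 4(3) CDSM.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleAIBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The publisher has disallowed this crawler entirely, so a compliant
# training-data pipeline should treat the content as reserved and skip it.
allowed = parser.can_fetch("ExampleAIBot", "https://example.com/articles/1")
print(allowed)  # False
```

A pipeline with a copyright policy would run a check like this before ingesting any crawled page, logging the decision for the technical documentation.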
Summary of Training Data (Article 53)
Providers must publish a sufficiently detailed summary of the content used for training — detailed enough to enable providers of downstream AI systems and deployers to comply with their own obligations. This summary must be publicly available.
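The European Commission has published an official template for this summary. Purely as an illustrative sketch of the kind of structure involved — the field names below are invented for illustration and are not taken from that template:

```python
# Hypothetical shape of a public training-data summary. Field names are
# illustrative only, not the official Commission template.
training_data_summary = {
    "model": "example-model-v1",  # hypothetical model name
    "modalities": ["text", "images"],
    "data_sources": [
        {"type": "web-crawl", "description": "publicly accessible web pages"},
        {"type": "licensed", "description": "licensed news and book corpora"},
        {"type": "synthetic", "description": "model-generated training data"},
    ],
    "rights_reservations": "machine-readable opt-outs honoured during crawling",
}
print(len(training_data_summary["data_sources"]))  # 3
```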
Downstream Provider Obligations (Article 53)
When GPAI models are made available to other providers building AI systems on top of them, the GPAI provider must also provide technical documentation enabling those downstream providers to understand the model's capabilities and limitations and to comply with their own AI Act obligations.
Systemic Risk GPAI Models
GPAI models that present systemic risks face a more demanding set of obligations on top of the baseline requirements above.
What Triggers Systemic Risk Classification?
A GPAI model is presumed to present systemic risk if it was trained using a total computing power greater than 10²⁵ floating-point operations (FLOPs).
The EU AI Office may also designate a model as systemic risk on the basis of other criteria, including the model's reach (number of users), its capabilities in specific high-impact domains, or its degree of autonomy. Providers may also self-classify their models as presenting systemic risk.
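The 10²⁵ figure refers to total training compute. A widely used back-of-the-envelope estimate from the scaling-law literature — a heuristic, not part of the Act itself — puts training compute at roughly 6 × parameters × training tokens. As a sketch:

```python
# Rough check against the AI Act's 10**25 FLOP presumption threshold.
# Uses the common heuristic: training compute ~ 6 * params * tokens
# (from the scaling-law literature; the Act itself only states the
# FLOP threshold, not how to estimate it).

THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate exceeds the Act's presumption threshold."""
    return estimated_training_flops(n_params, n_tokens) > THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)  # about 6.3e24, below threshold
print(f"{flops:.2e}", presumed_systemic_risk(70e9, 15e12))
```

Under this heuristic, only very large training runs (hundreds of billions of parameters on tens of trillions of tokens) cross the presumption threshold.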
Additional Obligations for Systemic Risk GPAI Providers
- Model evaluation (Article 55): Conduct model evaluations, including adversarial testing, to identify and mitigate systemic risks. Evaluations must include assessments of significant capability levels in specific domains (cybersecurity, biological risks, etc.).
- Incident reporting (Article 55): Track, document, and report to the EU AI Office, without undue delay, serious incidents and malfunctions that could contribute to systemic risks.
- Cybersecurity measures (Article 55): Implement appropriate levels of cybersecurity protection for the model, its infrastructure, and physical environments.
- Energy efficiency reporting (Article 55): Report energy consumption during training and — to the extent applicable — during inference.
- Codes of practice: Systemic risk GPAI providers are expected to participate in and adhere to the codes of practice developed by the EU AI Office under Article 56.
The EU AI Office
The EU AI Office was established within the European Commission to oversee GPAI models at EU level. Its enforcement powers over GPAI providers apply alongside the GPAI rules from 2 August 2025.
Key Functions
- Designating GPAI models as presenting systemic risk
- Developing and administering codes of practice for GPAI providers
- Investigating alleged violations by GPAI providers (including requesting documents, conducting evaluations)
- Imposing fines on GPAI providers for violations (working through national authorities)
- Coordinating with national market surveillance authorities on cross-border issues
- Publishing guidance on the application of the GPAI provisions
For high-risk AI systems that are not GPAI, primary enforcement is by national market surveillance authorities in each member state. The EU AI Office handles systemic risk GPAI enforcement at EU level.
Does the AI Act Apply to ChatGPT, Claude, and Gemini?
Yes — the providers of these models (OpenAI, Anthropic, Google DeepMind) are subject to GPAI obligations under Chapter V from 2 August 2025.
What Specifically Applies
| Model / Provider | Baseline GPAI Obligations | Systemic Risk (likely) |
|---|---|---|
| GPT-4 / OpenAI | Yes — from Aug 2025 | Yes (training compute exceeds 10²⁵ FLOPs) |
| Claude Opus / Anthropic | Yes — from Aug 2025 | Likely yes for largest models |
| Gemini Ultra / Google DeepMind | Yes — from Aug 2025 | Likely yes for largest models |
| Llama (Meta) — open weights | Reduced obligations — open source exemptions apply | Systemic risk rules still apply if thresholds met |
The GPAI rules apply to the model provider, i.e. the company that trains the model and places it on the market. Businesses that deploy these models in their own products are not GPAI providers; they may be deployers of AI systems built on GPAI models, and their obligations depend on what those systems do.
Open-Source GPAI
GPAI providers that release their model weights publicly under open-source licences benefit from reduced obligations under Article 53 — specifically, they can comply with technical documentation requirements through simplified means. However, open-source status does not exempt systemic risk models from the additional systemic risk obligations under Article 55.