What is a GPAI Model?

Article 3(63) defines a general-purpose AI model as an AI model that:

- displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market; and
- can be integrated into a variety of downstream systems or applications.

The definition expressly includes models trained with a large amount of data using self-supervision at scale, and excludes models used for research, development, or prototyping activities before they are placed on the market.

The definition does not require a specific architecture. It captures large language models, multimodal models, and other foundation models where the key characteristic is versatility and the ability to be applied across many use cases.

GPAI is a model classification, not a system classification. The four-tier risk framework (prohibited / high / limited / minimal) applies to AI systems based on their use case. GPAI rules apply to the model itself — regardless of what downstream systems are built on top of it.

What is a GPAI System?

A GPAI system is an AI system based on a GPAI model that can be used for many different purposes — whether as provided by the original developer or when integrated into another AI system. Consumer-facing products like ChatGPT are GPAI systems built on GPAI models.


Obligations for All GPAI Providers

All providers placing GPAI models on the EU market must comply with the following, from 2 August 2025:

Technical Documentation (Article 53)

Providers must draw up and keep up to date technical documentation covering, at a minimum (Annex XI):

- a general description of the model, including the tasks it is intended to perform, its architecture and number of parameters, and its input/output modalities;
- the design specifications and training process, including key design choices and training methodologies;
- information on the data used for training, testing, and validation, including its type and provenance and how it was curated;
- the computational resources used to train the model and, where known, its energy consumption.

Copyright Compliance Information (Article 53)

GPAI providers must put in place a policy to respect copyright law across the EU, in particular to identify and comply with rights reservations expressed using machine-readable means under Article 4(3) of the Copyright in the Digital Single Market Directive.
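Article 4(3) of the CDSM Directive does not prescribe a specific technical protocol for machine-readable rights reservations; in practice, robots.txt directives addressed to AI crawlers are one widely used mechanism. A minimal sketch using Python's standard library, with hypothetical crawler names and an illustrative robots.txt:

```python
from urllib import robotparser

# Illustrative robots.txt: the site owner reserves rights against one
# hypothetical AI crawler ("GPTBot") while allowing all other agents.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(robots_txt)

# A compliant crawler checks the reservation before fetching content.
print(rp.can_fetch("GPTBot", "https://example.com/page"))   # False
print(rp.can_fetch("OtherBot", "https://example.com/page"))  # True
```

Whether a given signal qualifies as an "appropriate" machine-readable reservation under Article 4(3) is ultimately a legal question; this only shows how such a signal can be read programmatically.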

Summary of Training Data (Article 53)

Providers must publish a sufficiently detailed summary of the content used for training — detailed enough to enable providers of downstream AI systems and deployers to comply with their own obligations. This summary must be publicly available.

Downstream Provider Obligations (Article 53)

When GPAI models are made available to other providers building AI systems on top of them, the GPAI provider must also provide technical documentation enabling those downstream providers to understand the model's capabilities and limitations and to comply with their own AI Act obligations.


Systemic Risk GPAI Models

GPAI models that present systemic risks face a more demanding set of obligations on top of the baseline requirements above.

What Triggers Systemic Risk Classification?

A GPAI model is presumed to present systemic risk if it was trained using a total computing power greater than 10²⁵ floating point operations (FLOPs) (Article 51(2)).

The EU AI Office may also designate a model as presenting systemic risk on the basis of other criteria, including the model's reach (number of users), its capabilities in specific high-impact domains, or its degree of autonomy. Providers whose models meet the compute threshold must notify the Commission without delay (Article 52), although they may submit arguments that the model nonetheless does not present systemic risk.
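The 10²⁵ FLOPs presumption can be sanity-checked with the common ~6·N·D rule of thumb for dense transformer training compute (N parameters, D training tokens). This heuristic comes from scaling-law practice, not from the Act, and the model size below is purely illustrative:

```python
# Presumption threshold from Article 51(2) of the AI Act.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token.

    The factor 6 is a widely used scaling-law heuristic for dense
    transformers, not anything the AI Act specifies.
    """
    return 6 * params * tokens

# Hypothetical model: 1 trillion parameters, 10 trillion training tokens.
flops = estimated_training_flops(1e12, 10e12)
print(f"{flops:.2e}")                          # 6.00e+25
print(flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)   # True -> presumption triggered
```

Actual classification turns on the real training compute reported to (or assessed by) the EU AI Office, not on estimates like this.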

Additional Obligations for Systemic Risk GPAI Providers

Under Article 55, providers of systemic risk GPAI models must additionally:

- perform state-of-the-art model evaluations, including adversarial testing, to identify and mitigate systemic risks;
- assess and mitigate possible systemic risks at Union level, including their sources;
- track, document, and report serious incidents and possible corrective measures to the EU AI Office and, where relevant, national authorities without undue delay;
- ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.

The EU AI Office

The EU AI Office was established within the European Commission to oversee GPAI models at EU level. It became operational alongside the GPAI rules on 2 August 2025.

Key Functions

The AI Office's main tasks include:

- supervising and enforcing the GPAI provider obligations in Chapter V, including requesting information and model evaluations from providers;
- designating GPAI models as presenting systemic risk;
- facilitating the drawing up of codes of practice for GPAI providers;
- coordinating with national authorities to support consistent application of the Act.

For high-risk AI systems that are not GPAI, primary enforcement lies with national market surveillance authorities in each member state; the EU AI Office handles GPAI enforcement at EU level.


Does the AI Act Apply to ChatGPT, Claude, and Gemini?

Yes — the providers of these models (OpenAI, Anthropic, Google DeepMind) are subject to GPAI obligations under Chapter V from 2 August 2025.

What Specifically Applies

| Model / Provider | Baseline GPAI Obligations | Systemic Risk (likely) |
|---|---|---|
| GPT-4 / OpenAI | Yes — from Aug 2025 | Yes (training compute exceeds 10²⁵ FLOPs) |
| Claude Opus / Anthropic | Yes — from Aug 2025 | Likely yes for largest models |
| Gemini Ultra / Google DeepMind | Yes — from Aug 2025 | Likely yes for largest models |
| Llama (Meta) — open weights | Reduced obligations — open-source exemptions apply | Systemic risk rules still apply if thresholds met |

The GPAI rules apply to the model provider — the company that trained and releases the model. Businesses that deploy these models in their own products are not GPAI providers; they may be deployers of AI systems built on GPAI models, and their obligations depend on what those systems do.

Open-Source GPAI

GPAI providers that release their model weights publicly under a free and open-source licence benefit from reduced obligations under Article 53: they are exempt from the technical documentation requirements, though the copyright policy and training data summary obligations still apply. However, open-source status does not exempt systemic risk models from the additional obligations under Article 55.

Informational purposes only. Nothing on this site constitutes legal advice. Whether specific models meet the systemic risk threshold is determined by the EU AI Office. Information about commercial AI models reflects publicly available information. Always consult qualified legal counsel. Not affiliated with the European Union or any EU institution.