Unacceptable Risk — Prohibited

AI systems in this category are banned outright. They pose risks to fundamental rights, safety, or democratic values that the EU legislature determined cannot be mitigated through compliance measures. These prohibitions took effect on 2 February 2025.

Prohibited AI Practices (Article 5)

Article 5 bans eight practices, including: subliminal or purposefully manipulative techniques that materially distort behaviour and cause significant harm; exploitation of vulnerabilities due to age, disability, or social or economic situation; social scoring; predicting criminal behaviour based solely on profiling or personality traits; untargeted scraping of facial images to build facial recognition databases; emotion recognition in workplaces and schools; biometric categorisation to infer sensitive attributes such as race, political opinions, or sexual orientation; and real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions).

Detailed breakdown of each prohibited practice →

Already in force. Prohibited practices have been illegal since 2 February 2025. Violations carry fines of up to €35M or 7% of global annual turnover, whichever is higher.
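The "whichever is higher" rule means the cap scales with company size. A minimal sketch (the function name and the turnover figures are illustrative, not from the Act):

```python
# Illustrative only: Article 99(3) caps fines for prohibited-practice
# violations at EUR 35 million or 7% of total worldwide annual
# turnover, whichever is HIGHER. This computes the cap, not the fine
# actually imposed, which is set case by case.

def article_99_3_cap(annual_turnover_eur: float) -> float:
    """Maximum possible fine for an Article 5 violation."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A firm with EUR 1bn turnover: 7% (EUR 70M) exceeds the EUR 35M floor.
print(f"{article_99_3_cap(1_000_000_000):,.0f}")
# A firm with EUR 100M turnover: 7% (EUR 7M) is below it, so EUR 35M applies.
print(f"{article_99_3_cap(100_000_000):,.0f}")
```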

High Risk — Strict Obligations

High-risk AI systems are permitted but subject to extensive obligations before they can be placed on the EU market or put into service. The classification is determined by the AI system's intended purpose and the domain in which it operates — not by its technical characteristics alone.

Two Routes to High-Risk Classification

Annex I — Safety Components in Regulated Products

AI systems that function as a safety component of products already regulated under the EU product safety legislation listed in Annex I, where that legislation requires third-party conformity assessment. The AI Act adds obligations on top of the existing product regulation. Sectors include: machinery, medical devices, in vitro diagnostics, lifts, radio equipment, pressure equipment, recreational craft, cableway installations, civil aviation equipment, two/three-wheel vehicles, agricultural tractors, motor vehicles.

Annex III — Specific High-Risk Use Cases

AI systems used in these eight areas are classified as high-risk:

Annex III Area — Examples

1. Biometric identification — remote biometric identification systems, biometric categorisation systems
2. Critical infrastructure — AI managing water, gas, electricity, transport networks
3. Education and vocational training — admission decisions, exam evaluation, student assessment and monitoring
4. Employment and HR — CV screening, recruitment, promotion and termination decisions, task allocation
5. Essential services — credit scoring, insurance risk assessment, access to public benefits
6. Law enforcement — risk assessment for criminal offences, evidence evaluation, crime prediction tools
7. Migration and asylum — risk assessment of applicants, document authenticity checks, border management
8. Administration of justice and democracy — AI assisting in judicial decisions, electoral or democratic influence

High-Risk Obligations (Summary)

Providers of high-risk AI systems must implement:

- A risk management system maintained across the system's lifecycle (Art. 9)
- Data governance and quality criteria for training, validation, and testing data (Art. 10)
- Technical documentation and automatic record-keeping (logging) (Arts. 11–12)
- Transparency and instructions for use for deployers (Art. 13)
- Effective human oversight measures (Art. 14)
- Accuracy, robustness, and cybersecurity appropriate to the intended purpose (Art. 15)
- A quality management system, conformity assessment, CE marking, and registration in the EU database

Most high-risk obligations apply from 2 August 2026 (Annex III) and 2 August 2027 (Annex I products).

Complete high-risk obligations guide →


Limited Risk — Transparency Obligations

AI systems in this category must meet transparency requirements so that people know they are interacting with AI or consuming AI-generated content. No substantive risk management or conformity assessment obligations apply.

Transparency Requirements

- AI systems that interact with people (e.g. chatbots) must disclose that the user is interacting with AI, unless this is obvious from context
- Providers of systems generating synthetic audio, image, video, or text must mark outputs as artificially generated in a machine-readable way
- Deployers of emotion recognition or biometric categorisation systems must inform the people exposed to them
- Deepfakes must be labelled as artificially generated or manipulated


Minimal Risk — No Obligations

The vast majority of AI applications fall into this category. The Act imposes no specific legal obligations on minimal-risk AI. Examples include:

- Spam filters
- AI-enabled video games
- Inventory management and other routine business automation

The Act encourages providers and deployers of minimal-risk AI to adopt voluntary codes of conduct and to adhere to established AI ethics guidelines — but this is not legally mandated.

GPAI models cut across tiers. General-purpose AI models (foundation models) like GPT-4, Claude, and Gemini are subject to their own rules under Chapter V — separate from the four-tier risk framework. See GPAI obligations →
Informational purposes only. Nothing on this site constitutes legal advice. Risk classification is fact-specific and depends on the AI system's intended purpose and deployment context. Always consult qualified legal counsel. Not affiliated with the European Union or any EU institution.