Unacceptable Risk — Prohibited
AI systems in this category are banned outright. They pose risks to fundamental rights, safety, or democratic values that the EU legislature determined cannot be mitigated through compliance measures. These prohibitions took effect on 2 February 2025.
Prohibited AI Practices (Article 5)
- Subliminal manipulation: AI that deploys subliminal techniques beyond a person's consciousness to materially distort behaviour in a way that causes or is likely to cause harm.
- Exploitation of vulnerabilities: AI that exploits vulnerabilities of specific groups (age, disability, social or economic situation) to materially distort behaviour causing harm.
- Social scoring: AI systems that evaluate or classify people based on social behaviour or personal characteristics, leading to detrimental or unfavourable treatment that is unrelated to the context in which the data was collected, or that is disproportionate to the behaviour. Unlike earlier drafts, the final Act applies this prohibition to private actors as well as public authorities.
- Real-time biometric identification in public spaces: AI systems used by law enforcement for real-time remote biometric identification in publicly accessible spaces. Narrow exceptions exist, each requiring prior judicial or independent administrative authorisation: targeted searches for victims of abduction or trafficking and missing persons, preventing a specific and imminent terrorist threat or threat to life, and locating or identifying suspects of serious crimes.
- Retrospective biometric databases: AI used to create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV.
- Emotion recognition (workplace and education): AI systems that infer emotions of natural persons in workplace or educational settings, with narrow exceptions for medical or safety reasons.
- Predicting criminality from profiling: AI that makes risk assessments of natural persons to predict the likelihood of committing offences based solely on profiling or personality trait assessment.
- Biometric categorisation for sensitive attributes: AI systems that categorise individuals based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
Detailed breakdown of each prohibited practice →
High Risk — Strict Obligations
High-risk AI systems are permitted but subject to extensive obligations before they can be placed on the EU market or put into service. The classification is determined by the AI system's intended purpose and the domain in which it operates — not by its technical characteristics alone.
Two Routes to High-Risk Classification
Annex I — Safety Components in Regulated Products
AI systems that function as a safety component of products already regulated under specific EU product safety legislation. The AI Act adds obligations on top of existing product regulation. Sectors include: machinery, medical devices, in vitro diagnostics, lifts, radio equipment, pressure equipment, recreational craft, cableway installations, civil aviation equipment, two/three-wheel vehicles, agricultural tractors, motor vehicles.
Annex III — Specific High-Risk Use Cases
AI systems used in these eight areas are classified as high-risk (a pre-screening sketch follows the table):
| Annex III Area | Examples |
|---|---|
| 1. Biometrics | Remote biometric identification, biometric categorisation, emotion recognition systems |
| 2. Critical infrastructure | AI managing water, gas, electricity, transport networks |
| 3. Education and vocational training | Admission decisions, exam evaluation, student assessment and monitoring |
| 4. Employment and HR | CV screening, recruitment, promotion and termination decisions, task allocation |
| 5. Essential services | Credit scoring, insurance risk assessment, access to public benefits |
| 6. Law enforcement | Risk assessment for criminal offences, evidence evaluation, crime prediction tools |
| 7. Migration and asylum | Risk assessment of applicants, document authenticity checks, border management |
| 8. Administration of justice and democracy | AI assisting judicial decisions, systems intended to influence election outcomes or voting behaviour |
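For teams triaging a portfolio of systems, the two-route test can be encoded as a simple pre-screening check. The sketch below is illustrative only: the class, area names, and helper function are invented for this example, it omits the Article 6(3) carve-outs for narrow procedural tasks, and any real classification needs legal review.

```python
from dataclasses import dataclass

# Annex III areas, abbreviated. The legal text, not this list, is authoritative.
ANNEX_III_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_asylum",
    "justice_democracy",
}

@dataclass
class AISystem:
    name: str
    intended_purpose: str
    annex_iii_area: str | None = None       # one of ANNEX_III_AREAS, if any
    is_safety_component: bool = False       # safety component of an Annex I product
    annex_i_legislation: str | None = None  # e.g. "medical devices"

def is_high_risk(system: AISystem) -> tuple[bool, str]:
    """Rough pre-screen of the Article 6 two-route test; not legal advice."""
    # Route 1 (Annex I): safety component of a product under listed EU legislation
    if system.is_safety_component and system.annex_i_legislation:
        return True, f"Annex I route ({system.annex_i_legislation})"
    # Route 2 (Annex III): intended purpose falls in a listed use-case area
    if system.annex_iii_area in ANNEX_III_AREAS:
        return True, f"Annex III route ({system.annex_iii_area})"
    return False, "not high-risk on either route"

cv_screener = AISystem("cv-screener", "rank job applicants", annex_iii_area="employment")
print(is_high_risk(cv_screener))  # (True, 'Annex III route (employment)')
```

Note that the classification keys on intended purpose, mirroring the Act's own approach: the same underlying model could be minimal-risk in one deployment and high-risk in another.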
High-Risk Obligations (Summary)
Providers of high-risk AI systems must implement:
- Risk management system (Art. 9)
- Data governance and management (Art. 10)
- Technical documentation (Art. 11)
- Automatic logging / record-keeping (Art. 12; see the sketch below)
- Transparency information for deployers (Art. 13)
- Human oversight design (Art. 14)
- Accuracy, robustness, and cybersecurity requirements (Art. 15)
- Conformity assessment (Arts. 43–44)
- EU database registration (Art. 49)
Most high-risk obligations apply from 2 August 2026 (Annex III) and 2 August 2027 (Annex I products).
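Article 12's record-keeping duty is the most directly implementable item on the list above. As a purely illustrative sketch (the event schema, file name, and field names are assumptions, not prescribed by the Act or any harmonised standard), automatic logging could start as an append-only, timestamped event log:

```python
import json
import time
from pathlib import Path

LOG_PATH = Path("hrais_events.jsonl")  # illustrative location, not mandated

def log_event(system_id: str, event_type: str, detail: dict) -> None:
    """Append one timestamped record to an append-only JSONL event log.

    Article 12 asks high-risk systems to record events automatically over
    their lifetime; the fields below are an illustrative minimum, not a
    prescribed schema.
    """
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "inference", "override", "anomaly"
        "detail": detail,
    }
    with LOG_PATH.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")

# e.g. a human reviewer overriding a model decision (Art. 14 oversight in action)
log_event(
    system_id="credit-scorer-v2",
    event_type="override",
    detail={"case": "A-1043", "model_decision": "deny", "human_decision": "approve"},
)
```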
Complete high-risk obligations guide →
Limited Risk — Transparency Obligations
AI systems in this category must meet transparency requirements so that people know they are interacting with AI or consuming AI-generated content. No substantive risk management or conformity assessment obligations apply.
Transparency Requirements
- Chatbots and virtual assistants: Must inform users that they are interacting with an AI system, unless this is obvious from context to a reasonably well-informed, observant person. Providers must design systems so that this information is presented clearly and prominently.
- Deepfakes: AI-generated or AI-manipulated image, audio, or video content that resembles existing persons, places, entities, or events and could falsely appear authentic must be disclosed as artificially generated or manipulated. For evidently artistic, creative, or satirical works, the obligation is limited to disclosure in a manner that does not hamper the work.
- Synthetic content marking: Providers of AI systems that generate synthetic audio, image, video, or text must ensure outputs are marked in a machine-readable format as artificially generated or manipulated (a marking sketch follows this list).
- Emotion recognition / biometric categorisation: Where not prohibited outright, deployers of these systems must inform the natural persons exposed to them of the system's operation.
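Neither the disclosure wording nor the marking format is fixed by the Act; technical standards are still being developed. The sketch below shows one hypothetical shape for the first and third obligations above: a first-turn chatbot notice and a JSON sidecar marker standing in for provenance metadata (e.g. C2PA-style credentials). All names are illustrative, not from the Act or any official tooling.

```python
import json

AI_DISCLOSURE = "You are chatting with an AI system."  # illustrative wording

def wrap_chat_reply(reply_text: str, first_turn: bool) -> dict:
    """Attach a user-facing disclosure to a chatbot reply (Art. 50(1) sketch).

    Shown on the first turn here; the Act requires the disclosure unless the
    AI nature is already obvious from context.
    """
    payload = {"reply": reply_text}
    if first_turn:
        payload["notice"] = AI_DISCLOSURE
    return payload

def mark_synthetic(content: bytes, generator: str) -> dict:
    """Bundle content with a machine-readable 'artificially generated' marker.

    Art. 50(2) requires machine-readable marking but fixes no format; this
    JSON sidecar is a stand-in for whatever standard a provider actually
    adopts.
    """
    sidecar = {"ai_generated": True, "generator": generator}
    return {"content": content, "marker": json.dumps(sidecar)}

print(wrap_chat_reply("Hello! How can I help?", first_turn=True))
```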
Minimal Risk — No Obligations
The vast majority of AI applications fall into this category. The Act imposes no specific legal obligations on minimal-risk AI. Examples include:
- AI-enabled spam filters
- Video game AI characters
- AI-powered content recommendation systems (general e-commerce)
- Inventory management AI
- AI tools for creative content generation (where no deepfake disclosure is required)
- Customer service chatbots that fall outside the transparency rules above (e.g., where the AI nature is obvious from context)
The Act encourages providers and deployers of minimal-risk AI to adopt voluntary codes of conduct and to adhere to established AI ethics guidelines — but this is not legally mandated.