Am I High-Risk?

An AI system is classified as high-risk if it meets either of two criteria:

  1. It is a safety component of a product regulated under specific EU product safety legislation listed in Annex I, or is itself such a regulated product.
  2. Its intended purpose places it in one of the eight areas listed in Annex III.

Intended purpose is key. Classification depends on what the AI system is designed and marketed to do — not on its technical architecture. The same model deployed as a general assistant (minimal risk) and as a credit scoring tool (Annex III — high risk) would face different obligations.

Even if an AI system falls within an Annex III category, providers may be able to demonstrate it does not pose a significant risk — Article 6(3) provides a procedure for this, subject to Commission delegated acts.
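The two-route test above can be sketched as a rough triage helper. This is an illustration only: the field names and area numbers below are simplifications we have chosen for the sketch, and real classification always requires legal analysis of the system's intended purpose and deployment context.

```python
from dataclasses import dataclass
from typing import Optional

# Annex III areas by number (simplified labels, see the table below)
ANNEX_III_AREAS = {
    1: "Biometrics",
    2: "Critical infrastructure",
    3: "Education and vocational training",
    4: "Employment and workers management",
    5: "Essential private and public services",
    6: "Law enforcement",
    7: "Migration, asylum and border control",
    8: "Administration of justice and democratic processes",
}

@dataclass
class AISystem:
    """Simplified description of a system's intended purpose."""
    is_annex_i_safety_component: bool   # safety component of an Annex I product, or is such a product
    annex_iii_area: Optional[int]       # Annex III area number, or None
    art_6_3_exemption_documented: bool  # provider has documented an Article 6(3) derogation

def classify(system: AISystem) -> str:
    """Rough triage only -- not a substitute for legal analysis."""
    if system.is_annex_i_safety_component:
        return "high-risk (Annex I route)"
    if system.annex_iii_area in ANNEX_III_AREAS:
        if system.art_6_3_exemption_documented:
            return "potentially exempt under Article 6(3) -- document and register"
        return "high-risk (Annex III route)"
    return "not high-risk under Article 6 (other provisions of the Act may still apply)"
```

For example, `classify(AISystem(False, 5, False))` follows the Annex III route because credit scoring falls in area 5.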


Annex I — AI in Regulated Products

Annex I lists the EU product safety legislation whose scope is affected by the AI Act. If an AI system is a safety component of — or itself constitutes — a product regulated under these directives and regulations, it is classified as high-risk and must undergo conformity assessment under both the applicable product legislation and the AI Act.

Sector and relevant EU legislation:

  - Machinery and related products: Machinery Regulation (EU) 2023/1230
  - Medical devices: Medical Device Regulation (EU) 2017/745
  - In vitro diagnostic medical devices: IVD Regulation (EU) 2017/746
  - Lifts: Lifts Directive 2014/33/EU
  - Radio equipment: Radio Equipment Directive 2014/53/EU
  - Pressure equipment: Pressure Equipment Directive 2014/68/EU
  - Recreational craft and personal watercraft: Recreational Craft Directive 2013/53/EU
  - Cableway installations: Cableway Installations Regulation (EU) 2016/424
  - Civil aviation (aerodromes and ATM): Regulation (EU) 2018/1139 and related acts
  - Two/three-wheel vehicles and quadricycles: Regulation (EU) No 168/2013
  - Agricultural and forestry vehicles: Regulation (EU) No 167/2013
  - Motor vehicles: Regulation (EU) 2019/2144 and related acts

Full obligations for Annex I product AI apply from 2 August 2027 — one year after the Annex III deadline.


Annex III — Specific High-Risk Use Cases

Areas and examples:

  1. Biometric identification and categorisation: remote biometric identification systems; biometric categorisation systems; emotion recognition systems (biometric verification whose sole purpose is confirming a person's identity is excluded)
  2. Critical infrastructure management and operation: AI managing water supply, gas networks, electricity grids, transport
  3. Education and vocational training: admission and access decisions; evaluating learning outcomes; student monitoring during exams; assessing appropriate level of education
  4. Employment, workers management, and access to self-employment: CV screening and ranking; recruitment interview AI; task allocation and monitoring; promotion and termination decisions
  5. Access to essential private and public services and benefits: credit scoring; insurance risk assessment; public benefit eligibility; emergency services dispatch prioritisation
  6. Law enforcement: individual risk assessment for criminal offences; polygraph systems; evidence evaluation; criminal investigative analysis
  7. Migration, asylum, and border control management: risk assessment of applicants; document authenticity checks; assessment of irregular migration risk; automated examination of applications
  8. Administration of justice and democratic processes: AI assisting in fact-finding, interpretation or application of law; AI influencing electoral or referendum outcomes

Obligations for Annex III systems apply from 2 August 2026.


Provider Obligations

Providers — organisations that develop an AI system and place it on the market or put it into service under their own name or trademark — must meet these requirements before their system can be deployed in the EU.

Risk Management System (Article 9)

A continuous, iterative process encompassing: identification and analysis of known and foreseeable risks; estimation and evaluation of risks; adoption of risk management measures; testing to ensure measures are effective. The system must be updated throughout the AI system's lifecycle.

Data and Data Governance (Article 10)

Training, validation, and testing datasets must meet quality criteria. Data governance practices must cover: the design choices for data collection, possible biases, relevant data gaps and shortcomings, and appropriate data collection processes. Where personal data is processed, it must be handled in accordance with both the GDPR and the AI Act.

Technical Documentation (Article 11 + Annex IV)

Comprehensive technical documentation must be drawn up before the system is placed on the market. Annex IV specifies exactly what this must contain: general description, detailed description of elements and development process, information on monitoring and functioning, validation and testing, cybersecurity measures, and post-market monitoring plan.

Automatic Logging and Record-Keeping (Article 12)

High-risk AI systems must have automatic logging capabilities enabling traceability throughout their lifecycle. Logs must capture events that enable monitoring of the system's operation. Storage periods depend on the applicable law of the member state or relevant EU law.
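Article 12 does not prescribe a log format; it requires that recorded events make the system's operation traceable. The structured-logging sketch below shows one common way to meet that goal. The schema, including every field name, is our assumption, not a requirement of the Act.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.audit")
logging.basicConfig(level=logging.INFO)

def log_event(event_type: str, model_version: str,
              input_ref: str, output_ref: str, **extra) -> dict:
    """Emit one structured, timestamped record per system event.

    Hypothetical schema for illustration -- Article 12 requires
    traceability, not these particular fields.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,    # e.g. "inference", "override", "anomaly"
        "model_version": model_version,
        "input_ref": input_ref,      # a reference, not raw personal data
        "output_ref": output_ref,
        **extra,
    }
    logger.info(json.dumps(record))  # one JSON line per event
    return record
```

Logging references rather than raw inputs keeps the audit trail useful while limiting the personal data retained in logs.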

Transparency to Deployers (Article 13)

Providers must supply deployers with instructions sufficient to enable compliant use. This includes: the provider's identity and contact information; characteristics, capabilities, and limitations; performance metrics and known risks; human oversight measures, including technical measures to help deployers interpret the system's outputs; expected lifetime and maintenance requirements.

Human Oversight (Article 14)

High-risk AI systems must be designed to enable effective human oversight. Overseers must be able to: understand the system's capabilities and limitations; monitor for anomalies; override, interrupt, or stop the system; interpret outputs correctly. The system must not undermine human capacity to oversee it.
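One common way to make the "override, interrupt, or stop" requirement concrete is a stop switch around the inference path. The wrapper below is a minimal sketch of that idea; Article 14 does not mandate any particular API, and the class and method names are ours.

```python
import threading

class OversightWrapper:
    """Wraps a prediction function with a human-controllable stop switch.

    Illustrative sketch only -- the point is that an overseer can
    interrupt the system, not that this is the required design.
    """
    def __init__(self, predict):
        self._predict = predict
        self._halted = threading.Event()
        self.halt_reason = ""

    def halt(self, reason: str = "") -> None:
        """Called by the human overseer to stop further automated outputs."""
        self.halt_reason = reason
        self._halted.set()

    def resume(self) -> None:
        self._halted.clear()

    def run(self, features):
        if self._halted.is_set():
            raise RuntimeError("system halted by human overseer")
        return self._predict(features)
```

In a real deployment the halt state would be shared across workers and every halt/resume action would itself be logged under Article 12.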

Accuracy, Robustness, and Cybersecurity (Article 15)

Systems must achieve appropriate levels of accuracy for their intended purpose throughout their lifecycle. They must be resilient against errors, faults, and inconsistencies. Appropriate technical redundancy and protection against adversarial inputs are required where relevant.

Conformity Assessment (Articles 43–44)

Before placing a high-risk AI system on the market, providers must conduct a conformity assessment. For most Annex III systems, providers can conduct self-assessment. Systems involving biometric identification, or AI systems embedded in products regulated by Annex I legislation, typically require third-party assessment by a notified body.

EU Declaration of Conformity (Article 47)

Providers must draw up an EU declaration of conformity confirming the AI system complies with the Act's requirements. This must be kept up to date and revised whenever the system is substantially modified.

CE Marking (Article 48)

Once conformity assessment is complete and the declaration of conformity issued, the CE marking must be affixed. For AI embedded in Annex I products, this is part of the existing CE marking process for that product.

Registration in EU Database (Article 49)

Before placing on the market, providers must register their high-risk AI system in the EU-wide database established under Article 71. The database is publicly accessible for most high-risk AI systems (except law enforcement and migration use cases).

Post-Market Monitoring (Article 72)

Providers must establish and implement a post-market monitoring plan covering the entire lifecycle of the AI system. Serious incidents must be reported to national market surveillance authorities.
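For internal tracking, a serious-incident record might look like the sketch below. Every field name here is an illustrative assumption; the required content, recipients, and deadlines for reporting come from the Act itself, not from this structure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SeriousIncidentReport:
    """Hypothetical internal record for incident tracking."""
    system_id: str
    description: str
    detected_at: str
    reported_to_authority: bool = False      # flip once the formal report is filed
    corrective_actions: list = field(default_factory=list)

def open_incident(system_id: str, description: str) -> SeriousIncidentReport:
    """Create a new incident record with a UTC detection timestamp."""
    return SeriousIncidentReport(
        system_id=system_id,
        description=description,
        detected_at=datetime.now(timezone.utc).isoformat(),
    )
```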


Deployer Obligations

Deployers — organisations that use high-risk AI systems in the EU — also carry obligations under the Act (Article 26). These include using the system in accordance with its instructions for use, assigning human oversight to competent staff, ensuring input data under their control is relevant, monitoring the system's operation, and retaining automatically generated logs.

Informational purposes only. Nothing on this site constitutes legal advice. High-risk classification and the scope of obligations depend on the specific facts of each AI system and its deployment context. Always consult qualified legal counsel. Not affiliated with the EU.