🏦 Financial Services (High Risk): Credit scoring, insurance risk assessment, access to financial products.
🏥 Healthcare (High Risk): Medical device AI (Annex I), diagnostic support, treatment recommendations.
👥 Human Resources (High Risk): CV screening, recruitment AI, performance monitoring, promotion decisions.
🎓 Education (High Risk): Admission decisions, exam evaluation, student monitoring during assessment.
⚖️ Law Enforcement (High Risk): Risk assessment, evidence evaluation. Some uses prohibited outright.
🛒 Retail / E-commerce (Mostly Minimal): Recommendations and pricing are generally minimal risk; chatbots are limited risk.


Financial Services

The financial services sector has the highest density of high-risk AI use cases of any industry. Annex III explicitly covers credit scoring and insurance risk assessment (area 5: access to essential services), placing most AI-driven underwriting and lending decision tools in the high-risk category.

High-Risk AI in Financial Services

Under Annex III area 5, high-risk uses include AI systems that evaluate creditworthiness or establish credit scores (except systems used to detect financial fraud), and AI used for risk assessment and pricing in life and health insurance.

Regulatory Layering

Financial services AI faces the most complex regulatory stack: AI Act obligations layer on top of GDPR, plus sector-specific financial regulation (MiFID II, CRR, Solvency II, PSD2, and others). The AI Act does not displace financial sector regulation — all regimes apply concurrently.

The European Banking Authority (EBA), ESMA, and EIOPA have all published AI-related guidance for their sectors. Financial institutions should monitor both AI Act obligations and relevant sectoral guidance from their prudential supervisor.


Healthcare

Healthcare AI faces obligations under both Annex I (AI embedded in medical devices) and potentially Annex III (AI assisting clinical decisions about individuals).

Annex I — Medical Device AI

AI systems that are safety components of medical devices regulated under the Medical Device Regulation (MDR, Regulation (EU) 2017/745) or the IVD Regulation (Regulation (EU) 2017/746) are classified as Annex I high-risk AI. These systems must comply with both the MDR/IVD conformity assessment process and the AI Act.

Full AI Act obligations for Annex I medical device AI apply from 2 August 2027.

Clinical Decision Support

AI systems that assist clinicians in diagnosing conditions or selecting treatments for specific patients may fall within Annex III depending on how they are classified and whether they directly influence individual patient decisions. The European Commission is tasked with issuing guidelines on the practical application of the high-risk classification rules (Article 6(5)), which should help clarify this boundary.

Intersection with GDPR

Health data is special category personal data under GDPR Article 9. AI systems processing health data must both rest on a GDPR legal basis (typically explicit consent or necessity for the provision of medical care) and comply with the AI Act's data governance requirements for training data.


Human Resources

Annex III area 4 (employment, workers management, and access to self-employment) covers a wide range of AI systems used in HR and recruitment — one of the most commercially prevalent high-risk AI categories.

High-Risk AI in HR

Annex III area 4 captures AI used for recruitment and selection (placing targeted job advertisements, analysing and filtering applications, evaluating candidates) and AI used for decisions affecting work-related relationships: promotion and termination, task allocation based on behaviour or personal traits, and monitoring or evaluating performance and behaviour.

Practical Impact

Any company using an AI-powered applicant tracking system (ATS) with ranking or scoring features, or an AI-driven performance management platform, is a deployer of Annex III high-risk AI. Deployer obligations apply from 2 August 2026, including implementing human oversight and, for certain public sector deployers, a fundamental rights impact assessment (FRIA).
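A compliance team's first pass over these deployer duties can be tracked as a simple checklist. The following is a minimal Python sketch, assuming a paraphrased and deliberately non-exhaustive list of core Article 26 deployer obligations; it is an illustrative triage aid, not a compliance determination.

```python
from dataclasses import dataclass, field

# Core deployer duties for Annex III high-risk AI, paraphrasing Article 26.
# Illustrative and non-exhaustive -- confirm the full scope with counsel.
ARTICLE_26_DUTIES = [
    "assign human oversight to trained, competent staff",
    "ensure input data is relevant to the intended purpose",
    "monitor operation and report serious incidents",
    "retain automatically generated logs for at least six months",
    "inform workers' representatives before workplace use",
]

@dataclass
class DeployerChecklist:
    """Tracks which deployer duties have been addressed for one AI system."""
    system_name: str
    completed: set = field(default_factory=set)

    def complete(self, duty: str) -> None:
        """Mark a known duty as addressed; reject unknown duty strings."""
        if duty not in ARTICLE_26_DUTIES:
            raise ValueError(f"unknown duty: {duty!r}")
        self.completed.add(duty)

    def outstanding(self) -> list:
        """Duties not yet addressed, in the original order."""
        return [d for d in ARTICLE_26_DUTIES if d not in self.completed]
```

Defaulting `outstanding()` to the full duty list means a newly inventoried system starts with everything flagged, which matches how a deployer audit typically proceeds.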


Education

Annex III area 3 (education and vocational training) covers AI in educational settings where AI influences access to or performance within education.

High-Risk AI in Education

Annex III area 3 covers AI systems that determine access or admission to educational institutions, evaluate learning outcomes, assess the appropriate level of education an individual will receive, and monitor or detect prohibited student behaviour during tests.


Law Enforcement

Law enforcement AI faces both prohibited practice restrictions (Article 5) and Annex III high-risk classification (area 6). This sector has the most complex AI Act landscape.

Prohibited in Law Enforcement Context

Article 5 prohibits, among other practices: predicting the risk of a person committing a criminal offence based solely on profiling or personality traits; untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases; and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject only to narrowly defined exceptions.

High-Risk in Law Enforcement

Annex III area 6 covers, among other uses: AI assessing the risk of a person becoming a victim of crime or of offending or re-offending; polygraphs and similar tools; AI evaluating the reliability of evidence; and profiling in the course of detecting, investigating, or prosecuting criminal offences.

Law enforcement AI is subject to additional oversight requirements. The EU database for registration of law enforcement high-risk AI is not publicly accessible (unlike most Annex III categories). National legislation must authorise even the permitted uses.


Retail and E-commerce

Most retail AI falls into the minimal risk category and faces no specific AI Act obligations.

Minimal Risk (No Obligations)

Product recommendation engines, dynamic pricing, demand forecasting, and search ranking generally fall outside the AI Act's specific obligation tiers.

Limited Risk (Transparency Only)

Customer-facing chatbots and virtual assistants must make clear that the customer is interacting with an AI system (Article 50).

High Risk — Check Case by Case

Some retail deployments cross into high-risk territory: for example, creditworthiness assessment for consumer financing at checkout falls under Annex III area 5, and emotion recognition of customers falls under Annex III area 1. Classify each system by its intended purpose.
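The three-tier breakdown above lends itself to a first-pass triage of a retail AI inventory. Below is an illustrative Python sketch; the use-case names are hypothetical examples and the tier assignments follow the categories described in this section, not a legal determination.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"   # no specific AI Act obligations
    LIMITED = "limited"   # transparency duties (Article 50)
    HIGH = "high"         # Annex III obligations; needs case-by-case analysis

# Illustrative mapping of common retail AI use cases to the tiers described
# above. Real classification turns on each system's intended purpose and
# should be confirmed by qualified legal counsel.
RETAIL_USE_CASES = {
    "product_recommendations": RiskTier.MINIMAL,
    "dynamic_pricing": RiskTier.MINIMAL,
    "demand_forecasting": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,       # must disclose AI interaction
    "checkout_credit_scoring": RiskTier.HIGH,   # Annex III area 5
}

def triage(use_case: str) -> RiskTier:
    """First-pass triage; unknown use cases default to HIGH for manual review."""
    return RETAIL_USE_CASES.get(use_case, RiskTier.HIGH)
```

Defaulting unknown systems to the high-risk tier is a deliberately conservative choice: anything not yet mapped gets escalated for human review rather than silently treated as minimal risk.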

Informational purposes only. Nothing on this site constitutes legal advice. Sector-specific application of the AI Act depends on the intended purpose of each AI system and the specific regulatory context of each industry. Always consult qualified legal counsel. Not affiliated with the European Union or any EU institution.