- Financial services: Credit scoring, insurance risk assessment, access to financial products.
- Healthcare: Medical device AI (Annex I), diagnostic support, treatment recommendations.
- Human resources: CV screening, recruitment AI, performance monitoring, promotion decisions.
- Education: Admission decisions, exam evaluation, student monitoring during assessment.
- Law enforcement: Risk assessment, evidence evaluation. Some uses prohibited outright.
- Retail and e-commerce: Recommendations and pricing generally minimal risk; chatbots limited risk.
Financial Services
The financial services sector faces the highest density of high-risk AI use cases of any sector. Annex III explicitly covers credit scoring (area 5: essential services) and life and health insurance risk assessment (also area 5), placing most AI-driven underwriting and lending decision tools in the high-risk category.
High-Risk AI in Financial Services
- Credit scoring and creditworthiness assessment: AI systems that evaluate credit applications or assign credit scores are Annex III high-risk (essential services, area 5).
- Insurance risk assessment: AI systems used for risk assessment or pricing in relation to natural persons in the case of life and health insurance are Annex III high-risk; other insurance lines are not listed in Annex III.
- Access to financial benefits: AI determining eligibility for financial products, public financial benefits, or emergency services is high-risk.
- Fraud detection affecting individuals: AI used purely to detect financial fraud is expressly excluded from the creditworthiness category in Annex III, but where fraud detection systems result in denial of services or freezing of accounts affecting individuals, high-risk classification should be assessed case by case.
Regulatory Layering
Financial services AI faces the most complex regulatory stack: AI Act obligations layer on top of GDPR, plus sector-specific financial regulation (MiFID II, CRR, Solvency II, PSD2, and others). The AI Act does not displace financial sector regulation — all regimes apply concurrently.
The European Banking Authority (EBA), ESMA, and EIOPA have all published AI-related guidance for their sectors. Financial institutions should monitor both AI Act obligations and relevant sectoral guidance from their prudential supervisor.
Healthcare
Healthcare AI faces obligations under both Annex I (AI embedded in medical devices) and potentially Annex III (AI assisting clinical decisions about individuals).
Annex I — Medical Device AI
AI systems that are safety components of medical devices regulated under the Medical Device Regulation (MDR, Regulation (EU) 2017/745) or the In Vitro Diagnostic Medical Devices Regulation (IVDR, Regulation (EU) 2017/746), and that are subject to third-party conformity assessment under those regulations, are classified as Annex I high-risk AI. These systems must comply with both the MDR/IVDR conformity assessment process and the AI Act.
Full AI Act obligations for Annex I medical device AI apply from 2 August 2027.
Clinical Decision Support
AI systems that assist clinicians in diagnosing conditions or selecting treatments for specific patients may fall within Annex III depending on how they are classified and whether they directly influence individual patient decisions. The EU AI Office has published guidance on this boundary.
Intersection with GDPR
Health data is special category personal data under GDPR Article 9. AI processing health data must have both a GDPR legal basis (typically explicit consent or necessity for medical care) and comply with AI Act data governance requirements for training data.
Human Resources
Annex III area 4 (employment, workers management, and access to self-employment) covers a wide range of AI systems used in HR and recruitment — one of the most commercially prevalent high-risk AI categories.
High-Risk AI in HR
- CV screening and candidate ranking: AI systems that score, rank, or filter job applications based on CVs or application materials are Annex III high-risk.
- Interview analysis AI: AI that analyses video or audio of interviews to assess suitability, including tools that detect vocal tone or facial expressions, is high-risk. Emotion recognition in recruitment and workplace settings may also trigger the Article 5(1)(f) prohibition on inferring emotions in the workplace, which carries only narrow medical and safety exceptions.
- Performance monitoring: AI systems continuously monitoring employee performance, productivity, or behaviour are high-risk when they influence HR decisions.
- Promotion and termination: AI that makes or substantively influences promotion, demotion, or termination decisions is high-risk.
- Task allocation: AI systems allocating work based on individual characteristics (e.g., gig economy dispatch algorithms) are high-risk.
Practical Impact
Any company using an AI-powered ATS (Applicant Tracking System) with ranking or scoring features, or an AI-driven performance management platform, is a deployer of Annex III high-risk AI. Deployer obligations apply from 2 August 2026, including implementing human oversight and, for certain public sector deployers, conducting a fundamental rights impact assessment (FRIA).
Education
Annex III area 3 (education and vocational training) covers AI in educational settings where AI influences access to or performance within education.
High-Risk AI in Education
- Admission decisions: AI systems assisting in determining access to educational institutions or programmes are high-risk.
- Exam evaluation: AI systems evaluating student work, assigning grades, or assessing learning outcomes that affect academic progression are high-risk.
- Student monitoring during exams: Proctoring software that monitors students during examinations — including detecting suspicious behaviour or identity verification — is high-risk. Note: emotion recognition during educational activities is also prohibited under Article 5(1)(f), with narrow medical/safety exceptions.
- Educational level assessment: AI that determines the appropriate level of education or vocational training for an individual is high-risk.
Law Enforcement
Law enforcement AI faces both prohibited practice restrictions (Article 5) and Annex III high-risk classification (area 6). This sector has the most complex AI Act landscape.
Prohibited in Law Enforcement Context
- Real-time remote biometric identification in public spaces (narrow exceptions only)
- Facial recognition databases built through untargeted scraping of facial images from the internet or CCTV footage
- AI predicting criminality from profiling alone
High-Risk in Law Enforcement
- Individual risk assessment for criminal offences
- Polygraph and similar reliability testing tools
- AI for evaluating the reliability or relevance of evidence
- Criminal investigative AI that profiles suspects
Law enforcement AI is subject to additional oversight requirements. The EU database for registration of law enforcement high-risk AI is not publicly accessible (unlike most Annex III categories). National legislation must authorise even the permitted uses.
Retail and E-commerce
Most retail AI falls into the minimal risk category and faces no specific AI Act obligations.
Minimal Risk (No Obligations)
- Product recommendation engines
- Dynamic pricing algorithms (general retail)
- Inventory management and demand forecasting AI
- Search ranking within e-commerce platforms
- Visual search tools
Limited Risk (Transparency Only)
- Customer service chatbots: Must disclose that the customer is interacting with an AI system, unless this is obvious from the context (Article 50). No further AI Act obligations.
- AI-generated product descriptions or marketing: May require disclosure under limited risk transparency rules if substantial AI generation is involved.
High Risk — Check Case by Case
- Credit for purchases (BNPL): AI-driven Buy Now Pay Later decisioning is likely Annex III (essential services — creditworthiness).
- Insurance products: Retail insurance powered by AI risk models is high-risk.
- Fraud detection affecting individuals: Where AI results in denial of service, assess whether Annex III applies.
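The three retail tiers above (plus the prohibited tier that applies in other sectors) can be captured in a small lookup for internal screening tooling. This is a hypothetical sketch only: the use-case labels, tier assignments, and `triage` helper are illustrative assumptions, not AI Act terminology, and real classification requires case-by-case legal analysis.

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four-tier risk taxonomy."""
    PROHIBITED = "prohibited"   # Article 5 practices
    HIGH = "high"               # Annex I / Annex III
    LIMITED = "limited"         # Article 50 transparency only
    MINIMAL = "minimal"         # no specific AI Act obligations


# Illustrative mapping of retail AI use cases to risk tiers, following
# the classification sketched in this section. Keys are hypothetical
# internal labels, not legal terms.
RETAIL_TRIAGE = {
    "recommendation_engine": RiskTier.MINIMAL,
    "dynamic_pricing": RiskTier.MINIMAL,
    "inventory_forecasting": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,       # Art. 50 disclosure
    "ai_marketing_copy": RiskTier.LIMITED,
    "bnpl_credit_decisioning": RiskTier.HIGH,   # Annex III creditworthiness
    "insurance_risk_model": RiskTier.HIGH,
}


def triage(use_case: str) -> RiskTier:
    """Return the sketched tier; unknown use cases need legal review."""
    tier = RETAIL_TRIAGE.get(use_case)
    if tier is None:
        raise KeyError(f"{use_case!r} requires case-by-case assessment")
    return tier
```

A lookup like this is only a first-pass filter: anything not in the table, and anything in the high tier, should be escalated to a proper legal classification exercise rather than treated as settled.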