Currently in force. All eight prohibited practices below have been illegal across the EU since 2 February 2025. Any AI system that relies on these practices must be modified or withdrawn from the EU market immediately.

The Eight Prohibited Practices

1. Subliminal Manipulation

AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting a person's behaviour in a manner that causes or is reasonably likely to cause that person or another person significant harm.

What this covers: AI that bypasses conscious awareness to influence decisions, for example by embedding imperceptible audio or visual cues that shape behaviour without the person realising. The harm threshold is key: the distortion must cause or be likely to cause significant harm, not merely be persuasive.

What this does not cover: Standard recommendation systems, personalised advertising, or A/B testing — these operate above the threshold of conscious awareness.

2. Exploitation of Vulnerabilities

AI systems that exploit a vulnerability of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation, with the objective or effect of materially distorting their behaviour in a manner that causes or is reasonably likely to cause significant harm.

What this covers: AI targeting children or the elderly with manipulative techniques that exploit developmental or cognitive limitations. AI exploiting financial desperation or addiction patterns to drive harmful behaviour.

What this does not cover: Age-appropriate content filtering or accessibility features designed to assist vulnerable groups.

3. Social Scoring

AI systems that evaluate or classify natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the resulting social score leads to detrimental treatment that is either unrelated to the context in which the data was originally generated or collected, or unjustified or disproportionate to the social behaviour or its gravity. Unlike earlier drafts, the final Act applies this prohibition to private actors as well as public authorities.

What this covers: Systems, whether government-run or private, that assign trustworthiness scores affecting access to services, benefits, or freedoms based on aggregated behavioural data.

What this does not cover: Credit scoring based on relevant financial data in its proper context (regulated separately under Annex III as high-risk rather than prohibited), proportionate law enforcement risk assessment using relevant data.

4. Real-Time Remote Biometric Identification in Public Spaces

AI systems used for real-time remote biometric identification of natural persons in publicly accessible spaces for law enforcement purposes. Narrow exceptions permit use for: targeted searches for specific victims of abduction, trafficking in human beings, or sexual exploitation, and for missing persons; prevention of a specific, substantial, and imminent threat to life or physical safety, or of a genuine and present or genuine and foreseeable threat of a terrorist attack; and localisation or identification of persons suspected of certain serious criminal offences listed in the Act.

Exception requirements: Each exception requires prior judicial or independent administrative authorisation. In duly justified urgent cases, use may begin without prior authorisation provided authorisation is requested without undue delay, at the latest within 24 hours; if it is refused, use must stop immediately and the data be deleted. Member states must enact national legislation to enable even these exceptions.

What this does not cover: Retrospective biometric identification from stored recordings (subject to different rules), biometric identification in non-public spaces.

5. Untargeted Scraping to Build Facial Recognition Databases

AI systems used to create or expand facial recognition databases through the untargeted scraping of facial images from the internet or from CCTV footage.

What this covers: Services following Clearview AI's model of scraping billions of images to build identification databases without the knowledge or consent of the individuals depicted.

Note: Retrospective use of existing legitimately acquired biometric data by law enforcement may be permitted under the Annex III high-risk framework, subject to strict conditions. This prohibition specifically targets untargeted mass scraping to build new databases.

6. Emotion Recognition in Workplaces and Educational Institutions

AI systems used for the purpose of inferring emotions of natural persons in the context of workplaces and educational institutions. Narrow exceptions exist for medical or safety reasons.

What this covers: AI monitoring employee emotional states through facial expressions, voice tone, or physiological signals for productivity assessment. AI systems monitoring student engagement, attention, or emotional reactions during learning.

What this does not cover: Emotion recognition for medical diagnosis (e.g., depression screening by a healthcare provider), safety-critical applications where detecting extreme emotional states is necessary for operational safety.

7. AI for Predicting Criminal Behaviour Based on Profiling

AI systems that make risk assessments of natural persons to predict the likelihood that a person will commit a criminal offence, based solely on profiling or on assessing personality traits and characteristics. Systems that support a human assessment already grounded in objective, verifiable facts directly linked to criminal activity fall outside the prohibition.

What this covers: "Predictive policing" tools that assign crime likelihood scores to individuals based on demographic data, location, or social associations without specific behavioural indicators.

What this does not cover: Area-level crime prediction (not individual assessment), risk assessment based on documented individual conduct, parole assessment tools that consider specific criminal history.

8. Biometric Categorisation to Infer Sensitive Attributes

AI systems that categorise natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This prohibition does not apply to the labelling or filtering of lawfully acquired biometric datasets for law enforcement purposes.

What this covers: AI that analyses facial features, gait, or other biometric signals to infer political leanings, religious affiliation, or sexual orientation. Systems that sort individuals into categories based on assumed sensitive characteristics derived from biometric data.


Penalties for Violations

Violations of the prohibited practices provisions (Article 5) carry the highest fine tier under the Act:

Violation: Any Article 5 prohibited practice
Maximum fine: €35,000,000 or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher


Informational purposes only. Nothing on this site constitutes legal advice. Descriptions above are informational summaries of Article 5 of Regulation (EU) 2024/1689. The precise scope of each prohibition depends on the exact statutory language and its interpretation by national authorities and courts. Always consult qualified legal counsel. Not affiliated with the European Union or any EU institution.