Side-by-Side Comparison
| Dimension | GDPR | EU AI Act |
|---|---|---|
| Primary subject matter | Processing of personal data | AI systems and AI models |
| Trigger for application | Processing personal data of individuals in the EU | Placing AI on the EU market or using AI in the EU |
| Core obligations | Lawful basis, data minimisation, purpose limitation, data subject rights | Risk management, transparency, human oversight, conformity assessment |
| Enforcement authority | Data Protection Authorities (national) | Market Surveillance Authorities + EU AI Office |
| Individual rights | Access, erasure, portability, objection, automated decision-making rights | No direct individual rights under AI Act itself |
| Private action | Yes — individuals can seek compensation | No — regulatory enforcement only |
| Maximum fine (highest tier) | €20,000,000 or 4% of global annual turnover, whichever is higher | €35,000,000 or 7% of global annual turnover, whichever is higher |
| Effective date | 25 May 2018 | 1 August 2024 (entry into force); obligations phased to 2027 |
| Extraterritorial reach | Yes — applies to processing of data of individuals in the EU, wherever the processing occurs | Yes — applies to AI affecting people in the EU regardless of provider location |
Where They Overlap
AI Systems Processing Personal Data
Any high-risk AI system that processes personal data must comply with both the AI Act and GDPR. The obligations are additive — meeting one does not satisfy the other. For example:
- A credit scoring AI must comply with GDPR (lawful basis, data minimisation, data subject rights under Art. 22) and with the AI Act (risk management, human oversight, conformity assessment as a high-risk system).
- An HR recruitment AI must comply with GDPR's restrictions on automated decision-making and the AI Act's Annex III employment category obligations.
Automated Decision-Making
GDPR Article 22 gives data subjects the right not to be subject to solely automated decisions that produce significant legal or similarly significant effects — with a right to human review and meaningful information about the logic involved.
The AI Act adds human oversight requirements for high-risk AI systems that make or substantially influence such decisions. Both sets of obligations apply where the automated decision involves personal data and falls within a high-risk AI use case.
Training Data and Personal Data
AI models trained on personal data must have a valid GDPR legal basis for that processing. The AI Act's data governance obligations (Article 10 for high-risk AI, Article 53 for GPAI) require documentation of training data and measures to address biases — but do not themselves provide a GDPR legal basis. Both must be addressed independently.
Impact Assessments
GDPR requires a Data Protection Impact Assessment (DPIA) for high-risk personal data processing. The AI Act requires a Fundamental Rights Impact Assessment (FRIA) for certain deployers of high-risk AI. These are separate assessments under separate legal bases — though their findings may be complementary and some organisations choose to conduct them jointly.
Biometric Data
Biometric data is special category personal data under GDPR (Article 9) and requires an explicit legal basis for processing. The AI Act separately prohibits certain uses of biometric AI (real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions, and emotion recognition in workplaces and educational institutions) and classifies others as high-risk. Both bodies of law must be satisfied simultaneously.
Different Enforcement Authorities
GDPR is enforced by national Data Protection Authorities (DPAs) — the ICO in the UK, the CNIL in France, the BfDI in Germany, the DPC in Ireland, and so on.
The AI Act is enforced by national Market Surveillance Authorities (MSAs) — separate bodies in each member state, which member states were required to designate by 2 August 2025. The EU AI Office oversees GPAI at EU level.
DPAs and MSAs are required to cooperate where AI systems involve personal data processing — but they are distinct enforcement bodies with distinct mandates. A company could face parallel investigations from both for the same AI system.
Individual Rights: A Key Difference
Under GDPR, individuals have enforceable rights: access to their data, erasure, portability, the right to object, and the right not to be subject to solely automated decisions. Individuals can complain to DPAs and claim compensation through courts.
The AI Act creates no equivalent individual rights. There is no private right of action under the AI Act — an individual cannot sue a company directly for AI Act violations, nor claim compensation under the Act itself. Enforcement is entirely regulatory.
This may evolve through the EU's AI Liability Directive (proposed but not yet adopted as of April 2026) and through member state product liability law.
Practical Implications
For Companies Already GDPR-Compliant
GDPR compliance does not provide AI Act compliance. Companies must now layer AI Act obligations on top of existing GDPR programmes. Key additional actions:
- Inventory all AI systems to determine risk classification under the AI Act
- For high-risk AI: establish risk management systems, technical documentation, and conformity assessment
- For GPAI models (if applicable): ensure transparency documentation and copyright compliance are in place
- Establish or expand governance structures — an AI compliance function alongside the DPO
- Update vendor contracts to address AI Act obligations (providers must give deployers adequate instructions)
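The inventory and layering steps above can be sketched as a minimal data model. This is an illustrative sketch only: the tier labels, duty strings, and example systems are assumptions for demonstration, not an official taxonomy from either regulation.

```python
from dataclasses import dataclass

# Assumed shorthand labels loosely following the AI Act's structure:
# prohibited, high (Annex III), limited (transparency duties), minimal.
RISK_TIERS = ("prohibited", "high", "limited", "minimal")


@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: str               # one of RISK_TIERS (illustrative labels)
    processes_personal_data: bool

    def obligations(self) -> list[str]:
        """Rough additive view: AI Act duties plus GDPR duties.

        Meeting one regime does not satisfy the other, so the two
        duty sets are simply concatenated.
        """
        duties: list[str] = []
        if self.risk_tier == "high":
            duties += [
                "risk management system",
                "technical documentation",
                "human oversight",
                "conformity assessment",
            ]
        elif self.risk_tier == "limited":
            duties += ["transparency disclosure"]
        if self.processes_personal_data:
            # GDPR applies independently of the AI Act tier.
            duties += ["GDPR lawful basis", "data subject rights"]
        return duties


# Hypothetical inventory entries for illustration.
inventory = [
    AISystem("credit-scorer", "consumer credit scoring", "high", True),
    AISystem("doc-summariser", "internal document summaries", "minimal", False),
]
for system in inventory:
    print(system.name, "->", system.obligations())
```

Note how the credit-scoring entry accumulates both AI Act and GDPR duties, while a minimal-risk system with no personal data accumulates none — mirroring the additive compliance point above.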
DPO and AI Compliance Function
Companies required to appoint a Data Protection Officer under GDPR should consider how the DPO function interacts with AI Act compliance. The EU AI Act requires companies deploying high-risk AI to designate qualified persons for human oversight. Organisations are building AI governance functions alongside (not replacing) their DPO.
The AI Act's Higher Fine Ceiling
The AI Act's maximum fine for prohibited practice violations (€35,000,000 or 7% of global turnover) exceeds GDPR's highest tier (€20,000,000 or 4%). For most other violations, including high-risk obligations under Annex III, the AI Act cap is €15,000,000 or 3% — above GDPR's lower tier (€10,000,000 or 2%) but below its top tier. Both regimes can apply concurrently to the same conduct.
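The fine ceilings in both regimes follow the same greater-of rule for undertakings: the fixed amount or the percentage of worldwide annual turnover, whichever is higher. A minimal sketch of that arithmetic, using a hypothetical €2bn turnover:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Greater-of rule: the fixed cap or pct of worldwide annual
    turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_eur * pct)


turnover = 2_000_000_000  # hypothetical €2bn global annual turnover

gdpr_top = max_fine(turnover, 20_000_000, 0.04)    # GDPR highest tier
ai_act_top = max_fine(turnover, 35_000_000, 0.07)  # AI Act prohibited practices

print(f"GDPR ceiling:   €{gdpr_top:,.0f}")    # €80,000,000 (4% exceeds the €20m floor)
print(f"AI Act ceiling: €{ai_act_top:,.0f}")  # €140,000,000
```

For small companies the fixed amount dominates; for large ones the percentage does, which is why the 7% vs. 4% gap matters most at scale.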