If you place AI systems on the EU market, or if the outputs of your AI systems are used in the EU, the Act applies regardless of where your company is based. A US company whose EU customers use its AI product is within scope; so is a UK company providing AI software used by EU businesses. The Act's territorial reach mirrors the GDPR's: it follows the affected person rather than the company's location. Non-EU providers of high-risk AI must designate an EU-based authorised representative before placing their systems on the EU market.
The Act uses a phased enforcement schedule, not a single effective date. Key dates:
- 2 February 2025: prohibited practices banned (already in force)
- 2 August 2025: GPAI model rules apply (already in force)
- 2 August 2026: high-risk AI obligations for Annex III use cases
- 2 August 2027: high-risk AI obligations for AI embedded in Annex I products
Full timeline →
The Act does not use a traditional grace period concept. Each date in the enforcement schedule is an enforcement deadline, not a preparation start date. Prohibited practices have already been illegal since February 2025. GPAI rules are already in force. High-risk AI obligations require preparation now for the August 2026 deadline — companies that wait until the deadline to begin work on risk management systems, technical documentation, and conformity assessment are unlikely to be compliant on time.
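For teams tracking these dates programmatically, a minimal sketch of the schedule (the dates come from the Act; the dictionary keys and helper name are our own illustrative shorthand, not official terminology):

```python
from datetime import date

# Enforcement deadlines from the AI Act's phased schedule
# (category names are illustrative shorthand, not official terms)
AI_ACT_DEADLINES = {
    "prohibited_practices": date(2025, 2, 2),   # bans already in force
    "gpai_models": date(2025, 8, 2),            # GPAI rules already in force
    "high_risk_annex_iii": date(2026, 8, 2),    # Annex III use cases
    "high_risk_annex_i": date(2027, 8, 2),      # AI embedded in Annex I products
}

def is_enforceable(category: str, today: date | None = None) -> bool:
    """True if obligations for this category are already enforceable.

    Each entry is an enforcement deadline, not a preparation start date.
    """
    today = today or date.today()
    return today >= AI_ACT_DEADLINES[category]
```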
Article 3(1) defines an AI system as: "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." This is the exact statutory definition. Simple rule-based systems that operate on fully deterministic fixed logic without inference may fall outside this definition — but most modern AI systems, including machine learning models and large language models, fall within scope.
Open-source GPAI providers that release their models under a free and open-source licence, with weights publicly available, have reduced obligations under Article 53: they are exempt from the technical documentation and downstream-provider information requirements of Article 53(1)(a) and (b), though the copyright policy and training-content summary obligations still apply. These exemptions are conditional. Open-source status does not exempt systemic risk models from the full set of additional obligations under Article 55, and high-risk AI systems that happen to be open-source still face full high-risk obligations when placed on the EU market. The open-source exemption covers the GPAI model provider only; downstream deployers building high-risk applications on open-source models have deployer obligations regardless.
Yes, if the AI system falls into a high-risk category. Deployer obligations apply to internal use, not just customer-facing AI. A company using a high-risk AI system for its own HR decisions (e.g., an AI recruitment tool used to hire its own employees) is a deployer of a high-risk AI system and must implement human oversight and retain logs. Under Article 27, deployers that are public bodies, private entities providing public services, or users of the credit-scoring and insurance AI listed in Annex III points 5(b) and (c) must also conduct a Fundamental Rights Impact Assessment.
Yes — OpenAI, Anthropic, and Google DeepMind are subject to GPAI obligations under Chapter V from August 2025. This includes technical documentation, copyright compliance policies, and — for systemic risk models — adversarial testing and incident reporting to the EU AI Office. Whether any specific application or deployment built on these models is high-risk depends on its intended purpose (e.g., a credit scoring tool built on GPT-4 would be Annex III high-risk), not on the underlying model itself. Businesses deploying these models may have their own deployer obligations depending on the use case. See GPAI obligations →
The process by which a high-risk AI system is evaluated for compliance before being placed on the EU market. For most Annex III high-risk AI systems, providers conduct a self-assessment (internal control), documenting their own compliance against the Act's requirements. Biometric systems generally require third-party assessment by an accredited notified body unless the provider has fully applied harmonised standards, and AI embedded in Annex I regulated products (medical devices, machinery, etc.) follows the third-party procedures of the relevant sectoral legislation. The conformity assessment must be completed and an EU declaration of conformity issued before the system is placed on the market and before the CE marking is affixed.
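A deliberately simplified decision sketch of that routing (the flag names are assumptions; Article 43 attaches conditions, such as the harmonised-standards carve-out for biometrics, that this compresses):

```python
def conformity_route(annex_i_product: bool, biometric: bool,
                     annex_iii: bool,
                     harmonised_standards_applied: bool = False) -> str:
    """Rough mapping of a high-risk system to its assessment route.

    Simplified illustration of the Article 43 logic; not a compliance tool.
    """
    if annex_i_product:
        return "third-party assessment under the relevant sectoral legislation"
    if biometric and not harmonised_standards_applied:
        return "third-party assessment by a notified body"
    if annex_iii or biometric:
        return "provider self-assessment (internal control)"
    return "not high-risk: no conformity assessment required"

# Example: an Annex III recruitment tool, no biometrics involved
conformity_route(annex_i_product=False, biometric=False, annex_iii=True)
# -> "provider self-assessment (internal control)"
```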
A body established within the European Commission in early 2024, with GPAI enforcement powers applying from August 2025, responsible for overseeing general-purpose AI models at EU level. The EU AI Office designates GPAI models as presenting systemic risk, develops codes of practice, investigates GPAI providers for violations, and coordinates with national market surveillance authorities. It is separate from national enforcement bodies: national market surveillance authorities handle high-risk AI enforcement within each member state, while the EU AI Office focuses on GPAI models.
No. The AI Act does not replace GDPR — it layers on top of it. GDPR governs personal data processing. The AI Act governs AI systems. Where an AI system processes personal data, both regimes apply simultaneously. Companies must comply with both independently — meeting one does not satisfy the other. The enforcement authorities are also different: Data Protection Authorities enforce GDPR, while Market Surveillance Authorities and the EU AI Office enforce the AI Act. Full AI Act vs GDPR comparison →
No. The AI Act does not create a private right of action. Individuals cannot bring claims directly under the AI Act for compensation, although Article 85 lets any person lodge a complaint with a market surveillance authority. Enforcement is by regulatory authorities only: national market surveillance authorities and the EU AI Office. This contrasts with GDPR, under which individuals can claim compensation for damage resulting from violations. The EU's proposed AI Liability Directive, which would have addressed civil liability for AI harms separately from the AI Act, was marked for withdrawal in the Commission's 2025 work programme.
Several categories are excluded from the Act's scope: AI developed or used exclusively for military, defence, or national security purposes; AI used for the sole purpose of scientific research and development; AI used by individuals in purely personal, non-professional activities; and free and open-source AI, unless it is placed on the market as a prohibited, high-risk, or transparency-covered system. The Act applies to AI used in the EU; AI that never interacts with EU persons or markets is outside scope. Note that the exclusions are narrow and condition-specific: rely on the text of Article 2 and qualified legal counsel, not general summaries.
Fines are calculated as the higher of a fixed euro amount or a percentage of global annual turnover (not just EU revenue). The three tiers: up to €35M or 7% of global turnover for prohibited-practice violations; up to €15M or 3% for most other obligation violations; up to €7.5M or 1% for supplying incorrect information to authorities. National authorities weigh factors including severity, duration, and cooperation when setting actual fine amounts, and for SMEs and start-ups the lower of the two amounts in each tier applies as the cap. Full penalty structure →
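The "higher of" mechanics are easy to get wrong, so here is a minimal arithmetic sketch (the function name and the example turnover are our own illustrative choices):

```python
def fine_cap_eur(fixed_cap: int, turnover_pct: float,
                 global_turnover: int, is_sme: bool = False) -> float:
    """Upper bound of an AI Act fine for one penalty tier: the higher of
    the fixed amount and the turnover percentage, or the lower of the two
    for SMEs and start-ups."""
    pct_based = turnover_pct * global_turnover
    return min(fixed_cap, pct_based) if is_sme else max(fixed_cap, pct_based)

# Prohibited-practice tier, firm with EUR 2B global turnover (illustrative):
fine_cap_eur(35_000_000, 0.07, 2_000_000_000)        # 140,000,000 -- the 7% figure governs
fine_cap_eur(35_000_000, 0.07, 2_000_000_000, True)  # 35,000,000 -- SME cap is the lower amount
```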