AI System
Defined in Article 3(1): "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

This definition is deliberately broad. It encompasses large language models, computer vision systems, recommendation engines, and decision-support tools. Simple rule-based systems that operate on fixed, deterministic logic without inference may fall outside this definition.
General-Purpose AI (GPAI) Model
Article 3(63): An AI model, including those trained on large amounts of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. This term does not cover AI models used for research, development, or prototyping activities before they are placed on the market.
GPAI System
Article 3(66): An AI system that is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.
Provider
Article 3(3): A natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places that system or model on the market or puts the system into service under its own name or trademark, whether for payment or free of charge.
Deployer
Article 3(4): A natural or legal person, public authority, agency, or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
Operator
Article 3(8): A provider, product manufacturer, deployer, authorised representative, importer, or distributor. A collective term used in provisions that apply across these roles.
Importer
Article 3(6): A natural or legal person located or established in the EU that places on the EU market an AI system bearing the name or trademark of a natural or legal person established outside the EU.
Distributor
Article 3(7): A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market. A distributor that modifies a system or its intended purpose assumes provider obligations (Article 25).
Authorised Representative
Article 3(5): A natural or legal person located or established in the EU that has received and accepted a written mandate from a provider located or established outside the EU to act on the provider's behalf for the purposes of AI Act compliance. Providers located outside the EU that place high-risk AI on the EU market must designate an authorised representative (Article 22).
High-Risk AI System
An AI system classified as high-risk under Article 6. This includes: (a) AI systems that are safety components of regulated products under Annex I legislation, or are themselves such regulated products; and (b) AI systems with intended purposes listed in Annex III. The precise classification depends on the system's intended purpose — not its technical architecture.
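The two routes to high-risk classification in Article 6 can be sketched as a simple decision function. This is an illustrative model only, not legal logic: the boolean attributes stand in for a legal analysis of the system's intended purpose, and the Article 6(3) derogation for Annex III systems that pose no significant risk of harm is omitted.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Illustrative attributes; in practice, each is the outcome of a legal
    # analysis of the system's intended purpose, not a simple flag.
    is_annex_i_safety_component: bool   # Art. 6(1): safety component of a
                                        # regulated product, or is one itself
    intended_purpose_in_annex_iii: bool # Art. 6(2): Annex III use case

def is_high_risk(system: AISystem) -> bool:
    """Sketch of the two Article 6 routes to high-risk classification.
    Omits the Article 6(3) filter for Annex III systems that pose no
    significant risk of harm."""
    return (system.is_annex_i_safety_component
            or system.intended_purpose_in_annex_iii)
```

Note that the same underlying model can land on either side of this test depending on the intended purpose declared by the provider, which is why the Act anchors classification to Article 3(12) rather than to architecture.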
Systemic Risk
Article 3(65): A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain. A GPAI model is presumed to have high-impact capabilities, and hence to present systemic risk, when the cumulative compute used for its training exceeds 10²⁵ floating-point operations (FLOP) (Article 51(2)).
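The compute threshold lends itself to a back-of-the-envelope check. The sketch below uses the common 6·N·D approximation for training compute (roughly 6 FLOP per parameter per training token), which is a heuristic from the scaling-law literature, not part of the Act; the parameter and token counts are illustrative.

```python
# Back-of-the-envelope check against the Article 51(2) presumption threshold.
# The 6*N*D approximation is a heuristic assumption, not taken from the Act.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Rough training compute: ~6 FLOP per parameter per training token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the 10^25 FLOP threshold."""
    return estimate_training_flop(n_params, n_tokens) > SYSTEMIC_RISK_FLOP_THRESHOLD

# Illustrative: a 70B-parameter model trained on 15T tokens lands at
# about 6.3e24 FLOP, below the presumption threshold.
print(presumed_systemic_risk(70e9, 15e12))
```

The presumption is rebuttable, and the EU AI Office may also designate a model as presenting systemic risk on other grounds, so this arithmetic is a screening heuristic at most.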
Conformity Assessment
Article 3(20): The process of verifying whether the requirements of the Act relating to a high-risk AI system have been fulfilled. For most Annex III high-risk systems, self-assessment is permitted. For certain systems — including those involving biometric identification — or AI in Annex I regulated products, third-party assessment by a notified body is required.
Notified Body
A conformity assessment body designated by a member state under the Act to carry out third-party conformity assessments. Notified bodies must be accredited and must demonstrate competence, independence, and impartiality. Their designation is notified to the European Commission.
CE Marking
Conformité Européenne marking affixed to high-risk AI systems (and AI-embedded regulated products) indicating that the product complies with applicable EU requirements including the AI Act. CE marking on an AI system signals completion of conformity assessment and issuance of an EU declaration of conformity. It is required before high-risk AI can be placed on the EU market.
EU Declaration of Conformity
Article 47: A statement drawn up by the provider of a high-risk AI system confirming that the system complies with the requirements set out in the Act. The declaration must be kept up to date and revised when the system undergoes substantial modification.
Serious Incident
Article 3(49): An incident or malfunctioning of an AI system that directly or indirectly leads to: the death of a person or serious harm to a person's health; a serious and irreversible disruption of the management or operation of critical infrastructure; infringement of obligations under EU law intended to protect fundamental rights; or serious harm to property or the environment. Providers of high-risk AI must report serious incidents to market surveillance authorities (Article 73); deployers that identify a serious incident must inform the provider.
Intended Purpose
Article 3(12): The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in instructions for use, promotional and sales materials. Intended purpose determines risk classification — a general-purpose chatbot has a different intended purpose than a medical diagnosis support tool, even if built on the same underlying model.
Reasonably Foreseeable Misuse
Article 3(13): The use of an AI system in a way that is not in accordance with its intended purpose but which may result from reasonably foreseeable human behaviour or interaction with other systems. Providers must account for foreseeable misuse in risk management.
EU AI Office
The body established within the European Commission in 2024, responsible for overseeing general-purpose AI models at EU level; its rules for GPAI providers apply from August 2025. The EU AI Office can investigate GPAI providers, designate models as presenting systemic risk, develop codes of practice, and coordinate with national market surveillance authorities.
European AI Board
An advisory body established under the Act, composed of one representative per member state; the European Data Protection Supervisor participates as an observer. The Board coordinates implementation of the Act across member states, issues guidance, and assists the Commission and member states on AI governance matters.
Market Surveillance Authority
The national authority in each member state designated to act as the AI Act enforcement body for high-risk AI systems within that member state's jurisdiction. Member states were required to designate these authorities by August 2025. They conduct investigations, request information, and impose fines for violations of the Act by providers and deployers.
Fundamental Rights Impact Assessment (FRIA)
An assessment required under Article 27 for certain deployers of high-risk AI systems: bodies governed by public law, private entities providing public services, and deployers of specific Annex III systems such as creditworthiness assessment and life and health insurance risk pricing. It must assess the impact of the system on the fundamental rights of the persons affected, including non-discrimination, privacy, and data protection. The results must be notified to the market surveillance authority.
Post-Market Monitoring
Continuous monitoring of the performance and safety of a high-risk AI system throughout its operational lifetime after market placement. Providers must establish a post-market monitoring plan (Article 72) and report serious incidents discovered through this monitoring.
Placing on the Market
Article 3(9): The first making available of an AI system or a general-purpose AI model on the EU market. This is the trigger event for most provider obligations — technical documentation must be complete and conformity assessment passed before this occurs.
Putting into Service
Article 3(11): The supply of an AI system for first use, directly to the deployer or for the provider's own use, in the EU, for its intended purpose. Provider obligations apply equally to AI put into service as to AI placed on the market.
Informational purposes only. Definitions above are informational summaries based on Article 3 and other provisions of Regulation (EU) 2024/1689. Direct quotations are from the official text of the Act. Precise legal interpretation of definitions is a matter for qualified legal counsel and, ultimately, national courts and the Court of Justice of the EU. Not affiliated with the European Union or any EU institution.