EU AI Act Glossary
Key terms and definitions from the EU Artificial Intelligence Act (Regulation (EU) 2024/1689). Understand the language regulators use so you can speak it fluently during audits and assessments.
A
- AI Literacy
- Skills, knowledge, and understanding that allow providers, deployers, and affected persons to make informed decisions about the deployment and use of AI systems and to gain awareness of the opportunities and risks of AI (Article 3(56)). Providers and deployers must ensure a sufficient level of AI literacy among their staff (Article 4).
- AI Office
- The body established within the European Commission to supervise GPAI model providers, coordinate enforcement across Member States, develop guidelines, and support the implementation of the AI Act.
- AI Pact
- A voluntary commitment initiative launched by the European Commission in 2024 whereby organisations pledge to start applying AI Act principles before the legal deadlines take effect. Signatories publish commitments and progress publicly.
- AI System
- A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (Article 3(1)).
- Annex III
- The annex listing the areas and use cases of high-risk AI systems, including biometric identification, critical infrastructure management, education and vocational training, employment, essential private and public services, law enforcement, migration and border control, and administration of justice.
C
- CE Marking
- A marking affixed to a high-risk AI system to indicate conformity with the AI Act and other applicable Union harmonisation legislation. It must be affixed before the system is placed on the market (Article 48).
- Code of Practice
- Voluntary codes developed by industry and other stakeholders, facilitated by the AI Office, that set out detailed rules by which GPAI model providers can demonstrate compliance with their obligations under Articles 53 and 55 (Article 56). Providers may rely on such a code to demonstrate compliance until a harmonised standard covering the same obligations is published.
- Conformity Assessment
- The process of verifying whether a high-risk AI system meets the requirements set out in Chapter III, Section 2 of the AI Act. Depending on the system classification, this may be done through internal controls or by a notified body (Articles 40-49).
D
- Data Governance
- Requirements for training, validation, and testing data sets used for high-risk AI systems, including measures for data quality, relevance, representativeness, error correction, and bias detection (Article 10).
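Article 10's representativeness and bias-detection requirements lend themselves to automated data-set checks. Below is a minimal Python sketch; the function name, the `region` attribute, and the thresholds are illustrative assumptions, not values taken from the Act:

```python
from collections import Counter

def representativeness_report(samples: list, attribute: str,
                              min_share: float = 0.05) -> dict:
    """Flag groups whose share of the data set falls below a chosen
    threshold. The attribute and threshold are illustrative; Article 10
    does not prescribe specific values."""
    counts = Counter(sample[attribute] for sample in samples)
    total = sum(counts.values())
    return {
        group: {"share": n / total,
                "under_represented": n / total < min_share}
        for group, n in counts.items()
    }

# Toy example: a training set heavily skewed toward one region.
data = [{"region": "EU"}] * 90 + [{"region": "non-EU"}] * 10
print(representativeness_report(data, "region", min_share=0.2))
```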
- Deployer
- A natural or legal person that uses an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity (Article 3(4)).
E
- EU Database for High-Risk AI Systems
- A publicly accessible EU-wide database maintained by the Commission in which providers must register their high-risk AI systems, and certain public-authority deployers must register their use of such systems, before placing them on the market or putting them into service (Articles 49 and 71).
- EU Declaration of Conformity
- A document drawn up by the provider stating that a high-risk AI system fulfils the requirements of the AI Act. It must be kept for 10 years after the system is placed on the market (Article 47).
- European AI Board
- An advisory body composed of one representative per Member State, with the European Data Protection Supervisor participating as an observer and the AI Office attending without voting rights. It provides guidance, facilitates coordination among national authorities, and advises the Commission on AI Act implementation (Articles 65 and 66).
F
- Fundamental Rights Impact Assessment (FRIA)
- An assessment that deployers of high-risk AI systems used in certain contexts (e.g., public services, insurance, creditworthiness) must perform before putting the system into use, evaluating the impact on fundamental rights of affected persons (Article 27).
G
- General-Purpose AI (GPAI) Model
- An AI model trained with a large amount of data using self-supervision at scale, that displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications (Article 3(63)).
H
- High-Risk AI System
- An AI system that is either a safety component of a product covered by EU harmonisation legislation (Annex I) requiring third-party conformity assessment, or falls within one of the use cases listed in Annex III, such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or administration of justice.
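The two-pronged classification in Article 6 reads naturally as a decision rule. A simplified sketch, assuming a pre-compiled set of Annex III areas (the area labels and function name are illustrative, and the Article 6(3) derogation for systems posing no significant risk is omitted):

```python
from typing import Optional

# Abbreviated Annex III area labels (illustrative; see the Act for the full list).
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def is_high_risk(annex_i_safety_component: bool,
                 third_party_assessment_required: bool,
                 annex_iii_area: Optional[str] = None) -> bool:
    """Simplified Article 6 test. Prong one: the system is a safety
    component of an Annex I product subject to third-party conformity
    assessment. Prong two: the system falls under an Annex III use case.
    The Article 6(3) filter is deliberately left out."""
    prong_one = annex_i_safety_component and third_party_assessment_required
    prong_two = annex_iii_area in ANNEX_III_AREAS
    return prong_one or prong_two

print(is_high_risk(False, False, "employment"))  # True
```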
- Human Oversight
- Measures designed to enable natural persons to effectively oversee the functioning of a high-risk AI system, understand its capabilities and limitations, and intervene or interrupt the system when necessary (Article 14).
I
- Intended Purpose
- The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the instructions for use, promotional materials, technical documentation, and the EU declaration of conformity (Article 3(12)).
N
- Notified Body
- A conformity assessment body designated under the AI Act by a Member State's notifying authority to carry out third-party conformity assessments of high-risk AI systems where these are required (Article 43). Notified bodies must meet the requirements on independence, competence, and impartiality set out in Article 31.
P
- Post-Market Monitoring
- A system established by the provider to actively and systematically collect, document, and analyse data on the performance of high-risk AI systems throughout their lifetime. Used to ensure ongoing compliance and identify potential risks (Article 72).
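In practice, "actively and systematically collect, document, and analyse" implies a structured performance log. A hypothetical sketch of one monitoring record; the field names and the alert logic are illustrative, not mandated by Article 72:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MonitoringEvent:
    """One post-market observation about a deployed high-risk system.
    Field names are illustrative, not prescribed by Article 72."""
    system_id: str
    metric: str        # e.g. "false_positive_rate"
    value: float
    threshold: float   # provider-defined internal alert threshold
    observed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def needs_review(self) -> bool:
        return self.value > self.threshold

event = MonitoringEvent("hr-screening-v2", "false_positive_rate", 0.08, 0.05)
print(event.needs_review)  # True -> escalate under the monitoring plan
```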
- Prohibited AI Practice
- An AI practice banned outright under Article 5, including subliminal manipulation, exploitation of vulnerabilities, social scoring, untargeted facial-image scraping, emotion recognition in workplaces and education, and individual criminal-risk prediction based solely on profiling.
- Provider
- A natural or legal person that develops an AI system or a general-purpose AI model, or that has an AI system or model developed, and places it on the market or puts it into service under its own name or trademark (Article 3(3)).
R
- Reasonably Foreseeable Misuse
- The use of an AI system in a way that is not in accordance with its intended purpose but which may result from reasonably foreseeable human behaviour or interaction with other systems (Article 3(13)). Providers must account for such misuse in the risk management system (Article 9).
- Regulatory Sandbox
- A controlled framework set up by a national competent authority to enable the development, testing, and validation of innovative AI systems under regulatory supervision before they are placed on the market (Articles 57-62). Each Member State must establish at least one by August 2, 2026.
- Risk Management System
- A continuous iterative process planned and executed throughout the entire lifecycle of a high-risk AI system, to identify, analyse, evaluate, and treat risks. Must be documented and regularly updated (Article 9).
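The identify-analyse-evaluate-treat loop is commonly operationalised as a scored risk register. A minimal sketch; the 1-5 scales and the severity-times-likelihood score are ordinary risk-management conventions, not values prescribed by Article 9:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a provider's risk register. Scales and scoring
    are illustrative conventions, not Article 9 requirements."""
    description: str
    severity: int    # 1 (negligible) .. 5 (critical)
    likelihood: int  # 1 (rare) .. 5 (frequent)
    mitigation: str = ""

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

register = [
    Risk("biased outcomes for under-represented groups", 4, 3),
    Risk("use outside the intended purpose", 3, 2),
]

# Each lifecycle iteration: re-evaluate and treat the highest scores first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(risk.score, risk.description)
```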
S
- Substantial Modification
- A change to an AI system after it has been placed on the market or put into service which is not foreseen in the provider's initial conformity assessment and which affects the system's compliance with the AI Act requirements or modifies the intended purpose for which the system was assessed (Article 3(23)). A substantial modification triggers a new conformity assessment (Article 43(4)).
- Systemic Risk
- A risk specific to the high-impact capabilities of GPAI models, having a significant effect on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole (Article 3(65)). A GPAI model is presumed to pose systemic risk when its cumulative training compute exceeds 10²⁵ FLOPs, or when designated by the Commission (Article 51).
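The 10²⁵ FLOP presumption can be sanity-checked with the common 6 × N × D estimate of transformer training compute (roughly six floating-point operations per parameter per training token). The heuristic comes from the scaling-laws literature, not from the Act, and the model sizes below are hypothetical:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough training compute via the common 6*N*D heuristic
    (a scaling-laws estimate, not an Act-defined formula)."""
    return 6 * params * tokens

THRESHOLD = 1e25  # Article 51(2) presumption threshold

# Hypothetical models: 100B and 400B parameters, 15T training tokens each.
for params in (100e9, 400e9):
    flops = training_flops(params, 15e12)
    print(f"{params:.0e} params: {flops:.1e} FLOPs "
          f"-> presumed systemic risk: {flops > THRESHOLD}")
# 1e+11 params: 9.0e+24 FLOPs -> presumed systemic risk: False
# 4e+11 params: 3.6e+25 FLOPs -> presumed systemic risk: True
```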
T
- Technical Documentation (Annex IV)
- Detailed documentation that providers of high-risk AI systems must draw up before market placement and keep up to date, covering the system description, development process, monitoring and testing procedures, risk management, and data governance measures (Article 11 and Annex IV).
- Transparency Obligations
- Requirements for providers and deployers of certain AI systems (e.g., chatbots, generators of synthetic or deepfake content, emotion recognition systems) to inform people that they are interacting with an AI system or that content has been artificially generated or manipulated, unless this is obvious from the context (Article 50).
Ready to classify your AI systems?
ActLoom automates risk classification, gap analysis, and evidence management for the EU AI Act.
Start free assessment