What counts as an AI system under the EU AI Act?
The Commission's February 2025 guidelines clarify the Article 3(1) definition. Understand the boundary between AI systems and traditional software.
Article 3(1): the definition that gates everything
The AI Act defines an AI system as a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
This definition is intentionally broad and technology-neutral. It captures statistical ML, deep learning, symbolic reasoning, and hybrid approaches. The Commission's February 2025 guidelines provide worked examples to help providers determine whether their system falls within scope.
Key boundary tests
Three elements distinguish AI from traditional software: inference capability (the system derives outputs not directly mapped by developers), autonomy (it operates without full human predetermination of each output), and adaptiveness (optional: the system may evolve after deployment).
Simple rule-based systems, conventional database queries, and deterministic algorithms generally fall outside scope. However, systems using ML-trained components for any part of their pipeline are typically in scope, even if combined with rule-based logic.
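The boundary tests above can be expressed as a rough triage heuristic. This is an illustrative sketch only, not a legal test: the `SystemProfile` fields and the `likely_in_scope` function are hypothetical names invented here to mirror the three elements and the ML-component rule described in this section.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical self-assessment profile for a software system."""
    infers_outputs: bool           # derives outputs not directly mapped by developers
    operates_autonomously: bool    # not every output is predetermined by a human
    adapts_after_deployment: bool  # optional element under Article 3(1)
    uses_ml_component: bool        # any ML-trained part of the pipeline

def likely_in_scope(p: SystemProfile) -> bool:
    """Rough heuristic mirroring the boundary tests discussed above.

    Inference and autonomy are the required elements; adaptiveness is
    optional and does not decide scope on its own. A system with an
    ML-trained component anywhere in its pipeline is typically in scope
    even when combined with rule-based logic.
    """
    if p.uses_ml_component:
        return True
    return p.infers_outputs and p.operates_autonomously

# A deterministic rule engine: no inference, no ML component
rule_engine = SystemProfile(False, False, False, False)
# A recommender built around an ML ranking model
recommender = SystemProfile(True, True, True, True)
```

Under this sketch, `likely_in_scope(rule_engine)` is False and `likely_in_scope(recommender)` is True, matching the examples in the paragraph above. A real classification would require the full guidelines, not a four-field checklist.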
Practical recommendation
When in doubt, apply the definition conservatively. Documenting your classification reasoning creates an audit trail that protects you whether the system is ultimately found to be in or out of scope.
ActLoom's classifier applies these boundary tests automatically and records the reasoning chain for each system registered on your account.