The European Commission recently published its eagerly anticipated Guidelines on the definition of an Artificial Intelligence (“AI”) system under the AI Act. The AI Act does not apply to all systems, but only to those that fulfil the definition of an “AI system” within the meaning of Article 3(1) of the AI Act. The definition of an AI system is therefore key to understanding the scope of application of the AI Act.
Unfortunately, however, the Guidelines do not provide the legal certainty which businesses were hoping for. In particular, it seems that the EU Commission has focused so much on aligning the Guidelines with the elements contained in the definition of an AI system in Article 3(1) that it remains unclear exactly when a system crosses the line from basic computation to AI-driven processing. In addition, it appears that not all elements of the definition are mandatory. For example, an AI system “may” show adaptiveness.
It is important to note that the Guidelines are not legally binding, and any authoritative interpretation of the AI Act can ultimately only be given by the Court of Justice of the European Union.
Background
By issuing these Guidelines, the Commission aims to assist providers and other relevant persons, including market and institutional stakeholders, in determining whether a system constitutes an AI system within the meaning of the AI Act, thereby facilitating the effective application and enforcement of the Act. Article 96(1)(f) of the AI Act requires the Commission to develop guidelines on the application of the definition of an AI system as set out in Article 3(1) of that Act.
The definition of an AI system entered into application on 2 February 2025, together with other provisions set out in Chapters I and II of the AI Act, including Article 5 of the AI Act on prohibited AI practices.
The Guidelines do not address the differences between AI systems and general-purpose AI models. The latter are regulated by Chapter V of the AI Act.
The Definition of an “AI system”
The Guidelines note that the definition of an “AI system” under Article 3(1) of the AI Act contains seven key elements, namely: “(1) a machine-based system; (2) that is designed to operate with varying levels of autonomy; (3) that may exhibit adaptiveness after deployment; (4) and that, for explicit or implicit objectives; (5) infers, from the input it receives, how to generate outputs (6) such as predictions, content, recommendations, or decisions (7) that can influence physical or virtual environments”.
The Guidelines work through each of these elements. They highlight that the definition adopts a lifecycle-based perspective encompassing two main phases: (i) the pre-deployment (i.e. the building phase) of the system and (ii) the post-deployment (i.e. the use phase) of the system. Accordingly, the seven elements set out in the definition are not required to be present continuously throughout both phases of the lifecycle. We discuss each of the seven elements in more detail below.
1. Machine-based system
The Guidelines explain that the term “machine-based” refers to the fact that AI systems are developed with and run on machines. The term “machine” includes both the hardware components that provide the infrastructure for computation (e.g. processing units, memory, storage devices), and the software components that handle how the hardware processes data and performs tasks (e.g. computer code, instructions, programs and operating systems) in order to enable the AI system to function.
The Guidelines elaborate that “machine-based” covers a wide variety of computational systems, including the most advanced quantum computing systems, so long as they provide computational capacity.
2. Varying levels of autonomy
The Guidelines refer to Recital 12 of the AI Act which clarifies that “varying levels of autonomy” means that AI systems are designed to operate “with some degree of independence of actions from human involvement and of capabilities to operate without human intervention.”
The reference to “some degree of independence of action” in Recital 12 of the AI Act excludes systems that are designed to operate solely with full manual human involvement and intervention. Human involvement and human intervention can be either direct (i.e. through manual controls), or indirect (i.e. through automated systems-based controls which allow humans to delegate or supervise system operations). As such, some degree of autonomy is a necessary condition for a system to qualify as an AI system.
The Guidelines warn that systems that have the capability to operate with limited or no human intervention in specific use contexts may trigger additional potential risks and human oversight considerations.
3. Adaptiveness
In explaining the meaning of “adaptiveness”, the Guidelines refer to Recital 12 of the AI Act which clarifies that “adaptiveness” refers to self-learning capabilities, allowing the behaviour of the system to change while in use. The new behaviour of the adapted system may produce different results from the previous system for the same inputs.
Importantly, the Guidelines explain that the use of the term “may exhibit adaptiveness after deployment” means that a system does not necessarily have to possess adaptiveness or self-learning capabilities after deployment to qualify as an AI system. Accordingly, adaptiveness is not a decisive condition for determining whether a system constitutes an AI system.
4. AI system objectives
The Guidelines set out that the objectives of an AI system may be explicitly or implicitly defined. Explicit objectives refer to clearly stated goals that are directly encoded by the developer into the system (e.g. the optimisation of a cost function, a probability or a cumulative reward). Implicit objectives refer to goals that may be deduced from the behaviour or underlying assumptions of the system. These objectives may arise from the training data or from the interaction of the AI system with its environment.
The objectives of an AI system may be different from the intended purpose of the AI system. The objectives of an AI system are internal to the system, referring to the goals of the tasks to be performed and their results. Meanwhile, the intended purpose of an AI system is externally oriented, referring to the use for which an AI system is intended by the provider.
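By way of illustration only, the short sketch below (our own hypothetical example, not taken from the Guidelines, using toy data) shows what an explicit objective can look like in practice: a cost function, here mean squared error, that is directly encoded by the developer and that the system minimises during training.

```python
# Minimal illustrative sketch (our own assumptions, not from the Guidelines):
# an "explicit objective" directly encoded by the developer as a cost function
# (mean squared error), which the system minimises during training.
import numpy as np

def mse(weights, X, y):
    # The explicitly stated goal: minimise the average squared prediction error.
    return np.mean((X @ weights - y) ** 2)

def fit(X, y, lr=0.01, steps=1000):
    weights = np.zeros(X.shape[1])
    for _ in range(steps):
        gradient = 2 * X.T @ (X @ weights - y) / len(y)  # gradient of the cost function
        weights -= lr * gradient                          # step towards the objective
    return weights

# Toy data: the relationship y = 2x, which the system learns by pursuing the objective.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])
w = fit(X, y)
print(w, mse(w, X, y))  # weights close to [2.0], cost close to 0
```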
5. Inferencing how to generate outputs using AI techniques
An AI system must be able to infer, from the input it receives, how to generate outputs. Recital 12 of the AI Act clarifies that the definition of AI systems “should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations”. In this respect, the Guidelines highlight that systems outside the scope of the AI system definition include, for example, systems for improving mathematical optimisation; basic data processing systems; systems based on classical heuristics; and simple prediction systems.
AI systems must therefore be capable of obtaining outputs, such as predictions, content, recommendations or decisions, which can influence physical and virtual environments, and of deriving models or algorithms from the inputs they receive. Techniques that enable inference when building an AI system include, for example, machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or a symbolic representation of the task to be solved.
The Guidelines also make clear that the phrase “infer how to” in Article 3(1) and Recital 12 of the AI Act should be interpreted broadly and should be understood as referring to the building phase, whereby a system derives outputs through AI techniques enabling inferencing.
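To illustrate the distinction drawn above, the following sketch (our own hypothetical example; the scenario, data and thresholds are assumptions, not taken from the Guidelines or the AI Act) contrasts a system that merely executes rules defined in advance by a natural person with one that infers its decision logic from example data.

```python
# Hypothetical sketch only: names, data and thresholds are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

# (a) A system based on rules defined solely by a natural person and executed
#     automatically: the decision logic is fixed in advance by the developer.
def rule_based_credit_check(income: float, debts: float) -> bool:
    # income and debts expressed in thousands; thresholds chosen by a human
    return income > 30 and (debts / income) < 0.4

# (b) A system that infers how to generate outputs: the decision logic (the
#     model's parameters) is learned from example data rather than written by hand.
X_train = [[45, 5], [20, 15], [60, 10], [25, 20]]  # [income, debts] in thousands
y_train = [1, 0, 1, 0]                             # 1 = repaid, 0 = defaulted (toy data)
model = LogisticRegression().fit(X_train, y_train)

print(rule_based_credit_check(35, 8))   # output follows the hand-written rule
print(model.predict([[35, 8]]))         # output inferred from the training data
```

On that reading, the first system remains an automated execution of human-defined rules, whereas the second derives how to decide from data, which is the kind of inference capability Recital 12 points to.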
6. Outputs that can influence physical or virtual environments
The outputs of AI systems belong to four broad categories listed in Article 3(1) of the AI Act: (i) predictions; (ii) content; (iii) recommendations; or (iv) decisions, which can influence physical or virtual environments. Each category differs in its level of human involvement.
The Guidelines note that predictions are one of the most common outputs that AI systems produce and that require the least human involvement. A prediction is an estimate about an unknown value (the output) from known values supplied to the system (the input). AI systems using machine learning can uncover complex patterns in data and generate highly accurate predictions. For example, AI systems used in self-driving cars are designed to make real-time predictions and to take decisions to adjust their behaviour accordingly.
Content refers to the generation of new material by an AI system. This may include text, images, videos, music and other forms of output. The Guidelines note that, while content as a category of output could be viewed as a sequence of “predictions” or “decisions”, it is listed in Recital 12 of the AI Act as a separate category of output due to its prevalence in generative AI systems.
Recommendations refer to suggestions for specific actions, products or services to users based on their preferences, behaviours or other data inputs. AI-based recommendation systems, for example, can leverage large-scale data, adapt to user behaviour in real-time, provide highly personalised recommendations, and scale efficiently as the dataset grows. In contrast, non-AI systems that rely on static, rule-based mechanisms and limited data rarely possess these functionalities. In other cases, recommendations refer to potential decisions, such as a recommendation to hire a particular candidate in a recruitment system, which will then be evaluated by humans. If these recommendations are automatically applied, they become “decisions”.
Decisions refer to conclusions or choices made by a system. Where an AI system undertakes a fully automated process whereby a certain outcome is produced in the system’s environment without any human intervention, that outcome may be considered a decision.
7. Interaction with the environment
Finally, an AI system’s output must be able to influence physical (i.e. tangible, physical objects) or virtual (i.e. digital spaces or software ecosystems) environments. This element of the definition of an AI system emphasises the fact that AI systems are not passive, but rather must actively impact the environments in which they are deployed.
Conclusion
Due to the diverse nature and variety of AI systems, the Guidelines do not provide an exhaustive list of all potential AI systems. Each system must be assessed on a case-by-case basis to determine whether it constitutes an “AI system” under the AI Act, taking into consideration the seven elements of the definition laid down in Article 3(1) of the AI Act.
To the extent that an AI system does fall within the scope of the AI Act, businesses will need to consider the level of risk associated with that AI system in order to determine their obligations under the AI Act. The AI Act’s risk-based approach means that only those systems giving rise to the most significant risks to fundamental rights and freedoms will be subject to its prohibitions contained in Article 5, its regulatory regime for high-risk AI systems set out in Article 6, and its transparency requirements laid down in Article 50. The Guidelines note that the vast majority of systems, even if they qualify as AI systems within the meaning of Article 3(1) of the AI Act, will not be subject to any regulatory requirements under the AI Act.
Contact Us
For more information, please contact Davinia Brennan, or any member of our Technology and Innovation Group or your usual Matheson contact.