The long-anticipated EU Artificial Intelligence ("AI") Act entered into force on 1 August 2024. It will be fully applicable 24 months after entry into force, on 2 August 2026, subject to certain exceptions. In particular, the provisions on prohibited AI systems will take effect on 2 February 2025. However, the related provisions on fines for non-compliance with these rules will, in principle, only start to apply later, on 2 August 2025 (see our timeline for implementation here).
There have been a number of further developments in relation to the AI Act in recent months, including the European Commission’s public consultation on Guidelines regarding the definition of an “AI system” and the practical implementation of the rules on “prohibited AI practices”. The European Commission has also published FAQs and a first draft Code of Practice on General-Purpose AI models. In Ireland, the Government has designated nine public bodies with additional powers under the AI Act. Meanwhile, an increasing number of companies have been signing up to the AI Pact, a voluntary initiative launched by the European Commission inviting in-scope businesses to comply with the key obligations of the AI Act ahead of time.
We look at each of these developments in more detail below.
European Commission Consultation for Guidelines on “AI System” and “Prohibited AI Practices”
Article 96 of the AI Act requires the European Commission to develop guidelines on the practical implementation of the AI Act, to help organisations understand their obligations and promote compliance.
The European Commission recently launched a targeted stakeholder consultation on the first sets of guidelines, covering: (i) the definition of an “AI system”; and (ii) the implementation of “prohibited AI practices”. The consultation will remain open until 11 December 2024.
The legal concepts regarding the AI system definition and prohibited AI practices are established in the AI Act. The consultation seeks additional practical examples from stakeholders to feed into the guidelines and provide further clarity on practical aspects and use cases.
(i) Definition of an “AI system”
The definition of an “AI system” is key to understanding the scope of application of the AI Act. The definition set out in Article 3(1) of the AI Act is aligned with the OECD definition: “AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. Recital 12 provides further clarifications on the definition of an AI system.
The European Commission is asking in its consultation which of the following seven elements of the definition of an AI system require further clarification, in addition to the guidance already provided in Recital 12:
(1) a machine-based system;
(2) designed to operate with varying levels of autonomy;
(3) may exhibit adaptiveness after deployment;
(4) for explicit or implicit objectives;
(5) infers, from the input it receives, how to generate outputs;
(6) predictions, content, recommendations, or decisions;
(7) can influence physical or virtual environments.
Recital 12 indicates that an AI system should be distinguished from “simpler traditional software systems or programming approaches and should not cover systems based on rules defined solely by natural persons to automatically execute operations”. The Commission guidelines are expected to provide a methodology for distinguishing AI systems from simpler traditional software systems or programming approaches.
(ii) Implementation of “prohibited AI practices”
Article 5 of the AI Act prohibits the placing on the EU market, putting into service, or use of certain categories of AI systems. These systems can be misused to provide novel and powerful tools for manipulative, exploitative, social control and/or surveillance practices.
The European Commission is asking in its consultation which elements of the prohibited AI systems set out in Article 5(1) require further clarification. The Commission is further asking stakeholders whether they have, or know of, “any concrete examples of AI systems” in respect of which they need further clarification regarding whether the system falls within the scope of a prohibited system or not, and, if so, to provide details of such systems, and the specific clarifications needed.
The eight categories of prohibited AI practices set out in Article 5(1) are:
(1) harmful subliminal, manipulative and deceptive techniques;
(2) harmful exploitation of vulnerabilities;
(3) unacceptable social scoring;
(4) individual crime risk assessment and prediction (with some exceptions);
(5) untargeted scraping of internet or CCTV material to develop or expand facial recognition databases;
(6) emotion recognition in the areas of workplace and education (with some exceptions);
(7) biometric categorisation to infer certain sensitive categories (with some exceptions); and
(8) real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with some exceptions).
The Commission guidelines are expected to “include an introductory section explaining the general interplay of the prohibitions with other Union legal acts, the high-risk category and general-purpose AI systems as well as relevant specifications of some horizontal concepts such as provider and deployer of AI systems, ‘placement on the market’, ‘putting into service’, and ‘use’ and relevant exceptions and exclusions from the scope of the AI Act (e.g. research, testing and development; military, defence and national security, personal non-professional activity)”.
European Commission publishes GPAI FAQs
The AI Act distinguishes between AI systems and AI models, imposing requirements for certain AI systems (Chapters II-IV) and obligations for providers of general-purpose AI (“GPAI”) models (Chapter V). While the provisions of the AI Act concerning AI systems depend on the context of use of the system, the provisions concerning GPAI models apply to the model itself, regardless of its ultimate use.
The European Commission published FAQs for GPAI models under the AI Act on 14 November 2024. The FAQs explain the regulatory approach to GPAI models in the AI Act.
The Commission stated that it is crucial that “providers of GPAI models possess a good understanding of their models along the entire AI value chain, both to enable the integration of such models into downstream AI systems and to fulfil their obligations under the AI Act”. Providers of GPAI models must draw up and provide technical documentation of their models to the AI Office and downstream providers, must put in place a copyright policy, and must publish a training content summary. In addition, providers of GPAI models posing systemic risks, whether because they are very capable or because they have a significant impact on the internal market for other reasons, must notify the Commission, assess and mitigate systemic risks, perform model evaluations, report serious incidents, and ensure adequate cybersecurity of their models.
European Commission publishes first draft GPAI Code of Practice
In conjunction with publication of the FAQs on GPAI models, the European Commission also published its first draft GPAI Code of Practice under the AI Act on 14 November 2024. Article 56 of the AI Act requires the AI Office to encourage and facilitate the drawing up of codes of practice at EU level in order to contribute to the proper application of the Act. The Code of Practice is intended to play a crucial role in guiding the future development and deployment of trustworthy and safe GPAI models.
The Code outlines provider obligations, including transparency and copyright-related obligations. For the small number of providers of advanced GPAI models that could pose systemic risks, the Code further details a taxonomy of systemic risks, risk assessment measures, and technical and governance mitigation measures. The Code does not address issues such as how the AI Office will enforce the obligations for providers of GPAI models, nor fines, sanctions and liability. These issues may instead be addressed through other means (e.g. decisions, delegated acts, implementing acts, or further communications from the AI Office). The Code will undergo three further drafting rounds before finalisation. Adherence to the final Code will enable providers to demonstrate compliance with the AI Act and may influence the level of any fines imposed by the AI Office. The draft Code is based on contributions from a multi-stakeholder consultation organised by the AI Office.
The AI Act provides that the final version of the Code should be published by 2 May 2025, to enable providers of GPAI models to demonstrate compliance on time. The rules on GPAI models come into effect on 2 August 2025.
Ireland designates nine authorities to safeguard fundamental rights under AI Act
The Irish Government recently designated nine public authorities to safeguard fundamental rights under the AI Act. Article 77 of the AI Act provides for certain national public authorities to be given additional powers to assist them in carrying out their existing responsibilities for protecting fundamental rights in circumstances where the use of high-risk AI systems referred to in Annex III poses a risk to those rights.
The AI Act set a deadline of 2 November 2024 for national governments to identify these national public authorities. The nine public authorities designated by the Irish Government are:
(1) An Coimisiún Toghcháin;
(2) Coimisiún na Meán;
(3) Data Protection Commission (“DPC”);
(4) Environmental Protection Agency;
(5) Financial Services and Pensions Ombudsman;
(6) Irish Human Rights and Equality Commission;
(7) Ombudsman;
(8) Ombudsman for Children; and
(9) Ombudsman for the Defence Forces.
These public authorities will have the power to request and access any documentation that developers and deployers of AI systems are required to create or maintain under the AI Act, when access is necessary for effectively fulfilling their mandates within the limits of their jurisdiction. In circumstances where such documentation is insufficient for these public authorities to ascertain whether an infringement of obligations under EU law protecting fundamental rights has occurred, the authority may make a reasoned request to the market surveillance authority to organise testing of the high-risk AI system through technical means.
This list of public authorities will be kept under review and can be updated at any time. The designated authorities will not be “competent authorities” for the purposes of the AI Act, nor will any obligations, responsibilities or tasks be assigned to them. Rather, they will be given additional powers to assist them in carrying out their existing mandates in circumstances involving the use of AI systems. These powers will apply from 2 August 2026. For example, the DPC will supervise and enforce compliance with data protection obligations under the GDPR in respect of high-risk AI systems that process personal data.
Article 70 of the AI Act, separately, requires Member States to identify the national competent authority(ies) to supervise and enforce the provisions of the AI Act relating to AI systems by 2 August 2025 (previously discussed here).
The AI Pact
To facilitate the transition to the new regulatory framework, the European Commission has launched the AI Pact, a voluntary initiative that seeks to support the future implementation of the AI Act and invites AI developers from Europe and beyond to comply with its key obligations ahead of time.
The AI Pact voluntary pledges call on participating companies to commit to at least three core actions:
- Adopting an AI governance strategy to foster the uptake of AI in the organisation and work towards future compliance with the AI Act;
- Identifying and mapping AI systems likely to be categorised as high-risk under the AI Act; and
- Promoting AI awareness and literacy among staff, ensuring ethical and responsible AI development.
The text of the pledges, initially drafted by the AI Office, was shared with the relevant stakeholders in the AI Pact network to gather feedback and insights. As a result, the final version of the pledges reflects the input received from stakeholders. These pledges are not legally binding and do not impose any legal obligations on participants. Companies can sign up at any time until the AI Act fully applies. The Pact allows participants to test and share their solutions with the wider community, while fostering the development and deployment of trustworthy AI.
Next Steps
With less than three months to go until the first provisions of the AI Act start to apply, it is time for companies to take concrete actions to understand, adapt, and prepare for the implementation of the Act. Priority steps for in-scope companies include identifying and mapping the risk level of your AI systems, assessing which operator role you play, and promoting AI awareness and literacy amongst your staff.
With the imminent implementation of the AI Act in mind, Matheson have published a comprehensive AI Guide for Businesses which provides an overview of the AI Act and will help you to understand the scope of your new obligations. Email us at: AIguide@matheson.com to request your copy.
Contact Us
Matheson's Technology & Innovation Group is available to guide you through the complexities of understanding your organisation's obligations under the AI Act. For more information, or if you would like assistance with putting in place an AI strategy, please contact Davinia Brennan, Anne-Marie Bohan, Sarah Jayne Hanna, Carlo Salizzo, Deirdre Crowley or your usual Matheson contact.