The green light has been given to the Artificial Intelligence Act by the European Parliament's Civil Liberties and Internal Market committees[1].
Notably, the committees agreed, by a large majority, to ban the use of facial recognition in public spaces and predictive policing tools, and to impose new transparency measures on generative AI applications such as ChatGPT.
As predicted in our Three Top Trends for AI in 2023, AI regulation continues to progress at pace in the EU, with a keen focus on ethics and the development of standards and certification.
What do the AI rules do?
The rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics).
Is there anything new on the prohibited list?
The rules were amended to include bans on intrusive and discriminatory uses of AI systems such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorisation;
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location or past criminal behaviour);
- Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (in violation of human rights and the right to privacy).
What about High-risk AI?
MEPs expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment. They also added to the high-risk list AI systems used to influence voters in political campaigns, as well as the recommender systems used by social media platforms with more than 45 million users (as designated under the Digital Services Act).
What about General-purpose AI?
MEPs included obligations for providers of foundation models - a new and fast-evolving development in the field of AI - who would have to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. They would need to assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database.
Generative foundation models, such as ChatGPT, will have to comply with additional transparency requirements, including:
- disclosing that the content was generated by AI;
- designing the model to prevent it from generating illegal content; and
- publishing summaries of copyrighted data used for training.
What about supporting innovation and protecting citizens' rights?
To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law promotes regulatory sandboxes, or controlled environments, established by public authorities to test AI before its deployment.
MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs have also agreed to reform the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.
What's next?
The next step is plenary adoption during the 12-15 June session, after which the final stage of the legislative process begins: negotiations between the European Parliament, the Council of the European Union and the European Commission (the "trilogue").[2]
Matheson's Technology and Innovation Group is available to guide you through the complexities of artificial intelligence and will keep clients abreast of developments as they occur. For more information, please contact Rory O'Keeffe or your usual Matheson contact.