
Europe Narrows in on First Artificial Intelligence Act

On Dec. 9, 2023, European Union policymakers reached an agreement on a new law aimed at regulating artificial intelligence.
by Elliot Volkman

December 11, 2023

The EU AI Act introduces new regulations, including prohibitions on certain uses of artificial intelligence (with exceptions for law enforcement purposes) and additional obligations and safeguards to address emerging technological advancements.

“The EU is the first in the world to set in place robust regulation on AI, guiding its development and evolution in a human-centric direction. The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities,” stated Co-rapporteur Dragos Tudorache (Renew, Romania).

The proposed rule aims to reduce societal and economic risks. It seeks to strike a balance between protective measures and the promotion of technological growth in machine learning and improvements to artificial intelligence models.

According to the European Parliament, the agreed text must be formally adopted by the EU Parliament and Council before it becomes law. A vote is scheduled for early 2024. After that, organizations will have 12 to 24 months to comply with the new act.

“It was long and intense, but the effort was worth it. Thanks to the European Parliament’s resilience, the world’s first horizontal legislation on artificial intelligence will keep the European promise - ensuring that rights and freedoms are at the centre of the development of this ground-breaking technology,” stated Co-rapporteur Brando Benifei (S&D, Italy).

Taking a stick rather than carrot approach, the law imposes fines for non-compliance ranging from 7.5 million euros or 1.5% of global turnover up to 35 million euros or 7% of turnover, depending on the infringement and the size of the company.

“Correct implementation will be key - the Parliament will continue to keep a close eye, to ensure support for new business ideas with sandboxes, and effective rules for the most powerful models,” continued Benifei.

Inside the EU Artificial Intelligence Act

The final terms of the act have not yet been publicly released, but the European Parliament has provided some insight into what it will involve. Specifically, the EU AI Act aims to address societal impacts such as job automation and social scoring (similar to the Black Mirror episode "Nosedive"), as well as higher-risk activities like misinformation campaigns or threats to national security.

Banned EU Artificial Intelligence Applications

  • Biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race)

  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases

  • Emotion recognition in the workplace and educational institutions

  • Social scoring based on social behavior or personal characteristics

  • AI systems that manipulate human behavior to circumvent their free will

  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation)

Law Enforcement Exceptions of Banned EU AI Applications

European law enforcement will have more leeway than the private sector and citizens, but it will be subject to strict guidelines on when AI exceptions can be made.

For example, remote biometric identification (RBI) systems can be used in publicly accessible spaces, but only with prior judicial authorization and for strictly defined lists of crimes. Post-remote RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.

However, real-time use of AI will have an even narrower scope: it may only be applied in specific locations and for limited windows of time. The parliament offered the following use cases as examples of law enforcement exemptions:

  • Targeted searches of victims (abduction, trafficking, sexual exploitation)

  • Prevention of a specific and present terrorist threat

  • The localization or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, environmental crime)

Higher-Risk, Higher AI Regulatory Obligations

In Europe, high-impact uses of AI, particularly those with higher risks, will be subject to stricter regulations. Depending on the specific application, technology providers may be required to conduct mandatory assessments of their impact on fundamental rights, as well as comply with other yet-to-be-specified requirements.

These more tightly regulated uses include, but are not limited to, critical infrastructure (utilities), medical devices and healthcare, financial services, education, vehicles and transportation, and human resources-related activities.

Most interesting, though, is the inclusion of EU citizens' right to file complaints about AI systems and receive explanations "about decisions based on high-risk AI systems that impact their rights."

It is unclear if this is a regulatory-run complaint system or one imposed on the provider to address specific concerns regarding the use of artificial intelligence.

High Impact AI Guardrails

According to the European Parliament report, high-impact general-purpose AI (GPAI) systems that could ingest personally identifiable information (PII), HPII, or other critical information will face the most stringent guardrails.

“If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity, and report on their energy efficiency. MEPs also insisted that, until harmonized EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.”

Technology providers will be required to conduct impact assessments and conformity assessments, register in a related database, develop risk management and quality management systems, ensure data governance, maintain human oversight, and issue transparency reports.

General AI Guardrails

Other general-purpose AI (GPAI) systems, like ChatGPT and Bard, will also be required to adhere to additional transparency requirements. These include issuing technical documentation, complying with EU copyright laws, and publishing summaries of the content used to train the models.

First proposed in 2021, these rules were based on a risk scale ranging from low to unacceptable. The framework has since been expanded to cover foundation models that carry the most significant weight and user interaction (such as OpenAI's models, Google Bard, or Gemini). Though the framework has evolved, the act clearly concentrates its most significant regulations on activities considered high risk.
