EU AI Act: What to Know

Michelle Ma
July 10, 2024

AI Talk

The EU AI Act was first proposed in 2021 and, after significant amendments, was unanimously approved by the EU Council on May 21, 2024. I briefly discussed the EU AI Act when comparing it to Colorado's AI legislation in a prior post. Today, I provide a high-level overview of the EU AI Act, who it applies to, and what it may mean for startups.

Risk Classification

The EU AI Act classifies and regulates AI systems according to four levels of risk: unacceptable, high, limited, and minimal.

Unacceptable-risk systems are those that present a clear threat to the safety, livelihoods, and rights of individuals; the Act enumerates them and prohibits them outright. Examples include AI systems that deploy subliminal, manipulative, or deceptive techniques to distort behavior or impair informed decision-making; systems that exploit vulnerabilities related to age, disability, or socio-economic circumstances; biometric categorization systems that infer sensitive attributes; social scoring; and untargeted scraping of facial images to compile facial recognition databases.

The bulk of the Act's obligations apply to high-risk systems: those used as a safety component of a product (or that are themselves such a product), those deployed in areas such as healthcare, education, law enforcement, critical infrastructure, and asylum, and any system that profiles individuals. Profiling means automated processing of personal data to assess aspects of an individual's life, such as work performance, economic situation, health status, preferences, interests, or behavior.

Limited risk AI systems are those that perform generally available functions many of us already use, such as image and speech recognition, translation, and content generation. These systems are subject to lighter, transparency-focused obligations: developers and deployers must ensure that end users are aware they are interacting with AI (a minimal disclosure sketch follows).
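For illustration, here is a minimal sketch of how a deployer might surface that disclosure in a chat product. The wording, function names, and placement are my own assumptions; the Act does not prescribe a specific format.

```python
def generate_reply(message: str) -> str:
    # Stand-in for a real model call; purely hypothetical for this sketch.
    return f"Echo: {message}"

AI_NOTICE = "Note: you are interacting with an AI system."

def respond(user_message: str) -> str:
    """Attach an AI-interaction disclosure to every generated reply."""
    return f"{AI_NOTICE}\n\n{generate_reply(user_message)}"

print(respond("What are my compliance deadlines?"))
```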

Minimal risk AI systems are unregulated and may be freely used. These include spam filters and AI-enabled video games, although products in these categories may shift into higher risk tiers as generative AI capabilities develop.

High-Risk System Requirements

Providers of high-risk AI systems must comply with numerous obligations, which include: 

  • Establish a risk management system that runs through the system’s entire lifecycle
  • Conduct data governance, including ensuring that training, validation, and testing datasets are relevant, representative, and as complete and error-free as possible
  • Prepare technical documentation to demonstrate compliance, and be ready to provide it to authorities
  • Design the system for record-keeping, so that it automatically logs events relevant for identifying national-level risks and substantial modifications throughout its lifecycle (a minimal logging sketch follows this list)
  • Provide instructions for use to deployers
  • Design the system to allow deployers to implement human oversight
  • Design the system to achieve “appropriate” levels of accuracy, robustness, and cybersecurity
  • Establish a quality management system
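
To make the record-keeping item concrete, here is a minimal sketch of automatic, timestamped event logging. The event names and fields are illustrative assumptions, not terminology from the Act, and a real system would need to decide with counsel which events count as relevant.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit log; JSON lines keep records machine-readable.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_event(event_type: str, **details) -> None:
    """Record a timestamped, structured event for later audit."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **details,
    }
    logging.info(json.dumps(record))

# Hypothetical events: an inference and a substantial modification.
log_event("inference", model_version="1.4.2", request_id="req-123")
log_event("substantial_modification", change="retrained on new dataset")
```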

General Purpose AI

The EU AI Act also covers General Purpose AI (GPAI) models and GPAI systems, which may be used as high-risk AI systems or as components of them. Providers of GPAI models are subject to separate requirements: 

  • Prepare technical documentation
  • Prepare information and documentation to supply downstream providers for integrating GPAI models into their own systems
  • Create a policy to comply with the Copyright Directive
  • Publish a detailed summary of the content used to train the GPAI model (a hypothetical example follows this list)
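
To give a feel for that last item, here is a hypothetical, machine-readable training-content summary. Every field name and value is an illustrative assumption, not a format the Act specifies.

```python
import json

# Hypothetical schema for a public training-content summary.
training_content_summary = {
    "model_name": "example-gpai-model",
    "data_sources": [
        {"name": "Public web crawl", "type": "scraped", "share_pct": 60},
        {"name": "Licensed news archive", "type": "licensed", "share_pct": 25},
        {"name": "Synthetic data", "type": "generated", "share_pct": 15},
    ],
    "languages": ["en", "de", "fr"],
    "copyright_policy_url": "https://example.com/copyright-policy",
}

print(json.dumps(training_content_summary, indent=2))
```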

Who This Affects

The EU AI Act affects both EU businesses and non-EU businesses that provide, develop, or use AI systems or models within the EU. This means that US-based startups selling their products in the EU must comply with the EU AI Act. The good news is that AI companies will have time to come into compliance – see the timeline below.

Timeline for Application & Enforcement

  • In June/July 2024, the AI Act will be published in the Official Journal of the EU, giving official notice of the new law.
  • 20 days later, the AI Act will enter into force. From then, sections of the Act will roll out in this order:
    • 6 months from entering into force: prohibitions on unacceptable-risk systems apply
    • 12 months: obligations for GPAI models apply, along with provisions on governance, confidentiality, and penalties, and the sections requiring EU member states to set up notifying authorities for high-risk AI systems
    • 24 months: the rest of the AI Act applies, except certain classification rules for high-risk AI systems
    • 36 months: the remaining classification rules for high-risk systems apply (the sketch below computes these dates)
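
To make the rollout concrete, here is a small sketch that computes each milestone from an entry-into-force date. The date used below is an assumption for illustration; the real date is fixed by publication in the Official Journal.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

# Assumed entry-into-force date, for illustration only.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Prohibitions on unacceptable-risk systems": 6,
    "GPAI, governance, confidentiality, penalties": 12,
    "Most remaining provisions": 24,
    "Remaining high-risk classification rules": 36,
}

for label, months in milestones.items():
    print(f"{label}: {entry_into_force + relativedelta(months=months)}")
```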

What’s Next for US Startups

While there isn’t yet federal legislation on AI systems, reading the EU AI Act together with the Colorado AI Act (which I discussed here and here), I see signals for future regulation at both the state and federal levels: 

  • Initial regulation will likely focus on high-risk systems: those affecting fundamental rights and individuals’ ability to get an education, get a job, exercise their legal rights, and obtain healthcare. 
  • Regulation will likely require disclosures to end users about AI integration in products, particularly consumer products where the average user may not otherwise realize they’re interacting with AI. 
  • Documentation around training data, and automated tracking of certain user events, will likely be required for greater transparency to both users and regulators.

If you’re an AI startup building products that may affect individual rights, it’s best to consult product counsel to get guidance on best practices to prepare for future domestic legislation, compliance with EU regulation, and evolving industry standards.