AI Talk
The EU AI Act was first proposed by the European Commission in 2021 and, after significant amendments, was unanimously approved by the EU Council on May 21, 2024. I briefly discussed the EU AI Act when comparing it to Colorado's AI legislation in a prior post. Today, I provide a high-level overview of the EU AI Act, who it applies to, and what it may mean for startups.
The EU AI Act classifies and regulates AI systems according to four levels of risk: unacceptable, high, limited, and minimal.
Unacceptable-risk systems present a clear threat to the safety, livelihoods, and rights of individuals and are prohibited outright; the Act enumerates what these systems are. Examples include AI systems that deploy subliminal, manipulative, or deceptive techniques to distort behavior or impair informed decision-making; systems that exploit vulnerabilities related to age, disability, or socio-economic circumstances; biometric categorization systems that infer sensitive characteristics; social scoring; and untargeted scraping of facial images to build facial recognition databases.
The majority of the Act's regulation concerns high-risk systems: those used as a safety component of a product, those deployed in areas such as healthcare, education, law enforcement, critical infrastructure, and migration and asylum, and any system that profiles individuals. Profiling includes automated processing of personal data to assess aspects of an individual's life, such as work performance, economic situation, health status, preferences, interests, or behavior.
Limited-risk AI systems perform generally available functions, many of which we've already used, such as image and speech recognition, translation, and content generation. These systems are subject to lighter obligations, chiefly around transparency: developers and deployers must ensure that end users are aware they are interacting with AI.
Minimal-risk AI systems are unregulated and may be freely used. These include spam filters and AI-enabled video games, although products in these categories may move up the risk scale as generative AI capabilities advance.
Providers of high-risk AI systems must comply with numerous obligations, including establishing a risk management system; implementing data governance for training, validation, and testing data; preparing technical documentation; enabling record-keeping and logging; providing instructions and transparency to deployers; ensuring human oversight; and meeting accuracy, robustness, and cybersecurity requirements.
The EU AI Act also regulates General Purpose AI (GPAI) models and GPAI systems, which may be used as high-risk AI systems or as components of them. Providers of GPAI models are subject to separate requirements, including maintaining technical documentation, providing information to downstream providers who integrate the model, putting in place a policy to comply with EU copyright law, and publishing a summary of the content used to train the model. Providers of GPAI models that pose systemic risk face additional obligations, such as model evaluations and incident reporting.
The EU AI Act affects EU businesses and non-EU businesses that provide, develop, or use AI systems or models within the EU. This means that US-based startups that sell their products in the EU must comply with the EU AI Act. The good news is that AI companies will have time to come into compliance: the Act's obligations phase in over time, with prohibitions on unacceptable-risk systems applying six months after the Act enters into force, GPAI obligations after twelve months, and most remaining provisions after twenty-four months.
While there isn’t comprehensive federal legislation on AI systems yet, when reading the EU AI Act together with the Colorado AI Act (which I discussed here and here), I see signals for future regulation at both the state and federal levels.
If you’re an AI startup building products that may affect individual rights, it’s best to consult product counsel for guidance on preparing for future domestic legislation, complying with EU regulation, and keeping pace with evolving industry standards.