AI Talk
In my previous post, I described the general aspects of the Colorado AI Law, which makes Colorado the first US state to pass general AI legislation. In today’s post, I’ll highlight comparisons with the Colorado Privacy Act and the EU AI Act, and discuss how the law could serve as a model for future federal legislation.
The focus of the Colorado Privacy Act is squarely on the use of personal data, profiling, and automated decision making, while the Colorado AI Law offers Consumers some rights beyond that, focusing on managing AI risk more generally. Reading the AI Act and the Privacy Act together, however, provides perspective on how Colorado addresses AI as a whole.
The Colorado Privacy Act has explicit controller obligations and consumer rights that mirror those in the GDPR, such as the right to opt out, right of access, right of correction, right to deletion, and right of data portability. Additionally, it targets “profiling”, which is automated processing performed on personal data that relates to certain attributes or behavior of an individual, such as economic situation, health, personal preferences, interests, reliability, behavior, location, or movements. Opt-out requests for profiling that produces legal or similarly significant effects based on Solely Automated Processing or Human Reviewed Automated Processing must be honored, with one exception: an organization may reject an opt-out request if there is “material human involvement” in reaching the decision, such as an AI informing a human decision-maker where the AI does not actually determine the outcome.
The CPA has detailed risk assessment requirements, as does the Colorado AI Act for HAIS. For any HAIS that processes personal data, companies will have to comply with both sets of risk assessment requirements.
When it comes to enforcement, the CPA and the AI Act are similar: only the Colorado Attorney General may enforce either law, and a violation constitutes a deceptive trade practice.
I’ll cover the EU AI Act, which was approved by the Council of the European Union last week, in more detail in a future post. Both bodies of law aim to protect individuals from discriminatory decisions that can affect their rights and livelihoods.
Generally, the EU AI Act and the Colorado AI Law both use a risk-based approach. The Colorado AI Law targets High Risk AI Systems, while the EU AI Act explicitly categorizes AI systems as unacceptable risk, high risk, limited risk, or minimal risk, with regulations governing the use of high-risk and limited-risk systems (unacceptable-risk systems are prohibited and minimal-risk systems are not regulated). The majority of obligations under the EU AI Act fall on developers of high-risk systems. The EU AI Act applies to companies whose high-risk AI systems will enter the EU market, or where a high-risk AI system’s output is used in the EU.
The EU AI Act also spells out sanctions and fines, while the Colorado AI Law gives the Attorney General remedies such as civil penalty authority.
As the first general AI legislation passed by a US state, the Colorado AI Law has potentially set a standard for other states’ legislation, and possibly for federal law. It’s a good step in regulating systems that carry a high risk of infringing on individual rights and affecting livelihoods, and its framework, in my opinion, is more likely to be adopted by other states than that of the EU AI Act, whose regulation is broader in both its categorization of AI systems and its compliance requirements.
I’m curious to see how these laws may serve as a model for other US states, and potentially for federal legislation. Consumer protection, especially in the financial, healthcare, employment, and education sectors, is one of the biggest areas of concern when it comes to AI development, so states will have to address these risks soon. My guess is that other states will first see how Colorado rolls out compliance guidelines and enforcement policies before following along, tweaking their own policies to address the consequences and outcomes they observe.
And, while Deployer and Developer compliance obligations certainly help protect Consumers from discrimination, I’m curious to see how these obligations affect US and international companies’ ability to sell their products and services to CO residents. Will these compliance and disclosure obligations discourage them, or will there be no effect? Whatever effect we observe may also be multiplied to the extent other states model their AI legislation after Colorado’s. Even if other states adopt different frameworks, future AI legislation will likely create many compliance challenges for AI companies.