SB 942, the California AI Transparency Act, was signed into law the same day as AB 2013, which I discussed in a prior post. In today's post, I discuss SB 942's objective, when it goes into effect, who it affects, its key requirements, and suggested practices for AI developers. In my next and last post on California's AI bills, I'll discuss Governor Newsom's recent veto of SB 1047, along with my thoughts on its impact.
Objective
The objective of this bill is to let users determine the provenance of digital content and answer the question: "Was this content generated or altered using AI?"
When it Goes into Effect
Starting Jan. 1, 2026, “covered providers” must meet certain key requirements, which I discuss below.
Who it Affects
SB 942 applies to "covered providers," defined as a person that creates, codes, or otherwise produces a generative AI (GenAI) system that has over 1,000,000 monthly visitors (located anywhere) and that is publicly accessible within the state of CA. Only AI systems that produce image, video, or audio content are covered; AI systems that generate text, code, or other outputs (including chatbots) are not regulated here.
These requirements do not apply to any providers of “exclusively non-user-generated video game, television, streaming, movie, or interactive experiences.”
Key Requirements
- AI Detection Tool:
- Covered providers must make an AI detection tool publicly available to users at no cost, and must collect and use feedback on the efficacy of this tool.
- Features must include the following (see the detection-tool sketch after this list):
  - Allow the user to determine whether image, video, or audio content was created or altered by the provider's AI system.
  - Provide the user with metadata or similar data showing the provenance of that content, excluding any personal information.
  - Let the user submit content either by uploading it or by linking to it.
  - Support both API access and direct website access, so users have multiple access options.
  - Retain submitted content only as required by law.
- Disclosure/Watermarking Tool:
- For image, video, and audio content created by the covered provider's AI system, users must be able to include a watermark or similar disclosure identifying the content as AI-generated by that system.
- The disclosure must be “extraordinarily difficult to remove” and clear, conspicuous, and understandable to a reasonable person.
- Latent Disclosures:
- For image, video, and audio content created by the covered provider's AI system, there must be a disclosure conveying the provenance of that content, such as: the name of the covered provider, the name and version number of the AI system that created or altered the content, a timestamp, and a unique identifier (see the metadata sketch after this list).
- This latent disclosure must be detectable by the AI detection tool, consistent with industry standards, and either permanent or extraordinarily difficult to remove.
- Contractual Obligations:
- If the covered provider licenses its AI system to a third party, the contract must require that third party to retain the latent disclosure feature.
- If the covered provider learns that the third party has removed the latent disclosure feature by modifying the AI system, the covered provider must revoke the license within 96 hours of learning of the removal (see the deadline sketch after this list).
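To make the detection-tool requirement concrete, here is a minimal sketch of what the API access option could look like. Nothing here is mandated by SB 942: the endpoint path, the `PROVENANCE_DB` lookup table, and the hash-based matching are all illustrative assumptions, and a production tool would more likely parse provenance metadata embedded in the content itself (see the next sketch).

```python
# Minimal sketch of an SB 942-style detection endpoint (illustrative only).
# Assumes Flask; PROVENANCE_DB and the /v1/detect path are hypothetical.
import hashlib

from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative in-memory provenance store: content hash -> disclosure record.
# A real system would query a database or read embedded metadata instead.
PROVENANCE_DB = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b": {
        "provider": "ExampleAI Inc.",       # name of the covered provider
        "system": "example-gen",            # AI system that created the content
        "version": "2.1",                   # version number
        "created": "2026-01-15T09:30:00Z",  # timestamp
        "content_id": "c0ffee-1234",        # unique identifier
    },
}

@app.route("/v1/detect", methods=["POST"])
def detect():
    """Accept an uploaded file (the statute also requires link-based submission)."""
    uploaded = request.files.get("content")
    if uploaded is None:
        return jsonify({"error": "no content provided"}), 400

    # Hash the upload to look up its provenance record. Note that the
    # statute limits retention of submitted content, so nothing is
    # written to disk here.
    digest = hashlib.sha256(uploaded.read()).hexdigest()
    record = PROVENANCE_DB.get(digest)

    return jsonify({
        "ai_generated": record is not None,
        "provenance": record,  # personal information must be excluded
    })
```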
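Similarly, here is a sketch of attaching the latent-disclosure fields to a generated PNG with Pillow. Plain metadata chunks like these are trivially stripped, so on their own they would not meet the "extraordinarily difficult to remove" standard; a compliant implementation would layer in something durable, such as C2PA Content Credentials or a robust steganographic watermark. The provider and system names below are hypothetical.

```python
# Sketch of embedding SB 942 latent-disclosure fields in a PNG (illustrative).
# Uses a plain text chunk, which only demonstrates the required fields;
# it is NOT a durable disclosure on its own.
import json
import uuid
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def attach_latent_disclosure(in_path: str, out_path: str) -> str:
    """Write a provenance record into the PNG and return its unique ID."""
    disclosure = {
        "provider": "ExampleAI Inc.",                       # covered provider (hypothetical)
        "system": "example-gen",                            # AI system name (hypothetical)
        "version": "2.1",                                   # system version number
        "created": datetime.now(timezone.utc).isoformat(),  # timestamp
        "content_id": str(uuid.uuid4()),                    # unique identifier
    }
    meta = PngInfo()
    meta.add_text("ai_disclosure", json.dumps(disclosure))

    img = Image.open(in_path)
    img.save(out_path, pnginfo=meta)
    return disclosure["content_id"]
```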
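Finally, the 96-hour revocation deadline is worth operationalizing. A trivial sketch of the deadline arithmetic, assuming you timestamp the moment you learn that a third party has removed the latent disclosure feature:

```python
# Deadline arithmetic for SB 942's license-revocation window (illustrative).
from datetime import datetime, timedelta, timezone

REVOCATION_WINDOW = timedelta(hours=96)

def revocation_deadline(learned_at: datetime) -> datetime:
    """Latest time by which the third party's license must be revoked."""
    return learned_at + REVOCATION_WINDOW

# Example: knowledge acquired Jan 2, 2026 at 09:00 UTC means the license
# must be revoked by Jan 6, 2026 at 09:00 UTC.
print(revocation_deadline(datetime(2026, 1, 2, 9, tzinfo=timezone.utc)))
```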
Penalties
The covered provider is liable for $5,000 per violation, and each day the covered provider remains in violation counts as a separate violation; for example, a single violation left unremedied for 30 days would accrue $150,000 in penalties. The California Attorney General, along with city- and county-level prosecutors, is responsible for enforcement.
If a third party has its license revoked for removing the latent disclosure feature, government prosecutors may pursue injunctive relief and reasonable attorney's fees and costs against the third party.
What’s Next for AI Developers
Here are some suggested next steps for AI developers to comply with this new set of requirements:

- Determine whether you are a covered provider: a publicly accessible GenAI system in CA that produces image, video, or audio content and has over 1,000,000 monthly visitors.
- Scope and build the free public AI detection tool, with both API and direct website access and a mechanism for collecting and using user feedback.
- Add the user-facing watermark/disclosure option and the latent provenance disclosures to your generation pipeline, and confirm your detection tool can read the latent disclosures.
- Review your license agreements so that third parties are contractually required to retain the latent disclosure feature.
- Establish a monitoring and revocation process so you can revoke a license within 96 hours of learning that a third party has removed the latent disclosure feature.

Your commercial and product attorney can assist with the last 2 steps to get you compliant.