CA SB 1047, Safe and Secure Innovation for Frontier Artificial Intelligence Models Act: What to Know

Michelle Ma
September 5, 2024

AI Talk

The California Legislature recently passed SB 1047, which is now on Governor Newsom’s desk with a signing deadline of September 30, 2024. Introduced by state senator Scott Wiener, and amended following comments from Anthropic, SB 1047 requires AI model developers to follow certain documentation, shutdown, and auditing practices, and aims to hold these companies liable for certain extreme, theoretical risks. In today’s post, I discuss the purpose of SB 1047, its key requirements and dates, and some thoughts on what might happen next.

Objective of SB 1047

SB 1047 is meant to hold AI infrastructure companies, such as those developing foundation models, liable via civil action if their models are used to cause loss of life, or to cause more than $500M in damages through a cyberattack. Developers are also subject to training, documentation, and audit requirements, which I describe below.

Key Requirements for Developers: 

  • Prior to training a covered model, developers must:
    • have the ability to promptly enact a full shutdown of the model
    • implement a safety and security protocol
    • retain an unredacted copy of the safety and security protocol for as long as the covered model is available commercially or to the public, plus another 5 years
    • include records and dates of any updates and revisions
    • grant the Attorney General access to these documents when requested
  • Jan. 1, 2026 and onwards: Developers must annually retain a third-party auditor to perform an independent audit of compliance with the bill’s requirements and produce an audit report. The developer must retain an unredacted copy of the report for as long as the covered model is available commercially or to the public, plus another 5 years, and must grant the Attorney General access to it when requested.
  • Both the safety and security protocol and audit report are exempt from disclosure under the California Public Records Act.
  • Developers may not use a covered model or covered model derivative for any purpose other than training, reasonable evaluation, or compliance with state or federal law, and may not make the covered model available for commercial or public use, if there is an unreasonable risk that it will cause or enable a “critical harm”.
  • Developers must report to the state AG each AI safety incident affecting the covered model. 
  • The AG may bring a civil action against AI developers for compliance violations that cause death or bodily harm to another person, harm to property, or other threats to public safety.
  • The bill creates the Board of Frontier Models within the Government Operations Agency (GOA) and requires the GOA to issue regulations starting on Jan. 1, 2027.
  • The bill establishes a consortium called CalCompute within the GOA, which will develop a framework for the creation of a public cloud computing cluster, to advance the development and deployment of AI that is safe, ethical, equitable, and sustainable. 

Key Definitions: 

  • “Covered model,” from January 1, 2027 onwards, includes:
    • An AI model that cost more than $100M to train
    • An AI model created by fine-tuning a covered model at a cost exceeding $10M, where the fine-tuning compute exceeds a threshold set by the Government Operations Agency
  • “Critical harm” includes any of the below: 
    • creation or use of a weapon (chemical, biological, radiological, or nuclear) that results in mass casualties
    • cyberattacks on critical infrastructure causing mass casualties or at least $500M in damages
    • mass casualties or at least $500M in damages caused by an AI model acting with limited human oversight, intervention, or supervision, where that conduct results in death, great bodily injury, property damage, or property loss
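
To make the thresholds concrete, here’s a minimal sketch in Python of how a developer might self-check whether a training run falls under the “covered model” definition. The names (TrainingRun, is_covered_model) are my own, not from the bill. The dollar figures come from the bill; the compute figures are the bill’s initial thresholds as I read them, and the GOA can adjust them from 2027 onward, so treat all of this as an illustration rather than legal guidance.

```python
from dataclasses import dataclass

# Illustrative thresholds only. Dollar amounts are from the bill's text;
# compute amounts are the bill's initial values as I understand them and
# are adjustable by the Government Operations Agency from 2027 onward.
BASE_COST_USD = 100_000_000        # > $100M training cost
BASE_COMPUTE_OPS = 1e26            # > 10^26 operations (assumed initial value)
FINE_TUNE_COST_USD = 10_000_000    # > $10M fine-tuning cost
FINE_TUNE_COMPUTE_OPS = 3e25       # >= 3x10^25 operations (assumed initial value)

@dataclass
class TrainingRun:
    cost_usd: float       # total training (or fine-tuning) cost
    compute_ops: float    # total integer/floating-point operations used
    is_fine_tune: bool    # derivative of an existing covered model?

def is_covered_model(run: TrainingRun) -> bool:
    """Rough self-check: would this run produce a 'covered model'?"""
    if run.is_fine_tune:
        return (run.cost_usd > FINE_TUNE_COST_USD
                and run.compute_ops >= FINE_TUNE_COMPUTE_OPS)
    return (run.cost_usd > BASE_COST_USD
            and run.compute_ops > BASE_COMPUTE_OPS)

# Example: a $150M base-model run using 2x10^26 operations would be covered.
print(is_covered_model(TrainingRun(150e6, 2e26, is_fine_tune=False)))  # True
```

Note that both conditions must hold: a cheap run with enormous compute, or an expensive run with modest compute, would not qualify under this reading.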

What’s Next? 

Governor Newsom has until September 30, 2024 to sign the bill into law or veto it. A veto would kick the can down the road on state-level regulation and signal that Congress should handle this at the federal level. Federal regulations are notoriously slow to roll out and, when it comes to tech, generally not as strict as California law. With a veto, the foundation model companies get a (desired) reprieve and another chance to weigh in with Congress when federal regulations are eventually proposed.

However, if the bill is signed into law, the first requirements take effect on January 1, 2025, by which point AI companies must have drafted their safety and security protocols. Starting from that date, the state AG can also seek injunctive relief, obtaining court orders that require AI companies to stop training or operating models found to be dangerous.