CA SB 1047: Newsom’s Veto and Thoughts

Michelle Ma
November 6, 2024

AI Talk

In a prior post, I discussed the objective of the bill and its key requirements for foundational model developers. SB 1047 was vetoed by Governor Newsom in September, and in this post, I discuss his reasoning behind the veto and my thoughts looking forward.

SB 1047’s Objective

SB 1047 was meant to require documentation, security procedures, and shutdown measures for large foundational models in the face of certain “critical harms”, which include the creation of chemical, biological, radiological, or nuclear weapons and incidents causing mass casualties, serious bodily injury, or large-scale property damage. The aim was to balance the need for innovation against the risk of catastrophic harm arising from AI development and use.

Newsom’s Reasoning 

The main reasoning behind the veto was a poor fit between the objective of the bill and its actual requirements. The bill regulates only “covered models”: models that exceed high computational and training-cost thresholds. Newsom argued that the bill doesn’t take into account “whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data”, which he considers a better basis for regulation than computational and training-cost thresholds. His reasoning is that models falling below the bill’s thresholds could pose an equivalent or greater risk to the public if they are deployed in high-risk environments, involved in critical decision-making, or using sensitive data (or all of the above).

Looking Forward: What’s Next

Newsom encouraged further legislation, calling for California to lead the way on AI regulation and pointing to an evidence-based approach that better addresses the harms the bill aimed to prevent. It’ll be interesting to see what alternatives SB 1047’s proponents draft next. If Newsom’s conclusion is any indicator, we may see a bill that addresses how foundational models are deployed, regardless of the computational power and training cost required to build them. This could mean that companies building applications on top of these foundational models would be subject to regulation, rather than the developers of the foundational models themselves. Until the state passes legislation targeting companies building frontier models, those companies will remain mostly self-regulated.