AI Talk
The California State Senate recently passed SB 1047, which is now on Governor Newsom’s desk with a signing deadline of September 30, 2024. Introduced by state senator Scott Wiener, and amended following comments from Anthropic, SB 1047 requires AI model developers to follow certain documentation, shutdown, and auditing practices, and aims to hold these companies liable for certain extreme, largely theoretical harms. In today’s post, I discuss the purpose of SB 1047, its key requirements and dates, and some thoughts on what might happen next.
SB 1047 is meant to hold AI infrastructure companies, such as those developing foundational models, liable via civil action if their models are used to cause loss of life or to inflict more than $500M in damages through a cyberattack. Developers are also subject to training, documentation, and audit requirements, which I’ll describe below.
Governor Newsom has until September 30, 2024 to sign the bill into law or veto it. A veto would kick the can down the road for state-level regulation and signal that Congress should handle this at the federal level. Federal regulations are notoriously slow to roll out and, when it comes to tech, generally not as strict as California law. With a veto, the foundational model companies get a (desired) reprieve and another chance to make their case to Congress when federal regulations do get proposed.
If the bill is signed into law, however, the first provision takes effect on January 1, 2025, by which date AI companies must have a safety report drafted. Starting from that date, the state Attorney General can also seek injunctive relief, asking a court to order AI companies to stop training or operating models that the court finds dangerous.