AI Talk
Another state has enacted laws regulating the use of AI, this time governing the creation of voice deepfakes of individuals without their permission. Today, I’ll discuss Tennessee’s AI legislation that just went into effect.
Aptly named the ELVIS Act (Ensuring Likeness, Voice and Image Security Act), Tennessee’s latest AI legislation aims to protect individuals from the use of their persona in connection with deepfakes, targeting both creators of deepfakes without authorization and providers of tools to create them.
The Act was signed into law on March 21, 2024, and took effect on July 1, 2024.
Tennessee’s original right of publicity law, the Personal Rights Protection Act of 1984, protected only a person’s name, image and likeness. The ELVIS Act expands that protection to cover an individual’s voice, which includes the person’s actual voice, simulations of it, and human imitations of an artist. The Act also prohibits any unauthorized “commercial use” of a person’s personal rights, broadening the original statute’s scope, which covered only use in advertising.
This means that a person or company that creates a deepfake of a person’s voice, photo or video without authorization and then publishes it online or otherwise makes it available to the public can be sued and face a court-ordered injunction. An injunction would likely require, for example, that the deepfake be taken down. This could also reach social media companies or streaming platforms, as they would be distributing the content.
The Act also reaches tool providers: any person or company that creates a tool whose “primary purpose or function” is to produce unauthorized deepfakes can likewise be sued and enjoined. AI isn’t mentioned explicitly, but the message is clear that it’s the target.
What isn’t clear is if this encompasses sites that aggregate and make tools from third parties available, or any sites or platforms that provide access to these AI tools. Additionally, many tools arguably don’t have as their “primary purpose or function” to create deepfakes without authorization. Do these companies have to require authorization prior to allowing usage? How do you determine whether a tool's primary purpose or function is to generate deepfakes without authorization?
The Act also carves out an additional fair use exemption: using an individual’s voice in connection with news, public affairs, or sports broadcasts is not a violation of the Act.
Notably, Wisconsin enacted a law earlier this year requiring political communications to disclose the use of any content created by generative AI, and bills in other states are in the works. This is likely the start of a trend of states expanding publicity rights to protect against deepfakes and the harm they can cause. It’ll be interesting to see which states follow suit next, as deepfakes of public figures, celebrities, and others have proliferated and no federal law yet addresses these issues.
What will be more interesting is seeing how the courts reconcile federal copyright law with these state right of publicity laws. Federal copyright law expressly preempts state law, meaning it takes precedence whenever a state law attempts to create rights equivalent to the exclusive rights under copyright law. And of course, First Amendment free speech challenges to these laws will surely arise.
Companies that distribute or make technology primarily used to create deepfakes of individuals should work with product counsel to determine whether proper authorization was obtained before allowing content creation, and should continue to monitor developments in this area, particularly as other states are likely to pass their own laws protecting publicity rights.