The No Fakes Act: Protecting Voices and Likenesses in the Digital Age

Deepfake technology is the darling of entertainment, sports, education, science, and a host of other endeavors. The technology can create realistic visual effects in movies and TV shows, reducing production costs and time. It can help filmmakers de-age actors or bring deceased actors back to life.

In educational settings, deepfakes can create engaging and interactive learning experiences, bringing historical figures into classrooms to deliver their own speeches, discoveries, or lectures and making history lessons more immersive. In the healthcare field, the technology can generate synthetic patient data, facilitating research without compromising patient privacy and helping train AI models to diagnose diseases and develop new treatments.

Deepfake technology can make information more accessible to those with disabilities and those who speak different languages, creating personalized sign language interpreters and instantly translating or dubbing information into multiple languages.

News organizations can use deepfake technology to create virtual news anchors who deliver news in various languages for a global audience.

This year, Randy Travis, the country music star who in 2013 suffered a stroke that left him unable to sing, rejoined the music scene with the help of AI. After another singer recorded a demo track of Travis’s new song, recordings from his successful career were used to train an AI model on his voice and vocal technique. The model then rendered the demo recording in Travis’s voice, and fans enthusiastically embraced the new song.
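
The specific tools behind the Travis project were not disclosed, but open-source voice cloning gives a rough sense of the workflow: a model conditioned on reference audio of a target voice re-renders new content in that voice. The sketch below uses the open-source Coqui TTS library purely as an analogue (it clones a speaking voice from a reference clip rather than performing singing-voice conversion); the file names are hypothetical placeholders.

```python
# Rough open-source analogue of voice cloning, using Coqui TTS (pip install TTS).
# This is NOT the tooling used for the Randy Travis recording; the file names
# below are hypothetical placeholders.
from TTS.api import TTS

# Load a public multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Re-render new content in the voice captured in the reference clip.
tts.tts_to_file(
    text="Text to be spoken in the cloned voice.",
    speaker_wav="reference_voice_clip.wav",  # hypothetical reference recording
    language="en",
    file_path="cloned_output.wav",
)
```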

But deepfake technology has become a giant thorn in the side of celebrities, athletes, public servants, and others whose unique voices and images have been brazenly appropriated for commercial, political, defamatory, and a host of other questionable purposes. Nothing is more intimate or personal than one’s appearance and voice; when those are taken, it is the ultimate violation.

Deepfakes Are Everywhere

The term “deepfake” is a combination of “deep learning” and “fake.” It is used to describe a type of synthetic media or digital creation, typically produced by the deep learning algorithms of AI. These algorithms can generate highly realistic images, videos, or audio recordings that convincingly mimic the appearances or voices of real people. When used to improve lives and advance society, deepfakes can be hugely beneficial; when used for selfish, mercenary and destructive purposes, they can be extremely damaging.
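
To appreciate how low the barrier to entry has become, consider that a few lines of code against a publicly available model can now synthesize a photorealistic image from a text prompt. The sketch below uses the open-source Hugging Face diffusers library; the checkpoint name is one public example, and the code assumes the library and a GPU are available.

```python
# Minimal sketch of text-to-image synthesis with the open-source `diffusers`
# library (pip install diffusers torch). The checkpoint is one public example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

# A one-line text prompt yields a photorealistic synthetic image.
image = pipe("a photorealistic portrait of a television news anchor").images[0]
image.save("synthetic_anchor.png")
```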

This last point is amply demonstrated by some recent stories. An AI-generated song, “Heart on My Sleeve,” mimicked the voices of recording artists Drake and The Weeknd without their consent, drawing attention from fans of both artists who believed it was real. An OpenAI chatbot allegedly mimicked the voice Scarlett Johansson used for the AI assistant in the movie “Her.” A computer-generated version of Tom Hanks appeared in an advertisement for a dental plan with which he had no connection. A high school athletic director in the Baltimore area used deepfake technology to create a recording of the school’s principal making racist and antisemitic comments, prompting calls for the principal’s dismissal before the recording was revealed to be fake.

Deepfake videos and recordings featuring politicians making statements they never actually made have spread misinformation and influenced public opinion during election cycles.

The No Fakes Act

Lawmakers are taking the matter seriously. Senate Bill 4875, the No Fakes Act of 2024 (formally known as the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024), is a bipartisan legislative proposal designed to safeguard individuals’ rights in their voices and likenesses. Introduced on July 31, 2024, by U.S. Senators Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis, the bill would protect voices and visual likenesses from being replicated without individuals’ consent, especially through the use of generative artificial intelligence (AI).

The Act would establish a federal property right for all individuals, not just celebrities, in their own voices and likenesses. It would curb unauthorized digital replicas – including those generated by AI – by prohibiting the production, hosting, or sharing of such replicas without the individual’s consent, thereby ensuring that individuals have control over their own voices and visual likenesses. To ensure that free speech is not infringed, the Act would include exclusions for works protected by the First Amendment, such as sports broadcasts, documentaries, biographical works, and content created for purposes of comment, criticism, parody, and satire.

There is strong and widespread support for the No Fakes Act. It has garnered significant backing from entertainment industry groups such as SAG-AFTRA, the Recording Industry Association of America, and the Motion Picture Association. Major players such as OpenAI, IBM, The Walt Disney Company, Warner Music Group, Universal Music Group, and Sony Music have also endorsed the Act. Rarely has such a broad coalition joined together to support this type of legislation, reflecting the importance of protecting individuals’ rights in the face of rapidly advancing AI technologies.

Legal Framework and Enforcement

The No Fakes Act seeks to establish a national standard for deepfakes by largely preempting state laws on digital replicas and creating a consistent legal framework across the country. While intellectual property rights such as copyrights and trademarks are protected by federal law, an individual’s right of publicity has typically been governed by state laws, which vary by jurisdiction. A federal standard would provide consistency and predictability for how these issues will be dealt with in the future.

Under the Act, an individual’s right to prevent the creation and distribution of unauthorized digital replicas would expire 70 years after his or her death, but provisions are included for post-mortem transfer and renewal. To address concerns from the tech sector, the Act includes a safe harbor provision for AI software developers and a notice-and-takedown mechanism for online platforms. Similar to the current online copyright regime, this system would immunize platforms from liability if they promptly removed unauthorized replicas upon receiving notice. A three-year statute of limitations would run from the date the plaintiff discovered or should have discovered the violation.
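
To make those time limits concrete, the sketch below (not legal advice) models the two deadlines described above as simple date arithmetic; the function names are my own illustration, not anything defined in the bill.

```python
# Illustrative model (not legal advice) of the Act's time limits as summarized
# above: a post-mortem right running up to 70 years after death, and a
# three-year limitations period starting at actual or constructive discovery.
from datetime import date

def add_years(d: date, years: int) -> date:
    """Shift a date by whole years, mapping Feb 29 to Feb 28 when needed."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:  # Feb 29 in a non-leap target year
        return d.replace(year=d.year + years, day=28)

def post_mortem_right_expires(date_of_death: date) -> date:
    """Outer bound of the post-mortem right as described in the article."""
    return add_years(date_of_death, 70)

def claim_is_time_barred(discovery_date: date, filing_date: date) -> bool:
    """Three-year statute of limitations from (constructive) discovery."""
    return filing_date > add_years(discovery_date, 3)

# Example: a violation discovered June 1, 2024 must be filed by June 1, 2027.
print(claim_is_time_barred(date(2024, 6, 1), date(2027, 6, 2)))  # True, too late
```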

Where Things Stand

After being introduced and read twice, the bill was referred to the Senate Committee on the Judiciary. From there it must pass through committee review, floor debate, and votes in both the Senate and the House of Representatives. If both chambers agree on a final version, the bill will be sent to the President for signature.

The Act has faced criticism from various quarters, primarily centered around concerns about its potential impact on free speech, innovation, and the legal landscape. Some critics argue that the Act could stifle free expression by imposing overly broad restrictions on the creation and distribution of digital content. They worry that its provisions could be used to suppress legitimate forms of artistic expression, as well as commentary and parody, despite the Act’s express exceptions for these categories.

There is also concern in the tech industry that the Act could hinder innovation by creating legal uncertainties and potential liabilities for developers and companies working with generative AI. These critics argue that the threat of litigation might discourage experimentation and the development of new technologies. Smaller companies and independent creators worry that they would be disproportionately affected: the costs of compliance and potential legal battles could be burdensome for those without significant resources, potentially stifling competition and innovation in the digital content space.

Additionally, some critics are wary about practical implementation of the Act, noting that determining what constitutes an unauthorized digital replica can be complex and subjective. They warn that the Act’s requirement for service providers to remove content upon notice could lead to an increase in takedown requests, potentially overwhelming platforms and leading to the removal of legitimate content.

Conclusion

As noted above, not all deepfakes are bad. The technology has tremendous potential to solve problems and improve lives. But when it is used to harm others – by misrepresenting them or depriving them of their most essential assets – it can be dangerous and destructive.

The No Fakes Act seeks to strike a balance between fostering AI’s creative potential and safeguarding personal dignity by holding individuals and companies liable for unauthorized digital replicas. The legislation reflects growing concern over the misuse of AI to create deepfakes and other unauthorized digital content, and it underscores the need to reconcile technological innovation with the protection of individual rights.

We should all hope to see this important piece of legislation become law before the end of the year.

Rob Rosenberg

Rob Rosenberg, Principal and Founder of Telluride Legal Strategies, is an independent legal consultant and expert witness. He spent 22 years at Showtime Networks in various legal and business roles, most recently as executive vice president, general counsel and assistant secretary. He now consults with companies of all sizes on legal and business strategies. Rob is a thought leader and a problem solver working at the intersection of law, media and technology. [email protected].
