
Voice Cloning and the Right of Publicity in the Age of Generative AI
By Kai Jüdemann

Introduction
In the age of digital innovation, the very essence of identity has become a commercial commodity. Voices, images, names, and likenesses—once solely expressions of personal individuality—are now assets with tangible economic value. This phenomenon has become especially pronounced with the advent of voice cloning and generative artificial intelligence (GAI), technologies capable of replicating a person’s voice and likeness with uncanny accuracy.

As these technologies become increasingly accessible, the right of publicity has emerged as a critical legal doctrine in the protection of personal identity. This article explores the evolving legal landscape of voice cloning, the commercialization of human personas, and how U.S. law—particularly the right of publicity—applies to this growing concern.

The Right of Publicity: A Legal Overview
The right of publicity (ROP)—also known as personality rights—is the right of an individual to control the commercial use of identifiable aspects of their persona. This includes one’s name, voice, image, likeness, signature, and other uniquely attributable traits. Unlike copyright or trademark, which protect creative works and brand identifiers respectively, the right of publicity protects a person’s identity as a commercial asset.

In the U.S., there is no federal statute governing the right of publicity. Instead, it is governed at the state level, with some jurisdictions recognizing the right statutorily, others under common law, and some not at all. States like California, Indiana, and Tennessee provide robust protections, including posthumous rights that allow heirs to control and profit from a deceased individual’s persona.

The primary function of the right of publicity is to prevent unauthorized commercial use of one’s identity that may suggest false endorsements, damage reputations, or exploit someone’s fame for private gain.

The Rise of Generative AI and Voice Cloning
Generative AI represents a class of technologies designed to produce original content—including text, images, and audio—based on vast datasets. Voice cloning, a subset of GAI, uses machine learning models to replicate the speech patterns, tone, inflection, and unique sound of a person’s voice, often from minimal audio input.

This has led to a wave of AI-generated content featuring hyper-realistic recreations of public figures, both living and dead. On platforms like YouTube and TikTok, cloned voices of celebrities narrate fictional events, sing songs they never recorded, or endorse products they never approved. While some instances are clearly parody or fan-generated tribute, others blur the line between homage and misappropriation.

Legal Precedents on Persona Misappropriation
Despite the lack of case law directly addressing GAI, several landmark decisions provide a framework for assessing voice cloning and AI-generated likenesses:

Zacchini v. Scripps-Howard Broadcasting Co. (1977): The U.S. Supreme Court held that broadcasting a performer’s entire act without permission violated his right of publicity. The Court emphasized the economic interest performers have in controlling how their identity is used.
White v. Samsung Electronics America, Inc. (1992): Vanna White successfully sued Samsung over an advertisement featuring a robot dressed and posed to resemble her. The Ninth Circuit held that the advertisement evoked her identity, thus violating her right of publicity—even though her actual image or name wasn’t used.
Midler v. Ford Motor Co. (1988): Bette Midler won a suit after Ford used a voice impersonator to imitate her singing in a commercial. The court ruled that using a sound-alike to evoke a celebrity’s identity can violate the right of publicity.
These rulings collectively suggest that unauthorized use of a person’s identity—even via approximation or impersonation—can constitute a violation, a critical insight in assessing generative AI’s impact.

Generative AI: Opportunity and Risk
Opportunities:
Licensing and Monetization: Artists, actors, and influencers can license aspects of their persona—including their voice—for commercial use, expanding their revenue streams without active participation.
Legacy Preservation: Posthumous recreations using AI (e.g., films featuring deceased actors) can maintain a legacy and provide controlled access to an artist’s likeness.
Risks:
Misappropriation: AI makes it cheap and easy for anyone to replicate a public figure’s voice, potentially leading to false endorsements, reputational harm, or economic losses.
Consumer Confusion: Highly realistic AI recreations can mislead audiences into believing content is authentic, undermining trust in public figures and brands.
Loss of Control: A sufficiently well-trained AI can substitute for the real artist, diminishing their market value or diluting their brand.

Enforcement and Future Challenges
The accessibility of GAI makes misuse and misappropriation far more likely. As these technologies proliferate, enforcement of the right of publicity will become more central to the protection of identity. Many disputes, however, may never reach the courts; the threat of litigation and contractual licensing structures will often be the primary means of regulating use.

That said, courts are unlikely to excuse unauthorized use merely because AI was the tool of replication. If the output evokes a person’s identity for commercial gain, it is actionable—regardless of the mechanism used.

Conclusion: The Law Must Evolve
Voice cloning and generative AI underscore the importance of protecting human identity in the digital realm. The right of publicity remains one of the few legal doctrines tailored for this challenge, offering both artists and everyday individuals a mechanism to guard their personas against misuse.

As AI evolves, so too must the legal tools and public understanding surrounding it. Transparency, informed consent, and equitable licensing will be essential, not only to safeguard against exploitation but also to embrace the creative and economic possibilities that lie ahead.

Key Takeaways
Voice cloning via generative AI can violate the right of publicity when used without consent.
U.S. law protects a person’s identity as commercial property, particularly in jurisdictions like California.
Existing case law on sound-alikes and look-alikes provides strong precedent for protecting against AI misuse.
The right of publicity will become increasingly relevant as AI systems gain power and public reach.

Kai Jüdemann

Specialist Attorney for Copyright and Media Law (Fachanwalt für Urheber- und Medienrecht)