AI, Copyright, and Your Voice: What the Latest Court Cases Mean for You
- Studio
- Jul 4
- 3 min read

In the world of voiceover, political campaigns, and brand messaging, AI is moving fast. A few recent court rulings made headlines that sounded like big wins for tech companies. But if you take a closer look, the real story is a lot more encouraging for voice actors and creators. It is also a wake-up call for brands using AI.
Let’s break it down.
Behind the Headlines: The Real Takeaway for AI and Voice
Two major lawsuits in California (Bartz v. Anthropic and Kadrey v. Meta) got attention for being “wins” for tech companies. But the judges didn’t actually say AI developers were in the clear. In fact, one of the judges, Vince Chhabria, said it plainly: this isn’t a green light for AI. The cases just didn’t argue the right points.
He wrote: “This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful. It only stands for the proposition that these plaintiffs made the wrong arguments…”
Translation? The tech companies got off on a technicality, not because what they’re doing is okay, but because the arguments against them weren’t presented well.
This Legal Standard Could Change How AI Uses Voices
The courts made one thing crystal clear: if an AI-generated product competes with the original creator’s work, it likely doesn’t qualify as “fair use.”
This is a big deal.
If an AI can create a voice that sounds like yours and that voice is used in ads, audiobooks, or video games, you’re being replaced. That’s not fair use. That’s substitution. And according to the court, that means companies should be paying for licenses and talent consent.
Voice Actors: this supports what many of you already know in your gut: your voice has value, and if someone wants to use it, they need to pay for it and get your permission. This legal framework validates the need for consent and compensation, and it’s a standard that should be in every contract you sign.
Clients and Political Consultants: this is your heads-up. Working with unlicensed or questionable AI tools could open you up to legal trouble. Ethical sourcing isn’t just the right thing to do. It also protects your campaign or brand from being caught in the wrong kind of headlines.
The Court’s Take: Pay Creators, Keep Building
The judge also rejected the idea that paying creators would somehow slow innovation. He called that “nonsense,” saying that outside of fair use, rightsholders need to be paid for licenses. This isn’t a minor point; it’s the future of the industry. The court has signaled that the path forward is through licensing and partnership, not appropriation.
His point was simple. AI companies can keep developing tools, but they need to do it in a way that involves paying for licenses, respecting rights, and working with creators, not around them. That’s exactly how we do things at Lotas.
At Lotas Voice Forward, our stance has always been clear:
- Ethical Sourcing is Non-Negotiable: We only partner with and recommend AI voice technology that is built on a foundation of explicit consent and fair compensation for voice talent.
- Contracts Must Evolve: Standard contracts are no longer sufficient. Agreements must specifically address AI training, synthetic replicas, and the “market substitution” risk to protect all parties.
- Human Creativity is the Asset: AI is a powerful tool, but its value in voice work comes from the talent and craft of human performers. Our mission is to ensure technology enhances this partnership, rather than exploits it.
We’re here to help both sides navigate this shift and make sure we are building something better, together.
The legal landscape is rapidly solidifying. The message from the courts is that those who cooperate and obtain licenses will thrive, while those who continue to operate without permission do so at their peril. If you’re a voice actor, you deserve protection. If you’re a client using synthetic voice, you deserve tools that won’t leave you legally exposed.