George Clooney, Tom Hanks, and Meryl Streep Back Human Consent Standard for AI Licensing

Hollywood has spent the last year watching AI systems learn to imitate voices, styles, and even recognizable character “vibes” at a speed that outpaced most rights conversations. Now, a new effort is trying to catch up—by turning consent into something machines can understand.

A coalition of Hollywood actors and producers is backing a new licensing approach called the Human Consent Standard, designed to clarify when AI systems need permission to use a person’s likeness, creative work, characters, and designs—and, in many cases, when that permission should come with payment. The goal isn’t simply to add another legal document to the pile. It’s to create a practical, machine-readable layer that tells AI developers and deployers what they’re allowed to do before they train, generate, or distribute content that resembles a real-world creator.

At first glance, this may sound like yet another standards initiative. But the Human Consent Standard is notable because it tries to solve a specific problem that has become increasingly visible: even when creators want to license their work—or restrict it—AI systems often lack a reliable way to interpret those preferences. The result is a messy gap between what creators believe is happening and what AI companies can prove is happening. This standard aims to close that gap by making consent explicit, structured, and portable across platforms.

The Human Consent Standard builds on an earlier framework known as the Really Simple Licensing (RSL) Standard, launched last year. RSL was created so websites could signal how AI systems might use their work. In other words, it was an attempt to move from vague “terms of service” language toward something more directly interpretable by automated systems. The Human Consent Standard extends that concept into the world of individual creators and rights holders—where the stakes are often personal, reputational, and financially immediate.

What makes the Human Consent Standard different is the emphasis on choice. Instead of treating consent as a binary switch—allowed or not allowed—it supports multiple modes of permission. A creator can grant full permission for AI systems to use their content. They can also allow access under certain requirements, which could include conditions about how the content is used, how it’s attributed, or how it’s compensated. And crucially, they can restrict access entirely.

That flexibility matters because creators don’t all want the same thing. Some may be comfortable with broad licensing if it’s transparent and compensated. Others may want to permit limited uses while blocking training or commercial generation. Still others may want to prevent their likeness or character designs from being used in any AI context at all. A standard that only supports “yes” or “no” would force creators into oversimplified positions. By contrast, the Human Consent Standard is built around the idea that consent can be nuanced—and that nuance should be readable by machines.
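The three modes described above can be imagined as a small machine-readable record. The sketch below is purely illustrative: the class names, field names, and mode labels are assumptions, not the Human Consent Standard's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class ConsentMode(Enum):
    """The three permission modes described above (labels are illustrative)."""
    ALLOW = "allow"              # full permission for AI use
    ALLOW_WITH_TERMS = "terms"   # use permitted only under stated conditions
    DENY = "deny"                # no AI use of the content or likeness

@dataclass
class ConsentRecord:
    """A hypothetical consent record that could travel with content or identity."""
    rights_holder: str
    mode: ConsentMode
    # Conditions might cover usage context, attribution, or compensation
    conditions: dict = field(default_factory=dict)

# Example: a creator permits use only with attribution and per-use payment
record = ConsentRecord(
    rights_holder="example-performer",
    mode=ConsentMode.ALLOW_WITH_TERMS,
    conditions={"attribution": True, "compensation": "per-use license"},
)
print(record.mode.value)
```

A structured record like this, rather than free-text terms of service, is what would let an automated pipeline act on a creator's preferences without human interpretation.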

In practice, the standard is meant to function like a consent signal that travels with the relevant content or identity. If an AI system encounters a creator's work, or likeness metadata, carrying the creator's chosen terms, the system can adjust its behavior accordingly. That could mean refusing to use the content, limiting usage to permitted contexts, or routing the request through a licensing workflow that aligns with the creator's preferences.
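The three behaviors just described (refuse, limit, route to licensing) amount to a dispatch on the consent signal. Here is a minimal sketch of that decision logic; the field names and mode strings are hypothetical, not taken from the standard itself.

```python
def decide_action(consent: dict) -> str:
    """Map a machine-readable consent signal to a pipeline action.

    Field names ("mode") and values are illustrative assumptions;
    the actual Human Consent Standard schema may differ.
    """
    mode = consent.get("mode", "deny")  # conservative default: no signal means no use
    if mode == "allow":
        return "proceed"                # unrestricted use permitted
    if mode == "terms":
        return "route-to-licensing"     # verify conditions or negotiate first
    return "refuse"                     # "deny" and any unknown mode block use

print(decide_action({"mode": "terms"}))
print(decide_action({}))
```

Note the defensive default: treating a missing or unrecognized signal as a denial mirrors the consent-first posture the standard is arguing for, rather than the "publicly accessible means fair game" assumption discussed later in the piece.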

This is where the Hollywood backing becomes more than symbolic. Actors and producers have long argued that AI isn’t just “inspired by” culture—it can reproduce it in ways that affect careers, bargaining power, and public perception. When AI systems generate images that resemble a performer, or write dialogue that echoes a character’s signature style, the question isn’t only whether copyright law applies. It’s also whether consent was obtained in a way that respects the creator’s agency.

The Human Consent Standard is essentially an attempt to operationalize that agency. It gives creators a mechanism to set terms for the use of their work or likeness, including characters and designs. That’s important because the AI debate has often focused on training data and copyrighted text, while the lived experience of performers and creators includes a broader set of rights and interests: the right to control one’s image, the right to manage how characters are represented, and the right to decide whether AI-generated outputs should compete with licensed productions.

Another reason this standard is getting attention is that it reflects a shift in how the industry is thinking about compliance. For years, AI licensing discussions have tended to be reactive: a company trains first, then faces lawsuits or takedown demands, then scrambles to negotiate. Standards like RSL and now Human Consent are trying to make compliance proactive. Instead of waiting for enforcement after the fact, the system is supposed to know what it’s allowed to do before it does it.

Of course, standards alone don’t guarantee enforcement. A machine-readable consent signal is only useful if AI systems actually read it, respect it, and treat it as binding in their workflows. That’s why the involvement of major creators and producers matters: it signals that the standard is intended to be adopted not just by technologists, but by the people who can influence licensing norms across the entertainment ecosystem.

There’s also a strategic element here. Hollywood has learned—sometimes painfully—that rights disputes are rarely won purely through moral arguments. They’re won through documentation, clarity, and repeatable processes. A standardized consent framework creates a common language that can be referenced in contracts, policies, and audits. Even if not every AI developer adopts it immediately, the existence of a widely recognized standard can shape expectations and reduce ambiguity.

The Human Consent Standard’s relationship to RSL is also worth unpacking. RSL was designed for websites to signal how AI systems may use their content. That’s a different context than individual likeness and character rights, but the underlying philosophy is similar: make permissions legible to automated systems. By building on RSL, the Human Consent Standard is positioning itself as part of a broader ecosystem rather than a standalone initiative. That matters because AI systems don’t operate in a vacuum. They pull from many sources—websites, databases, licensed catalogs, and user-generated content. A consistent approach to signaling consent across these sources could make it easier for AI developers to implement one set of rules rather than dozens of incompatible ones.

Still, the entertainment industry is not monolithic, and neither are the technical realities. AI systems vary widely in how they ingest data, how they store representations, and how they generate outputs. Some systems train on large datasets; others rely on retrieval or fine-tuning; others generate from prompts without direct access to a creator’s original assets. The Human Consent Standard is designed to address these differences by focusing on consent signals tied to the relevant content or identity. But adoption will likely be uneven at first, and the standard’s effectiveness will depend on how quickly it becomes integrated into real licensing pipelines.

One unique angle in this story is how it reframes the conversation from “AI art” to “AI licensing infrastructure.” For years, public debates have treated AI licensing as a cultural fight—whether AI should be allowed to learn from human creativity, whether artists are being exploited, and whether generated outputs are “real.” The Human Consent Standard shifts the emphasis toward infrastructure: how permissions are expressed, how they’re interpreted, and how they’re enforced. That’s less dramatic than a viral argument, but it’s arguably more consequential. Infrastructure determines what happens at scale.

If the standard succeeds, it could change the default assumptions that currently govern AI development. Today, many systems operate under the assumption that if content is publicly accessible, it’s fair game unless a rights holder objects. That assumption is increasingly contested. The Human Consent Standard suggests a different default: if a creator has expressed consent terms in a machine-readable way, AI systems should treat those terms as the starting point for decision-making.

That shift could also influence negotiations. When consent terms are standardized, licensing discussions can become more granular and less ad hoc. Instead of negotiating from scratch each time an AI company wants to use a performer’s likeness or a character’s design, the parties can reference a shared framework. That could reduce friction and speed up deals—especially for creators who want to license selectively rather than broadly.

There’s also a potential downstream effect on user-facing products. If AI tools begin to incorporate consent signals into their generation logic, users may see fewer “surprise” outputs that resemble real performers without permission. That could improve trust and reduce reputational risk for AI platforms. It could also create a clearer market for licensed AI content, where creators can offer terms that are easy for developers to understand and easy for consumers to recognize.

But the standard’s success will depend on more than goodwill. It will require technical integration, governance, and a commitment to treat consent signals as meaningful. Standards initiatives often struggle at the “last mile”—the moment when a developer has to decide whether to implement the feature, how to map it to internal policies, and how to handle edge cases. For example, what happens when a creator’s consent terms conflict with another source’s metadata? What happens when a model has already been trained on data that predates the standard? What happens when a system generates an output that resembles a character but doesn’t directly copy a specific asset?

These questions aren’t reasons to dismiss the standard; they’re reminders that consent frameworks must be paired with operational policies. The Human Consent Standard is a step toward that pairing, but it will still require careful implementation and ongoing refinement.

Even so, the direction is clear: the entertainment industry is moving toward a world where consent is not only a legal concept but also a technical one. The Human Consent Standard represents an attempt to make that future real by giving creators a structured way to express permissions and giving AI systems a structured way to interpret them.

There’s a broader cultural implication too. AI has made it easier to blur boundaries between inspiration and imitation. When a system can generate a face that looks like a real actor, or a voice that sounds like a real performer, the line between “creative transformation” and “unauthorized replication” becomes harder to police after the fact. A consent standard offers a preventative approach: it tries to ensure that the system knows what it’s allowed to do before it produces anything.

That preventative approach is especially relevant for likeness and character rights, which are