Actors like Bryan Cranston who challenge unauthorized AI use of their likenesses are forcing companies like OpenAI to create stricter rules. These high-profile cases are establishing the foundational framework that will ultimately define and protect the digital rights of all individuals, not just celebrities.

Related Insights

Regardless of an AI's capabilities, the human in the loop is always the final owner of the output. Your responsible AI principles must clearly state that using AI does not remove human agency or accountability for the work's accuracy and quality. This is critical for mitigating legal and reputational risks.

The Writers Guild of America strike offers a sophisticated model for labor unions navigating AI. Instead of an outright ban, the guild negotiated a dual approach: winning protections against AI-driven displacement while also securing guarantees that members can use AI as an assistive tool for their own benefit.

The controversy over AI-generated content extends far beyond intellectual property. The emotional distress caused to families, as articulated by Zelda Williams regarding deepfakes of her late father, highlights a profound and often overlooked human cost of puppeteering the likenesses of deceased individuals.

As AI personalization grows, user consent will evolve beyond cookies. A key future control will be the "do not train" option, letting users opt out of their data being used to train AI models, presenting a new technical and ethical challenge for brands.
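A minimal sketch of what honoring such a preference might look like in a data pipeline, assuming a hypothetical per-user consent flag (the `UserRecord` type, field names, and filtering helper below are illustrative assumptions, not any vendor's actual API):

```python
from dataclasses import dataclass

@dataclass
class UserRecord:
    """Hypothetical user data item carrying a training-consent flag."""
    user_id: str
    content: str
    do_not_train: bool = False  # the assumed "do not train" opt-out signal

def build_training_pool(records: list[UserRecord]) -> list[UserRecord]:
    """Exclude any record whose owner has opted out of model training."""
    return [r for r in records if not r.do_not_train]

records = [
    UserRecord("u1", "public forum post"),
    UserRecord("u2", "private journal entry", do_not_train=True),
]
pool = build_training_pool(records)
print([r.user_id for r in pool])  # only consenting users remain
```

The technical challenge the paragraph alludes to is that this check must run everywhere data enters a training corpus, and opt-outs must also propagate to copies and derived datasets, which is far harder than a single filter.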

In its largest user study, OpenAI's research team frames AI not just as a product but as a fundamental utility, stating its belief that "access to AI should be treated as a basic right." This perspective signals a long-term ambition for AI to become as integral to society as electricity or internet access.

When an AI tool generates copyrighted material, don't assume the technology provider bears sole legal responsibility. The user who prompted the creation is also exposed to liability. As legal precedent lags, users must rely on their own ethical principles to avoid infringement.

OpenAI is relaxing ChatGPT's restrictions, allowing verified adults to access mature content and customize its personality. This marks a significant policy shift from broad safety guardrails to user choice, acknowledging that adults want more freedom in how they interact with AI, even for sensitive topics like erotica.

The concept of data colonialism—extracting value from a population's data—is no longer limited to the Global South. It now applies to creative professionals in Western countries whose writing, music, and art are scraped without consent to build generative AI systems, concentrating wealth and power in the hands of a few tech firms.

Amazon is suing Perplexity because its AI agent can autonomously log into user accounts and make purchases. This isn't just a legal spat over terms of service; it's the first major corporate conflict over AI agent-driven commerce, foreshadowing a future where brands must contend with non-human customers.

After users created disrespectful depictions of MLK Jr., OpenAI now allows estates to request restrictions on likenesses in Sora. This "opt-out" policy is a reactive, unscalable game of "whack-a-mole." It creates a subjective and unmanageable system for its trust and safety teams, who will be flooded with requests.