Scarlett Johansson, OpenAI, and the Sky Voice Controversy: Rights, Deepfakes, and the Push for Clearer AI Standards


Actress Scarlett Johansson has compelled OpenAI to remove Sky, a ChatGPT assistant voice that closely echoed her own, according to reports from TMZ. The episode highlights a growing clash over synthetic voices in AI chatbots.

Johansson noted that Sky, the voice introduced by OpenAI, evoked her performance in the film Her, in which she voices Samantha, an intelligent assistant who is heard but never seen on screen, opposite Joaquin Phoenix. That connection raised concerns about likeness and rights in automated media, especially as AI voices become more prevalent in consumer tools.

According to the reports, OpenAI chief Sam Altman approached Johansson in September 2023 about providing an official voice for the new ChatGPT model. She declined, preferring not to take part in the portrayal. When the Sky voice appeared in spring 2024 without her approval, she retained legal counsel to address the issue. Altman publicly referenced the situation on social platforms, and the exchange fed a broader discussion about consent and ownership of a performer's voice. After her lawyers intervened, OpenAI removed the Sky voice from ChatGPT's offerings.

Johansson underscored the broader stakes, saying these questions demand clear and transparent rules at a moment when public concern about deepfakes and image rights is intensifying. She advocated robust legislation to defend individuals' rights and control over their own voice and likeness, a sentiment widely shared in the United States and Canada as AI-powered media becomes part of daily life.

Altman countered that Sky was never intended to mimic Johansson and that the team had been seeking a voice that fit the character while respecting performers. Out of regard for Johansson, he said, OpenAI paused further use of the Sky voice and apologized publicly. The episode has become a talking point in ongoing debates about consent in voice modeling and how to regulate synthetic media across platforms.

Other celebrity cases have surfaced in related digital ethics discussions, in which synthetic media raised concerns about exploitative or non-consensual use of a public figure's voice or image. These incidents have spurred calls for clearer guidelines and stronger protections for personal identity in the AI era.

Analysts note that the public conversation around deepfakes, voice replication, and AI impersonation is not limited to entertainment. Businesses and creatives in North America increasingly seek transparent standards for AI tools, ensuring that voice likenesses are licensed or properly cleared before deployment. The overarching goal is to balance innovation with fundamental rights, allowing new technologies to augment media without eroding trust or personal autonomy.

There is a growing awareness that voice likeness can be as impactful as visual imagery in shaping perception. As AI models evolve, the industry is likely to implement clearer consent mechanisms, licensing practices, and user controls that empower individuals to protect how their voices are used across products and services.
