Scarlett Johansson Takes Legal Action Against OpenAI Over ChatGPT Voice
Actor Scarlett Johansson has expressed shock and anger over a synthetic voice in OpenAI's ChatGPT that she says closely resembles her own. In a statement to CNN on Monday, Johansson criticized OpenAI CEO Sam Altman over the similarity and said she had taken legal steps to address the issue.
The controversy centers on a ChatGPT update that included a voice assistant named Sky, which OpenAI has since paused. Critics compared Sky's voice to Johansson's portrayal of an AI assistant in the film "Her," and the voice was also mocked for its flirtatious tone, fueling significant backlash.
“We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” OpenAI stated on X. “We are working to pause the use of Sky while we address them.”
Johansson revealed that Altman had approached her last September to voice the ChatGPT 4.0 system, an offer she declined for personal reasons. Altman reached out again later, but the system was released before they could connect. Johansson has since hired legal counsel, and OpenAI agreed to remove the Sky voice after receiving letters from her attorneys.
“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity,” Johansson wrote. “I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”
OpenAI maintains that the Sky voice was not derived from Johansson's and was provided by a different actress. Altman reiterated this in a statement, expressing regret over the confusion and confirming the pause of Sky's voice in the company's products.
The introduction of Sky also raised concerns about potential biases embedded in technology designed by companies largely led by White men, and the incident has sparked broader discussion about the ethical implications of AI.
The controversy comes amid internal challenges at OpenAI. Jan Leike, a former team leader focused on AI safety, left the company while criticizing its safety culture and practices. Altman responded by acknowledging the need to improve the company's safety measures.
OpenAI President Greg Brockman emphasized the company’s commitment to AI safety, stating their efforts in raising awareness about AI risks and preparing for a future with advanced AI systems.
As Johansson pursues her legal options, the incident underscores the complexities and ethical considerations surrounding AI development and the protection of individual identities.