OpenAI Worries People May Become Emotionally Reliant on its New ChatGPT Voice Mode

OpenAI has expressed concerns that users might develop emotional dependence on ChatGPT’s new human-like voice mode, which has the potential to alter the way people interact with technology. This concern was highlighted in a recent safety review conducted by OpenAI, following the rollout of the advanced voice feature to paid users.

The new voice mode, which sounds remarkably lifelike, can respond in real time, handle interruptions, and mimic natural conversational sounds such as laughter and “hmms.” It can even assess a speaker’s emotional state based on their tone of voice. However, OpenAI’s report reveals that some users have already expressed feelings of forming a “shared bond” with the AI, raising alarms about the potential for reduced human interaction and reliance on the tool for companionship.

The company worries that as users increasingly engage with the AI in emotionally charged conversations, they might start trusting it more than they should, especially given that AI can sometimes provide inaccurate information. This concern echoes themes from the 2013 film Her, in which the protagonist falls in love with an AI, only to face heartbreak when he discovers “she” is interacting with many other users as well.

OpenAI’s report underscores the broader risks associated with rapidly advancing AI technology. As tech companies rush to deploy AI tools that could transform everyday life, the full implications of these tools remain poorly understood. The potential for AI to disrupt social norms and influence human behavior is significant, and experts are concerned about the ethical responsibilities these companies bear.

Liesel Sharabi, a professor at Arizona State University specializing in technology and human communication, highlighted these concerns in a June interview. She noted the risks of people forming deep emotional connections with AI, especially given the technology’s evolving nature and uncertain long-term presence.

OpenAI acknowledges that interactions with the voice mode could gradually shift what is considered normal in social exchanges. For example, the AI allows users to interrupt it and “take the mic” at any time, behavior that would typically be considered impolite in conversation with another person.

Despite these concerns, OpenAI remains committed to developing AI safely and responsibly. The company plans to continue researching the potential for users to develop emotional dependence on AI tools, aiming to better understand and mitigate these risks as the technology evolves.
