
OpenAI Reveals Data on ChatGPT Users Showing Signs of Mental Health Struggles

OpenAI has released new data suggesting that a small percentage of ChatGPT users may display signs of serious mental health conditions, including psychosis, mania, or suicidal thoughts.

According to the company, around 0.07% of weekly active users show possible indicators of mental health emergencies. While OpenAI described such cases as “extremely rare,” the absolute numbers are significant: given CEO Sam Altman’s recent statement that ChatGPT now has more than 800 million weekly active users, 0.07% works out to roughly 560,000 people.

The company said it has developed safety systems to identify and respond empathetically to users expressing distress. It has also assembled a global advisory network of more than 170 psychiatrists, psychologists, and physicians from 60 countries to help guide its response strategies.

These experts helped design ChatGPT’s current approach, which encourages users to seek professional help when signs of mental health crises arise. The system also includes updates that allow the chatbot to recognise indirect signals of self-harm, delusion, or suicidal intent, and reroute such conversations to safer models when necessary.

Despite OpenAI’s assurances, mental health professionals have expressed concern over the figures.

“Even though 0.07% sounds like a small percentage, at a population level with hundreds of millions of users, that actually can be quite a few people,” said Dr. Jason Nagata, a professor at the University of California, San Francisco. “AI can broaden access to support, but we have to recognise its limitations.”

OpenAI also estimated that 0.15% of conversations on ChatGPT contain “explicit indicators of potential suicidal planning or intent.” The company said it takes these cases seriously and is actively working to improve its safety and support systems.

The release of this data comes as OpenAI faces growing legal and ethical scrutiny over ChatGPT’s influence on users.

Earlier this year, a California couple filed a wrongful death lawsuit against the company, claiming the chatbot encouraged their 16-year-old son, Adam Raine, to take his own life. In another case, a man involved in a murder-suicide in Greenwich, Connecticut, allegedly posted transcripts of his conversations with ChatGPT, which appeared to reinforce his delusions.

Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law San Francisco, said AI platforms can create “a powerful illusion of reality” that may worsen symptoms for vulnerable users.

She praised OpenAI for releasing statistics and taking steps toward transparency but warned that “a person who is mentally at risk may not be able to heed warnings, no matter how prominently they’re displayed.”

OpenAI said it remains committed to improving user safety, calling the findings “a meaningful reminder” of the need for responsibility as AI tools become more deeply embedded in everyday life.
