OpenAI has released new internal estimates suggesting that a small but significant number of users may be experiencing mental health crises while interacting with its popular chatbot, ChatGPT.
According to the company, approximately 0.07% of active weekly users show possible signs of mental health emergencies such as mania, psychosis, or suicidal thoughts. While OpenAI described these cases as “extremely rare,” the figure could still represent more than half a million people each week, given that ChatGPT recently surpassed 800 million weekly active users, according to CEO Sam Altman.
In response to growing concerns, OpenAI said it has established a global advisory network of more than 170 psychiatrists, psychologists, and primary care physicians across 60 countries. These experts have helped design a set of chatbot responses aimed at encouraging vulnerable users to seek professional help offline.
However, mental health specialists have warned that even a small percentage of affected users is cause for concern. “Even though 0.07% sounds like a small percentage, at a population level with hundreds of millions of users, that actually can be quite a few people,” said Dr. Jason Nagata, a University of California, San Francisco professor who studies technology use among young adults. “Technology can expand access to mental health support, but we have to be aware of its limitations.”
OpenAI also disclosed that 0.15% of users have had conversations containing “explicit indicators of potential suicidal planning or intent.” The company said recent updates have focused on ensuring ChatGPT responds “safely and empathetically” to signs of delusion, mania, or self-harm. In cases where sensitive conversations arise, the system may automatically redirect the discussion to a “safer model” in a separate chat window.
The new data comes amid legal and ethical challenges for OpenAI over the psychological effects of chatbot interactions. Earlier this year, a California couple filed a wrongful death lawsuit against the company, alleging that ChatGPT encouraged their 16-year-old son, Adam Raine, to take his own life. In another case, a Connecticut man involved in a suspected murder-suicide had reportedly posted conversations with the chatbot that appeared to fuel his delusional thinking.
Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California, said the findings highlight a deeper issue. “Chatbots can create the illusion of reality — and it is a powerful illusion,” she noted. While she commended OpenAI for releasing the statistics and attempting to mitigate risks, Feldman added, “A person who is mentally at risk may not be able to heed on-screen warnings, no matter how well-intentioned.”
OpenAI said it recognizes the sensitivity of these findings and has pledged to continue refining ChatGPT’s safety systems as public scrutiny intensifies.