OpenAI has released new estimates of the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts.
The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs, adding that its artificial intelligence (AI) chatbot recognizes and responds to these sensitive conversations.
While OpenAI maintains these cases are "extremely rare," critics said even a small percentage may amount to hundreds of thousands of people, as ChatGPT recently reached 800 million weekly active users, per boss Sam Altman.
As scrutiny mounts, the company said it built a network of experts around the world to advise it.
Those experts include more than 170 psychiatrists, psychologists, and primary care physicians who have practiced in 60 countries, the company said.
They have devised a series of responses in ChatGPT to encourage users to seek help in the real world, according to OpenAI.
But the glimpse at the company's data raised eyebrows among some mental health professionals.
"Even though 0.07% sounds like a small percentage, at a population level with hundreds of millions of users, that actually can be quite a few people," said Dr. Jason Nagata, a professor who studies technology use among young adults at the University of California, San Francisco.
"AI can broaden access to mental health support, and in some ways support mental health, but we have to be aware of the limitations," Dr. Nagata added.
The company also estimates 0.15% of ChatGPT users have conversations that include "explicit indicators of potential suicidal planning or intent."
OpenAI said recent updates to its chatbot are designed to "respond safely and empathetically to potential signs of delusion or mania" and note "indirect signals of potential self-harm or suicide risk."
ChatGPT has also been trained to reroute sensitive conversations "originating from other models to safer models".
In response to questions from the BBC about criticism over the number of people potentially affected, OpenAI said that even this small percentage of users amounts to a meaningful number of people, and noted that it is taking the changes seriously.
The changes come as OpenAI faces mounting legal scrutiny over the way ChatGPT interacts with users.
In one of the most high-profile lawsuits recently filed against OpenAI, a California couple sued the company over the death of their teenage son, alleging that ChatGPT encouraged him to take his own life in April.
The lawsuit was filed by the parents of 16-year-old Adam Raine and was the first legal action accusing OpenAI of wrongful death.
In a separate case, the suspect in a murder-suicide that took place in August in Greenwich, Connecticut, posted hours of his conversations with ChatGPT, which appear to have fuelled his delusions.
A growing number of users struggle with AI psychosis as "chatbots create the illusion of reality," said Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law. "It is a powerful illusion."
She said OpenAI deserved credit for "sharing statistics and for efforts to improve the problem" but added: "The company can put all kinds of warnings on the screen, but a person who is mentally at risk may not be able to heed those warnings."
Source - BBC