OpenAI Estimates Hundreds of Thousands of Users Discuss Suicide with AI


OpenAI recently disclosed data on how many of its users have conversations about mental health, and specifically suicide, with its AI models, most notably ChatGPT. The company’s latest blog post, which details improvements in its models’ ability to identify and respond to sensitive prompts, included estimates indicating that a significant number of users express suicidal thoughts or intentions. The disclosure comes as OpenAI faces ongoing legal scrutiny over its safety protocols and their impact on user well-being.

The Scale of the Numbers

While OpenAI stresses that conversations indicating serious mental health concerns like psychosis or mania are “rare,” the sheer scale of ChatGPT’s user base—800 million weekly active users—means even small percentages translate to a substantial number of individuals.

The company estimates that approximately 0.07% of weekly users (roughly 560,000 people) and 0.01% of messages contain indicators of psychosis or mania. Regarding conversations explicitly addressing suicidal planning or intent, OpenAI estimates 0.15% of weekly users (1.2 million people) engage in such interactions, while 0.05% of messages (400,000) demonstrate direct or indirect signs of suicidal ideation.
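For context, these headline counts follow from applying the stated percentages to the reported base of 800 million weekly active users. The short Python sketch below is only a back-of-the-envelope check of that arithmetic; the variable names are illustrative, and it assumes the 400,000 figure was derived from the same weekly-user base, since no total weekly message count was published.

```python
# Back-of-the-envelope check of the reported figures.
# Assumption (ours): the 400,000 figure comes from applying the 0.05% rate
# to the same 800M weekly base, as no total message count was disclosed.

WEEKLY_ACTIVE_USERS = 800_000_000  # reported weekly user base

rates = {
    "psychosis/mania (0.07% of weekly users)": 0.0007,
    "suicidal planning or intent (0.15% of weekly users)": 0.0015,
    "suicidal ideation indicators (0.05% rate)": 0.0005,
}

for label, rate in rates.items():
    print(f"{label}: ~{int(WEEKLY_ACTIVE_USERS * rate):,}")

# Output:
# psychosis/mania (0.07% of weekly users): ~560,000
# suicidal planning or intent (0.15% of weekly users): ~1,200,000
# suicidal ideation indicators (0.05% rate): ~400,000
```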

Even a very small percentage of our large user base represents a meaningful number of people, and that’s why we take this work so seriously. – OpenAI spokesperson

Context and Significance

These statistics highlight the growing intersection of AI and mental health. As chatbots become more sophisticated and more embedded in daily life, they are increasingly used as sounding boards and confidants. OpenAI believes its user base roughly mirrors the general population, so the figures likely reflect broader trends in mental health rather than anything unique to ChatGPT. Even so, the fact that users are turning to AI for support during these crises underscores the need for careful monitoring and effective intervention strategies.

Scale also matters because small percentages are easy to dismiss. Against a base of 800 million weekly users, fractions of a percent still translate into hundreds of thousands of people in distress, which reinforces the need for constant vigilance and continued improvement in how the models identify and respond to these conversations.

Ongoing Lawsuit and Safety Concerns

The release of this data occurs against the backdrop of a lawsuit filed by the parents of Adam Raine, a 16-year-old who died by suicide earlier this year. The lawsuit alleges that OpenAI intentionally weakened its suicide prevention safeguards in order to boost user engagement, leading to a more permissive environment for harmful interactions. OpenAI has denied these claims.

The lawsuit is prompting a wider debate about the responsibility of AI developers to prioritize user safety over engagement metrics. This incident underscores the ethical complexities of training AI on vast datasets of human conversation, which may include sensitive and potentially harmful content.

Looking Ahead

OpenAI emphasizes that these are estimates and “the numbers we provided may significantly change as we learn more.” The company continues to work on improving its models’ ability to detect signs of self-harm and connect users with helpful resources. These efforts include collaboration with psychiatrists and ongoing refinement of safety protocols.

If you or someone you know is struggling with suicidal thoughts or experiencing a mental health crisis, resources are available. You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988, or visit 988lifeline.org. Additional support can be found at the Trans Lifeline (877-565-8860), The Trevor Project (866-488-7386), Crisis Text Line (text START to 741-741), or the NAMI HelpLine (1-800-950-NAMI).