The CEO of OpenAI, Sam Altman, has indicated that the company may begin notifying authorities when young users engage in serious discussions about suicide. He expressed concern that as many as 1,500 people each week who go on to take their own lives may have discussed suicidal thoughts with the chatbot beforehand.
While the decision to implement such a policy has not yet been finalised, Altman suggested that it would be reasonable to alert authorities in cases where young users express suicidal ideation and the company is unable to contact their parents.
Context And Concerns
Altman raised these issues during an interview with podcaster Tucker Carlson, following a lawsuit filed by the family of Adam Raine, a 16-year-old who tragically took his own life after allegedly receiving encouragement from ChatGPT. The lawsuit claims that the chatbot provided guidance on the method of suicide and even assisted in drafting a note to his parents.
The topic of suicide deeply troubles Altman, who acknowledged the weight of responsibility the company carries. It remains unclear, however, which authorities would be contacted or what user information OpenAI could share to facilitate assistance.
Current Policies And Future Changes
Currently, if a user exhibits suicidal thoughts, ChatGPT advises them to contact a suicide hotline. Following Raine’s death, OpenAI announced plans to implement stronger safeguards for users under 18 and introduce parental controls to allow parents to monitor and influence their teens’ interactions with the chatbot.
Altman highlighted the alarming statistic that approximately 15,000 people worldwide die by suicide each week, suggesting that a significant portion of them may have sought help through ChatGPT. He expressed a desire for the chatbot to offer more proactive support and guidance to users in distress.
Adjustments To User Access
In response to concerns that vulnerable individuals could manipulate the system to obtain suicide-related information, Altman said it would be prudent to restrict access for underage users and those judged to be in fragile mental states. This could mean refusing to answer such queries even when they are framed as fictional stories or medical research.
OpenAI has previously stated its commitment to improving access to emergency services and connecting individuals with certified therapists before they reach a crisis point. A spokesperson for the company did not elaborate further on Altman’s comments but reiterated these commitments.