ChatGPT adds emergency contact feature for self-harm detection

OpenAI is rolling out a new safety feature that can alert a designated contact when ChatGPT detects that a user may be discussing self-harm. Called Trusted Contact, the feature lets adults choose someone they trust to be notified during a crisis.

This marks a significant shift in how AI platforms handle mental health crises. Instead of only providing crisis hotline information, ChatGPT now actively reaches out to people in users' real lives when its systems flag serious safety concerns.

How does it work?

Trusted Contact follows a multi-step process designed to balance safety with privacy:

  • Users designate one trusted adult contact through their ChatGPT settings
  • The chosen contact receives an invitation explaining their role and must accept within one week
  • When AI systems detect potential self-harm discussions, ChatGPT warns the user that their contact may be notified
  • Human reviewers examine the conversation to confirm the safety concern
  • If confirmed, the trusted contact receives a brief notification by email, text, or app alert

The notifications are intentionally vague to protect privacy. They explain that self-harm came up in a concerning way but don't include chat details or transcripts. OpenAI says trained reviewers aim to complete their assessment within one hour.
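To make the sequence concrete, here is a minimal Python sketch of the flow as described above. This is purely illustrative: OpenAI has not published any implementation or API details, and every name here (TrustedContact, handle_flagged_conversation, warn_user, send_notification) is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ReviewOutcome(Enum):
    CONFIRMED = auto()
    FALSE_POSITIVE = auto()


@dataclass
class TrustedContact:
    name: str
    channel: str        # "email", "text", or "app"
    accepted_invite: bool


def warn_user(text: str) -> None:
    # Stand-in for the in-app warning shown to the user first.
    print(f"[user warning] {text}")


def send_notification(channel: str, text: str) -> None:
    # Stand-in for delivery by email, text, or app alert.
    print(f"[{channel}] {text}")


def handle_flagged_conversation(contact, review):
    # Step 1: the feature only applies if the user opted in and the
    # contact accepted the invitation (within one week, per the rollout).
    if contact is None or not contact.accepted_invite:
        return None

    # Step 2: the user is warned that their contact may be notified.
    warn_user("Your trusted contact may be notified about this conversation.")

    # Step 3: a trained human reviewer confirms the concern
    # (OpenAI's stated target: within one hour).
    if review is not ReviewOutcome.CONFIRMED:
        return None

    # Step 4: the notification is deliberately vague -- it says self-harm
    # came up in a concerning way, with no chat details or transcripts.
    message = ("Self-harm came up in a concerning way in a recent "
               "ChatGPT conversation. Please consider checking in.")
    send_notification(contact.channel, message)
    return message


if __name__ == "__main__":
    contact = TrustedContact(name="Sam", channel="email", accepted_invite=True)
    handle_flagged_conversation(contact, ReviewOutcome.CONFIRMED)
```

Note how the human review step gates the notification entirely: under this reading of the process, an automated flag alone never triggers outreach to the contact.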

Users can remove or change their trusted contact at any time, and contacts can opt out through OpenAI's help center.

Why does it matter?

This feature represents a new approach to AI safety that goes beyond content filtering or automated responses. By involving real people in users' lives, OpenAI is acknowledging that human connection often matters more than professional crisis services.

"Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress," said Dr. Arthur Evans, CEO of the American Psychological Association.

The feature also signals how AI companies are taking greater responsibility for user wellbeing. Rather than treating concerning conversations as purely technical problems, OpenAI is building systems that connect digital interactions to real-world support networks.

However, the approach raises questions about privacy and accuracy. False positives could strain relationships, while missed cases could have serious consequences.

The context

Trusted Contact builds on OpenAI's existing parental controls, which already notify parents when teen accounts show signs of distress. The new feature extends this concept to adult users who voluntarily opt in.

OpenAI developed the feature with input from more than 260 licensed physicians across 60 countries, as well as organizations like the American Psychological Association. The company says more than 170 mental health experts helped shape how ChatGPT detects and responds to distress signals.

The launch comes as AI platforms face growing scrutiny over their impact on mental health, particularly among younger users. Several lawsuits have alleged that AI chatbots contributed to self-harm incidents, putting pressure on companies to implement stronger safeguards.

ChatGPT already has other safety measures in place, including refusing to provide instructions for self-harm, suggesting breaks after extended use, and directing users to crisis hotlines. Trusted Contact adds another layer by bringing human relationships into the safety equation.
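One way to picture that layering, purely as an illustration (none of these names come from OpenAI), is an ordered set of safeguards where trusted-contact outreach is only the final, human-reviewed step:

```python
# Hypothetical sketch of the "layers" idea: the earlier safeguards always
# apply, while notifying a trusted contact is gated on a confirmed review.
def active_safeguards(flag_confirmed_by_reviewer: bool) -> list[str]:
    layers = [
        "refuse instructions for self-harm",
        "suggest breaks after extended use",
        "direct the user to crisis hotlines",
    ]
    if flag_confirmed_by_reviewer:
        layers.append("notify the trusted contact")
    return layers
```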

