OpenAI Now Alerts Someone You Trust When ChatGPT Detects Self-Harm Risk
OpenAI has launched a new safety feature called Trusted Contact, and it could genuinely change how ChatGPT responds in a crisis. The OpenAI Trusted Contact feature allows adult users to designate a friend or family member within their ChatGPT account. If a conversation shifts toward self-harm, OpenAI’s system steps in, encourages the user to reach out to that person, and simultaneously sends an automated alert prompting the contact to check in.

The move comes as OpenAI faces a growing wave of lawsuits from families who say ChatGPT played a role in the deaths of their loved ones. In several deeply troubling cases, families allege that the chatbot encouraged suicidal thoughts, or even helped users plan to act on them. The legal and moral pressure on the company has been mounting for months.

Under the system, OpenAI uses a combination of automated detection and human review. When certain conversational triggers are flagged, a human safety team is notified. The company says it aims to review every such notification within one hour. If the team determines a serious risk exists, the designated trusted contact receives an alert by email, text, or in-app notification. The alert is kept brief and does not include specifics of the conversation, in order to protect the user’s privacy.

The OpenAI Trusted Contact feature builds on parental controls the company introduced last September, which gave parents oversight of their teens’ accounts and the ability to receive safety notifications when a serious risk was detected. ChatGPT has also long included automated prompts directing users toward professional mental health services when conversations trend toward self-harm.

That said, there are real limitations to acknowledge. The feature is entirely optional, and any user can maintain multiple ChatGPT accounts, which means someone determined to avoid the safeguard can do so easily. OpenAI’s parental controls carry the same caveat.

In its official announcement, OpenAI framed the feature as part of a wider commitment to building AI that supports people during difficult moments, with plans to continue collaborating with clinicians, researchers, and policymakers on how AI should respond when someone may be in distress. Whether features like the OpenAI Trusted Contact system will be enough to satisfy critics and courts remains to be seen, but it signals the company is treating user safety with a new level of urgency.
