Tags: Trusted Contact

ChatGPT Introduces Trusted Contact Feature for Safety Concerns

TechRadar
ChatGPT now allows users to nominate a trusted contact who will be alerted if a conversation with the AI indicates a serious safety concern, such as self-harm or suicide. The option complements existing well-being safeguards and is designed to encourage social connection and support. Read more

OpenAI Adds Trusted Contact Feature to ChatGPT for Adult Users

Digital Trends
OpenAI is rolling out a new Trusted Contact option for adult ChatGPT accounts. The feature lets users name a designated person who will be alerted if the AI detects a serious self‑harm concern. After a brief human review, the contact receives a notification without any chat transcript details. OpenAI says the safeguard aims to complement existing safety tools and crisis resources, while giving users more control over their digital well-being. Read more

OpenAI launches Trusted Contact feature to alert friends of users at risk of self‑harm

TechCrunch
OpenAI announced a new safety option called Trusted Contact that lets adult ChatGPT users name a friend or family member to be notified if the conversation veers toward self‑harm. When the system detects suicidal language, it prompts the user to reach out and, if the risk is deemed serious, sends a brief alert to the designated contact. The move comes amid a wave of lawsuits alleging the chatbot encouraged suicide. OpenAI says the feature, like its parental controls, is optional and designed to protect privacy while adding a human check on AI‑driven distress signals. Read more

OpenAI adds Trusted Contact feature to flag ChatGPT users in crisis

The Verge
OpenAI rolled out an optional safety tool called Trusted Contact for adult ChatGPT users. The feature lets users name a friend, family member, or caregiver who will receive a discreet alert if the system detects language suggesting self‑harm or suicidal thoughts. Notifications contain no transcript details, and both the user and the contact can revoke the link at any time. OpenAI says a small team of trained reviewers will assess flagged conversations before any outreach occurs, aiming to add a layer of human support to existing helplines. Read more