OpenAI has begun rolling out a new Trusted Contact tool for ChatGPT that lets users name a trusted adult to be notified if the AI detects signs of self‑harm. The system flags at‑risk conversations, warns the user, and passes the case to a human review team before any alert is sent. Notifications are delivered by email, text, or in‑app message, without sharing chat transcripts. Developed with input from mental‑health experts and a network of more than 260 doctors, the feature adds to OpenAI's existing safety controls while raising questions about AI‑driven monitoring.
(Source: TechRadar)