Tags: AI monitoring

OpenAI launches Trusted Contact feature to alert designated adults when ChatGPT users show self‑harm risk

Source: TechRadar
OpenAI has begun rolling out a new Trusted Contact tool for ChatGPT that lets users name a trusted adult to be notified if the AI detects signs of self‑harm. The system flags at‑risk conversations, warns the user, and passes the case to a human review team before any alert is sent. Notifications are delivered by email, text, or in‑app message, and chat transcripts are never shared. Developed with input from mental‑health experts and a network of more than 260 doctors, the feature adds to OpenAI's existing safety controls while raising questions about AI‑driven monitoring.