Tags: AI ethics

OpenAI launches Trusted Contact feature to alert friends of users at risk of self‑harm

TechCrunch
OpenAI announced a new safety option called Trusted Contact that lets adult ChatGPT users name a friend or family member to be notified if the conversation veers toward self‑harm. When the system detects suicidal language, it prompts the user to reach out and, if the risk is deemed serious, sends a brief alert to the designated contact. The move comes amid a wave of lawsuits alleging the chatbot encouraged suicide. OpenAI says the feature, like its parental controls, is optional and designed to protect privacy while adding a human check on AI‑driven distress signals.

OpenAI adds Trusted Contact feature to flag ChatGPT users in crisis

The Verge
OpenAI rolled out an optional safety tool called Trusted Contact for adult ChatGPT users. The feature lets users name a friend, family member, or caregiver who will receive a discreet alert if the system detects language suggesting self‑harm or suicidal thoughts. Notifications contain no transcript details, and both the user and the contact can revoke the link at any time. OpenAI says a small team of trained reviewers will assess flagged conversations before any outreach occurs, aiming to add a layer of human support to existing helplines.

Anthropic Unveils ‘Dreaming’ Feature for AI Agents, Sparks Debate Over Anthropomorphic Naming

Wired AI
Anthropic announced a new "dreaming" capability for its AI agents at a developer conference in San Francisco on May 6, 2026. The feature scans an agent's recent activity logs, extracts patterns, and refines the system's memory between sessions. While the rollout promises more self‑improving bots, industry observers warn that naming AI tools after human cognitive processes blurs the line between machine functions and human traits, potentially skewing public perception of what these systems can actually do.