OpenAI is expanding its safety toolkit for ChatGPT with a feature called Trusted Contact, now in limited rollout. Users can tap their profile, select a trusted adult, and wait for that person to accept the role. Once active, the system monitors conversations for language that suggests a serious risk of self‑harm. If the AI flags such content, the user receives a warning that the designated contact may be alerted.

A specially trained human review team then evaluates the situation. Only when reviewers deem the risk genuine does the Trusted Contact receive a notification via email, text message or an in‑app alert, urging them to check in with the user. OpenAI says the alerts do not include chat transcripts or detailed conversation history, preserving user privacy. Users retain full control—they can remove or replace their Trusted Contact at any time.

The feature was built with guidance from mental‑health professionals, suicide‑prevention specialists and a global network of more than 260 doctors spanning 60 countries. OpenAI positions Trusted Contact as an extension of its existing parental controls and safety guardrails, acknowledging that ChatGPT now functions for many as more than a productivity tool—it can act as a confidant, life coach, or even a therapist.

OpenAI CEO Sam Altman has previously remarked that younger users treat ChatGPT like an operating system for life decisions, consulting the AI on everything from career moves to personal relationships. That reliance underpins the company’s push to embed emotional‑support infrastructure directly into the product.

Reactions to the rollout are mixed. Some users view the ability to enlist a trusted adult as reassuring, especially for vulnerable individuals who might otherwise suffer in silence. Others find the notion of AI‑driven monitoring unsettling. In a recent interview, Amy Sutton of Freedom Counselling warned that AI surveillance could exacerbate mental‑health stigma, prompting people to hide signs of distress and potentially deepening the problem.

OpenAI’s approach reflects a broader industry trend: as AI systems become more embedded in daily life, companies are grappling with the balance between user safety and privacy. Trusted Contact illustrates one attempt to provide a safety net while limiting data exposure, but it also raises questions about how comfortable users are with automated alerts and human review of their private conversations.

For now, the feature remains limited to users who opt in and designate a contact. OpenAI has not disclosed a timeline for a wider release, but the company says it will continue to refine the system based on feedback from mental‑health experts and real‑world use.

This article was written with the assistance of AI.