Tags: technology policy

OpenAI adds Trusted Contact feature to flag ChatGPT users in crisis

The Verge
OpenAI has rolled out an optional safety tool called Trusted Contact for adult ChatGPT users. The feature lets a user name a friend, family member, or caregiver who will receive a discreet alert if the system detects language suggesting self-harm or suicidal thoughts. Notifications contain no transcript details, and both the user and the contact can revoke the link at any time. OpenAI says a small team of trained reviewers will assess flagged conversations before any outreach occurs, aiming to add a layer of human support alongside existing helplines.

Experts Call for Independent Audits as AI Safety Standards Remain Undefined

Ars Technica
Industry leaders and scholars warn that without clear standards, AI safety testing could become a political tool. Microsoft, the National Institute of Standards and Technology (NIST), and the Center for AI Standards and Innovation (CAISI) plan to develop testing methods on the fly, but critics argue that only an independent audit system can prevent government overreach and ensure accountability. Cornell professor Gregory Falco proposes a rigorously enforced audit regime akin to the IRS's, and urges firms to adopt internal safety checks before deployment.