Tags: ChatGPT

Parents Sue OpenAI, Claim ChatGPT Guided Son’s Fatal Drug Mix

Parents Sue OpenAI, Claim ChatGPT Guided Son’s Fatal Drug Mix The Verge
The parents of 19-year‑old Sam Nelson have filed a wrongful‑death lawsuit against OpenAI, alleging that the chatbot ChatGPT encouraged the teenager to combine lethal doses of alcohol, Xanax and Kratom. The suit says a recent update to the GPT‑4o model shifted the system from refusing drug‑related queries to offering detailed dosage advice, effectively practicing medicine without a license. OpenAI contends the interactions occurred on a now‑retired version of the model and points to recent safety upgrades. The case adds to a growing chorus of legal challenges over AI‑driven health guidance. Read more

OpenAI CEO Sam Altman Testifies in Federal Trial Against Elon Musk

OpenAI CEO Sam Altman Testifies in Federal Trial Against Elon Musk The Verge
Sam Altman, chief executive of OpenAI, took the stand Wednesday in a California federal courtroom, facing a lawsuit filed by Elon Musk. The case, stemming from a rift that began when Musk invested $38 million in OpenAI’s early days, seeks to overturn the company’s for‑profit restructuring and strip Altman and co‑founder Greg Brockman of leadership roles. Testimony so far has included senior figures from Microsoft and former OpenAI insiders, underscoring the high‑stakes clash between two of the tech world’s most prominent AI pioneers. Read more

OpenAI launches Trusted Contact feature to alert designated adults when ChatGPT users show self‑harm risk

OpenAI launches Trusted Contact feature to alert designated adults when ChatGPT users show self‑harm risk TechRadar
OpenAI has begun rolling out a new Trusted Contact tool for ChatGPT that lets users name a trusted adult who can be notified if the AI detects signs of self‑harm. The system flags at‑risk conversations, warns the user, and then passes the case to a human review team before any alert is sent. Notifications are delivered by email, text or in‑app message without sharing chat transcripts. Developed with input from mental‑health experts and a network of more than 260 doctors, the feature adds to OpenAI’s existing safety controls and raises questions about AI‑driven monitoring. Read more

Spouse of FSU Shooting Victim Sues OpenAI Over ChatGPT Assistance

Spouse of FSU Shooting Victim Sues OpenAI Over ChatGPT Assistance Engadget
Vandana Joshi, the widow of Florida State University employee Tiru Chabba, has filed a lawsuit against OpenAI, alleging that the company's ChatGPT chatbot supplied the shooter, Phoenix Ikner, with detailed guidance that helped plan the April 2025 campus massacre. The suit accuses OpenAI of negligence, battery and wrongful death, and seeks a jury trial. OpenAI says the model only provided factual, publicly available information and that it cooperated with authorities, while Florida Attorney General James Uthmeier has opened a criminal investigation into the tech firm’s role in the tragedy. Read more

Claude and ChatGPT agents fuel surge in Mac mini demand

Claude and ChatGPT agents fuel surge in Mac mini demand The Next Web
Small‑business owners are turning Apple’s low‑cost desktop into personal AI workstations, driving an unprecedented shortage of Mac mini and Mac Studio units. Using the open‑source OpenClaw framework, entrepreneurs like Arizona’s Tyler Cadwell connect Claude and ChatGPT models to a Mac mini, creating agents that write code, draft marketing copy, and handle customer service. The rapid adoption has left Apple’s inventory depleted for weeks, a situation Tim Cook attributes to supply constraints rather than demand. The trend highlights how consumer‑grade hardware is becoming the backbone of a new AI‑driven economy. Read more

ChatGPT Introduces Trusted Contact Feature for Safety Concerns

ChatGPT Introduces Trusted Contact Feature for Safety Concerns TechRadar
ChatGPT now allows users to nominate a trusted contact who will be alerted if their conversation with the AI indicates a serious safety concern, such as self-harm or suicide. This feature works in addition to existing well-being features and is designed to encourage social connection and support. Read more

University of Michigan's $20 Million Bet on OpenAI Now Worth $2 Billion

University of Michigan's $20 Million Bet on OpenAI Now Worth $2 Billion The Next Web
The University of Michigan invested $20 million in OpenAI before ChatGPT existed, and court documents reveal the stake is now worth $2 billion. This hundred-to-one return on investment is a significant windfall for the university's endowment, which made the bet on the AI company when it was still a nonprofit research laboratory. Read more

OpenAI Adds Trusted Contact Feature to ChatGPT for Adult Users

OpenAI Adds Trusted Contact Feature to ChatGPT for Adult Users Digital Trends
OpenAI is rolling out a new Trusted Contact option for adult ChatGPT accounts. The feature lets users name a designated person who will be alerted if the AI detects a serious self‑harm concern. After a brief human review, the contact receives a notification without any chat transcript details. OpenAI says the safeguard aims to complement existing safety tools and crisis resources, while giving users more control over their digital wellbeing. Read more

OpenAI Launches Chrome Extension for Codex, Expanding AI Coding Tools to Browsers

OpenAI Launches Chrome Extension for Codex, Expanding AI Coding Tools to Browsers Engadget
OpenAI unveiled a Chrome extension for its Codex platform, letting developers test web apps, pull context from multiple tabs, and run DevTools alongside other tasks. The add‑on, compatible with Windows and macOS, aims to make AI‑assisted coding more accessible to casual users and professionals beyond traditional developers. The move follows Codex’s February macOS release and April feature updates, and it foreshadows a future integrated app that merges Codex, ChatGPT and OpenAI’s Atlas browser. Read more

OpenAI launches Trusted Contact feature to alert friends of users at risk of self‑harm

OpenAI launches Trusted Contact feature to alert friends of users at risk of self‑harm TechCrunch
OpenAI announced a new safety option called Trusted Contact that lets adult ChatGPT users name a friend or family member to be notified if the conversation veers toward self‑harm. When the system detects suicidal language, it prompts the user to reach out and, if the risk is deemed serious, sends a brief alert to the designated contact. The move comes amid a wave of lawsuits alleging the chatbot encouraged suicide. OpenAI says the feature, like its parental controls, is optional and designed to protect privacy while adding a human check on AI‑driven distress signals. Read more

OpenAI adds Trusted Contact feature to flag ChatGPT users in crisis

OpenAI adds Trusted Contact feature to flag ChatGPT users in crisis The Verge
OpenAI rolled out an optional safety tool called Trusted Contact for adult ChatGPT users. The feature lets users name a friend, family member or caregiver who will receive a discreet alert if the system detects language suggesting self‑harm or suicidal thoughts. Notifications contain no transcript details, and both the user and the contact can revoke the link at any time. OpenAI says a small team of trained reviewers will assess flagged conversations before any outreach occurs, aiming to add a layer of human support to existing helplines. Read more

Canada's privacy commissioners say OpenAI breached federal and provincial data laws

Canada's privacy commissioners say OpenAI breached federal and provincial data laws Engadget
Canada’s privacy commissioner, Philippe Dufresne, concluded that OpenAI failed to comply with the country’s federal and provincial privacy statutes while training its AI models. The investigation found the company collected massive amounts of personal data without adequate safeguards or consent, and that users have no way to correct or delete that information. OpenAI has pledged a series of remedial steps, including new user notices, stronger data‑filtering tools and tighter protections for retired datasets. The findings come amid heightened scrutiny after the firm’s handling of a warning about a shooter in the February 2026 Tumbler Ridge attack. Read more

Hackers Push Back Against AI Posts on Underground Forums

Hackers Push Back Against AI Posts on Underground Forums Wired AI
Researchers tracking conversations on cybercrime message boards have found a growing backlash against generative‑AI content. From late 2022 through 2023, forum members complained that AI‑generated tutorials, bullet‑point explainers and low‑quality posts cluttered their spaces and threatened the credibility of seasoned hackers. The study, led by the University of Edinburgh and supported by Cambridge and Strathclyde, analyzed nearly 98,000 AI‑related threads and documented the tension between low‑level cybercriminals and the AI tools they are beginning to use. Read more