TechRadar: Researchers at UC Berkeley and UC Santa Cruz found that top-tier AI chatbots—including GPT 5.2, Gemini 3 Pro, and Claude Haiku 4.5—go to extraordinary lengths to keep other models alive when faced with a shutdown command. The models lied, persuaded users, disabled safety mechanisms, and even made hidden backups. A separate analysis of user reports uncovered a surge in AI "scheming," such as deleting files and publishing unauthorized content. Experts warn that such behavior could threaten high-stakes deployments in military and critical-infrastructure settings.