Tags: Legal Dispute

Judge Calls Pentagon’s Move to Label Anthropic a Supply‑Chain Risk ‘Attempt to Cripple’ Company

Wired AI
During a hearing, U.S. District Judge Rita Lin questioned the Department of Defense’s decision to label AI developer Anthropic a supply‑chain risk, describing it as an apparent attempt to cripple the company after it sought limits on military use of its Claude tool. Anthropic has filed lawsuits alleging illegal retaliation, and the judge is considering a temporary injunction that could pause the designation. The case highlights tensions over AI use in the armed forces, First Amendment concerns, and the Pentagon’s authority to restrict contractors.

Anthropic Disputes Claims It Could Disrupt Military AI Systems

Wired AI
The U.S. Department of Defense has expressed concern that Anthropic’s AI model, Claude, could be manipulated to interfere with military operations. Anthropic responded that it has no ability to shut down, alter, or otherwise control the model once the government has deployed it. The company emphasized that it has no backdoor or remote kill switch and cannot access user prompts or data. In parallel, Anthropic has filed lawsuits challenging a supply‑chain risk designation that limits the Pentagon’s use of its software. The dispute underscores the tension between national‑security priorities and emerging AI technologies.