The Commerce Department quietly erased a web page that outlined a high‑profile partnership between Microsoft, Google and xAI, Elon Musk’s artificial‑intelligence company. The original posting, dated May 5, said the three firms would submit their frontier AI systems to a federal testing team for evaluation of cyber‑attack vulnerabilities, risks of military misuse and other national‑security flaws before the models reached the market.

By Monday afternoon, Washington time, the link returned a generic “Sorry, we cannot find that page” notice. Visitors were automatically redirected to the website of the Center for AI Standards and Innovation, the body that now oversees the testing program. The Center, a successor to the U.S. AI Safety Institute, operates within the National Institute of Standards and Technology (NIST), itself a component of the Commerce Department.

The shift in online presence follows an executive order that scaled back the previous administration’s AI‑safety architecture. Instead of a broad safety‑evaluation mandate, the order refocused the institute’s mission on developing standards and coordinating with industry. The change in branding and web location reflects that policy pivot.

Neither the Commerce Department nor the Trump White House responded to Reuters’ requests for comment on why the page was removed. The three companies also declined to comment. With no official statement, observers are left to wonder whether the deletion signals a deeper policy disagreement or simply a routine website update.

When the May 5 announcement was first released, it was seen as a tangible sign of growing federal concern about the national‑security risks posed by powerful AI models. It also marked a rare public commitment by major AI developers to subject their cutting‑edge systems to pre‑deployment government review.

Industry insiders recall that the deal followed the Trump administration’s earlier removal of Anthropic from a Pentagon AI contract over alleged safety‑related constraints, though Anthropic was not listed as a participant in the Commerce Department’s testing program.

Critics have warned that granting the government access to frontier AI models before they are released could create a new target for nation‑state cyber‑espionage. Several federal officials have publicly questioned the wisdom of such pre‑release access, arguing that it may inadvertently expose sensitive technology to hostile actors.

Despite the page’s disappearance, the Center for AI Standards and Innovation continues to operate, and the redirected site still hosts general information about its program. No indication has been given that the testing arrangement itself has been cancelled.

The episode underscores the ongoing tension within U.S. AI policy circles. Supporters of robust government oversight view the original announcement as a cornerstone of the administration’s approach to AI risk mitigation. Detractors see the removal of a positive, public‑facing statement as evidence of internal discord about how deeply the government should involve itself in the development of frontier AI.

For now, the precise status of the Microsoft, Google and xAI testing agreement remains opaque. The public record no longer includes the specifics of the pre‑release testing arrangement, leaving policymakers, industry players and the public to infer the next steps from a shifting regulatory landscape.

This article was written with the assistance of AI.