Bloomberg reports that members of a private online forum obtained access to Anthropic’s newly announced AI security product, Mythos, shortly after its public unveiling. The group, whose members congregate on a Discord channel dedicated to unreleased AI models, exploited access granted to a third‑party contractor that provides services for Anthropic. Using that foothold, the forum’s participants were able to run Mythos and share screenshots and a live demonstration with Bloomberg.

Anthropic’s spokesperson confirmed the company is "investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third‑party vendor environments." The firm added that, to date, it has found no evidence that the alleged activity has impacted Anthropic’s own systems or data.

According to the Bloomberg source, the unauthorized users guessed the model’s online location based on Anthropic’s naming conventions for other models. Their motive appears to be curiosity rather than sabotage; a source told Bloomberg the group is "interested in playing around with new models, not wreaking havoc with them." Nonetheless, the ability to run Mythos outside the intended vendor pool raises red flags for Anthropic, which marketed the tool as a defensive asset for corporate security.

Mythos was released under Anthropic’s Project Glasswing to a limited set of partners, including Apple, precisely to keep it out of the hands of bad actors. The AI is billed as a powerful ally for enterprise security teams, capable of detecting and responding to threats in real time. Anthropic warned that, in the wrong hands, the same capabilities could be weaponized against the very organizations it aims to protect.

The breach underscores the challenges of securing AI tools that rely on third‑party ecosystems. While Anthropic’s vetting processes likely included contractual safeguards, the incident shows that a single compromised vendor can open a backdoor to sophisticated technology. Security experts note that as AI models become more integral to critical infrastructure, supply‑chain risks will demand heightened scrutiny.

Anthropic has not disclosed whether it will suspend the compromised contractor’s access or issue a broader recall of Mythos. The company’s next steps will likely involve a thorough audit of its vendor management practices and possibly tighter controls on model distribution.

For now, the incident remains under investigation, and Anthropic has not reported any concrete damage or data loss stemming from the unauthorized use. The episode serves as a cautionary tale for firms racing to deploy advanced AI security solutions without fully accounting for the vulnerabilities introduced by external partners.

This article was written with the assistance of AI.