Anthropic’s new cybersecurity model, Mythos, has been accessed by a small group of unauthorised users. The model was released in preview and is restricted to only 40 companies under Project Glasswing. Anthropic says this limited rollout is necessary because the model’s powerful capabilities could be easily misused.
Mythos is currently being tested and reviewed as part of ongoing efforts to assess its safety, strengthen safeguards, and ensure it can be deployed responsibly without posing cybersecurity or misuse risks.
Anthropic Mythos access breached
According to a Bloomberg report, the group gained access to Mythos on the same day its preview was announced. They reportedly figured out where the model might be hosted online by relying on patterns and formats Anthropic had used for its previous models, making the intrusion more of an “educated guess” than a breach through advanced hacking. The unauthorised users have been testing the model, but they have reportedly not used it for cybersecurity purposes.
Anthropic said, “We’re investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments.” The company added that there is no sign the activity impacted its systems.
Reportedly, the group tried multiple methods to access the model, including using the credentials of a user connected to a third-party contractor working with Anthropic. This suggests the group had access to legitimate credentials, rather than purely guessing the system’s location.
Bloomberg further revealed that the group belongs to a Discord community that actively looks for unreleased AI models, and that it provided the publication with proof of its regular usage. The group is “interested in playing around with new models, not wreaking havoc with them,” the report stated.
Alongside Mythos, the group claimed to have access to other unreleased AI models by Anthropic. This raises broader concerns about potential gaps in access controls and security protocols, especially for highly sensitive systems like Mythos. It also highlights that even non-malicious users could expose powerful tools before proper safeguards are fully in place.