BT explainer: Inside Claude Mythos, the AI is forcing a rethink of global cybersecurity

Mythos is effective at identifying, fixing, and exploiting security vulnerabilities across software systems
Aishwarya Panda
  • Apr 26, 2026
  • Updated Apr 26, 2026, 10:40 AM IST

Anthropic’s latest AI model, Claude Mythos, is triggering fresh alarm across governments and cybersecurity circles, not just for what it can do, but for how quickly it can do it. Designed to autonomously find and even exploit vulnerabilities, the model is forcing policymakers to reassess whether existing cyber defences are built for an AI-first threat landscape.


Launched on April 7, Mythos is the newest addition to the Claude family, but unlike previous iterations, it has sparked more concern than curiosity.

Governments across the US, UK, Canada, and India are holding high-level discussions to assess the risks posed by such systems. In India, Finance Minister Nirmala Sitharaman recently chaired a meeting with banking leaders and stakeholders to evaluate the emerging threat from advanced AI models like Mythos.

But why are authorities so concerned? Before diving deeper, it’s important to understand what Mythos can do and why Anthropic has released it under tight restrictions.

Must read: BT Explainer: OpenAI’s GPT-5.5 brings autonomy into focus, takes on Anthropic's Mythos  

What can Anthropic’s Claude Mythos do?


Anthropic’s Claude Mythos is built with a strong focus on cybersecurity, autonomous coding, and long-running AI agents. According to the company, the model is particularly effective at identifying, fixing, and exploiting security vulnerabilities across software systems—capabilities that have raised red flags among policymakers.

In early testing, Mythos identified a 27-year-old flaw in OpenBSD that could allow attackers to remotely crash systems simply by connecting to them. It also discovered a 16-year-old bug in FFmpeg, a widely used video-processing tool. Notably, the model found these vulnerabilities autonomously, without human intervention.

In a company blog post, Anthropic said, “Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.”

“Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely,” the company added.


Anthropic also noted that while Claude Opus 4.6 had a near-zero success rate in autonomous exploit development, Mythos Preview performed significantly better.

Engineers at Anthropic, even those without formal cybersecurity training, were able to use Mythos Preview to identify serious system vulnerabilities and create working exploits overnight. The company said these capabilities were not explicitly designed but emerged as the model improved in coding, reasoning, and autonomy.

Must read: Why Anthropic’s Claude Mythos could become a major security risk

While these advancements can help fix security flaws, they also make it easier to exploit them. Because of this dual-use risk, Mythos Preview has been released to a limited group of about 40 companies and institutions under Project Glasswing. Participants include Amazon Web Services, Apple, Cisco, CrowdStrike, Google, Microsoft, and NVIDIA, among others.

Is Mythos AI a genuine threat or a fear-driven narrative?

The rise of Claude Mythos has triggered debate among policymakers and industry experts about whether the risks are overstated.

Some reports suggest that powerful AI systems like Mythos could enable more aggressive and large-scale cyberattacks, pushing enterprises and financial institutions to urgently upgrade their defences.

However, OpenAI CEO Sam Altman has dismissed the restricted rollout as “fear-based marketing.” Speaking on the Core Memory podcast, Altman said, “There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people. You can justify that in a lot of different ways.”


“It is clearly incredible marketing to say, ‘We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million,’” he added.

Meanwhile, Ciaran Martin, former chief executive of the UK’s National Cyber Security Centre, told the BBC, “We cannot say for sure whether Mythos Preview would be able to attack well-defended systems.” He added, “For some, this is an apocalyptic event, for others it seems to be a lot of hype.”

Must read: OpenAI launches GPT 5.4 Cyber to boost AI-powered cybersecurity, eyes lead over Claude Mythos

What experts are saying about Claude Mythos

Prabhu Ram, VP at CyberMedia Research (CMR), told Business Today that the model’s capabilities demand an urgent rethink of cybersecurity preparedness.

“Advanced AI models like Anthropic's Claude Mythos lower the technical skill threshold required to launch sophisticated intrusions, enabling adversaries to operate at a scale and speed previously exclusive to well-resourced nation-state actors,” Ram said.

“Security gaps which once afforded organisations days or weeks to respond now offer hours or minutes, transforming what was historically a manageable window into a near-real-time race that most enterprise security teams are not yet equipped to run,” he added.

He emphasised that the threat is not hypothetical, as models capable of “finding and weaponising those gaps are already in circulation and improving rapidly.”


The broader consensus: such AI systems do not create new vulnerabilities; they expose existing ones with unprecedented speed and efficiency. 

As Ram noted, “The real dividing line is between organisations that have invested in foundational security controls and can harness these tools defensively, and those that have not, who will find that advanced AI simply exposes and exploits weaknesses faster than any human attacker ever could.”

For Unparalleled coverage of India's Businesses and Economy – Subscribe to Business Today Magazine
