In a major lapse, Anthropic has exposed the full source code of its terminal-based AI coding agent, Claude Code, after a misconfigured file was published on the npm registry.
The issue was first flagged by Chaofan Shou, co-founder of security firm Fuzzland, who revealed on X (formerly Twitter) that a source map file had been mistakenly included in a public release.
“Claude code source code has been leaked via a map file in their npm registry!” Shou wrote.
What exactly got exposed
The leak includes a codebase of roughly 57 MB, comprising more than 5 lakh (500,000) lines of code across over 1,900 files.
This exposed not just implementation details but also deeper insights into Claude Code’s internal workings, including multi-agent workflows, system prompts, tool integrations and feature flags.
While the leak does not include model weights or user data, it provides a rare look into the architecture, security design and telemetry systems behind one of the industry’s most advanced AI coding agents.
No hack, but a familiar mistake
Anthropic said the exposure was not the result of a cyberattack, but a packaging error.
“Human error in the configuration of our content management system,” the company told CNBC, explaining how the files were accidentally made public during a Claude Code release.
This is not the first such incident. A similar exposure occurred in early 2025, when a source map file was briefly published before being removed.
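Packaging slips of this kind often trace back to how npm decides which files to publish: unless a package explicitly restricts its contents, bundler-generated artefacts such as `.map` source maps can ship alongside the compiled code, and a source map can reconstruct much of the original source. A common defence is an allowlist via the standard `files` field in `package.json`. The sketch below is illustrative only, it is not Anthropic's actual configuration:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ]
}
```

With an allowlist like this, `.map` files are excluded even if the build emits them; running `npm pack --dry-run` before publishing prints exactly which files would go into the tarball, which is a cheap way to catch an accidental inclusion in review or CI.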
Just days before the latest leak, a separate report by Fortune said the company had accidentally exposed around 3,000 internal files, including a draft blog post referencing an upcoming model.
Soon after discovery, the leaked codebase was archived on GitHub, where it quickly gained traction, reportedly attracting close to 22,000 stars within hours.
The internet reacts
The incident triggered sharp reactions online, with users questioning how a company positioning itself as safety-first could make such an oversight.
One user likened the mistake to “locking every door… then uploading your floor plans to Google Maps,” calling it a basic error that should have been caught during code review.
Another post described it as the “biggest AI leak of 2026,” adding that “thousands of lines of their secret sauce… are now public on GitHub.”
What it means
While there is no immediate risk to user data, the leak could have broader implications. By exposing system prompts, architectural decisions and internal tooling, the incident offers competitors, researchers and even malicious actors a closer look at how modern AI coding agents are built and secured.