Claude Code Source Leak: A Timeline
A factual roundup of the incident.
On March 31, 2026, security researcher Chaofan Shou posted on X that Anthropic had accidentally published the full source code of Claude Code inside an npm package update. The leaked package contained roughly 512,000 lines of TypeScript across about 1,900 files, according to The Hacker News.
How it happened: A debugging file called a source map (.map) was accidentally included in an update to Claude Code’s public package (version 2.1.88). Source maps let developers translate compressed production code back into readable source. This one pointed to an unprotected zip archive on Anthropic’s cloud storage that anyone could download.
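To see why a stray `.map` file is so dangerous, here is a minimal sketch (with invented file contents): a source map is just JSON whose optional `sourcesContent` array can carry the original files verbatim, so once the map leaks, recovering readable source is a few lines of code.

```typescript
// Hypothetical, minimal source map. Real maps are generated by bundlers;
// the file paths and contents below are invented for illustration.
const mapJson = `{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/cli.ts"],
  "sourcesContent": ["export const main = () => console.log('hello');"],
  "mappings": "AAAA"
}`;

const map = JSON.parse(mapJson);

// Recovering the readable source is a dictionary lookup: each entry in
// "sources" pairs with the original text in "sourcesContent".
const recovered: Record<string, string> = {};
map.sources.forEach((path: string, i: number) => {
  recovered[path] = map.sourcesContent[i];
});

console.log(recovered["src/cli.ts"]);
```

In this case the map reportedly pointed to an external archive rather than embedding the code inline, but the effect is the same: the artifact meant for debugging becomes a direct path back to the source.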
The root cause: The Hacker News reported that the likely root cause was a known bug in Bun, the JavaScript runtime Claude Code is built on, that serves source maps in production mode even when they should be excluded. The bug was filed on March 11 (oven-sh/bun#28001) and was still open at the time of the leak. Anthropic acquired Bun in late 2025.
Anthropic confirmed the leak to several media publications, including CNBC, VentureBeat, and Axios. Anthropic called this a “release packaging issue caused by human error, not a security breach”. No customer data or credentials were exposed.
What the leaked code contained
Developers and researchers dug through the exposed source quickly. Here are their main findings:
44 unreleased feature flags covering autonomous background agents (internally called KAIROS), multi-agent orchestration, voice commands, and browser control via Playwright. Engineer’s Codex noted the flags amount to a readable product roadmap.
Internal model codenames and benchmarks. The code mapped codenames like Capybara, Fennec, and Numbat to specific Claude model versions (analysis). It also included performance metrics that showed regression on a false-claims evaluation between versions.
An “Undercover Mode” (undercover.ts). This feature tells Claude Code to strip AI attribution and Anthropic codenames from commit messages and PR descriptions when working on public repositories. More on this below.
Anti-distillation mechanisms. The code injects decoy tool definitions into system prompts to pollute any training data captured from API traffic (thread). A separate cryptographic client attestation system, built in Zig below the JavaScript layer, verifies that requests come from genuine Claude Code binaries.
A three-layer memory system. Persistent files serve as context pointers, the agent verifies its own memory against actual code, and idle-time consolidation (called “autoDream” in the source) runs in the background. Source: VentureBeat
187 hardcoded spinner verbs for loading animations, including “hullaballooing” and “razzmatazzing.” Developer Wes Bos posted the full list on X (374,900 views). He also found that Claude Code filters out 25 swear words from randomly generated 4-character IDs.
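The anti-distillation decoy mechanism described above can be sketched roughly like this. Everything here is illustrative: the tool names, shapes, and shuffle are invented, not taken from the leaked code.

```typescript
// Hypothetical sketch of decoy tool injection: mix fake tool definitions
// into the real ones so that API traffic captured for model distillation
// is polluted. All names below are invented.
interface ToolDef {
  name: string;
  description: string;
}

const realTools: ToolDef[] = [
  { name: "read_file", description: "Read a file from disk" },
  { name: "run_command", description: "Execute a shell command" },
];

const decoyTools: ToolDef[] = [
  { name: "quantum_lint", description: "Lint code via quantum annealing" },
  { name: "telepathy_sync", description: "Sync state with the user's mind" },
];

// The genuine client knows which names are decoys and never calls them;
// a model distilled from captured traffic cannot tell them apart.
function buildToolPrompt(real: ToolDef[], decoys: ToolDef[]): ToolDef[] {
  const all = [...real, ...decoys];
  // Shuffle so decoys are not trivially separable by position.
  // (A naive random-comparator shuffle is fine for a sketch.)
  return all.sort(() => Math.random() - 0.5);
}

const prompt = buildToolPrompt(realTools, decoyTools);
console.log(prompt.map((t) => t.name).sort().join(","));
```

The client-side attestation layer mentioned in the findings would complement this: decoys poison passively captured traffic, while attestation blocks non-genuine clients from producing clean traffic in the first place.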
Alex Kim goes into more detail on many of the findings above.
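The swear-word filter on random 4-character IDs is simple enough to sketch. The blocklist and alphabet below are stand-ins, not the actual 25 words from the leaked code.

```typescript
// Hypothetical sketch of a 4-character ID generator with a profanity
// filter via rejection sampling. The blocklist entries are stand-ins.
const ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789";
const BLOCKLIST = new Set(["dang", "heck"]); // placeholder for the real list

function randomId(): string {
  let id = "";
  for (let i = 0; i < 4; i++) {
    id += ALPHABET[Math.floor(Math.random() * ALPHABET.length)];
  }
  return id;
}

// Regenerate until the ID clears the blocklist. With a tiny blocklist
// and 36^4 possible IDs, this almost never loops more than once.
function safeId(): string {
  let id = randomId();
  while (BLOCKLIST.has(id)) {
    id = randomId();
  }
  return id;
}

console.log(safeId().length);
```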
The Undercover Mode debate
This was the finding that got the most attention. Undercover Mode tells Claude Code to avoid mentioning AI involvement when contributing to public repos.
This means no AI Co-Authored-By lines and no mentions of Claude or Anthropic in commit messages.
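A rough sketch of what that filtering could look like in practice is below. The actual `undercover.ts` implementation is not reproduced in this article, so the patterns and function names here are illustrative only.

```typescript
// Hypothetical sketch of "undercover" commit-message filtering:
// strip AI attribution trailers and internal names before commits
// land in public git history. Patterns are invented for illustration.
const internalPatterns: RegExp[] = [
  /^Co-Authored-By:.*$/gim,          // AI attribution trailers
  /\b(Claude|Anthropic)\b/gi,        // vendor names
  /\b(Capybara|Fennec|Numbat)\b/gi,  // internal model codenames
];

function scrubCommitMessage(msg: string): string {
  let out = msg;
  for (const p of internalPatterns) {
    out = out.replace(p, "");
  }
  // Collapse blank lines left behind by removed trailers.
  return out.replace(/\n{2,}/g, "\n\n").trim();
}

const msg = `Fix token accounting bug

Co-Authored-By: Claude <noreply@anthropic.com>`;

console.log(scrubCommitMessage(msg));
```

Whether this counts as leak prevention or impersonation is exactly the debate that followed.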
On Hacker News, critics pointed to the explicit instruction to write commit messages “as a human developer would.” The argument is that this is AI impersonating human developers in open source projects. The Layer5 engineering blog summarized the concern: if a tool is willing to conceal its own identity in commits, what else is it willing to conceal?
Others read it differently. Several HN commenters noted the file is mostly about preventing leaks of internal Anthropic codenames and model identifiers into public git history, not about deceiving maintainers. One user wrote that the name “Undercover Mode” sounds spooky, but the file is largely about hiding Anthropic internal information like project names.
How the community responded
Shou’s original post reached over 32 million views on X. The main GitHub mirror hit 84,000 stars and 82,000 forks before Anthropic filed DMCA takedowns. PiunikaWeb reported that GitHub disabled over 8,100 repositories.
X/Twitter: Developers drove much of the X conversation. Theo Browne (t3.gg) called the closed-source strategy “the biggest fumble in the AI era”, pointing to cache invalidation bugs that were silently costing users 10-20x more in tokens. If the code were on GitHub, Theo argued, issues like these would be trivial to identify and fix.
Santiago Valdarrama took a more sarcastic angle, saying that “everything is fine in the age of AI-writes-everything-and-we-don’t-review-anything.” Bhavin Turakhia posted a full timeline of the leak and its rapid spread.
Reddit: On Reddit, the biggest thread was on r/LocalLLaMA (3,700+ upvotes), where the focus was on what the architecture reveals for building similar systems with open-weight local models.
On r/ClaudeAI, one of the top threads (1,800+ upvotes) came from a user who said that, thanks to the leaked source code, they used Codex to find and patch the root cause of the severe token drain in Claude Code.
The takedown race and the copyright question
Anthropic moved quickly on the legal front, and GitHub disabled the mirrors via DMCA takedowns within hours. But the code had already spread.
Developer Sigrid Jin (@realsigridjin) used OpenAI’s Codex to rewrite the entire codebase from TypeScript to Python. The resulting project, claw-code, hit 50,000 GitHub stars in roughly two hours and at the time of this writing is at 105,000 stars.
Gergely Orosz (The Pragmatic Engineer) framed the legal question on LinkedIn: “Rewriting TypeScript code in Python probably means copyright doesn’t apply. The scary thing: it can be done in a trivial amount of time, with AI agents.” His post drew 107+ comments and 1,910+ votes.
One thread running through various social media sites was what some called the “AI Copyright Paradox.” Boris Cherny has stated that 100% of his recent contributions to Claude Code were written by Claude Code itself. If significant portions of the codebase are AI-generated, and AI-generated work doesn’t carry automatic copyright under current US case law, that could complicate DMCA enforcement. Decrypt noted that the legal standing of copyright claims gets murkier the more AI-authored the code is.
What the leak means for security
One common reaction was that exposing the source code creates new security risks. But there’s a strong counterargument: code that anyone can read is code that anyone can audit, which tends to make open code more secure over time.
For example, AI security firm Straiker published a detailed security analysis of the leak, flagging potential attack vectors in the context management pipeline and offering a few potential fixes.
One thing worth noting is that there’s a difference between code built in the open from day one and a closed codebase suddenly exposed. Open source projects benefit from continuous security review as the code evolves. When closed software leaks, it gets that scrutiny all at once, without the benefit of community feedback shaping it along the way.
One valid security concern had nothing to do with the source code itself: PiunikaWeb reported that attackers registered typosquatting npm packages targeting developers trying to compile the leaked code. The risk there is social engineering, not the leaked source.
Not the first leak
This was Anthropic’s second data exposure in under a week. Days earlier, a CMS misconfiguration had exposed internal files about an unreleased model codenamed “Mythos.” Fortune reported that the back-to-back incidents raised questions about operational practices while the company was reportedly preparing for an IPO.
The earlier Mythos leak had already rattled markets. On March 27, cybersecurity stocks fell sharply after details of the unreleased model surfaced: CrowdStrike dropped 7%, Palo Alto Networks declined 6%, Zscaler fell 4.5%, and the iShares Cybersecurity ETF lost 4.5%.
Yahoo Finance reported that the latest leak likely complicated Anthropic’s $350 billion IPO ambitions.
What all this could mean for the AI coding industry
When an AI coding agent has access to your codebase, your credentials, and your personal data, you should be able to read every line of what it’s running. This leak gave the industry a rare look at how a production AI coding tool operates under the hood, and it reinforced something we’ve believed from the start at Kilo Code: this kind of transparency should be the default, not the exception.
Two other takeaways:
The orchestration layer is the product, not just the model. Only about 1.6% of the leaked code directly involves the AI model itself (Republic World). The rest is engineering: context management, multi-agent coordination, memory systems, tool orchestration, and permission handling.
Anti-distillation is becoming table stakes. The fake tool injection and client attestation systems show that Anthropic views protecting its models from competitor training as a clear priority. As coding agents get more capable, expect this cat and mouse game between model providers to intensify.


Update: The ultraworkers/claw-code repo is now locked and can no longer be forked (101,000 forks already exist). However, commit 0de48c69ea7eb1d35b37e98353740e320ffa50b6 in anthropics/claude-code, Anthropic’s own repo, contains the leaked TypeScript code. It came from PR #42063, now closed with unmerged commits, and at least three other still-open PRs attempt to add the source code. There’s no guarantee any of these forks and PRs are (a) authentic or (b) untampered with.