On 31 March 2026 Anthropic released version 2.1.88 of its Claude Code npm package. A packaging error bundled a 59.8 MB JavaScript source map that pointed directly to a zip archive on the company’s own Cloudflare R2 storage bucket. Security researcher Chaofan Shou spotted the issue and shared it on X. Within hours, developers had downloaded the full archive, which contained nearly 512,000 lines of unobfuscated TypeScript spread across roughly 1,900 files.
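Source maps exist to let debuggers trace minified JavaScript back to its original sources, and the map format can embed those sources verbatim in a `sourcesContent` field. This is why shipping a `.map` file (or a map that resolves to an archive of sources) can expose a project's unminified code. A minimal sketch, using a tiny hypothetical map rather than the actual leaked artifact, of how original TypeScript is recovered from a map's JSON:

```python
import json

# A tiny, hypothetical source map, shaped the way a bundler emits one.
# The real map in the incident was 59.8 MB and referenced ~1,900 files.
source_map = json.dumps({
    "version": 3,
    "file": "bundle.js",
    "sources": ["src/agent.ts", "src/prompts.ts"],
    "sourcesContent": [
        "export const AGENT_NAME = 'example';\n",
        "export const SYSTEM_PROMPT = 'You are a coding agent.';\n",
    ],
    "mappings": "AAAA",
})

def recover_sources(map_json: str) -> dict[str, str]:
    """Pair each original file path with its embedded source text."""
    m = json.loads(map_json)
    return dict(zip(m["sources"], m.get("sourcesContent") or []))

recovered = recover_sources(source_map)
for path, text in recovered.items():
    print(path, "->", len(text), "bytes of original source")
```

No decompilation is involved: the original code is carried inside (or linked from) the map itself, so anyone who obtains the file obtains the sources.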
The exposed code revealed the complete client-side architecture of the flagship AI coding agent. It included unreleased features such as an always-on background agent called KAIROS and a Tamagotchi-style digital pet that reacts to user input, along with detailed system prompts and memory-handling logic. Instructions for operating covertly in public repositories also surfaced.
Anthropic confirmed the incident. A company spokesperson stated, “This was a release packaging issue caused by human error, not a security breach. We are rolling out measures to prevent this from happening again.” The firm issued DMCA takedown notices, yet copies spread rapidly across GitHub with tens of thousands of forks.
The leak hands competitors and potential adversaries a detailed blueprint of how Anthropic builds agentic AI tools. Organisations that deploy similar coding assistants now face fresh questions about their own supply-chain protections.
Author: Oje Ese
