Anthropic appears to have accidentally revealed how one of its most important AI products works. A large internal file linked to its agentic AI tool, Claude Code, was made public.
The issue arose when a 59.8 MB JavaScript source map file (.map), an artifact meant only for debugging, was included in version 2.1.88 of the @anthropic-ai/claude-code package on npm. That version shipped publicly, exposing the sensitive information.
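Source maps explain why a single .map file can expose an entire codebase: the v3 format may embed every original source file verbatim in a `sourcesContent` array. A minimal sketch of how such sources are recovered, using a tiny synthetic map rather than the leaked file:

```typescript
// Source maps (v3 format) can embed the full original code in the
// `sourcesContent` array, which is why shipping a .map file can leak
// an entire TypeScript codebase.
interface SourceMapV3 {
  version: number;
  sources: string[];                  // original file paths
  sourcesContent?: (string | null)[]; // original file contents, if embedded
  mappings: string;
}

function extractSources(mapJson: string): Map<string, string> {
  const map = JSON.parse(mapJson) as SourceMapV3;
  const out = new Map<string, string>();
  (map.sourcesContent ?? []).forEach((content, i) => {
    if (content != null) out.set(map.sources[i], content);
  });
  return out;
}

// Demo with a tiny synthetic map (not the real leaked artifact):
const demo = JSON.stringify({
  version: 3,
  sources: ["src/agent.ts"],
  sourcesContent: ["export const hello = () => 'hi';"],
  mappings: "",
});
const recovered = extractSources(demo);
// recovered.get("src/agent.ts") → "export const hello = () => 'hi';"
```

Anyone who downloaded the published package could run the equivalent of this over the bundled map and read the original TypeScript directly.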
How the leak spread
By 4:23 a.m. ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, had shared the discovery on X. The post included a download link, which quickly drew attention.
Within hours, the massive 512,000-line TypeScript codebase was copied across GitHub and studied by thousands of developers.
For Anthropic, this is not just a small mistake. With the company reporting a $19 billion annualized revenue run-rate as of March 2026, the leak is seen as a major loss of valuable intellectual property.
What was in the leak?
According to VentureBeat, one of the biggest discoveries in the leak is how Anthropic tackled a major AI problem called “context entropy,” in which a model grows confused over long sessions.
Developers found a three-layer memory system described as “Self-Healing Memory”:
- A file called MEMORY.md acts as a small index that always stays loaded
- It does not store data, only pointers to where data is located
- Actual information is stored in separate files and loaded only when needed
- Old conversations are not fully reloaded, but searched using keywords
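The pointer-index layout described above can be sketched as follows. All names and formats here are illustrative assumptions, not Anthropic's actual code: a small always-loaded index holds only topic-to-file pointers, notes are loaded on demand, and old material is found by keyword search rather than full reload.

```typescript
interface MemoryPointer {
  topic: string; // e.g. "build-system"
  file: string;  // where the real notes live, e.g. "memory/build.md"
}

class PointerIndex {
  constructor(
    private pointers: MemoryPointer[],
    // Injected reader so storage can be disk, a DB, or (in tests) a plain map.
    private readFile: (path: string) => string | undefined,
  ) {}

  // Parse index lines like "build-system -> memory/build.md".
  // The index itself never stores content, only pointers.
  static parse(
    indexText: string,
    readFile: (path: string) => string | undefined,
  ): PointerIndex {
    const pointers = indexText
      .split("\n")
      .map((line) => line.split("->").map((s) => s.trim()))
      .filter((parts) => parts.length === 2)
      .map(([topic, file]) => ({ topic, file }));
    return new PointerIndex(pointers, readFile);
  }

  // Load one topic's notes only when actually needed.
  load(topic: string): string | undefined {
    const ptr = this.pointers.find((p) => p.topic === topic);
    return ptr ? this.readFile(ptr.file) : undefined;
  }

  // Keyword search across stored notes instead of reloading everything.
  search(keyword: string): string[] {
    return this.pointers
      .filter((p) => (this.readFile(p.file) ?? "").includes(keyword))
      .map((p) => p.topic);
  }
}
```

The design point is that the always-resident index stays tiny no matter how much accumulated memory exists, since content lives behind the pointers.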
This setup follows a “Strict Write Discipline,” meaning the system updates memory only after successful actions. This prevents errors from being stored.
The system also treats its own memory as a “hint,” meaning it verifies information instead of blindly trusting it.
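The two policies above can be illustrated with a short sketch. Both function names and the verification hook are hypothetical, chosen to show the pattern: memory is written only after an action succeeds, and anything recalled is re-checked before being trusted.

```typescript
type Memory = Map<string, string>;

// "Strict write discipline": if the action throws, the write below
// is never reached, so failed results are never stored.
function actThenRemember(
  memory: Memory,
  key: string,
  action: () => string,
): string {
  const result = action();
  memory.set(key, result); // only runs after success
  return result;
}

// "Memory as a hint": a recalled value is verified against ground
// truth and discarded if stale, rather than blindly reused.
function recallAsHint(
  memory: Memory,
  key: string,
  verify: (hint: string) => boolean,
): string | undefined {
  const hint = memory.get(key);
  return hint !== undefined && verify(hint) ? hint : undefined;
}
```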
The leak also revealed a feature called KAIROS, which allows Claude Code to run as a background agent. Through a process called autoDream, the system reorganizes and refines its memory while the user is inactive, making it more efficient when work resumes.
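A minimal sketch of idle-time consolidation in the spirit of the autoDream process described above. The timing mechanism and the consolidation step (deduplicating and trimming notes) are assumptions for illustration, not the leaked implementation:

```typescript
class IdleConsolidator {
  private timer?: ReturnType<typeof setTimeout>;

  constructor(
    private notes: string[],
    private idleMs: number,
  ) {}

  // Called on every user action; resets the idle countdown so
  // consolidation only fires after a quiet period.
  touch(): void {
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => this.consolidate(), this.idleMs);
  }

  // Dedupe and trim notes so the next session starts with a tidy store.
  consolidate(): void {
    this.notes = [...new Set(this.notes.map((n) => n.trim()))];
  }

  snapshot(): string[] {
    return [...this.notes];
  }
}
```

The practical effect is that cleanup cost is paid during downtime instead of adding latency while the user is actively working.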
Internal model details were also exposed, including codenames such as Capybara, Fennec and Numbat. The data shows that even advanced models still face challenges, with some versions exhibiting a higher false-claims rate than their predecessors.
Another feature, “Undercover Mode,” suggests the AI can contribute to public projects without revealing its identity. The system includes instructions such as, “You are operating UNDERCOVER… Your commit messages… MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.”
Security risks for users
The leak also raises security concerns. With the system’s structure now public, attackers may attempt to exploit weaknesses. A separate supply-chain attack involving the axios npm package during the same timeframe has increased risks for users who installed updates on March 31, 2026.
What can users do now?
Anthropic has recommended switching to its native installer and avoiding the affected npm version. Users are also advised to follow a zero-trust approach, check their systems, and rotate API keys if needed.
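One concrete first step is checking whether the affected release is installed locally. A minimal sketch, assuming 2.1.88 is the only bad version as reported (the path and helper names are illustrative):

```typescript
import { existsSync, readFileSync } from "node:fs";

// Assumed affected release, per the report.
const AFFECTED = new Set(["2.1.88"]);
const PKG = "node_modules/@anthropic-ai/claude-code/package.json";

function installedVersion(pkgPath: string): string | undefined {
  if (!existsSync(pkgPath)) return undefined;
  const pkg = JSON.parse(readFileSync(pkgPath, "utf8")) as { version?: string };
  return pkg.version;
}

function isAffected(version: string | undefined): boolean {
  return version !== undefined && AFFECTED.has(version);
}

const v = installedVersion(PKG);
if (isAffected(v)) {
  console.warn(`Affected version ${v} installed; switch installers and rotate API keys.`);
}
```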