Anthropic has accidentally revealed a huge chunk of how its AI coding tool actually works.
A debug file bundled into version 2.1.88 of its Claude Code package briefly exposed a 500,000+ line codebase (via VentureBeat). This gave developers an unusually detailed look at the system behind one of the fastest-growing AI tools right now.
The file was pulled quickly, and Anthropic says no customer data or credentials were exposed. However, the damage, at least from a competitive standpoint, is already done. The code has been widely mirrored and picked apart online.
At a glance, the leak confirms that Claude Code is far more than a chatbot wrapper: it’s effectively a multi-layered system for managing long-running AI tasks. There is also a heavy focus on memory, specifically on solving the problem of the AI “forgetting” or getting confused over time.
Developers analysing the code pointed to a “self-healing memory” system that avoids storing everything at once. Instead, it keeps a lightweight index (called MEMORY.md) and pulls in relevant information only when needed. The idea is simple: less clutter, fewer hallucinations.
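To make the idea concrete, here is a minimal, hypothetical sketch of that index-and-retrieve approach. The MEMORY.md name comes from the leak reports, but the parsing format, function names, and matching logic below are illustrative assumptions, not Anthropic’s actual implementation.

```python
# Hypothetical sketch of an index-and-retrieve memory scheme.
# MEMORY.md is the index name reported from the leak; the format
# and matching logic here are invented for illustration.

def load_index(index_text: str) -> dict[str, str]:
    """Parse a lightweight index mapping topics to memory file paths."""
    index = {}
    for line in index_text.splitlines():
        if ":" in line:
            topic, path = line.split(":", 1)
            index[topic.strip()] = path.strip()
    return index

def relevant_memories(index: dict[str, str], query: str) -> list[str]:
    """Return only the memory files whose topic appears in the query,
    instead of loading every stored memory into context at once."""
    return [path for topic, path in index.items()
            if topic.lower() in query.lower()]
```

The point of the design is that the model’s working context only ever contains the small index plus whatever entries match the task at hand, which is where the “less clutter, fewer hallucinations” claim comes from.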
Another standout is something called KAIROS, which hints at a shift towards more autonomous AI. Rather than waiting for prompts, Claude Code can run background processes. This includes a feature dubbed autoDream that tidies up its own memory while idle. The result is a more proactive approach than most current AI tools take.
The leak also reveals internal model codenames and performance struggles. Notably, one newer model variant reportedly shows a higher false-claim rate than earlier versions. This suggests Anthropic is still ironing out reliability issues even as it scales.
There are also signs of more experimental features, including an “undercover” mode designed to let the AI contribute to public codebases without revealing it’s AI-generated.
For users, Anthropic says there’s no immediate risk. However, the company has warned developers to update away from the affected version and to avoid npm installs made during a specific window tied to a separate supply-chain attack.
For everyone else, this is a rare glimpse behind the curtain, and a reminder that the race to build smarter, more autonomous AI is still very much in progress.
The post Claude Code leak exposes how Anthropic’s AI really works appeared first on Trusted Reviews.