Anthropic Faces Pressure as Claude Code Token Issues Disrupt Developers

Anthropic is racing to fix a problem that is frustrating users of its AI coding assistant, Claude Code, as complaints grow around unpredictable usage limits.

The company confirmed it is investigating why token limits are being reached faster than expected, calling the fix its “top priority”. For developers who rely on the tool daily, the issue cuts deeper than a minor glitch. It directly affects productivity, cost control, and trust in AI-assisted workflows.

Claude Code has gained traction as a practical tool for writing and debugging software. Developers integrate it into routine tasks much like they would a colleague reviewing code. When that system becomes unreliable, the disruption mirrors a key hire suddenly underperforming.

Customers purchase tokens to access AI services, yet the pricing model lacks clarity. Users often struggle to predict how quickly tokens will be consumed, especially as complexity varies between tasks. That uncertainty now sits at the centre of user frustration.

Feedback on Reddit highlights the inconsistency. One user reported hitting the token limit “much later” on their free account than on their $100 (£75) a month paid account. Another pointed to inefficiencies in debugging cycles, warning: “One session in a loop can drain your daily budget in minutes”. A third flagged extreme jumps in usage: “A simple one sentence reply to a conversation just took me from 59% usage to 100%. How??”

Anthropic recently introduced peak-hour throttling, meaning tokens are consumed more quickly during periods of high demand. The move reflects a broader challenge across AI platforms: balancing infrastructure costs with user expectations. Yet for paying customers, it raises a critical question—should pricing fluctuate based on system load rather than user intent?

Developers depend on consistency. A sudden spike in token consumption can derail a workflow in the same way unexpected cloud costs can strain a startup’s budget. For teams operating under tight deadlines, even minor inefficiencies compound quickly.

Pricing tiers add another layer of complexity:

  • A Claude Pro subscription starts at $20 per month
  • Higher usage tiers can reach $100 or $200 per month
  • Enterprise pricing scales further for organisations

When costs escalate without clear cause, budgeting becomes guesswork rather than strategy.

The issue lands at a sensitive moment for Anthropic. The company recently disclosed that “human error” led to the accidental release of an internal file containing 500,000 lines of Claude Code source code on GitHub. It emphasised that “no sensitive customer data or credentials were exposed or involved”, but the incident still highlights operational risks in a fast-scaling AI environment.

Parts of Claude Code were already known through reverse engineering, and an earlier version had leaked in February 2025. Still, repeated exposure raises questions about internal controls and governance.

Anthropic is also navigating a legal battle with the US government over how its tools may be used by the Department of Defense. That scrutiny, combined with technical instability, places the company at a crossroads.

What happens if developers begin to question not just performance, but predictability? In a market where alternatives are rapidly improving, reliability may prove more valuable than raw capability.

Author: Pishon Yip
