The “Claw Tax”: Anthropic, OpenClaw, and the Friction Between Platforms and Open Source

A brief but intense confrontation between Anthropic and Peter Steinberger, the creator of OpenClaw, has highlighted a growing tension in the AI industry between proprietary ecosystem control and open-source integration.

The Incident: A Temporary Ban

On Friday, Peter Steinberger—the creator of the OpenClaw tool and a current employee at OpenAI—reported that Anthropic had suspended his account, citing “suspicious activity.” The ban sparked immediate controversy on social media, particularly because Steinberger’s work involves developing OpenClaw to function across multiple AI models, including Anthropic’s Claude.

While the account was reinstated within hours following viral backlash, the incident exposed a deeper rift over how AI companies manage third-party developers. Notably, an Anthropic engineer intervened in the discussion, clarifying that the company does not ban users specifically for using OpenClaw and offering assistance to resolve the matter.

The Shift in Pricing: From Subscriptions to API

The tension stems from a recent policy change by Anthropic. Previously, Claude subscriptions provided a level of access that allowed for certain third-party integrations. However, Anthropic recently announced that Claude subscriptions will no longer cover usage through “third-party harnesses” like OpenClaw.

Instead, users of such tools must now pay via Anthropic’s API, which charges based on actual consumption. This has led to what Steinberger calls a “claw tax.”
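The economics of that shift can be sketched with simple arithmetic. The rates and token counts below are hypothetical placeholders, not Anthropic's actual pricing; the point is only that usage-based API billing scales with consumption, so a long-running agent session costs far more than a single chat exchange.

```python
# Illustrative only: rates and token counts are hypothetical,
# not Anthropic's actual pricing.

def api_cost(input_tokens: int, output_tokens: int,
             rate_in_per_mtok: float, rate_out_per_mtok: float) -> float:
    """Estimate a usage-based API bill in dollars from token counts
    and per-million-token rates."""
    return (input_tokens / 1_000_000) * rate_in_per_mtok + \
           (output_tokens / 1_000_000) * rate_out_per_mtok

# A chat turn consumes a few thousand tokens; an agent session that
# loops through many model calls can consume millions.
chat_session = api_cost(20_000, 5_000,
                        rate_in_per_mtok=3.0, rate_out_per_mtok=15.0)
agent_session = api_cost(2_000_000, 400_000,
                         rate_in_per_mtok=3.0, rate_out_per_mtok=15.0)

print(f"chat:  ${chat_session:.2f}")
print(f"agent: ${agent_session:.2f}")
```

Under a flat subscription, both sessions cost the provider very different amounts but bill the user identically; metered API pricing removes that subsidy, which is precisely the "claw tax" complaint.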

Why Did Anthropic Change the Rules?

Anthropic defended the move by citing technical and economic necessity:
High Compute Intensity: Unlike standard chat prompts, “claws” (automated agents) often run continuous reasoning loops.
Automated Loops: These tools frequently retry tasks or connect to multiple third-party services, creating usage patterns that standard consumer subscriptions were not designed to absorb.
Resource Management: By moving these users to the API, Anthropic ensures that the heavy computational load of autonomous agents is billed appropriately.
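The usage pattern described above can be sketched as a minimal agent loop. Everything here is a hypothetical stand-in (the `call_model` function simulates a model API rather than calling one), but it shows why a single user task can fan out into many billable model calls where a chat turn would be exactly one.

```python
import random

def call_model(prompt: str) -> str:
    """Stand-in for a real model API call; returns a simulated action.
    In a real harness this would be a billable network request."""
    return random.choice(["search", "edit", "done"])

def run_agent(task: str, max_steps: int = 25) -> int:
    """Run a reasoning loop until the model reports 'done' (or the step
    cap is hit); return the number of model calls consumed."""
    calls = 0
    for _ in range(max_steps):
        action = call_model(task)
        calls += 1
        if action == "done":
            break
        # Otherwise the harness executes a tool (search, file edit, ...)
        # and feeds the result back to the model, triggering another call.
    return calls

print(run_agent("fix the failing test"))
```

Each iteration of the loop is a full model invocation, and retries or tool round-trips multiply the count further, which is the load pattern consumer chat subscriptions were not priced for.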

The Conflict of Interest and Ecosystem Control

Despite the technical explanation, Steinberger suggests a more strategic motive. He pointed out a pattern where Anthropic rolls out new features—such as the Claude Dispatch agentic capabilities—and shortly thereafter implements pricing changes that restrict open-source alternatives.

This raises a critical question for the AI industry: Are major model providers intentionally “locking in” users by making third-party, open-source integrations more expensive or difficult to maintain?

The friction is further complicated by Steinberger's professional position. Although he is an employee at OpenAI, his work on OpenClaw (via the OpenClaw Foundation) aims to ensure the tool works seamlessly with any model provider. He maintains that his testing of Claude is essential to keep OpenClaw functional for the many users who prefer Claude over ChatGPT.

“One [OpenAI] welcomed me, one [Anthropic] sent legal threats,” Steinberger remarked, reflecting the heightened hostility currently felt in the competitive landscape between AI giants and the developers building on top of them.

Conclusion

The standoff between Anthropic and OpenClaw illustrates the growing divide between “closed” AI ecosystems and the open-source community. As AI agents become more autonomous and resource-heavy, the battle over who controls—and who pays for—the infrastructure of these agents will likely intensify.