[Image: a smartphone screen showing the app icons for ChatGPT, OpenClaw, and Claude]

Locked Out: The Bitter Battle Between Anthropic and OpenClaw’s Founder

Anthropic just made a move that sent shockwaves through the developer community: it temporarily banned Peter Steinberger, creator of the popular open source tool OpenClaw, from accessing its Claude models. Steinberger shared a screenshot of the ban on Friday morning; it claimed his account had been flagged for suspicious activity. The ban didn’t last long. Once his post went viral on social media, access was restored within a few hours. But while the technical block was brief, the drama behind it reveals a much deeper conflict between the big AI labs and the independent developers who build on their technology.

The timing of the ban is what really has people talking. Just last week, Anthropic changed its rules so that standard subscriptions no longer cover third-party tools like OpenClaw; anyone who wants to use these tools now has to pay for usage through the API. Steinberger calls this a “claw tax.” He was following the new rules and paying for API access when he got hit with the ban anyway, which has led many to wonder whether the ban was a mistake or a targeted move. One major detail makes the situation even more tense: Steinberger works for Anthropic’s biggest rival, OpenAI.

Anthropic says it changed the pricing because tools like OpenClaw consume a lot of resources. Unlike a simple chat, these “agentic” tools run continuous loops, retry failed tasks, and connect to many other apps, making them far more expensive to serve than a standard conversation. But Steinberger isn’t buying that explanation. He points out that Anthropic appears to be copying features from open source tools into its own paid service, Cowork, and then locking out the competition. He believes the big labs are trying to kill off open source alternatives to protect their profits.
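The cost asymmetry Anthropic points to comes from the loop structure itself: one user request can fan out into many model calls, and each round typically re-sends the whole growing conversation. A minimal sketch of that dynamic — `call_model` and `run_tool` here are hypothetical stand-ins, not any real Anthropic or OpenClaw API:

```python
# Hypothetical sketch of why an agentic loop burns far more tokens than a chat.
# call_model and run_tool are placeholders that simulate a model which asks
# for three tool calls before producing a final answer.

def call_model(history):
    tool_rounds = sum(1 for m in history if m["role"] == "tool")
    if tool_rounds < 3:
        return {"type": "tool_call", "tool": "search"}
    return {"type": "final", "text": "done"}

def run_tool(call):
    return {"role": "tool", "content": f"result of {call['tool']}"}

def agent_loop(prompt, max_steps=10):
    history = [{"role": "user", "content": prompt}]
    model_calls = 0
    for _ in range(max_steps):
        reply = call_model(history)
        model_calls += 1
        if reply["type"] == "final":
            return reply["text"], model_calls
        # Each tool round appends to history, and every new call re-sends
        # the whole transcript, so token usage grows faster than linearly.
        history.append(run_tool(reply))
    return None, model_calls

answer, calls = agent_loop("summarize my inbox")
print(answer, calls)  # prints: done 4
```

Even in this toy version, a single chat turn becomes four model calls; real agents that retry failures and chain tools multiply that further, which is the resource argument in a nutshell.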

The social media fallout was brutal. When people pointed out that Steinberger chose to work at OpenAI instead of Anthropic, he replied with a stinging comment. He said one company welcomed him while the other sent him legal threats. Even though he works for OpenAI, he still uses Claude to test OpenClaw and make sure it works for everyone. He says his job at OpenAI is about helping with future product strategy, while his work at the OpenClaw Foundation is about making sure AI tools stay open for everyone.

This fight shows a growing problem in the AI world. As companies like Anthropic and OpenAI grow, they are moving away from the open spirit that started the industry. They are building “walled gardens” where they control everything and charge for every interaction. Independent developers are finding it harder and harder to build on top of these models without getting squeezed. OpenClaw is a favorite for many users because it gives them more control than the official apps. By making it more expensive and difficult to use, Anthropic is forcing users back into their own ecosystem.

The future of open source AI is at a crossroads. If the big labs keep blocking or taxing independent tools, we may see a massive shift toward models that are truly open and free from corporate control. Steinberger’s experience is a warning to every developer: you might build something that millions of people love, but you are always one button click away from being shut down by the company that owns the underlying model. For now, the ban is over, but the trust between Anthropic and the developer community will take a long time to rebuild.