What's the Difference?

Anthropic entered the ring.

Not the research ring. They were already there. The commercial ring. Claude is not a research project with an API bolted on. It is a product: over 500 enterprise customers spending seven figures a year, annualized revenue past $14 billion, built to compete and priced to sustain.

The interesting part is not the commercial success. Everyone is commercially successful right now; the market is large enough that competence is sufficient. The interesting part is that Anthropic built a commercially sustainable business and refused the Pentagon’s ultimatum in the same quarter.


“Open” is a word that has been doing a lot of work

We have covered this pattern. OpenAI began open. Open research, open charter, “all of humanity.” Then the models went closed, the weights went proprietary, the safety framework got rewritten, and the nonprofit got converted. The name stayed. The openness did not.

We made the same argument about Microsoft naming its OS “Windows.” Names are branding. Architecture is commitment. When someone names their company “open” and then closes the source, the weights, the governance, and the safety framework, the name is a receipt from a decision that was reversed.

Anthropic never called itself “open.” It called itself “responsible.” That word has taken its own hits. We documented how RSP 3.0 weakened the binding commitments that distinguished its safety framework. No company is immune to competitive pressure.

But the distinction that matters: Anthropic weakened its internal framework and held its external red lines. RSP 3.0 bent. The refusal to build mass surveillance tools and autonomous weapons did not. OpenAI weakened its internal framework and held nothing. The safety board was dissolved. The charter was reinterpreted. The military prohibition was removed. The nonprofit was restructured.

Both companies face the same pressures. One drew lines and held them. One drew lines and erased them.


The comparison that matters

People want to compare models. Context windows. Benchmarks. Tokens per second. Price per million tokens. The comparison is comfortable because it is measurable.

It is also temporary. The models are converging. A lead in February is a tie in April. If your basis for choosing an AI provider is which model scores 2 percent higher on HumanEval this month, you are optimizing for a variable that will not hold.

The variable that holds is the company. The entity that decides what the model is and is not allowed to do after your data is in the pipeline and switching costs make leaving expensive. That is when the character of the vendor matters. Not during the sales cycle. After it.

We made this argument about infrastructure broadly: PostgreSQL won because Oracle punished its own customers. Linux won because Windows could not be audited. Kubernetes won because proprietary orchestration meant proprietary lock-in. The technology that won long-term was not the one with the most revenue in year three. It was the one that did not make its users regret the dependency in year seven.

The AI industry is in year three. The contracts being signed today will produce dependencies that last a decade.


Have a motherfucking conscience

The alignment problem is not just in the model. It is in the company.

A perfectly aligned model deployed by a company with no conscience is a well-trained tool in the hands of someone who will use it for whatever pays. The 220 engineers at Google and OpenAI who signed a petition were not asking for a better model. They were asking for a company that gives a damn.

Every integration is an endorsement. Every dependency is a relationship. A vendor that will sell out its principles under pressure will sell out its customers under pressure. The question is not whether the pressure will come. It always comes. The question is what the company does when it arrives.

OpenAI aligned its model. Then it misaligned its company. The military prohibition was removed. The safety board was dissolved. The charter was reinterpreted. The nonprofit was restructured. The model is aligned. The organization is not.

Anthropic held the line on the decisions that matter most. Not perfectly. RSP 3.0 was a real concession. But mass surveillance: no. Autonomous weapons: no. Pentagon ultimatum: still no. The company bent where the pressure was diffuse and held where the stakes were existential.

Anthropic benefits from this. The refusal is genuine, but it is not selfless. Being the principled lab attracts the researchers, the enterprise customers, and the developers who care what their tools are used for. Anthropic’s conscience is good business. That does not make it fake. It makes it sustainable, which is more important than pure.

But sustainable is not permanent. We have spent this entire series arguing that voluntary commitments fail under pressure. Anthropic is not exempt. A different CEO, a different board, a different quarter where the revenue gap widens. The structural incentives do not disappear because the current leadership has conviction. The line held this time. Whether it holds next time depends on whether regulation exists by then to hold it from the outside.


Leading the Future (into what, exactly?)

The difference is not hypothetical. It is playing out in real time, with real money, in a congressional race in New York.

Alex Bores is a computer engineer turned New York State assemblymember. He co-authored the RAISE Act, signed into law in March 2025, which requires frontier AI developers to disclose safety protocols and report serious system misuse. Not a ban. Not a moratorium. Transparency requirements and a dedicated enforcement office. The kind of regulation that a company confident in its own safety work would welcome.

OpenAI’s president, Greg Brockman, co-founded a super PAC called Leading the Future with Andreessen Horowitz, Palantir co-founder Joe Lonsdale, and Perplexity. It has $125 million in commitments and $70 million cash on hand. Its Democratic affiliate, Think Big PAC, ran attack ads against Bores claiming his legislation would “CRUSH INNOVATION, COST NEW YORK JOBS and FAIL TO KEEP PEOPLE SAFE.” The ad argued that Albany bureaucrats should not regulate AI. The same argument every industry makes about every regulator until the thing they were not regulating causes a crisis.

Anthropic put $20 million into Public First Action, a bipartisan group backing candidates who support AI transparency and safety standards. Public First Action is spending $450,000 to support Bores in the race for New York’s 12th Congressional District.

Read that again. OpenAI’s president co-founded a $125 million political operation. One of its first targets was the lawmaker who wrote a transparency bill. Anthropic put $20 million behind the group backing him. The company that says “trust us, we are responsible” is funding attack ads against the person who said “prove it.” The company that held the line on weapons and surveillance is funding his campaign.

This is not a benchmark comparison. This is a values comparison with a dollar sign attached. And unlike benchmarks, it will not converge to parity next quarter.


What’s the difference?

Both companies build frontier models. Both compete for enterprise revenue. Both ship fast. Both want to win. Everything measurable is approaching parity.

The only durable differentiator is the thing that cannot be benchmarked.

One company proved you can compete and still say no when saying no is expensive. The other proved you can say yes to everything and still call yourself responsible.

That is the difference. Today. Whether it remains the difference depends on whether we build the regulatory framework that makes conscience a requirement instead of a differentiator. Anthropic made the harder choice. But a market where the right decision is the harder decision is one leadership change away from nobody making it.


This post is part of a series on AI policy and accountability. See also: “Cannot in Good Conscience,” “Windows, Walls, Gates,” “OpenClaw Is Joining OpenAI. It Is Staying Open Source.,” and “Safety Was the Product. Now It Is the Obstacle.”