Two Boycotts

The government boycotted the company that said no. The public boycotted the company that said yes.

On February 27, President Trump ordered every federal agency to stop using Anthropic’s technology. Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk,” a label historically reserved for adversarial foreign technology firms like Huawei and Kaspersky. Federal agencies and contractors have six months to phase out all existing business with the company.

Hours later, OpenAI announced a deal with the Pentagon. Not days. Hours. The ink on the ban was not dry before OpenAI stepped into the space Anthropic’s refusal created.

By Saturday, ChatGPT uninstalls had jumped 295%. One-star reviews surged 775%. Claude took the top spot in the App Store. The QuitGPT movement had millions of supporters within days.

Two boycotts. One from the top. One from the bottom. Both responding to the same question: should AI companies be able to set limits on how the government uses their technology?


What OpenAI accepted

We documented the Pentagon’s demands two weeks ago. The terms: accept “any lawful use” of AI models across military operations. Remove restrictions on mass domestic surveillance and fully autonomous weapons. The standard Anthropic rejected is the standard OpenAI accepted.

OpenAI’s published agreement with the Pentagon does not give OpenAI the right to prohibit otherwise-lawful government use. It states that the Pentagon will not use the technology to break laws and policies as they are currently written. That is not a restriction. That is a restatement of the status quo. The government was already required to follow its own laws. The contract adds nothing.

Sam Altman told OpenAI staff that the military’s “operational decisions” are up to the government. This is the position Anthropic explicitly rejected: that the builder of a weapon system has no responsibility for how it is used once delivered. Every defense contractor in history has made this argument. It has never been accepted as a complete answer by the public, by the courts, or by history.


What the public rejected

The QuitGPT movement was not organized by a competitor. It was not a marketing campaign. It started on social media and spread because people understood the trade: a company that had already removed its prohibition on military use, dissolved its safety board, and reinterpreted its charter to permit a for-profit conversion had now positioned itself as the willing alternative to the company that refused unrestricted military access.

The numbers tell the story. OpenAI reportedly lost approximately 1.5 million paid subscribers in the first week — potentially $30 million or more in monthly recurring revenue. FEC filings revealed that OpenAI President Greg Brockman made a $25 million personal contribution to MAGA Inc., a pro-Trump super PAC. Brockman is the same executive who co-founded a $125 million political operation targeting lawmakers who write AI transparency legislation.

The public connected the dots. The Pentagon contract was not an isolated decision. It was the latest in a sequence: safety board dissolved, charter reinterpreted, military prohibition removed, nonprofit restructured, $25 million to the political operation backing the administration that banned the competitor. Each decision was individually defensible. Together, they describe a trajectory.


The timing is the tell

Companies respond to government contracts all the time. Defense procurement is a legitimate market. The Pentagon needs AI capabilities, and companies should compete to provide them.

But OpenAI did not compete on a parallel timeline. It moved within hours of the government punishing a competitor for maintaining ethical restrictions. The sequence matters: competitor refuses unrestricted access, government bans competitor, OpenAI announces deal to provide unrestricted access. That is not winning a contract. That is filling a vacancy created by government coercion.

The distinction is between “we built a better product and won the bid” and “our competitor was blacklisted for having principles and we raised our hand.” One is competition. The other is opportunism. The public saw the difference, even if the contract language did not make it explicit.


What the boycotts measure

Neither boycott is about the technology. Claude and GPT are converging in capability. By the end of the year, the benchmark gap will be negligible. The boycotts are about something the benchmarks do not measure.

The government boycotted Anthropic because it values compliance. A contractor that sets conditions on how its product is used is a contractor that might say no at the wrong time. The ban is a message to every other AI company: cooperate fully or face designation as a national security threat. As we documented, the contradiction is the point. A company cannot be both a supply chain risk and a critical supplier. The framing is not an assessment. It is leverage.

The public boycotted OpenAI because it values integrity. A company that races to fill the gap left by a principled competitor, while its president funds the political apparatus that created the gap, has revealed what it prioritizes. Users can tolerate a lot from their tools. They cannot tolerate a company that treats a competitor’s ethics as a market opportunity.

Anthropic sued the Trump administration on March 9 to challenge the supply chain risk designation. The lawsuit says the company faces “irreparable” harm and could lose hundreds of millions of dollars. This is not a company grandstanding. It is a company fighting for survival after making the decision the public rewarded and the government punished.


The market is a signal

Millions of people did not leave ChatGPT because Claude is a better model. They left because the company behind ChatGPT made a sequence of decisions that told them everything they needed to know about what would happen with their data, their trust, and their dependency once switching costs made leaving expensive.

This is the argument we have been making across this entire series. The durable differentiator between AI providers is not the model. It is the entity that controls the model. The entity that decides what it is used for, who gets access, and what happens when a government shows up with an ultimatum and a checkbook.

OpenAI answered that question. So did Anthropic. The government chose OpenAI. The public chose Anthropic. Two boycotts, two directions, and the market is making its preference visible in App Store rankings, subscriber counts, and a $30 million hole in monthly recurring revenue.

The models converge. The companies do not. That is what the boycotts are measuring.


This post is part of a series on AI policy and accountability. See also: “Cannot in Good Conscience,” “What’s the Difference?,” “Safety Was the Product. Now It Is the Obstacle.,” and “They Asked for Regulation. Here’s How It’s Going.”