Two days ago, we wrote that safety was the product until it became the obstacle. We documented Anthropic’s revision of its Responsible Scaling Policy, the Pentagon’s ultimatum to CEO Dario Amodei, and the structural incentives that make voluntary safety commitments collapse under commercial and government pressure.
On Thursday, Anthropic answered the ultimatum. Amodei published a statement: the company “cannot in good conscience accede” to the Pentagon’s demands. The deadline was today. Anthropic let it pass.
This is the right decision. It is also the wrong story. Because the story is not that one company refused. The story is that one company is the only one left to refuse.
What the Pentagon demanded
Defense Secretary Pete Hegseth met with Amodei on Tuesday and delivered terms: accept “any lawful use” of Claude across military operations and remove all safeguards, or face consequences. The consequences were specific. Cancellation of existing contracts. Designation as a “supply chain risk,” a label historically reserved for adversarial foreign technology firms like Huawei and Kaspersky. Invocation of the Defense Production Act to compel development of a military-tailored model regardless of the company’s consent.
The Pentagon sent what it called a “best and final offer” on Wednesday. Anthropic said the new contract language “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.” Anthropic offered to conduct joint R&D to improve system reliability for military applications. The Pentagon declined.
The framing deserves the scrutiny we gave it in the previous post. Hegseth simultaneously characterized Claude as essential to national security operations and threatened to label its maker a supply chain risk. As Amodei noted, these positions are “inherently contradictory.” A company cannot be both a critical supplier and a national security threat. The contradiction is not an oversight. It is leverage.
What Anthropic refused
Anthropic drew two lines and held them.
Mass domestic surveillance. Anthropic supports lawful foreign intelligence collection. It refuses to provide AI for bulk surveillance of American citizens. Amodei’s reasoning is specific: “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.” He noted that the federal government already purchases detailed movement, browsing, and association records without warrants. Adding frontier AI to that apparatus would enable automated profiling at a scale and granularity that no previous surveillance technology has achieved. A system that can process billions of communications simultaneously does not just find threats faster. It redefines what constitutes a threat, and the redefinition is not subject to review.
Fully autonomous weapons. Anthropic distinguishes between partial autonomy (acceptable; already deployed in Ukraine) and full autonomy (unacceptable). The position is not ideological. It is technical: “frontier AI systems are simply not reliable enough to power fully autonomous weapons.” The company offered to work with the Pentagon on improving reliability. The offer was declined. The Pentagon does not want reliable autonomous weapons. It wants unrestricted access now.
Pentagon spokesman Sean Parnell responded that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal).” The response is accurate and insufficient. The question is not what the military intends today. The question is what an unrestricted system permits tomorrow, under a different secretary, a different administration, or a different interpretation of “lawful.”
What Anthropic has already provided
This is not a company that refused to work with the military. Anthropic is the first frontier AI company to deploy models on classified U.S. government networks. The first to deploy at National Laboratories. The first to provide custom models for national security customers. Claude is extensively deployed across the Department of Defense and intelligence agencies for intelligence analysis, modeling and simulation, operational planning, and cyber operations.
This matters because the Pentagon’s framing erases it. The narrative being constructed is that Anthropic is obstructing national security. The record shows a company that has been more aggressive than any competitor in deploying AI to military and intelligence applications, and that drew the line at exactly two use cases: mass surveillance of citizens and weapons that fire without human involvement.
The Pentagon is not asking Anthropic to contribute more. It is asking Anthropic to contribute without limits. The distinction is the entire argument.
The competitive landscape has already collapsed
Anthropic is the last holdout among the four Pentagon AI contractors. Google, OpenAI, and xAI have either joined or pledged to join the military’s GenAI.mil network without the restrictions Anthropic maintains.
OpenAI removed its prohibition on military use in January 2024. Google dissolved its AI ethics board years ago. xAI, operated by an individual with direct financial ties to the current administration, has no published usage restrictions for military applications. The competitive field has already cleared. Every other company looked at the same pressure Anthropic is facing and chose the contract.
This is the structural problem. When three of four competitors capitulate, the fourth is not making a free choice. It is choosing between its stated principles and its survival as a government contractor. Anthropic chose its principles. But the choice should not have been necessary. The reason it was necessary is that no regulatory framework exists to prevent the government from demanding unrestricted access in the first place.
“Lawful” is not a safeguard
The Pentagon’s position is that the military issues only lawful orders, and compliance is the military’s responsibility, not Anthropic’s. Therefore, any restriction Anthropic places on Claude’s military use is unnecessary and obstructionist.
This argument has a structural flaw. “Lawful” is defined by the entity issuing the orders, interpreted by counsel employed by the same entity, and reviewed (if at all) by courts that grant extraordinary deference to national security claims. The FISA court approved 99.97% of surveillance requests between 1979 and 2012. The legal opinions authorizing warrantless wiretapping under the Bush administration were classified for years. The legal memos justifying targeted killings were withheld from Congress. “Lawful” in the national security context is not an independent check. It is a designation that the executive branch applies to its own actions.
When the Pentagon says “we only ask for lawful use,” it is saying “trust our legal interpretation.” Anthropic is saying “we have seen what legal interpretations produce when they are not subject to independent review, and we are not comfortable providing the tool.” That is not obstruction. It is the minimum responsible position for a company whose technology could enable surveillance and targeting at a scale that previous legal frameworks never contemplated.
The Defense Production Act is not a policy tool
The threat to invoke the Defense Production Act deserves specific attention. The DPA was enacted in 1950 to ensure industrial capacity during the Korean War. It authorizes the president to direct private companies to prioritize government contracts and to allocate materials for national defense.
Using the DPA to compel an AI company to remove safety features from a commercial product is not an exercise of wartime authority. It is the weaponization of emergency powers to override corporate governance on a technology policy question. If the government can use the DPA to force Anthropic to remove safeguards, it can use the DPA to force any technology company to remove any feature the government finds inconvenient. The precedent is not about AI. It is about the boundary between government authority and private-sector discretion over product design.
The DPA has been invoked for ventilator production during COVID-19, for domestic semiconductor manufacturing alongside the CHIPS Act, and for rare earth mineral supply chains. In each case, the purpose was to increase production of a needed good. This would be the first use to compel a company to make its product less safe. The distinction matters.
The self-regulation experiment has a third result
In the previous post, we documented two tests of the self-regulation model. OpenAI shipped GPT-5.3-Codex without implementing its own safety framework’s required mitigations. Anthropic revised its Responsible Scaling Policy to eliminate the binding commitment to pause development when safety measures lag capabilities. Two companies. Two voluntary frameworks. Two frameworks rewritten when they conflicted with the business objective.
Anthropic’s refusal of the Pentagon’s demands is the third test, and it produced a different result. The same company that weakened its internal safety framework held its external red lines against direct government pressure. RSP 3.0 bent. The surveillance and autonomous weapons lines did not.
This is not a contradiction. It is a demonstration of where voluntary commitments actually hold. They hold when the cost of capitulation is existential: when the specific act being demanded is so clearly wrong that no business justification can override it. They do not hold when the erosion is gradual, the language is ambiguous, and the competitive pressure is diffuse.
Anthropic will not build a surveillance engine. That line held. Anthropic will not pause training when safety measures lag capabilities if a competitor is ahead. That line did not. The difference is not courage. It is specificity. A bright line against a named harm is defensible. A general commitment to “responsibility” is not.
This is the strongest argument yet for regulation. Not because companies are irresponsible. Because the only commitments that survive pressure are the ones specific enough to be indefensible to break. Voluntary frameworks produce vague commitments. Vague commitments erode. Regulation produces specific, enforceable requirements. Specific requirements hold. Anthropic has demonstrated both sides of this in the same week.
The employees already know
On the same day Anthropic let the Pentagon’s deadline pass, current and former employees of Google and OpenAI signed a joint petition asking their own companies’ leadership to hold the same two lines Anthropic held. By midday, more than 220 had signed, and the number was growing. The letter is titled “We Will Not Be Divided,” and its position is precise: oppose the Pentagon’s demands regarding mass domestic surveillance and autonomous weapons that kill without human oversight.
The petition names the pressure campaign directly. It states that the Pentagon is negotiating with Google and OpenAI “to try to get them to agree to what Anthropic has refused.” It describes the threat to invoke the Defense Production Act against Anthropic. It notes that the strategy is division: isolate the company that refused, then use that isolation as leverage against the others. The title is not aspirational. It is a counter-strategy.
The signatories do not claim to agree on everything. The petition’s organizers state explicitly that signers need not share the same politics, the same views on AI regulation, or the same assessment of the technology’s risks. They need only agree that these two specific use cases are wrong. Mass surveillance of citizens. Autonomous killing without human oversight. The coalition is built on the narrowest possible common ground, which is what makes it credible.
This matters for a specific reason. The Pentagon’s leverage depends on the assumption that the workforce will follow where the contracts lead. That if leadership capitulates, the engineers who build the models will build what they are told. The petition says otherwise. Not because the employees are refusing to work. Because they are stating, on the record, that they know the difference between building AI for national security and building AI for unrestricted deployment, and they are asking their leadership to maintain that distinction.
The verification process is worth noting. Every signature is verified before publication. Signers can remain anonymous; their personal data is deleted within 24 hours. The organizers are not affiliated with any political party, advocacy group, or organization. This is not an activist campaign. It is a statement from the people who build the systems, directed at the people who run the companies, about what those systems should not be used for.
Under Secretary of War Emil Michael responded by accusing Anthropic’s leadership of having a “God-complex.” The characterization is revealing. When a CEO refuses an unrestricted military contract, it is a God-complex. When 220 engineers say the same thing, it is still a God-complex. The framing does not engage with the substance of the objection. It dismisses the objectors. That is the response of an institution that expects compliance, not agreement.
The question is structural, not moral
Anthropic deserves credit for this decision. Dario Amodei put the company’s government contracts, its competitive position, and potentially its operational independence on the line. That is not nothing. In an industry where every other major player has already capitulated, refusing took genuine conviction. And now 220 employees at the companies that did capitulate are saying their leadership got it wrong.
But individual conviction is not a policy framework, and neither is a petition. Anthropic held the line because its leadership chose to. The Google and OpenAI employees signed because their conscience demanded it. A different CEO, a different board, a different quarter’s earnings pressure, and the calculation changes. A petition can be ignored. The defense against mass surveillance and autonomous weapons should not depend on the moral constitution of whoever happens to run the company that built the best model, or on whether enough engineers are willing to put their names on a letter.
The correct response to Anthropic’s refusal is not applause, and the correct response to the petition is not admiration. The correct response is the recognition that we are relying on one company’s conscience and 220 employees’ courage to hold a line that should be held by law. The fact that Anthropic refused is admirable. The fact that employees at Google and OpenAI had to organize a petition to ask for what should be obvious is damning. The fact that all of this may not be enough is a policy failure of the first order.
One company said no; 220 employees said their companies should have, too. The question is why any of this was necessary.
The answer is the same one we have been documenting across this series. Voluntary commitments fail. Self-regulation fails. The experiment has produced its results. The only commitments that hold are the ones backed by law, and the law does not exist yet.
If you are a current or former employee of Google or OpenAI and you agree that frontier AI should not be deployed for mass domestic surveillance or autonomous weapons without human oversight, the petition is at notdivided.org.
This post is part of a series on AI policy and accountability. See also: “Safety Was the Product. Now It Is the Obstacle.”, “You Built the Training Set”, and “The Foundation Is Physical”.