The previous post examined what happens when national security pressure meets voluntary safety commitments. The commitments lose. But the national security framing, as urgent as it is, has an effect the Pentagon did not intend and the industry is happy to exploit: it narrows the conversation.
When AI regulation is discussed as a national security problem, the constituency is the state. Can adversaries use this model? Can the military? Can intelligence agencies? These are real questions. They are not the only questions.
The larger constituency is the American public. Not as an abstraction. As the specific population whose labor, creative output, economic behavior, and personal data constitute the training corpus on which every frontier model is built. The people who made AI possible are not a stakeholder group being consulted. They are a resource being consumed.
The commercial sector is unregulated by design
There is no federal regulatory framework governing commercial AI deployment in the United States. None. The EU has the AI Act. Canada has AIDA. The UK has its principles-based framework. The United States has executive orders that can be rescinded by the next administration (and have been), voluntary commitments that can be revised when inconvenient (and have been), and a legislative environment where every proposed AI bill is met with the same objection: regulation will slow innovation.
This is not an oversight. It is a policy choice. The commercial AI sector operates in a regulatory vacuum because the companies that profit from that vacuum have successfully argued that filling it would cost America its competitive advantage.
“Regulation will slow us down” is always true in the short term and always irrelevant in the long term, because the question was never whether regulation slows things down. The question is whether the absence of regulation produces outcomes that a functioning society can tolerate.
The data was not donated
Every frontier model is trained on data produced by the American public. Text, images, code, medical records, financial transactions, social media posts, search queries, location histories, purchase patterns, professional communications. The scope is comprehensive. The consent is nonexistent.
The legal theory underpinning this extraction is that publicly available data is not owned by its creators. If you posted it, published it, or failed to read the terms of service that licensed it, the data is available for ingestion. This theory is convenient. It is also a policy choice, not a natural law. Copyright, privacy, and data protection law could require consent, compensation, or both. They do not, because the entities that benefit from the current framework have the resources to ensure it persists.
The result is a wealth transfer of historic proportions. The collective creative and intellectual output of hundreds of millions of people has been converted into private model weights owned by a handful of corporations. The people who produced that output received nothing. Not compensation. Not equity. Not even the ability to opt out after the fact. The training already happened. The models already exist. The value has already been captured.
This is not a technology problem. It is a labor problem. When a factory converts raw materials into products, the suppliers of those raw materials are compensated. When a model converts human-generated data into capabilities, the suppliers of that data are told they should be excited about the product.
Displacement without a framework
The commercial deployment of AI is restructuring the American labor market at a pace no existing institution is equipped to manage. This is not speculative. It is underway.
Legal research, paralegal work, customer service, content writing, graphic design, medical coding, bookkeeping, software testing, translation, data entry, administrative support. These are occupations where AI tools are already reducing headcount, compressing wages, or eliminating the entry-level positions that served as career on-ramps for millions of workers.
The standard response is retraining. Learn to use the tools. Adapt. Upskill. As individual advice, this is not always wrong. As a policy framework, it is absurd. It shifts the entire cost of economic disruption onto the individuals being disrupted and asks nothing of the entities profiting from it. It assumes a 55-year-old paralegal and a 23-year-old software engineer have equivalent capacity to retool their careers. It assumes new jobs will appear in the same geographies, at the same pay scales, in sufficient numbers. None of these assumptions have evidentiary support.
The closest historical analog is the federal government’s own evaluation of its flagship Trade Adjustment Assistance program: four years out, participants earned less than comparable displaced workers who received no retraining at all. That program addressed trade-driven manufacturing displacement. The current disruption is faster, less geographically concentrated, and targets a broader range of skills. If retraining failed under easier conditions, there is no reason to expect it to succeed under harder ones.
The industries being disrupted are the backbone of the American middle class. When manufacturing was offshored, affected communities did not recover for decades. Many have not recovered at all. The current disruption is faster, broader, and hits white-collar work that was previously considered insulated. There is no retraining program that operates at this speed. There is no safety net designed for this shape of displacement.
The data environment makes this worse. The federal labor statistics that would normally measure displacement are increasingly unreliable, as the current administration has demonstrated a willingness to reshape reporting to fit political narratives. When the institutions responsible for counting the damage are compromised, independent analysis and historical baselines become the only credible source. The displacement evidence cited above predates the current distortion. That is what makes it useful.
The replacement does not work
The displacement would be easier to justify if the systems doing the displacing were reliable. They are not.
AI models hallucinate. This is not a bug being fixed in the next release. It is a structural property of how large language models generate output. They produce text that is statistically plausible, not text that is verified.
The problem is not that AI is universally unreliable. In narrow, well-defined domains it can outperform humans. The problem is that commercial deployment is not limited to those domains. AI is not deployed where it is most accurate. It is deployed where it is cheapest. The accuracy gap is treated as an acceptable tradeoff by the companies capturing the savings. It is not acceptable to the public receiving the degraded service.

Insurance claims routed through AI review systems are denied at rates that human reviewers never produced, and the appeals process assumes a human made the call. Patients receive AI-triaged mental health assessments that miss risk factors a clinician would catch, and the provider records the encounter as “screened.” Neither group was told the human had been removed. Neither was given the option to pay for accuracy. The decision was made for them by an entity optimizing for margin.
Integrating an unreliable system into critical services is not innovation. It is liability transferred from the provider to the public.
But unreliability is only half the problem. Even when the systems work exactly as designed, the design itself is the problem.
The surveillance is the product
Most commercial AI-integrated services are surveillance systems. Every document uploaded for summarization is ingested. Every email routed through AI filtering is read. Every query is logged, scored, and fed into a behavioral profile that determines what you see next, what you pay, and whether you qualify. The commercial AI stack does not just automate tasks. It observes, records, and profiles the people using it at a granularity no previous technology has achieved.
This is not a side effect. It is the business model. In 2023, Mozilla researchers found that every major automobile brand with AI-integrated systems collected driver behavioral data, 84 percent reserved the right to sell or share it, and none offered a meaningful opt-out. That was cars. The same architecture is now embedded in productivity software, healthcare portals, education platforms, and financial tools used by hundreds of millions of people daily. The user is not the customer. The user is the training signal.
The existing regulatory infrastructure does not cover this. HIPAA, FCRA, FERPA: each was written to regulate a specific institutional relationship, between provider and patient, credit bureau and consumer, school and student. AI surveillance does not operate within those relationships. It operates around them, in the gaps between definitions, where the data flows freely because no statute anticipated this architecture.
The industry calls this “personalization.” It is surveillance, rebranded with a feature name.
Previous surveillance required institutional intent. Someone had to decide to watch you, build the apparatus, justify the expense. AI-integrated services surveil by default. The data collection is not a feature that was added. It is one that would have to be deliberately removed, and no competitive incentive exists to remove it.
No one is accountable
Regulation requires a responsible party. AI systems are designed, deployed, and marketed in a way that ensures no such party exists.
When an AI system denies a mortgage, filters a resume, recommends a treatment, or suspends an account, no person made the decision. The model’s logic is proprietary. The data it used is undisclosed. The applicant, patient, or customer has no one to confront and no decision to appeal. The developer did not make the specific decision. The deployer relied on the developer’s product. The model is not a legal entity. The result is a chain of delegation that terminates in no one.
Before AI, accountability was imperfect. People were scapegoated. Institutions deflected. But even imperfect accountability, scapegoats and settlements and revoked licenses, meant the system enforced its own rules badly rather than not at all. AI does not even pretend. The entire accountability infrastructure that regulation depends on has been architecturally removed.
The companies know this
This is not information the frontier AI companies lack. They know the displacement is real. They publish research quantifying it. They fund think tanks that study it. They make public statements acknowledging it. And they continue deploying, because there is no regulatory mechanism that requires them to do anything else.
OpenAI’s charter says the company exists to “benefit all of humanity.” Anthropic’s mission emphasizes “the responsible development and maintenance of advanced AI for the long-term benefit of humanity.” These organizations have explicitly committed to broad public benefit. They have also captured billions in value from public data, are actively displacing public labor, and face no binding obligation to share the gains with the public that made those gains possible.
The mission statements are not lies. They are aspirations. That distinction matters, because it is harder to regulate an industry that sincerely believes it is doing good. A cynical industry can be shamed into compliance. An idealistic one will explain, at length, why the rules should not apply to it.
But aspirations, like voluntary safety commitments, are revised when they become inconvenient. As we documented in the previous post, Anthropic quietly revised its own safety commitments when competitive pressure made them costly. The same structural logic will cause every frontier lab to prioritize shareholder returns over public benefit. Not because the people running these companies are indifferent, but because the system they operate within does not permit sustained altruism at the expense of competitive position. The sincerity is real. It is also structurally irrelevant. Good intentions without binding obligations produce the same outcomes as no intentions at all.
The value is real; the distribution is not
AI produces genuine breakthroughs. DeepMind’s AlphaFold predicted the structure of 200 million proteins and made the database freely available. Researchers used it to advance malaria vaccines, cancer treatments, and enzyme design. An NBER study found that AI tools made the least experienced customer service workers 34 percent more productive. AI adoption in low-income countries is growing four times faster than in wealthy ones. These are not trivial outcomes. The technology creates real value.
The question is not whether AI creates value. It is who captures it.
The protein structures were free. The drugs developed from them will not be. The customer service workers became 34 percent more productive, and then the jobs were eliminated entirely. The productivity gain was a stepping stone to headcount reduction, not a durable benefit to the workers who produced it. The adoption surge in developing countries measures usage, not welfare; billions of people adopted social media too, and the value of that adoption accrued to Menlo Park.
Every benefit cited in defense of the current model follows the same pattern: the research is socialized, the product is privatized, and the gap between the two is where the profit lives. This is not an argument against AI. It is an argument against an economic structure that treats public benefit as a byproduct rather than an obligation.
What regulation actually looks like
That obligation requires a framework. Not a new one. The same kind that governs every other industry whose products affect public welfare.
Data rights. Individuals should have legally enforceable rights over data used to train commercial models: the right to know, the right to compensation, and the right to opt out. The technical objection that this is computationally difficult is an engineering problem, not a policy argument. The pharmaceutical industry found clinical trials expensive. They conducted them anyway.
Displacement accountability. Companies deploying AI systems that eliminate jobs should bear a binding legal obligation to fund transition programs for the workers displaced. The mechanism, whether revenue-based, headcount-based, or sectoral, is a design question. The principle is not. Environmental regulation requires polluters to fund remediation. AI displacement is not categorically different; it is just newer.
Algorithmic transparency. AI systems used in hiring, lending, healthcare, and public services should be subject to audit requirements. The public has a right to know when an algorithm made the decision, what data it used, and how to contest the outcome. “Proprietary model” is not an acceptable response to “why was I denied.” The industry counterargument is that disclosure exposes trade secrets and enables gaming. This is the same argument banks made against mortgage disclosure requirements, hospitals made against outcomes reporting, and manufacturers made against ingredient labeling. In every case, the transparency requirement survived because the public interest in knowing outweighed the commercial interest in hiding. The gaming risk is real and manageable. The opacity risk is real and corrosive. A system that cannot explain its decisions to the people those decisions affect should not be making them.
Sectoral deployment standards. AI systems deployed in healthcare, finance, education, and legal services should meet sector-specific safety and accuracy standards before deployment. Not after harm is demonstrated. Before. The FAA does not wait for a crash to certify an aircraft. The FDA does not wait for deaths to approve a drug. Commercial AI should not operate under a lower standard simply because the lobbyists arrived before the regulators.
Accountability by design. Every AI system making consequential decisions about individuals must have a legally designated responsible party. Not a terms-of-service disclaimer. Not a shared-responsibility model that diffuses liability into nothing. A specific entity, subject to audit, required to explain decisions, and liable when the system causes harm. If no one is accountable, the system should not be deployed.
The public made this possible
The American public is not a bystander to the AI revolution. The American public is the foundation of it.
Every model was trained on their words. Every benchmark was validated against their judgments. Every commercial application generates revenue by automating tasks they used to be paid to perform. They did not invest in these companies. They did something more fundamental: they produced the raw material without which these companies would have nothing to sell.
The people who built the training set deserve more than a thank-you note in a mission statement. They deserve a regulatory framework that acknowledges their contribution, protects their interests, and requires the companies profiting from their labor to share the cost of the disruption those profits create.
That framework does not exist. Not because no one has proposed it. Because the entities it would regulate have ensured, through lobbying, through litigation, and through the slow gravitational pull of money on policy, that it has not yet arrived.
You built the training set. Retraining for the jobs you lost to your own data is not a concession. It is the floor. And the floor is embarrassingly low.