In May 2023, OpenAI CEO Sam Altman sat before the Senate Judiciary Committee and asked Congress to regulate his industry. He proposed a federal licensing agency, mandatory safety testing, and independent audits. The AI industry was not just accepting regulation; it was inviting it.
Nearly three years later, there is no federal AI safety law. What exists is a deregulatory executive branch, a handful of state laws, a renamed safety institute, and an EU regulation that American companies must follow anyway.
The federal picture
On January 20, 2025, President Trump revoked Biden’s Executive Order 14110, which had required AI developers to conduct safety tests and share results with the government before release. Three days later, he signed Executive Order 14179, directing officials to ensure American AI dominance while eliminating regulatory obstacles.
In December 2025, Executive Order 14365 escalated the deregulatory push, establishing an AI Litigation Task Force to preempt state AI laws. The federal government proposed no replacement safety framework. It is dismantling state-level frameworks while offering nothing in their place.
The U.S. AI Safety Institute was renamed the Center for AI Standards and Innovation, dropping “safety” from the name and narrowing the mission to cybersecurity, biosecurity, and the evaluation of foreign AI systems. Whether American AI is safe for Americans is no longer the institution’s primary concern.
Congress has not passed comprehensive AI safety legislation and is not close. The 119th Congress has bills in committee (including the Algorithmic Accountability Act and the SANDBOX Act) but nothing imminent.
States filling the vacuum
With no federal law, states are legislating despite the federal push to stop them.
Colorado’s AI Act takes effect June 30, 2026, requiring risk management programs, impact assessments, and consumer protections for high-risk AI systems.
Texas enacted TRAIGA, effective January 1, 2026, taking a more business-friendly approach focused on prohibiting intentionally discriminatory and harmful uses of AI.
California passed SB 53, effective January 1, 2026, requiring large frontier AI developers to publish safety frameworks, report catastrophic risks, and protect whistleblowers. This is notably weaker than SB 1047, the broader bill Governor Newsom vetoed in September 2024 after industry lobbying. SB 53 is the compromise that survived: publish your own safety rules and follow them.
The industry could not make it six weeks.
The first test
On February 5, 2026, OpenAI released GPT-5.3-Codex. Their own testing classified it as “high” risk for cybersecurity under their Preparedness Framework (the safety document they published to comply with SB 53). The framework specifies that “high” cybersecurity risk triggers misalignment safeguards. CEO Sam Altman acknowledged the classification publicly.
On February 10, the Midas Project reported that OpenAI shipped the model without implementing those safeguards. The threshold was triggered. The specified response was not executed. The model shipped anyway.
OpenAI’s defense: the safeguards only apply when high cybersecurity risk occurs “in conjunction with” long-range autonomy, which they say GPT-5.3-Codex lacks. They acknowledged the wording in their own framework is “ambiguous.” They shipped before clarifying the ambiguity. They clarified after a watchdog noticed.
A company wrote its own safety framework, triggered its own threshold, and found a reading of its own words that did not require action.
Transparency is declining
This is not isolated. Stanford’s Foundation Model Transparency Index found that average scores dropped from 58 to 41 in a single year. Only 30 percent of companies submitted transparency reports, down from 74 percent.
Meta dropped 29 points. Mistral dropped 37. OpenAI fell from second place in 2023 to sixth in 2025. Anthropic (whose CEO spent months calling for transparency on 60 Minutes, at Davos, and in a 20,000-word essay that cited SB 53 by name) did not submit a report. Stanford had to compile one manually.
The companies most vocal about transparency in 2023 are now among the least engaged in practice.
The EU as de facto regulator
While the U.S. debates whether to regulate, the EU AI Act is already being enforced. It applies to all AI systems on the EU market regardless of where the provider is based. OpenAI, Microsoft, Amazon, Google, Anthropic, and Cohere have all signed the General-Purpose AI Code of Practice to demonstrate compliance.
Training separate models for different markets is prohibitively expensive, so EU-compliant versions become the global baseline. The United States has no federal AI safety regulation. Its companies are subject to EU regulation anyway. American regulators have no seat at that table.
The regulatory capture problem
When AI executives call for regulation, listen to what kind. Altman proposed licensing for frontier models. Amodei proposed transparency requirements his company already meets. Both create barriers smaller competitors cannot clear.
This is not a conspiracy. It is an incentive structure. A company with a billion-dollar compliance budget benefits when compliance becomes expensive. The public hears “we want accountability.” The reality is closer to “we want accountability we have already budgeted for, applied to everyone.”
None of this means regulation is wrong. It means the industry should not be its primary author.
What accountability requires
SB 53 was deliberately lightweight. The theory was that self-defined standards plus legal accountability would produce responsible behavior. That theory was tested within six weeks and produced a company citing its own ambiguous language to avoid its own safety protocol.
Self-defined standards work when the entity defining them has an incentive to be rigorous. Frontier AI companies have an incentive to ship. When the safety framework and the shipping schedule conflict, the shipping schedule wins.
Accountability that depends on the regulated entity’s good-faith interpretation of its own commitments is not accountability. It is a suggestion with a filing fee. Real accountability requires external evaluation, independent auditing, and consequences that cannot be negotiated away by redefining terms.
The AI industry asked for regulation. California delivered the lightest possible version. One company lawyered around its own commitments in six weeks. Another published 20,000 words on transparency while declining to submit a transparency report. The federal government is trying to eliminate the bar entirely.