The conversation the industry isn’t having
The AI safety discourse has two lanes. The first is capabilities risk: what happens when AI systems become powerful enough to cause harm at scale? The second is governance risk: who decides how these systems are built, deployed, and regulated?
Both lanes share an assumption so fundamental that it is never stated: the physical infrastructure that AI depends on will continue to exist.
That assumption deserves examination.
AI is downstream of everything
A frontier model does not exist in the abstract. It exists on hardware, in a data center, connected to an electrical grid, cooled by water, manufactured from minerals extracted from the earth, maintained by supply chains that span continents, and operated by societies stable enough to keep all of it running.
The resource requirements are not marginal. They are historic.
Energy. Training a single frontier model consumes electricity measured in gigawatt-hours. Meta’s Llama 4 Maverick required 7.38 million GPU hours on H100 hardware, consuming over 5 GWh of electricity, and that is a mid-sized open model, not the largest training run in the industry. Epoch AI estimates that the largest frontier training runs now exceed 100 MW of sustained power draw, growing at 2.2x per year. Inference (the ongoing cost of actually using the model) dwarfs training over time. The International Energy Agency projects that data center electricity consumption will more than double by 2030, with AI workloads as the primary driver.
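The arithmetic behind figures like these is simple enough to sketch. A minimal estimate, assuming each H100 draws roughly its 700 W rated power under sustained training load, with an assumed facility overhead (PUE) that no vendor disclosure here specifies:

```python
# Back-of-envelope training energy estimate.
# Assumptions (not from any vendor disclosure): sustained draw of roughly
# the H100's 700 W rated power, and a PUE of 1.2 for cooling/overhead.
GPU_HOURS = 7.38e6        # Llama 4 Maverick training compute (reported)
WATTS_PER_GPU = 700       # assumed sustained draw per H100
PUE = 1.2                 # assumed facility overhead multiplier

gpu_energy_gwh = GPU_HOURS * WATTS_PER_GPU / 1e9   # watt-hours -> GWh
facility_energy_gwh = gpu_energy_gwh * PUE

print(f"GPU energy: {gpu_energy_gwh:.2f} GWh")        # ~5.17 GWh
print(f"With overhead: {facility_energy_gwh:.2f} GWh")
```

The chip-level number alone lands at about 5.17 GWh, consistent with the "over 5 GWh" figure; any realistic facility overhead pushes it higher.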
Water. Data centers require enormous quantities of water for cooling. Microsoft’s 2025 Environmental Sustainability Report disclosed that the company’s global operations consumed nearly 6.4 million cubic meters of water, or roughly 1.69 billion gallons. These facilities compete with agriculture and municipal water systems for increasingly scarce freshwater resources.
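The two numbers in that disclosure are the same quantity in different units, and the conversion is direct:

```python
# Unit-conversion check on the disclosed figure.
CUBIC_METERS = 6.4e6              # reported global water consumption
GALLONS_PER_CUBIC_METER = 264.172  # US gallons per cubic meter

gallons = CUBIC_METERS * GALLONS_PER_CUBIC_METER
print(f"{gallons / 1e9:.2f} billion gallons")  # -> 1.69 billion gallons
```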
Materials. The chips that power AI require rare earth minerals, cobalt, lithium, and silicon refined to extraordinary purity. The supply chains for these materials run through geopolitically unstable regions, depend on extraction practices with documented environmental and human rights costs, and face physical scarcity constraints that no amount of investment can override.
Land. Data centers occupy physical space. The construction boom required to house AI workloads is consuming agricultural land, straining local grids, and generating community opposition in regions that bear the environmental cost without receiving proportional benefit.
This is the physical foundation of every AI system in existence. Degrade any layer and the system degrades with it. Remove any layer and the system ceases to function entirely.
The scale is no longer abstract
In January 2026, Mark Zuckerberg showed President Trump a map of Meta’s planned Hyperion data center campus in Richland Parish, Louisiana. Trump’s reaction: overlaid on Manhattan, the facility “literally covered most of the island.” A $50 billion investment requiring up to five gigawatts of power, roughly half the electricity consumption of New York City.
Hyperion is not an anomaly. It is the new baseline. OpenAI’s Stargate campus in Abilene, Texas will reach the size of Central Park by mid-2026. xAI’s Colossus facility in Memphis already spans several Manhattan blocks. Epoch AI’s satellite imagery analysis projects the largest AI data center campuses will each reach a fifth the size of Manhattan by 2027. Meta’s 2025 capital expenditure guidance is nearly $72 billion, seventy percent more than the previous year. Analysts project spending could surpass $100 billion.
These are not server rooms. They are industrial megaprojects on a scale comparable to dams, refineries, and military installations. And they are being built with less environmental review, less public input, and less regulatory oversight than any of those precedents required.
The policy response matched the scale of ambition, not the scale of consequence. After seeing Zuckerberg’s map, Trump offered to let tech companies build their own on-site power plants, using natural gas, coal, or oil, with federal approvals promised within two weeks. Not two weeks for environmental review. Two weeks for everything, for facilities that will consume resources for decades.
Efficiency accelerates consumption
Meta, OpenAI, and xAI are building the facilities. NVIDIA is building everything that goes inside them. At CES 2026, Jensen Huang unveiled the Vera Rubin platform: not a chip, but a complete AI factory (compute, networking, storage, interconnect, and software orchestration sold as a single vertically integrated product). The platform delivers five times the performance of Blackwell for inference and could slash the cost per token to one-tenth the previous price. NVIDIA controls 92% of the GPU market, 86% of the AI data center segment, and has secured over $500 billion in orders for Blackwell and Rubin combined. Two days ago, NVIDIA reported $68.1 billion in quarterly revenue, $62.3 billion of it from data centers alone; its CFO noted the company has scaled its data center business by nearly 13x since the emergence of ChatGPT. When Meta spends $72 billion on infrastructure, a dominant share goes to one company, for hardware that serves one purpose.
Five times the inference performance per watt is a genuine technical achievement. The problem is what efficiency does when it operates inside an economic system with no consumption constraints.
NVIDIA’s pitch is that Rubin will reduce the cost per token by 10x. The industry hears “cheaper AI.” What actually happens is Jevons paradox. When a resource becomes cheaper to use, total consumption increases rather than decreases.
William Stanley Jevons observed this in 1865. James Watt’s steam engine was dramatically more efficient than its predecessors. Coal consumption did not decrease. It soared, because efficiency made coal-powered applications economically viable in contexts where they previously were not. The efficiency did not conserve the resource. It accelerated its depletion.
The same dynamic is operating now. When inference costs drop by 10x, the response is not “we can run the same workloads for one-tenth the energy.” The response is “we can run ten times the workloads.” New applications become viable. Existing applications scale. Agentic systems that were too expensive to run continuously become always-on. The efficiency gain is consumed entirely by demand expansion, and total resource consumption increases.
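The dynamic can be made explicit with a toy model. Assuming demand for tokens scales with price according to a simple elasticity (the elasticity value below is an illustrative assumption, not a measured figure), even a large per-token efficiency gain can coexist with rising total consumption:

```python
# Illustrative Jevons-paradox arithmetic. The elasticity is an assumed
# parameter for illustration, not an empirical estimate.
def total_energy_multiplier(cost_ratio: float,
                            efficiency_gain: float,
                            elasticity: float) -> float:
    """Change in total energy when cost per token falls.

    Model: demand scales as price**(-elasticity), while energy per
    token falls by efficiency_gain.
    """
    demand_multiplier = cost_ratio ** (-elasticity)
    return demand_multiplier / efficiency_gain

# Rubin's claimed numbers: tokens 10x cheaper, 5x performance per watt.
# With unit price elasticity, demand grows 10x while each token costs
# one-fifth the energy -- total consumption still doubles.
print(total_energy_multiplier(cost_ratio=0.1,
                              efficiency_gain=5.0,
                              elasticity=1.0))
```

Under these assumptions the result is a 2x increase in total energy use despite a 5x efficiency gain; only if demand were quite inelastic would total consumption fall.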
A 2025 ACM FAccT study documented this directly. Cost savings from more efficient AI hardware spur demand for new AI capabilities, which drive further hardware upgrades, which enable further expansion. The cycle is self-reinforcing. Each generation of hardware is more efficient per operation and consumes more resources in total.
NVIDIA’s Vera Rubin will not reduce the energy footprint of AI. It will reduce the cost per unit of AI, which will increase the number of units, which will increase the total energy footprint. The efficiency is real. The conservation is an illusion. And the company capturing 92% of the market has every incentive to ensure that the cycle continues, because the cycle is the revenue.
The recursive dependency
The AI industry discusses existential risk in terms of what AI might do to humanity. There is a prior question: what is humanity doing to the systems that AI depends on?
Climate change is not a future scenario. It is a current process with measurable, accelerating effects on every resource AI requires.
Grid instability. Power grids designed for 20th-century climate patterns are failing under 21st-century conditions. The Texas grid collapse of February 2021 took data centers offline for days. That was not an isolated event. Days ago, the Blizzard of 2026 dropped over two feet of snow across the Northeast corridor, the region that carries the densest concentration of American internet infrastructure, including major data center clusters in northern Virginia, New Jersey, and the New York metro area. Over 600,000 homes and businesses lost power. The data centers did not. They have backup generators, redundant feeds, and contracts that prioritize their load. That is the point: the grid is not failing equally. It is failing for the communities that live next to the facilities while the facilities themselves keep running, consuming the power and water those communities need, under agreements those communities did not negotiate. The interval between these events is shortening. The grid is not being hardened fast enough to keep pace.
Water scarcity. The Dalles, Oregon, home to one of Google’s largest data center clusters, drew nearly a quarter of the city’s water supply in 2022, before the current expansion wave. The American West, where data centers are concentrated, is in a structural drought that predates AI and will outlast it. A data center that cannot cool its servers cannot operate.
Supply chain fragility. TSMC fabricates over 90% of the world’s most advanced semiconductors on an island 100 miles from a country that claims sovereignty over it. A typhoon, an earthquake, or a naval blockade could sever the supply chain for every frontier AI chip in production. The fabrication facilities take years to build and cannot be replicated on short timescales. There is no backup.
Community destabilization. Data centers require stable local infrastructure to operate: utility workers, emergency services, municipal governance. But the resource demands they impose (grid strain, water competition, inflated housing costs from construction booms) degrade the communities they depend on. The facilities need stable surroundings. Their presence makes those surroundings less stable.
AI does not exist outside these systems. It is one of the most demanding consumers of them. The technology that the industry frames as transcendent is, in physical reality, one of the most dependent artifacts humanity has ever produced.
The energy paradox
The industry’s response to AI’s energy demands has been to pursue new energy sources: nuclear power, solar, fusion research. These are legitimate investments. They are also an inadequate framing of the problem.
The question is not whether new energy sources can power AI. The question is what AI’s energy demands are doing to the total infrastructure burden that needs to be decarbonized.
When Microsoft signs a deal to restart Three Mile Island to power AI workloads, the framing is that AI is driving nuclear investment. The reality is that every training run, every new data center, every always-on inference fleet expands the total energy infrastructure the planet must build, maintain, and eventually migrate to carbon-neutral sources. AI is not competing for a fixed energy budget. It is multiplying the infrastructure that needs to transition. There is no way around it.
The industry will argue that AI will solve climate change. That AI-driven materials science, climate modeling, and optimization will produce breakthroughs that offset the energy cost. The examples are real: DeepMind’s GNoME discovered millions of stable crystal structures relevant to battery and solar cell design. AI weather models now outperform traditional numerical forecasting at a fraction of the compute cost. Protein folding, grid optimization, carbon capture modeling. The applications are genuine, and some are already producing results.
But the argument is not whether AI can contribute to climate solutions. It is whether the net effect is positive: whether the breakthroughs arrive faster than the infrastructure buildout compounds the problem they need to solve. Every month that a frontier lab trains a new model, the energy debt grows. Every data center campus that breaks ground locks in decades of resource consumption, and climate tipping points do not wait for ROI calculations. The breakthroughs are promising, speculative, and incremental. The buildout is certain, massive, and accelerating. No one has demonstrated that the math nets out. The companies placing the bet have not attempted the calculation publicly. The burden of proof belongs to the entities consuming the resources, not to the public absorbing the cost. And the buildout is not waiting for proof.
The accountability inversion
The extraction documented in the previous post was digital. The public’s data, taken without consent or compensation. The extraction documented here is physical, and it follows the same structure. Communities hosting data centers bear the water costs, the grid strain, the construction disruption, and the depressed property values. Regions where cobalt and lithium are mined bear the contaminated groundwater and the collapsed tunnels. The minerals are refined overseas, shipped to a single island for fabrication, and installed in facilities that consume more electricity than the towns surrounding them. At every stage, the cost is local and the benefit is remote.
The Dalles, Oregon did not vote to become a cooling reservoir for Google’s AI workloads. Richland Parish, Louisiana did not hold a referendum on whether a facility the size of Manhattan should consume its power grid. The communities in the American West competing with data centers for drought-strained water supplies were not consulted. The decisions are made in boardrooms. The consequences land on zip codes and countries that will never appear in a sustainability report.
Yet the environmental cost of AI is not disclosed in any standardized way. There is no regulatory requirement to report energy consumption per training run, water usage per data center, or carbon emissions per inference query. The industry publishes sustainability reports voluntarily, with methodologies it designs and metrics it selects. Microsoft’s emissions have risen 23.4% from its 2020 baseline despite its carbon-negative pledge, a sustained, compounding trend driven by AI infrastructure buildout. Google quietly abandoned its net-zero 2030 target. Amazon redefined its accounting methodology to exclude categories that were driving its totals upward. The emissions are real. The pledges are not binding. And the accounting is designed by the entities being accounted for.
The conversation the industry will not have
Here is the question that no frontier AI company will engage with publicly: Is the marginal value of the next model worth its marginal physical cost?
Not the marginal commercial value. Not the marginal capability improvement. The marginal cost to the planet’s energy systems, water resources, mineral reserves, and ecological stability.
The industry’s implicit answer is always yes, because the commercial incentive to train the next model is immediate and the ecological cost is diffuse, delayed, and borne by someone else. This is the structure of every tragedy of the commons. The resource is shared. The profit is private. The depletion is everyone’s problem.
The AI industry is extracting resources from the planet’s physical systems at an accelerating rate to build technology that the planet’s population did not ask for, did not consent to, and increasingly cannot opt out of. The extraction is real. The accountability is nonexistent. The assumption that the physical foundation will hold is untested and unexamined.
The minimum viable response is disclosure. It is necessary but not sufficient, and naming it as such is the point: the industry has not cleared even this bar. Standardized, mandatory, auditable reporting of energy consumed per training run, water drawn per data center, carbon emitted per million inference queries, and minerals sourced per hardware generation. Not voluntary sustainability reports written by the companies being assessed. Regulatory requirements with independent verification, comparable across the industry, published on a cadence that keeps pace with deployment. The public cannot evaluate a cost it cannot see. The first step is making it visible.
The disposable query
The industry prices AI to encourage consumption. Free tiers, subsidized API rates, unlimited chat plans. The pricing is a growth strategy, not a reflection of cost. The cost (the electricity, the water, the hardware depreciation) is absorbed by the provider and externalized to the grid, the watershed, and the supply chain. The user sees a text box. The user does not see a power plant.
The result is a culture of disposable computation at every scale. A person rewrites an email they could have written themselves. A developer re-runs a prompt because they didn’t like the tone of the first answer. Multiply that by hundreds of millions of users and the waste is staggering, but it is the small end. At the other end, companies run benchmark suites across dozens of models, thousands of prompts, millions of tokens; not to ship a product, but to populate a leaderboard. Research teams launch evaluation pipelines that generate and discard outputs by the terabyte to move a metric by a tenth of a point. Entire GPU clusters run for days on testing and evaluation that no one will reference again. The throwaway computation at the top of the stack dwarfs the casual use at the bottom, and none of it is metered, reported, or accounted for.
Every individual query is cheap. That is the design. The playbook is the same one that built every addiction economy of the last twenty years: make the marginal unit free, let volume compound, and collect at a layer the user never sees. Social media monetized attention. AI monetizes computation. The currency is different. The structure is identical. And the physical cost per unit is orders of magnitude higher.
The industry knows this. The pricing is not a miscalculation. Subsidized consumption creates dependency. Dependency creates demand. Demand justifies the next hundred billion in infrastructure spending. The data centers are not being built to serve current needs. They are being built to serve the consumption patterns the pricing model is designed to create. The foundation is being scaled to match an appetite that is being engineered.
The foundation is physical
Every frontier AI system is a bet that the grid stays up, the water keeps flowing, the fabs keep producing, and the societies maintaining all of it remain stable enough to do so. That bet is getting larger with every training run and every new data center campus. The conditions underwriting it are getting worse.
The industry frames AI risk as a question about the technology: what it might do, how to align it, who governs it. Those are real questions. They are also secondary. The primary risk is not that AI becomes uncontrollable. It is that the physical systems AI depends on become unreliable, and that by the time the industry notices, it will have built a trillion-dollar infrastructure with no fallback.
The planet is the variable. And no one is controlling it.
This post is part of a series on AI policy and infrastructure accountability. See also: “You Built the Training Set” and “Safety Was the Product. Now It Is the Obstacle.”