How AI Datacenters Eat the World
AI datacenters are no longer just “server farms”—they’re power-hungry supercomputers reshaping industries. From liquid cooling to gigawatt campuses, they’re eating electricity, real estate, and capital at a scale that rivals nations.
A field outside Temple, Texas tells the story. In August 2022, Meta began erecting a conventional H-type data center there. By April 2023, the site was razed—$70-odd million of work, gone. Why? Because the design, optimised for storage and web apps, couldn’t keep up with the breakneck pivot to AI. Meta paused, re-tooled for high-density, liquid-cooled AI compute, and then restarted with a new layout built purely for training and inference.
From “datacenters” to AI supercomputers
To understand the shift, think C3P: compute, connectivity, cooling, and power. Traditional facilities were sited near users to minimise latency and maximise bandwidth for content delivery (think YouTube and Netflix). AI flips the stack. Training clusters are largely closed systems, where seconds of compute dwarf milliseconds of network latency, and even consumer inference tolerates added roundtrip delay because the bottleneck is still computation. The new objective is density: more silicon per rack, tighter interconnects, and less energy spent on anything that isn’t math.
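A quick back-of-envelope makes the latency point concrete. The sketch below uses purely illustrative numbers (a 2-second training step, 40 ms of extra round-trip from a remote site) to show why WAN latency barely registers against compute time:

```python
# Back-of-envelope: why AI siting can ignore user latency.
# Both numbers below are illustrative assumptions, not measurements.

step_compute_s = 2.0   # assumed wall-clock time of one training step
extra_rtt_s = 0.040    # assumed extra round-trip from a far-away site (40 ms)

# A training cluster is a closed loop: GPUs never wait on a distant user
# mid-step, so extra latency matters only as a fraction of step time.
overhead = extra_rtt_s / step_compute_s
print(f"40 ms of extra latency is {overhead:.1%} of a 2 s training step")
```

Even for consumer inference, queueing and compute dominate the end-to-end time, so the same arithmetic holds.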
NVIDIA’s rack-scale GB200 NVL72 shows what “density-first” looks like: 72 Blackwell GPUs acting as one giant GPU, delivered as a liquid-cooled rack that draws ~132 kW via multiple 33 kW power shelves, more than an order of magnitude above the 3–10 kW racks common in legacy colo. Copper inside the rack keeps power-hungry optics to a minimum; optics take over only when traffic leaves the rack.
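Taking those rack figures at face value, the per-GPU power budget and the density jump over legacy colo fall straight out of the arithmetic (the legacy number here is the high end of the 3–10 kW range):

```python
# Density math for a GB200 NVL72-class rack, using the figures quoted above.

rack_kw = 132          # liquid-cooled NVL72 rack draw
gpus_per_rack = 72
shelf_kw = 33          # per power shelf
legacy_rack_kw = 10    # high end of the legacy 3-10 kW colo range

print(f"Per-GPU budget:  {rack_kw / gpus_per_rack * 1000:.0f} W")   # ~1833 W
print(f"Vs legacy rack:  {rack_kw / legacy_rack_kw:.1f}x denser")   # ~13x
print(f"Power shelves:   at least {-(-rack_kw // shelf_kw)} needed")  # 4
```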
Cooling: from breezy aisles to cold plates and CDUs
Air made sense when racks sipped power. But at 100–130 kW per rack (and rising), liquid wins on every front: footprint, volumetric heat capacity (~4,000× that of air), and component reliability and efficiency at lower junction temperatures. Meta’s redesign explicitly moved to liquid-ready halls to support GPU-heavy buildouts, a trend mirrored across hyperscalers and TPU operators alike.
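To see why liquid wins, run the standard heat-transport relation Q = ṁ·c_p·ΔT for a 130 kW rack. The fluid properties are textbook values; the 10 K coolant temperature rise is an assumed design point:

```python
# Coolant needed to remove 130 kW, via Q = m_dot * c_p * delta_T.
# Textbook fluid properties; the 10 K temperature rise is an assumption.

q_watts = 130_000     # heat to remove from one rack
delta_t = 10.0        # assumed coolant temperature rise (K)

cp_water = 4186.0     # specific heat of water, J/(kg*K)
cp_air = 1005.0       # specific heat of air, J/(kg*K)
rho_air = 1.2         # air density at room conditions, kg/m^3

water_kg_s = q_watts / (cp_water * delta_t)
air_m3_s = q_watts / (cp_air * delta_t * rho_air)

print(f"Water: {water_kg_s:.1f} kg/s (~{water_kg_s * 60:.0f} L/min)")  # ~3.1 kg/s
print(f"Air:   {air_m3_s:.0f} m^3/s per rack")                         # ~11 m^3/s
```

Roughly a garden hose of water versus a fan wall per rack; the cold plates and CDUs follow directly.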
Power: the new hyperscaler product
In the web era, megawatts measured a campus. In the AI era, they measure a building—and gigawatts measure a campus. Microsoft signed a 20-year deal with Constellation to restart Three Mile Island Unit 1, bringing ~835 MW back to the grid specifically to feed AI workloads by 2027. AWS bought Talen Energy’s 960 MW “Cumulus” campus next to the Susquehanna nuclear plant, a template for pairing datacenters with firm, carbon-free supply.
Meta, meanwhile, has gone all-in on multi-gigawatt AI campuses. “Prometheus” in Ohio targets ~1 GW by 2026, while “Hyperion” in Louisiana is scoped to scale toward 5 GW—hence the tents, prefab substations, and on-site generation to compress timelines.
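To put multi-gigawatt campuses in GPU terms, here is a rough conversion using the NVL72 rack figures from earlier. The PUE of 1.2 is an assumed overhead for cooling and distribution, and the calculation ignores networking, storage, and CPU hosts:

```python
# What one gigawatt of campus buys in GPUs, under stated assumptions.

campus_gw = 1.0
pue = 1.2              # assumed power usage effectiveness (cooling etc.)
rack_kw = 132          # NVL72-class rack, from the text
gpus_per_rack = 72

it_kw = campus_gw * 1_000_000 / pue     # power left for IT after overhead
racks = it_kw / rack_kw
print(f"~{racks:,.0f} racks, ~{racks * gpus_per_rack:,.0f} GPUs per GW")
# -> roughly 6,300 racks and ~455,000 GPUs per gigawatt, so Hyperion-scale
#    (5 GW) implies on the order of a couple of million accelerators.
```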
Why the grid suddenly feels small
Analysts now treat power as the scarcest input to AI. SemiAnalysis estimates suggest global critical IT power could nearly double from ~49 GW (2023) to ~96 GW by 2026, with roughly ~40 GW attributable to AI alone. The IEA projects data-centre electricity consumption could more than double by 2030 to ~945 TWh, with AI-optimised sites as the main driver. Goldman Sachs expects a 50% rise in DC power demand by 2027 and up to 165% by 2030 (vs 2023). RAND adds that AI DCs may need ~10 GW of additional capacity in 2025 alone, compounding a tight build window.
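A quick sanity check on those projections: converted to implied annual growth rates, they all cluster in a 15–25%-per-year band, aggressive but internally consistent (the ~415 TWh 2024 baseline for the IEA line is an assumption):

```python
# Implied compound annual growth rates behind the projections in the text.

def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

print(f"SemiAnalysis, 49 -> 96 GW (2023-2026):  {cagr(49, 96, 3):.0%}/yr")   # ~25%/yr
print(f"Goldman, +165% by 2030 (2023 = 100):    {cagr(100, 265, 7):.0%}/yr") # ~15%/yr
print(f"IEA, ~415 -> 945 TWh (2024-2030):       {cagr(415, 945, 6):.0%}/yr") # ~15%/yr
```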
This demand shock is already visible: PJM’s capacity auctions have spiked as Northern Virginia (Data-Center Alley) chases 30 GW of extra load by 2030, and U.S. data-center construction hit record highs in mid-2025. Translation: megawatts are the new moats.
The Denton detour: from bitcoin to Blackwell
Another microcosm: Denton, Texas. A former crypto campus is being rebuilt for AI HPC, tied to a nearby gas plant and expected to double the city’s electricity needs as CoreWeave and partners scale GPU hosting. It’s emblematic of a broader retuning of industrial-scale power for AI.
What this means for businesses (and everyone else)
- Location is no longer about latency; it's about logistics. AI sites chase power and cooling water first, fibre second. Expect AI campuses near substations, pipelines, nukes, and peaker fleets, not just near metros.
- Capex shifts to electro-mechanical. The bill is tilting from servers to site works: transformers, switchgear, cooling towers, CDUs, and on-site generation. Permitting and transformer backlogs are the new lead-time killers.
- Sustainability is now “firm-ability.” Clean, 24/7 power (nuclear, hydro, CCS gas) is table stakes. Google’s commitments to multiple advanced-nuclear projects underscore where this is heading.
- Operational reality: spiky loads. AI training slams grids with fast load swings; operators must design for grid friendliness (storage, demand shaping, on-site generation) or risk curtailment and blackouts. The toy model after this list shows the idea.
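On that last point, a toy model makes "grid friendliness" concrete: a battery absorbs the swing between compute bursts and communication lulls so the utility sees a flat draw. Every parameter (the 130/70 MW square wave, the 40 MW battery rating) is an illustrative assumption, and battery state of charge is deliberately not tracked:

```python
# Toy model: a battery smooths spiky AI training load into a flat grid draw.
# All parameters are illustrative; state of charge is deliberately ignored.

import itertools

site_target_mw = 100.0    # smooth draw promised to the utility
battery_limit_mw = 40.0   # assumed max charge/discharge rate

# Synchronised training alternates compute bursts with communication lulls.
load_mw = itertools.cycle([130.0, 70.0])  # crude square-wave load

for step in range(6):
    load = next(load_mw)
    # Battery covers the gap between actual load and the promised draw,
    # clamped to its power rating; the grid absorbs any remainder.
    battery = max(-battery_limit_mw, min(battery_limit_mw, load - site_target_mw))
    grid = load - battery
    print(f"t={step}: load={load:5.1f} MW  battery={battery:+5.1f} MW  grid={grid:5.1f} MW")
```

In this toy case the 30 MW swings vanish entirely; real fleets pair storage with demand shaping and on-site generation, because batteries alone can't ride out multi-hour ramps.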
So…are AI datacenters really “eating the world”?
They’re certainly eating: power with multi-gigawatt campuses, metals with miles of copper per rack, capital with hundreds of billions, real estate with Manhattan-footprint clusters, and time with speed-build tents and prefab modules. If Web2 datacenters were libraries, AI datacenters are steel mills—louder, hotter, hungrier, and built to turn raw electrons into tokens, embeddings, and insight. And yes, Temple’s empty field is now two high-density halls with liquid cooling and about 170 megawatts of critical IT power, because the winning move was to start over.