Liquid Cooling as a Platform: The Missing Piece in Scalable Data Center Infrastructure

It’s easy to be astonished by how fast AI has progressed. But industry insiders are equally amazed by the pace at which the infrastructure underlying artificial intelligence has developed – and the surge in power demands that comes with it.

Even the simplest prompt triggers a cascade of computation and data transfer. Every link in that chain consumes electricity. And much of it is funneled into powerful, power-hungry NVIDIA GPUs.

Lower-powered alternatives are emerging, but the NVIDIA ecosystem still dominates, dictating the thermal profile of modern data centers. Without advanced cooling, GPUs can’t hit peak performance or density. 

The International Energy Agency estimates global data center energy consumption will approach 1,000 TWh by 2030, more than doubling the 2024 total. That’s a staggering climb, with consumption growing around 12 percent per year and already accounting for 1.5 percent of all global electricity use.[1]
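As a rough sanity check on those figures, the sketch below compounds the IEA’s approximate 2024 baseline of 415 TWh at 12 percent per year; the baseline comes from the cited report, and the result is ballpark arithmetic, not an official projection:

```python
# Sanity check on the growth math: compound a 2024 baseline at ~12%/year.
# The ~415 TWh baseline is the figure in the cited IEA report; treat it
# as approximate.

BASELINE_TWH_2024 = 415   # approximate 2024 data center consumption (IEA)
ANNUAL_GROWTH = 0.12      # ~12 percent per year, per the article
YEARS = 6                 # 2024 -> 2030

multiple = (1 + ANNUAL_GROWTH) ** YEARS
print(f"Growth multiple over {YEARS} years: {multiple:.2f}x")
print(f"Projected 2030 consumption: ~{BASELINE_TWH_2024 * multiple:.0f} TWh")
# -> ~1.97x and ~819 TWh; sustained growth only slightly above 12 percent
#    per year closes the gap to the IEA's near-1,000 TWh projection.
```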

But compute is only part of the equation. Every kilowatt powering a chip creates heat. According to ABI Research, 37 percent of the energy used in data centers goes straight to cooling.[2]
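To make that share concrete, here is a minimal sketch of what 37 percent means at facility scale; the 100 MW facility size is a hypothetical example, not an ABI figure:

```python
# What a 37% cooling share means at facility scale. The 100 MW figure is
# a hypothetical example; only the 37% share comes from ABI Research.

FACILITY_MW = 100         # hypothetical total facility draw
COOLING_SHARE = 0.37      # share of energy spent on cooling (ABI Research)

cooling_mw = FACILITY_MW * COOLING_SHARE
non_cooling_mw = FACILITY_MW - cooling_mw

print(f"Cooling consumes {cooling_mw:.0f} MW of a {FACILITY_MW} MW facility")

# If (simplistically) all non-cooling energy were IT load, the implied
# PUE would be total energy divided by IT energy:
print(f"Implied PUE under that simplification: {FACILITY_MW / non_cooling_mw:.2f}")
# -> 37 MW spent on cooling and a PUE floor near 1.59; every point shaved
#    off the cooling share frees ~1 MW per 100 MW for compute.
```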

A 1 MW facility was once a flagship. Today, hyperscalers are designing data centers in the hundreds of megawatts – and NVIDIA is targeting 1 MW per rack by 2027. Meanwhile, ABI Research predicts the number of public data centers will quadruple by 2030.

That’s not just a growth curve – it’s a pressure cooker. And thermal management will define who can scale, who can sustain, and who can lead. While operators can count on vendors to continually deliver better and more efficient compute, the same can’t be said for cooling. Traditional air cooling is commoditized and incapable of handling the heat densities coming with the next wave of AI infrastructure.
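The arithmetic behind that claim is simple. The heat a coolant stream can carry off equals its mass flow times its specific heat times the allowed temperature rise, so the flow required to cool a rack scales inversely with the coolant’s heat capacity. Below is a minimal sketch comparing air and water against NVIDIA’s stated 1 MW rack target; the temperature rises are illustrative assumptions, not vendor figures:

```python
# Mass flow needed to remove Q watts at a temperature rise dT:
#   m_dot = Q / (cp * dT)
# The 1 MW rack load is NVIDIA's stated 2027 target; both dT values
# are illustrative assumptions.

RACK_HEAT_W = 1_000_000   # ~1 MW of heat per rack

CP_AIR = 1005             # J/(kg*K), specific heat of air
RHO_AIR = 1.2             # kg/m^3, air density at room conditions
DT_AIR = 15               # K, assumed air temperature rise across the rack

CP_WATER = 4186           # J/(kg*K), specific heat of water
DT_WATER = 10             # K, assumed coolant temperature rise

air_kg_s = RACK_HEAT_W / (CP_AIR * DT_AIR)
water_kg_s = RACK_HEAT_W / (CP_WATER * DT_WATER)

print(f"Air:   {air_kg_s:.0f} kg/s (~{air_kg_s / RHO_AIR:.0f} m^3/s per rack)")
print(f"Water: {water_kg_s:.0f} kg/s (~{water_kg_s:.0f} L/s per rack)")
# -> roughly 66 kg/s (~55 m^3/s) of air versus ~24 kg/s of water. Moving
#    55 cubic meters of air per second through a single rack is not
#    practical; ~24 L/s of water through cold plates is.
```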

The industry is entering a new phase – one where cooling isn’t just a backend necessity, but a strategic differentiator. Liquid cooling is the answer. The Uptime Institute reports that 22 percent of organizations are already using some form of direct liquid cooling (DLC).[3]

Liquid cooling is no longer exotic – but it’s still largely custom, especially outside GPU farms and hyperscale environments. That model is not sustainable.

Liquid cooling for all?

What will it take to make liquid the default?

When it comes to servers, storage, or network infrastructure, operators expect easy integration. Cooling should be no different. Whether designing in or retrofitting, liquid cooling must become predictable, repeatable, and scalable.

It’s not just about day one. If every cooling system in a data center requires custom design and management, operators can’t scale with AI. Cooling must move at the pace of compute.

It must also be easy to service. From hyperscalers supporting global SaaS platforms to enterprise data centers backing up financial services, downtime is unacceptable.

Cooling, Platformized 

At LiquidStack, we’ve built our approach around these needs. We started with two-phase liquid immersion – arguably the most demanding form of thermal management. We’ve since expanded to cover the full spectrum of liquid cooling needs, with a major focus on coolant distribution units (CDUs) for DLC systems.

Our latest solution, the GigaModular CDU, is built for scale. It’s a single-phase DLC platform that scales from 2.5 MW to 10 MW, with centralized control and a modular pump architecture. Everything is accessible from the front, making service simple and placement flexible.

Operators see a 25 percent savings in capex and floor space, a critical advantage when deploying rapidly or retrofitting legacy environments. And our “pay-as-you-grow” model helps align capital spending with capacity expansion.
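As an illustration of how pay-as-you-grow sizing works, the sketch below staggers cooling capacity against a growing IT load in 2.5 MW increments; the step size is our assumption, inferred from the platform’s 2.5 MW to 10 MW range rather than from a published spec:

```python
import math

# Illustrative pay-as-you-grow sizing: add cooling capacity in steps as
# the IT load grows, instead of building for peak on day one. The 2.5 MW
# step size is an assumption inferred from the stated 2.5-10 MW range.

STEP_MW = 2.5

def increments_needed(it_load_mw: float) -> int:
    """Smallest number of 2.5 MW increments that covers the load."""
    return math.ceil(it_load_mw / STEP_MW)

for year, load_mw in enumerate((2.0, 4.5, 7.0, 10.0), start=1):
    n = increments_needed(load_mw)
    print(f"Year {year}: {load_mw:4.1f} MW IT load -> "
          f"{n} x {STEP_MW} MW deployed ({n * STEP_MW:.1f} MW capacity)")
# Capital is committed as demand materializes rather than all up front.
```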

But scale doesn’t stop at the rack. We’ve built resilience into our ecosystem, too – because global operators can’t wait on a supply chain.

We currently operate two factories in the US and are actively expanding our manufacturing footprint, while our global service network ensures consistent SLAs worldwide.

Operators can’t afford to slow down – and they can’t build past their cooling capacity.

LiquidStack delivers cooling as a platform – scalable, serviceable, and globally deployable – just like the other critical infrastructure in the data center.

1. https://www.iea.org/reports/energy-and-ai/executive-summary

2. https://www.abiresearch.com/blog/data-center-energy-consumption-forecast

3. https://intelligence.uptimeinstitute.com/resource/uptime-institute-cooling-systems-survey-2024-direct-liquid-cooling
