Modular datacentres are more about mindset than technology

Interest in prefabricated modular (PFM) datacentres is up markedly from just a couple of years ago, and 451 Research projects a further steep growth trajectory for the market. Still, evaluation of PFM datacentres is all too often narrowed down to ‘how much cheaper is it to build?’ Although there are PFM datacentre vendors that quote very competitive prices (some even below typical brick-and-mortar building costs), such comparisons are inherently limited and static. By Daniel Bizo, Senior Analyst, Datacentre Technologies, 451 Research.


WHAT PREFABRICATION and modularity really offer operators is the ability to adapt to and take advantage of change – change in market conditions and demand, change in the nature of an IT infrastructure, change in power and cooling technologies. Everyone wants to save money immediately, but PFM datacentres should be seen as much more strategic than that.

On the other hand, quantifying the increased competitiveness resulting from an agile, ‘right-sized’ datacentre strategy is challenging, and is typically not included in classical TCO calculations. Yet these benefits can be substantial, and we believe they need to be reflected in cost and business value benchmarks in order to unlock the full value of modular facilities. TCO comparisons remain crucial, and we are not arguing against them as a tool.

However, the assumptions underlying such cost simulations tend to ignore the new benefits that PFM builds offer and, therefore, remain static and linear in their dynamics.
At the heart of the issue is the application of existing datacentre planning and design practices even when considering PFM facilities, and the apples-to-apples comparisons such approaches lead to. We see at least four common but erroneous (intertwined and potentially risky) assumptions that run in favour of traditional datacentres when a supposedly ‘fair’ comparison is applied.

The first one is around granularity. An apples-to-apples comparison implies that the increment of a capacity expansion should be similar to a more traditional phase even when using prefabrication, in order to achieve economies of scale. Although it’s true that operators tend to achieve a lower unit acquisition cost of capacity with larger installations (many operators build in 1MW-2MW phases), this says little about utilization over time and, in turn, the value generated by that investment. It also disregards the fact that prefabrication absorbs much of the complexity and overhead in the manufacturing process and, as a result, can create a stronger economies-of-scale effect even at smaller capacity sizes. With PFM datacentre infrastructure (including the white space, modular cooling and power distribution systems), adding 250-500kW (or even much less) can be economical too. Our research indicates most operators would prefer this.
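
To make the utilization argument concrete, consider a back-of-the-envelope sketch in Python. The demand ramp, phase sizes, unit costs and the modest unit-cost premium assumed for smaller modules are all illustrative assumptions, not 451 Research figures; the point is the shape of the trade-off, not the exact numbers.

```python
# Illustrative sketch: capital tied up in unused capacity under two build
# strategies. All figures (demand ramp, increments, costs) are assumptions.

demand_kw = [200, 450, 700, 950, 1200, 1500, 1700, 1900]  # IT load per quarter

def capacity_over_time(demand, increment_kw):
    """Build a new increment whenever demand would exceed installed capacity."""
    capacity, built = 0, []
    for d in demand:
        while capacity < d:
            capacity += increment_kw
        built.append(capacity)
    return built

# Assumed ~10% unit-cost premium for the smaller modular increments.
for label, increment, cost_per_kw in [("2MW phase", 2000, 10_000),
                                      ("500kW module", 500, 11_000)]:
    built = capacity_over_time(demand_kw, increment)
    avg_util = sum(d / c for d, c in zip(demand_kw, built)) / len(demand_kw)
    print(f"{label}: day-one capex ${built[0] * cost_per_kw / 1e6:.1f}M, "
          f"avg utilization over the ramp {avg_util:.0%}")
```

Even with the assumed unit-cost premium, the incremental path ties up roughly a quarter of the day-one capital ($5.5M versus $20M) and runs at markedly higher average utilization (about 82% versus 54%) over the ramp.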

This leads to our second point: effective utilization. ‘Fair’ comparisons also assume that a traditional and a PFM datacentre are equally good at meeting the requirements of future IT infrastructures. This is clearly not the case. The speed of deployment and granularity of capacity of PFM datacentres mean that businesses can react to the specific needs of new IT systems. Many datacentre managers are facing major mismatches between datacentre resources (e.g., power density, cooling capacity, topology and configuration) and actual IT demand.

A major source of the problem is that datacentres are designed around necessarily simplified assumptions about the IT equipment that will eventually be installed, long before its specifications are known (physical size and form factor, thermal power, cold-air volume needs, the airflow behaviour of systems and cabinets, etc.). Real-world performance of the facility (such as effective cooling capacity) will in many cases fall well short of theoretical performance due to such mismatches.

The habit of building large data halls compounds the damage: changes to a few racks can have a knock-on effect on the efficiency of the whole room. Datacentre designers, because they cannot know exactly how much capacity will be needed, have a tendency to build excess capacity into the datacentre against a multi-year forecast, to be on the safe side. This practice is clearly a waste of capital and, later, of energy.

By employing a speedy incremental capacity expansion strategy, datacentre professionals can drive much of the uncertainty out of the planning and design stage, and rely on far fewer assumptions about future requirements. Power, cooling and space can be modelled and configured to match the load profile (that is, power and thermal variations) of the incoming IT infrastructure more closely. The option to add capacity to an existing site in less than six months reduces the business risks associated with running out of space, power or cooling. This, in turn, allows datacentre designers and managers to build in less spare capacity as a safeguard, and to design closer to current, rather than projected, needs.
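
As a rough illustration of why shorter lead times translate into smaller safety buffers: the spare capacity a planner must hold is approximately the worst-case demand growth multiplied by the time it takes to bring new capacity online. The six-month figure comes from the article’s own deployment estimate; the 24-month traditional lead time and the growth rate are illustrative assumptions.

```python
# Illustrative sketch: spare capacity needed while waiting for an expansion.
# buffer = worst-case growth rate x lead time to bring new capacity online.

worst_case_growth_kw_per_month = 50  # assumed demand growth

def required_buffer_kw(lead_time_months, growth_kw_per_month):
    # Capacity that could be consumed before the next expansion is delivered.
    return lead_time_months * growth_kw_per_month

for label, lead_time in [("traditional build", 24), ("PFM expansion", 6)]:
    buffer = required_buffer_kw(lead_time, worst_case_growth_kw_per_month)
    print(f"{label}: {lead_time}-month lead time -> "
          f"{buffer}kW of spare capacity held 'just in case'")
```

Cutting the lead time from 24 months to six shrinks the ‘just in case’ buffer fourfold in this example (1,200kW down to 300kW) – precisely the headroom that traditional designs turn into stranded capital.
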
The third assumption rooted in traditional build practices is the single-tier site approach. IT applications vary greatly in how critical they are for the organization, yet it is common practice to give them all the same level of mission-critical support systems.

This can be a wasteful use of capital and operating budget. The ongoing virtualization of workloads and more advanced data replication techniques equip IT managers with more and more resiliency tools, lowering their dependence on any given facility.
PFM datacentres enable managers to introduce multiple resiliency levels more easily than before. Hosted desktop infrastructures, data analytics, high-performance computing applications, archives and content depots may not justify a highly resilient and expensive Tier III (as defined by The Uptime Institute) facility.

Money saved by not overprotecting non-critical applications can also be redirected to protect mission-critical systems even further, for example in a more secure and fully fault-tolerant section of the site.
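
To put rough numbers on the trade-off, the sketch below converts availability percentages often associated with datacentre tiers into expected downtime per year. These percentages are commonly cited approximations (the Uptime Institute formally defines tiers by topology, not by availability figures), and the mapping is used here purely for illustration.

```python
# Illustrative sketch: expected annual downtime implied by availability
# percentages commonly associated with datacentre tiers. The figures are
# widely quoted approximations, not official Uptime Institute definitions.

HOURS_PER_YEAR = 8760

commonly_cited_availability = {
    "Tier II":  0.99741,
    "Tier III": 0.99982,
    "Tier IV":  0.99995,
}

for tier, availability in commonly_cited_availability.items():
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{tier}: ~{downtime_hours:.1f} hours of expected downtime per year")
```

For an archive or batch-analytics depot that can tolerate a day of downtime per year, paying the capital premium to move from roughly 23 hours of expected downtime to under two is hard to justify.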

The fourth mistake is to ignore the rapid pace of change in IT and datacentre technology (and practices). Converged IT infrastructures running all sorts of data-heavy workloads (from analytics through enterprise applications and desktop hosting) tend to be much higher density than the average rack, requiring in excess of 30kW of power delivery and 20kW of cooling in some cases.

Some will (or already do) support direct fresh-air cooling, and elevated temperature environments are increasingly accepted. Direct current power and in-rack UPS systems may completely change the electrical infrastructure; so might medium-voltage distribution for larger sites. Advances in software-based resiliency, such as more pervasive high-speed data replication and workload shifting, will allow for lower levels of site redundancy.

When planning five to 10 years out, the implications of such technology changes cannot be ignored. But because they cannot be predicted either, the best response is to keep investments prudent and retain flexibility. That’s what a prefabricated modular strategy offers. It makes business sense.