A radical rethink of the physical infrastructure: micro-modular data centres

MANY OF THE INNOVATIONS in data centre design at present are associated with the very largest facilities, such as those being built by Microsoft, Facebook and Google. However, these operators don’t have a monopoly on change. There is growing interest, especially among some smaller suppliers, in small, highly optimised and self-contained data centres.

By Daniel Bizo, Senior Analyst, Data Centre Technologies, and Andy Lawrence, Research Vice President, Data Centre Technologies (DCT) & Eco-Efficient IT, 451 Research.

451 RESEARCH believes these data centres could win widespread adoption at the edge of the network, for high-performance computing and ultra-dense sites in metropolises. Nevertheless, the radical change involved may slow down adoption – or confine it to niche markets.

What is a micro-modular data centre?

The development of the prefabricated modular data centre (PFM) market in general has been slowed (and to an extent still is) by misconceptions about the available technology options and the maturity of products. Many still picture shipping containers when they hear the phrase ‘prefabricated’ or ‘modular’ data centre. While containerised products have found real success and remain highly relevant to the marketplace, they are just one of the many forms of prefabricated, modular data centre available to operators.

One of these options is the micro-modular data centre (MMDC), a form factor held back by a lack of awareness, and further hampered by its radical departure from traditional data centre designs. 451 Research believes these types of data centres have significant potential.

For clarity, and for use in our future reports and market sizing programs, we define MMDCs as follows:
“A micro-modular data centre is a form of prefabricated modular data centre that tightly couples or incorporates both the IT and supporting infrastructure facilities into a self-contained and prefabricated unit or cabinet. Typically, the cooling and climatic controls, as well as power distribution and network connectivity, will be built in; other integrated functions, such as physical security, fire suppression, shock absorption, shielding against electromagnetic interference, power conditioning and UPS or battery, may also be provided. In some cases, IT systems will also be incorporated into and supplied with the design, and the entire unit may be optimised and managed for specific purposes such as high-performance computing, very low-energy computing, cloud services or analytics.”

For clarity, we also add the following to our definition:
“MMDCs can be designed to be sited in a wide variety of locations, although they will replicate many functions already provided by a traditional data centre. Often, the product will be equipped with a hardened shell for use in non-conditioned environments and will require no superstructure. Micro-modular data centres do not (currently) include power generation and may require externally sourced cooled air or water. Micro-modular data centres are self-contained units – they are not containerised data centres or prefabricated ‘rooms’ that house traditional racks.”

This definition is somewhat formal; a simpler way is to think of MMDCs as small data centres that can’t be walked into, are delivered as a complete or semi-complete package, and are adapted for their purpose. They may, for example, be used for edge-of-network computing, content distribution or as an evolution of branch-office computing or the ‘server closet’.

As of May 2014, half-rack, single-rack and up to three-rack MMDC products were available, but the definition applies to any future multi-rack designs that wrap IT cabinets (or an evolution thereof) into their own data centre technical facility. 451 Research expects continued evolution of form factors as the industrialisation of IT and data centres continues. Several vendors now make and sell MMDCs. The most notable examples are Spain-based prefabricated data centre pioneer AST Modular, recently acquired by Schneider Electric; UK-based data centre equipment manufacturer Cannon Technologies; and US-based Elliptical Mobile Solutions, which was created to focus on the micro-modular market.

Aesthetics and function

The look and feel of a micro-modular site is dramatically different from that of a traditional data centre. There is no open white space – even if the units are housed in a dedicated data centre facility, let alone on the floor of a warehouse or outside an existing building. But this is precisely the point. By tightly encapsulating each cabinet (or a few cabinets) in its own dedicated white-space infrastructure, there is no need to create the shared, conditioned macro environment that most existing data centres maintain in the IT space. This does away with much of the design and operational complexity, and the costs, associated with such environments (or at least moves them inside the cabinet, where they are entirely the responsibility of the supplier).

The primary benefit is simplified and guaranteed delivery of airflow (or cooled liquid) to the IT kit. Cabinets do not interfere with each other, and there is no loss of efficiency from mixed and varying power densities. For example, if the MMDC is housed inside a traditional data centre, configuration and maintenance work being carried out elsewhere in the data hall will have no impact on the operation of the cabinets (as long as power is maintained). Reconfiguration of the data hall also won’t have efficiency implications, which removes the need for repeated measurement and modelling to test the impact of changes.

This should give operators the confidence to make whatever changes to the data hall and the IT are needed to meet business needs, without having to worry too much about the performance of the data centre.

Compartmentalisation also helps with the protection of the IT. The hard outer shell makes physical access to systems markedly more difficult and conceals the nature of the IT infrastructure from unauthorised personnel. It should also stop fire breaking in or out of the cabinet, acting as a physical firewall.

Micro-modular cabinets do not need to be housed inside data centres; however, they do require external support for critical power delivery and cooling. An adequate electricity supply at the site is a prerequisite, as is a means of removing heat from the cabinets – via a water circuit, or via air if the cabinets are housed in an enclosed space. For high-density cabinets that need chilled water, an external chiller plant may also be needed.
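
As a rough illustration of the external cooling requirement, the arithmetic below (a minimal sketch with assumed, round figures – not vendor specifications) estimates the water flow a high-density cabinet would need:

# Rough sizing of the external water circuit needed to carry away
# cabinet heat. Illustrative figures only; actual requirements depend
# on the product and the supply/return temperature difference.

WATER_SPECIFIC_HEAT = 4186  # J/(kg*K)

def required_flow_lps(heat_load_w, delta_t_k):
    """Litres per second of water needed to absorb a given heat load
    at a given supply/return temperature difference (1 kg ~ 1 litre)."""
    return heat_load_w / (WATER_SPECIFIC_HEAT * delta_t_k)

# Hypothetical: a 50kW cabinet with a 10 K rise across the circuit.
print(f"{required_flow_lps(50_000, 10):.2f} L/s")  # ~1.19 L/s, or ~72 L/min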

Today’s niche can be mainstream in the future

There are scenarios that appear to fit MMDCs very well, assuming the aforementioned requirements can be met. The technology can greatly ease the rollout of a dispersed grid of micro-sites for branch IT, telecommunications and IT service delivery networks. Rapid and cost-effective deployment of high-performance computing is another typical use case. There are products that can sustain IT loads of up to 50kW with warm-water cooling, and even more with chilled water – several vendors can support 60kW loads or, in extreme cases, 80kW.

This is not to say that MMDCs are technically limited to these niches. We can envisage many use cases, especially because MMDCs could enable enterprises to keep some local computing under their direct control while moving most of their workloads to colocation and the cloud. We believe that MMDCs are likely to prove relevant for many use cases because of their performance and lean cost structure. In addition to the operational benefits already discussed, multiple MMDCs can be used alongside each other to support highly mixed power densities, using capital more efficiently than with less granular designs. Lower-density (sub-10kW) cabinets tend to cost substantially less than high-density models. Operators can also defer capital outlays for data hall capacity by buying the modules only when really needed. A modular critical power and chiller infrastructure (which can prove difficult to achieve) would help the operator align its expenditure even more closely with actual IT needs.
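
To make the deferral argument concrete, here is a minimal sketch using hypothetical prices and an assumed cost of capital (illustrative figures, not 451 Research data), comparing an up-front build with incremental module purchases:

# Compares the present value of building data hall capacity up front
# with buying MMDC modules incrementally as demand materialises.
# All figures are hypothetical.

DISCOUNT_RATE = 0.08       # assumed cost of capital per year
UPFRONT_BUILD = 2_000_000  # ten modules' worth of capacity, built in year 0
MODULE_COST = 200_000      # hypothetical per-module price
DEPLOY_YEARS = range(10)   # one module per year as demand grows

def present_value(amount, year, rate=DISCOUNT_RATE):
    """Discount a future cash outlay back to year 0."""
    return amount / (1 + rate) ** year

pv_upfront = float(UPFRONT_BUILD)  # all capital committed immediately
pv_modular = sum(present_value(MODULE_COST, y) for y in DEPLOY_YEARS)

print(f"PV of up-front build:    {pv_upfront:,.0f}")   # 2,000,000
print(f"PV of incremental MMDCs: {pv_modular:,.0f}")   # ~1,449,000

# Under these assumptions the incremental path commits roughly 27% less
# capital in present-value terms, before any difference in unit prices
# or stranded-capacity risk is even considered.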

A further reason why we believe MMDCs have a bright future: bandwidth and latency constraints, and the cost of content distribution, will make it attractive to store some applications and data near the user, in densely populated cities and inside companies, rather than at remote cloud behemoths.

There are barriers to MMDC adoption, not least the need to take on new capital costs and technical support obligations, against the tide of moving work to colocation and the cloud.

Also, initially, for those with existing data centres that are not full, the use of new MMDCs outside existing secure and well-connected space will likely seem – and perhaps be – an added complexity, so adoption is likely to be gradual. Another valid argument against MMDCs is the risk of rapid thermal runaway should the IT load exceed the available cooling capacity, or the cooling fail: there is very little thermal capacity to absorb excess heat in the constrained space of a module. To reconcile this risk with the financial drive for high utilisation, the IT department will need a profound understanding of the dynamic power profile across its infrastructure.
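
A back-of-envelope calculation, using assumed round figures, shows why the thermal buffer is so small once cooling fails:

# Back-of-envelope estimate of how fast air temperature rises in a
# sealed MMDC cabinet after a cooling failure. Illustrative assumptions.

AIR_DENSITY = 1.2         # kg/m^3, air at roughly room conditions
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K), at constant pressure

def warming_rate_k_per_s(it_load_w, free_air_m3):
    """Kelvin per second of air temperature rise if no heat is removed."""
    air_mass_kg = AIR_DENSITY * free_air_m3
    return it_load_w / (air_mass_kg * AIR_SPECIFIC_HEAT)

# Hypothetical: a 20kW cabinet with ~2 m^3 of free air inside the shell.
print(f"~{warming_rate_k_per_s(20_000, 2.0):.1f} K/s")  # ~8.3 K per second

# Even allowing for heat soaked up by metal and the IT hardware itself,
# intake air can exceed safe temperatures within seconds, not minutes.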

With dynamic IT system power control tools and live VM migration at operators’ disposal, these technical challenges can be overcome – although there will still be institutional barriers. Suppliers are grappling with the many issues around redundancy for IT, cooling and power. It is likely that long-term solutions will rely on software as much as on redundant or granular hardware.
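
One plausible shape for such a software safeguard is a control loop that throttles, and then migrates, workloads as a module approaches its cooling limit. The sketch below is purely illustrative: every function name is a hypothetical stand-in (for DCIM/PDU telemetry, IPMI-style power capping and hypervisor live migration), not a real product API.

# Hypothetical safeguard loop for one MMDC module; intended to run
# every few seconds. All methods are illustrative stand-ins.

COOLING_CAPACITY_W = 50_000  # assumed rated cooling capacity of the module
SOFT_LIMIT = 0.90            # throttle when 90% of capacity is drawn
HARD_LIMIT = 0.97            # shed workloads above 97%

def control_step(module):
    utilisation = module.read_power_draw_w() / COOLING_CAPACITY_W
    if utilisation > HARD_LIMIT:
        # Shed load: live-migrate the hottest VMs to a less loaded module.
        for vm in module.hottest_vms(count=2):
            vm.live_migrate(target=module.least_loaded_peer())
    elif utilisation > SOFT_LIMIT:
        # Throttle first: cap server power before resorting to migration.
        module.set_power_cap_w(int(COOLING_CAPACITY_W * SOFT_LIMIT))
    else:
        module.clear_power_cap()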

Given that MMDCs are small, the capital and technical risks involved in innovation are likely to be lower. It may be more feasible, for example, to use power generated by fuel cells or renewable sources, to try out supercapacitors or new forms of battery, or to introduce liquid cooling: the capital outlay will be lower, and the scale of the challenge much more manageable. On the other hand, buying support infrastructure such as UPS systems, generator sets and chillers (if needed) in small increments may be less cost-effective than using larger systems, which can erode some of the long-term financial advantage if not carefully planned for. Despite the challenges, 451 Research considers MMDCs interesting for their very lean cost structure, their ease of deployment and operation, and the large performance headroom that enables very high thermal densities – all of which will help adopters drive down the unit cost of IT capacity.

Finally, most data centre operators have a very particular idea of what a proper data centre should look like, and any radical departure from the norm increases the likelihood of outright rejection, regardless of capital and operational benefits. Mimicking a traditional superstructure around micro-modules may address this issue; however, it could also undermine the potential for cost savings.