The benefits of location when choosing where to house your supercomputer

A broad spectrum of High-Performance Computing (HPC) users, from research institutes and emerging entrepreneurs to corporate enterprises, recognise how the power and efficiency of HPC can accelerate their computationally intensive workloads, delivering research breakthroughs and unlocking competitive commercial advantages. By Spencer Lamb, director of research, Verne Global.

HPC technology has long been the engine room of scientific and commercial innovation, from genome sequencing to modelling how wind affects a Formula One car at speed. With the rise of artificial intelligence, machine learning and deep learning, HPC’s popularity and strategic importance will only continue to grow. As the technology evolves, research-heavy organisations are now seeking nimble HPC cloud services to complement and, in some cases, replace their expensive in-house clusters.

Powerful data centres are required to deliver these evolving HPC services. However, most businesses and research organisations do not currently have access to the right infrastructure: either their existing on-premise equipment is not up to the task, or it is too complex and costly to maintain without specialist support. Concerns about skillsets, technology depreciation and staying ahead of the curve loom large too.

Faced with these limitations, forward-thinking business leaders are increasingly investigating whether to migrate their on-premise functions to strategically located, HPC-specialised data centres, which deliver access to complementary HPC cloud services. There are multiple benefits associated with this approach.

A key benefit relates to power supply and its associated costs. Compared with run-of-the-mill enterprise IT systems, HPC and supercomputers require enormous quantities of energy and generate a great deal of heat when running. Today, the majority of the world’s computer rooms and data centres are simply not equipped to deliver the power that HPC clusters and high-density compute racks need in order to operate.

It is not just power within the data centre that HPC-dependent organisations should be concerned about; the grid is also a critical factor to consider.

A mandate to decarbonise electricity supply, coupled with ageing infrastructure, means that some grids are becoming much less reliable. Electricity grid performance in Europe is under particular pressure, and pricing is directly affected by these factors. The UK, for one, has some of the highest electricity prices in Europe. Furthermore, given that its national grid is predominantly powered by fossil fuels (almost 50 percent of generation coming from gas), UK-based organisations relying on on-premise HPC kit risk not only pushing up their bills but also adding to the nation’s carbon footprint.

Unfortunately, any uncertainty around power supply reliability leaves HPC implementations vulnerable to disruption, and the cost of a total outage, even a brief one, can be crippling.

It is also important to remember that processing large volumes of data arriving at high velocity can put significant strain on compute and storage infrastructure, driving up the costs associated with cooling. There are many cooling solutions on the market that attempt to deal with this; chillers and even fans remain enduring options. But given that cooling is essential if servers are to operate effectively, organisations must bear these costs in mind when thinking about their long-term strategy for HPC usage.
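
As a rough illustration of how power price and cooling overhead compound, the short Python sketch below estimates the annual energy bill for a single high-density rack under two scenarios. Every figure in it (rack power draw, PUE values and electricity prices) is an assumption made purely for the example, not data from this article or any specific facility.

    # Back-of-envelope estimate: annual energy cost of one HPC rack.
    # All numbers are illustrative assumptions, not measured data.

    HOURS_PER_YEAR = 24 * 365

    def annual_cost_gbp(it_load_kw, pue, price_per_kwh):
        """Total facility energy (IT load x PUE) priced per kWh over a year."""
        return it_load_kw * pue * HOURS_PER_YEAR * price_per_kwh

    rack_kw = 40  # assumed draw of a dense HPC rack

    # Assumed conventional site: mechanical chillers, fossil-heavy grid.
    legacy = annual_cost_gbp(rack_kw, pue=1.7, price_per_kwh=0.20)

    # Assumed free-cooled site on a renewable-heavy, low-cost grid.
    free_cooled = annual_cost_gbp(rack_kw, pue=1.2, price_per_kwh=0.05)

    print(f"Conventional site: ~GBP {legacy:,.0f} per year")       # ~119,000
    print(f"Free-cooled site:  ~GBP {free_cooled:,.0f} per year")  # ~21,000

Even with these deliberately rough inputs, the gap is more than five to one, which is why power pricing and free cooling tend to dominate the location decision.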

Yet one size does not fit all in HPC. Businesses and research institutions alike want to spend more of their time and money on everyday operations, such as the staff who lead and undertake the research programmes, and less on power bills and technology maintenance. To achieve this balance with HPC, they need to be sure that their applications are housed in environments that are attuned to, and optimised around, their specific compute and network needs.

Each business, each HPC application and each implementation is unique to the purpose it serves within the organisation, and so the solution enlisted should be too. Just think about the differing models and requirements of businesses or research houses in education, manufacturing, finance or the life sciences. They will have not only varying budgets but also very different scaling requirements, which plays directly into power, cooling and the associated costs that come with them.

The good news is that there are data centres in some parts of the world that are optimised around HPC, providing the flexibility, resilience and power security that end users need.

Typically located in the Nordic countries, these data centres have access to naturally abundant, 100 percent renewable resources such as geothermal and hydroelectric power. They are also connected to the rest of the world by advanced, state-of-the-art networks, negating concerns about latency or resilience. What’s more, their climates mean they are often able to offer natural cooling all year round. Indeed, the ambient temperatures in countries like Iceland provide free cooling that keeps servers operating at optimal levels.

All servers must be cooled in order to operate efficiently and continuously, but as uses for HPC applications and systems have boomed, there is a growing need to think differently. The requirements for operating these advanced systems are both large and stringent, and power and cooling are areas that must not be underestimated.

With data centres located in Iceland in particular, a further benefit for HPC-reliant organisations, especially those focused on environmental research, is that the country already has a fully established green energy ecosystem, purposely built to serve large-scale, power-intensive industries such as aluminium smelters.

For companies or organisations hesitant to make the move, merely replicating what they have been doing on-premise to date is no longer a viable or, indeed, sustainable option. All too many will find themselves trapped in a vicious cycle of technology maintenance, resulting in stalled research and experiments, as well as lost time.

Ultimately, data-intensive HPC workloads will be instrumental in progressing vital research across science, medicine, national security, applied research and business, but their usage does require strategic thinking and planning. Moving applications and installations to cloud-based systems housed in locations where data centres are plugged into smart, renewable power supplies makes technological, financial and environmental sense.
