HPC and OpenStack – where does it fit in?

By Andrew Dean, HPC Business Development Manager at OCF, an HPC, storage and data analytics integrator.

I’m often asked whether OpenStack can fit into a High Performance Computing strategy and, if so, where. To answer that question, I would like to start by stepping back a bit and focusing on what our customers are trying to achieve rather than on the technology.
 
For the majority of our customers, the goal is to realise the highest research computing throughput for their investment (a good return on investment) and the shortest possible ‘time to science’. Most of the users on the systems we supply are scientists, researchers (chemists, bioinformaticians, physicists) and engineers. We understand that their business isn’t computing; computing is ‘just’ a tool like any other.
 
To achieve high research computing throughput and a short time to science, there are a few things you need to think about. Sufficient compute capability – this could be on site or in the cloud, but somewhere there must be adequate compute, storage and networking to crunch the numbers.
 
High utilisation – once you have access to compute capacity, you need to use as much of it as possible, as much of the time as possible. With more traditional HPC user groups such as physics, chemistry and engineering, this has been relatively easy to achieve thanks to well-understood applications and excellent schedulers.
 
A traditional HPC cluster with a fixed software stack (OS, scheduler, libraries and so on) keeps the majority of users happy most of the time; these kinds of environments (assuming sufficient workload) often achieve utilisation in the 90 percent range, which shows these customers are making the most of a significant investment. For these users, ‘time to science’ is pretty good too. Once the service is up and running it remains a stable resource, with software updated during planned downtime periods over its three-to-five-year life span.
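To make that 90 percent figure concrete, utilisation is usually just consumed core-hours divided by available core-hours over a period. Here is a minimal sketch with hypothetical numbers – the cluster size and accounting total below are illustrative, not figures from any customer system:

```python
# Illustrative utilisation calculation; all numbers are hypothetical.
cores = 1000                          # total cores in the cluster
hours = 30 * 24                       # one 30-day month
available_core_hours = cores * hours  # 720,000 core-hours on offer

# Consumed core-hours would normally come from scheduler accounting
# (e.g. SLURM's sacct); this figure is made up for illustration.
consumed_core_hours = 655_000

utilisation = consumed_core_hours / available_core_hours
print(f"Utilisation: {utilisation:.1%}")  # -> Utilisation: 91.0%
```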
 
Now, I said the majority of users are happy most of the time – who are the minority that aren’t?
There are many use cases, but examples include users who need Ubuntu (an open source operating system) when the traditional cluster runs Red Hat, or users with a commercial application that is only supported on one scheduler (such as Grid Engine, a commercially supported open-source batch-queuing system for distributed resource management) while the cluster runs another (such as SLURM, an open-source job scheduler for Linux and Unix-like systems).
 
Other issues include needing a feature in the very latest version of an application when the main system is running an older, more stable release, or needing a Hadoop cluster for a few months when the organisation doesn’t have such a service available.
 
This is where OpenStack fits in: building a flexible service to meet the demand from ‘edge cases’ that are, in most cases, currently served by poorly utilised dedicated hardware, whether user- or group-owned workstations or rack servers.
 
OpenStack can host anything from a single small virtual machine all the way up to a complete cluster (including a full HPC software stack) within a virtual environment, and it enables these services to be built quickly, often with very little admin overhead. Take the Ubuntu example above – rather than buying a server (often a one-month procurement process and a one-month delivery), racking it, connecting it to the network and installing an OS (which can take a few weeks to schedule with the IT team), a user could simply select ‘build me an Ubuntu VM’ from a drop-down menu, taking what could have been two and a half months to get a login prompt down to minutes and minimising ‘time to science’ for these edge cases.
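As a rough illustration of how little is involved, here is a minimal sketch using the openstacksdk Python library to launch that Ubuntu VM. The cloud, image, flavor, network and keypair names are assumptions for the example, not a prescribed setup:

```python
import openstack

# Connect using credentials from a clouds.yaml entry;
# 'mycloud' is a hypothetical cloud name.
conn = openstack.connect(cloud="mycloud")

# Look up an Ubuntu image, a flavor and a network by name; all three
# names are assumptions and will differ per deployment.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("research-net")

# Boot the VM and wait until it is ACTIVE.
server = conn.compute.create_server(
    name="ubuntu-edge-case",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    key_name="my-keypair",  # hypothetical keypair name
)
server = conn.compute.wait_for_server(server)
print(f"Ready to log in: ssh ubuntu@{server.access_ipv4}")
```

A self-service drop-down in a dashboard such as Horizon is doing essentially this on the user’s behalf.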
 
Isca, the new HPC system at the University of Exeter, is a good example of a hybrid environment combining traditional HPC and OpenStack. As well as the standard ‘traditional HPC’ nodes and accelerated nodes (featuring NVIDIA GPUs and Intel Xeon Phis), the University has an OpenStack cloud, known as the ‘non-traditional’ HPC, to meet the edge cases I’ve described above. Although primarily built with the requirements of Life Sciences in mind, it could be used for almost any research computing workload not suited to the traditional HPC environment. The University wanted the new system to cater for as wide a variety of research projects as possible, so the system reflects the diversity of its users’ applications and requirements.
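The same approach covers the temporary-cluster case mentioned earlier: provisioning a handful of worker nodes for a few months and tearing them down again is a short loop. A hedged sketch, reusing the hypothetical connection, image, flavor and network from the previous example:

```python
# Spin up a small, short-lived worker pool (e.g. for a temporary
# Hadoop project); the node count and naming are illustrative.
workers = []
for i in range(4):
    server = conn.compute.create_server(
        name=f"hadoop-worker-{i}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    workers.append(conn.compute.wait_for_server(server))

# Months later, when the project ends, reclaim the capacity.
for server in workers:
    conn.compute.delete_server(server)
```

Because the hardware goes back into the shared pool when the project finishes, the capacity is not left idle the way a dedicated, group-owned rack server would be.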
 
In summary, and to answer my original question: is OpenStack High Performance Computing? Sometimes. Could OpenStack be a valuable tool in your research computing strategy? Absolutely!
 
If you would like to have a chat about your infrastructure or are considering OpenStack, please get in touch.