For the first time, more than 100 accelerated systems are on the list of the world’s 500 most powerful supercomputers, accounting for 143 petaflops, over one-third of the list’s total FLOPS. NVIDIA® Tesla® GPU-based supercomputers comprise 70 of these systems – including 23 of the 24 new systems on the list – reflecting compound annual growth of nearly 50 percent over the past five years.
There are three primary reasons accelerators are increasingly being adopted for high-performance computing.
First, Moore’s Law continues to slow, forcing the industry to find new ways to deliver computational power more efficiently. Second, hundreds of applications – including the vast majority of those most commonly used – are now GPU-accelerated. Third, even modest investments in accelerators can now result in significant increases in throughput, maximising efficiency for supercomputing sites and hyperscale datacentres.
“One day, all supercomputers will be accelerated,” said Jen-Hsun Huang, co-founder and chief executive officer at NVIDIA. “Leading supercomputing sites around the world have turned to GPU-accelerated computing, reflected in today’s TOP500 list. As the pace of discovery accelerates and researchers turn to computation, machine learning and visualisation, we fully expect to see this trend increase.”
Many of the world’s leading systems use NVIDIA Tesla accelerators, including the fastest supercomputers in 10 countries. These include the fastest system in the U.S., Titan, at Oak Ridge National Laboratory; the fastest system in Russia, Lomonosov 2, at Moscow State University; and the fastest system in Europe, Switzerland’s Piz Daint, at the Swiss National Supercomputing Centre.
Moore’s Law Slows
As the size of transistors approaches atomic scale, it has become increasingly difficult to improve microchip performance without disproportionately increasing power or cost. While the industry can no longer rely on performance doubling every 18 months, computational demands continue to increase sharply. This has led to the growing adoption of accelerators, which work alongside CPUs to boost the performance of scientific and technical applications.
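In practice, the division of labour is straightforward: the CPU runs the application and hands its most compute-intensive loops to the GPU, which executes them across thousands of parallel threads. The following minimal CUDA sketch – a generic vector addition, not code drawn from any application named in this release – illustrates that offload pattern:

#include <cstdio>
#include <cuda_runtime.h>

// Kernel executed on the GPU: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    // Host (CPU) buffers
    float *ha = (float*)malloc(bytes);
    float *hb = (float*)malloc(bytes);
    float *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device (GPU) buffers
    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);

    // The CPU copies the inputs to the accelerator, the GPU does the parallel arithmetic
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);          // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The CPU remains in charge of the application – allocating data, launching work and retrieving results – while the GPU contributes the parallel arithmetic, which is precisely the portion of the workload that no longer scales well on CPUs alone.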
Hundreds of HPC Applications Support GPU Accelerators
The number of scientific, engineering, data analytics and other applications supported by the Tesla platform has grown steadily since 2008, with 370 GPU-accelerated applications now available.
A new study by Intersect360 Research, a tech research firm, shows that nearly 70 percent of the 50 most widely used HPC applications – and 90 percent of the top 10 – support GPU-accelerated computing. Among them are the ANSYS Fluent computational fluid dynamics application; the GROMACS molecular dynamics application; and now – as announced separately today – VASP, an atomistic simulation application used by researchers around the world to model the behaviour of individual atoms at the electronic level.
One of the study's authors, Addison Snell, CEO of Intersect360 Research, said: “Accelerated computing has reached the tipping point in HPC, with NVIDIA’s Tesla GPUs as the leader in the market. The adoption of accelerators and availability of GPU-accelerated versions of top HPC codes have been steadily increasing.”
Improved Datacentre Throughput with GPUs
Supercomputing and hyperscale datacentres can cost hundreds of millions of dollars. In the past, the steady progression of Moore’s Law allowed these facilities to keep up with ever-increasing demand simply by upgrading to new CPUs. That’s no longer possible. With the advent of GPU-accelerated computing, these large datacentre investments can instead be extended by adding NVIDIA Tesla accelerators, which deliver the throughput required to meet these demands.