Artificial Intelligence says “I’ll be back” and Deep Learning says “I’m here”
Despite gaining ground as a marketing term and remaining a rich field for basic and applied research, AI is highly unlikely to emerge as a dominant force in the next 12 months. We’re still far from the “singularity” that so many of us tech-geeks fear, so don’t expect AI to jump out of the marketing copy and begin hunting us down à la The Terminator by next Christmas.
However, tremendous progress is being made in machine learning and deep learning, and we’ve already seen organisations use this technology to improve their production operations. Though self-driving cars seem to be the most captivating application in the press, the concept of training software applications that then apply what they’ve learned at inference time will reach far beyond the motorways. Whether it’s a robot sorting mail, supply-chain optimization or aiding in the search for oil and gas, deep learning is here to stay.
Petroleum Geo-Services (PGS) broke new ground this year when it deployed a machine learning algorithm to advance its capability to run highly complex seismic processing and imaging applications. Use cases like this will continue to emerge as successful applications of machine and deep learning.
China and the strategic importance of long-term HPC strategies
Though China has yet to catch up to the U.S. or Europe (as a collective whole) in terms of total productive HPC infrastructure, its rapid growth trajectory means the U.S. and European leadership will be staring across a very narrow divide by the end of 2017. Today, companies like Huawei, Sugon, Lenovo and others are emerging as vibrant HPC companies. As 2016 draws to a close, the top two spots in the Top500 are both occupied by systems in China. While the top spots on the Top500 do not measure actual productivity (the main reason we at Cray are not fans of the list), they do demonstrate will and competency.
This rapid growth and competency building by China isn’t by accident; it’s the result of a long-term strategic plan to build infrastructural competency in HPC through targeted investments and the use of that competency to build business infrastructure and gain business advantage. China’s strategic model has already been noted by countries around the globe, including India, which has recently announced it will initiate a made-in-India strategy with significant government investment.
The U.S., with the National Strategic Computing Initiative and the Department of Energy Exascale program, as well as the European Union with programs like Horizon 2020, have been pushing specific long-term strategies that drive toward the convergence of deep learning and supercomputing. As we end 2016, the status of that progress is uncertain.
The coming year will be rife with such uncertainty. Will national and regional strategies accelerate or stagnate in 2017? With dramatic political change in the U.S. and across the EU, it’s unclear what national policies will look like by the end of 2017, or whether there will be the will or the clarity to advance to a strategic drumbeat. What is clear is that countries and regions with well-defined HPC strategies, like China today, will end 2017 on a business and technology trajectory that provides competitive advantage and may be difficult to catch.
The Struggles of Moore’s Law
2016 saw the introduction or announcement of a number of new and innovative processor technologies from leaders in the field such as Intel, Nvidia, ARM, AMD, and even from China. In 2017 we will continue to see capabilities evolve, but as the demand for performance improvements continues unabated and CMOS struggles to deliver them, we’ll see processors become more and more power hungry. Dennard scaling held that as transistors shrank, power density stayed roughly constant, so performance-per-watt improved in step with Moore’s Law. With Dennard scaling at an end and Moore’s Law slowing, vendors will be pushed to produce processors that run hotter and hotter. The continued push to improve performance will spur innovation by processor vendors in processor features, as well as in power delivery and cooling technology, and will push system vendors to improve their platform designs. This will be a key challenge in 2017 and beyond.
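To see why the end of Dennard scaling matters for power, it helps to run the classic back-of-the-envelope arithmetic. The sketch below (our illustration, not from any vendor roadmap; the scale factor and normalized units are assumptions) uses the standard dynamic-power relation P ≈ C·V²·f to compare one process shrink under classic Dennard scaling with a post-Dennard shrink where supply voltage no longer drops:

```python
# Illustrative arithmetic: why chips run hotter once Dennard scaling ends.
# Dynamic power per transistor is roughly P = C * V^2 * f
# (switched capacitance, supply voltage, clock frequency).

def power_density(c, v, f, transistors_per_area):
    """Dynamic power per unit area: (C * V^2 * f) per transistor, times density."""
    return c * v**2 * f * transistors_per_area

# Baseline process node, in arbitrary normalized units.
c, v, f, density = 1.0, 1.0, 1.0, 1.0
baseline = power_density(c, v, f, density)

# Classic Dennard scaling for one node shrink by factor k (~1.4):
# capacitance and voltage each drop by 1/k, frequency rises by k,
# and transistor density rises by k^2 -- power density stays flat.
k = 1.4
dennard = power_density(c / k, v / k, f * k, density * k**2)

# Post-Dennard: voltage can no longer scale down (leakage and threshold
# limits), so packing k^2 more transistors at the same V multiplies
# power density by roughly k^2.
post_dennard = power_density(c / k, v, f * k, density * k**2)

print(baseline, round(dennard, 2), round(post_dennard, 2))  # 1.0 1.0 1.96
```

Under the old regime each shrink was "free" in power terms; now every density gain shows up directly as heat, which is exactly the pressure on power delivery and cooling described above.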
The cloud of tomorrow, “you got peanut butter in my chocolate!”
The cloud revolution has been on a rocket trajectory if you look at some of the revenue numbers, and 2017 will see continued growth. The cloud’s ability to absorb spikes in consumption and to roll out new features quickly has been a boon to developers and small businesses. But in the arena of supercomputing, cloud has not yet taken shape. Supercomputing, at its best, matches the application to the underlying architecture to maximize scalability, productivity and performance while minimizing total cost of ownership. Supercomputers give the consumer the earth-shattering ability to achieve new results at scale. The question is: can cloud and supercomputing provide solutions better together than separately? In 2017 something of a paradigm shift will occur in the thinking of both cloud and platform providers as they look at serving an evolving customer base.
Just as a cloud in the sky continually changes shape, so too does its technological counterpart. 2017 will bring us more hybrid technologies, with organisations demanding a mixture of cloud and on-premises solutions, and the sharing of data and resources to meet their production needs. Our weather prediction for supercomputing in 2017 is that “it will be partly cloudy”.