Our AI future is closer than it seems. AI is already delivering transformational impact across industry segments, from health and precision medicine to transportation and autonomous vehicles. As AI technologies continue to mature and diffuse, we will see novel applications as well as integrations with existing workloads and technology segments.
High-performance computing (HPC) is accelerating this transformation by enabling the application of AI capabilities to existing HPC workflows (HPC-on-AI) and the massive scaling of AI algorithms to take advantage of the capabilities of HPC systems (AI-on-HPC). Promising early results for these approaches point towards a bright future for the integration of AI and HPC.
Data scientists running big HPC applications like genome sequencing or global climate modeling can realize significant efficiencies by adding deep learning capabilities to existing HPC workflows. Deep learning is an excellent match for some of the problem types most commonly addressed by HPC: those that involve identifying and classifying patterns within very large data sets and that demand massive amounts of compute, storage, and networking. Applying deep learning to an HPC application is HPC-on-AI.
Viewed simply, deep learning identifies patterns in multidimensional data sets. Tasks typically suited to deep learning include classification of patterns (e.g., recognizing images), clustering of patterns (e.g., identifying elevated risk from vital-sign monitors), and anomaly detection (e.g., flagging fraudulent credit card transactions). These capabilities are enabling new discoveries in some of the most complex HPC domains. For example, Scripps Translational Science Institute genetics expert Dr. Ali Torkamani and Intel deep learning experts Dr. Kyle H. Ambert and Sandeep Gupta recently presented a solution using deep learning on an Intel® Xeon® Platinum 8180 processor-based system to predict disease from genetic variant data. The solution showed the potential to help clinicians identify patients at risk for cardiovascular disease, an exciting step towards AI-enhanced diagnostic techniques and better patient care.
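At its core, each of these tasks reduces to fitting the parameters of a layered function to labeled examples. The toy sketch below (purely illustrative, unrelated to the cited Scripps/Intel work) trains the simplest possible building block, a single sigmoid unit, by gradient descent to separate two clusters of points; a deep network stacks many such units and trains them the same way via backpropagation.

```python
import math

# Two toy clusters: label 0 near the origin, label 1 near (2, 2).
POINTS = [((0.0, 0.0), 0), ((0.5, 0.0), 0), ((0.0, 0.5), 0),
          ((2.0, 2.0), 1), ((2.5, 2.0), 1), ((2.0, 2.5), 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(steps=3000, lr=0.5):
    """Fit one sigmoid unit (logistic regression) with full-batch gradient descent."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(steps):
        gw = [0.0, 0.0]
        gb = 0.0
        for (x1, x2), t in POINTS:
            y = sigmoid(w[0] * x1 + w[1] * x2 + b)
            # Gradient of the cross-entropy loss w.r.t. the pre-activation is y - t.
            gw[0] += (y - t) * x1
            gw[1] += (y - t) * x2
            gb += y - t
        n = len(POINTS)
        w[0] -= lr * gw[0] / n
        w[1] -= lr * gw[1] / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x1, x2):
    return 1 if sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5 else 0
```

Real deep learning models replace this two-input unit with millions of parameters, which is exactly why the compute, memory, and I/O demands land in HPC territory.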
Further, the ability of deep learning models to adapt in response to new data is leading to their application in HPC realms where the environment is hard to model and evolving continuously. FinTech firms are using deep learning in HPC applications that evaluate data changing dynamically at high speed, in an environment too complex to model completely at any given point in time. The security threat detection space is applying deep learning techniques to help systems keep pace with rapidly evolving data while still identifying probable anomalies: faint signals within a very noisy data deluge.
Finally, HPC-on-AI has great potential to create solutions that combine global data sets with those that are specific to a particular individual or setting. For example, deep learning is expected to bring about breakthroughs in HPC pipelines for precision medicine, aiding with disease treatment and prevention by taking into account individual variability in genetics, environment, and lifestyle.
As these examples show, deep learning will be a key tool in the data scientist’s toolbox as data sets grow in complexity, velocity, and volatility. This ever-advancing environment makes Intel’s leading portfolio of technologies for HPC, including Intel® Xeon® Scalable processors, Intel® Solid State Drives, and Intel® Omni-Path Architecture, all the more relevant.
Deep learning practitioners can also realize significant benefits from the massive scaling enabled by HPC systems. A deep learning scientist's set of tasks, languages, and environments should look very similar whether executed on a personal computer, a local server, or massively parallel HPC infrastructure. It matters less where the neural network runs as long as it is fast and accurate. The difference on an HPC system is the scale of the problems the data scientist can solve and the performance of the deep learning algorithm doing the solving.
The parallelism inherent in deep learning neural networks is an excellent match for highly parallel HPC environments, where extreme compute performance, massive memory pools, and an optimized inter-node communication fabric can significantly extend a deep learning network's ability to identify structures and patterns. This is AI-on-HPC.
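The decomposition behind that scaling can be sketched concretely. A common scheme is synchronous data-parallel training: each node computes gradients on its own shard of the training data, and an all-reduce operation averages them into one global update. The sketch below simulates this in a single process under stated assumptions: the "nodes" are just list slices, and the model is a one-parameter linear fit rather than a neural network.

```python
import random

random.seed(1)

# Synthetic regression data, y = 3x + noise, split into per-node shards.
DATA = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in [i / 50 for i in range(200)]]
NODES = 4
SHARDS = [DATA[i::NODES] for i in range(NODES)]

def local_gradient(w, shard):
    """Gradient of the mean squared error for y = w * x on one node's shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train(steps=200, lr=0.1):
    w = 0.0
    for _ in range(steps):
        # Each "node" computes a gradient on its shard; an all-reduce
        # (here, a plain average) combines them into one global update.
        grads = [local_gradient(w, shard) for shard in SHARDS]
        w -= lr * sum(grads) / NODES
    return w
```

On a real cluster the averaging step is an MPI-style all-reduce over the fabric, which is why an optimized inter-node interconnect matters so much for training throughput.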
The work of the US Department of Energy, Office of Science, in collaboration with UC Berkeley and Intel, demonstrates the potential for deep learning on HPC infrastructure. The team created a 15-petaflop deep learning system for solving scientific pattern classification problems, scaling the training of a single deep learning model across up to 9,600 Intel® Xeon Phi™ processor-based nodes on the Cori supercomputer. This massive scale enabled the model to extract weather patterns from a 15TB climate dataset more effectively. The team's results demonstrate the advantages of optimizing and scaling deep learning training onto many-core HPC systems when using large, complex data sets.
In addition to helping process exceedingly complex data, a second major benefit of using HPC infrastructure for deep learning is the greatly improved response time for training deep learning algorithms. Training a production deep learning network solution requires an iterative process of compute-intensive experimentation. Accelerating the exploration, assessment, and optimization of the deep learning network can materially shorten each model’s iteration time and contribute to higher quality results.
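That experimentation is often embarrassingly parallel: independent trials, such as candidate learning rates, can each run on their own node, so the wall-clock time for a whole sweep approaches the time of a single trial. A minimal sketch under stated assumptions (threads stand in for cluster nodes, and a one-parameter quadratic stands in for a real training run):

```python
from concurrent.futures import ThreadPoolExecutor

def run_trial(lr, steps=50):
    """One training 'experiment': gradient descent on f(w) = (w - 5)^2
    with a given learning rate, returning the final loss after a fixed budget."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 5.0)  # gradient of (w - 5)^2
    return lr, (w - 5.0) ** 2

def sweep(learning_rates):
    # Threads stand in for cluster nodes here; on an HPC system each
    # trial would run on its own node concurrently.
    with ThreadPoolExecutor(max_workers=len(learning_rates)) as pool:
        results = list(pool.map(run_trial, learning_rates))
    return min(results, key=lambda r: r[1])

# Hypothetical candidate learning rates; the last one deliberately diverges.
best_lr, best_loss = sweep([0.001, 0.01, 0.1, 0.5, 1.1])
```

Each completed sweep feeds the next round of experiments, so cutting per-iteration wall-clock time compounds across the whole model development cycle.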
One of the most surprising aspects of the present AI transformation is the pace of change. Tasks that seemed aspirational just four years ago are now commonplace. In recent years, capabilities like speech recognition and computer vision quickly advanced from ‘somewhat useful’ to near- (or better-than-) human level. It seems like every day there are articles about new, interesting applications.
With HPC-on-AI and AI-on-HPC, we can expect the rate of change to increase through the addition of AI techniques to HPC workflows and the acceleration of AI algorithms on powerful HPC systems. In either case, customers can achieve excellent results and shortened time-to-value when running their AI workloads on the same well-known, versatile, efficient Intel® architecture relied upon for so many other tasks.
The successes seen so far demonstrate the promise of these converged approaches. We're excited about the potential of well-integrated AI and HPC, and we will continue working to enable new discoveries and innovations through their powerful synergies.
Kurth, Thorsten, et al.: "Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data." Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '17), Article No. 7. https://dl.acm.org/citation.cfm?doid=3126908.3126916