All around the world, customers like Novartis, Warner Bros., GE Healthcare, and Ziva Dynamics are achieving excellent real-world AI results on Intel® architecture. However, AI hardware is nothing without software. The complex set of machine learning, deep learning, and advanced analytics workloads that comprise modern AI applications requires versatile, performant software optimized to make the best use of that hardware’s features.
My team and I deliver software optimizations for deep learning on current-gen and future-gen Intel® Xeon® Scalable processors. I’m excited to share our progress this week at O’Reilly AI San Francisco.
In 2017 alone, Intel generated more than $1 billion in revenue from Intel Xeon processors running AI workloads. “One billion” is a big number, but it still doesn’t fully capture the effect that Intel Xeon Scalable processors are having in AI. Much of AI today runs on Intel Xeon processor-based servers that organizations already use to keep critical infrastructure up and running, perform advanced analytics, or enable high-performance computing. With this in mind, we enhanced the Intel Xeon Scalable platform specifically to run high-performance AI workloads alongside the other cloud and data center workloads those servers already handle. This gives you the best of both worlds. At Intel’s 2018 Data-Centric Innovation Summit, we showcased a set of features coming in future generations of the Intel Xeon Scalable platform, called Intel® Deep Learning Boost (Intel® DL Boost), that will further accelerate deep learning inference on Intel architecture.
The first of these technologies, Vector Neural Network Instructions (VNNI), will be included in the next generation of the Intel Xeon Scalable platform and will accomplish in a single instruction what formerly required three. With VNNI, we project up to an 11X increase in low-precision inference performance for this next-generation platform, compared to the Intel Xeon Scalable platform at its launch in July 2017. The microarchitecture to follow will add support for bfloat16, a new numeric format quickly being adopted by AI practitioners that preserves algorithmic accuracy while increasing parallelism at a fraction of the power.
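To make these two ideas concrete, here is a minimal Python sketch of the arithmetic involved. The function names are our own for illustration, not Intel APIs: `vnni_dot4` models the fused int8 multiply-accumulate that VNNI performs per output lane in one instruction, and `to_bfloat16` models bfloat16 as the top 16 bits of a float32 (real hardware typically rounds rather than truncating, as we do here for simplicity).

```python
import struct

def vnni_dot4(acc, a_u8, b_s8):
    """Illustrative VNNI semantics: four unsigned-8-bit x signed-8-bit
    products summed into a 32-bit accumulator in a single fused step."""
    return acc + sum(a * b for a, b in zip(a_u8, b_s8))

def to_bfloat16(x):
    """Illustrative bfloat16 conversion: keep only the top 16 bits of a
    float32 (sign, 8 exponent bits, 7 mantissa bits), preserving float32's
    dynamic range at reduced precision. Truncation used for simplicity."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

# One VNNI-style step: accumulate a 4-element int8 dot product.
print(vnni_dot4(10, [1, 2, 3, 4], [5, 6, 7, 8]))  # 10 + 70 = 80
# bfloat16 keeps float32's range but only ~3 decimal digits of precision.
print(to_bfloat16(3.14159))  # 3.140625
```

The key point of the fused instruction is that the four byte products are accumulated directly into a 32-bit result, avoiding the intermediate steps (and intermediate saturation) of the older three-instruction sequence.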
Many recent results point to the efficacy of Intel Xeon Scalable processors for deep learning applications across enterprises and in the cloud.
Our work prioritizes the out-of-the-box experience for data scientists and developers using TensorFlow, through optimized wheels and the Anaconda* Python* distribution. Our goal is to improve access to the latest Intel processor performance improvements in TensorFlow. These improvements are largely due to the integration of, and ongoing improvements to, the Intel® Math Kernel Library for Deep Neural Networks (Intel MKL-DNN).
Gaining the benefit of Intel MKL-DNN in TensorFlow formerly required building TensorFlow from source with the MKL build flag, which could be a tedious, time-consuming process. We’re now easing this process through the release of Intel-optimized wheels (pre-built binaries) and containers for TensorFlow. Customers can simply use ‘pip’ to install these pre-built packages instead of compiling an optimized TensorFlow from source.
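As a sketch of the pip workflow, assuming the Intel-optimized wheel is published on PyPI under the `intel-tensorflow` name (the exact package name for your release may differ; check the release notes), installation looks like:

```shell
# Create an isolated environment (optional but recommended).
python -m venv tf-intel
source tf-intel/bin/activate

# Install the Intel-optimized TensorFlow wheel.
# (Assumes the package is published as "intel-tensorflow".)
pip install intel-tensorflow

# Verify that the install imports cleanly.
python -c "import tensorflow as tf; print(tf.__version__)"
```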
We’re also excited to share that the latest Intel optimizations (using Intel MKL-DNN libraries) can be installed easily and quickly using “conda install” in a conda environment on Linux* OS. Anaconda is a Python distribution that includes many of the most popular packages for data science, analytics, machine learning, and deep learning. Anaconda users can now install TensorFlow optimized with Intel MKL-DNN from Anaconda.org directly into their virtual environments. Together, the performance-optimized wheels and streamlined TensorFlow installation through Anaconda are major improvements in ease of use.
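A minimal conda workflow might look like the following. The environment name and Python version are our own choices, and we assume Anaconda’s default channel carries the MKL-DNN-optimized TensorFlow build; consult the Anaconda documentation for the exact package and channel names for your release.

```shell
# Create and activate a fresh conda environment for TensorFlow.
conda create -n tf-mkl python=3.6
conda activate tf-mkl

# Install TensorFlow from Anaconda's default channel, whose builds
# include the Intel MKL-DNN optimizations.
conda install tensorflow
```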
Software is key to moving AI forward. Intel – and my team – will continue to deliver the performance and simplicity needed to shorten the distance between idea and production AI solution. For more on software optimizations and tools for AI on Intel architecture, please look for us at O’Reilly AI San Francisco this week, follow @intelAI on Twitter, and stay tuned to ai.intel.com.