In my more than 20 years of working with Intel, I’ve learned something very important: moving an industry forward is not just about the technology itself. It’s about the relationships and partnerships that get us there, especially with the open source community. Success depends on a holistic, strategic view of the overall system, with an eye toward how each hardware and software component interacts with the others, minimizing friction and bottlenecks.
Collaboration between enterprises, and with the larger tech community, is essential for the continued success of AI projects both large and small. The success of companies like Facebook, with billions of users, depends on software performance at an unprecedented scale, and collaboration with open source developers is an important part of that foundation. That’s why Intel is working with Facebook as the lead customer for the new Intel® Nervana™ Neural Network Processor for Inference (Intel® Nervana™ NNP-I), and why Intel is committing to open source the API for the processor.
The Intel Nervana Neural Network Processor for Inference (NNP-I) is Intel’s first purpose-built inference ASIC. It’s built from the ground up with a design philosophy that prioritizes performance per watt, and is projected to offer industry-leading acceleration for real-world inference applications today while providing the programmability to scale with the AI workloads of the future. The Intel Nervana NNP-I’s design incorporates years of expertise from the AI software community, and its open source foundations will facilitate further innovation by providing low-level access so developers can make the best use of Intel hardware and build industry solutions faster.
The Intel Nervana NNP-I is optimized for Glow*, a new machine learning compiler and runtime that works with the PyTorch open source machine learning framework. Glow, PyTorch, and the Intel Nervana NNP-I are all, quite literally, made for each other. Intel and Facebook collaborated on Glow from its inception, shaping both the Intel Nervana NNP-I backend that will be open-sourced and the current Glow API for accelerators. The software code generator for the Intel Nervana NNP-I is now fully integrated with Glow, and Glow serves as the bridge between PyTorch and neural network accelerators. The result is that machine learning scientists and engineers working in PyTorch will have easy access to this powerful new inference hardware, for quicker, better results.
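To make the "bridge" role concrete, here is a minimal sketch of how a compiler can sit between a framework graph and pluggable accelerator backends. All class and function names below are invented for illustration; this is not the actual Glow or PyTorch API, just the general pattern of a backend registry that dispatches code generation to the target device.

```python
# Hypothetical sketch of a compiler bridging a framework graph and
# pluggable accelerator backends. Names are invented for illustration;
# this is not the real Glow API.

class Backend:
    """A backend compiles a framework graph into device-specific code."""
    def compile(self, graph):
        raise NotImplementedError

class CPUBackend(Backend):
    def compile(self, graph):
        # A real backend would emit machine code; here we just tag each op.
        return [f"cpu:{op}" for op in graph]

class AcceleratorBackend(Backend):
    def compile(self, graph):
        return [f"accel:{op}" for op in graph]

# The compiler exposes one entry point; a framework hands it a graph and
# a target name, and the registry dispatches to the right code generator.
BACKENDS = {"cpu": CPUBackend(), "accel": AcceleratorBackend()}

def compile_graph(graph, backend_name):
    return BACKENDS[backend_name].compile(graph)

graph = ["conv2d", "relu", "fullyconnected"]
print(compile_graph(graph, "accel"))
# ['accel:conv2d', 'accel:relu', 'accel:fullyconnected']
```

In this pattern, adding support for a new accelerator means adding one backend class, while the framework-facing API stays unchanged, which is what lets PyTorch users target new hardware without rewriting their models.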
“Glow” is short for “graph lowering,” the primary method the compiler uses to generate efficient code. This new compiler and runtime is especially good at automating tasks such as instruction selection, memory allocation, and graph scheduling, and it enables users to generate highly optimized code for specific hardware. Like PyTorch, Glow is open source, built for community support, and was crafted with inference accelerators and ASICs (like the Intel Nervana NNP-I) in mind.
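The idea behind graph lowering is that high-level neural network nodes are rewritten into a small set of simpler primitives that are easier to optimize and to map onto hardware. The toy sketch below illustrates the concept with invented op names and a single rewrite rule; real lowering in a compiler like Glow involves many more node kinds and operates on a typed graph, not a list.

```python
# Toy illustration of "graph lowering": high-level nodes are rewritten
# into simpler primitives. Op names and rules are invented for
# illustration only.

LOWERING_RULES = {
    # A fully connected layer lowers to a matrix multiply plus a bias add.
    "fullyconnected": ["matmul", "add"],
    # Ops not in the table (e.g. "relu") are already primitive in this toy IR.
}

def lower(graph):
    lowered = []
    for op in graph:
        lowered.extend(LOWERING_RULES.get(op, [op]))
    return lowered

high_level = ["fullyconnected", "relu", "fullyconnected"]
print(lower(high_level))
# ['matmul', 'add', 'relu', 'matmul', 'add']
```

Once everything is expressed in a handful of primitives, later stages such as instruction selection and memory allocation only have to handle that small vocabulary, which is what makes hardware-specific optimization tractable.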
The speed and efficiency of the trillions of inferences Facebook’s machine learning systems perform each day matter to the billions of people who use Facebook’s family of applications, and Facebook’s continued success depends heavily on fast, efficient inference for those workloads. Inference tools like the Intel Nervana NNP-I perform best in low-friction, holistic systems, and those systems are enabled by deep partner relationships that inform Intel’s overall design and execution philosophy.
At Intel, we don’t just build processors, storage devices, or fabric. Those are simply the constituent parts of our larger technological roadmap. Instead, we build ecosystems that are the basis for innovation and creativity throughout various technology industries. This means being a good partner for both existing companies as well as researchers and independent creators who use our technology as a basis for their own breakthroughs. It means building products that facilitate success across the technological community.
The ease that programmers, developers, and creators enjoy is the result of deliberate coordination and planning between major industry players like Intel and Facebook, and of the hard work and dedication of the open source community. We are excited that the AI community will soon have a new, powerful resource at its disposal for innovation in deep learning. Find out how you can take advantage of PyTorch and Glow on GitHub, and learn about the Intel Nervana NNP-I here.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.