Intel® Nervana™ Neural Network Processors

Hardware accelerators designed from the ground up to address the needs of deep learning training and inference separately, and at scale. Built for the future, not from the past.

Intel® NNP-T 1000 for Training

Built solely to train deep learning models at lightning speed, the Intel® Nervana™ Neural Network Processor-T 1000 puts a large amount of HBM memory and local SRAM much closer to where compute actually happens. As a result, more of the model parameters can be stored on-die, saving significant power while increasing performance. The processor also features high-speed on- and off-chip interconnects that let multiple processors connect card to card and chassis to chassis, acting almost as one efficient chip and scaling to accommodate larger models for deeper insights.
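The scaling idea described above can be illustrated with a small conceptual sketch: when several accelerator cards act "almost as one chip," a model's parameters can be sharded so each card holds only a slice, and together the cards accommodate a model too large for any one of them. This is a generic illustration, not Intel's API; the function name and shard layout are assumptions for the example.

```python
# Conceptual sketch (not Intel's API): shard model parameters across
# multiple interconnected accelerator cards so each card stores only
# part of the model, letting the group hold a larger model overall.

def partition_parameters(num_params: int, num_cards: int) -> list:
    """Split parameter indices into near-equal contiguous shards, one per card."""
    base, extra = divmod(num_params, num_cards)
    shards, start = [], 0
    for card in range(num_cards):
        # Spread any remainder across the first few cards.
        size = base + (1 if card < extra else 0)
        shards.append(range(start, start + size))
        start += size
    return shards

# Example: a hypothetical 10-billion-parameter model split over 8 cards.
shards = partition_parameters(10_000_000_000, 8)
per_card = [len(s) for s in shards]
```

Each card then only needs enough on-die memory for its own shard, which is why keeping memory close to compute and linking cards with high-speed interconnects go hand in hand.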

Intel® NNP-i 1000 for Inference

Headed into production in 2019, the Intel® Nervana™ Neural Network Processor-I 1000 is a discrete accelerator designed specifically for the growing complexity and scale of inference applications. The NNP-I 1000 is expected to deliver industry-leading performance per watt on real production workloads. It is built on Intel’s 10nm process technology with Ice Lake cores to support general operations as well as neural network acceleration.