See the design philosophy and research behind the Intel® Nervana™ Neural Network Processors,
designed from the ground up for deep learning training and inference at massive scale.
To quickly process vast, sparse, or complex data for large models within a power budget, AI hardware must deliver a critical balance of compute, communication, and memory. The Intel® Nervana™ Neural Network Processor for Training (Intel® Nervana™ NNP-T) delivers exactly that. Built on an all-new architecture that maximizes the reuse of on-die data, the NNP-T was purpose-built to train complex deep learning models at massive scale and to simplify distributed training with out-of-the-box scale-out support.
Enterprise-scale AI deployments are significantly increasing the volume of inference cycles while demanding ever-stricter latency requirements. The Intel® Nervana™ Neural Network Processor for Inference (Intel® Nervana™ NNP-I) was built for this intensive, near-real-time, high-volume compute. By combining a CPU core with a purpose-built AI inference engine, the NNP-I delivers the novel hardware architecture that emerging, increasingly complex use cases demand, turning customer data into knowledge with a highly efficient, multi-modal inference solution.