From hardware that excels at training massive, unstructured data sets, to extreme low-power silicon for on-device inference, Intel AI supports cloud service providers, enterprises, and research teams with a portfolio of multi-purpose, purpose-built, customizable, and application-specific hardware that turns models into reality.

Designed to break barriers in deep learning model design and deployment, Intel® AI Accelerators are entirely new hardware architectures, built from the ground up to accelerate increasingly complex deep learning applications.

Intel® Xeon® Scalable processors are the first generation of our platform built specifically to run high-performance AI workloads—alongside the cloud and HPC workloads they already run.

Intel® field programmable gate arrays (FPGAs) are blank, modifiable canvases. Their purpose and power can be easily adapted again and again for any number of workloads and a wide range of structured and unstructured data types.

The Intel® Movidius™ Myriad™ VPUs offer industry-leading performance per watt for demanding AI inference workloads on edge devices. These systems-on-chip (SoCs) are designed specifically for advanced on-device computer vision and neural network applications.

The new, improved Intel® Neural Compute Stick 2 (Intel® NCS 2) features Intel's latest high-performance vision processing unit, the Intel® Movidius™ Myriad™ X VPU. With more compute cores and a dedicated hardware accelerator for deep neural network inference, the Intel® NCS 2 delivers a significant performance boost over the previous-generation Intel® Movidius™ Neural Compute Stick.

Intel® RealSense™ depth and tracking cameras, modules, and processors give devices the ability to perceive and interact with their surroundings through computer vision.