At Intel, our vision of the future is one in which AI is infused everywhere, helping to make the world healthier, safer, and more productive, even aiding us in exploring the stars and understanding the nature of the universe. Ideas that until recently seemed like distant possibilities are being realized today.
We are proud to help support these advances by building products – from CPUs to ASICs – that are optimized for machine learning and deep learning workloads. But as model sizes and data sets grow larger, so do AI compute needs. To better serve our customers and the industry, we are continuously driving performance improvements through ongoing hardware innovations and software optimizations.
One way to comparatively measure performance improvements is through standardized industry benchmarks like MLPerf, which releases benchmarks that reflect the training and inference performance of machine learning hardware. Started in 2018 by AI engineers and researchers, the MLPerf organization now includes working group members from many AI hardware and software companies, including Intel.
To best represent a wide variety of inference use cases, MLPerf has defined four scenarios for the latest results: Single-Stream, Multi-Stream, Server, and Offline. To date, the MLPerf organization has released two rounds of training benchmark results, and this is its first round of inference results. As a participating member, we at Intel are happy to share the latest benchmark results for our inference products.
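The two data-center scenarios differ mainly in how queries arrive. As a hedged illustration based on the public MLPerf inference rules (the function names below are ours, not MLPerf's): Offline delivers the entire dataset at once and measures pure throughput, while Server issues queries with Poisson-distributed arrival times and measures throughput subject to a latency bound.

```python
import random

# Offline: every sample is available up front, so the metric is simply
# samples processed per second of wall-clock time.
def offline_throughput(n_samples, total_seconds):
    return n_samples / total_seconds

# Server: queries arrive one at a time with exponential inter-arrival gaps,
# i.e. a Poisson process at the target queries-per-second rate.
def server_arrival_times(target_qps, n_queries, seed=0):
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_queries):
        t += rng.expovariate(target_qps)  # gap until the next query arrives
        times.append(t)
    return times

print(offline_throughput(10567, 1.0))       # 10567.0 images/sec
print(len(server_arrival_times(100.0, 5)))  # 5 arrival timestamps
```

A system can post a high Offline number yet a lower Server number, since bursty arrivals force it to meet per-query latency targets rather than batch freely.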
Intel’s forthcoming AI ASIC for inference, the Intel® Nervana™ Neural Network Processor (NNP-I), already shows strong performance, both in raw throughput and in performance-per-watt. Running two pre-production Intel Nervana NNP-I processors on pre-alpha software, we achieved 10,567 images/sec in the Offline scenario and 10,263 images/sec in the Server scenario for ImageNet image classification on ResNet-50 v1.5 using ONNX. We expect continued improvements as our team further matures the software stack and tests our production products for the next MLPerf inference disclosure.
Intel Xeon Scalable processors power most of the world’s inference in data centers today. AI is being rapidly infused into most mainstream applications, so integrated AI acceleration in general-purpose CPUs is critical to data center and communications network infrastructure. To help customers get the most from their workloads, Intel released built-in AI acceleration with Intel AVX-512 instructions in the first generation of Intel Xeon Scalable processors in 2017, enabling a large number of operations to execute in parallel across many cores. We built upon this foundation with Intel DL Boost technology, integrated into our 2nd Generation Intel Xeon Scalable processors released earlier this year. Intel DL Boost’s Vector Neural Network Instructions (VNNI) maximize compute resources by fusing three instructions into one while enabling INT8 deep learning inference.
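As an illustration only (not Intel's implementation), the core VNNI operation can be sketched in Python: a single fused instruction (VPDPBUSD) multiplies unsigned 8-bit activations by signed 8-bit weights and accumulates four products into a signed 32-bit lane, work that previously required three separate AVX-512 instructions (vpmaddubsw, vpmaddwd, vpaddd).

```python
# Sketch of one 32-bit lane of a VPDPBUSD-style fused multiply-accumulate.
# Each lane consumes four u8 x s8 pairs and adds their dot product to a
# signed 32-bit accumulator, with hardware-style wraparound.
def vpdpbusd_lane(acc, a_u8, b_s8):
    assert len(a_u8) == len(b_s8) == 4
    dot = sum(a * b for a, b in zip(a_u8, b_s8))
    # wrap to signed 32-bit, as the hardware accumulator would
    result = (acc + dot) & 0xFFFFFFFF
    return result - 0x100000000 if result >= 0x80000000 else result

# 1*10 + 2*(-10) + 3*10 + 4*(-10) = -20
print(vpdpbusd_lane(0, [1, 2, 3, 4], [10, -10, 10, -10]))  # -20
```

Keeping the multiply, pairwise sum, and accumulate in one instruction avoids intermediate 16-bit saturation and frees execution ports, which is where the INT8 inference speedup comes from.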
With these integrated acceleration technologies, Intel Xeon Scalable processors are delivering remarkable MLPerf inference benchmark results:
Intel Xeon Scalable processors remain the only server CPUs in the industry with built-in AI acceleration, and they are the backbone of today’s data centers. Customers worldwide rely on these systems to run the inference workloads that consumers use every minute of every day, and Intel continues to optimize its software to drive additional AI performance gains.
As AI becomes more pervasive, it is becoming infused in all of our products. The client-side PC is no exception. We are seeing strong momentum around applications like gaming, rich media editing, and productivity. Recent examples include AI feature updates in leading software like Adobe’s Photoshop Elements and gaming engines like Unity 3D, but there’s more to come.
The latest MLPerf benchmark results for the Intel Core i3-1005G1 are proof of strong AI opportunities in client-side applications: 218 images/sec for COCO object detection on SSD-MobileNet v1 with excellent latency, 508 images/sec for ImageNet image classification on MobileNet v1, and 101 images/sec for ImageNet image classification on ResNet-50 v1.5, all using the OpenVINO toolkit. Intel Core processors are the only client CPUs with built-in AI acceleration. We are confident that in the mobile PC market, the only solution better than an Intel Core i3 for AI performance is an Intel Core i5, i7, or i9.
MLPerf benchmark results show how quickly AI is improving. Tasks that previously took hours are now done in minutes. Enthusiasts today can run software in their homes that is more advanced than what the world’s top research labs were running only a year or two ago. The pace of AI innovation is truly staggering, and that is why we continue to drive optimizations across our entire AI hardware portfolio, from the Intel Xeon processors running the world’s data centers to the power-efficient Intel® Movidius™ Vision Processing Units (VPUs) powering cameras and edge devices, to make them highly performant and optimized for the most popular frameworks.
I’m incredibly proud of the innovation our Intel engineering teams demonstrated in this round of MLPerf results, and I can’t wait to share the additional progress we will drive into the next round. But I am even more excited by all of the work we are doing across the breadth of the AI ecosystem to deliver real-world results and solve real-world problems, with the goal of making the world healthier, safer, and more productive.
Power measurements are not included in MLPerf results. Performance per watt is based on Intel internal power measurements and published measurements from other companies.
MLPerf v0.5 Inference Closed ResNet-v1.5 Offline, entry Inf-0.5-33.
MLPerf v0.5 Inference Closed ResNet-v1.5 Server, entry Inf-0.5-33.
MLPerf v0.5 Inference Closed SSD-Mobilenet-v1 Offline, entry Inf-0.5-22.
MLPerf v0.5 Inference Closed SSD-Mobilenet-v1 Server, entry Inf-0.5-22.
MLPerf v0.5 Inference Closed Mobilenet-v1 Offline, entry Inf-0.5-23.
MLPerf v0.5 Inference Closed Mobilenet-v1 Server, entry Inf-0.5-23.
MLPerf v0.5 Inference Closed ResNet-v1.5 Offline, entry Inf-0.5-23.
MLPerf v0.5 Inference Closed ResNet-v1.5 Server, entry Inf-0.5-23.
MLPerf v0.5 Inference Closed SSD-Mobilenet-v1 Offline, entry Inf-0.5-24.
MLPerf v0.5 Inference Closed Mobilenet-v1 Offline, entry Inf-0.5-24.
MLPerf v0.5 Inference Closed ResNet-v1.5 Offline, entry Inf-0.5-24.
All MLPerf results retrieved November 6th, 2019 from www.mlperf.org. MLPerf name and logo are trademarks. See www.mlperf.org for more information. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.