Intel and Baidu: Working to Deliver on AI Everywhere

AI is at an inflection point as innovators move from training machine learning models to deploying them to solve real-world problems. Global spending on AI systems is predicted to exceed 77 billion USD in 2022—more than triple the 2018 level. The business value created by AI is expected to reach 3.9 trillion USD in 2022, with companies using AI to transform the customer experience, create new products and services, and operate more efficiently. These data points are impressive, but they are just a fraction of AI’s potential. Delivering the full power of AI will require deep collaboration across diverse partners to create optimal, heterogeneous solutions that deliver AI everywhere—from the cloud to the enterprise edge.

That’s why I’m excited to be in Beijing this week. Intel has a long tradition of collaborating with customers, solution developers, hardware providers, and industry leaders to advance technology transformations. We have worked with Baidu for more than a decade, and 2019 marks Intel’s third appearance on stage at Baidu’s annual developer conference, a great opportunity to highlight the work our companies are doing together to drive AI forward. I’m especially excited to announce that Baidu is working with Intel on the development of the new Intel® Nervana™ Neural Network Processor for training (NNP-T).

Committed Collaborators

Intel and Baidu are natural collaborators on AI. We are both global leaders who recognize AI as the next inflection of computing technology—one that will deliver profound, pervasive benefits across our societies. We view AI as much more than a workload in and of itself; it will be part of every workload and every activity. Our teams have been doing exciting work together on hardware, software, and platform solutions that advance our vision of AI everywhere. Let me share some examples.

Hardware

Intel® Xeon® Scalable processors power Baidu’s data centers and handle a wide range of innovative programs running on Baidu Brain*, the company’s unified platform for AI services. Together we have optimized workloads for hardware such as 2nd Generation Intel Xeon Scalable processors, accelerating performance for speech synthesis, natural language processing, visual applications, and other AI technologies.

In a heterogeneous world with AI everywhere, users and developers will be best served not by a single chip, but by multiple options, each tailored to specific applications or use cases. Intel already offers an extensive range of products and applications that provide the right AI solution for any deployment, from the cloud and data center to the edge. Now, we’re teaming up with Baidu to define the next generation of AI compute for deep learning training—the Intel Nervana NNP-T, which represents a new class of hardware, optimized for memory bandwidth, utilization, and distributed workloads. This custom accelerator was built from the ground up to push the boundaries of what is possible in training deep learning models and to empower the user community to solve the world’s biggest challenges.

Software

Software is just as critical as hardware for AI applications. Intel and Baidu have been working together for years to optimize PaddlePaddle*, China’s first homegrown deep learning framework, developed by Baidu for its developer community. PaddlePaddle was also the first framework to integrate Vector Neural Network Instructions (VNNI), part of Intel® Deep Learning Boost (Intel® DL Boost), a group of acceleration features introduced in our 2nd Generation Intel Xeon Scalable processors. VNNI delivers dramatic performance improvements for image classification, speech recognition, language translation, object detection, and other important functions. These speedups add value for developers as well as end users.
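To give a sense of where the VNNI speedup comes from: a single VNNI instruction multiplies four pairs of 8-bit values and accumulates the products into a 32-bit lane, fusing the multiply/widen/add sequence that earlier int8 code needed. The sketch below models the arithmetic of one such lane in plain Python; the function name and the omission of saturation behavior are simplifications for illustration, not Intel’s specification.

```python
def vnni_dot_accumulate(acc, activations, weights):
    """Simplified model of one 32-bit lane of an AVX-512 VNNI
    dot-product instruction: multiply four unsigned 8-bit
    activations by four signed 8-bit weights, sum the four
    products, and accumulate into a 32-bit running total.
    (Illustrative sketch; the real instruction saturates on
    overflow, which is omitted here.)"""
    assert len(activations) == len(weights) == 4
    assert all(0 <= a <= 255 for a in activations)    # u8 activations
    assert all(-128 <= w <= 127 for w in weights)     # s8 weights
    return acc + sum(a * w for a, w in zip(activations, weights))

# One fused step per lane, repeated across wide vector registers,
# is what accelerates quantized int8 inference for workloads like
# image classification and object detection.
acc = vnni_dot_accumulate(0, [10, 20, 30, 40], [1, -1, 2, -2])
print(acc)  # 10 - 20 + 60 - 80 = -30
```

Quantizing a model to int8 and dispatching its inner loops to instructions like this is how frameworks such as PaddlePaddle realize the gains on DL Boost-enabled processors.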

Beyond the Processor

Baidu uses other Intel technologies to increase performance and functionality for many of its services. For example, Baidu has implemented Intel® Optane™ DC Persistent Memory to provide high concurrency, large capacity, and high-performance data access for the hundreds of millions of users who access personalized mobile content through its Feed Stream service. This allows Baidu’s AI recommendation engines to provide fast, accurate information more efficiently while delivering a better customer experience.

To increase data security, Intel and Baidu have been working together on MesaTEE, a memory-safe Function as a Service (FaaS) computing framework. The solution builds on Intel® Software Guard Extensions (Intel® SGX), enabling security-sensitive services to more securely process their data on public clouds and other environments. We’re also working together on technologies to protect AI algorithms for cloud and edge computing devices.

Collaboration for a Heterogeneous World

Deep collaboration across partners accelerates progress and yields more robust and powerful solutions. By working together to advance AI, Baidu and Intel are helping to usher in a world where AI is ubiquitous and heterogeneous, running on compatible technologies and reaching from the data center to the cloud to the enterprise edge. I’m proud of the strong and growing partnership between Intel and Baidu, and excited to see the ongoing results of our collaboration.