The Mainstreaming of AI

For those who attended the sold-out Artificial Intelligence Conference presented by Intel and O’Reilly Media earlier this month in San Francisco, it was clear that we’ve reached a tipping point with AI. The technology, once the province of specialized hardware and lab environments, is going mainstream. AI is quickly coming of age, and that evolution is at the heart of our AI vision: full-stack solutions spanning hardware and software, deployed everywhere from devices to the cloud.

Intel and O’Reilly Media have partnered on a series of Artificial Intelligence conferences that showcase real-world AI deployments across nearly every industry. In San Francisco, we saw applications spanning robotics, computer vision, speech, and edge and device computing.

  • Lenovo used TensorFlow optimized for Intel® Xeon® Scalable processors to create a computer vision application that “sees” products whizzing through assembly lines and detects defects in real time.
  • Visio Ingenii demonstrated an object recognition application that can detect fires and track their patterns to raise alert levels as the fire spreads.
  • Altia Systems showed how it uses neural network-based object detection to deliver an immersive video conferencing experience.
  • And we demonstrated how the Intel® Movidius Neural Compute Stick lets you do real-time object recognition in devices at the edge of the network.

In her keynote talk, Julie Choi covered a range of AI use cases enabled by Intel, including how Ziva Software used Intel AI to bring a giant shark to the big screen in The Meg. (See it: best shark movie since Sharknado!) Using machine learning algorithms on Intel Xeon Scalable processors, Ziva cuts the time for production companies to create virtual characters with lifelike appearances and movements from months to days, slashing costs along the way.

Julie was joined on stage by Ariel Pisetzky of Taboola, a leading content discovery platform that runs a large-scale recommendation engine on TensorFlow* across seven data centers. Ariel explained how Taboola needed to speed up inference, with a goal of a 30% improvement. After evaluating available CPU and GPU options, Taboola chose Intel Xeon Scalable processors and saw a 2.5x overall performance improvement[1] in its production data center environment using Intel-optimized TensorFlow and the Intel® Math Kernel Library, compared to baseline TensorFlow. And because Taboola uses Intel® architecture throughout, it can run web-serving applications alongside inference applications in the same data centers, reducing costs and streamlining operations.

As AI becomes a standard element of the applications businesses run day to day, the hardware that powers it must become a standard element of data center infrastructure. That’s what we’re delivering with Intel® Xeon® Scalable processors: a family of processors optimized to run high-performance AI applications alongside the data center workloads they already handle. Working with our partners, we’ve optimized TensorFlow and other popular deep learning frameworks to take full advantage of the performance our hardware offers.
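As a concrete illustration of what an Intel-optimized framework exposes, the MKL-backed TensorFlow builds respond to a handful of OpenMP threading environment variables. The sketch below sets values along the lines of Intel's published tuning guidance; the core count (28) is an assumption you would adjust for your own hardware, and the commented `ConfigProto` settings reflect the TensorFlow 1.x API of the era:

```python
import os

# Threading knobs for MKL-backed (Intel-optimized) TensorFlow builds.
# The value 28 is a placeholder for physical cores per socket -- tune for your machine.
mkl_env = {
    "OMP_NUM_THREADS": "28",                         # one OpenMP thread per physical core
    "KMP_AFFINITY": "granularity=fine,compact,1,0",  # pin threads to cores
    "KMP_BLOCKTIME": "1",                            # ms a thread spins before sleeping
}
os.environ.update(mkl_env)

# With these set, a TensorFlow 1.x session would typically also cap its own
# thread pools to match, e.g.:
#   config = tf.ConfigProto(intra_op_parallelism_threads=28,
#                           inter_op_parallelism_threads=2)
#   sess = tf.Session(config=config)
```

These variables must be set before the framework initializes its thread pools, which is why deployments often export them in the launch script rather than in application code.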

The next Artificial Intelligence Conference is just a few weeks away: October 8 to 11 in London. If you’re able to attend, it’s a great way to learn how to put AI to work for your business.