The Artificial Intelligence Conference in Beijing: Solving Real-World Problems with Frameworks Optimized for Intel Architecture

Intel is a co-presenter of The Artificial Intelligence Conference in Beijing, China, on April 10–13, 2018. The event explores and shares the latest innovations in applied AI. Intel’s keynotes and sessions will share practical use cases and applications, and provide the technical knowledge needed to develop and implement successful AI applications across a variety of industries today. A key area of focus for Intel will be the untapped opportunities of applied AI through Intel-optimized frameworks. Developers and technology leaders will get the depth and breadth of technical content they desire, backed by the latest innovations and research.

Here’s an overview of Intel’s presence at The Artificial Intelligence Conference in Beijing:

Opening Keynote Sessions with Program Chairs

Jason (Jinquan) Dai, Senior Principal Engineer and CTO of Big Data Technologies at Intel, is the program co-chair of The AI Conference in Beijing and will take part in opening keynote sessions on both Thursday and Friday (April 12th and 13th) with Ben Lorica, O’Reilly Media, and Roger Chen, Computable Labs.

Jason leads a global engineering team at Intel—in Silicon Valley and Shanghai—working on advanced big data analytics, including distributed machine learning and deep learning. He also leads collaborations with leading research labs such as UC Berkeley AMPLab and RISELab. Jason is an internationally recognized expert on big data, cloud, and distributed machine learning; a founding committer and PMC member of Apache Spark*; and the creator of BigDL, a distributed deep learning framework for Apache Spark*.

Speakers: Ben Lorica (O’Reilly Media), Jason Dai (Intel), Roger Chen (Computable Labs)
Day: Friday, April 13, 2018
Time: 08:45–08:55
Location: Grand Hall A

Keynote: Modernizing the Healthcare Industry with AI

Artificial Intelligence (AI) will transform every industry, and one of the most important areas is healthcare. AI can give physicians new insights and speed time to diagnosis by leveraging vast amounts of healthcare data, while also reducing time and money spent. AI will also power precision medicine with the accelerated development of targeted therapies, advances in biomarker discovery and interpretation, and even improved productivity through intelligent algorithms for computer-aided detection of illness. Precision medicine has the potential to save lives.

The practice of applying deep learning to precision medicine requires enormous computing power and the development of novel algorithms and frameworks. Arjun Bansal, Vice President and General Manager of Intel’s Artificial Intelligence Products Group and the General Manager of Intel AI Lab, will share how emerging algorithms and models are used to analyze healthcare data such as electronic health records, medical images, and pharmaceutical and genomics datasets.

Speaker: Arjun Bansal
Day: Thursday, April 12, 2018
Location: Function Room 2
Track: Implementing AI, Models and Methods

Keynote: Deep Learning-powered NLP

Deep learning provides new opportunities and promises for natural language processing. It enables data scientists to solve text, language, and conversation-based use cases with new deep learning approaches. It also inspires new ways of building foundations that are applicable to a range of NLP applications. Dr. Yinyin Liu, Head of Data Science for the Artificial Intelligence Products Group at Intel, will discuss how AI technologies are driving the development of NLP and extending its benefits to a range of industries.

Speaker: Yinyin Liu (Intel)
Day: Friday, April 13, 2018
Time: 08:55–09:05
Location: Grand Hall A

Tutorial: Running Distributed Keras* Based on Apache Spark* and BigDL

Advances in deep learning technology continue to drive the evolution of data analysis and machine learning while enabling new applications of artificial intelligence (AI). Keras*, one of the most popular high-level neural network APIs, lets companies prototype easily and quickly, and it supports multiple backends, including TensorFlow* and Theano*.

Zhichao Li, a Senior Software Engineer at Intel who focuses on distributed machine learning, will demonstrate how to apply BigDL (a distributed deep learning framework for Apache Spark*) to deep learning tasks on Apache Spark-driven big data. Keras can be seamlessly integrated with BigDL, allowing users to run Keras models on existing distributed Hadoop* clusters based on Intel® Xeon® processors.

Speaker: Zhichao Li (Intel)
Day: Wednesday, April 11, 2018
Time: 09:00–17:00
Location: Function Room 2
Track: Implementing AI, Models and Methods
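BigDL trains models with synchronous, data-parallel mini-batch SGD across Spark workers. As a rough illustration of that core idea (a pure-numpy toy, not BigDL’s actual API), the sketch below simulates four workers that each compute a gradient on their own data shard; the gradients are then averaged and a single update is applied:

```python
import numpy as np

# Toy sketch of the synchronous data-parallel SGD that BigDL runs across
# Spark workers: each simulated "worker" computes a gradient on its data
# shard, the gradients are averaged, and one update is applied per step.
# Pure numpy on a linear least-squares model -- illustrative only.

rng = np.random.default_rng(0)
w_true = np.array([2.0, -3.0])
X = rng.standard_normal((400, 2))
y = X @ w_true

def shard_gradient(w, X_shard, y_shard):
    """Mean-squared-error gradient computed on one worker's shard."""
    err = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ err / len(y_shard)

w = np.zeros(2)
shards = np.array_split(np.arange(len(y)), 4)   # 4 simulated workers
for step in range(200):
    grads = [shard_gradient(w, X[idx], y[idx]) for idx in shards]
    w -= 0.1 * np.mean(grads, axis=0)           # average, then update

print(w)  # approaches [2.0, -3.0]
```

On a real cluster the averaging step is the synchronization point; BigDL implements it efficiently on top of Spark’s execution engine rather than with a driver-side loop like this one.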

Session: Data Science and Natural Language Processing in the Age of Deep Learning

Natural language processing (NLP) gives the computer the ability to understand human language. It uses advanced learning algorithms to develop applications such as document understanding, enabling companies to screen large amounts of text, categorize that text, and find relevant information.

Dr. Yinyin Liu, Head of Data Science for the Artificial Intelligence Products Group at Intel, will discuss how the latest development of deep learning affects the processing of text, language, and dialogue-based applications, and inspires new directions for using data. She will also share several NLP business use cases as illustration.

Speaker: Yinyin Liu (Intel)
Day: Thursday, April 12, 2018
Time: 11:15–11:55
Location: Function Room 6A+B
Track: Models and Methods

Session: Low-precision Calculations for Deep Learning Inference and Training

Low-precision training and inference can improve deep learning’s computational performance without sacrificing accuracy. Today, commercial deep learning applications mostly use 32-bit single-precision floating point (fp32) for both training and inference. Studies have shown that lower-precision representations (16-bit for training, 8-bit or lower for inference) can maintain the same accuracy, although training generally requires relatively higher precision to represent gradients during backpropagation. Low precision is therefore likely to become standard industry practice in the coming years, especially for convolutional network applications. With low precision, model storage requirements shrink dramatically, cache efficiency improves, and data moves more easily between memory, cache, and registers, avoiding memory-access bottlenecks. Hardware can also deliver more operations per second at lower precision.
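As a rough illustration of why 8-bit inference works, the numpy sketch below (illustrative only; production frameworks use calibrated, often per-channel schemes) quantizes fp32 tensors to int8 with a single per-tensor scale, multiplies them with int32 accumulation, and compares the rescaled result against the fp32 baseline:

```python
import numpy as np

# Toy symmetric int8 quantization sketch: map fp32 values onto 255 integer
# levels, multiply in int8 with int32 accumulation (the structure int8
# hardware pipelines use), then rescale back to fp32.

def quantize_int8(x):
    """Symmetric linear quantization of an fp32 array to int8."""
    scale = np.abs(x).max() / 127.0                      # one scale per tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)     # "weights"
a = rng.standard_normal((64, 64)).astype(np.float32)     # "activations"

qw, sw = quantize_int8(w)
qa, sa = quantize_int8(a)

# int8 x int8 matmul accumulated in int32, then rescaled to fp32.
y_int = qw.astype(np.int32) @ qa.astype(np.int32)
y_approx = y_int.astype(np.float32) * (sw * sa)
y_exact = w @ a

rel_err = np.abs(y_approx - y_exact).mean() / np.abs(y_exact).mean()
print(f"mean relative error: {rel_err:.4f}")  # small for this toy case
```

The storage and bandwidth win is the 4x reduction from fp32 to int8; the accuracy cost is the small relative error printed at the end, which calibration techniques further reduce in practice.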

Brian Liu, a System Architect in the Intel AI Division Technical Solutions team, will review the history of low-precision representations for deep learning training and inference, and demonstrate how Intel used low-precision representations to perform deep learning calculations on Intel® Xeon® Scalable processors.

Speaker: Brian Liu (Intel)
Day: Friday, April 13, 2018
Time: 11:15–11:55
Location: Auditorium
Track: Implementing Artificial Intelligence (AI)

Session: The Practice of Hyperscale Image Processing Based on BigDL

Deep learning technology is widely used in industry to solve computer vision problems. As data volumes continue to grow, accelerating and scaling out data processing has become a key issue, and due to software and hardware infrastructure limitations, GPU-based solutions face many challenges. To address this, BigDL provides rich end-to-end support for large-scale image processing, including OpenCV-based image preprocessing libraries and a variety of vision models (SSD, Faster R-CNN, Inception, ResNet, etc.). BigDL also supports loading models from third-party frameworks such as Caffe*, TensorFlow*, Keras*, and Torch*. These capabilities make it easy to build a variety of image applications, such as object detection, image classification, and feature extraction.
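As a simple illustration of the preprocessing step in such pipelines (a minimal numpy sketch with illustrative function names, not BigDL’s actual API), images are typically mean-centered per channel and converted from HWC to CHW layout before being batched for inference:

```python
import numpy as np

# Minimal sketch of the kind of image preprocessing a large-scale vision
# pipeline performs before inference: per-channel mean subtraction and
# HWC -> CHW layout conversion, then batching. Illustrative names only.

IMAGENET_MEAN = np.array([123.68, 116.78, 103.94], dtype=np.float32)  # RGB

def preprocess(image_hwc):
    """uint8 array of shape (H, W, 3) -> float32 array of shape (3, H, W)."""
    x = image_hwc.astype(np.float32) - IMAGENET_MEAN   # center each channel
    return np.transpose(x, (2, 0, 1))                  # HWC -> CHW

def preprocess_batch(images):
    """Stack preprocessed images into an (N, 3, H, W) batch tensor."""
    return np.stack([preprocess(img) for img in images])

rng = np.random.default_rng(0)
batch = preprocess_batch(
    [rng.integers(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(4)]
)
print(batch.shape)  # (4, 3, 224, 224)
```

In a distributed setting, a transform like this runs independently on each partition of images, which is what makes the preprocessing stage scale out naturally on Spark.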

Chu Xin, from Intel’s Big Data Technology team, will introduce how to use BigDL to build a flexible, highly scalable, end-to-end deep learning application on Apache Spark. He will also share experiences from building large-scale image feature extraction pipelines, where BigDL’s scalability, performance, and ease of use make it practical to analyze large numbers of images with deep learning techniques.

Speaker: Chu Xin (Intel)
Day: Friday, April 13, 2018
Time: 13:10–13:50
Location: Function Hall 5A+B
Track: Implementing Artificial Intelligence (AI)

Session: Optimizing Deep Learning Frameworks for Modern Intel CPUs

Deep learning (DL) is an approach based on learning data representations that is applied to solve problems in a variety of areas such as image classification, speech recognition, and object detection. Deep learning frameworks like TensorFlow*, Caffe*, MXNet, and PyTorch allow data scientists to implement neural network models to solve various problems. Intel is an active contributor to the open source community to ensure these frameworks are optimized for ideal performance on Intel® Xeon® processor-based platforms.

Huma Abidi, Engineering Director of the Artificial Intelligence Products Group at Intel, will lead a session to detail these collaborative optimization efforts and explain how deep learning framework users can leverage these optimizations. Huma will also provide specific tuning tips to get the best performance on Intel Xeon processors.
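Intel’s published tuning guidance for MKL-backed frameworks centers on a handful of threading knobs. The snippet below is a sketch with illustrative values (the right settings depend on your core count and model); the environment variables must be set before the framework is imported:

```python
import os

# Commonly documented tuning knobs for MKL-backed deep learning frameworks
# on Intel Xeon processors. Values here are illustrative, not universal.

os.environ["OMP_NUM_THREADS"] = "16"  # assume 16 physical cores per socket
os.environ["KMP_BLOCKTIME"] = "0"     # release threads quickly after parallel regions
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # pin threads to cores

# With TensorFlow 1.x, intra-/inter-op parallelism is then set in the
# session config, e.g.:
#   tf.ConfigProto(intra_op_parallelism_threads=16,
#                  inter_op_parallelism_threads=2)
```

The general pattern is to match compute threads to physical cores, pin them to avoid migration, and keep inter-op parallelism low so operators do not oversubscribe the machine.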

Speaker: Huma Abidi (Intel)
Day: Friday, April 13, 2018
Time: 14:50–15:30
Location: Zijin Hall B (Grand Hall B)
Track: Implementing Artificial Intelligence (AI)

Session: Understanding Visual Data

Today, visual perception is everywhere; its cost is decreasing while visual data grows rapidly, and analyzing and understanding massive amounts of visual data has become a major challenge. The Intel China Research Institute is conducting innovative research on intelligent visual data processing technology on Intel platforms.

Dr. Yurong Chen, Chief Researcher and Director of the Cognitive Computing Lab at the Intel China Research Institute, will discuss how Intel advances visual data analysis through deep learning, and will share forward-looking research in areas such as face analysis, emotion recognition, efficient CNN design for object detection, DNN model compression, and dense video captioning.

Speaker: Yurong Chen (Intel)
Day: Friday, April 13, 2018
Time: 14:50–15:30
Location: Function Room 6A+B
Track: Implementing Artificial Intelligence (AI)

At Intel booth #100, expect to be greeted by the booth robot and experience the following demonstrations and real-world application examples to help advance your AI solutions:

AI Object Detection on Intel® Xeon® Processors

This demo will showcase an AI object detection solution using the Caffe* machine learning framework optimized for Intel® Xeon® Scalable processors, highlighting CPU-based enhancements such as Intel® Advanced Vector Extensions (Intel® AVX), the Intel® Math Kernel Library, and Python* optimizations.

Convolutional Neural Network System Based on Intel® Programmable Acceleration Card

The Intel® PCI Express*-based data center FPGA accelerator provides inline and lookaside acceleration, the power and versatility of FPGA acceleration, and support for Intel® Xeon® processor and FPGA acceleration stacks. In addition, Intel offers a design package for deep learning acceleration that offloads a variety of neural network inference topologies onto the Intel® Programmable Acceleration Card. This demo will show how to offload an ImageNet* image classification task onto an FPGA.

Application of BigDL Distributed Deep Learning Framework Based on Apache Spark*

Distributed deep learning applications built on BigDL are increasingly deployed across a wide range of enterprises, for uses including image analysis, natural language sentiment analysis, and recommendation systems. This demo will showcase these use cases.

Object Recognition Technology Application Demonstration Based on the Intel® Movidius™ Neural Compute Stick

In this application, the Intel® Movidius™ Neural Compute Stick is used for object recognition and image classification in road monitoring and real-time video. The advantage of using the Intel® Movidius™ Neural Compute Stick is that it supports prototyping and tuning for rapid CNN deployment while significantly speeding up edge inference development. See how its low-power VPU architecture provides a fully embedded inference engine that requires no cloud connectivity.

Intel® Artificial Intelligence Academy

Whether you’re a novice or an expert, the Intel® AI Academy provides the learning materials, communities, tools, and technologies needed to deepen your understanding of artificial intelligence.

3D Face Technology (3DFT)

Intel’s 3DFT-based real-time 3D facial effects demo was successfully showcased at the 2018 Sundance Film Festival in Park City, UT, USA, and now we’re bringing it to Beijing. Get ready to experience how Intel brings this technology to life.

If you miss The Artificial Intelligence Conference Beijing, or just want to see some of the sessions again, Intel sessions on April 12–13 will be recorded and posted to