The AI Conference starts today! Those attending are promised a wide variety of real-world use cases from companies around the globe that are putting AI to work. As a co-sponsor of the event, Intel AI will host technical sessions and demos showcasing customer deployments and solutions on a variety of Intel AI hardware and software built to break through memory and power bottlenecks, from real-time object detection on drones to the classification of dense medical images. Attendees will learn how Intel AI turns theory into real-world results.
Julie Choi, Intel’s Head of AI Marketing, reviews real-world customer use cases that take AI from theory to reality. From accelerating drug discovery with deep learning to changing the way visual effects are created with machine learning, Intel AI is working side-by-side with a diverse range of organizations to accelerate their AI transformation.
Huma Abidi, Engineering Director for the Intel AI Products Group, will discuss the importance of optimization to deep learning frameworks. As AI evolves, it is essential to have a full-stack solution where software optimizations take advantage of hardware innovations to accelerate AI applications. Partnering with framework developers is a critical component of Intel’s AI strategy to take machine and deep learning models from theory to reality. This talk will include Intel Xeon processor performance results and work Intel is doing with frameworks like TensorFlow.
Deep learning applications employ deep neural networks (DNNs), which are notoriously time, compute, energy, and memory intensive.
Intel’s AI Lab has recently open-sourced Neural Network Distiller, a Python package for neural network compression research. Distiller provides a PyTorch environment for prototyping and analyzing compression algorithms, such as sparsity-inducing methods and low-precision arithmetic. Intel AI is exploring how DNN compression can be another catalyst that brings deep learning innovation to more industries and application domains, making our lives easier, healthier, and more productive.
Neta Zmora, a Deep Learning Research Engineer in the AI Products Group, discusses the motivation for compressing DNNs, outlines compression approaches, and explores Distiller’s design and tools, supported algorithms, and code and documentation. Neta concludes with an example implementation of a compression research paper.
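Distiller's own API is beyond the scope of this preview, but the core idea behind one of the sparsity-inducing methods it supports, magnitude-based weight pruning, can be sketched in a few lines of numpy. This is an illustrative toy, not Distiller code: the smallest-magnitude weights are assumed to contribute least to the output and are zeroed out.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight tensor
    until the requested fraction of the tensor is zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold is the k-th smallest absolute value in the tensor.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.5)
print(np.mean(w_pruned == 0))  # fraction of zeroed weights, ~0.5
```

In practice, frameworks apply this kind of mask iteratively during fine-tuning so the remaining weights can adapt and recover accuracy.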
One of the biggest challenges in AI is how to translate advances in the lab into large-scale applications. This challenge sits at the intersection of AI and systems engineering and requires an integrated understanding of all of the components that make up a large machine learning-based system, including computation, storage, communications, and algorithms. Data scientist Casimir Wierzynski reviews current trends in the field and shares case studies to illustrate why codesigning these components in concert will be critical for building the AI systems of the future.
The OpenVINO™ toolkit is free software that helps computer vision teams speed the development and deployment of neural network applications on devices and gateways across multiple Intel® platforms (CPU, GPU, FPGA, VPU). In this session, Dmitry Rizshkov, a Machine Learning Engineer in the AI Products Group, will introduce OpenVINO through real customer case studies featuring challenging inference applications.
Vikram Saletore, a Principal Engineer and Performance Architect in the AI Products Group, and Luke Wilson, a Data Scientist and Artificial Intelligence Researcher in Dell EMC’s HPC and AI Engineering Group, discuss a collaboration between SURFsara and Intel, as part of the Intel Parallel Computing Center initiative, to advance the state of large-scale neural network training on Intel Xeon CPU-based servers. SURFsara and Intel evaluated a number of data-parallel and model-parallel approaches, as well as synchronous versus asynchronous SGD methods, with popular neural networks such as ResNet50, using large datasets on the TACC (Texas Advanced Computing Center) and Dell HPC supercomputers.
Vikram and Luke share insights on several best-known methods, including CPU core and memory pinning and hyperparameter tuning, that were developed to demonstrate state-of-the-art top-1/top-5 accuracy at scale. They then detail real-world problems that can be solved by utilizing models efficiently trained at large scale and present tests performed at Dell EMC on CheXNet, a Stanford University project that extends a DenseNet model pre-trained on the large-scale ImageNet dataset to detect pathologies in chest X-ray images, including pneumonia. Vikram and Luke highlight improved time-to-solution on extended training of this pre-trained model and the various storage and interconnect options that lead to more efficient scaling.
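To make the synchronous data-parallel idea concrete, here is a toy numpy sketch (not SURFsara's or Intel's actual training code) of the step that defines synchronous SGD: each worker computes a gradient on its own data shard, the gradients are averaged (an allreduce in a real cluster), and every replica applies the same update, keeping all copies of the model in lockstep.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)  # shared model parameters, replicated on every worker
# Each "worker" holds its own shard of the data.
shards = [rng.normal(loc=1.0, size=(32, 4)) for _ in range(4)]

def grad(w, data):
    # Gradient of the mean squared error ||w - x||^2 over one shard.
    return 2.0 * np.mean(w - data, axis=0)

for step in range(100):
    grads = [grad(w, shard) for shard in shards]  # computed in parallel in practice
    avg = np.mean(grads, axis=0)                  # allreduce: average across workers
    w -= 0.1 * avg                                # identical update on every replica

print(w)  # converges to the mean of the combined data (~1.0 per dimension)
```

Asynchronous variants skip the averaging barrier so fast workers never wait on slow ones, at the cost of applying updates computed against stale parameters.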
Recently, a lot of work has been done on low-precision inference, demonstrating that by training networks with quantization in mind, large gains in energy efficiency can be achieved. Meanwhile, embedded runtime packages like TensorFlow Lite and Caffe2Go have emerged that offer portability across a number of platforms. Cormac Brick, Director of Machine Intelligence in the Movidius Group, looks at the apparent trade-off between efficiency and portability and asks, “Why can’t we have both?” Cormac quantifies how big this gap truly is, using state-of-the-art methods for both approaches and specifically trained networks to show performance over a range of popular vision applications. He then covers best-in-class design techniques for developing portable networks that maximize performance on a variety of architectures, and shares the industry challenges and progress needed to close the portability-performance gap.
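As a concrete illustration of the low-precision idea, here is a minimal numpy sketch of symmetric linear quantization to int8, the kind of arithmetic these efficiency gains rely on. This is an illustrative toy, not the API of any particular runtime: a float tensor is mapped onto the int8 range via a single scale factor, and dequantization recovers an approximation of the original values.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric linear quantization of a float tensor to int8.
    Returns the quantized tensor and the scale needed to recover it."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate float values."""
    return q.astype(np.float32) * scale

x = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
print(np.max(np.abs(x - x_hat)))  # worst-case rounding error, at most scale/2
```

Training with quantization in the loop lets the network adapt its weights to this rounding error, which is why quantization-aware training typically recovers most of the accuracy that naive post-training quantization loses.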
See real-world use cases that highlight a variety of Intel AI hardware built for different application needs. We’re also demonstrating technologies from Intel® AI Builders partners, who share a vision to accelerate deployment of AI on Intel® architecture. Here’s a preview of the demos you can expect to see:
Join us for the AI at Night party, where you can network with other attendees while enjoying happy hour food and beverages and listening to a live DJ. The party runs September 6 from 6:45 p.m. – 9:30 p.m. (open to all conference attendees; bring your badge for admission).