Intel AI Research at ICML

June 9 – 15 in Long Beach, CA

Intel is a sponsor of the 36th International Conference on Machine Learning (ICML). At ICML, you’ll discover cutting-edge research on all aspects of machine learning used in AI, statistics, and data science, as well as applications such as machine vision, computational biology, speech recognition, and robotics.

Accepted Paper Presentations

Tuesday, June 11

Collaborative Evolutionary Reinforcement Learning (CERL)

Time: 2:35 – 2:40 PM
Location: Deep RL Session – Hall B
Authors: Shauharda Khadka – Intel
Somdeb Majumdar – Intel
Zach Dwiel – Terran Robotics
Evren Tumer – Intel
Santiago Miret – Intel
Yinyin Liu – Intel
Kagan Tumer – Oregon State University
Tarek Nassar
Abstract: (Oral presentation) CERL is a sample-efficient reinforcement learning framework that combines gradient-based and gradient-free learning. CERL outperforms either approach on its own, achieving better sample efficiency and lower sensitivity to hyperparameters.
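
Below is a minimal Python sketch of the hybrid idea only, not the authors' implementation: an evolutionary population and a gradient-based learner share experience through a common replay buffer, and the learner is periodically injected back into the population. The callables make_policy, evaluate, gradient_update, and mutate are hypothetical placeholders.

```python
import copy
import random

def cerl_sketch(make_policy, evaluate, gradient_update, mutate,
                pop_size=10, generations=100):
    """Illustrative hybrid of evolutionary and gradient-based RL.

    make_policy()                    -> fresh policy object
    evaluate(policy, buffer)         -> rolls the policy out, appends its
                                        transitions to buffer, returns reward
    gradient_update(policy, buffer)  -> one off-policy gradient step
    mutate(policy)                   -> randomly perturbed policy
    """
    population = [make_policy() for _ in range(pop_size)]
    learner = make_policy()   # gradient-based learner
    replay_buffer = []        # experience shared by all actors

    for _ in range(generations):
        # 1. Every rollout, evolutionary or not, feeds the shared buffer.
        fitness = [evaluate(p, replay_buffer) for p in population]
        evaluate(learner, replay_buffer)

        # 2. The learner trains off-policy on the pooled experience.
        gradient_update(learner, replay_buffer)

        # 3. Evolution: keep the fitter half, refill with mutated elites.
        ranked = [p for _, p in sorted(zip(fitness, population),
                                       key=lambda pair: pair[0], reverse=True)]
        elites = ranked[:pop_size // 2]
        population = elites + [mutate(copy.deepcopy(random.choice(elites)))
                               for _ in range(pop_size - len(elites))]

        # 4. Inject the gradient learner back into the population so
        #    evolution can exploit its progress.
        population[-1] = copy.deepcopy(learner)

    return learner
```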

Tuesday, June 11

Non-Parametric Priors for Generative Adversarial Networks

Time: 3:05 – 3:10 PM
Location: GAN Session – Hall A
Authors: Martin Braun – Intel
Ravi Garg – Intel
Rajhans Singh
Pavan Turaga
Suren Jayasuriya – Arizona State University
Abstract: (Oral presentation) This paper proposes a novel prior for GANs, derived using basic theorems from probability theory and off-the-shelf optimizers. The prior improves the fidelity of generated images and supports interpolation along any Euclidean straight line in latent space, with no additional training or architecture modifications.
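
For intuition, straight-line latent interpolation looks like the sketch below. G and sample_prior are placeholders for a trained generator and the prior's sampler; the paper's actual prior construction is not reproduced here.

```python
import numpy as np

def interpolate_latents(z_start, z_end, steps=8):
    """Latent codes along the Euclidean straight line from z_start to z_end."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1 - a) * z_start + a * z_end for a in alphas]

# Hypothetical usage with a trained generator G and prior sampler:
#   z0, z1 = sample_prior(), sample_prior()
#   images = [G(z) for z in interpolate_latents(z0, z1)]
```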

Wednesday, June 12

Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization

Time: 12:10 – 12:15 PM
Location: Applications Session – Room 201
Authors: Hesham Mostafa – Intel
Xin Wang – Intel
Abstract: (Oral presentation) We describe a heuristic for modifying the structure of sparse deep convolutional networks during training, which allows us to train sparse networks directly to accuracies on par with those obtained by compressing/pruning large, dense models.
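
A simplified sketch of the prune-and-regrow idea behind dynamic sparsity (not the paper's exact heuristic): each reallocation step removes the smallest-magnitude weights and regrows the same number of connections at randomly chosen zero positions, keeping the parameter budget fixed.

```python
import numpy as np

def reallocate(weights, prune_fraction=0.1, rng=None):
    """One dynamic-sparsity step: prune small weights, regrow elsewhere."""
    rng = rng or np.random.default_rng()
    w = weights.copy()
    nonzero = np.flatnonzero(w)
    n_prune = int(len(nonzero) * prune_fraction)

    # Prune the nonzero weights with the smallest magnitudes.
    order = nonzero[np.argsort(np.abs(w.flat[nonzero]))]
    w.flat[order[:n_prune]] = 0.0

    # Regrow the same number of connections at random zero positions,
    # so the number of nonzero parameters stays constant. (The paper
    # reallocates across layers; a single tensor is shown here.)
    zeros = np.flatnonzero(w == 0)
    grown = rng.choice(zeros, size=n_prune, replace=False)
    w.flat[grown] = 1e-3 * rng.standard_normal(n_prune)  # small re-init
    return w
```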

Friday, June 14

Learning a Hierarchy of Neural Connections for Modeling Uncertainty

Time: 8:30 AM – 6:00 PM
Location: Uncertainty & Robustness in Deep Learning Workshop – Hall B
Authors: Raanan Yehezkel – Intel
Yaniv Gurwicz – Intel
Shami Nisimov – Intel
Gal Novik – Intel
Abstract: Quantifying uncertainty in deep neural networks is an open problem. In this paper we propose a new deep architecture and demonstrate that it enables estimating various types of uncertainty.
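
The paper's architecture is not reproduced here, but a common generic baseline for the same goal is Monte Carlo dropout: keep dropout active at inference and read a predictive mean and variance off repeated stochastic forward passes. A minimal sketch, where predict_stochastic stands in for any such model:

```python
import numpy as np

def mc_dropout_uncertainty(predict_stochastic, x, n_samples=50):
    """Predictive mean and variance from repeated stochastic forward passes.

    predict_stochastic(x) must run the network with dropout still active,
    so it returns a different prediction on each call.
    """
    preds = np.stack([predict_stochastic(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.var(axis=0)
```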

Saturday, June 15

Goal-conditioned Imitation Learning

Time: 11:00 AM – 12:00 PM
Location: Adaptive & Multitask Workshop
Authors: Yiming Ding – UC Berkeley
Carlos Florensa – UC Berkeley
Mariano Phielipp – Intel AI Lab
Pieter Abbeel – UC Berkeley and Covariant
Abstract: Solving challenging robotics environments in reinforcement learning using few demonstrations and self-supervision.
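
One generic ingredient in this line of work is hindsight goal relabeling, which turns a handful of demonstrations into many goal-conditioned training examples. The sketch below shows the relabeling trick in isolation, not the paper's full method.

```python
import random

def relabel_with_achieved_goals(trajectory):
    """Hindsight relabeling of one demonstration.

    trajectory: list of (state, action) pairs. States actually reached
    later in the trajectory are reused as goals, yielding many
    (state, goal, action) examples from a single demonstration.
    """
    examples = []
    for t, (state, action) in enumerate(trajectory):
        future_state, _ = random.choice(trajectory[t:])
        examples.append((state, future_state, action))
    return examples
```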

Saturday, June 15

Privacy Preserving Adjacency Spectral Embedding on Stochastic Blockmodels

Time: 3:30 – 4:30 PM
Location: Learning and Reasoning with Graph-Structured Representations Workshop
Authors: Li Chen – Intel
Abstract: For graphs generated from stochastic blockmodels, adjacency spectral embedding is asymptotically consistent. The methodology presented in this paper estimates the latent positions by adjacency spectral embedding and achieves comparable accuracy at the desired privacy parameters on simulated and real-world networks.
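
Adjacency spectral embedding itself is standard and easy to sketch: each node is embedded using the top-d eigenpairs of the adjacency matrix. The paper's privacy mechanism is omitted from this sketch.

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Embed nodes as rows of U_d |S_d|^(1/2) from adjacency matrix A."""
    vals, vecs = np.linalg.eigh(A)              # A symmetric (undirected graph)
    top = np.argsort(np.abs(vals))[::-1][:d]    # d largest-magnitude eigenvalues
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))
```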

Saturday, June 15

Sparse Representation Classification via Screening for Graphs

Time: 3:30 – 4:30 PM
Location: Learning and Reasoning with Graph-Structured Representations Workshop
Authors: Cencheng Shen
Li Chen – Intel
Carey Priebe
Yuexiao Dong
Abstract: In this paper we propose a new implementation of the sparse representation classification (SRC) via screening, establish its equivalence to the original SRC under regularity conditions, and prove its classification consistency under a latent subspace model.
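
For context, SRC represents a test point as a sparse combination of training points and assigns the class whose points give the smallest reconstruction residual; screening prefilters the training set before solving. A simplified sketch using correlation screening and per-class least squares (not the paper's exact estimator):

```python
import numpy as np

def src_with_screening(X_train, y_train, x_test, keep=50):
    """Sparse-representation-style classification after screening.

    X_train: (n, p) training samples as rows; y_train: (n,) labels.
    Screening keeps the `keep` training points most correlated with
    x_test, then classifies by smallest per-class residual.
    """
    # Screening: rank training points by |correlation| with the test point.
    Xn = X_train / np.linalg.norm(X_train, axis=1, keepdims=True)
    scores = np.abs(Xn @ (x_test / np.linalg.norm(x_test)))
    idx = np.argsort(scores)[::-1][:keep]
    Xs, ys = X_train[idx], y_train[idx]

    # Per-class residual: reconstruct x_test from each class's survivors.
    best_label, best_residual = None, np.inf
    for label in np.unique(ys):
        Xc = Xs[ys == label]
        coef, *_ = np.linalg.lstsq(Xc.T, x_test, rcond=None)
        residual = np.linalg.norm(x_test - Xc.T @ coef)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```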

Demo

Reaching Intent Estimation via Approximate Bayesian Computation

Time: 2:00 – 6:30 PM
Location: Room 101
Description: This interactive demo shows a system that provides real-time user intent estimation. When the user places an object on the table, the system estimates the intended placement location and represents it as a probability density function. The system is composed of three elements: an object tracker, a model-based physically plausible trajectory generator, and a probability function. The user is captured through an Intel® RealSense™ camera, and the intent is obtained through approximate Bayesian computation in an analysis-by-synthesis approach.
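
Approximate Bayesian computation in general works by simulating from candidate parameters and keeping those whose simulated output lands close to the observation. A generic rejection-ABC sketch, where simulate, sample_prior, and distance are placeholders rather than the demo's actual components:

```python
import numpy as np

def abc_rejection(observed, simulate, sample_prior, distance,
                  n_draws=10000, epsilon=0.1):
    """Keep prior draws whose simulated outcome is within epsilon of the data.

    The accepted draws approximate the posterior over the latent parameter
    (here, for example, the user's intended placement location).
    """
    accepted = []
    for _ in range(n_draws):
        theta = sample_prior()                    # candidate intent
        if distance(simulate(theta), observed) < epsilon:
            accepted.append(theta)
    return np.array(accepted)                     # ~ approximate posterior
```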

Talk

Optimize Deep Learning on Apache Spark with Intel® DL Boost Technology and Intel® Parallel Studio

Time: 3:00 – 4:00 PM
Location: Grand Ballroom
Description: Thanks to Intel DL Boost technology with the new Vector Neural Network Instructions (VNNI), deep learning inference performance in BigDL is dramatically improved on 2nd Gen Intel® Xeon® Scalable processors. We will showcase the VGG-16 FP32/INT8 throughput improvement and show how to use Intel Parallel Studio to profile and optimize deep learning workloads.

Session

NLP Architect by Intel® AI Lab

Time: 5:30 – 6:30 PM
Location: Grand Ballroom
Description: NLP Architect is an open-source Python library for exploring state-of-the-art deep learning topologies and techniques for natural language processing and natural language understanding. In this session, we will discuss NLP Architect's features and demonstrate how easily non-ML/NLP developers can build advanced NLP applications such as unsupervised Aspect-Based Sentiment Analysis (ABSA), Set-Term Expansion, and Topic & Trend extraction.

More Ways to Engage

Follow us @IntelAI and @IntelAIResearch for more updates from @ICMLconf and the Intel AI research team!