Sunday, June 17, 6pm-9pm
Elevate, 149 200 S. Salt Lake City, UT
Experience these demos at the Intel® AI DevJam:
In addition, you can talk with our Intel® Student Ambassadors and Intel developers, and network with your peers.
Monday, June 18th, 2018
The 4th IEEE International Low-Power Image Recognition Challenge (LPIRC) will be held in Salt Lake City, Utah, co-located with CVPR. This year, teams are encouraged to compete in three different tracks:
Prizes in each track: $2,000 for first prize, $1,000 for second prize, $500 for third prize.
Submissions for Tracks 1 and 2 have closed. Track 3 is on-site, and participants need to bring their own systems.
Find Intel AI at booth #1337 on Tuesday, June 19th-Thursday, June 21st from 10:00am–6:30pm
Tiramisu DenseNet Architecture for Precise Segmentation
We use a subset of the CVPR 2018 WAD Video Segmentation Challenge dataset [1] to pre-train a Tiramisu DenseNet. The architecture is based on the model described in [2]. The DenseNet uses concatenated skip connections at each scale of the image, which eases training and helps the network retain information from its early stages. Furthermore, with dense connectivity within each block, we find that these networks are able to focus on fine details relevant to features at that scale. Thanks to the skip connections, these fine details propagate to the final prediction stage as well. The model is trained with Intel® Optimization for TensorFlow* and Intel® Distribution for Python*.
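The channel growth that dense connectivity produces can be illustrated with a toy sketch. This is a hypothetical NumPy stand-in for the real BN-ReLU-Conv layers, not the demo's actual code: each layer's output is concatenated onto its input, so every later layer sees all earlier feature maps.

```python
import numpy as np

def conv_layer(x, growth_rate, rng):
    # stand-in for a BN-ReLU-Conv unit: a random 1x1 projection
    # producing `growth_rate` new feature channels
    h, w, c = x.shape
    w_proj = rng.standard_normal((c, growth_rate))
    return np.maximum(x @ w_proj, 0.0)  # ReLU

def dense_block(x, n_layers, growth_rate, rng):
    # dense connectivity: each layer sees the concatenation
    # of the block input and all previous layers' outputs
    for _ in range(n_layers):
        new_features = conv_layer(x, growth_rate, rng)
        x = np.concatenate([x, new_features], axis=-1)  # channel-wise skip concatenation
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal((48, 48, 16))       # toy 48x48 feature map with 16 channels
out = dense_block(x, n_layers=4, growth_rate=12, rng=rng)
# channels grow as in_channels + n_layers * growth_rate = 16 + 4*12 = 64
```

Because every layer's output stays concatenated into the running feature map, fine details computed early in the block remain directly available to the final prediction layers.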
Clean Water AI
This demo uses deep learning networks, specifically image classification and object recognition via convolutional neural networks, to track water contamination. Current water contamination detection relies on chemical sensors, which are very effective at detecting chemical contamination but not bacteria. Clean Water AI is built on a Caffe* network and runs AI on the edge through optical detection paired with high-speed cameras, allowing the system to classify bacteria and other contaminants in near real time.
Computer vision-based emotion detection is a new field, made possible by increased computational power and advances in algorithm design. Classifying facial expressions as one of several emotions (e.g., happy, surprise) on IoT devices is made possible by the Intel® Movidius™ Neural Compute Stick, which has a very low power envelope (1 watt), allowing real-time classification at the edge. The model was trained as a deep convolutional network with TensorFlow on the FER 2013 dataset.
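The final classification step can be sketched as follows. The logits below are made up for illustration and the network itself is omitted; the seven emotion labels are FER 2013's actual classes.

```python
import numpy as np

# the seven emotion classes in the FER 2013 dataset
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def softmax(z):
    # numerically stable softmax over the class logits
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# hypothetical logits from the network's final layer for one 48x48 face crop
logits = np.array([0.2, -1.0, 0.1, 3.5, 0.0, 1.2, 0.4])
probs = softmax(logits)
predicted = EMOTIONS[int(np.argmax(probs))]  # -> "happy"
```

At 1 watt, this argmax-over-softmax step is trivial; the heavy lifting is the convolutional feature extraction, which the Neural Compute Stick offloads from the host.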
RLCoach + SenseNet
The majority of artificial intelligence research, as it relates to biological senses, has focused on vision. The recent explosion of machine learning, and in particular deep learning, can be partially attributed to the release of high-quality datasets from which algorithms can model the world; most of these datasets consist of images. We believe that focusing on sensorimotor systems and tactile feedback will create algorithms that better mimic human intelligence. Here we present SenseNet: a collection of tactile simulation environments for 3D object manipulation. SenseNet was created for researching and training artificial intelligences (AIs) to interact with the environment via sensorimotor neural systems and tactile feedback. We aim to spark the same explosion that occurred in image processing, but in the domain of tactile feedback and sensorimotor research. We hope that SenseNet offers researchers in both the machine learning and computational neuroscience communities new opportunities and avenues to explore.
As the field of computer vision becomes commoditized and widely available to secondary markets, how do you start to explore the technology and techniques involved, and, more importantly, convince your boss to let you build proofs of concept to keep your industry competitive? We will explore a range of these technologies that can easily be jumpstarted with well-documented use cases, on a variety of hardware that is both widely available and within many discretionary spending budgets. The models you create and extend scale to the target platform, whether low-power edge devices or more robust edge gateways. Frameworks used include TensorFlow and MXNet*. The hardware platforms include AWS DeepLens, the UP² board, Raspberry Pi Zero, and Intel® Movidius™ technology.
Early work on medical image compression dates to the 1980s, driven by the deployment of teleradiology systems for high-resolution digital X-ray detectors. Commercially deployed systems of the period could compress 4,096 x 4,096 images at 12 bpp down to 2 bpp using lossless arithmetic coding; over the years, JPEG and JPEG2000 were adopted, reaching up to 0.1 bpp. Inspired by the rise of deep learning-based compression for natural images over the last two years, we propose a fully convolutional autoencoder for lossy compression that preserves diagnostically relevant features. We then leverage arithmetic coding to exploit the high redundancy of the features for denser code packing, yielding variable bit lengths. We demonstrate performance on two publicly available digital mammography datasets using peak signal-to-noise ratio (pSNR), the structural similarity (SSIM) index, and domain-adaptability tests between datasets. At high-density compression factors of >300x (~0.04 bpp), our approach rivals JPEG and JPEG2000 as evaluated through a radiologist's visual Turing test.
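The pSNR metric used in the evaluation can be sketched in a few lines of NumPy. This is a minimal illustration with a synthetic image pair, not the paper's evaluation pipeline.

```python
import numpy as np

def psnr(original, reconstruction, max_val=255.0):
    # peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)
    diff = original.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# toy example: a reconstruction uniformly off by 16 gray levels (MSE = 256)
orig = np.zeros((64, 64), dtype=np.uint8)
recon = np.full((64, 64), 16, dtype=np.uint8)
score = psnr(orig, recon)  # ~24.05 dB

# the ">300x" factor quoted above follows directly from bit rates:
compression_factor = 12.0 / 0.04  # 12 bpp raw vs ~0.04 bpp compressed = 300x
```

Higher pSNR means a reconstruction closer to the original; lossy codecs are typically compared by pSNR (and SSIM) at a fixed bit rate.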
Tuesday, June 19th:
Wednesday, June 20th:
Thursday, June 21st:
June 20th, 2018, 12:30pm-2:30pm
The Doctoral Consortium provides a unique opportunity for students, who are close to finishing or who have recently finished their doctorate degree, to interact with experienced researchers in computer vision. A senior member of the community will be assigned as a mentor for each student based on the student’s preference or similarity of research interests. All students and mentors will attend a Doctoral Consortium meeting/luncheon, giving the students an opportunity to discuss their ongoing research and career plans with their mentor. In addition, each student will present a poster, either describing their thesis research or a single recent paper, to the other participants and their mentors.
June 22nd, 2018
This all-day workshop for both men and women is open to researchers of all levels.
The goals of this workshop are to:
[1] CVPR 2018 WAD. CVPR 2018 WAD Video Segmentation Challenge, 2018 (accessed April 28, 2018).
[2] Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero, and Yoshua Bengio. The one hundred layers tiramisu: Fully convolutional DenseNets for semantic segmentation. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2017 IEEE Conference on, pages 1175-1183. IEEE, 2017.
Notices and Disclaimers
Intel, the Intel logo, and Movidius are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
© Intel Corporation.