|Use Case|Description|Topologies (I = inference, T = training)|
|---|---|---|
|Image Recognition|Classify image(s) with high accuracy|ResNet50 (I&T)|
|Object Detection & Localization|Locate and classify object(s) in image|SSD-VGG16 (I&T), SSD (I), SSD-VGG16 (I&T)|
|Speech Recognition|Convert speech to text|Deep Speech 2 (I)|
|Language Translation|Translation from one language to another|GNMT (I), NMT (I)|
|Recommender Systems|Predicts the rating or preference a user would give an item|Wide & Deep (I)|
|Generative Adversarial Networks|Neural networks that generate data mimicking some distribution|DCGAN (I)|
|Reinforcement Learning|The use of actions and results to learn how to behave in an environment|A3C (I)|
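A3C itself is a deep actor-critic method, but the core loop the table describes — take an action, observe the result, and update behavior — can be illustrated with plain tabular Q-learning. The chain environment, reward scheme, and hyperparameters below are invented for illustration and are not part of any listed topology:

```python
import random

def train_q_learning(n_states=5, episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a toy chain: the agent moves left or right
    and receives a reward of 1 for reaching the rightmost state."""
    random.seed(0)
    # Q[state][action]; actions: 0 = left, 1 = right
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        state = 0
        while state != n_states - 1:
            # epsilon-greedy: mostly exploit the current estimate, sometimes explore
            if random.random() < epsilon:
                action = random.randrange(2)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            next_state = max(0, state - 1) if action == 0 else state + 1
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # update: nudge Q toward reward + discounted best next-state value
            q[state][action] += alpha * (
                reward + gamma * max(q[next_state]) - q[state][action]
            )
            state = next_state
    return q

q = train_q_learning()
# derive the greedy policy the agent has learned for each non-terminal state
policy = ["right" if q[s][1] >= q[s][0] else "left" for s in range(4)]
print(policy)
```

After training, the learned values for moving toward the reward dominate, so the greedy policy heads right; deep methods such as A3C replace the table with a neural network but keep this same action–result feedback loop.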
This Python*-based deep learning framework is designed for ease of use and extensibility on modern deep neural networks and has been optimized for use on Intel® Xeon® processors.
The open-source deep learning framework MXNet* includes built-in support for the Intel® Math Kernel Library (Intel® MKL) and optimizations for Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions.
The Intel® Optimization for Caffe* provides improved performance for one of the most popular deep learning frameworks when running on Intel® Xeon® processors.
Theano*, a numerical computation library for Python, has been optimized for Intel® architecture and enables Intel® Math Kernel Library (Intel® MKL) functions.
Chainer* is a Python*-based deep learning framework for deep neural networks. Intel’s optimization for Chainer is integrated with the latest release of Intel® MKL-DNN.
BigDL is a distributed deep learning library for Apache Spark*. With BigDL, users can write their deep learning applications as standard programs, which can run directly on top of existing Apache Spark or Hadoop clusters.
Intel continues to accelerate and streamline PyTorch on Intel architecture, most notably Intel® Xeon® Scalable processors, by using the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) directly and by ensuring PyTorch is ready for the next generation of performance improvements in both software and hardware through the nGraph Compiler.