Frameworks

Explore resources available for popular AI frameworks optimized for Intel® architecture, including installation guides and other learning material. We are continuously expanding our list of supported frameworks.
Featured CPU-optimized deep learning topologies for several popular frameworks are listed below; check back for regular updates.

| AI Capability | Application | TensorFlow | MXNet | Caffe2 | Caffe |
|---|---|---|---|---|---|
| Image Recognition | Classify image(s) with high accuracy | ResNet50 (I&T), InceptionV3 (I&T) | ResNet50 (I&T), InceptionV3 (I&T), InceptionV4 (I&T), MobileNet (I), SqueezeNets (I), DenseNet (I) | ResNet50,101 (I), InceptionV1 (I), SqueezeNet (I), VGG16,19 (I) | ResNet50 (I&T), InceptionV3 (I&T) |
| Object Detection & Localization | Locate and classify object(s) in image | SSD-VGG16 (I&T) | SSD (I) | | SSD-VGG16 (I&T), Faster-RCNN (I&T), R-FCN (I&T) |
| Speech Recognition | Convert speech to text | Deep Speech 2 (I) | | | |
| Language Translation | Translation from one language to another | GNMT (I) | NMT (I), Transformer (I) | | |
| Recommender Systems | Predicts the rating or preference a user would give an item | Wide & Deep (I) | | | |
| Generative Adversarial Networks | Neural networks that generate data mimicking some distribution | DCGAN (I) | | | |
| Reinforcement Learning | The use of actions and results to learn how to behave in an environment | A3C (I) | | | |

(I) – Inference; (T) – Training; (I&T) – Inference and Training

TensorFlow* is a Python*-based deep learning framework designed for ease of use and extensibility on modern deep neural networks, and it has been optimized for use on Intel® Xeon® processors.
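
As a rough sketch of how the Intel-optimized build is typically used, the following assumes the intel-tensorflow pip package and the TensorFlow 1.x session API; the thread counts and KMP_* settings are example values that should be tuned to the host's physical core count.

```python
# Minimal sketch (TensorFlow 1.x API). Install the Intel-optimized build first:
#   pip install intel-tensorflow
import os
import tensorflow as tf

# OpenMP settings commonly suggested for the MKL-DNN-backed build (example values).
os.environ["OMP_NUM_THREADS"] = "4"
os.environ["KMP_BLOCKTIME"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"

config = tf.ConfigProto(
    intra_op_parallelism_threads=4,  # threads used inside a single op
    inter_op_parallelism_threads=2,  # independent ops run in parallel
)

with tf.Session(config=config) as sess:
    # A single convolution; on CPU this dispatches to the optimized kernels.
    images = tf.random_normal([1, 224, 224, 3])
    kernel = tf.random_normal([3, 3, 3, 64])
    conv = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding="SAME")
    print(sess.run(tf.reduce_mean(conv)))
```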

The open-source deep learning framework MXNet* includes built-in support for the Intel® Math Kernel Library (Intel® MKL) and optimizations for Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions.
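
A minimal sketch of CPU inference with the MKL-DNN-enabled build, assuming the mxnet-mkl pip package and the Gluon model zoo; the optimized kernels are picked up automatically on the CPU context.

```python
# Minimal sketch, assuming the MKL-DNN-enabled pip build:
#   pip install mxnet-mkl
import mxnet as mx
from mxnet.gluon.model_zoo import vision

ctx = mx.cpu()
net = vision.resnet50_v1(pretrained=True, ctx=ctx)  # downloads weights on first use

# Dummy ImageNet-sized batch; optimized conv/pool kernels are used on CPU.
x = mx.nd.random.uniform(shape=(1, 3, 224, 224), ctx=ctx)
out = net(x)
print(out.shape)  # (1, 1000) class scores
```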

Based on Python* and optimized for Intel® architecture, Intel’s innovative neon™ framework for deep learning is designed for ease of use and extensibility on modern deep neural networks.
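
As a brief sketch, assuming a neon 2.x release in which gen_backend() accepts an 'mkl' backend, selecting the Intel MKL-optimized CPU backend looks like this.

```python
# Minimal sketch, assuming neon 2.x where an 'mkl' backend is available.
from neon.backends import gen_backend

# Use the MKL-optimized CPU backend rather than the default 'cpu' backend;
# the batch size here is an arbitrary example value.
be = gen_backend(backend='mkl', batch_size=32)
print(type(be).__name__)
```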

The Intel® Optimization for Caffe* provides improved performance for one of the most popular deep learning frameworks when running on Intel® Xeon® processors.
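
A minimal pycaffe sketch of CPU inference with the optimized build; the deploy.prototxt and weights.caffemodel paths, and the 'data' input blob name, are placeholders that depend on the model definition being used.

```python
# Minimal pycaffe sketch; the model and weight paths are placeholders
# for a topology such as ResNet-50.
import numpy as np
import caffe

caffe.set_mode_cpu()  # Intel Optimization for Caffe targets CPU execution

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# Fill the input blob with a dummy batch and run inference; the blob name
# ('data') must match the one declared in the prototxt.
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
output = net.forward()
print({name: blob.shape for name, blob in output.items()})
```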

Theano*, a numerical computation library for Python, has been optimized for Intel® architecture and enables Intel® Math Kernel Library (Intel® MKL) functions.
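
A small sketch of pointing Theano's BLAS calls at Intel MKL through THEANO_FLAGS, assuming libmkl_rt is installed and on the linker path; the same setting can also be placed in ~/.theanorc.

```python
# Minimal sketch: link Theano's BLAS to Intel MKL before importing it.
import os
os.environ["THEANO_FLAGS"] = "device=cpu,floatX=float32,blas.ldflags=-lmkl_rt"

import numpy as np
import theano
import theano.tensor as T

x = T.matrix("x")
y = T.matrix("y")
dot = theano.function([x, y], T.dot(x, y))  # GEMM dispatched to MKL

a = np.random.rand(256, 256).astype("float32")
print(dot(a, a).shape)
```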

Chainer* is a Python*-based framework for deep neural networks. Intel's optimization for Chainer* is integrated with the latest release of Intel® MKL-DNN.
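
A minimal sketch, assuming Chainer v4 or later with the ideep4py package installed (pip install ideep4py), showing how a link is converted to the iDeep/MKL-DNN representation before running on CPU.

```python
# Minimal sketch, assuming Chainer v4+ with ideep4py installed.
import numpy as np
import chainer
import chainer.links as L

model = L.Linear(100, 10)
model.to_intel64()  # convert parameters to the iDeep/MKL-DNN layout

x = np.random.rand(8, 100).astype(np.float32)
with chainer.using_config("use_ideep", "auto"):
    y = model(x)  # forward pass runs on the MKL-DNN-backed arrays
print(y.shape)
```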

BigDL is a distributed deep learning library for Apache Spark*. With BigDL, users can write their deep learning applications as standard programs, which can run directly on top of existing Apache Spark or Hadoop clusters.
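
A minimal sketch of the BigDL 0.x Python API running on a local SparkContext; in practice the same code is submitted to an existing Spark or Hadoop cluster with spark-submit, and the layer sizes here are arbitrary example values.

```python
# Minimal sketch, assuming the BigDL 0.x Python API and PySpark.
import numpy as np
from pyspark import SparkContext
from bigdl.util.common import init_engine, create_spark_conf
from bigdl.nn.layer import Sequential, Linear, ReLU

sc = SparkContext(conf=create_spark_conf().setMaster("local[4]"))
init_engine()  # initialize BigDL's MKL-backed execution engine

# A tiny feed-forward model built with BigDL's Torch-style layer API.
model = Sequential()
model.add(Linear(10, 32)).add(ReLU()).add(Linear(32, 1))

# Local forward pass on a dummy batch (distributed training would use an RDD).
print(model.forward(np.random.rand(2, 10).astype("float32")))
```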

Intel continues to accelerate and streamline PyTorch* on Intel® architecture, most notably Intel® Xeon® Scalable processors, both by using the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) directly and by ensuring PyTorch is ready for the next generation of performance improvements in both software and hardware through the nGraph Compiler.
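
A minimal sketch, assuming a PyTorch 1.x CPU build (which includes the Intel MKL-DNN kernels) and torchvision: it checks for MKL-DNN support and runs a ResNet-50 forward pass, with the thread count given only as an example value.

```python
# Minimal sketch, assuming PyTorch 1.x and torchvision on a CPU build.
import torch
import torchvision.models as models

print(torch.backends.mkldnn.is_available())  # True when built with MKL-DNN
torch.set_num_threads(4)                     # example value; match physical cores

model = models.resnet50(pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    scores = model(x)  # conv/linear kernels dispatch to MKL-DNN on CPU
print(scores.shape)    # torch.Size([1, 1000])
```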