Explore resources available for popular AI frameworks optimized for Intel® architecture, including installation guides and other learning materials. We are continuously expanding our list of supported frameworks.

Get the Latest Framework Optimizations
Development Resources

Intel® Optimization for TensorFlow*

This Python*-based deep learning framework is designed for ease of use and extensibility on modern deep neural networks and has been optimized for use on Intel® Xeon® processors.
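As a sketch of a typical setup (the PyPI package name and recommended tuning flags may change between releases), the Intel-optimized build has been distributed as `intel-tensorflow`, with CPU threading commonly tuned through OpenMP environment variables:

```shell
# Install the Intel-optimized TensorFlow build (package name at the time of writing).
pip install intel-tensorflow

# Commonly recommended OpenMP settings for Xeon processors; adjust the
# thread count to the number of physical cores on your system.
export OMP_NUM_THREADS=4
export KMP_AFFINITY=granularity=fine,compact,1,0
export KMP_BLOCKTIME=1
```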


MXNet*

The open-source deep learning framework MXNet* includes built-in support for the Intel® Math Kernel Library (Intel® MKL) and optimizations for Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions.
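As a hedged installation sketch (the MKL-enabled build was historically published under a separate package name; verify current naming before use):

```shell
# Install the MKL-enabled MXNet build from PyPI (historical package name).
pip install mxnet-mkl

# Confirm the install imports cleanly and report the version.
python -c "import mxnet; print(mxnet.__version__)"
```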

Intel® Optimization for Caffe*

The Intel® Optimization for Caffe* provides improved performance for one of the most popular deep learning frameworks when running on Intel® Xeon® processors.
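A minimal build sketch, assuming the Intel-maintained fork on GitHub and its shipped example configuration (file names may differ across releases):

```shell
# Clone the Intel-maintained fork of Caffe.
git clone https://github.com/intel/caffe.git intel-caffe
cd intel-caffe

# Start from the shipped example config; set "BLAS := mkl" in
# Makefile.config before building to link against Intel MKL.
cp Makefile.config.example Makefile.config
make -j"$(nproc)" all
```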


PyTorch*

Intel continues to accelerate and streamline PyTorch* on Intel® architecture, most notably Intel® Xeon® Scalable processors, both by using Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) directly and by ensuring PyTorch is ready for the next generation of performance improvements in both software and hardware through the nGraph Compiler.
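Because MKL-DNN support ships inside the stock PyTorch build, a quick check (using the `torch.backends.mkldnn` availability flag present in recent releases) can confirm the acceleration path is active:

```shell
# Install PyTorch from PyPI.
pip install torch

# Verify that this build can dispatch to MKL-DNN kernels; prints True or False.
python -c "import torch; print(torch.backends.mkldnn.is_available())"
```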


BigDL

BigDL is a distributed deep learning library for Apache Spark*. With BigDL, users can write their deep learning applications as standard Spark programs, which run directly on top of existing Apache Spark* or Hadoop* clusters.
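As an installation sketch (the package layout and its Spark dependency handling have varied across releases, so verify version compatibility with your cluster):

```shell
# Install BigDL from PyPI; check that the pulled-in pyspark version
# matches the Spark version running on your cluster.
pip install bigdl
```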

Intel® Optimization for Theano*

Theano*, a numerical computation library for Python, has been optimized for Intel® architecture and enables Intel® Math Kernel Library (Intel® MKL) functions.
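One common route to an MKL-backed Theano has been conda, which supplies an MKL BLAS alongside the framework (channel and package availability may have changed; this is a sketch, not a definitive recipe):

```shell
# Install Theano with conda so that MKL is available as the BLAS backend.
conda install theano mkl

# Print the BLAS link flags Theano resolved; an MKL-linked build
# references the MKL libraries here.
python -c "import theano; print(theano.config.blas.ldflags)"
```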

Intel® Optimization for Chainer*

Chainer* is a Python*-based framework for deep neural networks. Intel’s optimization for Chainer is integrated with the latest release of Intel® MKL-DNN.
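Chainer reaches MKL-DNN through the iDeep extension module; a minimal setup sketch (package name as published at the time of writing):

```shell
# iDeep provides Chainer's bridge to Intel MKL-DNN.
pip install chainer ideep4py

# Enable iDeep acceleration for the session; the same switch can be set
# in code via chainer.config.use_ideep = 'auto'.
export CHAINER_USE_IDEEP=auto
```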