Deploying Intel® AI Solutions

Deploy in the cloud or in data centers

Searching for, installing, and setting up an environment to run AI workloads optimally takes a lot of time. Now data scientists and deep learning practitioners have instant access to Intel-optimized AI environments.
Get started using Intel-optimized AI environments for your workloads in the cloud or in your data center.
Collaboration between Intel, major cloud service providers, and hardware partners has made pre-configured, Intel-optimized AI environments available.

Choose your deployment channel – either in the cloud or in your data center.
Intel and Amazon Web Services* (AWS)

Want to take advantage of the incredible performance of Intel technology on AWS? Learn more about the new compute-intensive C5 instances for Amazon EC2 and the longstanding collaboration between Intel and AWS.

C5 instances for Amazon EC2 are:

  • Ideal for compute-intensive scientific modeling, financial operations, machine learning (ML) inference, high performance computing (HPC), and distributed analytics that require high floating-point performance
  • Include an Intel® custom cloud solution based on next-generation Intel® Xeon® Scalable processors with Intel® AVX-512 and Intel® Deep Learning Boost (VNNI instructions)
  • Offer up to 96 vCPUs and 192 GB of memory
  • Cascade Lake Instances: c5.12xlarge, c5.24xlarge, c5.metal (bare-metal instance)
  • For more information, please contact us at DeployAI@Intel.com
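To check whether a given instance actually exposes these capabilities, the CPU feature flags reported by the kernel can be inspected. A minimal sketch, assuming a Linux guest; the flag names `avx512f` and `avx512_vnni` are the standard Linux /proc/cpuinfo identifiers, and this is an illustration rather than an official AWS or Intel tool:

```python
def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the Linux kernel."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass  # not a Linux host, or /proc unavailable
    return set()

def supports_dl_boost(flags):
    """True if both AVX-512 foundation and VNNI instructions are advertised."""
    return "avx512f" in flags and "avx512_vnni" in flags

if __name__ == "__main__":
    flags = cpu_flags()
    print("AVX-512F:", "avx512f" in flags)
    print("AVX-512 VNNI:", "avx512_vnni" in flags)
```

On a Cascade Lake C5 instance both flags should be present; on older instance generations only `avx512f` (or neither) will appear.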

Get started with AWS free tier

Intel and Microsoft Azure*

Azure Marketplace recently launched the Intel Optimized Data Science VM for Linux (Ubuntu) to accelerate training and inference on Intel® Xeon® Scalable processors. This offer adds new Python environments that contain Intel's optimized versions of TensorFlow, MXNet, and PyTorch. These optimizations leverage Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN).

  • Intel optimized deep learning frameworks pre-configured and ready to use
  • Take full advantage of all the optimizations on an Intel® Xeon® Scalable Processor based Azure Fv2-Series or Azure HC-Series VM instance
  • When running on an Azure F72s_v2 VM instance, these optimizations yielded an average of 7.7X speedup in training throughput across all standard CNN topologies
  • For more information, please contact us at DeployAI@Intel.com
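Getting the most from the MKL-DNN builds of these frameworks typically also involves tuning the OpenMP threading environment before the framework is imported. A minimal sketch, assuming a Linux VM; the specific values below are commonly cited starting points, not measured optima from this page, and should be tuned per workload:

```python
import os

def set_mkl_threading(num_physical_cores):
    """Set the OpenMP variables commonly tuned for MKL-DNN frameworks.

    Must run before TensorFlow/MXNet/PyTorch is imported so the OpenMP
    runtime picks the values up. Values are illustrative starting points.
    """
    env = {
        "OMP_NUM_THREADS": str(num_physical_cores),      # one thread per physical core
        "KMP_BLOCKTIME": "1",                            # ms a thread spins before sleeping
        "KMP_AFFINITY": "granularity=fine,compact,1,0",  # pin threads, skip hyperthreads
    }
    os.environ.update(env)
    return env
```

For example, on an F72s_v2 instance one would pass the physical core count (half the 72 vCPUs) rather than the vCPU count, since hyperthreads rarely help dense deep learning kernels.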

Get started with $200 in free credits

Intel and Google Cloud Platform* (GCP)

Google Cloud now offers the latest Intel® Xeon® processor family, which can be specified through the CPU selector tool and is available in all North American, European, and Asia-Pacific GCP regions.

Intel® Xeon® Scalable processors in Google Compute Engine (GCE) or Google Kubernetes Engine (GKE) deliver record-breaking performance at no additional cost. The latest 2nd Generation Intel® Xeon® Scalable processors (Cascade Lake architecture) offer:

  • 40% performance improvement compared to current GCP VMs
  • Built-in Acceleration with Intel® Deep Learning Boost
  • Compute-Optimized VMs (C2) with up to 60 vCPUs, 240 GB of memory, and up to 3 TB of local storage
  • Memory-Optimized VMs (M2) with up to 12 TB of memory and 416 vCPUs
  • For more information, please contact us at DeployAI@Intel.com

Get started with $300 in free credits. To secure additional credits, please contact your Intel® AI Builders account manager.

Baidu PaddlePaddle Deep Learning Framework Optimized for Intel

Intel and Baidu have a history of in-depth cooperation in the field of AI. Baidu's open source deep learning platform, PaddlePaddle, runs on Intel® Xeon® Scalable processors and is optimized at the computing, memory, architecture, and communication levels to achieve better model deployment performance.

As an open source deep learning platform, PaddlePaddle gives users a jumpstart by linking high performance processors and computing systems with their deep learning models to solve enterprise-wide challenges. Targeted applications include image classification, speech recognition, language translation, object detection and more.

For more information, please contact us at DeployAI@Intel.com.

These organizations focus on the convergence of artificial intelligence (AI) and high performance computing (HPC) and are collaborating with Intel to help customers deploy AI solutions today.

Dell EMC Ready Solutions for AI

Dell EMC Ready Solutions for AI simplify artificial intelligence with pre-designed, pre-validated solutions for machine learning (ML) and deep learning (DL).

  • Ready Solution for DL (Nauta + Intel® Xeon® processor cluster)
  • Ready Solution for ML (BigDL, Apache Spark*, Apache Hadoop*)

Key Dell platforms integrating Intel technologies are the Dell R740 and the converged C6420, which offers four dual-socket servers in a 2U chassis.

For more information, please contact us at DeployAI@Intel.com.

HPE AI Solutions Based on Intel

HPE has a proven track record in AI solutions that are at once high-performing and economically viable. HPE, in collaboration with partners like Intel, has devised high performance computing (HPC) solutions that deliver significant AI capabilities.

HPE offers AI solutions focusing on video surveillance, fraud detection, prescriptive maintenance, autonomous driving, speech to text, smart cities and more.

Some purpose-built solutions from HPE that help customers create competitive advantage include the ProLiant*, Apollo*, and Edgeline* systems.

For more information, please contact us at DeployAI@Intel.com.

Inspur and Intel collaborating to accelerate AI Innovation

Inspur is developing AI infrastructures in four layers, including a comprehensive computing platform, a complete management & performance suite, optimized deep learning frameworks, and end-to-end, agile, cost-efficient AI solutions.

Inspur offers training and inference solutions based on Intel® Xeon® processors and recently released inference accelerators based on Intel® Xeon® processors and Intel® FPGAs.

For more information, please contact us at DeployAI@Intel.com.

Lenovo AI software, solutions and AI Innovation Centers

Lenovo helps accelerate the AI journey with AI Innovation Centers as well as a portfolio of offerings, software, and solutions.

Lenovo Intelligent Computing Orchestrator (LiCO)*, based on Intel technologies, simplifies enterprise AI deployment with an intuitive interface for managing AI workloads.

A key solution from Lenovo focused on optimal AI performance is the ThinkSystem* SR670 Rack Server.

For more information, please contact us at DeployAI@Intel.com.