Nauta is an integrated deep learning (DL) platform built on Kubernetes. It includes carefully selected open source components and Intel-developed custom applications, tools, and scripts, all validated together to deliver an easy-to-use, flexible deep learning environment. To learn more, please visit our GitHub repo.

Deep Learning on Intel® Xeon® processor based clusters

It can be a cumbersome process for data scientists to build their own deep learning cluster and set up a workstation with the tools needed to run and monitor experiments. Nauta is an easy-to-use platform that gets your experiments running by automatically building and delivering a compute environment for your DL jobs on Intel® Xeon® Scalable processor-based clusters.

Industry recognized open source components

The Nauta platform contains carefully selected open source components used by data scientists. These include interactive Jupyter* notebook access for real-time experimentation, Intel® Optimizations for TensorFlow*, and TensorBoard* to create and analyze deep learning models, with everything orchestrated using Kubernetes* and Docker* containers.

Freedom and flexibility provided by Kubernetes*

Kubernetes is rapidly becoming a standard for orchestrating containerized deployments. With Kubernetes orchestration in Nauta, data scientists have a flexible system to scale deep learning models across many nodes.
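Under the hood, Kubernetes expresses each distributed run as a declarative manifest that the cluster's operators act on. As a rough, hypothetical sketch (not Nauta's actual generated output; the job name, image, and replica count are illustrative), a Kubeflow-style TFJob that spreads training across four worker nodes might look like:

```yaml
# Illustrative manifest only -- field values are assumptions, and the
# exact resources a given Nauta version generates may differ.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-multinode            # hypothetical experiment name
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 4                  # scale the model across four nodes
      template:
        spec:
          containers:
            - name: tensorflow
              image: registry.example.com/mnist:latest   # hypothetical image
              command: ["python", "train.py"]
```

Because the desired state is declared rather than scripted, scaling an experiment up or down is a matter of changing the replica count and letting Kubernetes reconcile the cluster.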

Scalability: multi-user and multi-node

Scaling deep learning workloads across many nodes can be a challenging process. The Nauta platform resides on an Intel Xeon Scalable processor-based cluster, where DL experiments can be automatically deployed across multiple nodes. Data scientists set up and run their jobs by either logging into accounts on cluster user nodes or using the Nauta CLI installed on their clients.
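The CLI workflow described above can be sketched roughly as follows. The command names follow Nauta's `nctl` client, but the script name, experiment name, and template name here are hypothetical placeholders; available templates and flags vary by installation, so check `nctl --help` on your cluster.

```shell
# Submit a training script as a multi-node experiment
# (experiment and template names below are illustrative)
nctl experiment submit --name my-experiment \
    --template multinode-tf-training-tfjob mnist_trainer.py

# Check the status of experiments running on the cluster
nctl experiment list
```

From there, results and logs are available through the same client, so data scientists never need to interact with Kubernetes directly.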