Building their own deep learning cluster and setting up a workstation with the tools needed to run and monitor experiments can be a cumbersome process for data scientists. Nauta is an easy-to-use platform that gets your experiments running by automatically building and delivering a compute environment for your deep learning (DL) jobs on Intel® Xeon® Scalable processor-based clusters.
The Nauta platform contains carefully selected open source components used by data scientists. These include interactive Jupyter* notebook access for real-time experimentation, Intel® Optimizations for TensorFlow*, and TensorBoard* for creating and analyzing deep learning models, with everything orchestrated using Kubernetes* and Docker* containers.
Kubernetes is rapidly becoming a standard for orchestrating containerized deployments. With Kubernetes orchestration in Nauta, data scientists have a flexible system to scale deep learning models across many nodes.
Scaling deep learning workloads across many nodes can be challenging. The Nauta platform resides on an Intel Xeon Scalable processor-based cluster, where DL experiments are automatically deployed across multiple nodes. Data scientists set up and run their jobs either by logging into accounts on cluster user nodes or by using the Nauta CLI installed on their client machines.