Paperspace, a Brooklyn-based AI startup, has announced that its Gradient machine learning platform as a service (PaaS) can now run in enterprise data centers, hybrid cloud, and multi-cloud environments.
Gradient is one of the first ML PaaS offerings in the industry to provide end-to-end lifecycle management of machine learning models. The platform covers the entire workflow, from developing and training models to tuning and deploying them. The initial version of Gradient ran only on Paperspace's own infrastructure and Google Cloud Platform (GCP); the latest iteration gives customers a choice of cloud and on-prem environments.
Gradient PaaS has three building blocks: Notebooks, Jobs, and Storage. Notebooks expose a full-blown Jupyter environment for immediate access. Jobs can run almost any Docker container or Python program on the GPU cluster. Storage exposes artifacts and datasets that can be persisted and shared across multiple projects.
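The Jobs abstraction can be pictured as a small spec that pairs a container with the machine it should run on. A minimal sketch follows; the field names (`machine_type`, `artifacts_dir`) and the `submit` helper are illustrative assumptions, not the actual Gradient API.

```python
from dataclasses import dataclass

# Illustrative sketch only: these field names are assumptions, not the
# actual Gradient API. A Job pairs a Docker container (or a Python entry
# point) with the GPU machine it should run on, plus an artifacts path
# that Storage persists across projects.
@dataclass
class JobSpec:
    name: str
    container: str                     # any Docker image
    command: str                       # entry point run inside the container
    machine_type: str = "GPU+"         # hypothetical GPU cluster tier
    artifacts_dir: str = "/artifacts"  # persisted to shared Storage

def submit(job: JobSpec) -> dict:
    """Serialize a job the way a client might post it to the scheduler."""
    return {
        "name": job.name,
        "container": job.container,
        "command": job.command,
        "machineType": job.machine_type,
        "artifactsDir": job.artifacts_dir,
    }

payload = submit(JobSpec(name="train-resnet",
                         container="tensorflow/tensorflow:latest-gpu",
                         command="python train.py"))
```

The separation of the container (what runs) from the machine type (where it runs) is what lets the same job target Paperspace's cluster or another environment.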
With the recent release, Paperspace brings agile practices to ML development. Gradient now supports GradientCI, a continuous integration and deployment feature built on tight integration with Git repositories. As soon as a developer commits training code, GradientCI triggers a pipeline to train and deploy models. This feature brings some of the best practices of DevOps and agile development to ML model management.
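The commit-to-pipeline flow can be sketched in a few lines. This is a minimal illustration of the pattern, not GradientCI's actual interface: the webhook payload shape, the `on_push` name, and the stage list are all assumptions.

```python
# Minimal sketch of a commit-triggered ML pipeline; the webhook payload
# shape and function name are assumptions, not GradientCI's actual API.
def on_push(event: dict, watched_branch: str = "main") -> list:
    """Map a Git push event to the pipeline stages it should trigger."""
    ref = event.get("ref", "")
    branch = ref.split("refs/heads/")[-1]
    if branch != watched_branch:
        return []  # pushes to other branches do not start a pipeline
    return ["build", "train", "evaluate", "deploy"]
```

For example, a push to `main` would start the full train-and-deploy pipeline, while a push to a feature branch would be ignored.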
The latest iteration of Gradient can run across multi-cloud, on-prem, and hybrid environments, making the PaaS fully infrastructure agnostic. Enterprises can deploy a mature and robust PaaS within their own data centers to train and deploy models.
Industry verticals such as finance and healthcare have stringent policies that prevent companies from storing data in the public cloud. Customers in these verticals can deploy Gradient in the same data center where sensitive datasets are stored and maintained.
By running Gradient on-premises, customers avoid the bandwidth costs of moving training datasets to the cloud. Since Gradient runs both on-prem and in the public cloud, customers can seamlessly move models trained in local data centers to the cloud for inference. This ability to mix and match training and inference environments makes Gradient appealing to enterprises.
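The mix-and-match idea amounts to a simple data-locality policy: training stays next to sensitive data, while inference can move to the cloud. The sketch below is a hedged illustration; the `placement` function and environment names are hypothetical, not part of Gradient.

```python
# Hedged sketch of the mix-and-match placement idea: the function and
# environment names are illustrative, not part of Gradient's API.
def placement(phase: str, data_sensitive: bool) -> str:
    """Decide where a workload runs under a simple data-locality policy."""
    if phase == "train" and data_sensitive:
        return "on-prem"   # keep the training dataset inside the data center
    return "cloud"         # inference (or non-sensitive training) moves out
```

Under this policy, a model trained on-prem against regulated data can still be served from a public cloud endpoint once the trained artifact is exported.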
According to Paperspace, Gradient runs in mainstream public cloud environments including AWS, Azure, and GCP. The platform takes advantage of high-end VM configurations backed by the CPU, GPU, and TPU infrastructure offered by cloud providers.
Paperspace has also partnered with Intel to support the upcoming Nervana Neural Network Processors, which Intel has built to accelerate training and inference. When Gradient is deployed on a cluster based on Nervana processors, customers instantly benefit from Intel's acceleration without additional configuration.
Integrated MLOps, agile model management, and infrastructure-agnostic deployment choices make Gradient stand out in the crowded ML PaaS market dominated by hyperscale cloud providers.