
Nvidia moves AI closer to the Edge

20 July 2022


Fleet Command gets new features

A year after the company launched Fleet Command, a cloud-based service for deploying, managing, and scaling AI applications at the edge, Nvidia has launched new features that help bridge the distance between administrators and far-flung edge servers by improving the remote management of edge AI deployments.

For those not in the know, edge computing is a distributed computing approach in which data is processed close to where it is generated, using local resources, instead of being shipped off to a centralised cloud or data centre. The idea is that analysis gets faster because less time is lost moving data back and forth over the network. Fleet Command is designed to control such deployments through its cloud interface.
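To make the contrast concrete, here is a minimal Python sketch of the idea: raw samples are summarised where they are produced, and only the small result would need to travel upstream. The endpoint name and the data are invented for illustration and are not part of Nvidia's tooling.

```python
import json
import time
from urllib import request

# Hypothetical central ingest endpoint -- a placeholder, not a real Nvidia URL.
CENTRAL_INGEST_URL = "https://central-dc.example.com/ingest"


def send_to_cloud(payload: bytes, url: str = CENTRAL_INGEST_URL) -> float:
    """Ship raw data to a central data centre and return the round-trip time."""
    start = time.perf_counter()
    req = request.Request(url, data=payload, method="POST")
    with request.urlopen(req) as resp:  # the network hop dominates latency here
        resp.read()
    return time.perf_counter() - start


def summarise_at_edge(samples: list[float]) -> dict:
    """Process data where it is produced; only a small summary leaves the site."""
    return {
        "count": len(samples),
        "mean": sum(samples) / len(samples),
        "max": max(samples),
    }


if __name__ == "__main__":
    samples = [0.42, 0.38, 0.91, 0.47]      # stand-in for locally captured data
    summary = summarise_at_edge(samples)     # analysis happens at the edge
    # Only these few bytes would need to cross the network, rather than the
    # full raw stream that send_to_cloud() would have to push upstream.
    print(json.dumps(summary))
```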

Nvidia product marketing manager Troy Estes wrote in a blog post: “In the world of AI, distance is not the friend of many IT managers. Unlike data centers, where resources and personnel are consolidated, enterprises deploying AI applications at the edge need to consider how to manage the extreme nature of edge environments.”

Nvidia Fleet Command offers a managed platform for container orchestration built on a Kubernetes distribution, which makes it relatively easy to provision and deploy AI applications and systems across thousands of distributed environments, all from a single cloud-based console, the company says.
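Fleet Command drives all of this from Nvidia's own console, but the pattern underneath is ordinary Kubernetes orchestration. As a rough sketch of what "provision and deploy a containerised AI application" means at that layer, the snippet below uses the official kubernetes Python client against a generic cluster; the image name, site contexts and namespace are invented for illustration, and this is not the Fleet Command API.

```python
from kubernetes import client, config


def deploy_inference_app(context: str, image: str, replicas: int = 1) -> None:
    """Create a simple Deployment on one edge cluster, picked by kubeconfig context."""
    config.load_kube_config(context=context)  # credentials for that site's cluster

    container = client.V1Container(
        name="inference",
        image=image,
        ports=[client.V1ContainerPort(container_port=8000)],
    )
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": "inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "inference"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="edge-inference"),
        spec=spec,
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)


if __name__ == "__main__":
    # Hypothetical edge sites, each reachable as a kubeconfig context.
    for site in ["store-001", "store-002"]:
        deploy_inference_app(context=site, image="registry.example.com/detector:1.0")
```

The value of a managed service is repeating this kind of rollout across thousands of sites from one console, rather than having someone hand-drive each cluster.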

Often, the network links between data centers or clouds and a remote AI deployment are hard to make fast enough for production use. Given the large volumes of data that AI applications consume, it takes a high-performance network and careful data management to make these deployments run well enough to satisfy service-level agreements.
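Whether a particular link clears that bar is something an IT team can measure directly. The sketch below, again plain Python with a placeholder endpoint and an assumed latency budget rather than anything Nvidia prescribes, times a handful of round trips and compares a rough 95th percentile against the SLA target.

```python
import statistics
import time
from urllib import request

ENDPOINT = "https://edge-site.example.com/healthz"  # placeholder URL
SLA_P95_SECONDS = 0.2                               # assumed latency budget


def measure_round_trips(url: str, attempts: int = 20) -> list[float]:
    """Time simple GET requests to estimate round-trip latency to a site."""
    timings = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with request.urlopen(url, timeout=5) as resp:
                resp.read()
        except OSError:
            continue                                # count only completed trips
        timings.append(time.perf_counter() - start)
    return timings


if __name__ == "__main__":
    timings = measure_round_trips(ENDPOINT)
    if timings:
        p95 = statistics.quantiles(timings, n=20)[-1]  # rough 95th percentile
        print(f"p95 latency {p95:.3f}s, SLA target {SLA_P95_SECONDS}s")
        print("within SLA" if p95 <= SLA_P95_SECONDS else "SLA at risk")
    else:
        print("no successful round trips; link unusable for this check")
```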

 
