For those not in the know, edge computing is a distributed computing approach, with its own set of resources, that allows data to be processed closer to its origin instead of being transferred to a centralized cloud or data center. The idea is that it speeds up analysis by reducing the latency involved in moving data back and forth. Fleet Command is designed to enable the control of such deployments through its cloud interface.
“In the world of AI, distance is not the friend of many IT managers,” Nvidia product marketing manager Troy Estes wrote in a blog post. “Unlike data centers, where resources and personnel are consolidated, enterprises deploying AI applications at the edge need to consider how to manage the extreme nature of edge environments.”
Nvidia Fleet Command offers a managed platform for container orchestration built on a Kubernetes distribution, making it relatively easy to provision and deploy AI applications and systems across thousands of distributed environments, all from a single cloud-based console, Saunders said.
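Fleet Command's internal configuration format isn't shown in the article, but since the platform is built on Kubernetes, a standard Kubernetes Deployment manifest gives a sense of the kind of declarative spec such a console orchestrates across edge sites. The application name, container image, and registry below are hypothetical; the `nvidia.com/gpu` resource request is the standard mechanism exposed by NVIDIA's Kubernetes device plugin.

```yaml
# Sketch of a Kubernetes Deployment for a containerized edge AI workload.
# All names and the image URL are illustrative, not from Fleet Command docs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-inference
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-inference
  template:
    metadata:
      labels:
        app: edge-inference
    spec:
      containers:
        - name: inference
          image: registry.example.com/edge-inference:1.0
          resources:
            limits:
              nvidia.com/gpu: 1   # request one GPU on the edge node
```

A management layer like Fleet Command would, in effect, push specs of this kind to many remote clusters from one console rather than requiring an administrator to apply them site by site.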
Often, the network links between data centers or clouds and a remote AI deployment are difficult to make fast enough for production use. Given the large volumes of data that AI applications require, it takes a high-performance network and careful data management to make these deployments work well enough to satisfy service-level agreements.