
Nvidia puts more of its deep learning under the bonnet of cars

14 January 2016


Powered by Pascal

Nvidia is applying all that it knows about deep learning to enable autonomous vehicles.

The GPU vendor has launched the NVIDIA DRIVE PX 2, an autonomous vehicle development platform powered by its 16nm FinFET-based Pascal GPU.

The GPU maker shipped the original DRIVE PX last year to its automotive partners, including Audi, BMW, Daimler, Ford and dozens more. The newer version is equipped with two Tegra SoCs with ARM cores plus two discrete Pascal GPUs.

Nvidia said that the new platform is capable of 24 trillion deep learning operations per second, ten times more than the previous generation.

It also offers an aggregate of eight teraflops of single-precision performance, a four-fold increase over the original DRIVE PX and many times faster than using a slide rule or counting on your fingers.

The development platform includes the Caffe deep learning framework to run DNN models designed and trained on DIGITS, NVIDIA’s interactive deep learning training system.
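In practice, a network trained in DIGITS is deployed and run through Caffe's Python interface. Below is a minimal sketch of that inference step, assuming a classification-style network; the file names (deploy.prototxt, snapshot.caffemodel, frame.jpg) and the 'data' and 'prob' blob names are illustrative assumptions, not details from Nvidia.

    # Minimal pycaffe inference sketch for a DIGITS-trained network.
    # File names and blob names below are assumptions for illustration.
    import caffe

    caffe.set_mode_gpu()                          # run on the GPU, as on DRIVE PX 2
    net = caffe.Net('deploy.prototxt',            # network definition exported from DIGITS
                    'snapshot.caffemodel',        # trained weights
                    caffe.TEST)

    # Prepare an input image to match the network's expected layout.
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))      # HWC -> CHW
    transformer.set_raw_scale('data', 255)            # [0,1] -> [0,255]
    transformer.set_channel_swap('data', (2, 1, 0))   # RGB -> BGR

    image = caffe.io.load_image('frame.jpg')          # hypothetical camera frame
    net.blobs['data'].data[...] = transformer.preprocess('data', image)

    out = net.forward()                               # run the DNN
    print('predicted class:', out['prob'].argmax())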

Nvidia wants to take humans out of the driver's seat to reduce the one million automotive-related fatalities that occur each year.

Perception is the main challenge, and deep learning can deliver super-human perception capability. DRIVE PX 2 can process input from 12 video cameras, plus lidar, radar and ultrasonic sensors. This 360-degree assessment makes it possible to detect objects, identify them and their position relative to the car, and then calculate a safe and comfortable trajectory.
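To make the shape of that pipeline concrete, here is a rough Python sketch of the fuse-detect-plan loop; every name and data structure in it is hypothetical and merely stands in for the real perception and planning software running on the platform.

    # Hypothetical sketch of the 360-degree perception-to-trajectory loop.
    # None of these names come from Nvidia; the stubs only show the flow.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Detection:
        label: str                      # e.g. "car", "pedestrian"
        position: Tuple[float, float]   # metres, relative to the vehicle

    def fuse_sensors(cameras: list, lidar, radar, ultrasonic) -> dict:
        # Stub: a real system would align and merge all sensor streams here.
        return {"cameras": cameras, "lidar": lidar,
                "radar": radar, "ultrasonic": ultrasonic}

    def detect_objects(world: dict) -> List[Detection]:
        # Stub: this is where the DNN-based perception would run.
        return [Detection("car", (12.0, -1.5))]

    def plan_trajectory(objects: List[Detection]) -> List[Tuple[float, float]]:
        # Stub: choose waypoints that keep a safe distance from every object.
        return [(0.0, 0.0), (5.0, 0.2), (10.0, 0.5)]

    world = fuse_sensors(cameras=[None] * 12,   # the 12 cameras mentioned above
                         lidar=None, radar=None, ultrasonic=None)
    path = plan_trajectory(detect_objects(world))
    print(path)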
