
Nvidia has shipped a few DGX-1 systems

03 October 2016


At least five are deployed in Europe

We've had confirmation that at least five of these very expensive 170 TFLOPS servers, each packing eight Tesla P100 cards with 16GB of HBM2 apiece, two Xeon processors and 512GB of system RAM, have reached their customers.
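As a rough sanity check on that headline number, the 170 TFLOPS figure lines up with eight P100s at Nvidia's published peak half-precision rate of roughly 21.2 TFLOPS per card. The quick sketch below is a back-of-the-envelope calculation, not a measured benchmark.

```python
# Back-of-the-envelope check of the DGX-1's quoted 170 TFLOPS figure,
# assuming Nvidia's published peak FP16 throughput per Tesla P100 (SXM2).
P100_FP16_TFLOPS = 21.2   # peak half-precision throughput per card
CARDS_PER_DGX1 = 8

system_tflops = P100_FP16_TFLOPS * CARDS_PER_DGX1
print(f"Approximate DGX-1 FP16 peak: {system_tflops:.0f} TFLOPS")  # ~170 TFLOPS
```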

This means Nvidia got a head start, as the first systems reached customers in late summer, ahead of Q4 2016, while Nvidia's CEO had promised to ship DGX-1 systems before the end of the year. One of the biggest holdups was the shortage of HBM2 memory, but that is slowly starting to ship in volume.

A single P100 card can easily sell for $5,000 on the market, but so far Nvidia has concentrated on shipping as many P100 cards as possible inside DGX-1 machines. This is definitely more profitable, as companies like Facebook, Google and SAP, along with many leading universities, are willing to spend their budgets and grants on these new supercomputers.
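To put the pricing in perspective, here is a rough comparison using the $5,000 per-card street figure above and the publicly announced $129,000 DGX-1 list price. The numbers are illustrative only; Nvidia's actual margins and bill-of-materials costs are not public.

```python
# Rough revenue comparison: selling eight P100s individually versus
# selling them inside one DGX-1 system, using the figures above.
# The $129,000 list price is Nvidia's announced DGX-1 price; actual
# margins are not disclosed, so this is only a ballpark comparison.
P100_STREET_PRICE = 5_000
DGX1_LIST_PRICE = 129_000
CARDS_PER_DGX1 = 8

cards_only = P100_STREET_PRICE * CARDS_PER_DGX1
print(f"Eight loose P100s: ${cards_only:,}")       # $40,000
print(f"One DGX-1 system:  ${DGX1_LIST_PRICE:,}")  # $129,000
```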

Deep learning startups will also be all over these systems, but it might be hard to get your hands on one, at least for the time being. We also learned that more systems are on the way and that even customers outside the educational sector and the Google or Facebook league will get their systems in Q4 2016, just a few weeks from now.

HBM2 memory was one of the big holdups, as the stacked memory chips are extremely hard to manufacture and they make the P100 cards very expensive. Having followed Nvidia since its second chip, the Riva 128, we know that the company's business mentality is all about making the biggest profit it possibly can.

It is hard to blame the firm, as according to Nvidia's CEO at the GPU Technology Conference in Amsterdam, the Pascal architecture was a two billion dollar investment. Nvidia is all about ROI, the return on investment on this gigantic project.

Given that Nvidia drew at least 1,600 attendees to its GPU Technology Conference in Amsterdam, a big chunk of them deep learning enthusiasts, it is hardly a surprise that the company expects to sell a lot of P100 cards and DGX-1 servers, as they are, at the moment, the fastest deep learning hardware around.

Last modified on 03 October 2016