This means Nvidia had a head start, as the first systems reached customers in late summer, before Q4 2016, and Nvidia's CEO promised to ship the DGX-1 systems before the end of the year. One of the biggest holdups was the shortage of HBM2 memory, but that is now slowly starting to ship.
A single P100 card can easily sell for $5,000 on the market, but so far Nvidia has concentrated on shipping as many P100 cards as possible inside DGX-1 machines. This is definitely more profitable, as companies like Facebook, Google and SAP, along with many leading universities, are willing to spend their budgets and grants on these new supercomputers.
Deep learning startups will also be all over these systems, but it might be hard to get your hands on one, at least for the time being. We also learned that more systems are on the way, and that even customers outside academia and the Google or Facebook league should get their systems in Q4 2016, just a few weeks from now.
HBM2 memory was one of the big holdups, as stacked memory chips are extremely hard to manufacture, and they make the P100 cards very expensive. Since we have followed Nvidia from the time of its second chip, the Riva 128, we know that Nvidia's business mentality is all about making the biggest profit it possibly can.
It is hard to blame the firm: according to Nvidia's CEO at the GPU Technology Conference in Amsterdam, the Pascal architecture was a two billion dollar investment, and Nvidia is all about ROI, or return on investment, for this gigantic project.
Given that Nvidia drew at least 1,600 attendees to its GPU Technology Conference in Amsterdam, a big chunk of them deep learning enthusiasts, it is hardly a surprise that the company expects to sell a lot of P100 cards and DGX-1 servers, as they are, at the moment, the fastest deep learning hardware around.