Amazon Web Services and Microsoft have been doing the same thing lately, and Google's deal allows users to attach up to eight GPUs (four K80 boards, each carrying two GPUs) to the Google Compute Engine virtual machine of their choice.
For those who came in late, Google's offering is built on Nvidia GPUs attached to VMs, and the company apparently has a fair bit of capacity going spare. That makes it handy for the scientific and development-lab communities, since assembling and cooling a GPU cluster yourself is normally a bit of an arse.
Last September, AWS launched its cloud GPU offering as P2 VM instances, also built on Tesla K80 GPUs, and in December Microsoft made the same hardware available on Azure.
It appears that Google is keen to undercut its rivals to get a foot in the door: GPUs in Google's US data centres cost $0.70 per GPU per hour, rising to $0.77 in Europe and Asia. Microsoft, whose GPU hardware can only be hired by the month, charges $700; Amazon's comparable instances start at $0.90 per hour, so the pricing is a challenge to both.
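To put the headline rates in context, here's a rough back-of-envelope comparison. It assumes a 730-hour month of continuous use (an assumption, not a figure from the vendors) and uses the per-GPU-hour rates quoted above:

```python
# Rough monthly cost of one K80 GPU running flat out.
# Hourly rates are the figures quoted above; Microsoft's
# Azure GPU hardware is quoted as a flat monthly fee.
HOURS_PER_MONTH = 730  # assumed: ~24 * 365 / 12

google_us = 0.70 * HOURS_PER_MONTH       # Google, US data centres
google_eu_asia = 0.77 * HOURS_PER_MONTH  # Google, Europe and Asia
amazon = 0.90 * HOURS_PER_MONTH          # Amazon, comparable instance
microsoft = 700.00                       # Microsoft, monthly hire only

print(f"Google (US):      ${google_us:,.2f}/month")
print(f"Google (EU/Asia): ${google_eu_asia:,.2f}/month")
print(f"Amazon:           ${amazon:,.2f}/month")
print(f"Microsoft:        ${microsoft:,.2f}/month")
```

On those assumptions a US-hosted Google GPU works out to roughly $511 a month against Microsoft's $700 and Amazon's $657, though hourly billing means lighter users pay far less than any flat monthly rate.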
The Tesla K80 boards which have effortlessly dominated this trend each carry two GPUs, for a total of 4,992 CUDA cores, 24 GB of GDDR5 memory (12 GB per GPU, as exposed in Google's product) and 480 GB/s of aggregate bandwidth.