The AWS Graviton processor contains 64-bit Neoverse cores built on ARM's 16nm Cosmos platform. According to EE News Europe, ARM senior vice president Drew Henry said the Israeli-designed Graviton is based on the Cortex-A72 64-bit core and runs at clock frequencies of up to 2.3GHz.
Until now, AWS servers have run on Intel and AMD processors. The new system will assist Amazon with scale-out workloads, where users of the service can share the load across a group of smaller instances, such as containerised microservices, web servers, development environments, and caching fleets.
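The scale-out pattern described above can be sketched as follows. This is an illustrative sketch only, not AWS code: the instance names and the round-robin routing policy are assumptions chosen to show how requests are shared evenly across a fleet of smaller instances rather than sent to one large server.

```python
from itertools import cycle


class ScaleOutFleet:
    """Spreads incoming work across many small instances (hypothetical names)."""

    def __init__(self, instance_names):
        self.instances = list(instance_names)
        # cycle() yields instances in round-robin order, forever.
        self._rr = cycle(self.instances)

    def route(self, request):
        # Assign the request to the next instance in rotation.
        target = next(self._rr)
        return (target, request)


# Three small instances stand in for a fleet of, say, A1-class nodes.
fleet = ScaleOutFleet(["node-1", "node-2", "node-3"])
assignments = [fleet.route(f"req-{i}") for i in range(6)]
# Six requests land two-per-instance, so no single node bears the full load.
```

A real deployment would use a managed load balancer rather than hand-rolled routing, but the principle is the same: many cheap cores handling independent requests suit workloads like web serving and caching.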
It also means that Amazon, via Annapurna, now has the ability to license ARM blueprints, customise and tweak those designs, and go to contract manufacturers such as TSMC and GlobalFoundries to get competitive chips made.
AWS is also building a custom ASIC for AI inference, called Inferentia. This could be capable of scaling from hundreds to thousands of trillions of operations per second and should further reduce the cost of cloud-based inference, allowing Amazon to compete with its rivals in the cloud computing space. Forbes reports that Google already has its own Tensor Processing Unit (TPU), while Microsoft is using Intel and Xilinx FPGAs to accelerate inference processing.