
Groq releases new Tensor Streaming Processor

15 November 2019


One PetaOp/s of performance on a single chip

Tensor Streaming Processor inventor Groq announced that its new TSP architecture is capable of 1 PetaOp/s performance on a single chip.

Groq claims its architecture is the first in the world to achieve this level of performance, which is equivalent to one quadrillion operations per second, or 1e15 ops/s. Groq's architecture can also deliver up to 250 trillion floating-point operations per second (FLOPS).
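For a rough sense of scale, the back-of-envelope sketch below shows how many operations per clock cycle a chip would need to sustain to hit those numbers. The 1 GHz clock is an illustrative assumption for the arithmetic, not a figure Groq has published.

```python
# Back-of-envelope check of what 1 PetaOp/s implies (illustrative only;
# the ~1 GHz clock below is an assumption, not a figure from Groq).
peta_ops_per_s = 1e15      # 1 PetaOp/s = one quadrillion ops per second
assumed_clock_hz = 1e9     # hypothetical 1 GHz clock

ops_per_cycle = peta_ops_per_s / assumed_clock_hz
print(f"Ops needed per clock cycle: {ops_per_cycle:,.0f}")   # 1,000,000

# The quoted 250 TFLOPS figure, for comparison:
tflops = 250e12
print(f"FLOPs per cycle at the same clock: {tflops / assumed_clock_hz:,.0f}")  # 250,000
```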

 Groq's co-founder and CEO Jonathan Ross said: "Top GPU companies have been telling customers that they'd hoped to be able to deliver one PetaOp/s performance within the next few years; Groq is announcing it today, and in doing so setting a new performance standard. The Groq architecture is many multiples faster than anything else available for inference, in terms of both low latency and inferences per second. Our customer interactions confirm that. We had first silicon back, first-day power-on, programs running in the first week, sampled to partners and customers in under six weeks, with A0 silicon going into production."

He claims his outfit's TSP architecture provides compute flexibility and massive parallelism without the synchronization overhead of traditional GPU and CPU architectures. Groq's architecture can support traditional and new machine learning models and is currently in operation on customer sites in both x86 and non-x86 systems.

Groq's new, simpler processing architecture is designed specifically for the performance requirements of computer vision, machine learning, and other AI-related workloads. Execution planning happens in software, freeing up valuable silicon real estate otherwise dedicated to dynamic instruction execution. The tight control provided by this architecture provides deterministic processing that is especially valuable for applications where safety and accuracy are paramount. Compared to complex traditional architectures based on CPUs, GPUs, and FPGAs, Groq's chip also streamlines qualification and deployment, enabling customers to quickly implement scalable, high performance-per-watt systems.
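To illustrate what software-planned, deterministic execution means in practice, here is a minimal conceptual sketch. All names and latencies below are hypothetical, and this is not Groq's actual compiler or toolchain; the point is simply that when every operation's issue cycle is fixed ahead of time, runtime behaviour is fully predictable.

```python
# Minimal conceptual sketch of software-planned (static) execution:
# the "compiler" fixes the order and timing of every operation up front,
# so the execution loop just follows the plan with no dynamic scheduling.
# All names here are hypothetical; this is not Groq's actual toolchain.
from dataclasses import dataclass

@dataclass
class ScheduledOp:
    cycle: int   # cycle at which the op issues, decided ahead of time
    name: str    # e.g. "load", "matmul", "store"

def compile_schedule(ops, latencies):
    """Assign each op a fixed issue cycle based on known latencies."""
    schedule, cycle = [], 0
    for name in ops:
        schedule.append(ScheduledOp(cycle, name))
        cycle += latencies[name]   # latency is known, so timing is deterministic
    return schedule

def run(schedule):
    for op in schedule:            # no reordering, no stalls resolved at runtime
        print(f"cycle {op.cycle:>3}: issue {op.name}")

plan = compile_schedule(["load", "matmul", "store"],
                        {"load": 4, "matmul": 16, "store": 4})
run(plan)
```

Because nothing is decided at runtime, the same input always takes the same path and the same number of cycles, which is the determinism the announcement emphasises for safety-critical applications.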

Groq Chief Architect Dennis Abts said: "Groq's solution is ideal for deep learning inference processing for a wide range of applications, but even beyond that massive opportunity, the Groq solution is designed for a broad class of workloads. Its performance, coupled with its simplicity, makes it an ideal platform for any high-performance, data- or compute-intensive workload."
