Published in News

Renesas develops AI accelerator

18 June 2019

For super fast AIs

Renesas has announced it has developed an AI accelerator that performs CNN (convolutional neural network) processing at high speeds and low power.

The development is part of Renesas' push to create a second-generation embedded AI (e-AI), which will accelerate the increased intelligence of endpoint devices.

A Renesas test chip featuring the accelerator has achieved a power efficiency of 8.8 TOPS/W, which the vendor claims is among the industry's best.

The tech is based on the processing-in-memory (PIM) architecture, an increasingly popular approach for AI technology, in which multiply-and-accumulate operations are performed in the memory circuit as data is read out from that memory.

Renesas developed three technologies for the accelerator. The first is a ternary-valued (-1, 0, 1) SRAM-structure PIM technology that can perform large-scale CNN computations. The second is an SRAM circuit with comparators that can read out memory data at low power. The third is a technology that prevents calculation errors caused by manufacturing process variations.
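To see why ternary weights matter, consider a minimal sketch (not Renesas' actual circuit design, just an illustration of the arithmetic): when weights are restricted to -1, 0, and 1, each multiply-and-accumulate step collapses into an addition, a subtraction, or a skip, so no hardware multiplier is needed.

```python
def ternary_mac(activations, weights):
    """Accumulate activations using only ternary weights (-1, 0, 1).

    Each step is an add, a subtract, or a skip -- the multiply in
    multiply-and-accumulate disappears, which is what makes the
    in-memory version of this operation so power-efficient.
    """
    acc = 0
    for a, w in zip(activations, weights):
        if w == 1:
            acc += a       # weight +1: add the activation
        elif w == -1:
            acc -= a       # weight -1: subtract the activation
        # weight 0: skip entirely, no arithmetic performed
    return acc

# Example: 3 - 5 + (skip) + 7
print(ternary_mac([3, 5, 2, 7], [1, -1, 0, 1]))  # prints 5
```

In a PIM design, this accumulation would happen inside the memory array itself as the data is read out, rather than after shuttling the values to a separate compute unit.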

All this adds up to a reduction in the memory access time in deep learning processing and a reduction in the power required for the multiply-and-accumulate operations.

Renesas said the new accelerator achieves the industry's highest class of power efficiency while maintaining an accuracy of more than 99 per cent in a handwritten character recognition test (MNIST).

 

Last modified on 18 June 2019