
Flex Logix solves Deep Learning DRAM problem

01 November 2018


New architecture should save power and mean less DRAM

Startup Flex Logix says it has worked out a way to fix a problem which has made deep learning a power and resource hog.

Systems designed to do difficult jobs in real time are continuously shuttling the data that makes up the neural network's guts between memory and the processor.

Storing that data is not the problem; the bottleneck is the bandwidth between the processor and memory.

Flex Logix said that some systems need four or even eight DRAM chips to sling hundreds of gigabits per second to the processor, which takes up a lot of space and consumes considerable power.

Flex Logix says that the interconnect technology and tile-based architecture it developed for reconfigurable chips will lead to AI systems that need the bandwidth of only a single DRAM chip and consume one-tenth the power.

The start-up was founded to commercialise a new architecture for embedded field-programmable gate arrays (eFPGAs), but one of the founders, Cheng C. Wang, realised the technology could also speed up neural networks.

The other founder, Geoff Tate, told IEEE Spectrum that neural networks need shedloads of circuits to do the critical "inferencing" computation, called multiply-and-accumulate.

"But what's even harder is that you must be very good at bringing in all these weights, so that the multipliers always have the data they need in order to do the math that's required”, he said.

Wang twigged that the interconnect technology in the outfit's FPGAs could be adapted into an architecture that loads weights rapidly and efficiently, delivering high performance at low power.
