Published in PC Hardware

Intel goes neuromorphic

28 September 2017


Loihi for AI and deep learning

Chipzilla has announced a new neuromorphic chip, codenamed Loihi, designed for AI and deep learning workloads.

Intel’s Dr. Michael Mayberry said that the chip does not need to be trained in the traditional way and that it takes a new approach to this type of computing by using asynchronous spiking. This works differently from a conventional transistor: instead of flipping between a 0 and a 1, each artificial neuron accumulates incoming signals and fires once the number of spikes exceeds a given threshold.

This is similar to the way biological neurons fire to make muscles flex.
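The threshold-and-fire behaviour described above can be sketched with a basic leaky integrate-and-fire neuron, a standard textbook model of spiking. This is a generic illustration, not Intel's actual Loihi neuron circuit, and the threshold and leak values are arbitrary:

```python
def run_neuron(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron.

    Accumulates incoming spike weights into a membrane potential;
    when the potential crosses the threshold, the neuron fires
    and resets. Returns the time steps at which it fired.
    """
    potential = 0.0
    fired_at = []
    for t, weight in enumerate(inputs):
        potential = potential * leak + weight  # leak a little, then integrate
        if potential >= threshold:             # threshold reached: fire
            fired_at.append(t)
            potential = 0.0                    # reset after spiking
    return fired_at

# Weak inputs alone never cross the threshold; a burst of spikes does.
print(run_neuron([0.3, 0.3, 0.3, 0.6, 0.0, 0.9, 0.9]))  # → [3, 6]
```

The point of the model is that computation happens only when spikes arrive, which is where spiking hardware gets its claimed power savings: a quiet neuron does no work.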

Intel said: "The brain’s neural networks relay information with pulses or spikes, modulate the synaptic strengths or weight of the interconnections based on timing of these spikes, and store these changes locally at the interconnections. Intelligent behaviors emerge from the cooperative and competitive interactions between multiple regions within the brain’s neural networks and its environment. While neural spike models have been a useful way to study and understand biological cells, they have yet to be deployed as solutions for real-world computing or engineering problems. There’s tremendous potential for cutting-edge applications, but it’s less clear how these capabilities will be practically deployed."

Intel claims Loihi is up to a million times faster than other “typical” spiking neural nets when solving MNIST digit recognition problems. It also claims that Loihi is more efficient when used for convolutional neural networks or deep learning tasks.

According to Extreme Tech each neuron can communicate with thousands of other neurons, while each neuromorphic “core” includes what Intel is calling a “learning engine.” The total number of neurons onboard is 130,000, with 130 million synapses. That’s markedly less than IBM’s TrueNorth, which debuted three years ago with one million programmable neurons and 256 million synapses across 4,096 neurosynaptic cores.
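The synapse-to-neuron ratios behind those figures are easy to check; the totals below are taken directly from the counts quoted above:

```python
# Published spec totals for the two chips, as quoted in the article
loihi = {"neurons": 130_000, "synapses": 130_000_000}
truenorth = {"neurons": 1_000_000, "synapses": 256_000_000}

for name, chip in (("Loihi", loihi), ("TrueNorth", truenorth)):
    fanout = chip["synapses"] / chip["neurons"]
    print(f"{name}: {fanout:.0f} synapses per neuron")
# Loihi: 1000 synapses per neuron
# TrueNorth: 256 synapses per neuron
```

So while TrueNorth has far more neurons and synapses in total, Loihi's average connectivity per neuron is several times higher, which matches the claim that each neuron can talk to thousands of others.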

Intel sees Loihi as part of a stable of products that range from the HPC-focused Xeon Phi, to the deep learning products it bought when it acquired Nervana, to its own FPGAs, to low-power Movidius solutions. Long-term, the company wants to field deep learning and AI resources that can stretch to cover any market segment or available power envelope.
