Despite their name, neural networks are only distantly related to the kinds of things you’d find in a brain. While their organization and the way data moves through their processing layers bear some rough resemblance to real neural networks, the data itself, and the calculations performed on it, would look entirely familiar to a standard CPU.
But neural networks aren’t the only way people have tried to learn lessons from the nervous system. There is a separate discipline called neuromorphic computing that is based on approximating the behavior of individual neurons in hardware. In neuromorphic hardware, calculations are performed by many small units that communicate with each other through bursts of activity called spikes and adjust their behavior based on the spikes they receive from each other.
On Thursday, Intel released the latest version of its neuromorphic hardware, called Loihi. The new version comes with the sorts of things you’d expect from Intel: a better processor and some basic computational enhancements. But it also comes with some fundamental hardware changes that will allow it to run entirely new classes of algorithms. And while Loihi remains a research-focused product for now, Intel is also releasing a compiler that it hopes will drive wider adoption.
To make sense of Loihi and what’s new in this version, let’s step back and start with a bit of neurobiology, then build from there.
From neurons to computing
The basis of the nervous system is a type of cell called a neuron. All neurons share some common functional characteristics. At one end of the cell are structures called dendrites, which you can think of as receivers. This is where the neuron takes in information from other cells. Neurons also have axons, which act as transmitters, connecting with other cells to pass signals along.
The signals take the form of what are called “spikes,” which are brief changes in voltage across the cell membrane of the neuron. The spikes travel down the axons until they reach junctions with other cells (called synapses), at which point they become a chemical signal that travels to the nearby dendrite. This chemical signal opens channels that allow ions to flow into the cell, initiating a new spike in the receiving cell.
The recipient cell integrates a variety of information (how many spikes it has seen, whether any neurons are signaling that it should stay quiet, how active it was in the past, etc.) and uses it to determine its own state of activity. Once a threshold is crossed, it will send a spike down its own axon and potentially trigger activity in other cells.
Usually this results in sporadic, randomly spaced spikes of activity when the neuron isn’t receiving much information. Once it starts receiving signals, however, it switches to an active state and fires off a series of spikes in quick succession.
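The integrate-then-fire behavior described above can be sketched with a classic leaky integrate-and-fire model. This is a toy illustration, not Intel's implementation; the class name, constants, and update rule here are all invented for the example.

```python
# Illustrative leaky integrate-and-fire neuron. All names and constants
# are made up for this sketch; real neuromorphic hardware uses more
# elaborate, programmable neuron models.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # membrane potential
        self.threshold = threshold  # firing threshold
        self.leak = leak            # fraction of potential kept each step

    def step(self, incoming):
        """Integrate input for one time step; return True if it fires."""
        self.potential = self.potential * self.leak + incoming
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

# Weak input never pushes the potential over threshold...
neuron = LIFNeuron()
quiet = [neuron.step(0.05) for _ in range(20)]
# ...while sustained strong input makes the neuron fire repeatedly.
neuron = LIFNeuron()
busy = [neuron.step(0.6) for _ in range(20)]
```

The leak term is what makes isolated weak inputs fade away, while a rapid burst of input accumulates faster than it decays and drives the cell over threshold.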
How does this process encode and manipulate information? That is an interesting and important question, and one that we are just beginning to answer.
One of the ways we’ve tried to answer that question is through what’s called theoretical neurobiology (or computational neurobiology). This has involved attempts to build mathematical models that reflect the behavior of nervous systems and neurons, in the hope that these will let us identify some underlying principles. Neural networks, which focus on the organizing principles of the nervous system, were one of the efforts to emerge from this field. Spiking neural networks, which attempt to model the behavior of individual neurons, are another.
Spiking neural networks can be implemented in software on traditional processors. But it is also possible to implement them in hardware, as Intel is doing with Loihi. The result is a processor very different from anything you’re likely to be familiar with.
Spiking in silicon
The previous-generation Loihi chip contains 128 individual cores connected by a communication network. Each of these cores has a large number of individual “neurons,” or execution units. Each of these neurons can receive input in the form of spikes from any other neuron: a neighbor in the same core, a unit in a different core on the same chip, or one on another chip entirely. The neuron integrates the spikes it receives over time and, depending on the behavior it is programmed with, uses them to determine when to send spikes of its own to the neurons it’s connected to.
All of this spike signaling occurs asynchronously. At set time intervals, the x86 cores on the same chip force a synchronization. At that point, each neuron updates the weights of its various connections: essentially, how much attention it pays to each of the individual neurons that send signals to it.
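One simple way to picture that sync-time re-weighting is a Hebbian-style rule: strengthen the inputs that contributed to recent firing, and slightly dampen the rest. This is a deliberately simplified sketch under that assumption; Loihi's actual learning rules are programmable and considerably more sophisticated, and the function and parameter names here are invented.

```python
# Toy sync-time weight update (simplified Hebbian rule). Not Loihi's
# actual learning rule; purely illustrative.

def sync_update(weights, recent_senders, fired, rate=0.1):
    """weights: {sender: weight}; recent_senders: senders whose spikes
    arrived since the last sync; fired: did this neuron spike?"""
    for sender in weights:
        if fired and sender in recent_senders:
            weights[sender] += rate          # this input helped us spike
        else:
            weights[sender] *= (1 - rate)    # pay it slightly less attention
    return weights

# After a sync where the neuron fired and only "a" had sent spikes,
# "a" is strengthened and "b" is weakened.
w = sync_update({"a": 0.5, "b": 0.5}, recent_senders={"a"}, fired=True)
```

Rules of this general shape let a neuron gradually tune which of its inputs it listens to, using only information stored locally at the neuron.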
In terms of a real neuron, part of the execution unit on the chip acts like a dendrite, processing incoming signals from the communication network based in part on weights derived from past behavior. A mathematical formula then determines when activity has crossed a critical threshold, triggering the unit’s own spikes when it does. The “axon” of the execution unit then looks up which other execution units it communicates with and sends a spike to each of them.
In the previous iteration of Loihi, a spike simply carried a single piece of information. A neuron only registered when it received one.
Unlike a normal processor, there is no external RAM. Instead, each neuron has a small cache of memory dedicated to its use. This includes the weights it assigns to the inputs from different neurons, a cache of its recent activity, and a list of all the other neurons it sends spikes to.
One of the other big differences between neuromorphic chips and traditional processors is energy efficiency, where neuromorphic chips are far ahead. IBM, which introduced its TrueNorth chip in 2014, was able to get useful work out of it even though it ran at a slow kilohertz-range clock speed, and it used less than 0.0001 percent of the energy that would be required to emulate a spiking neural network on traditional processors. Mike Davies, director of Intel’s Neuromorphic Computing Lab, said that Loihi can beat traditional processors by a factor of 2,000 on some specific workloads. “We are routinely finding 100 times [less energy] for SLAM and other robotic workloads,” he added.