Nengo networks in Neurogrid hardware


Kwabena Boahen
Peiran Gao
Emmett McQuinn
Terry Stewart


The goal of this project was to use the Neural Engineering Framework to implement particular computational functions on the Neurogrid hardware. This would allow for complex mathematical and cognitive models to be implemented using biologically realistic simulated neurons on an extremely low-power analog microchip.

The hardware used for this project is Neurogrid, an analog computer chip containing 65,536 simulated neurons. With 16 of these chips on a single board, Neurogrid implements a million two-compartment neurons in real time, including detailed modelling of four synaptic channels and a branching dendritic arbor.

While Neurogrid can already be used for implementing cortical columns with regular connectivity, we also want to be able to specify connection weights between the neurons that will allow it to perform particular functions. Our eventual goal is to be able to implement complex control algorithms, such as a Kalman filter, which we have previously shown would enable us to decode motor cortex spikes to identify motor actions. To implement such a specific algorithm, we need to make use of the Neural Engineering Framework (NEF). The NEF defines a methodology for using populations of spiking neurons to represent numerical values and allows us to analytically derive the ideal connection weights for computing particular functions on those values.



The first step is to encode a value using the spiking behaviour of a group of neurons. The idea here is that every neuron has a randomly chosen encoding vector e, and should have a current input that is proportional to the dot product of the value being represented and this encoding vector. As an example, if we are representing a single value x (which varies from -1 to +1), a neuron's encoding vector e can be either +1 or -1. An encoder of +1 would mean that the neuron would get the most input for x=1 and the least for x=-1. An encoder of -1 would reverse these limits.
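As a minimal sketch (in Python with NumPy, using illustrative names rather than Neurogrid code), the one-dimensional encoding step reduces to multiplying each neuron's encoder by x:

```python
import numpy as np

# Illustrative sketch of 1-D NEF encoding (not Neurogrid code): each
# neuron's input current is proportional to dot(e, x), which in one
# dimension is simply e*x.
encoders = np.array([+1, -1, +1, -1])  # chosen randomly in the full model

def input_currents(x, encoders):
    """Current driven into each neuron for a represented value x in [-1, 1]."""
    return encoders * x

print(input_currents(0.5, encoders))  # [ 0.5 -0.5  0.5 -0.5]
```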

This standard approach had to be adjusted in two ways for Neurogrid. First, all input to the chip must arrive as spikes. This means we needed to convert an input current of dot(e,x) into excitatory and inhibitory synaptic input. We did this by fixing an overall input rate (80Hz) and dividing it into an excitatory portion and an inhibitory portion. Thus, an input current of -1 corresponds to 80Hz of inhibitory spikes, an input of +1 to 80Hz of excitatory spikes, and an input of 0 to 40Hz of each. Neuron parameters were adjusted so that each excitatory spike had approximately the same magnitude of effect as each inhibitory spike.
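This rate-splitting scheme can be sketched as follows (a minimal Python illustration; split_rates is a hypothetical helper for this write-up, not a Neurogrid API):

```python
import numpy as np

# Sketch of the spike-based input scheme described above: a current in
# [-1, 1] is mapped onto an 80 Hz total input rate, divided between
# excitatory and inhibitory spike trains.
TOTAL_RATE = 80.0  # Hz

def split_rates(current):
    """Return (excitatory_rate, inhibitory_rate) in Hz for a current in [-1, 1]."""
    current = np.clip(current, -1.0, 1.0)
    exc = TOTAL_RATE * (1.0 + current) / 2.0
    inh = TOTAL_RATE - exc
    return exc, inh

print(split_rates(-1.0))  # fully inhibitory: (0.0, 80.0)
print(split_rates(0.0))   # balanced:        (40.0, 40.0)
print(split_rates(1.0))   # fully excitatory: (80.0, 0.0)
```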

The second change is that we made use of the built-in dendritic arbors in Neurogrid to greatly reduce the number of explicit neural connections we had to define. Any spiking input to a particular neuron in Neurogrid can be set to also affect the neurons in its neighborhood. Thus, a single spike can provide input to hundreds or thousands of neurons, with the overall effect fading as the distance from the target neuron increases. By activating this feature, we only provided explicit inputs to one out of every 16 neurons on the chip: the dendritic arbors caused the other neurons to gain input based on the nearby specified inputs.
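The effect of this sparse-input scheme can be illustrated with a toy model (the Gaussian decay and its width below are assumptions for illustration only; the actual on-chip arbor dynamics differ):

```python
import numpy as np

# Toy illustration (not the Neurogrid dendritic model): input is delivered
# explicitly to every 16th neuron and spreads to neighbouring neurons with
# a weight that fades with distance, as the on-chip arbors provide.
n_neurons = 64
targets = np.arange(0, n_neurons, 16)   # the explicitly driven neurons
positions = np.arange(n_neurons)
sigma = 4.0                             # assumed spatial decay constant

# Effective input to every neuron from the sparse set of driven neurons.
spread = np.exp(-(positions[:, None] - targets[None, :]) ** 2 / (2 * sigma ** 2))
effective_input = spread.sum(axis=1)
print(effective_input.round(2))
```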

Given this encoding process, the following figures show the resulting firing rates of two different pools of neurons as we change the value of x being represented.

Neural responses from pool 0 when representing different x values
Neural responses from pool 1 when representing different x values

In order to show that these neural groups are adequately representing these values, we can compute decoders for each pool of neurons. This is the core process of the NEF and involves finding a vector D such that dot(A(x),D) is an estimate of x, where A(x) is the firing rate of the neurons when representing a particular value of x. This can be thought of as finding a set of weights that allow you to build the function y=x using the firing rate graphs above as a set of basis functions. As can be seen in the following figure, the decoders we found are highly accurate.

Recovering the represented value by linearly decoding firing rates
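The decoder-solving step can be sketched with ordinary least squares (the firing rates below are synthetic rectified-linear tuning curves, not measured Neurogrid responses):

```python
import numpy as np

# Sketch of NEF decoder solving: given firing rates A (one row per sampled
# x, one column per neuron), find D minimizing ||A @ D - x||.
x = np.linspace(-1, 1, 41)
encoders = np.where(np.arange(20) % 2 == 0, 1.0, -1.0)
gains = np.linspace(50, 150, 20)
biases = np.linspace(-30, 30, 20)
A = np.maximum(0, gains * (x[:, None] * encoders) + biases)  # rates (Hz)

D, *_ = np.linalg.lstsq(A, x, rcond=None)  # decoders
x_hat = A @ D                              # decoded estimate of x
print(np.max(np.abs(x_hat - x)))           # small decoding error
```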


Now that we can store information using the firing of a group of neurons, we want to be able to pass information from one group to another using synaptic connection weights. The simplest such situation is a communication channel. We want to connect pool 0 to pool 1 such that if we force pool 0 to represent x, then the connections will cause pool 1 to also represent that same value x. According to the Neural Engineering Framework, this can be achieved by setting the connection weights to outer(E,D), where E is the encoding vectors for the post-synaptic pool and D is the decoders for the pre-synaptic pool. Defining connections in this way successfully implemented a communication channel: the value decoded from the second pool closely matched the one stored in the first pool.

A communication channel (f(x)=x)
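The weight construction can be sketched as follows (array sizes and values are illustrative; the key point is that the full weight matrix factors into encoders and decoders):

```python
import numpy as np

# Sketch of the NEF weight rule for a communication channel: the weight
# matrix is the outer product of the post-pool's encoders with the
# pre-pool's decoders.
D_pre = np.random.RandomState(0).randn(20) * 0.05     # decoders of pool 0
E_post = np.where(np.arange(30) % 2 == 0, 1.0, -1.0)  # encoders of pool 1

W = np.outer(E_post, D_pre)  # shape (n_post, n_pre)
print(W.shape)               # (30, 20)

# The current into the post pool from pre-pool rates a can be computed
# either with the full matrix or in factored form; the two agree.
a = np.random.RandomState(1).rand(20) * 100  # pre-pool firing rates (Hz)
assert np.allclose(W @ a, E_post * (D_pre @ a))
```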

Computing a Non-Linear Function

Our next step was to determine whether we could compute a function using the connection. This was done by solving for an alternate set of decoders that approximate y=x*x rather than y=x. If we use this new decoder D in the equation outer(E,D), the resulting connection weights should cause the second pool to represent x*x for whatever x value we place in the first pool. These results were positive, although slightly less accurate than the simpler communication channel.

Computing the square (f(x)=x*x)
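Solving for function decoders uses the same least-squares procedure as the identity decode described earlier, only with x*x as the target (again using synthetic rectified-linear tuning curves, not Neurogrid data):

```python
import numpy as np

# Sketch of solving for function decoders: identical setup to the identity
# decode, but the least-squares target is x**2 instead of x.
x = np.linspace(-1, 1, 41)
encoders = np.where(np.arange(20) % 2 == 0, 1.0, -1.0)
gains = np.linspace(50, 150, 20)
biases = np.linspace(-30, 30, 20)
A = np.maximum(0, gains * (x[:, None] * encoders) + biases)  # rates (Hz)

D_sq, *_ = np.linalg.lstsq(A, x ** 2, rcond=None)  # decoders for f(x)=x*x
print(np.max(np.abs(A @ D_sq - x ** 2)))           # approximation error
```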

Future Work

Our work thus far has shown that it is possible to represent values using Neurogrid simulated neurons, and to derive synaptic connection weights between pools of neurons that will allow us to compute functions. To expand this to more useful systems, our future work will involve:

  • representing N-dimensional vectors by choosing e randomly from an N-dimensional sphere
  • identifying parameter ranges where neural behaviour is as linear as possible
  • finding feedback connections for implementing memory, neural integrators, and central pattern generators, which may involve re-deriving some aspects of NEF for use with the particular post-synaptic current dynamics found in Neurogrid
  • expanding our use of the dendritic arbors to reduce the number of spikes needed to be transmitted between chips (important for scaling up to larger models)