Nengo networks in SpiNNaker hardware
Participants
Sergio Davies: 26 Jun - 16 Jul
Chris Eliasmith: 26 Jun - 16 Jul
Francesco Galluppi: 26 Jun - 16 Jul
Terry Stewart: 26 Jun - 16 Jul
Motivation
As part of the From single cells to cognition in software and hardware group (http://neuromorphs.net/nm/wiki/2011/ng11) we engaged in the Nengo networks in SpiNNaker hardware project. The motivation behind this project is to port part of the Neural Engineering Framework (NEF) onto the SpiNNaker machine.
This is done by implementing the encoding and decoding processes directly on the SpiNNaker chip, taking advantage of the programmability of the ARM968 cores inside each SpiNNaker chip to encode and decode spikes on board.
The approach is described in the figure below:
A value is sent by the host machine (e.g. running NENGO, the software implementing the NEF principles) and encoded on board into spikes using the NEF encoding process. This is done by implementing a special neuron population, the LIFNEF encoder, which translates values into firing rates for the neurons in a population. Each neuron encodes the value as a firing rate according to its own tuning curve (the relation between an input value and the firing rate).
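The encoding step can be sketched in a few lines. The following is an illustrative Python reconstruction of the standard NEF LIF tuning-curve formula; the gain and bias values are hypothetical stand-ins for the per-neuron parameters NENGO would provide:

```python
import numpy as np

def lif_rate(x, gain, bias, tau_ref=0.002, tau_rc=0.02):
    """Steady-state firing rate of a LIF neuron for input current J = gain*x + bias.

    Standard NEF tuning curve: rate = 1 / (tau_ref + tau_rc * ln(J / (J - 1)))
    for J > 1, and 0 otherwise (the neuron is below threshold).
    """
    J = gain * x + bias
    if J <= 1.0:
        return 0.0
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J - 1.0)))
```

Each encoder neuron would apply such a curve to the incoming value X to produce its target firing rate.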
Spikes then travel through the neural space (implemented by standard LIF neurons), where the connection weights implement a function f(x). At the decoding end another special population, the NEFDecoder, translates spike trains back into a value and sends it to the host machine.
Spikes are thus produced and collected on board, taking advantage of the fast custom interconnect that characterizes the SpiNNaker machine. This reduces the bandwidth and load on the host machine, since only values are sent to and received from SpiNNaker. Porting the encoding/decoding process onto the SpiNNaker board avoids the bottlenecks of sending and receiving spikes directly from a host machine or an FPGA.
Results
Representation
In order to test the ability to represent values as spike trains we implemented a communication channel, encoding and decoding information using the Neural Engineering Framework.
Rather than inputting and outputting spikes directly to/from SpiNNaker, we decided to exploit SpiNNaker's configurability by porting part of the NEF on board. This is done by implementing two special neural populations: the NEF LIF input and the NEF decoder population. The first takes the input value X and codes it as spike trains, according to the NEF and the encoder values we get from NENGO. The latter collects spikes, weights them by the decoders and outputs them as a value. We are thus able to input and output values directly on SpiNNaker: values are converted to spikes on board, functions are computed in the neural space, and the spikes produced by a population are collected and decoded back into values.
To prove this we implemented a communication channel computing the function Y = X, whose structure is illustrated in the figure below:
A value X is input to population A, the NEF LIF encoder population. This population encodes values as spike trains according to the tuning curves of its neurons.
These spikes travel to population B through an all-to-all connection implementing the communication channel function. The weights are obtained from NENGO by multiplying the decoders of A by the encoders of B. Population B is a standard LIF population.
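As a sketch, the weight computation just described (decoders of A multiplied by encoders of B) looks like the following; the array shapes match the populations used here, but the random values are illustrative, not the actual decoders and encoders exported from NENGO:

```python
import numpy as np

rng = np.random.default_rng(0)
n_A, n_B, dims = 150, 100, 1

# Hypothetical decoders for population A (n_A x dims) and unit encoders
# for population B (n_B x dims), as they would come from NENGO.
decoders_A = rng.normal(scale=0.01, size=(n_A, dims))
encoders_B = rng.choice([-1.0, 1.0], size=(n_B, dims))

# Full connection matrix for the communication channel f(x) = x:
# W[j, i] = encoder of neuron j in B dotted with decoder of neuron i in A.
W = encoders_B @ decoders_A.T        # shape (n_B, n_A)
```

Because the weight matrix factors into decoders and encoders, it could in principle be regenerated on board from those two much smaller arrays (see Future work).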
Spike trains are then passed to the OUT population that converts them back into a value Y and outputs it to the external world.
Population A is composed of 150 LIFNEF encoder neurons. B is a standard LIF population of 100 neurons. OUT and OUT1 are populations of 150 and 100 NEFDecoder neurons respectively. The whole communication channel is thus composed of 500 neurons firing in the 40-60 Hz range.
In the next figure the output obtained from NENGO and from SpiNNaker can be compared. The input is a step function, starting from -1 and increasing by 0.5 every 500 msec. The simulation runs in real time.
The next figure shows the direct decoding of population A by population IN, before the information passes through the communication channel.
This last figure shows that we can represent information in SpiNNaker and encode/decode it.
Computation
In order to show computation we implemented the function Y = X^2 in the NEF and ran it through the same communication channel (encoders, decoders and weights were changed so as to compute the square function). The structure of the network is the same as that of the communication channel described above, but the encoders and decoders (and consequently the weights) between A and B implement a different function.
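For illustration, decoders for f(x) = x^2 can be solved offline by regularized least squares over sampled tuning curves, which is the general NEF recipe NENGO follows; the rectified-linear rates below are a hypothetical stand-in for actual LIF tuning curves:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 50
x = np.linspace(-1.0, 1.0, 200)

# Hypothetical tuning-curve parameters.
gains = rng.uniform(1.0, 5.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)
encoders = rng.choice([-1.0, 1.0], size=n_neurons)

# Activity matrix A: one row per sample of x, one column per neuron.
A = np.maximum(0.0, np.outer(x, gains * encoders) + biases)

# Regularized least squares for the function decoders:
# d = (A^T A + lam I)^-1 A^T f(x), with f(x) = x^2.
target = x ** 2
lam = 0.1
d = np.linalg.solve(A.T @ A + lam * np.eye(n_neurons), A.T @ target)
estimate = A @ d                  # decoded approximation of x^2
```

Multiplying these function decoders by the encoders of B (instead of the identity decoders) yields the weight matrix that computes the square in the neural space.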
Results are shown in figure:
The simulation has also been run with NENGO controlling the value sent to SpiNNaker and displaying the value received back in the NENGO interface (uvr display). The black line represents the direct decoding of the input sent to SpiNNaker, showing SpiNNaker's ability to encode and decode input. The blue line is the decoded result of the operation implemented in the weights from A to B.
Communication has been done through UDP between NENGO and SpiNNaker.
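A minimal sketch of such a UDP exchange is shown below; the host address, port and packet format (a single little-endian float) are assumptions for illustration, not the actual NENGO/SpiNNaker protocol:

```python
import socket
import struct

def send_value(value, host="192.168.240.1", port=17893):
    """Send one value to the board as a little-endian 32-bit float.

    Host and port are placeholders; the real protocol and packet layout
    used between NENGO and SpiNNaker may differ.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(struct.pack("<f", value), (host, port))
    finally:
        sock.close()
```

Because only scalar values cross the network (rather than individual spikes), the traffic is independent of the population sizes and firing rates on the board.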
When the input is 0 (leftmost plot) the output from SpiNNaker is 0 as well (blue line in the uvr panel). When the input is shifted to 1, both the input (black) and the output go to 1. When the input is shifted to -1, the result of squaring stays at 1.
The rightmost panel shows the relation between the input X and the output Y. If the input is shifted slowly the quadratic relation can be observed in this panel, as shown in the next figure.
The network consists of 500 neurons, as in the communication channel example (in fact it is the same network, with the weights between A and B changed to compute the square), and runs in real time.
A video demonstrating the square computation on SpiNNaker, controlled from NENGO, is attached (square_demo.mp4). The square relation can be observed in the rightmost panel as the input slides from -1 to 1.
Dynamics
In order to show an implementation of neural dynamics within the NEF we implemented an Integrator. The integrator structure is shown in the next figure:
An input is fed into population A and travels through a communication channel to population B. Population B then computes the integral of the input received from A by means of recurrent connections. The weights for these connections are computed using the encoders and decoders from NENGO and loaded on board. Population A is an encoding population as described in the sections above.
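The standard NEF recipe for neural dynamics maps a desired linear system dx/dt = Ax + Bu onto a recurrent transform A' = tau*A + I and an input transform B' = tau*B, where tau is the synaptic time constant. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def nef_dynamics(A, B, tau=0.1):
    """Map dx/dt = A x + B u onto NEF recurrent/input transforms.

    Returns (A_rec, B_in) with A_rec = tau*A + I and B_in = tau*B,
    the transforms whose decoders/encoders products give the weights.
    """
    A = np.atleast_2d(A)
    B = np.atleast_2d(B)
    A_rec = tau * A + np.eye(A.shape[0])   # recurrent transform
    B_in = tau * B                         # input transform
    return A_rec, B_in

# For a perfect integrator, dx/dt = u, so A = 0 and B = 1:
A_rec, B_in = nef_dynamics(0.0, 1.0, tau=0.1)
```

For the integrator this gives a recurrent transform of exactly 1 (the population must feed its decoded value back to itself unchanged), which is why weight precision matters so much for stability, as discussed below.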
Results from a simulation run with NENGO sending input values to SpiNNaker and getting output back are displayed in the figure below:
The integrator integrates the positive pulse in the input and holds the integrated value. Then a negative pulse is received and integrated. Between the two pulses the integrator is able to hold the value with little drift. The Input population is composed of 150 NEFencoder neurons firing at 40-60 Hz. The integrator is composed of 300 fully recurrently connected neurons (9000 connections) firing at 80-120 Hz. In order to have a stable integrator we had to implement a 32-bit precision LIF neuron and represent weights with 10 bits of fractional precision. This, along with the high firing rates, meant the simulation ran at 1/50 of real time. A more efficient/distributed implementation would be needed to run the integrator in real time, but that was beyond the scope (and time!) of this project.
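As an illustration of the 10-bit fractional fixed-point weight representation mentioned above (the exact scaling scheme here is an assumption, not the actual SpiNNaker implementation):

```python
# Weights are stored as integers scaled by 2**10, so the smallest
# representable step is 2**-10 ~ 0.00098. Hypothetical helpers:
FRAC_BITS = 10
SCALE = 1 << FRAC_BITS

def to_fixed(w):
    """Quantize a float weight to a fixed-point integer."""
    return int(round(w * SCALE))

def from_fixed(q):
    """Recover the float value of a fixed-point weight."""
    return q / SCALE
```

With only 10 fractional bits, each weight carries a quantization error of up to 2^-11, which accumulates around the integrator's recurrent loop; this is why the extra neuron-state precision was needed for stability.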
Conclusions
We have successfully ported part of the Neural Engineering Framework to the SpiNNaker hardware. We were able to encode and decode values using the NEF directly on board. This approach takes advantage of the programmability of the ARM968 cores inside a SpiNNaker chip, letting spikes be encoded and decoded on board.
Spikes are thus produced and collected on board, taking advantage of the fast custom interconnect that characterizes the SpiNNaker machine. This reduces the bandwidth and load on the host machine, since only values are sent to and received from SpiNNaker. It avoids bottlenecks as firing rates and the dimensionality of inputs and outputs increase, letting the system scale up seamlessly.
Finally, we were able to connect NENGO, the software running the NEF, directly to the SpiNNaker board, so that the simulation can be controlled and the results observed directly from NENGO.
Future work
 representation of multidimensional values
 porting weight generation on board by just sending encoders and decoders
 efficient implementation of a NEF neural model
 integration with NENGO for network instantiation
 run large scale NEF models on SpiNNaker!
Attachments
(all added by francesco.galluppi)
 squared.png (63.1 KB): squared
 integrator.png (9.2 KB): integrator simulation results
 integrator_structure.png (2.7 KB)
 squared_nengo.png (18.3 KB): results from squaring within NENGO controlling SpiNNaker
 squared_nengo.2.png (18.3 KB)
 squared_relation.png (14.4 KB)
 square_demo.mp4 (12.7 MB): demo video for the square running in real time on SpiNNaker
 approach.png (132.1 KB): approach description