Reservoir Computing in HW

Reservoirs are large, randomly connected recurrent neural networks. The transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous pool of neurons (the reservoir) can serve as a universal analog fading memory, allowing computation on rapidly changing spatiotemporal inputs without the need for stable states. The spatiotemporal pattern of activity triggered by the inputs is then decoded by linear discriminant units called readout neurons.
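The idea above can be sketched in software with a minimal rate-based reservoir: random recurrent weights scaled for fading memory, and a linear readout fitted on the reservoir states. All sizes and the ridge parameter here are illustrative, not taken from the hardware setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical, much smaller than the hardware reservoir).
N, n_in, n_out, T = 200, 4, 2, 500
W_in = rng.normal(0.0, 0.5, (N, n_in))        # input -> reservoir
W = rng.normal(0.0, 1.0, (N, N)) / np.sqrt(N)  # recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius < 1: fading memory

u = rng.normal(size=(T, n_in))                 # random input stream
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    # Transient dynamics driven by the input; no attractor states needed.
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Linear readout trained by ridge regression on the collected states.
targets = rng.normal(size=(T, n_out))
W_out = np.linalg.solve(states.T @ states + 1e-3 * np.eye(N), states.T @ targets)
y = states @ W_out                             # readout output, shape (T, n_out)
```

Only the readout weights `W_out` are learned; the reservoir itself stays fixed, which is what makes the scheme attractive for hardware with fixed random connectivity.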

The goal of this project was to implement such a reservoir in hardware (a multi-chip setup from Zurich comprising three neuromorphic multi-neuron chips) and use it to classify rapidly changing inputs. The inputs consisted of a robot's movements, prerecorded with a 128x128 DVS camera.

- Multi-chip setup composed of three neuromorphic chips.

The network was composed of three main parts:

1) an input layer (also called the sequencer), where the different patterns were loaded

2) the reservoir

3) the readout neurons, or output layer.

The inputs consisted of different movements of the Robosapien by WowWee (http://www.wowwee.com/en/products/toys/robots/robotics/robosapiens:robosapien), prerecorded with a DVS camera (128x128 pixels). jAER events from the DVS were used as input to the reservoir. We constructed input spike trains by pooling the events within non-overlapping 8x8 squares of pixels, yielding 256 spike trains that were fed to the 256 neurons of the input layer (sent from a computer).
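The pixel-to-channel pooling described above can be sketched as follows; the function names are ours (not from jAER), and events are assumed to arrive as `(timestamp, x, y)` tuples.

```python
from collections import defaultdict

def event_to_channel(x, y, sensor=128, block=8):
    """Map a DVS event at pixel (x, y) on the 128x128 sensor to one of
    256 input channels: non-overlapping 8x8 squares, 16x16 = 256 regions."""
    return (y // block) * (sensor // block) + (x // block)

def events_to_spike_trains(events):
    """events: iterable of (timestamp, x, y) tuples.
    Returns {channel: [timestamps]} -- one spike train per input neuron."""
    trains = defaultdict(list)
    for t, x, y in events:
        trains[event_to_channel(x, y)].append(t)
    return trains
```

Every event falling inside the same 8x8 square contributes a spike to the same input neuron, so each of the 256 input channels sees the summed activity of its patch of the sensor.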

The reservoir was created by randomly connecting a pool of 1776 neurons (1648 excitatory, 128 inhibitory). The excitatory population was implemented on the IF2DWTA chip and the inhibitory one on an IFSLWTA chip, which allowed us to control the excitatory and inhibitory weights separately.
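A minimal sketch of such a random excitatory/inhibitory connectivity is shown below. The connection probability and weight values are illustrative placeholders, not the setup's actual parameters; only the population sizes are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
N_E, N_I = 1648, 128   # excitatory (IF2DWTA) and inhibitory (IFSLWTA) populations
p = 0.05               # connection probability (illustrative value)

# Boolean connectivity masks: entry [post, pre] is True if a synapse exists.
C_EE = rng.random((N_E, N_E)) < p   # E -> E
C_EI = rng.random((N_I, N_E)) < p   # E -> I
C_IE = rng.random((N_E, N_I)) < p   # I -> E

# One global weight per chip: because the populations live on separate
# chips, excitatory and inhibitory strengths can be tuned independently.
w_exc, w_inh = 0.3, -1.2            # illustrative values
```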

We implemented four readout neurons on another IFSLWTA chip, so we could in principle distinguish among four types of movement. The full network was constructed by connecting the input layer to the reservoir (random, all-to-all) and the reservoir to the output layer of four readout neurons (random, all-to-all).

We considered the distinction of four movements: i) left lean; ii) a random sequence of movements; iii) roar; and iv) movements of the whole body.

- The three figures below show raster plots of three different input patterns and the corresponding activation patterns of the excitatory and inhibitory populations.

During the learning phase, only the weights from the reservoir to the readout neurons were modified, using a Hebbian spike-driven plasticity rule (Biological Cybernetics, 87, 459-470, 2002) combined with lateral inhibition among the readout neurons. Each readout neuron and its afferent synapses thus form a perceptron that learns features of the input space in an unsupervised way. After learning, a different set of synapses becomes potentiated for each readout neuron, allowing the network to distinguish the different inputs.
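The interplay of Hebbian potentiation and lateral inhibition can be sketched with a coarse rate-based caricature of the spike-driven rule cited above: bistable synapses with a hidden analog state, and a winner-take-all step standing in for lateral inhibition. Sizes, thresholds, and step sizes are illustrative, not the hardware's.

```python
import numpy as np

rng = np.random.default_rng(2)
n_res, n_out = 64, 4                   # toy sizes (hardware uses 1776 / 4)

# Hidden analog synaptic state in [0, 1]; a synapse counts as
# "potentiated" when its state exceeds the threshold theta.
state = rng.random((n_out, n_res))

def learn_step(pre_rates, up=0.1, down=0.05, theta=0.5):
    """pre_rates: reservoir activity for one pattern, values in [0, 1].
    Returns the index of the winning readout neuron."""
    # Effective (binary) weights drive the readouts; lateral inhibition
    # is approximated by letting only the strongest-driven neuron learn.
    drive = (state > theta).astype(float) @ pre_rates
    winner = int(np.argmax(drive))
    active = pre_rates > pre_rates.mean()      # proxy for "presynaptic spike"
    state[winner, active] += up                # Hebbian potentiation
    state[winner, ~active] -= down             # depression elsewhere
    np.clip(state, 0.0, 1.0, out=state)
    return winner
```

Repeated presentation of the same pattern drives the winner's synapses from active reservoir neurons toward the potentiated state, so different readout neurons end up with different sets of potentiated synapses, as in the figures below.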

- Potentiation (red vertical lines) of various synapses for the four readout neurons (blue vertical lines correspond to depressed synapses), and a zoom of a region of interest showing the different patterns of potentiated synapses.
