Dynamic Neural Fields on hardware

Mathis Richter, Yulia Sandamirskaya

Goals of the project

This project aims to implement simple Dynamic Neural Field (DNF) architectures using spiking neurons. Because current software simulations of spiking neurons (e.g., in the Python-based simulator Brian) are too slow for real-time operation, the architectures will have to be implemented on neuromorphic hardware (e.g., SpiNNaker) to achieve real-time performance.

Architecture

We decided to implement a classic DNF architecture for change detection. For an input in a given feature space (e.g., color), the architecture is able to detect changes in that input. For instance, when the architecture is presented with a red stimulus, it builds up a memory of having seen this input. This memory persists even when the input ceases. If the same input is simply switched back on, the architecture recognizes it as familiar and does not react to it. However, when the input changes, for example to a green stimulus, the architecture reacts with a change detection signal and builds up another memory of the new input.

Results

Proof of concept in C++

As a proof of concept, we implemented the change detection model in cedar, our DNF framework.

The architecture consists of three one-dimensional DNFs. We refer to the first (top-most) field as the "perceptual field". It receives one-dimensional input: neural activation with a Gaussian profile centered at a specific location within the feature space. The second (center) field is the "inhibitory field"; it receives excitatory input from the perceptual field and inhibits it in return. The third (bottom) field represents the "working memory" of the architecture. It receives excitatory input from the perceptual field and sustains its activation even after that input ceases. It also excites the inhibitory field.
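For readers who want to experiment with these dynamics, the following is a minimal sketch of the three coupled fields as discretized Amari equations in NumPy. All parameter values (resting level, kernel widths, coupling strengths) are illustrative assumptions, not the values used in cedar.

```python
import numpy as np

n, dt, tau, h = 100, 1.0, 10.0, -5.0   # field size, time step, time constant, resting level
x = np.arange(n)

def gauss(center, sigma, amp=1.0):
    return amp * np.exp(-(x - center) ** 2 / (2 * sigma ** 2))

def sigmoid(u, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

def lateral(u, amp=15.0, sigma=3.0):
    # local excitatory interaction: convolve the field output with a Gaussian kernel
    kernel = gauss(n // 2, sigma, amp)
    return np.convolve(sigmoid(u), kernel, mode='same')

u_p = np.full(n, h)   # perceptual field
u_i = np.full(n, h)   # inhibitory field
u_m = np.full(n, h)   # working memory field

for t in range(5000):
    # input switches location halfway through the simulation
    stimulus = gauss(10, 3.0, 8.0) if t < 2500 else gauss(30, 3.0, 8.0)
    du_p = -u_p + h + stimulus + lateral(u_p) - 10.0 * sigmoid(u_i)   # inhibited by u_i
    du_i = -u_i + h + lateral(u_i) + 6.0 * sigmoid(u_p) + 6.0 * sigmoid(u_m)
    du_m = -u_m + h + lateral(u_m, amp=20.0) + 6.0 * sigmoid(u_p)     # strong self-excitation sustains peaks
    u_p += dt / tau * du_p
    u_i += dt / tau * du_i
    u_m += dt / tau * du_m
```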



The following screenshots demonstrate the model at three stages during the change detection task.

The first screenshot shows the architecture in its initial state, without input. All three fields are in a sub-threshold attractor state.



In the second screenshot, the input has been switched on; it is centered around position x=10 within the feature space. The perceptual field formed a peak, exciting both the inhibitory field and the working memory field. The latter became self-sustained and in turn excited the inhibitory field even further. The resulting inhibition suppressed the peak in the perceptual field, so only the inhibitory and working memory fields still hold self-sustained peaks. Note that the perceptual field remains inhibited: its resting level is lowered at the position where the input was perceived.



In the final screenshot, the position of the input has been changed to be centered around x=30. The perceptual field reacted to the change by forming a peak there - the detection of a changed input. Afterward, the architecture again formed a memory of the new input and suppressed the perceptual field at position x=30.



Note that the working memory field could also be set up such that it remembers more than one perceived input. This behavior can be achieved by using a lateral interaction kernel with a Mexican hat profile: because the inhibition then falls off with distance instead of acting globally, peaks that are far enough apart no longer suppress each other. A sketch of such a kernel follows.
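As an illustration, a Mexican hat kernel can be constructed as a difference of two Gaussians; the amplitudes and widths below are illustrative assumptions.

```python
import numpy as np

def mexican_hat(n, amp_exc=2.0, sigma_exc=3.0, amp_inh=1.0, sigma_inh=9.0):
    # difference of Gaussians: narrow excitation minus broader, weaker inhibition
    x = np.arange(n) - n // 2
    exc = amp_exc * np.exp(-x ** 2 / (2 * sigma_exc ** 2))
    inh = amp_inh * np.exp(-x ** 2 / (2 * sigma_inh ** 2))
    return exc - inh

kernel = mexican_hat(100)   # would replace the purely excitatory kernel in the sketch above
```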

A video attached to this page shows the change detection architecture during the task described above.

Implementation using spiking neurons

Next, we implemented the same architecture using spiking neurons, simulated with Brian in Python. Each DNF is represented by a population of 100 spiking neurons whose synaptic connectivity approximates the interaction kernel used in DNFs: local excitation and global inhibition.
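The following is a minimal sketch of one such population, written in Brian 2 syntax (the original code was written for the earlier Brian); the neuron model, weights, and kernel width are illustrative assumptions, not the parameters of the attached code.

```python
from brian2 import *

N = 100
tau = 10*ms

# simple leaky integrate-and-fire neurons standing in for one DNF
field = NeuronGroup(N, 'dv/dt = -v / tau : 1',
                    threshold='v > 1', reset='v = 0', method='exact')

# local excitation: weight decays with the distance between neurons i and j
exc = Synapses(field, field, 'w : 1', on_pre='v_post += w')
exc.connect(condition='i != j and abs(i - j) < 10')
exc.w = '0.3 * exp(-(i - j)**2 / 50.0)'

# global inhibition: every spike weakly inhibits all other neurons
inh = Synapses(field, field, on_pre='v_post -= 0.05')
inh.connect(condition='i != j')
```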

The following figure shows the spiking patterns (y-axis: neuron index) of all three populations over time (x-axis). From left to right, the spiking patterns of the perceptual, the inhibitory, and the working memory field are shown. The input is a Poisson process with a different spike rate for each of the 100 input neurons. The rates follow a Gaussian profile centered around neuron 30 for the first two seconds; the profile then shifts and is centered around neuron 70 for the remaining three seconds.
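This input schedule could be realized, for instance, as follows (again Brian 2 syntax; the peak rate and width of the Gaussian profile are illustrative assumptions):

```python
from brian2 import *
import numpy as np

N = 100
idx = np.arange(N)

def gaussian_rates(center, peak=200.0, sigma=5.0):
    # Poisson rates with a Gaussian profile over the neuron indices
    return peak * np.exp(-(idx - center) ** 2 / (2 * sigma ** 2)) * Hz

inputs = PoissonGroup(N, rates=gaussian_rates(30))
spikes = SpikeMonitor(inputs)

run(2*second)                        # input centered on neuron 30
inputs.rates = gaussian_rates(70)    # ... then shifted to neuron 70
run(3*second)
```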



You will find the Python Brian code used to generate this figure attached to this wiki page.

With help from Sergio Davies, we converted the Brian program to PyNN in order to run the architecture on SpiNNaker. However, due to the limited time, we did not get the program running on hardware.
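For reference, a PyNN port of one field and its input could be structured along these lines; this is only a sketch, not our actual conversion, and the cell type and parameters are assumptions. On SpiNNaker, the simulator module would be imported as pyNN.spiNNaker instead.

```python
import pyNN.nest as sim   # on SpiNNaker: import pyNN.spiNNaker as sim

sim.setup(timestep=1.0)

field = sim.Population(100, sim.IF_curr_exp())                    # one DNF as LIF neurons
stimulus = sim.Population(100, sim.SpikeSourcePoisson(rate=50.0)) # Poisson input drive

sim.Projection(stimulus, field, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

field.record('spikes')
sim.run(1000.0)
sim.end()
```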

Attachments