Deep Belief Networks on SpiNNaker

Participants: Evangelos Stromatias, Daniel Neil

The goal of this project is to port the work originally demonstrated by O'Connor et al., running deep belief networks with spiking neurons, onto the SpiNNaker hardware platform. For technical details and an in-depth analysis of the process, please refer to the upcoming paper "Real-Time Classification and Sensor Fusion with a Spiking Deep Belief Network" by Peter O'Connor, Daniel Neil, Shih-Chii Liu, Tobi Delbruck, and Michael Pfeiffer.

This project consists of three stages: generating a new neuron model (optimized for the instantaneous synapses of the event-based DBN), verifying correct execution of the DBN on the hardware, and interfacing to a live spiking input.

New Model Generation

The event-based deep belief network presented in the above paper uses a simplified synapse model to reduce computation. To match this model more accurately and to optimize the network, the SpiNNaker group replaced their standard single-rise, exponential-decay synapse model with a Dirac (instantaneous-rise) synapse model, in which an incoming spike adds its full weight to the membrane potential in a single step. Additionally, the SpiNNaker group increased the precision of the weights, using a 6.16 ("six dot sixteen") fixed-point representation: 6 integer bits and 16 fractional bits.
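As a rough illustration, the combination of an instantaneous synapse and 6.16 fixed-point weights can be sketched as follows. This is an illustrative sketch, not actual SpiNNaker code; the function names, array shapes, and quantization details are assumptions.

```python
import numpy as np

FRAC_BITS = 16          # 6.16 format: 6 integer bits, 16 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(w):
    """Quantize float weights to 6.16 fixed-point integers (assumed scheme)."""
    return np.round(np.asarray(w, dtype=np.float64) * SCALE).astype(np.int64)

def from_fixed(w_fixed):
    """Convert 6.16 fixed-point integers back to floats."""
    return w_fixed.astype(np.float64) / SCALE

def dirac_synapse_update(v, spikes, w_fixed):
    """Dirac synapse: an input spike deposits its full weight onto the
    post-synaptic membrane potential in one step, with no rise or decay.
    v: (n_post,) membrane potentials; spikes: (n_pre,) 0/1 vector;
    w_fixed: (n_post, n_pre) fixed-point weight matrix."""
    return v + from_fixed(w_fixed) @ spikes
```

The key contrast with the exponential-decay model is that no per-step decay state has to be maintained for the synapse itself: the only state is the membrane potential.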

DBN Execution

Once the new model was in place, test images were converted to spike trains by emitting spikes with probability proportional to pixel intensity, and the resulting inputs were correctly classified by the network.
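This intensity-proportional spike generation can be sketched as a simple Bernoulli rate code. The function name and parameters below are illustrative assumptions; other rate-coding schemes (e.g. Poisson sampling) would work similarly.

```python
import numpy as np

def image_to_spikes(image, n_steps, max_prob=1.0, rng=None):
    """Convert a grayscale image into a binary spike tensor of shape
    (n_steps, H, W): at each time step, each pixel fires with probability
    proportional to its normalized intensity (Bernoulli rate coding)."""
    rng = np.random.default_rng() if rng is None else rng
    p = image.astype(np.float64) / image.max() * max_prob
    return rng.random((n_steps,) + image.shape) < p
```

Averaged over many time steps, a pixel's firing rate then approximates its intensity, which is the input format the event-based DBN expects.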

Live Spiking Interface

Once the network was verified to classify digits correctly, the existing Seville-DVS-to-SpiNNaker interface was used to feed new digits directly into the network. Ensuring proper rotation and orientation proved difficult, but example handwritten digits from the MNIST digit recognition task can be seen below. The currently less-than-optimal performance is attributed to challenges with the retina input.

[Figure: Input digit]

[Figure: Reconstructed digit]

[Figure: Spikes in the intermediate layer and output layer]