The Deltron Members: Tara Julia Hamilton, with FPAA assistance from Scott Koziol and Jennifer Hasler, Arindam Basu and Shaista Hussain (remotely from NTU Singapore), and Mark Wang (remotely from UWS Australia)

The Deltron is a spiking neural network that remembers and stores patterns by adapting the delay of every connection rather than the weights. The advantage of this architecture over traditional weight-based ones is a simpler hardware implementation that requires no multipliers or digital-to-analog converters (DACs). The name derives from the similarity of its learning rule to that of an earlier architecture, the Tempotron.
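As a toy illustration of storing a pattern in delays (the spike times and target time here are invented for illustration, not taken from the Deltron itself): if each input spike at time t_i reaches the neuron at t_i + d_i, then choosing d_i = T - t_i makes all spikes coincide at time T, so a simple coincidence detector can recognise the pattern without any multiplications.

```python
# Toy sketch: a pattern is "stored" in axonal delays rather than weights.
# A spike emitted at t_i arrives at the neuron at t_i + d_i; setting
# d_i = T - t_i aligns every spike at the common time T, so detection
# reduces to coincidence -- no multipliers or DACs required.

spike_times = [3.0, 7.5, 1.2, 9.9]   # hypothetical input spike times (ms)
T = 12.0                              # hypothetical target coincidence time (ms)

delays = [T - t for t in spike_times]            # delays that store the pattern
arrivals = [t + d for t, d in zip(spike_times, delays)]

print(arrivals)  # all arrivals coincide at (floating-point) T
```

This is only the recall side of the idea; the learning rule that finds such delays from data is described in the FPGA experiment below.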

Aim: The Deltron concept was conceived at a neuromorphic workshop held in Sydney pre-Telluride. The aim at Telluride was to implement the Deltron in hardware using an FPGA for the digital part and an FPAA for the analog part. The concept is shown in Fig. 2.

The digital part of the algorithm was implemented on the FPGA. The results below show the output spikes from the delays converging on a single time. Here, a random spatio-temporal sequence was input to 100 axonal delays. The outputs of the delays were combined onto a single serial line, and the time at which the most spikes coincided, Tmax, was stored. This time was then used to adapt the axonal delays. After a few hundred milliseconds the serial output converges on a single time, indicating that the delays have been learned correctly.
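The adaptation loop above can be sketched in software. This is a minimal reconstruction under assumptions of my own (the 1 ms coincidence window, the learning rate, and the additive update rule are illustrative choices, not the FPGA's actual parameters): find the bin where most delayed spikes coincide, take its centre as Tmax, and nudge every delay so its spike arrives closer to Tmax.

```python
import random

random.seed(0)
N = 100
spike_times = [random.uniform(0, 50) for _ in range(N)]  # random spatio-temporal input (ms)
delays = [random.uniform(0, 50) for _ in range(N)]       # initial axonal delays (ms)
BIN = 1.0   # assumed coincidence window (ms)
ETA = 0.5   # assumed learning rate (delays are left unconstrained for simplicity)

for step in range(200):
    arrivals = [t + d for t, d in zip(spike_times, delays)]
    # histogram the arrival times and locate the most populated bin
    counts = {}
    for a in arrivals:
        b = int(a // BIN)
        counts[b] = counts.get(b, 0) + 1
    tmax = (max(counts, key=counts.get) + 0.5) * BIN  # centre of the winning bin
    # move each delay so its spike arrives a fraction ETA closer to Tmax
    delays = [d + ETA * (tmax - (t + d)) for t, d in zip(spike_times, delays)]

arrivals = [t + d for t, d in zip(spike_times, delays)]
spread = max(arrivals) - min(arrivals)
print(round(spread, 3))  # → 0.0 : all outputs collapse onto a single time
```

Because each update moves an arrival a fraction ETA of the way to Tmax, the spread of arrival times contracts geometrically, which mirrors the convergence onto a single serial-output time seen on the FPGA.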

An interface was developed in Python to communicate between the Opal Kelly FPGA board and the FPAA. Unfortunately the FPAA was not working, so it was not possible to implement even a simple synapse/neuron circuit. The work will be completed with another analogue hardware circuit.

Conclusions: The Deltron algorithm works, and we demonstrated it in hardware. The analogue part of our circuit was non-functioning despite a great deal of effort. A paper introducing the Deltron, implemented in software and on the FPGA, is currently in preparation and will be submitted to a conference (APCCAS 2012) later this month. The analogue part of the circuit will be developed post-Telluride, and we plan to submit a journal article to FiNE. The Deltron is an example of a learning algorithm designed for hardware implementation: delays are easier to adapt in hardware than synaptic weights, and our results show that unsupervised learning is possible.