Neuromorphic Olympics

Members: Ashley Kleinhans, Anne Collins, Erik Billing, Brandon Kelly, Cheston Tan, Chetan Singh Thakur, Jorg Conradt, Daniel Neil, David Karpul, Diogo Pata, Estela Bicho, Cornelia Fermuller, Greg Cohen, Giovanni Maffei, Himanshu Akolkar, Jasmine Berry, Julien Martel, Jonathan Tapson, Marcello Mulas, Manu Rastogi, Andrew Mundy, Mark Wang, Nicolas Oros, Michael Pfeiffer, Sadique Sheik, Stephen Deiss, Sergio Davies, Shih-Chii Liu, Shashikant Koul, Timmer Horiuchi, Tobi Delbruck, Thomas Trappenberg, Terry Stewart, Vikram Ramanarayanan, Andre van Schaik, Wang Wei Lee, Xavier Lagorce, Yulia Sandamirskaya

Organizers: Jorg Conradt (TUM), Terry Stewart (UW)


  • Andrew Mundy The University of Manchester; Advanced Processor Technologies Group (APT)
  • Nicolas Oros University of California, Irvine; Cognitive Anteater Robotics Laboratory (CARL)
  • Anne Collins Brown University, The Laboratory of Neural Computation and Cognition
  • Ashley Kleinhans The University of Johannesburg; Mobile Intelligent Autonomous Systems group (MIAS)

Focus and goals

[Picture of robot]

We want to build integrated neuromorphic systems. That is, we want physical devices that interact with the real world, taking sensory input, processing it, and producing outputs. To do this, we need three things:

  • A framework for creating neural models that handles a wide variety of types of neural processing (Nengo)
  • Hardware for running neural models quickly (SpiNNaker)
  • Physical robots (PushBots and Lego Mindstorms) with sensors (including eDVS)

Now that we have this infrastructure, we want to use it to make robots perform interesting tasks. In particular, we challenge participants to build integrated neuromorphic systems that are able to compete in an Olympic competition. Participants will divide into groups of at least 2-3 people, and each group will develop its own neuromorphic robot competitor. Each neuromorphic system will have to compete in all the events without changes to the controller, so participants must consider and attempt to reproduce some aspects of flexible animal cognition.

The main purpose here is to demonstrate and make use of a maturing neuromorphic infrastructure. The Nengo software system will allow students to quickly build sophisticated neural models, and it has now been closely integrated with neuromorphic hardware (both SpiNNaker and FPGAs) and with various robotic platforms. We believe this tool chain, including robust robotics and the ability to simulate large (1 million+ neuron) neural models in real time, can be usefully exploited by participants to build novel, interesting neuromorphic systems within the time frame of the workshop.

For the mobile robotic hardware, we will provide at least two different assembled robot platforms, which participants may customize if desired: (a) small mobile robots from TUM with on-board event-based vision sensors and (b) the Lego Mindstorms EV3. For sensors, in addition to those that come with the robots, we will provide (wireless) DVS cameras. The focus will be on tasks involving movement and navigation using event-based processing systems.

For real-time neuromorphic computation, we have six 4-chip SpiNNaker boards and five 48-chip SpiNNaker boards. All of these support running models developed using Nengo. This allows people to build and test small-scale versions of their networks on their own laptops and then, once they are happy with them, run them on SpiNNaker. All of the robots can be connected to over WiFi.
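As a concrete starting point, here is a minimal sketch of the laptop-side workflow: a small Nengo model built and run with the reference nengo.Simulator. The particular network (a sine-wave input and a squaring connection) is only an illustration, not part of any event.

    import numpy as np
    import nengo

    # A tiny test network: one ensemble represents a sine wave,
    # a second ensemble receives its square via a decoded connection.
    model = nengo.Network(label="laptop-scale test")
    with model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
        a = nengo.Ensemble(n_neurons=100, dimensions=1)
        b = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, a)
        nengo.Connection(a, b, function=lambda x: x ** 2)
        probe = nengo.Probe(b, synapse=0.01)

    sim = nengo.Simulator(model)   # reference simulator on the laptop CPU
    sim.run(1.0)
    print(sim.data[probe][-5:])    # last few decoded output values

Once the model behaves as expected, the same network can be handed to the SpiNNaker backend (see the Nengo/SpiNNaker Integration notes at the end of this page).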

We will provide tutorials on

  • Nengo, the general toolkit for developing models using the Neural Engineering Framework (the same system that was used to develop Spaun, the first brain simulation capable of performing multiple cognitive tasks).
  • SpiNNaker, a parallel computing system optimized for running large-scale neural models on a large array of ARM processors
  • PushBot, a robot developed at TUM with onboard eDVS, inertial sensors, track-based wheels, and a laser pointer

Nengo model example

While participants are free to use whatever neuromorphic approach they desire, we have closely integrated the high-level neural modelling system Nengo with our hardware. This allows for the quick creation of complex models: users specify the high-level algorithm they would like to implement, and Nengo translates that algorithm into a spiking neural network suitable for running on CPUs, GPUs, SpiNNaker, or even FPGAs. We will also introduce the Semantic Pointer Architecture (SPA), a library of high-level brain components (cortical working memories, basal ganglia, thalamus, etc.) which we have used for flexible cognitive processing, allowing neural models to route and maintain information as needed based on task demands. This is the core system underlying Spaun, the world's first simulated brain capable of performing multiple tasks.
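To give a flavour of the SPA, here is a minimal routing sketch, assuming Nengo 2.x and its nengo.spa module; the buffer names and action rules (GO, STOP, FORWARD, HALT) are made up for illustration. The basal ganglia scores the conditions of the rules and the thalamus routes the winning semantic pointer into the motor buffer.

    import nengo
    from nengo import spa

    D = 32  # dimensionality of the semantic pointers (kept small for illustration)

    model = spa.SPA(label="routing demo")
    with model:
        model.vision = spa.Buffer(dimensions=D)   # perceptual input buffer
        model.motor = spa.Buffer(dimensions=D)    # motor output buffer

        # Condition --> effect rules: the basal ganglia evaluates the conditions,
        # the thalamus gates the selected effect into the motor buffer.
        actions = spa.Actions(
            "dot(vision, GO) --> motor = FORWARD",
            "dot(vision, STOP) --> motor = HALT",
        )
        model.bg = spa.BasalGanglia(actions)
        model.thalamus = spa.Thalamus(model.bg)

        # Present a fixed semantic pointer to vision for this demo.
        model.input = spa.Input(vision="GO")

    sim = nengo.Simulator(model)
    sim.run(0.5)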

Neuromorphic Olympics Events

We would like to have multiple teams compete in a collection of events:

  • Sprint: Drive towards a target as quickly as possible. The target is a light flashing at 200 Hz (a rough detection sketch is given below).
  • Race: Go to one target, then the next, then the next. Each target is a light flashing at a different frequency.
  • Obstacle Course: In a cluttered environment, find the target.
  • Hide and Seek: One robot tries to avoid the others.
  • World Cup: Push small balls into the opponents' goals.

Other suggestions are welcome and will be discussed!
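As an illustration of the kind of event-based processing the flashing-light events call for, here is a rough sketch of how a 200 Hz target might be picked out of a DVS event stream. The event format (time, x, y, polarity) and the 16x16 pixel pooling are assumptions for the sketch, not a specification of the eDVS interface.

    TARGET_PERIOD = 1.0 / 200.0   # a 200 Hz LED yields ON events roughly 5 ms apart
    TOLERANCE = 0.001             # accept ON-to-ON intervals within +/- 1 ms
    BLOCK = 16                    # pool pixels into 16x16 blocks to tolerate jitter

    def find_flashing_target(events):
        """Return the (block_x, block_y) whose ON events best match a 200 Hz flash.

        `events` is assumed to be an iterable of (t_seconds, x, y, polarity)
        tuples sorted by time; the real eDVS packet format may differ.
        """
        last_on = {}   # last ON-event timestamp seen in each block
        score = {}     # count of ~5 ms ON-to-ON intervals per block
        for t, x, y, pol in events:
            if pol < 1:
                continue                      # only track rising-edge (ON) events
            key = (int(x) // BLOCK, int(y) // BLOCK)
            if key in last_on and abs((t - last_on[key]) - TARGET_PERIOD) < TOLERANCE:
                score[key] = score.get(key, 0) + 1
            last_on[key] = t
        return max(score, key=score.get) if score else None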


Nengo/SpiNNaker Integration

  • To use nengo_spinnaker, you need to download and install a copy of the SpiNNaker software package from https://spinnaker.cs.man.ac.uk/. If you only want to use Nengo on SpiNNaker, just run python setup.py develop; this will install a new Python package.
  • You will also need to install nengo_spinnaker itself (one way to do this is sketched below).
  • Try to run one of the examples from the examples directory; for this you will need a SpiNNaker board and to run Python from the command line.
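Once the packages are in place, running on the board should amount to swapping the simulator. The sketch below assumes nengo_spinnaker is installed (for example via pip install nengo_spinnaker, or python setup.py develop inside a local checkout) and that the board's address has been configured as described in the nengo_spinnaker documentation.

    import nengo
    import nengo_spinnaker   # assumes the package is installed and the board is configured

    # Any Nengo model works here; this tiny one is just for demonstration.
    model = nengo.Network()
    with model:
        stim = nengo.Node([0.5])
        ens = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, ens)
        probe = nengo.Probe(ens, synapse=0.01)

    # The only change from a laptop run: use the SpiNNaker backend's Simulator.
    sim = nengo_spinnaker.Simulator(model)
    sim.run(1.0)
    print(sim.data[probe][-5:])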