
Software

Ubuntu Virtual Machine

An Ubuntu virtual machine (based on [www.virtualbox.org VirtualBox]) with all the software preinstalled is available here. A patch to run Nengo models is attached to this page.

Documentation and Tutorials

Nengo and SpiNNaker

SpiNNaker implements the Neural Engineering Framework (NEF) principles onboard, encoding and decoding values into/from the spiking activity of neural populations (for more information about the NEF, check this wiki page).
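As a rough illustration of these two operations, here is a minimal NumPy sketch (hypothetical values, not SpiNNaker code): a scalar is encoded into the activity of a population of rectified-linear neurons, and least-squares linear decoders are then solved for to recover it.

```python
import numpy as np

# Illustrative sketch of the two core NEF operations (hypothetical
# parameters, not SpiNNaker code): encode a scalar x into the activity of
# a neural population, then decode it back with linear decoders.
rng = np.random.default_rng(0)
n = 100                                 # neurons in the population
encoders = rng.choice([-1.0, 1.0], n)   # preferred directions
gains = rng.uniform(0.5, 2.0, n)
biases = rng.uniform(-1.0, 1.0, n)

def rates(x):
    """Encoding: rectified-linear response of each neuron to the value x."""
    return np.maximum(0.0, gains * encoders * x + biases)

# Decoding: solve for d such that rates(x) @ d ~= x over the range [-1, 1].
xs = np.linspace(-1.0, 1.0, 50)
A = np.array([rates(x) for x in xs])        # activity matrix, 50 x n
d = np.linalg.lstsq(A, xs, rcond=None)[0]   # least-squares decoders

decoded = rates(0.3) @ d                    # should be close to 0.3
```

The same decoder solve generalizes to decoding arbitrary functions of the represented value, which is what makes the framework useful for building models rather than just communication channels.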

The nengo_spinnaker_interface can be linked to your Nengo installation by modifying the MY_NENGO variable in the setup script so that it points to the path where Nengo is installed.

# used by setup to set symbolic links in the Nengo directory (/opt/nengo in this example)
MY_NENGO=/opt/nengo/

In order to create the necessary links, the following command must be issued from the root directory of your SpiNNaker package installation:

./setup -n

The command creates the following links (assuming your SpiNNaker package is installed in /opt/spinnaker_package_wiki):

`/opt/nengo/binaries' -> `/opt/spinnaker_package_wiki/binaries'
`/opt/nengo/nengo_spinnaker_interface' -> `/opt/spinnaker_package_wiki/nengo_spinnaker_interface'
`/opt/nengo/nengo-cl-spinnaker' -> `/opt/spinnaker_package_wiki/nengo_spinnaker_interface/nengo-cl-spinnaker'

In order to translate NEF models into SpiNNaker, the following lines need to be added to a Nengo script:

import nengo_spinnaker_interface.spinn as spinn

... your script here ...

s = spinn.SpiNN(net.network)    # instantiates a SpiNNaker/NEF network
s.print_info()                  # prints information about the nodes
s.write_to_file('nengo_spinnaker_interface/nengo_values.py')    # dumps network values into a Python file

The script can then be compiled and run using:

./nengo-cl-spinnaker <path to script>

Note that this command must be run in the Nengo installation directory.

The following examples, demonstrated during the tutorial, can be found in the package and on the patched virtual machine:

  1. communication channel
  2. square
  3. integrator
  4. robot example

Once the script is run, the following can be used to visualize the output in the 1-dimensional input and 1-dimensional output case:

./nengo-cl nengo_spinnaker_interface/viewer_1d.py  -n 2d_view -b <board_name> -d 1 -s 2

SpiNNaker Omnibot with Nengo

We have a version of the TUM Omnibot equipped with a 48-node SpiNNaker board and two silicon retinas. SpiNNaker can process information coming from the onboard robot sensors and the retinas, and control the robot. The robot and its sensors are directly configurable from PyNN or Nengo.

In order to have a Nengo (2D) population control the translation and rotation movements of the robot, the following line needs to be added to your script:

# s is a spinn.SpiNN instantiation
# target is the name of the Nengo node used to control the robot in the example
s.set_robot_output('target')

An example of how Nengo, SpiNNaker, and the robot can be used together can be found in robot_example_nengo.py. A video of the results from this script is available at http://www.youtube.com/watch?v=uNVe_RLh0l0&hd=1

SpiNNaker extensions at Telluride2013

We spent much of this workshop improving the interface between Nengo and SpiNNaker. In previous workshops a framework had been established for translating Nengo models onto SpiNNaker; this core process worked well, but a number of technical challenges remained.

First, there are practical limits on the number of synaptic events SpiNNaker can process per second. Large Nengo models with dense connectivity or high firing rates can exceed these limits, so some kind of sparseness constraint must be imposed on the connectivity. A prototype for incorporating sparseness into the NEF decoding process had been developed previously. The central idea is to calculate decoders based on a subsample of the presynaptic neurons, rather than all of them, which results in a weight matrix with large zero sections. In this workshop we expanded this function, improving its efficiency and demonstrating that it allows large Nengo models (e.g., an 8000-neuron integrator) to be implemented on SpiNNaker. The code is in the attached optsparse.py.
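The subsampled-decoder idea can be sketched as follows. This is a hypothetical NumPy illustration of the principle, not the actual optsparse.py code: decoders are solved for only a random subset of the presynaptic neurons, leaving the remaining entries exactly zero.

```python
import numpy as np

# Hypothetical sketch of the sparse-decoding idea (not the actual
# optsparse.py code): solve for decoders using only a random subsample of
# the presynaptic neurons; the rest of the entries stay exactly zero.
rng = np.random.default_rng(1)
n = 200
encoders = rng.choice([-1.0, 1.0], n)
gains = rng.uniform(0.5, 2.0, n)
biases = rng.uniform(-1.0, 1.0, n)

def rates(x):
    """Rectified-linear population response to the value x."""
    return np.maximum(0.0, gains * encoders * x + biases)

xs = np.linspace(-1.0, 1.0, 100)
A = np.array([rates(x) for x in xs])              # full activity matrix

keep = rng.choice(n, size=n // 4, replace=False)  # use only 25% of neurons
d_sparse = np.zeros(n)
d_sparse[keep] = np.linalg.lstsq(A[:, keep], xs, rcond=None)[0]

# Three quarters of the decoder entries are structural zeros, so the
# corresponding rows of the outgoing weight matrix are all zero, cutting
# the synaptic-event load while still decoding the value accurately.
decoded = rates(0.5) @ d_sparse
```

Because the zeros are decided before the solve rather than by thresholding afterwards, the accuracy of the retained decoders is optimized for the reduced population.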

Second, although the process of simulating Nengo models was fairly automated, getting input to and output from the models was not straightforward, and required a fair amount of specialized knowledge and hand coding. In this workshop we made that process almost completely automated. The spinn module now automatically parses the network it is given and makes the necessary modifications to interface with the SpiNNaker I/O system. This process is transparent to the user and does not affect the model's performance. The code is in the attached spinn.py, with example usage in spinn_io_example.

Third, as part of streamlining the I/O process, we designed a new viewer for Nengo models running on SpiNNaker that can handle arbitrary numbers of inputs and outputs, with arbitrary dimension. Again, this process is largely automated. However, it does require the user to provide a list indicating where the input/output populations are located on the SpiNNaker board (chip location and core number); this step could be automated away in the future. The code is in the attached spinn_viewer_example.py.

One problem we encountered, which remains to be addressed, is that for large or complex models the nengo_values.py file output by the spinn module (containing a description of the model for the SpiNNaker system) can become quite large. When the file is imported by the Nengo SpiNNaker compiler, the system runs out of memory and crashes. The solution is to read the file in step by step rather than all at once, or to employ a more efficient storage scheme, but we ran out of time to implement these fixes.
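The step-by-step reading fix could look something like the following sketch. It is hypothetical (the name iter_values is illustrative, and it assumes the values file consists of simple top-level "name = literal" assignments): rather than importing the whole module at once, the file is streamed and one assignment is handled at a time.

```python
import ast

# Hypothetical sketch of the proposed fix: instead of importing
# nengo_values.py as a module (which loads everything into memory at
# once), stream the file and yield one assignment at a time. Assumes the
# file consists of simple top-level "name = literal" lines.
def iter_values(path):
    """Yield (name, value) pairs from a values file, one line at a time."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue                      # skip blanks and comments
            name, sep, expr = line.partition('=')
            if not sep:
                continue                      # not an assignment line
            yield name.strip(), ast.literal_eval(expr.strip())

# Usage (illustrative): process each entry as it is read, so memory use
# stays roughly constant regardless of file size.
# for name, value in iter_values('nengo_values.py'):
#     handle_entry(name, value)   # hypothetical consumer
```

A binary format (e.g., one record per population) would be more compact still, but even this line-by-line approach avoids holding the whole model description in memory at once.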

Attachments