
The existing Caffe framework for TrueNorth makes it very easy to train neural networks (NNs) that already respect the TrueNorth hardware constraints. This is possibly the best way to achieve good results with minimal network-design effort.

In this group we discussed ideas for increasing the flexibility in the types of NNs that can run on TrueNorth (including ConvNets, fully-connected nets, and recurrent nets), so that more experienced NN designers can harness the full potential of NNs. Specifically, the envisioned workflow starts by training a NN with a common deep learning library such as Theano / Pylearn2 (possibly with constrained connectivity; a sketch of such a constraint is given below) and takes care of the other TrueNorth constraints after training.
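As a rough illustration of what "constrained connectivity" could mean during training, the following sketch limits the fan-in of each unit to the 256 axons available on a single TrueNorth core by masking the weight matrix. It is plain NumPy, not tied to Theano / Pylearn2 or any TrueNorth tooling, and the layer sizes and block-diagonal layout are illustrative assumptions.

```python
# Hypothetical sketch: constrain fan-in during training so each unit sees
# at most 256 inputs (one TrueNorth core's worth of axons). The sizes and
# the block-diagonal layout are assumptions for illustration only.
import numpy as np

def block_diagonal_mask(n_in, n_out, fan_in=256):
    """Binary mask limiting each output unit to at most `fan_in` inputs."""
    mask = np.zeros((n_in, n_out), dtype=np.float32)
    n_blocks = int(np.ceil(n_in / fan_in))
    out_per_block = int(np.ceil(n_out / n_blocks))
    for b in range(n_blocks):
        rows = slice(b * fan_in, min((b + 1) * fan_in, n_in))
        cols = slice(b * out_per_block, min((b + 1) * out_per_block, n_out))
        mask[rows, cols] = 1.0
    return mask

# During training the (float) weight matrix is multiplied elementwise by the
# mask, so gradient descent only ever updates the permitted connections.
mask = block_diagonal_mask(n_in=1024, n_out=256)
W = np.random.randn(1024, 256).astype(np.float32) * 0.01
W_constrained = W * mask
```

A network trained this way can later be mapped core by core, since no unit draws inputs from more than one core's worth of axons.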

The main challenges (besides ensuring correct connectivity) are converting NN units to spiking neurons and going from 32-bit floating-point weights to a highly limited weight precision (somewhere between 4 bits and a single bit). The first problem can be solved in most cases by using rectified linear units (ReLUs), which are a perfect match for integrate-and-fire neurons operating in a rate-based mode. A constant leak could mimic a non-zero ReLU bias (although bias implementations are still pending due to the increased difficulty). For the second problem a range of solutions was proposed, which can be roughly divided into two classes: pure post-processing methods, such as simply rounding the weights after training, and novel training methods that use information about the discretization during training.
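Both points can be made concrete with a short sketch: a discrete-time integrate-and-fire neuron with unit threshold whose firing rate over a long window approximates a ReLU of its constant input drive, and a simple post-processing step that rounds trained float weights onto a signed 4-bit grid. The time window, threshold, and rounding scheme are illustrative assumptions, not TrueNorth-specific settings.

```python
# Minimal sketch, assuming rate-based coding over T timesteps, a leak-free
# integrate-and-fire neuron with unit threshold, and uniform rounding to a
# signed k-bit grid; none of this is tied to a specific TrueNorth/Caffe API.
import numpy as np

def if_neuron_rate(drive, T=1000, threshold=1.0):
    """Firing rate of an integrate-and-fire neuron with constant input drive."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += drive
        if v >= threshold:
            v -= threshold
            spikes += 1
    return spikes / T

def relu(x):
    return max(x, 0.0)

# For drives in [0, 1], the IF firing rate approximates the ReLU output.
for drive in (-0.2, 0.1, 0.5, 0.9):
    print(f"drive={drive:+.1f}  ReLU={relu(drive):.3f}  IF rate={if_neuron_rate(drive):.3f}")

def round_weights(W, bits=4, w_max=None):
    """Post-processing discretization: round float weights to a signed k-bit grid."""
    w_max = w_max if w_max is not None else np.abs(W).max()
    levels = 2 ** (bits - 1) - 1        # e.g. 7 positive levels for 4 bits
    step = w_max / levels
    return np.clip(np.round(W / step), -levels, levels) * step

W = np.random.randn(256, 256).astype(np.float32) * 0.1
W_q = round_weights(W, bits=4)          # quantized copy of the trained weights
```

The rounding step stands in for the first (post-processing) class of solutions; the second class would instead expose the quantization grid to the training procedure itself.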
