Project Members:

Sam Fok, Samir Menon, Alex Neckar

Project Overview:

Building on our previous experience implementing the NEF on Neurogrid, we sought to implement an optimal motor control algorithm. The robot we controlled was a simple "RPP" bot with 3 DOF (one rotational, two positional), simulated in software. The inputs to the algorithm are the robot's desired joint angles/positions as well as the current positions/velocities/accelerations of the arm's members. The outputs are the torques that carry out the motion smoothly. A tutorial explaining the problem can be found under "Computing Joint Space Dynamic Control"; we implemented its equation 72.
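For orientation, controllers of this kind generally take the standard computed-torque form (we state the general pattern for context only; for the exact equation 72, see the tutorial):

    Gamma = M(q) * (ddq_d + Kv*(dq_d - dq) + Kp*(q_d - q)) + c(q, dq) + g(q)

where M(q) is the joint-space mass matrix, c collects Coriolis/centrifugal terms, g(q) is gravity, and Kp, Kv are gain matrices.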

We chose to focus initially on the two positional degrees of freedom (the bottom two entries of equation 72), and further simplified the math to take only a desired position (rather than a desired position, velocity, and acceleration). This left us with two functions to compute, each with 4 inputs and 1 output.
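As a rough sketch of the simplified computation (the function name, the gains Kp and Kv, and the dynamics inputs M and b below are all illustrative placeholders, not our fitted robot model or equation 72 itself):

```python
import numpy as np

# Hypothetical gains -- illustrative values, not tuned for our robot.
Kp = np.diag([10.0, 10.0])  # position gain for the two positional joints
Kv = np.diag([4.0, 4.0])    # damping gain on the current joint velocities

def control_force(q_des, q, dq, M, b):
    """PD-style joint-space dynamic control, simplified so the reference
    is a desired position only (desired velocity/acceleration = 0).

    q_des : desired joint positions, shape (2,)
    q, dq : current joint positions and velocities, shape (2,)
    M     : joint-space mass matrix at q, shape (2, 2)
    b     : lumped Coriolis/centrifugal + gravity terms, shape (2,)
    """
    ddq_star = Kp @ (q_des - q) - Kv @ dq  # commanded acceleration
    return M @ ddq_star + b                # joint forces/torques to apply
```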

The most straightforward way to implement these computations is in single 4-dimensional pools. This was a new challenge for us, as previously we had only implemented 1D pools with Neurogrid. Although we wrote the code to support N-dimensional pools with the usual encoding vectors distributed randomly over the N-dimensional hypersphere, for these particular affine functions it turns out to be more efficient to place the encoding vectors only along the coordinate axes. We call these pseudo-N-dimensional pools: in terms of encoding, the single pool behaves like N 1D pools, but we decode from it exactly as if it were an ND pool. Of course, this scheme only works for functions that are additively separable, with no cross terms mixing the inputs (for example, you can easily decode 2*x + 5*y + z**2 + 21 this way, but not x*y + z).
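A minimal numpy sketch of the two encoder schemes (the pool size and dimensionality are illustrative; all Neurogrid-specific details are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dims = 400, 4

# Standard ND pool: encoders drawn uniformly from the surface of the
# N-dimensional hypersphere.
e = rng.standard_normal((n_neurons, dims))
encoders_nd = e / np.linalg.norm(e, axis=1, keepdims=True)

# Pseudo-ND pool: each neuron encodes exactly one coordinate axis
# (with a random sign), so the pool encodes like N independent 1D
# pools while still being decoded as a single ND pool.
axis_of = rng.integers(0, dims, n_neurons)
sign = rng.choice([-1.0, 1.0], n_neurons)
encoders_pseudo = np.zeros((n_neurons, dims))
encoders_pseudo[np.arange(n_neurons), axis_of] = sign
```

Decoding 2*x + 5*y + z**2 + 21 from the pseudo-ND pool works because each term depends on a single input; x*y would require neurons whose responses mix x and y, which these encoders never produce.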

Below are the tuning curves from Neurogrid. To do NEF with Neurogrid, we must first sweep through the input space the neurons might see; in our case, this means sweeping through the current and desired positions/velocities/accelerations and measuring the neurons' responses. Pseudo-ND pools are more efficient here because the encoding of each input variable is orthogonal to all the others, so we need only sweep one input variable at a time and measure the responses of the neurons sensitive to that variable (and only that variable). With normal ND pools, each neuron may be sensitive to all inputs, so, naively (there are cleverer but more difficult approaches), we would need to measure the response at every point in the ND input space.
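The savings show up directly in the number of required measurements. A sketch of the 1D-sweep procedure, assuming a hypothetical measure_rates() function standing in for the actual Neurogrid readout and an axis_of array recording each neuron's assigned input:

```python
import numpy as np

n_points, dims = 50, 4
sweep = np.linspace(-1.0, 1.0, n_points)

def collect_tuning_curves(measure_rates, axis_of, n_neurons):
    """Sweep one input variable at a time; for each axis, record only
    the neurons assigned to that axis (pseudo-ND pools).

    measure_rates(x) -> firing rates of all neurons for input x, (n_neurons,)
    axis_of          -> axis assignment per neuron, shape (n_neurons,)
    """
    curves = np.zeros((n_neurons, n_points))
    for d in range(dims):
        for i, v in enumerate(sweep):
            x = np.zeros(dims)
            x[d] = v                 # vary one input, hold the others at 0
            rates = measure_rates(x)
            mask = axis_of == d
            curves[mask, i] = rates[mask]
    return curves

# 4 * 50 = 200 measurements here, versus 50**4 = 6,250,000 for a
# naive full grid over the 4D input space.
```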

By taking a weighted sum of the tuning curves, we can reconstruct arbitrary nonlinear functions (of each input separately, per the separability constraint above); the figure below shows the result of the decode for the first joint position.
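The weights for that sum come from the usual regularized least-squares fit; a sketch, with an illustrative ridge parameter lam:

```python
import numpy as np

def solve_decoders(A, target, lam=0.1):
    """Regularized least squares: find weights d such that A @ d
    approximates the target function values.

    A      : measured tuning curves, shape (n_samples, n_neurons)
    target : desired function values at the sampled inputs, (n_samples,)
    """
    G = A.T @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(G, A.T @ target)

# The decode is then the weighted sum of tuning curves: A_test @ d.
```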

We didn't quite get the controller working by the end of the workshop, but we've since gotten it working fairly well. It's not perfectly stable, which suggests that we need to improve our decode quality; right now, we're working to maximize the diversity of tuning curve shapes.