Pushbots Racing Competition

What: have pushbots compete in a race to the death! The initial track could be a simple corridor that includes obstacles and walls. More complex tracks could be used later.

How: Use the DVS camera to detect obstacles and walls. Use the compass for orientation towards the end of the track. Use a Nengo model to process the sensory information (DVS spikes, compass, ...) and control the robot. Alternatively, use existing algorithms to compute optic flow from the DVS data for visual navigation.

Why: Using an event-based camera for visually guided reactive navigation is an exciting and difficult challenge, especially with neural networks.

NOTE: need to talk to computer vision experts ;-)

I. Nengo Models

Here is an extremely basic Nengo model that makes the robot turn away from the half of the visual input that generates the most spikes, inspired by the Balanced Strategy.

import nengo_pushbot
import nengo
import numpy as np

model = nengo.Network()
with model:
    bot = nengo_pushbot.PushBotNetwork('10.162.177.49')
    # bot.show_image()

    # only count spikes in the left and right upper parts of the image
    # (so as not to count spikes generated by the laser, which appears
    # in the lower part)
    bot.count_spikes(left_flow=(0, 0, 63, 63),
                     right_flow=(0, 64, 64, 128))
                    
    # track the laser to control forward and backward movement
    bot.track_freqs([300], sigma_p=60)
    bot.laser(300)
    pos = nengo.Ensemble(30, 1, label='pos')
    nengo.Connection(bot.tracker_0[0], pos)

    def move(x):
        # drive both motors forward while the tracked laser position
        # x[0] is above 0.5, otherwise back up
        if x[0] > 0.5:
            return [1, 1]
        else:
            return [-1, -1]

    nengo.Connection(pos, bot.motor, function=move, transform=0.2)
    pos_probe = nengo.Probe(pos)
    
    # groups encoding the spike counts from the DVS
    count_left = nengo.Ensemble(30, 1, radius=200)
    nengo.Connection(bot.count_left_flow, count_left)
    count_right = nengo.Ensemble(30, 1, radius=200)
    nengo.Connection(bot.count_right_flow, count_right)
    
    # try using gyro data to inhibit self-motion
    # gyro = nengo.Ensemble(30, 1, radius=2)
    # nengo.Connection(bot.gyro[1], gyro, function=abs, transform=20)
    # nengo.Connection(gyro, count_left.neurons, transform=-1 * np.ones((30, 1)))
    # nengo.Connection(gyro, count_right.neurons, transform=-1 * np.ones((30, 1)))
    
    # groups encoding DVS spikes drive the motors so that the robot
    # turns away from the side of the image with more activity
    # nengo.Connection(count_left, bot.motor[0], transform=-0.002)
    nengo.Connection(count_left, bot.motor[1], transform=0.002)
    # nengo.Connection(count_right, bot.motor[1], transform=-0.002)
    nengo.Connection(count_right, bot.motor[0], transform=0.002)

    left_probe = nengo.Probe(count_left)
    right_probe = nengo.Probe(count_right)
    # gyro_probe = nengo.Probe(gyro)  # only valid if the gyro block above is enabled


if __name__ == '__main__':
    sim = nengo.Simulator(model)
    sim.run(10000)

Problem: this simple model doesn't work well when the robot is moving and turning. We might need to compensate for or remove self-motion.
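A first thing to try is the gyro block that is commented out in the model above: inhibit the spike-count groups in proportion to the absolute yaw rate, so that events generated by the robot's own turning are discounted. A cleaned-up version (to be enabled inside the with model: block; the gains are guesses that will need tuning):

    gyro = nengo.Ensemble(30, 1, radius=2)
    # |yaw rate|, scaled up so it has a noticeable inhibitory effect
    nengo.Connection(bot.gyro[1], gyro, function=abs, transform=20)
    # subtract the scaled |yaw rate| from every neuron in the spike-count groups
    nengo.Connection(gyro, count_left.neurons, transform=-1 * np.ones((30, 1)))
    nengo.Connection(gyro, count_right.neurons, transform=-1 * np.ones((30, 1)))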

Simple things to try:

  • divide the visual input into more than two regions
  • move backward when there are no spikes from the DVS
  • add a bias to move forward by default
  • implement divisive normalization: the visual area inhibits itself to keep a sparse representation (e.g. only 30% of neurons firing); see the sketch after this list
  • use IMU data to compensate for self-motion
  • ...
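
For the divisive normalization idea, here is a minimal self-contained sketch. The all-to-all inhibitory connection among the neurons suppresses the population in proportion to its own total firing, which is a crude subtractive stand-in for true divisive normalization; the input Node and the weight -0.002 are placeholders for the real spike-count signal (e.g. bot.count_left_flow) and a tuned value:

import numpy as np
import nengo

n = 60
with nengo.Network() as sparse_vision:
    # placeholder for a spike-count input such as bot.count_left_flow
    flow_in = nengo.Node([100])
    visual = nengo.Ensemble(n, 1, radius=200)
    nengo.Connection(flow_in, visual)
    # all-to-all inhibition among the neurons: the more the population
    # fires as a whole, the more every neuron is suppressed, pushing
    # the representation towards sparseness
    nengo.Connection(visual.neurons, visual.neurons,
                     transform=-0.002 * np.ones((n, n)),
                     synapse=0.01)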

More tricky things to do:

  • implement visual areas: MT neurons encoding optic flow (will be slow using Nengo on a computer... will need to run on SpiNNaker; see the sketch after this list)
  • ...
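
For the SpiNNaker route, assuming the nengo_spinnaker front end is installed and a board is configured, swapping the simulator should in principle be the only change needed to run the model above on the board:

import nengo_spinnaker

# build and run the same model on the SpiNNaker board instead of the host
sim = nengo_spinnaker.Simulator(model)
sim.run(10)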

II. Optic Flow Algorithms

Use Ralph's or Ryad's existing algorithms to compute optic flow from the DVS camera, and find, for example, the focus of expansion.
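
Whichever algorithm produces the flow field, the focus of expansion can then be estimated from sparse flow vectors by least squares: under pure forward translation every flow vector points away from the FOE, so the FOE is the point that best lies on all the lines through the measurement points along their flow directions. A minimal numpy sketch (the function name and the synthetic test data are ours):

import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares estimate of the focus of expansion (FOE).

    points: (N, 2) pixel coordinates where flow was measured
    flows:  (N, 2) flow vectors (u, v) at those points

    Each flow vector constrains the FOE (fx, fy) to the line through
    its point along its direction:  v*fx - u*fy = v*x - u*y
    """
    u, v = flows[:, 0], flows[:, 1]
    x, y = points[:, 0], points[:, 1]
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    foe, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# quick check on a synthetic expanding flow field centred at (64, 64)
pts = np.random.rand(200, 2) * 128
print(focus_of_expansion(pts, pts - np.array([64.0, 64.0])))  # ~ [64. 64.]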