Pushbot followers

What: Pushbots following each other

How: Use the DVS sensor on the follower pushbot and the head LED on the master pushbot, connected through a Nengo model

Why: Using an event-based camera for visually guided reactive navigation is an exciting and difficult challenge, especially with neural networks.

I. Nengo Models

Here is a basic Nengo model that controls both robots. In the code below, "bot2" is the master pushbot, which a human drives with the keyboard and whose head LED flashes at 200 Hz.

"bot1" is the follower, which is steered by its tracker_0 sensor (the DVS tracker tuned to the 200 Hz LED flicker).

import nengo
import nengo_pushbot
import numpy as np

spinnaker = False

# maximum motor drive in each direction
motor_speed_factor = 0.2        # master robot (bot2)
sheep_motor_speed_factor = 0.4  # follower ("sheep") robot (bot1)
# scaling applied to the keyboard input
input_factor = 0.9


model = nengo.Network(label='pushbot')
with model:
    # 4-D keyboard input driving the master robot
    input = nengo.Node([0, 0, 0, 0], label='keyboard')

    # populations driving the follower's motors (bot1)
    l1 = nengo.Ensemble(100, dimensions=1, label='l1')
    r1 = nengo.Ensemble(100, dimensions=1, label='r1')
    # radius 1.4 (~sqrt(2)) because a combined forward-and-turn command can exceed 1
    combo1 = nengo.Ensemble(200, dimensions=2, label='combo1', radius=1.4)
    control = nengo.Ensemble(200, dimensions=2, label='control', radius=1.4)

    # populations driving the master's motors (bot2)
    l2 = nengo.Ensemble(100, dimensions=1, label='l2')
    r2 = nengo.Ensemble(100, dimensions=1, label='r2')
    # replica of the motor commands so the behaviour can be probed and plotted
    combo2 = nengo.Ensemble(200, dimensions=2, label='combo2', radius=1.4)

    # on SpiNNaker the bots would be addressed by board coordinates instead, e.g.
    # bot1 = nengo_pushbot.PushBotNetwork('1,0,EAST')
    bot1 = nengo_pushbot.PushBotNetwork('10.162.177.57')  # follower (runs the tracker)
    bot2 = nengo_pushbot.PushBotNetwork('10.162.177.43')  # master (keyboard-driven)
    bot1.show_image()
    bot2.led(200)              # flash the master's head LED at 200 Hz
    bot1.track_freqs([200])    # the follower's DVS tracker locks onto the 200 Hz flicker
    
    
    half_size = 64.0  # half the 128x128 DVS resolution, used to normalise coordinates

    # pixel rows separating the forward / hold / backward bands
    y_limit = [0.0, 35.0, 40.0, 115.0, 127.0]

    # pixel columns separating the turn-left / straight / turn-right bands
    x_limit = [0.0, 18.0, 36.0, 54.0, 73.0, 91.0, 109.0, 127.0]

    # keyboard → master motors: dimension 0 drives both wheels together
    # (forward/backward), dimension 1 drives them in opposition (rotate);
    # dimensions 3 and 2 repeat the same two patterns
    nengo.Connection(input, l2, transform=[[input_factor, 0, 0, 0]])
    nengo.Connection(input, r2, transform=[[input_factor, 0, 0, 0]])
    nengo.Connection(input, l2, transform=[[0, input_factor, 0, 0]])
    nengo.Connection(input, r2, transform=[[0, -input_factor, 0, 0]])
    nengo.Connection(input, l2, transform=[[0, 0, 0, input_factor]])
    nengo.Connection(input, r2, transform=[[0, 0, 0, input_factor]])
    nengo.Connection(input, l2, transform=[[0, 0, input_factor, 0]])
    nengo.Connection(input, r2, transform=[[0, 0, -input_factor, 0]])
    nengo.Connection(l2, bot2.motor, synapse=0.01, transform=[[motor_speed_factor], [0]])
    nengo.Connection(r2, bot2.motor, synapse=0.01, transform=[[0], [motor_speed_factor]])
    nengo.Connection(l2, combo2, synapse=0.01, transform=[[motor_speed_factor], [0]])
    nengo.Connection(r2, combo2, synapse=0.01, transform=[[0], [motor_speed_factor]])
    nengo.Probe(l2)
    nengo.Probe(r2)
    nengo.Probe(combo2)
    
    # the tracker reading is represented in 3 dimensions: x, y, and a
    # certainty that something is actually there
    pos0 = nengo.Ensemble(100, 3, label='pos0')
    # only the x/y components are needed here, so the third is left out
    nengo.Connection(bot1.tracker_0, pos0[:2])
    
    def normalize_coord(x):
        # map a pixel coordinate in [0, 127] to roughly [-1, 1]
        return (x - half_size) / half_size

    # Divide the visual field into nine regions (three vertical bands x three
    # horizontal bands): if the tracked LED is not in the central region, drive
    # forward/backward and rotate left/right to bring it back to the middle.
    # Keeping the target centred also keeps the following distance roughly constant.
    def orient(x):
        # forward/backward from the vertical band the target falls in
        if x[0] < normalize_coord(y_limit[1]):
            y_ret = 1
        elif x[0] < normalize_coord(y_limit[2]):
            y_ret = 0
        else:
            y_ret = -1

        # rotate left/right from the horizontal band
        if x[1] < normalize_coord(x_limit[3]):
            x_ret = 0.1
        elif x[1] <= normalize_coord(x_limit[4]):
            x_ret = 0
        else:
            x_ret = -0.1

        return [y_ret, x_ret]
            
    # a second tracker channel could be handled the same way:
    # pos1 = nengo.Ensemble(100, 2, label='pos1')
    # nengo.Connection(bot1.tracker_1, pos1)

    # close the loop for the follower: tracked position → orient() → control
    # (the transform is just the 2x2 identity)
    nengo.Connection(pos0, control, function=orient, transform=[[1, 0], [0, 1]], synapse=0.002)
    
    # the same motor wiring as for the master, but driven by the tracker
    # output instead of the keyboard
    nengo.Connection(control, l1, transform=[[input_factor, 0]])
    nengo.Connection(control, r1, transform=[[input_factor, 0]])
    nengo.Connection(control, l1, transform=[[0, input_factor]])
    nengo.Connection(control, r1, transform=[[0, -input_factor]])
    nengo.Connection(l1, bot1.motor, synapse=0.002, transform=[[sheep_motor_speed_factor], [0]])
    nengo.Connection(r1, bot1.motor, synapse=0.002, transform=[[0], [sheep_motor_speed_factor]])
    nengo.Connection(l1, combo1, synapse=0.002, transform=[[sheep_motor_speed_factor], [0]])
    nengo.Connection(r1, combo1, synapse=0.002, transform=[[0], [sheep_motor_speed_factor]])
    nengo.Probe(l1)
    nengo.Probe(r1)
    nengo.Probe(combo1)
    
    nengo.Probe(control)
    nengo.Probe(bot1.tracker_0)
    nengo.Probe(pos0)


if __name__ == '__main__':
    if spinnaker:
        import nengo_spinnaker

        config = nengo_spinnaker.Config()
        config[input].f_of_t = True
        config[input].f_period = 2 * np.pi

        sim = nengo_spinnaker.Simulator(model)
    else:
        sim = nengo.Simulator(model)

    # run for a long time: the robots are driven live while the simulation runs
    sim.run(5000)
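
Note that the probes above are created without keeping references, so their data cannot be retrieved after the run in a plain script. A minimal sketch of how plotting would work, assuming a probe handle is kept (p_control is a hypothetical name) and the standard nengo.Simulator backend:

import matplotlib.pyplot as plt
import nengo

# inside the model block above, keep a handle on the probe of interest:
#     p_control = nengo.Probe(control)

sim = nengo.Simulator(model)
sim.run(10.0)

plt.plot(sim.trange(), sim.data[p_control])  # decoded control signal over time
plt.xlabel('time (s)')
plt.show()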

II. Resulting behaviour

The result of this experiment is shown in this YouTube video.

III. Science background

We are trying to demonstrate a path integrator in neurons. We used fewer than 1000 neurons in total, i.e. around 100 per ensemble, which is a good number for representing values in these populations: with fewer neurons there is more noise, while with more neurons more computational power is used without necessarily better performance. We use the DVS camera to look at an LED flashing at a high frequency, higher than conventional frame-based cameras can detect. We demonstrate this behaviour using mobile robots.
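
To make the neuron-count trade-off concrete, here is a small stand-alone Nengo sketch (separate from the robot model; the ensemble sizes and the 0.5 test value are arbitrary choices for illustration). It represents the same constant with a small and a large ensemble and compares the decoding noise:

import nengo
import numpy as np

model = nengo.Network(label='neuron count demo')
with model:
    stim = nengo.Node([0.5])                   # constant value to represent
    small = nengo.Ensemble(10, dimensions=1)   # few neurons: noisy estimate
    large = nengo.Ensemble(100, dimensions=1)  # ~100 neurons, as used per ensemble above
    nengo.Connection(stim, small)
    nengo.Connection(stim, large)
    p_small = nengo.Probe(small, synapse=0.01)
    p_large = nengo.Probe(large, synapse=0.01)

sim = nengo.Simulator(model)
sim.run(1.0)

keep = sim.trange() > 0.1  # skip the initial transient
print('RMS error, 10 neurons :', np.sqrt(np.mean((sim.data[p_small][keep] - 0.5) ** 2)))
print('RMS error, 100 neurons:', np.sqrt(np.mean((sim.data[p_large][keep] - 0.5) ** 2)))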

Other methods to consider: 1) a compass is another way to implement a path integrator, and 2) state variables that identify the location as a coordinate within the model. We are looking to implement a basal ganglia model that decides which waypoint to head to next. Why use neurons? Neurons provide a robust, low-computation method of navigating unknown environments, and they allow a fast implementation. [TODO: comparison to how it is done in the robotics community??]

Components: basal ganglia, cerebellum, and state variables for position control. The state variables provide the position, and the basal ganglia decision is based on the current state variables: if the current state variables equal the state I want to be at, I update my waypoint to the next one (see the sketch below). This can be reinforced with dopamine input for predicting the path: dopamine can act as a prediction system, reducing prediction error, and supports reinforcement learning. [TODO: dopamine prediction error, Terry Sejnowski]
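
A rough sketch of that waypoint rule (everything here is illustrative: the waypoint list, the 0.05 tolerance, and the plain-Python switching node are assumptions, not part of the project): once the represented position reaches the active waypoint, the node advances to the next one.

import nengo
import numpy as np

# hypothetical waypoints in normalised coordinates
waypoints = np.array([[0.5, 0.0], [0.5, 0.5], [0.0, 0.5], [0.0, 0.0]])

class WaypointSwitcher(object):
    """Advance to the next waypoint once the current one is reached."""
    def __init__(self, tol=0.05):
        self.idx = 0
        self.tol = tol

    def __call__(self, t, position):
        if np.linalg.norm(position - waypoints[self.idx]) < self.tol:
            self.idx = (self.idx + 1) % len(waypoints)
        return waypoints[self.idx]

model = nengo.Network(label='waypoint demo')
with model:
    position = nengo.Ensemble(200, dimensions=2)  # state variables (x, y)
    switch = nengo.Node(WaypointSwitcher(), size_in=2, size_out=2)
    target = nengo.Ensemble(200, dimensions=2)
    error = nengo.Ensemble(200, dimensions=2)     # represents target - position

    nengo.Connection(position, switch)
    nengo.Connection(switch, target)
    nengo.Connection(target, error)
    nengo.Connection(position, error, transform=-1)
    # 'error' would then drive the motors; a basal ganglia network (e.g.
    # nengo.networks.BasalGanglia) could replace the if-statement in the
    # node with neural action selection, one action per waypoint.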