Neuromorphic Localization Using Remote Sensing

Team: Soumyajit Mandal, Saeed Afshar, Timmer Horiuchi

The overall goal of this project is to estimate the position of a mobile robot by using a spiking neural network to classify spatially-dependent patterns of reflections ("clutter") generated by sonar and radar sensors. In essence, we are trying to generate the equivalent of place cells, which are known to be used by animals for navigation.

The sonar sensor that we are using was developed in Timmer's lab. It has three forward-facing (directional) ultrasound transducers operated in pulsed mode at a center frequency of 40 kHz. The time-delayed echoes generated by each pulse are digitized on-board and transferred to a PC via a USB serial connection. The three transducers point in slightly different directions, which allows the angular direction (azimuth) of a given echo source to be estimated accurately by comparing the amplitudes captured by each transducer. This technique is analogous to the amplitude monopulse method used in radar signal processing. The typical operating range of the sensor is 1-10 m with a resolution of a few cm.
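The amplitude-monopulse idea can be sketched as follows: model each transducer's directional gain, then find the azimuth whose gain pattern best explains the measured echo amplitudes. This is only an illustration; the beam centers, Gaussian beam shape, and beamwidth below are assumptions, not the actual sensor's geometry.

```python
import numpy as np

# Hypothetical beam centers (degrees) and beamwidth for the three
# transducers; the real sensor's geometry may differ.
BEAM_CENTERS = np.array([-20.0, 0.0, 20.0])
BEAM_SIGMA = 25.0  # assumed Gaussian beam-pattern width, degrees

def beam_gain(azimuth_deg):
    """Gaussian approximation of each transducer's gain at a given azimuth."""
    return np.exp(-0.5 * ((azimuth_deg - BEAM_CENTERS) / BEAM_SIGMA) ** 2)

def estimate_azimuth(amplitudes):
    """Amplitude-monopulse estimate: search for the azimuth whose modeled
    gain pattern best matches the measured echo amplitudes, after
    normalizing out the unknown overall echo strength."""
    amps = np.asarray(amplitudes, dtype=float)
    amps = amps / np.linalg.norm(amps)
    best_az, best_err = 0.0, np.inf
    for az in np.linspace(-60, 60, 1201):  # 0.1-degree grid
        g = beam_gain(az)
        g = g / np.linalg.norm(g)
        err = np.sum((amps - g) ** 2)
        if err < best_err:
            best_az, best_err = az, err
    return best_az

# A target at +10 degrees produces amplitudes proportional to the gains
# there; the overall echo strength (0.7) is arbitrary and cancels out.
measured = beam_gain(10.0) * 0.7
print(round(estimate_azimuth(measured), 1))  # recovers 10.0
```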

The radar sensor that we are using is a commercial low-power pulsed UWB module developed by Time Domain Corporation (Huntsville, AL). It operates in pulsed mode with an instantaneous bandwidth of 2 GHz (3-5 GHz). At the moment we are using an omnidirectional dipole antenna, but this can later be replaced by a directional structure (such as a planar Vivaldi antenna) if required. The time-delayed echoes generated by each pulse are digitized on-board and transferred to a PC via a USB serial connection. The typical operating range of the sensor is 1-10 m with a resolution of about 10 cm.
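The quoted resolution follows directly from the module's bandwidth: a pulsed radar's range resolution is delta_R = c / (2B), which for B = 2 GHz gives about 7.5 cm, consistent with the ~10 cm figure above.

```python
# Range resolution of a pulsed radar from its instantaneous bandwidth:
# delta_R = c / (2 * B).
C = 3.0e8   # speed of light, m/s
B = 2.0e9   # instantaneous bandwidth of the UWB module, Hz

delta_r = C / (2 * B)
print(delta_r)  # 0.075 m, i.e. 7.5 cm
```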

The main project goals are:

1. Collect a training data set (using both radar and sonar sensors) of sensor responses versus various locations in 2D.

2. Train a spiking neural network to recognize positions using only the sonar data.

3. Add the radar data and quantify the resultant improvement in localization performance.

4. Estimate the position of the robot in real-time.


Mobile robot with sonar sensor (in front) and radar sensor (in back). During mobile operation, a laptop is placed between the two sensors, both of which run off batteries.

Algorithm:

In this project, the Synaptic Kernel Inverse Method (SKIM) (Tapson et al. 2013) is used to recognize the spatio-temporal patterns generated by the radar and sonar systems. The figure below, from (Tapson et al. 2013), shows how convolving spike patterns (in this case sonar echoes) with multiple synaptic kernels generates memory traces in a higher-dimensional space relative to the input.
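A minimal sketch of this idea is below, simplified from the full SKIM architecture of Tapson et al. 2013: input spike trains are convolved with alpha-function synaptic kernels of several time constants, mixed through fixed random weights into nonlinear hidden units, and the linear output weights are then solved in one shot with a pseudoinverse (the "inverse" in SKIM). The kernel shapes, time constants, and hidden-layer size here are illustrative assumptions, not the project's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha_kernel(tau, length=200):
    """Alpha-function synaptic kernel, (t/tau) * exp(1 - t/tau), peak = 1."""
    t = np.arange(length)
    return (t / tau) * np.exp(1.0 - t / tau)

def skim_features(spikes, n_hidden=50, taus=(5, 10, 20, 40)):
    """Simplified SKIM-style expansion: convolve each input spike train
    with kernels of several time constants, project through fixed random
    weights, and apply a hidden nonlinearity.
    Returns a (time, n_hidden) feature matrix."""
    n_in, n_t = spikes.shape
    traces = []
    for tau in taus:
        k = alpha_kernel(tau)
        traces.append(np.array([np.convolve(s, k)[:n_t] for s in spikes]))
    X = np.vstack(traces)  # memory traces: (n_in * len(taus), time)
    W_in = 0.1 * rng.normal(size=(n_hidden, X.shape[0]))  # fixed, random
    return np.tanh(W_in @ X).T

def train_readout(features, target):
    """Solve the linear output weights with the Moore-Penrose
    pseudoinverse, as in SKIM's one-shot training."""
    return np.linalg.pinv(features) @ target

# Toy demo: two Poisson-like spike trains; the supervisory target is a
# filtered copy of one of them, which the readout should reconstruct.
spikes = (rng.random((2, 500)) < 0.05).astype(float)
target = np.convolve(spikes[0], alpha_kernel(10))[:500]
H = skim_features(spikes)
w = train_readout(H, target)
print(np.corrcoef(H @ w, target)[0, 1])  # close to 1: readout tracks target
```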

An illustrative binary self-localization experiment:

Distinguishing Location 1 from Location 2. Note the labels on the ground. (The rover is in Location 2 for this illustrative test.)

In the graph below, two pings from the two locations are shown together along with a supervisory signal. Note that the sonar signals have already been convolved with SKIM's synaptic kernels, whereas the radar signals are the raw analog values of the sensor.

Combining spiking sonar data and analog radar signals

After the network weights are learned, SKIM uses the sonar and radar spatio-temporal signatures to let the rover recognize its current location.
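The recognition step itself reduces to applying the learned readout weights to the current feature vector and reporting which place-cell output fires. The sketch below assumes one readout column per trained location and a hypothetical spiking threshold; neither is a value from the project.

```python
import numpy as np

def recognize_location(features, W_out, threshold=0.5):
    """Apply learned readout weights to the current feature vector and
    report which place-cell output (location) is active.
    `threshold` is an assumed spiking threshold for illustration."""
    activations = features @ W_out  # one column of W_out per location
    location = int(np.argmax(activations))
    spiking = bool(activations[location] > threshold)
    return location, spiking

# Toy weights for two locations: each column responds to one feature.
W_out = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
print(recognize_location(np.array([0.1, 0.9]), W_out))  # (1, True)
```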

Place cell #2 spiking

Experimental set-up:

Accuracy Results:

Attachments