Results from Multimodal Sensory Fusion Topic Area

The multimodal sensory fusion and self-organization topic area examined how neural systems and smart robots can learn to work in, and interact with, an unpredictable environment. In particular, we examined mechanisms and circuitry that combine information from multiple modalities to form a coherent percept of the world. We also studied self-organizing mechanisms for the development of neural systems, and how animals and robots can learn a spatial representation of their world from a combination of visual, auditory, and motor representations.
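
As a rough illustration of the kind of cue combination this topic area is concerned with (not any specific project's method), the sketch below fuses a noisy visual and a noisy auditory estimate of a source's azimuth by reliability (inverse-variance) weighting, which is the maximum-likelihood rule for independent Gaussian cues. All parameter values and variable names are illustrative assumptions.

    import numpy as np

    # Minimal sketch of maximum-likelihood cue combination, assuming
    # independent Gaussian noise on each modality. Values are illustrative.
    rng = np.random.default_rng(0)

    true_azimuth = 15.0               # deg: actual direction of the event
    sigma_vis, sigma_aud = 2.0, 8.0   # assumed noise levels (vision more precise)

    # Noisy single-modality estimates of the same event
    vis = true_azimuth + rng.normal(0.0, sigma_vis)
    aud = true_azimuth + rng.normal(0.0, sigma_aud)

    # Reliability (inverse-variance) weighting gives the ML fused estimate
    w_vis, w_aud = 1.0 / sigma_vis**2, 1.0 / sigma_aud**2
    fused = (w_vis * vis + w_aud * aud) / (w_vis + w_aud)
    sigma_fused = (w_vis + w_aud) ** -0.5  # tighter than either cue alone

    print(f"vision {vis:.1f} deg, audition {aud:.1f} deg, "
          f"fused {fused:.1f} +/- {sigma_fused:.1f} deg")

Note that the fused uncertainty is always smaller than that of either single cue, which is why combining modalities yields a more coherent and precise percept than relying on any one of them.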

Four projects were executed under this workgroup. Click on the links below for a more complete description of each project and its results.

  1. Merging visual, auditory and motor representations
  2. Vergence with Tobi’s dynamic vision sensor
  3. Realistic cortical model of topographic development and plasticity
  4. Converting a spike-based model of cortical development to a rate/delay-based model (see the sketch after this list)
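
As a generic, hedged illustration of the spike-to-rate conversion referred to in item 4 (not the project's actual model), the sketch below simulates a leaky integrate-and-fire neuron under constant drive and compares its counted firing rate against the closed-form LIF f-I curve, which is what a rate-based reduction substitutes for the spiking dynamics. All parameters are arbitrary assumptions.

    import math

    # Illustrative LIF parameters (arbitrary, not taken from the project)
    tau, R = 0.02, 1.0                       # membrane time constant (s), resistance
    v_th, v_reset, t_ref = 1.0, 0.0, 0.002   # threshold, reset, refractory (s)
    I = 1.5                                  # constant drive, chosen so R*I > v_th
    dt, T = 1e-4, 5.0                        # time step and total time (s)

    # Spike-based version: integrate, threshold, reset, count spikes
    v, t_last, n_spikes = v_reset, -math.inf, 0
    for step in range(int(T / dt)):
        t = step * dt
        if t - t_last < t_ref:
            continue                         # absolute refractory period
        v += dt / tau * (-v + R * I)         # leaky integration (Euler step)
        if v >= v_th:
            v, t_last, n_spikes = v_reset, t, n_spikes + 1
    simulated_rate = n_spikes / T

    # Rate-based version: the analytic LIF firing-rate (f-I) curve
    rate_model = 1.0 / (t_ref + tau * math.log(R * I / (R * I - v_th)))

    print(f"spiking simulation: {simulated_rate:.1f} Hz, "
          f"rate model: {rate_model:.1f} Hz")

The two numbers agree closely, which is the basic observation that lets a rate-based model stand in for a spike-based one when precise spike timing is not needed.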