Learning and Computational Intelligence in Neuromorphic Cognitive Systems

Members: Bernabe Linares-Barranco, Christian Brandli, Elisabetta Chicca, Christoph Maier, Jorg Conradt, Daniel B. Fasnacht, Bert Shi, Frederic Broccard, Federico Corradi, Gert Cauwenberghs, Giacomo Indiveri, Garrick Orchard, Jasmine Berry, Jongkil Park, Jeffrey Pompe, Jonathan Tapson, Kwabena Boahen, Lakshmi Krishnan, Mehdi Khamassi, Mostafa Rahimi Azghadi, Mathis Richter, Matthew Runchey, Nai Ding, Nabil Imam, Nils Peters, Michael Pfeiffer, Jennifer Hasler, Ryad Benjamin Benosman, Sadique Sheik, Sam Fok, Sergio Davies, Shih-Chii Liu, Siddharth Rajaram, Sudarshan Ramenahalli, Tara Julia Hamilton, Timmer Horiuchi, Thomas Murray, Tobi Delbruck, Theodore Yu, Yulia Sandamirskaya

Organizers: Gert Cauwenberghs, Giacomo Indiveri

Faculty Dates

Gert Cauwenberghs UC San Diego 7/8/2012 7/14/2012
Giacomo Indiveri INI 6/30/2012 7/22/2012
Michael Pfeiffer INI 6/30/2012 7/22/2012
Elisabetta Chicca Uni Bielefeld 6/30/2012 7/22/2012
Yulia Sandamirskaya Ruhr-Uni-Bochum 7/1/2012 7/21/2012
Bernabe Linares-Barranco CSIC 7/3/2012 7/20/2012
Ryad Benosman ISIR 7/1/2012 7/21/2012
Nuno Vasconcelos UCSD 7/8/2012 7/13/2012

Introduction

While the general definition of “neuromorphic cognition” is still under debate, it is clear that a key element of any neuromorphic cognitive system is its ability to solve challenging everyday tasks that require learning, effectively and efficiently. It is generally understood that learning is at the core of computational intelligence in real and artificial neural systems, although the mechanisms of learning in biological settings are not well understood. On the other hand, a vast variety of learning and information-extraction techniques have been explored in the machine learning and natural language processing communities, all aiming to make effective use of large numbers of parameters in distributed computational systems, many of which are akin to the neuromorphic systems that we have engineered over the past decades. In this Topic Area we will address the problem of embedding computational intelligence, and other metrics of “cognition”, in neuromorphic engineered systems through the efficient and robust implementation of learning mechanisms, as well as other adaptive information-extraction and parameter-estimation techniques borrowed from the machine learning and natural language processing research communities. Within this area we propose to organize a set of basic tutorials (e.g. on theoretical neuroscience learning models, analog VLSI learning circuits, and cognitive and behavioral learning), followed by a set of presentations on state-of-the-art achievements. These lecture-type activities will be complemented by a set of specific hands-on projects involving both SW and HW learning models.

Theory and discussion groups

The theoretical aspects of this topic area will be geared toward the solution of a very specific problem: how to get neuromorphic spiking systems to compete with machine learning and other computational-intelligence systems (artificial neural networks, SVMs, HMMs, Bayesian methods, etc.) in terms of performance and the (cognitive) levels of complexity that can be achieved. For this we will propose and coordinate tutorials and lectures from both the people invited within this topic area (listed below) and other participants coming to Telluride. The lectures will cover classical machine learning topics relevant for neuromorphic cognition, neuroscience talks on low-level spike-timing dependent plasticity (STDP) mechanisms, Dynamic Field Theory and high-level cognitive learning models, as well as circuits-and-systems talks on hardware implementations of learning algorithms. The learning community is very lively, with open questions and debates at many different levels. We will propose provocative discussion-group topics and promote active brainstorming sessions and lively discussion groups on learning in cognitive systems.

Practical projects

Gesture recognition from DVS retina

Lead: Michael Pfeiffer

The spike-based Expectation Maximization approach makes it possible to learn generative Bayesian models in a winner-take-all (WTA) circuit of spiking neurons with a variant of STDP learning. The goal of this project is to create a simple instance of generative model learning on a chip. If you can do it with input coming from an event-based sensor like the DVS or the silicon cochlea, even better!
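A minimal rate-based sketch of the idea (not the actual spike-based EM rule: spikes are replaced by soft winner-take-all responsibilities, and all patterns and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two binary input patterns and a 2-unit WTA layer (illustrative toy setup).
patterns = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)
W = rng.random((2, 4)) * 0.1         # feedforward weights
lr, gain = 0.1, 5.0

for _ in range(500):
    x = patterns[rng.integers(2)]    # present a random pattern
    u = W @ x                        # "membrane potentials"
    p = np.exp(gain * u)
    p /= p.sum()                     # soft WTA: responsibility of each unit
    # EM/STDP-like update: responsible units move their weights toward the input
    W += lr * p[:, None] * (x - W)

# after learning, each WTA unit should respond to a different pattern
winners = {int(np.argmax(W @ q)) for q in patterns}
```

Symmetry between the two units is broken by the random initialization; each unit's weight vector converges to one of the two input patterns, i.e. the WTA layer learns a simple generative (mixture-like) model of its input.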

Interested: Giacomo Indiveri, Jeffrey Pompe, Sadique Sheik, Theodore Yu

Gesture recognition from acoustic u-Doppler

Lead: Michael Pfeiffer

The spike-based Expectation Maximization approach makes it possible to learn generative Bayesian models in a winner-take-all (WTA) circuit of spiking neurons with a variant of STDP learning. The goal of this project is to create a simple instance of generative model learning. The input will be u-Doppler spectra from a real-time device. Doing it on SpiNNaker would be an additional plus!

Interested: Andreas Andreou, Thomas Murray, Michael Pfeiffer

DVS sensing, learning and classification on HW

Lead: Sadique Sheik

Interested: Giacomo Indiveri, Michael Pfeiffer, Christoph Maier

In this seemingly simple project we will try to assemble a setup consisting of a DVS sensor sending spikes to a perceptron (directly or through a feed-forward, randomly connected network). We will present images from the TIMIT database, train the perceptron, and then verify its classification capability. The basic idea of this project is to create a purely neuromorphic spiking system capable of real-time learning and classification of realistic input from the DVS retina.
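As a software warm-up, the perceptron stage can be sketched on synthetic spike-count features (the `present` function below is a hypothetical stand-in for per-pixel DVS event counts, not real sensor data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for DVS input: each presentation yields a vector of
# per-pixel event counts; class 0 is active on the left half of the "retina",
# class 1 on the right half, with Poisson event noise.
def present(label, n_pix=16):
    rate = np.full(n_pix, 2.0)                       # background event rate
    half = slice(0, n_pix // 2) if label == 0 else slice(n_pix // 2, n_pix)
    rate[half] += 20.0                               # stimulated half
    return rng.poisson(rate).astype(float)

w = np.zeros(17)                                     # weights + bias
lr = 0.01
for _ in range(300):
    y = int(rng.integers(2))
    x = np.append(present(y), 1.0)                   # bias input
    pred = int(w @ x > 0)
    w += lr * (y - pred) * x                         # classic perceptron update

# evaluate on fresh presentations
acc = float(np.mean([int(w @ np.append(present(int(y)), 1.0) > 0) == y
                     for y in rng.integers(2, size=200)]))
```

In the actual project the feature vector would come from the DVS event stream (possibly through the random feed-forward layer), but the learning rule is the same.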

Interested in the above three projects: 'Jon', 'Yulia', Sadique Sheik, 'Tom', 'Mat', Michael Crosse, Theodore Yu, 'Gert', 'Frederic'

Classification with reservoir computing (aka liquid state machines)

The inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous pool of neurons (the reservoir) can serve as a universal analog fading memory and allow computation on rapidly changing spatiotemporal inputs without the need for stable states. The goal of this project is to implement such a reservoir in hardware and use it for classifying rapidly changing inputs (e.g. sound data). Ideally, we will use sound data recorded by sensors available at the workshop.
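A minimal software sketch of the reservoir idea, using a rate-based echo state network in place of spiking hardware (all sizes, signals, and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

N = 100                                            # reservoir size
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1: fading memory
W_in = rng.normal(0.0, 1.0, N)

def run(u):
    """Drive the reservoir with input sequence u and return the final state."""
    x = np.zeros(N)
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
    return x

def sample(label, T=20):
    """Toy spatiotemporal inputs: rising (class 0) vs falling (class 1) ramps."""
    ramp = np.linspace(0.0, 1.0, T) if label == 0 else np.linspace(1.0, 0.0, T)
    return ramp + rng.normal(0.0, 0.05, T)

labels = rng.integers(2, size=200)
X = np.array([run(sample(int(y))) for y in labels])

# a linear readout trained with ridge regression is the only trained part
w_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ (2.0 * labels - 1.0))
acc = float(np.mean((X @ w_out > 0) == (labels == 1)))
```

The key design point carries over to hardware: the reservoir itself is fixed and random, and only the linear readout is trained.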

Lead: Frederic Broccard

Interested: Elisabetta Chicca, Sadique Sheik, Jeffrey Pompe, Nabil Imam

Implementation of HMAX with spikes

Lead: Sergio Davies, Garrick Orchard

Interested: Ralph Etienne-Cummings, Nabil Imam, Sudarshan Ramenahalli, Bernabe Linares-Barranco, Christian Brandli, 'Nuno Vasconcelos', Ryad Benjamin Benosman, Gert Cauwenberghs, Giacomo Indiveri

The idea is to implement the HMAX algorithm using Bernabe's convolution chips acting as Gabor filters on the input provided by a silicon retina. The outcome is then injected into the Nengo framework for the higher layers of the HMAX algorithm. The aim of the project is to demonstrate a real-time HMAX implementation that can detect features in the visual field and identify objects through a classifier.
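The S1 (oriented Gabor filtering) and C1 (local max pooling) stages of HMAX can be sketched in software as follows (filter sizes and parameters are illustrative, not those of Bernabe's chips):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor(size, theta, wavelength=4.0, sigma=2.0, gamma=0.5):
    """Zero-mean Gabor kernel at orientation theta (the S1 'simple cell' filter)."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()

def s1_c1(img, n_orient=4, pool=4):
    """S1: rectified responses of oriented Gabors; C1: local max pooling."""
    feats = []
    for k in range(n_orient):
        kern = gabor(7, np.pi * k / n_orient)
        win = sliding_window_view(img, kern.shape)
        s1 = np.abs((win * kern).sum(axis=(-2, -1)))       # S1 responses
        h = (s1.shape[0] // pool) * pool
        w = (s1.shape[1] // pool) * pool
        c1 = s1[:h, :w].reshape(h // pool, pool,
                                w // pool, pool).max(axis=(1, 3))
        feats.append(c1)
    return np.stack(feats)

# a vertical edge should drive the vertical-orientation channel hardest
img = np.zeros((20, 20))
img[:, 10:] = 1.0
resp = s1_c1(img).max(axis=(1, 2))   # peak C1 response per orientation
```

In the project, the convolution chips replace the S1 stage and Nengo takes over from C1 upward.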

Spatio-temporal unsupervised STDP learning on hardware

Lead: Sadique Sheik

Interested: Elisabetta Chicca, Tara Julia Hamilton, Nabil Imam, Mostafa Rahimi Azghadi, Christoph Maier, Lakshmi Krishnan, 'Gert', 'Jonkil', Theodore Yu, Sergio Pellis

In this project we will set up an experimental protocol to induce learning in the chip's synapses and verify if, and to what extent, the neurons can be trained to recognize spatio-temporal patterns.
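For reference, the conventional pair-based STDP window that such a protocol probes can be sketched as follows (amplitudes and time constant are illustrative):

```python
import numpy as np

# Pair-based STDP window (illustrative parameters, times in ms):
# potentiation when pre precedes post, depression when post precedes pre.
A_plus, A_minus, tau = 0.05, 0.025, 20.0

def stdp_dw(pre_times, post_times):
    """Total weight change summed over all pre/post spike pairs."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt > 0:                        # causal pair: potentiate
                dw += A_plus * np.exp(-dt / tau)
            elif dt < 0:                      # anti-causal pair: depress
                dw -= A_minus * np.exp(dt / tau)
    return dw

# a repeating pattern in which the pre spike reliably leads the post spike
# accumulates potentiation over presentations, and vice versa
dw_causal = stdp_dw([10.0, 110.0, 210.0], [15.0, 115.0, 215.0])
dw_anticausal = stdp_dw([15.0, 115.0, 215.0], [10.0, 110.0, 210.0])
```

On the hardware, the experimental protocol amounts to generating exactly such controlled pre/post timing relationships and reading back the resulting weight changes.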

Dynamic Neural Fields on hardware

Lead: Yulia Sandamirskaya

Interested: Mehdi Khamassi, Sergio Davies, Mathis Richter, Nabil Imam, federico corradi, Sadique Sheik, Elisabetta Chicca

We will use the Brian simulator to design spiking versions of several simple DNF architectures that perform elementary cognitive functions (working memory and change detection), and then implement these architectures on SpiNNaker and on Giacomo's chip. In the second part of the project we will add memory dynamics and implement a learning architecture. By connecting this architecture to a sensor we can learn associations between objects and locations, objects and labels, or objects and the serial order of their presentation.
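A minimal software sketch of the field dynamics underlying these architectures: a one-dimensional Amari field with local excitation and global inhibition, shown here in rate form with illustrative parameters (not the Brian or SpiNNaker implementation):

```python
import numpy as np

# One-dimensional Amari dynamic neural field:
#   tau * du/dt = -u + h + s(x) + sum_x' k(x - x') f(u(x'))
# with local excitation, global inhibition, and a hard threshold output.
n, tau, h, dt = 100, 10.0, -2.0, 1.0
xs = np.arange(n)
dist = xs[:, None] - xs[None, :]
kernel = 4.0 * np.exp(-0.5 * (dist / 3.0) ** 2) - 0.5  # excitation minus inhibition

def f(u):
    """Threshold output nonlinearity."""
    return (u > 0).astype(float)

u = np.full(n, h, dtype=float)
stim = 3.0 * np.exp(-0.5 * ((xs - 50) / 3.0) ** 2)     # localized input at site 50

for _ in range(300):              # with input: a peak forms over the stimulus
    u += dt / tau * (-u + h + stim + kernel @ f(u))
peak = int(np.argmax(u))

for _ in range(300):              # input removed: the peak sustains itself
    u += dt / tau * (-u + h + kernel @ f(u))
sustained = bool(u.max() > 0)     # working-memory behavior
```

The self-sustained peak after stimulus removal is exactly the working-memory function mentioned above; change detection and learning architectures build on the same dynamics.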

Detection of motion direction from an eDVS using Dynamic Field Theory

Lead: Yulia Sandamirskaya

Interested: Mathis Richter

We will try to extract the motion of objects from the sensory signal of an eDVS camera using Dynamic Neural Fields.

Learning coordinate transformations with Dynamic Neural Fields

Lead: Yulia Sandamirskaya

Interested: Jorg Conradt, Matthew Runchey, Sadique Sheik, Mathis Richter

In this project I'd like to connect a DNF architecture that models development of looking behavior to a DVS sensor and a pan/tilt/yaw unit built by Jörg and demonstrate learning of reference frame transformations between the visual (retinal) and motor spaces.

Triplet-based STDP learning on hardware

Lead: Mostafa Rahimi Azghadi

Interested: Jongkil Park, Giacomo Indiveri, Theodore Yu, Sergio Pellis, Tara Julia Hamilton

Triplet-based STDP is an extension of the conventional STDP learning rule that governs synaptic weight changes based on the timing of combinations of three pre- and post-synaptic spikes. The main purpose of this project is to demonstrate the principles of this learning rule on the available hardware platforms (e.g. SpiNNaker, the Zurich chips, and the UCSD hardware).
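A plain-Python sketch of the triplet rule in the trace formulation of Pfister and Gerstner: each neuron carries a fast and a slow exponential trace, depression uses a post-pre pair plus the slow pre trace, and potentiation uses a pre-post pair plus the slow post trace (time constants and amplitudes are illustrative):

```python
# Trace-based triplet STDP sketch (illustrative parameters, times in ms).
tau_p, tau_m, tau_x, tau_y = 16.8, 33.7, 101.0, 125.0  # trace time constants
A2p, A3p, A2m, A3m = 5e-3, 6e-3, 7e-3, 2e-4            # pair/triplet amplitudes

def run(pre_ms, post_ms, T=200.0, dt=0.1):
    """Integrate the traces over pre/post spike trains; return net weight change."""
    pre = {int(round(t / dt)) for t in pre_ms}
    post = {int(round(t / dt)) for t in post_ms}
    r1 = r2 = o1 = o2 = 0.0       # fast/slow pre traces, fast/slow post traces
    w = 0.0
    for step in range(int(T / dt)):
        r1 -= dt * r1 / tau_p; r2 -= dt * r2 / tau_x   # pre traces decay
        o1 -= dt * o1 / tau_m; o2 -= dt * o2 / tau_y   # post traces decay
        if step in pre:
            w -= o1 * (A2m + A3m * r2)   # pair depression + triplet term
            r1 += 1.0; r2 += 1.0
        if step in post:
            w += r1 * (A2p + A3p * o2)   # pair potentiation + triplet term
            o1 += 1.0; o2 += 1.0
    return w
```

The triplet term is what distinguishes the rule: a pre-post pairing preceded by an earlier post spike potentiates more than the same pairing alone.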

Hardware Implementation of Olfactory Circuits

Lead: Nabil Imam

Interested: Elisabetta Chicca, Christoph Maier

The olfactory system has found solutions to pattern recognition problems in a very high-dimensional odor space. Mimicking the neural circuits that solve these problems gives us novel ways of tackling machine olfaction. In this project, we will make a hardware implementation of the circuits (at various stages of the olfactory pathway) that achieve cluster separation at several levels of resolution (pleasant -> fruity -> strawberry), allowing both gross classification (via glomerular-layer computations) and precise identification (via spike synchronization and STDP in higher-order neurons).

Sub-project of the STDP group.

Convolution based Stereo Computation on Bernabe's hardware

Lead: Ryad Benjamin Benosman, Bernabe Linares-Barranco

Interested: Christoph Maier, Jongkil Park, Michael Pfeiffer, Theodore Yu

Event matching between two DVS sensors, through convolution modules in FPGAs.

Hardware tweaking and software interfacing for neuromorphic chips

Lead: Giacomo Indiveri, Jongkil Park, Sadique Sheik

Interested: Frederic Broccard, Mostafa Rahimi Azghadi, Christoph Maier, Elisabetta Chicca, Theodore Yu, Sergio Pellis, 'Gert'

Experience the pain.

Neuromorphic Body-part Tracking

Lead: Michael Pfeiffer, Ryad Benjamin Benosman

Tracking human or robot bodies with the DVS and learning body models.

Learning on SpiNNaker with NEF and Nengo

Lead: Sergio Davies

Interested: Chris Eliasmith, Terry Stewart

Implement a learning rule according to NEF theory on SpiNNaker. The outcome is a system that is able to learn a transfer function in a supervised setting.
This project is joint between this group and the act group.
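A rate-based sketch of the idea, using a delta-rule update of the population decoders in the spirit of the NEF/PES learning rule (the population, its tuning curves, and all parameters below are illustrative assumptions, not the Nengo API):

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative NEF-style population representing a scalar x.
N = 50
enc = rng.choice([-1.0, 1.0], N)              # encoders
gain = rng.uniform(0.5, 2.0, N)
bias = rng.uniform(-1.0, 1.0, N)

def rates(x):
    """Rectified-linear tuning curves of the population for scalar input x."""
    return np.maximum(0.0, gain * enc * x + bias)

def target(x):
    """Transfer function to learn in a supervised fashion."""
    return x ** 2

d = np.zeros(N)                               # decoders, learned online
kappa = 2e-3                                  # learning rate

for _ in range(5000):
    x = rng.uniform(-1.0, 1.0)
    a = rates(x)
    err = target(x) - d @ a                   # supervised error signal
    d += kappa * err * a                      # PES-style decoder delta rule

xs = np.linspace(-1.0, 1.0, 21)
mse = float(np.mean([(target(x) - d @ rates(x)) ** 2 for x in xs]))
```

On SpiNNaker, the same error-driven decoder update would run on-line against the spiking activity of the population instead of these rate approximations.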

Resources

We will be bringing multi-chip setups comprising neuromorphic multi-neuron chips, with both off-chip and on-chip learning circuits. Participants will be expected to learn to do experiments with these chips and use them to carry out the projects listed above. Some of the HW platforms that we will be bringing require you to install some SW on your laptops, so please read on and download all the relevant material (papers, code, documentation, etc.) before coming to Telluride.

  • (if you have your own HW please include it here)
  • Convolution chips and Gabor filter arrays on FPGAs brought from IMSE-CNM-CSIC
  • Additional DVS retinas from IMSE-CNM-CSIC
  • Spartan6 setup to merge up to 4 retinas into a single AER flow on serial SATA using 32-bit events, from IMSE-CNM-CSIC

Reading material

Learning rule implemented on hardware

  • The paper that describes the learning rule that is implemented on the neuromorphic chips is Brader_etal07.
  • The paper that describes the chip with those learning-rule circuits is Indiveri_Chicca11, and the one on spike-based learning experiments/results is Mitra_etal09.

  • A short half-page description of the same learning rule, applied to learning of spatio-temporal patterns, is here (section 2.4).
  • A description of the chip that has programmable weights that can be trained with STDP-type rules is Moradi_Indiveri11.

Spike-based Learning Rules

State-dependent computation and WTA networks

  • Recurrent Competitive Networks Can Learn Locally Excitatory Topologies  Jug_etal2012

Dynamic Neural Fields architectures

See  http://robotics-school.org/ for more material.

Assembling networks of AER modules
