Neuromorphic Computer Vision on Wearable Devices

Members: Boris Duran, Luis Camunas, Cheston Tan, Jorg Conradt, Ching Teo, Daniel Neil, David Karpul, Estela Bicho, Francisco Barranco, Cornelia Fermuller, Greg Cohen, Garrick Orchard, Himanshu Akolkar, John Chiasson, Julien Martel, Jamal Molin, Jonathan Tapson, Kayode Sanni, Marcello Mulas, Manu Rastogi, Mark Wang, Michael Pfeiffer, Ryad Benjamin Benosman, Sadique Sheik, Stephen Deiss, Sergio Davies, Shih-Chii Liu, Sio-Hoi Ieng, Shashikant Koul, Timmer Horiuchi, Tony Lewis, Tobi Delbruck, Terry Stewart, Vikram Ramanarayanan, Andre van Schaik, Wang Wei Lee, Xavier Lagorce, Yulia Sandamirskaya, Yezhou Yang

Organizers: Michael Pfeiffer (UZH), Garrick Orchard (SINAPSE, NUS), Cornelia Fermuller (U Maryland), Ryad Benjamin Benosman (UPMC, Institut de la Vision)

Projects

This is a list of project ideas. If you have your own idea, perfect: please let us know! We will support you with equipment, software, and expertise.

  • Databases: collect a database of conventional and ATIS/DVS data from moving vision sensors
  • SLAM: compare classical to event-based SLAM
  • Feature detection: building new event-based feature detectors and matching them
  • Object recognition: mid-level operations and deep networks for recognition of objects
  • Face recognition: recognizing faces from first-person perspective
  • Motion Discrimination: detecting and removing moving objects from scenes
  • Segmentation: boundary-based segmentation of objects from the background, and learning of characteristic boundaries
  • Gesture Recognition: designing a gesture interface for the goggles
  • Augmented Reality: overlay the goggle display with useful information (recognized objects, navigation symbols, ...)

Lectures, Tutorials, and Slides

  • 01 July: Field tutorial: Wearable computing: an opportunity for neuromorphic vision (PDF)
  • 01 July: Greg Cohen: Visual SLAM: Informal tutorial
  • 03 July: Cheston Tan: Wearable Computing Technology: A Brief Overview (PDF)
  • 03 July: Xavier Lagorce, Garrick Orchard: The ATIS sensor: Hands-on tutorial
  • 08 July: Michael Pfeiffer: Deep Learning in Computers, Chips and Brains: A Tutorial (PDF)
  • 14 July: Cheston Tan: Towards a unified account of face processing (Related paper)

Focus and goals of this topic area

Future consumer electronic devices will move beyond mobile devices such as phones and tablets toward wearable devices such as glasses or watches. Such devices will be always on and act as ubiquitous assistants that provide users with useful information about their environment. Because such devices are meant to be worn comfortably while being active all the time and supporting the user in real time, it is crucial that they are small, fast, and consume as little energy as possible. These criteria overlap to a large degree with the strengths of neuromorphic engineering, making wearable devices an important future application area for our field. Applications include navigation systems, touch-free user interfaces for public spaces, and smart assistants for specialists such as doctors or engineers.

This workgroup offers projects around a neuromorphic vision system that acts as a smart wearable assistant: goggles that constantly monitor the environment and provide useful information to their user in real time in an augmented-reality scenario. Our topic area combines state-of-the-art neuromorphic vision sensors with novel event-based machine learning approaches for real-time object and scene recognition in hierarchical neural networks, localization and navigation algorithms, and augmented-reality technologies.

Our group integrates experts in computer vision (in particular event-based vision on silicon retinas such as the DVS and ATIS), machine learning, and neuromorphic computing. We will invite experts in wearable computing and SLAM (Simultaneous Localization And Mapping).

ATIS Sensor

Faculty of the Topic Area

Name                   | Institution                                    | Expertise                     | Time            | Website
Ryad Benjamin Benosman | Institut de la Vision (UPMC Paris)             | Event-based vision            | 29 June-19 July | WWW
Garrick Orchard        | SINAPSE Institute (Singapore)                  | Neuromorphic Vision           | 29 June-19 July | WWW
Michael Pfeiffer       | Institute of Neuroinformatics (UZH/ETH Zurich) | Neuromorphic computing        | 29 June-19 July | WWW
Cornelia Fermuller     | University of Maryland                         | Computer Vision               | 29 June-19 July | WWW
Cheston Tan            | I2R (Singapore)                                | Wearable Vision; Neuroscience | 29 June-19 July | WWW
Greg Cohen             | Univ. of Western Sydney                        | Visual SLAM                   | 29 June-19 July | WWW
Xavier Lagorce         | Institut de la Vision (UPMC Paris)             | Event-based vision            | 29 June-19 July |
Daniel Neil            | Institute of Neuroinformatics (UZH/ETH Zurich) | Event-based DBNs              | 29 June-19 July | WWW
Francisco Barranco     | University of Maryland                         | Visual recognition            | 29 June-19 July | WWW

Description of the scenario

A person wearing “smart” glasses wants to navigate between two locations in the town of Telluride. The user initially chooses a target destination through a touchless, hand-gesture-based interface. At recognizable, previously seen locations, the glasses will display simple navigational commands (turn left, turn right, go forward, turn around) to lead the user to the next location, ultimately arriving at the initially selected destination. And because this is Telluride, our sensors will all be neuromorphic, and the recognition algorithms as well.

In order to achieve these goals, the algorithm constantly needs to localize and orient the person on a previously acquired map using visual input, relying on low-level features and landmarks (salient houses, street signs, objects, mountain silhouettes, ...). Training data for the landmarks will be collected offline to generate the map. An initial map may be constructed from Google Maps or another mapping service, but will be augmented with the positions of the visually recognized objects.
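To make this concrete, here is a minimal Python sketch of the map-and-navigation logic described above. All names, landmarks, and commands are hypothetical placeholders: the route is a list of waypoints, each carrying the landmark labels recognizable at that location and the command that leads toward the next waypoint, and the labels reported by the visual classifier are matched against the map to pick the command shown in the display.

    # Minimal sketch of the map-and-navigation logic (all names hypothetical).
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Waypoint:
        name: str
        landmark_ids: List[str]          # labels produced by the visual classifier
        command_to_next: Optional[str]   # "turn left", "turn right", "go forward", ...

    # Toy route; real landmarks and positions come from the offline mapping phase
    # (an initial Google Maps layout augmented with visually recognized objects).
    ROUTE = [
        Waypoint("start",       ["red_house", "street_sign_A"], "go forward"),
        Waypoint("main_corner", ["clock_tower"],                "turn left"),
        Waypoint("destination", ["conference_center"],          None),
    ]

    def navigation_command(recognized: List[str], route: List[Waypoint]) -> Optional[str]:
        """Return the display command for the first waypoint whose landmarks match."""
        for wp in route:
            if any(lm in recognized for lm in wp.landmark_ids):
                return wp.command_to_next
        return None  # no known landmark in view

    # Example: the classifier reports a clock tower in view -> "turn left"
    print(navigation_command(["clock_tower", "pedestrian"], ROUTE))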

This will be the very first demonstration of a usable neuromorphic wearable device, so we are all very excited to get this going in Telluride and to focus on our one big goal. In order to achieve this big goal, several sub-goals have to be reached, so everyone's expertise will be useful. If you are good at or interested in computer vision, localization algorithms, computer graphics, device interfacing, vision with silicon retinas, sensory fusion, machine learning, or related fields, talk to us; we will surely have a project for you.

What hardware will be available?

The lab in Paris has developed a wearable neuromorphic vision display: several pairs of electronic glasses connected to ATIS vision sensors worn on a helmet. We will use Epson Moverio BT-100 and BT-200 smart glasses and various vision sensors (ATIS and DVS neuromorphic vision sensors, as well as conventional cameras). Code will run on laptops carried in backpacks, and we will use additional sensors such as GPS and inertial sensors for acquiring ground-truth data.
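Although the drivers differ (C++ for the ATIS, Java/jAER for the DVS), both sensor families deliver address-events that can be treated for offline analysis as (x, y, polarity, timestamp) tuples with microsecond timestamps. The Python sketch below (names are illustrative, not part of any driver API) shows one way to slice such a stream into fixed time windows, e.g. for aligning events with conventional camera frames or with GPS/IMU ground-truth samples.

    # Sketch of the common ground between the sensors: DVS and ATIS both emit
    # address-events, treated here as (x, y, polarity, timestamp_us) tuples.
    from collections import namedtuple
    from typing import Iterable, Iterator, List

    Event = namedtuple("Event", ["x", "y", "polarity", "timestamp_us"])

    def slice_into_windows(events: Iterable[Event], window_us: int = 10_000) -> Iterator[List[Event]]:
        """Group a time-ordered event stream into fixed windows, e.g. to compare
        against conventional camera frames or GPS/IMU ground-truth samples."""
        window: List[Event] = []
        window_end = None
        for ev in events:
            if window_end is None:
                window_end = ev.timestamp_us + window_us
            if ev.timestamp_us >= window_end:
                yield window
                window = []
                window_end = ev.timestamp_us + window_us
            window.append(ev)
        if window:
            yield window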

What models will we use?

For classifying the observations we will use two of the most advanced event-based vision frameworks. HFirst (Orchard et al. 2013) is an event-based variant of the well-known HMAX model for computer vision and allows very fast recognition of objects and landmarks. We will also use event-based Deep Belief Networks (DBNs) (O'Connor et al. 2013) for fusing information from different sensors. Finally, we will use visual SLAM to learn maps of the environment from neuromorphic vision input as the user moves, and to localize the user given the output of the visual classifiers.
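The sketch below shows, with toy stand-ins only, how these pieces could be chained: an HFirst-style classifier turns a burst of events into landmark scores, an event-based-DBN-style step fuses those scores with inertial data, and a visual-SLAM-style step turns the fused belief into a coarse position estimate. None of the function bodies reflect the real models; they merely fix the interfaces between the stages.

    # Toy pipeline sketch (stand-ins only; the real HFirst, DBN, and SLAM code
    # lives in the C++/jAER toolchains used at the workshop).

    def hfirst_classify(events):
        """Stand-in for HFirst: map a burst of events to landmark scores."""
        return {"clock_tower": 0.8, "red_house": 0.1}

    def dbn_fuse(landmark_scores, imu_heading_deg):
        """Stand-in for event-based DBN fusion of visual and inertial evidence."""
        return {"landmarks": landmark_scores, "heading_deg": imu_heading_deg}

    def slam_update(belief):
        """Stand-in for the visual-SLAM update: report the map node of the
        strongest landmark together with the fused heading."""
        best = max(belief["landmarks"], key=belief["landmarks"].get)
        return {"near": best, "heading_deg": belief["heading_deg"]}

    def process_burst(events, imu_heading_deg):
        scores = hfirst_classify(events)
        belief = dbn_fuse(scores, imu_heading_deg)
        return slam_update(belief)

    print(process_burst(events=[], imu_heading_deg=42.0))
    # -> {'near': 'clock_tower', 'heading_deg': 42.0}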

What papers should I read?

  • O’Connor, Neil, Liu, Delbruck, Pfeiffer: Real-time classification and sensor fusion with a spike-based Deep Belief Network. Frontiers in Neuroscience (Neuromorphic Engineering), 2013. (PDF)
  • Orchard, Jacob, Vogelstein, Etienne-Cummings: Fast neuromimetic object recognition using FPGA outperforms GPU implementations. IEEE TNNLS, 2013. (PDF)
  • Benosman, Clercq, Lagorce, Ieng, Bartolozzi: Event-Based Visual Flow. IEEE Transactions on Neural Networks and Learning Systems, 2014. (PDF)
  • Li, Wang, Goh, Lim, Tan: A Wearable Cognitive Vision System for Navigation Assistance in Indoor Environment. International Conference on Neural Information Processing (ICONIP), 2013. (PDF)
  • Posch, Matolin, Wohlgenannt: A QVGA 143 dB dynamic range asynchronous address-event PWM dynamic image sensor with lossless pixel-level video compression. IEEE International Solid-State Circuits Conference (ISSCC), pp. 400–401, Feb 2010. (PDF)

What software should I download?

Our software framework will run under Linux (Debian-based) and should also work on a Mac. Running it under Windows might be possible but could require some workarounds. We will have laptops available for the project with all necessary software installed, and you can also work on models or data analysis offline using your favorite software. Software for the ATIS sensor is written in C++, software for the DVS sensor in Java (jAER). For model testing or data analysis we recommend Python or Matlab.
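For offline work in Python, recordings made with jAER can be read directly. The sketch below assumes the classic AEDAT 2.0 layout (ASCII header lines starting with '#', followed by 8-byte big-endian records of a 32-bit address and a 32-bit timestamp in microseconds); how the address encodes x, y, and polarity differs per sensor, so that decoding is intentionally left out here.

    # Minimal offline-analysis sketch: read a jAER AEDAT 2.0 recording and return
    # raw (address, timestamp_us) pairs. Assumes '#'-prefixed header lines and
    # 8-byte big-endian [address, timestamp] records; per-sensor address decoding
    # (x, y, polarity bit layout) is not attempted.
    import struct

    def read_aedat2(path):
        """Return a list of (address, timestamp_us) tuples from an AEDAT 2.0 file."""
        with open(path, "rb") as f:
            # Skip the ASCII header.
            while True:
                pos = f.tell()
                line = f.readline()
                if not line.startswith(b"#"):
                    f.seek(pos)
                    break
            data = f.read()
        # Truncate any trailing partial record, then unpack 8-byte big-endian pairs.
        usable = data[: len(data) - len(data) % 8]
        return [(addr, ts) for (addr, ts) in struct.iter_unpack(">II", usable)]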

The software for this project will be hosted on Bitbucket. Links will be provided soon.

Deep Learning Software


Schedule

  • 30 June: Introduction, Project Presentation
  • 1 July, 9-10AM: Field tutorial: Wearable computing: an opportunity for neuromorphic vision
  • 2 July, 3-4PM: First group meeting, project assignments