Social Neuroscience and Robotic Pet Project

Members: Christoph Maier, Daniel B. Fasnacht, Diana Sidtis, Heather Bell, James Bonaiuto, Jongkil Park, Kevin Mazurek, Mehdi Khamassi, Magdalena Kogutowska, Matthew Runchey, Nai Ding, Pablo Gomez Esteban, Pam White, Sahar Akram, Sam Fok, Sergi Bermudez i Badia, Sudarshan Ramenahalli, Timmer Horiuchi, Thomas Murray, Tobi Delbruck, Ulysses Bernardet, Ying-Yee Kong, Yulia Sandamirskaya

Organizers: Sergi Bermudez, Ulysses Bernardet

Faculty Dates

Sergi Bermúdez Madeira-ITI 7/1/2012 7/21/2012
Ulysses Bernardet 7/1/2012 7/21/2012
Sergio Pellis Univ. Lethbridge 7/9/2012 7/13/2012
Mehdi Khamassi ISIR 6/30/2012 7/8/2012
James Bonaiuto Caltech 7/8/2012 7/14/2012
Andreas Andreou The Johns Hopkins University 6/29/2012 7/8/2012
Diana Sidtis NYU 7/1/2012 7/5/2012


Focus and goals of the topic area

The proposed topic revolves around the theme of “social neuroscience” and aims at building a robotic pet (cat, dog, hamster, etc.) that a user can interact with. The behaviour of the robotic pet should be goal oriented and provide an experience of meaningful play.

The project puts the focus on the fundamental importance that social interaction and communication play in how biological systems have evolved; social interaction is not a “bonus”, but a key faculty for many biological systems. We want to put neurobiological models, e.g. of imitation, empathy, and behaviour regulation, in their functional context, i.e. the context of social interaction.

To achieve this goal the robotic pet will have to interact in a social real-world environment and hence comprise the entire sense-think-act loop.

Applications of the robotic pet include:

  1. Monitoring, companionship, and cognitive stimulation for elderly people
  2. Social interaction, emotion expression and recognition
  3. Sustaining social interaction for autistic children

To allow closing the sense-think-act loop from the onset, the project will initially use a number of algorithmic components (e.g. the SHORE toolkit for user recognition and emotion detection, the Kinect for gesture identification). Over the course of the project, algorithmic components will gradually be replaced by neuromorphic components.
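The gradual replacement described above is easiest if algorithmic and neuromorphic components share a common interface. A minimal sketch of this idea, with entirely hypothetical class and function names (the real SHORE/Kinect wrappers are not specified in this document):

```python
from abc import ABC, abstractmethod

class EmotionDetector(ABC):
    """Common interface so an algorithmic component can later be
    swapped for a neuromorphic one without touching the rest of the loop."""
    @abstractmethod
    def detect(self, frame) -> str: ...

class AlgorithmicDetector(EmotionDetector):
    """Stand-in for a conventional toolkit (e.g. a SHORE wrapper; hypothetical)."""
    def detect(self, frame) -> str:
        return "happy"  # placeholder result

class NeuromorphicDetector(EmotionDetector):
    """Later drop-in replacement driven by a spiking model (hypothetical)."""
    def detect(self, frame) -> str:
        return "happy"

def sense_think_act(detector: EmotionDetector, frame) -> str:
    emotion = detector.detect(frame)                          # sense
    action = "approach" if emotion == "happy" else "retreat"  # think
    return action                                             # act

print(sense_think_act(AlgorithmicDetector(), frame=None))  # approach
```

Because both detectors satisfy the same interface, swapping `AlgorithmicDetector` for `NeuromorphicDetector` leaves the rest of the sense-think-act loop unchanged.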


The “scenario” of the behaviour of the robotic pet (RP) is defined by a number of “primary behaviours” such as Feeding/Drinking, Playing (social and solitary), Bonding, Defence, Flight, and Sleeping. Each of these behaviours should be meaningful and goal oriented by itself. At the top level, an overall control will integrate the “primary behaviours” into coherent behaviour. Detailed descriptions of the “primary behaviours” can be found later in this document.

A prose description should serve to illustrate the gist of the project:

Initially RP is playing in a solitary fashion, e.g. by exploring an object or space. Once RP encounters a human (an opportunity), it attempts to bond with him/her by imitating his/her movements. After pursuing this activity for a while, RP changes to foraging and (potential) “consumption” of food. Towards the end of the “meal” RP observes two humans engaged in a playing interaction, and wants to play along. Unfortunately, the humans get into a dispute with each other about the game they are playing. To defuse the situation, RP attempts to make the humans end their confrontational behaviour. After all this excitement RP feels rather tired and retreats to sleep.

Note that the above description strings together several of the “primary behaviours” (solitary play, bonding, feeding, social play, defence, sleeping). A key feature of the overall behaviour control is that the system responds not only to “hard” internal goals such as feeding, but also rises to opportunities (“soft” goals), e.g. the presence of a playmate.
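The interplay of hard internal goals and soft opportunities can be sketched as a simple arbitration scheme. The function name, the drive values, and the fixed opportunity bonus below are all illustrative assumptions, not part of the actual architecture:

```python
def select_behaviour(drives, opportunities, bonus=0.3):
    """Pick the primary behaviour with the highest urgency.

    drives: dict mapping behaviour -> internal urgency in [0, 1]
            ("hard" goals such as hunger).
    opportunities: set of behaviours currently afforded by the
            environment ("soft" goals, e.g. a playmate in view),
            which receive an additive bonus."""
    scores = {b: u + (bonus if b in opportunities else 0.0)
              for b, u in drives.items()}
    return max(scores, key=scores.get)

drives = {"feeding": 0.8, "playing": 0.6, "sleeping": 0.2}

# No playmate around: feeding wins on internal urgency alone.
print(select_behaviour(drives, opportunities=set()))        # feeding
# A playmate appears: the soft goal tips selection to playing.
print(select_behaviour(drives, opportunities={"playing"}))  # playing
```

A real controller would of course use something richer than winner-take-all (e.g. the basal ganglia models listed under the project areas), but the sketch captures how an opportunity can override a stronger internal drive.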

System Architecture

The proposed architecture comprises "primary behaviours" and one single "secondary" control loop (Figure 1). The aim is to have a single architecture that integrates primary behaviours and yields meaningful, coherent behaviour. The organization into these two levels of control is on the one hand motivated by knowledge about the organization of biological system, and on the other hand, the compartmentalization into primary behaviours should facilitate the parallel development of components.

A key feature of the architecture is that all behaviours are regulated and goal oriented. This means that each behaviour must define a target state, and means to measure this state. This approach differs from other approaches to “interaction design” that focus more on (feed-forward) perception and expression of emotions. In these models (e.g. the OCC model (Ortony, Clore, & Collins, 1990)) the experience (and expression) of emotion is the (final) end product of an appraisal process; emotions are not experienced (and expressed) to achieve a certain state. It is the opinion expressed here that this cannot be an adequate treatment of the function and processing of emotions in humans (or other higher animals), because goal-free emotion experience is not an evolutionarily stable strategy (ESS; Maynard Smith).
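The requirement that each behaviour defines a target state and a way to measure the current state amounts to a homeostatic control loop. A minimal sketch, with hypothetical names and a toy "energy" variable standing in for whatever the real behaviour regulates:

```python
class RegulatedBehaviour:
    """Minimal homeostatic loop: a behaviour defines a target state
    and a way to measure the current state; its drive is the error."""
    def __init__(self, name, target, measure):
        self.name = name
        self.target = target
        self.measure = measure  # callable returning the current state

    def error(self):
        """Signed distance from the target state."""
        return self.target - self.measure()

    def satisfied(self, tolerance=0.05):
        return abs(self.error()) <= tolerance

# Toy example: feeding regulates an internal energy level towards 1.0.
energy = {"level": 0.4}
feeding = RegulatedBehaviour("feeding", target=1.0,
                             measure=lambda: energy["level"])

while not feeding.satisfied():
    energy["level"] = min(1.0, energy["level"] + 0.2)  # "consume food"

print(round(energy["level"], 2))  # 1.0
```

The point of the sketch is the contrast drawn in the text: the behaviour runs until a measurable target state is reached, rather than being the terminal output of an appraisal process.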

Figure 1: Overall architecture of the system. “Faculties” are components of sensory information processing (perception, cognition)


As the main integration software platform we will use the iqr simulator ( http://iqr.sourceforge.net/?file=kop1.php). iqr is simulation software for graphically designing and controlling large-scale neuronal models. Simulations in iqr can control real-world devices in real time, and iqr can be extended with new neuron and synapse types as well as custom interfaces to hardware.
The choice of iqr by no means excludes the use of other simulators (neuronal or otherwise). It is based, among other things, on the successful use of the software as an integration platform for heterogeneous components in larger projects (e.g. Neurochem).

We will have many hardware platforms and software tools available for you to work on your project. These include an iCub head from the Technical University of Lisbon (thanks to Alexandre Bernardino), two AISoy robots provided by AISoy Robotics, a brain-computer interface, the iqr neuronal simulator, and much more. Check what will be available here:  http://neuromorphs.net/nm/attachment/wiki/2012/soc12/SNRP_Implementation.pdf

Areas for possible specific topic projects

  1. User recognition and emotion detection
  2. Imitation (e.g. MNS)
  3. Behavior regulation (e.g. Basal ganglia)
  4. High level motor control (e.g. PAG)
  5. Learning of sensorimotor coordination of gaze (e.g. STDP)
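As an illustration of item 5, the standard pair-based STDP rule can be sketched in a few lines. The parameter values below are common textbook choices, not ones prescribed by this project:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: weight change for a pre/post spike-time
    difference dt = t_post - t_pre (in milliseconds).

    Pre-before-post (dt > 0) potentiates; post-before-pre
    (dt < 0) depresses; both decay exponentially with |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # potentiation
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)  # depression
    return 0.0

print(stdp_dw(10.0) > 0)   # True: causal pairing strengthens the synapse
print(stdp_dw(-10.0) < 0)  # True: anti-causal pairing weakens it
```

In a gaze-coordination project the same rule would be applied to the synapses linking visual input to motor-map neurons, so that repeatedly successful gaze shifts are reinforced.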

For some examples and a detailed description of possible projects please check the file here:  http://neuromorphs.net/nm/attachment/wiki/2012/soc12/SNRP_Sub%2BProjects.2.pdf

Background literature <-- Must read!


Reinforcement Learning:

Behavior regulation and Play:

Mirror Neuron System modelling:

Social speech: voice, prosody, and formulaic language (https://neuromorphs.net/nm/attachment/wiki/2012/soc12/Social%20speechREV.docx) :

Insect neurobiology