The project focused on the prediction of manipulation actions. When we perform actions with our hands on objects, such as 'drinking from a cup' or 'pouring from a cup,' we first approach the object with our hand(s), then we touch the object, and then our hand(s) move with the object to perform the action. We can thus speak of three action phases: grasp preparation, touch, and movement with the object. Intuitively, the way we approach the object, the way we grasp it, and the beginning of the movement trajectory are already characteristic of the action. In this project we wanted to find out how well, and how early, the action can be predicted. We developed computational methods for action prediction from images and from DVS (Dynamic Vision Sensor) data, and for predicting the forces on the fingers from visual input. We also evaluated human performance on the same visual task through psychophysical experiments. Finally, we recorded EEG data from subjects performing actions and subjects observing actions, and developed methods to classify the action from the EEG signal.
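To make the early-prediction question concrete, the sketch below shows how a recurrent network (an LSTM, in the spirit of the vision project listed below) can emit a class score after every observed frame, so a prediction can be read out at any fraction of the action. This is a minimal illustrative sketch, not the project's actual code; all names and dimensions (EarlyActionPredictor, FEATURE_DIM, NUM_ACTIONS) are assumptions.

```python
# Minimal sketch: early action prediction with a recurrent network.
# Assumed setup: each video frame is summarized as a feature vector
# (e.g., hand/object cues), and the task is to classify the ongoing
# action from only the frames observed so far.
import torch
import torch.nn as nn

FEATURE_DIM = 128   # assumed per-frame visual feature size
NUM_ACTIONS = 2     # e.g., 'drinking from a cup' vs. 'pouring from a cup'

class EarlyActionPredictor(nn.Module):
    def __init__(self, feature_dim=FEATURE_DIM, hidden_dim=64,
                 num_actions=NUM_ACTIONS):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, x):
        # x: (batch, time, feature_dim). The LSTM output at each time
        # step depends only on frames seen so far, so applying the
        # classifier head per step yields a prediction after every frame.
        h, _ = self.lstm(x)
        return self.head(h)

if __name__ == "__main__":
    model = EarlyActionPredictor()
    clip = torch.randn(1, 90, FEATURE_DIM)   # 90 frames of (random) features
    scores = model(clip)                     # (1, 90, NUM_ACTIONS)
    # Read out the prediction after 25%, 50%, and 100% of the action,
    # which is how "how early can we predict?" can be quantified.
    for frac in (0.25, 0.5, 1.0):
        t = int(frac * clip.shape[1]) - 1
        probs = scores[0, t].softmax(dim=-1)
        print(f"after {frac:.0%} of the action: {probs.tolist()}")
```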

List of Projects

Learning Action Prediction from Vision using a Recurrent Neural Network

Learning Force Prediction from Vision

Predicting Actions from Forces

Psychophysics on Action Prediction

Action Prediction from EEG Data

A DVS-Based Hand Tracker

Action Prediction from DVS Data