
Human Auditory Cognition: Communicating with EEG and Virtual Reality Links (The Matrix)

Members: Arindam Basu, Andrew Cassidy, Carina Graversen, Diederik Paul Moeys, David Anderson, Daniel Wong, Emina Alickovic, Emily Graber, Francisco Cervantes Constantino, Cornelia Fermüller, Giovanny Sanchez-Rivera, Garrick Orchard, George Ritmiller, Paul Isaacs, Jens Hjortkjær, Jonathan Tapson, Jie Jack Zhang, Kate Fischl, Kan Li, Lucas Parra, Malcolm Slaney, Elizabeth Margulis, Maarten De Vos, Ernst Niebur, Nima Mesgarani, Peter Diehl, Michael Pfeiffer, Sahar Akram, Shih-Chii Liu, Soumyajit Mandal, Tobi Delbruck, Ulrich Pomper, Zhaokang Chen, Zonghua Gu

Organizers: Shihab Shamma (Univ. of Maryland), Malcolm Slaney (Google), Alain de Cheveigné (ENS/CNRS, France)

This topic group aims to measure neuronal signals that reflect the state of an individual brain's auditory and visual processing centers. Specifically, we seek to develop reliable on-line decoding algorithms that extract from the EEG signal the sensory-cortical responses corresponding to an auditory or visual source among many in a complex scene. The goal is to understand how the perception and coding of such complex signals are represented and shaped by top-down cognitive functions (such as attention and recall).

The basic scientific approaches needed are highly interdisciplinary, spanning the development of signal-analysis algorithms and models of cortical function to experimental EEG recordings during challenging psychoacoustic tasks. While the center of our Telluride work will revolve around EEG measurements, we also hope to run simple psychophysical experiments that will allow more people to participate in the overall project.

The biggest obstacle to decoding brain states and sensory responses is the difficulty of recording clean and sustained signals that can be reliably associated with ongoing visual and audio stimuli. This is especially important if one is to detect and interpret the relatively small response perturbations due, for example, to cognitive priming. However, recent studies have been quite successful, and we have conducted pilot studies with a group of invited researchers gathered at the Telluride Workshop on Neuromorphic Cognition from 2012 to 2014. The 2012 work is described informally at http://www.signalprocessingsociety.org/technical-committees/list/sl-tc/spl-nl/2012-11/TellurideNeuromorphicCognitionWorkshop/ and a paper on the 2012 work was published last year: James A. O'Sullivan, Alan J. Power, Nima Mesgarani, Siddharth Rajaram, John J. Foxe, Barbara G. Shinn-Cunningham, Malcolm Slaney, Shihab A. Shamma, and Edmund C. Lalor, "Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG," Cerebral Cortex, 2014.
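
To make the decoding approach concrete, here is a minimal sketch of the linear stimulus-reconstruction idea behind this line of work. The lag count, ridge parameter, and function names are illustrative assumptions, not the exact pipeline of the paper above: a ridge regression maps time-lagged EEG to the attended speech envelope, and attention is decoded by checking which talker's envelope correlates better with the reconstruction.

    import numpy as np

    def lag_matrix(eeg, n_lags):
        # Stack time-lagged copies of each channel; eeg is (samples, channels).
        n_samples, n_channels = eeg.shape
        X = np.zeros((n_samples, n_channels * n_lags))
        for lag in range(n_lags):
            X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
        return X

    def train_decoder(eeg, envelope, n_lags=32, ridge=1e3):
        # Ridge regression: w = (X'X + rI)^-1 X' envelope.
        X = lag_matrix(eeg, n_lags)
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)

    def decode_attention(eeg, env_a, env_b, w, n_lags=32):
        # Reconstruct the envelope from EEG, then pick the better-matching talker.
        recon = lag_matrix(eeg, n_lags) @ w
        r_a = np.corrcoef(recon, env_a)[0, 1]
        r_b = np.corrcoef(recon, env_b)[0, 1]
        return 'A' if r_a > r_b else 'B'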

This year we focus on attention: decoding the direction of a subject's "auditory gaze," the nature of the sound being attended (e.g., speech vs. music), the way the past changes our perception of the present (e.g., repetition in music), how feedback can maximize decoder output, and so on. The current sub-projects are:

(1) In-Ear EEG - The EarEEG collects signals from within the ear canal. What kind of signal do we get, and how does it compare to scalp EEG? What information is present (or missing)?

(2) Decoding Location from EEG - Can we determine the direction of a sound source, or the direction of a listener's attention? Left ear vs. right ear? Spatial position in one of four quadrants? (A minimal classification sketch follows this list.)

(3) Decoding Speech vs. Music - Decoding the nature of the attended sound from EEG. Can we tell speech vs. poetry vs. rap vs. music? Are they more alike or different?

(4) Encoding of Repetition in (Musical) EEG - The first time we hear a musical phrase, it sounds different from when we hear it repeated again (and again...). Can we see EEG correlates of how the music grows on us? A parallel experiment using the same stimulus is being run with ECoG (Eddie Chang), and also in ferrets.

(5) Decoding Algorithms - Pushing the limits of EEG decoding with new techniques such as DNNs (deep neural networks) and other non-linear approaches. Can we do a better job of decoding EEG signals and reconstructing, or classifying, the input sound? (See the sketch after this list.)

(6) Enhancing a Decoder with Biofeedback - Builds upon Mitsuo Kawato's DecNef paradigm, in which a subject learns to maximize the output of a previously trained decoder. Given any of our decoding projects, can we make it work better via biofeedback? (A closed-loop sketch follows this list.)

(7) Human Decoding - EEG to sound (and back). We know that it is possible (sort of) to reconstruct the stimulus from the EEG. Can we communicate this way? Can we train a user to see/hear an EEG signal from another person's brain and then make an intelligent decision about what was being said?

(8) Audio and Video EEG - How do images and sounds combine to make an EEG signal?
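
For sub-project (2), the sketch below shows one plausible baseline: classify attend-left vs. attend-right trials from per-channel log band power with a linear classifier. The alpha band, sampling rate, and classifier are placeholder assumptions, and the random arrays stand in for real recordings (so the score should hover around chance).

    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def band_power(trials, fs=256, band=(8.0, 12.0)):
        # trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels)
        freqs, psd = welch(trials, fs=fs, nperseg=fs)  # PSD along the last axis
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.log(psd[..., mask].mean(axis=-1))    # log alpha-band power

    rng = np.random.default_rng(0)                     # simulated stand-in data
    X = band_power(rng.standard_normal((40, 32, 2 * 256)))
    y = rng.integers(0, 2, size=40)                    # 0 = left, 1 = right
    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, y, cv=5).mean())     # ~0.5 on random data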
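For sub-project (5), one way to frame the question is a direct comparison between the linear ridge decoder and a nonlinear one on the same lagged-EEG features (see lag_matrix above). A small scikit-learn multilayer perceptron stands in here for the deeper networks the project has in mind; the layer sizes and parameters are placeholders.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.neural_network import MLPRegressor

    def compare_decoders(X_train, y_train, X_test, y_test):
        # X: lagged EEG features; y: speech envelope. Returns the test-set
        # reconstruction correlation for each decoder.
        scores = {}
        for name, model in [('ridge', Ridge(alpha=1e3)),
                            ('mlp', MLPRegressor(hidden_layer_sizes=(64, 32),
                                                 max_iter=500, random_state=0))]:
            model.fit(X_train, y_train)
            recon = model.predict(X_test)
            scores[name] = np.corrcoef(recon, y_test)[0, 1]
        return scores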
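And for sub-project (6), a closed-loop sketch of the DecNef-style idea: the decoder stays fixed while the subject receives continuous feedback proportional to its output. acquire_eeg_window is a hypothetical placeholder for a real amplifier interface; here it just simulates noise.

    import numpy as np

    rng = np.random.default_rng(0)

    def acquire_eeg_window(n_channels=32, n_samples=256):
        # Placeholder: pull one window of EEG from the amplifier (simulated here).
        return rng.standard_normal((n_channels, n_samples))

    def decoder_output(window, w):
        # Fixed, previously trained linear decoder (one weight per channel).
        return float(w @ window.mean(axis=1))

    w = rng.standard_normal(32)                  # stands in for trained weights
    for trial in range(10):
        score = decoder_output(acquire_eeg_window(), w)
        feedback = 1.0 / (1.0 + np.exp(-score))  # squash score to [0, 1]
        print(f"trial {trial}: feedback level {feedback:.2f}")  # drive a display/sound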

We have data. See the dataset descriptions here: EEG Datasets


Invited Participants of the Topic Area

Name | Institution | Expertise | Time | Website
Malcolm Slaney | Google and Stanford | Audio Perception and Modeling | 28 June-18 July | http://www.slaney.org/malcolm/pubs.html
Shihab Shamma | ENS and UMd | | 28 June-18 July | https://www.isr.umd.edu/faculty/shamma
Sahar Akram | CUNY | | 28 June-18 July |
Alain de Cheveigné | ENS (Paris) | | 28 June-14 July | http://audition.ens.fr/adc/
Jens Hjortkjær | Oticon | | 28 June-11 July | http://www.dtu.dk/english/Service/Phonebook/Person?id=37956&tab=2&qt=dtupublicationquery
Maarten De Vos | Oxford | | 28 June-5 July | http://www.ibme.ox.ac.uk/research/biomedical-signal-processing-instrumentation/prof-m-de-vos
Lucas Parra | CUNY | | 28 June-3 July | http://bme.ccny.cuny.edu/people/faculty/lparra
Nima Mesgarani | Columbia | Neuroinspired computation | 1 July-9 July | http://www.ee.columbia.edu/~nima/web/Home.html
Elizabeth Margulis | Arkansas | | 8 July-18 July | http://www.elizabethmargulis.com/
