Implementing high-level saliency features and top-down attention for audio

Subproject 1: Top-down attention

Description: Reynolds and Heeger model top-down attention with an attention field that is task dependent. Here we want to learn these attention fields automatically for a specific purpose in a real task such as phoneme classification.
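
In the Reynolds-Heeger normalization model of attention, the response is the stimulus drive multiplied by the attention field and divisively normalized by a suppressive drive pooled over position and feature. A minimal NumPy/SciPy sketch of that computation, as a starting point for treating the attention field as a learnable parameter (the Gaussian pooling kernel, parameter values, and array shapes are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalization_model(stimulus_drive, attention_field,
                        pool_sigma=(5.0, 2.0), semisaturation=0.1):
    """Reynolds-Heeger style normalization model of attention.

    stimulus_drive  : 2-D array E(x, theta) over position x feature.
    attention_field : 2-D array A(x, theta); > 1 at attended regions.
    pool_sigma      : per-axis width of the suppressive pooling kernel
                      (assumed Gaussian for this sketch).
    semisaturation  : additive constant in the divisive normalization.
    """
    excitatory = attention_field * stimulus_drive          # A * E
    suppressive = gaussian_filter(excitatory, pool_sigma)  # pooled drive S
    return excitatory / (suppressive + semisaturation)     # R = A*E / (S + sigma)

# Example: attention boosts one spectro-temporal region of a random drive.
drive = np.random.rand(100, 64)              # e.g. time x frequency channels
attn = np.ones_like(drive)
attn[40:60, 20:30] = 2.0                     # hypothetical attended region
response = normalization_model(drive, attn)
```

Learning the attention field for a task could then amount to parameterizing A(x, theta) and optimizing it against, for example, phoneme classification error.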

Subproject 2: High-level saliency features

Description: In much of the previous work on auditory attention, salient regions are invariably temporal windows of high information content. We would like to extend this notion to spectro-temporal regions of high information, which makes the traditional frame-based feature extraction schemes in audio processing inapplicable. We therefore draw inspiration from image processing techniques such as SIFT to extract local features, and aim to apply them to speech processing and/or auditory scene analysis.
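
To make the SIFT analogy concrete, one option is to render the log-spectrogram as a grayscale image and run a standard keypoint detector over it, yielding local spectro-temporal descriptors in place of frame-wise features. A minimal Python sketch using OpenCV's SIFT as a stand-in for whatever local-feature scheme is ultimately adopted (the function name and STFT parameters are illustrative assumptions, not the project's actual pipeline):

```python
import numpy as np
import cv2                          # pip install opencv-python (>= 4.4 for SIFT)
from scipy.io import wavfile
from scipy.signal import spectrogram

def spectrotemporal_keypoints(wav_path):
    """Detect SIFT keypoints on a log-spectrogram treated as an image."""
    rate, audio = wavfile.read(wav_path)
    if audio.ndim > 1:                               # mix down to mono
        audio = audio.mean(axis=1)
    freqs, times, spec = spectrogram(audio, fs=rate,
                                     nperseg=512, noverlap=384)
    log_spec = np.log(spec + 1e-10)
    # Rescale to an 8-bit image so the OpenCV detector can consume it.
    img = cv2.normalize(log_spec, None, 0, 255,
                        cv2.NORM_MINMAX).astype(np.uint8)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors        # locations/scales + 128-D descriptors
```

Each keypoint's (x, y) position indexes a (time, frequency) location and its scale reflects the spectro-temporal extent of the local pattern, which is exactly the kind of region a purely temporal window cannot isolate.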

Participants

  1. Kailash Patil (JHU) - Lead
  2. Malcolm Slaney (Yahoo)
  3. Mounya Elhilali (JHU)
  4. Jude Mitchell (Salk)
  5. Ozlem Kalinli (Sony)