Topic Area Attention and Selection

Group Members:

  • Adam O'Donovan, University of Maryland
  • Jorg Conradt, TU Munich
  • Hynek Hermansky, Johns Hopkins University
  • Jonathan Tapson, University of Cape Town
  • Malcolm Slaney, Microsoft Conversational Systems Lab and Stanford
  • Mounya Elhilali, Johns Hopkins University
  • Michele Rucci, Boston University
  • Ryad Benjamin Benosman, University Pierre and Marie Curie
  • Ralph Etienne-Cummings, JHU/ECE Department
  • Song Hui Chon, McGill University
  • Shih-Chii Liu, Institute of Neuroinformatics, UNI/ETHZ
  • Steve Kalik, Toyota Research Institute
  • Timmer Horiuchi, University of Maryland

Organizers: Ernst Niebur & Malcolm Slaney

See wiki:2010/results/att for results of this topic area.


  1. Christoph Posch (Caltech) - First two weeks, June 27 to July 11
  2. Jeremy Wolfe (Harvard) - July 1-9
  3. Daniel Pressnitzer (ENS Paris) - All three weeks
  4. Clara Suied - All three weeks
  5. Erv Hafter (UCB) - Unable to attend :-(
  6. Julien Martel (McGill?) - July 11-16
  7. Mounya Elhilali (JHU) - First week
  8. Shihab Shamma (Maryland) - All three weeks, but at World Cup till 7/11

Honored Guests

  1. Andreas Andreou (JHU) - First 10 days
  2. Howard Horvitz - 6/30-7/3
  3. Hynek Hermansky (JHU) -
  4. Daniel Mendat (JHU) - 7/1-7/5


Modern technology makes available vast streams of sensory data for surveillance and other purposes. In biology, the problem of sensory information overload is by no means recent. Nervous systems have been honed by evolution to solve it quickly, efficiently and thoroughly: essentially all animals, including insects, have developed mechanisms of selective attention. Attention is a primary cognitive function, and arguably one of the most important aspects of understanding the world around us. Without limiting, in a smart and situation-dependent manner, the amount of information that is processed in detail, higher-level cognitive processing would be impossible. Auditory attention is used to "hear out" a desired signal, yet attention can shift to sounds that are particularly salient. Visual attention is used to explore our world, directing the animal's gaze to the portions of the visual field needed to understand the scene. Importantly, all these inputs from different modalities have to be integrated and taken into account to make a "good" behavioral decision.

There are concrete models of attentional selection that are amenable to hardware implementation, in particular the saliency-map approach to bottom-up attention; prototypes of such implementations have been built in the past, including by Telluride regulars. There are also many new results from neurophysiology and psychophysics that should be discussed and introduced to Telluride participants.

We therefore believe that it is timely to introduce a dedicated topic group on attention and selection into the Telluride curriculum. Informal discussion groups in several previous years have shown the great interest in the topic, but participants necessarily had to assign it lower priority since their primary topics came first. It is time attention and selection were recognized as the central problems of cognition that they are. In the coming workshop, we will combine the expertise of modelers, neurophysiologists and psychophysicists to build attentional models---learning from auditory, visual and cognitive research---and demonstrate their function in realistic tasks. We aim to combine expertise on the perceptual and cognitive aspects of attention, to better understand this important behavior, and to build and demonstrate models of attention.

Furthermore, we found that the informal attention discussion group naturally attracted participants from several other groups, since attention is important in all sensory modalities, and both the auditory and visual groups have independently addressed questions of attention and saliency in recent years. The new group will be a natural venue to bring together participants from different application backgrounds (visual/auditory/robotics/...) and from different levels of approach (hardware/biology/cognitive/...).

For instance, we envision a robot navigation application in which different input modalities (vision/audition/sonar/tactile (whiskers)/...) must be integrated to solve a task, which will surely require a smart selection of which sensory input to use in a given situation. While the problems associated with the individual sensory modalities will be addressed by the specialists in each field (as, e.g., in the existing auditory and visual working groups), there is a separate class of problems, integration and selection (two sides of the same coin), which cannot be solved by any of these groups alone and which requires solving the problems of multimodal attention. It is these tasks that our topic group will focus on.
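One toy way to sketch such multimodal selection: combine a bottom-up salience score per modality with a top-down reliability weight and pick the stream to attend. The `select_modality` helper, its inputs, and the softmax weighting are all hypothetical illustrations, not a design the workshop has committed to:

```python
import numpy as np

def select_modality(saliencies, reliabilities, temperature=1.0):
    """Choose which sensory stream a robot should attend to.

    saliencies:    dict modality -> bottom-up salience score
    reliabilities: dict modality -> top-down reliability weight in [0, 1]

    The two factors are combined multiplicatively, then passed through a
    softmax so the attention weights are graded rather than a hard
    winner-take-all; the argmax still names a single winning modality.
    """
    names = sorted(saliencies)
    scores = np.array([saliencies[m] * reliabilities[m] for m in names])
    weights = np.exp(scores / temperature)
    weights /= weights.sum()
    winner = names[int(np.argmax(weights))]
    return winner, dict(zip(names, weights))

# Example: a loud, reliable visual event outcompetes audition and sonar.
winner, weights = select_modality(
    {"vision": 0.9, "audition": 0.4, "sonar": 0.2},
    {"vision": 1.0, "audition": 1.0, "sonar": 0.5})
```

Lowering a modality's reliability (e.g., vision in the dark) shifts the weights toward the remaining streams, which is the kind of situation-dependent selection the navigation task would exercise.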

Subproject Pages

Attention & Selection Project Directions -- Monday Thinking (Pre-Vision People)

  1. Understanding the world around us
    1. Grand Challenge - Understand multisensor data
    2. Binding auditory and/or visual events
  2. Auditory Saliency
    1. Can we define auditory saliency?
    2. Can we measure it?
      1. Detection or Reaction time models
      2. EEG
      3. Eye tracking or left/right decision
  3. Learning and Attention
    1. STDP (spike-timing-dependent plasticity) for learning signals
    2. Do we need attention for learning?

Specific Topics Area Projects -- Some Suggestions

  1. Auditory and Visual Saliency Models, both static and dynamic
  2. Hardware implementations
  3. Binaural Attention Model
  4. Robots that attend/system level integration of intelligent selection (as discussed in main text)
  5. Models of shifting attention
  6. Top-down aspect: psychophysics, neurophysiology, models
  7. Representation of attention: coordinate frames, attention to objects
  8. Definitions and ways to measure auditory saliency