Reverberant Auditory Scene Analysis
The world is rich in sounds and their echoes from reflecting surfaces, making acoustic reverberation a ubiquitous part of everyday life. We usually think of reverberation as a nuisance to overcome (it makes understanding speech or locating sound sources harder), but it also carries useful information, acting as a signature of the space it fills. Reverberation can tell us how big a room is, where we are inside it, and whether there are objects nearby. This has important implications not only for auditory spatial perception in typical individuals, but also for people with sensory loss. Sound supplies the bulk of distal environmental cues to blind and visually impaired people, since vision cannot perform its typical role of quickly assessing the size and shape of a space. Understanding reverberant perception also provides insight into the information that is lost in cases of hearing (or dual sensory) loss, especially since hearing aids are often optimized to suppress reverberation as background noise (e.g., Simon et al., 2012).
Direct and reverberant sounds usually overlap, reaching the ear as a single combined signal; yet, remarkably, the brain routinely separates them as part of a complex process generally termed auditory scene analysis. Auditory scene analysis matures on a much longer time scale than core hearing (Leibold, 2012), suggesting a cortical locus for these higher-level functions; brain responses in typical adult listeners have been shown to contain independently decodable signatures of source sounds and their reverberant surrounds (Teng et al., 2017). Recent work demonstrates that background noise must have specific statistical acoustic properties to be identified as reverberation and successfully segregated from sound sources (Traer & McDermott, 2016). For example, it must decay exponentially, at rates that vary with frequency. Here we pose an unanswered question: are we innately sensitive to those statistical properties, or do we learn them over time? If we learn them, do we learn the different properties simultaneously or at different ages? We will address these questions by presenting children of different ages with sounds under different reverberant conditions. They will be asked to make judgments about the sounds, the reverberation, or both. In this way, we will characterize the factors that influence the developmental trajectory of auditory scene analysis.
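To make the statistical regularities above concrete, the following is a minimal sketch (not part of the studies described here) of how a reverberant impulse response with those properties can be synthesized: band-limited noise whose envelope decays exponentially, with a different decay rate (RT60) in each frequency band. The band edges and RT60 values are illustrative assumptions, chosen only to mimic the typical pattern of faster decay at high frequencies.

```python
import numpy as np

def synth_reverb_ir(fs=16000, duration=1.0, rt60s=(1.2, 0.8, 0.4), seed=0):
    """Toy reverberant impulse response: Gaussian noise split into three
    frequency bands (low/mid/high), each with an exponential envelope.
    rt60s give the time (s) for each band to decay by 60 dB; shorter
    values at high frequencies mimic real rooms."""
    rng = np.random.default_rng(seed)
    n = int(fs * duration)
    t = np.arange(n) / fs
    # Illustrative band edges in Hz (assumed, not from the source text)
    edges = [0, 500, 2000, fs // 2]
    ir = np.zeros(n)
    for (lo, hi), rt60 in zip(zip(edges[:-1], edges[1:]), rt60s):
        noise = rng.standard_normal(n)
        # Crude band-pass filter via FFT masking
        spec = np.fft.rfft(noise)
        freqs = np.fft.rfftfreq(n, 1 / fs)
        spec[(freqs < lo) | (freqs >= hi)] = 0
        band = np.fft.irfft(spec, n)
        # Exponential envelope: amplitude falls by 60 dB (factor 1000)
        # over rt60 seconds, i.e. exp(-t * ln(1000) / rt60)
        ir += band * np.exp(-t * (np.log(1000.0) / rt60))
    return ir / np.max(np.abs(ir))

ir = synth_reverb_ir()
```

Convolving a dry source recording with such an impulse response yields a stimulus whose reverberant tail has the exponential, frequency-dependent decay that listeners appear to rely on when segregating sources from their acoustic surroundings.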
We aim to better understand how people perceive, interact with, and move through the world, especially when vision is unavailable. To this end, the lab studies perception and sensory processing in multiple sensory modalities, with particular interests in echolocation and braille reading in blind persons. We are also interested in mobility and navigation, including assistive technology using nonvisual cues. These are wide-ranging topics, which we approach using a combination of psychophysical, neurophysiological, engineering, and computational tools.