The role of motor control in echolocation performance: self-initiated vs. passive listening for object spatial localization.

Presentation

Abstract

Echolocation enables blind individuals to build spatial representations of their environment by emitting tongue clicks and analyzing the returning echoes. Proficient echolocators optimize perception by adjusting their click patterns and can also extract information from passively perceived echoes. Actively producing clicks improves the perception of room size for larger rooms. Yet it remains an open question whether self-initiated clicks, compared with passively heard ones, also modulate object localization accuracy, the temporal dynamics of neural representations, the number of clicks used, and the rate of information accumulation.

We investigated these questions by recording EEG activity while a proficient echolocator performed an echoacoustic localization task under two conditions: self-initiated, in which a key press controlled the timing and number of clicks, and passive, in which fixed sequences of evenly spaced clicks were presented. In both conditions, the participant listened to synthesized clicks and spatialized echoes from a virtual object 1 m away at azimuths of ±5° to ±25° and reported its location (left vs. right).
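
For illustration only, the sketch below shows how a click-plus-echo stimulus of this kind could be synthesized under a simple point-reflector model with a Woodworth interaural-time-difference approximation. The sample rate, click parameters, attenuation model, and function names are assumptions made for this example and do not describe the actual synthesis pipeline used in the study.

# Minimal sketch (not the study's synthesis pipeline): a synthetic click and a
# delayed, attenuated echo from a virtual reflector 1 m away at a given azimuth,
# spatialized with a Woodworth interaural-time-difference (ITD) approximation.
import numpy as np

FS = 44100            # sample rate in Hz (assumed)
C = 343.0             # speed of sound in m/s
HEAD_RADIUS = 0.0875  # approximate head radius in m for the ITD model

def make_click(duration_ms=3.0, f0=3500.0):
    """Short damped sinusoid as a stand-in for a palatal tongue click."""
    t = np.arange(int(FS * duration_ms / 1000)) / FS
    return np.sin(2 * np.pi * f0 * t) * np.exp(-t / 0.0005)

def itd_seconds(azimuth_deg):
    """Woodworth ITD approximation for a spherical head."""
    az = np.radians(azimuth_deg)
    return HEAD_RADIUS / C * (az + np.sin(az))

def click_plus_echo(azimuth_deg, distance_m=1.0):
    """Return a stereo (2, n) signal: the emitted click followed by its echo."""
    click = make_click()
    echo_delay = 2 * distance_m / C            # round-trip delay (~5.8 ms at 1 m)
    itd = itd_seconds(azimuth_deg)             # positive azimuth -> right ear leads
    attenuation = 1.0 / (2 * distance_m) ** 2  # simple spreading-loss model

    n_total = int(FS * (echo_delay + abs(itd) + 0.05)) + len(click)
    out = np.zeros((2, n_total))
    out[:, :len(click)] += click               # the emission itself, heard at both ears

    # ear lags for (left, right): the ear farther from the object receives the echo later
    for ch, ear_lag in enumerate((max(itd, 0), max(-itd, 0))):
        start = int(FS * (echo_delay + ear_lag))
        out[ch, start:start + len(click)] += attenuation * click
    return out

stimulus = click_plus_echo(azimuth_deg=15.0)   # e.g., virtual object 15 deg to the right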

Our results revealed that object spatial localization was more accurate overall in the self-initiated condition (95%) than in the passive condition (85%), peaking at 2–5 clicks. Three-click trials were the most frequent (70%), and spatial precision was higher in the self-initiated condition (azimuth thresholds of 1.5° vs. 7°) within the 2–5 click range. EEG analysis showed higher classification accuracy during the first two clicks in the self-initiated condition. These findings suggest that active echolocation enhances spatial perception and neural processing, likely through the integration of motor and sensory cues, optimized by focused attention to early acoustic information.
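
As an illustration of the kind of time-resolved EEG classification reported above, the sketch below cross-validates a linear classifier at each time sample of click-locked epochs, separately per condition. The array names, shapes, and classifier choice are assumptions made for this example, not the study's actual analysis pipeline.

# Minimal sketch (not the study's analysis code): time-resolved decoding of
# object side (left vs. right) from epoched EEG, run separately per condition.
# Assumes epochs[cond] is an array of shape (n_trials, n_channels, n_times)
# time-locked to the first click and labels[cond] holds 0/1 side labels;
# both names are hypothetical placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def timecourse_decoding(X, y, n_folds=5):
    """Cross-validated classification accuracy at each time sample."""
    n_trials, n_channels, n_times = X.shape
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    acc = np.empty(n_times)
    for t in range(n_times):
        # one classifier per time point; features = channel amplitudes at time t
        acc[t] = cross_val_score(clf, X[:, :, t], y, cv=n_folds).mean()
    return acc

# accuracy_by_condition = {
#     cond: timecourse_decoding(epochs[cond], labels[cond])
#     for cond in ("self_initiated", "passive")
# }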

Conference Name

23rd International Multisensory Research Forum
Durham, UK.
Year of Publication
2025