Abstract
Echolocation is an active sensing strategy that leverages spatial hearing to detect, localize, and discriminate objects. Some blind individuals use echolocation to sense and navigate their surroundings, complementing other mobility methods. Tongue "clicks" are a common means of ensonifying objects and interpreting the resulting echoes. Expert blind echolocators outperform non-expert blind and sighted individuals in most echoacoustic tasks and dynamically modulate their clicking pattern to improve perception. However, it remains unknown how echoacoustic information is integrated across multiple samples (clicks) and how single echoes are neurally represented. To address these questions, we recorded the brain activity of blind and sighted individuals with EEG while they performed an echoacoustic localization task. Participants listened to trains of 2, 5, 8, or 11 synthesized mouth clicks and spatialized echoes from a reflecting object located at ±5° to ±25° from the midsagittal plane, and indicated whether the echo reflector was located to the left or right of center. Proficient blind echolocators outperformed sighted novices, with lateralization thresholds decreasing linearly from 2- to 8-click trials. Non-expert sighted participants performed at chance, with no effect of echo eccentricity or click count; notably, they performed similarly to proficient echolocators when the emitted click was strongly attenuated or removed. In blind experts, the location of the echo could be reliably decoded from the EEG response after a single click, and successive click-echo samples linearly sharpened echoacoustic representations until saturation. In sighted novices, no spatial information could be decoded from the EEG during the initial clicks. These results suggest that echolocation expertise relies on extracting echoes from other masking sounds and integrating them across samples.