Feedback about gaze position improves saccade efficiency

When searching for an unknown number of noisy targets in a limited time, looking at uncertain locations is more efficient than looking at locations with a high probability of containing the target. Previously, we showed that immediate saccadic feedback revealing the true identity of a noisy stimulus improved saccade efficiency (Verghese & Ghahghaei, 2013). However, the stimuli in that task were not limited by visibility, and the feedback artificially removed any ambiguity about the identity of the stimulus as soon as a saccade landed at that location. Here we examine whether the increase in visibility upon naturally foveating a target, combined with simple knowledge of eye position, encourages a strategy of fixating informative locations. Observers actively searched a brief display (900 ms) of six Gabor patches in noise, located 3 degrees from fixation, to find an unknown number of horizontal targets among vertical distractors. The contrast of the Gabors was either high or low, such that the orientation of a high-contrast Gabor was perfectly discriminable at 3 degrees, whereas that of a low-contrast Gabor was discriminable only upon foveation. Thus, saccades to low-contrast locations were more informative than saccades to high-contrast locations. In separate blocks, participants received either (i) no gaze feedback, (ii) delayed gaze feedback at the end of the trial, or (iii) immediate gaze feedback after each saccade within the trial. Feedback was provided by changing the color of the ring surrounding each location. In the absence of feedback, participants differed in the proportion of saccades to informative locations, with more experienced participants making a greater proportion. Both immediate and delayed feedback increased the proportion of informative saccades for four of five participants. Furthermore, gaze feedback increased the latency of the first saccade and reduced the number of reflexive saccades to salient locations, making saccades more informative.
Meeting abstract presented at VSS 2015.
