Conventional decision models view reaction time as a consequence of the duration of the decision-making process. However, some studies have shown that reaction time distributions may be strongly affected by reinforcement contingencies (Madelain et al., 2007).
Here, we probe the possibility of voluntarily controlling saccadic latencies in a choice paradigm. Three subjects (including the two authors) tracked a visual target stepping horizontally by 10 deg between two fixed locations on a screen. Any trial with a saccadic latency greater than 300 ms or shorter than 80 ms was interrupted.
Using the first and last quartiles of individual baseline latency distributions, we first defined two classes, i.e. "short" and "long" saccadic latencies, respectively. We then concurrently reinforced each latency class on random interval reinforcement schedules: "short" and "long" latencies were reinforced with three different sets of probabilities such that the relative ratio of reinforcing "short" latencies was either 9/1, 1/9 or 1/1.
After training (20,800 trials), we observed bimodal latency distributions, with one peak for "short" and another for "long" latencies, for two subjects and, to a lesser extent, for the third subject. To further probe the extent of control over saccadic latencies, we then analyzed the data using the generalized matching law (Baum, 1974), which states that the relative rate of responding to an option matches the relative rate of reinforcement obtained from that option, up to sensitivity and bias parameters. We found an almost perfect match between the relative proportion of "short" and "long" latencies and the relative obtained reinforcement for two subjects (sensitivities were equal to 0.95 and 0.87) and a typical undermatching parameter for the third one (sensitivity = 0.58).
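For reference, the generalized matching law relation fitted here can be written as follows; the subscripts s and l (for "short" and "long" latencies) are our notational assumption, not the original paper's:

```latex
% Generalized matching law (Baum, 1974), applied to the two latency classes:
% B_s, B_l : numbers of "short" and "long" latency saccades emitted
% R_s, R_l : numbers of reinforcers obtained for each class
% a        : sensitivity (the 0.95, 0.87 and 0.58 values reported above)
% b        : response bias
\log \frac{B_s}{B_l} \;=\; a \, \log \frac{R_s}{R_l} \;+\; \log b
```

With a = 1 and b = 1 this reduces to strict matching; a < 1, as for the third subject, corresponds to undermatching.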
These results indicate that saccades may be allocated in time according to the reinforcement contingencies in force, which supports the idea of a voluntary control of saccadic reaction time.