Projects

  • Completed

    Algorithmic Automated Description (AAD)

Algorithmic Automated Description (AAD) uses existing machine-vision techniques to automate specific aspects of description, such as camera motion, scene changes, face identification, and the reading of printed text. Such events can be identified by computer routines that automatically add annotations to the video.
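
    As a flavor of how one such annotator might work, here is a minimal sketch of scene-change detection with OpenCV, comparing color histograms of consecutive frames. The video path, histogram bins, and threshold are illustrative assumptions, not values from the project.

    ```python
    # Sketch of one AAD-style annotator: flag scene changes by comparing
    # HSV color histograms of consecutive frames. Threshold and file name
    # are illustrative assumptions.
    import cv2

    def detect_scene_changes(video_path, min_drop=0.4):
        """Yield timestamps (seconds) where the histogram correlation
        between consecutive frames drops by more than `min_drop`."""
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        prev_hist, frame_idx = None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [50, 60],
                                [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
                if sim < 1.0 - min_drop:
                    yield frame_idx / fps
            prev_hist, frame_idx = hist, frame_idx + 1
        cap.release()

    for t in detect_scene_changes("input.mp4"):
        print(f"{t:7.2f}s  scene change")
    ```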

  • Completed

    Novel Method to Teach Scotoma Awareness

This project aims to improve visual function in individuals with age-related macular degeneration (AMD). AMD is associated with central field loss that cannot be corrected optically.

  • Completed

    Impact of Eye Movements on Reach Performance

    Aim 2 of Reaching with Central Field Loss

  • Completed

    Target Selection in the Real World

  • Completed

    Attention and Segmentation

  • Completed
    [Image: Zoomed-in view of appliance display partially obscured by glare]

    Display Reader

    The goal of the Display Reader project is to develop a computer vision system that runs on smartphones and tablets to enable blind and visually impaired persons to read appliance displays.
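
    A minimal sketch of such a pipeline, under simple assumptions: flatten uneven illumination to suppress glare, binarize adaptively, then hand the result to an off-the-shelf OCR engine. Tesseract stands in here for the project's own recognizer, and the file name and parameters are illustrative.

    ```python
    # Hypothetical glare-tolerant display-reading pipeline (a sketch, not
    # the project's actual implementation).
    import cv2
    import pytesseract

    def read_display(image_path):
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Divide by a heavily blurred copy to flatten uneven lighting/glare.
        background = cv2.GaussianBlur(gray, (51, 51), 0)
        flat = cv2.divide(gray, background, scale=255)
        # Adaptive threshold copes with residual local contrast changes.
        binary = cv2.adaptiveThreshold(flat, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY, 31, 10)
        # Restrict OCR to characters typical of appliance displays.
        config = "--psm 7 -c tessedit_char_whitelist=0123456789:."
        return pytesseract.image_to_string(binary, config=config).strip()

    print(read_display("oven_display.jpg"))
    ```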

  • Completed
    [Image: BLaDE (Barcode Localization and Decoding Engine) smartphone app in action]

    BLaDE

    BLaDE (Barcode Localization and Decoding Engine) is an Android smartphone app designed to enable a blind or visually impaired user to find and read product barcodes.
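
    The localization half of such a pipeline can be sketched as follows: the stripes of a 1-D barcode produce strong horizontal gradients but weak vertical ones, so their difference highlights candidate regions. This is a desktop Python illustration, not the Android implementation, and all thresholds and kernel sizes are assumptions.

    ```python
    # Sketch of barcode localization via oriented gradient energy.
    import cv2
    import numpy as np

    def localize_barcode(image_path):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        # Barcode stripes: strong horizontal gradient, weak vertical one.
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        response = np.clip(np.abs(gx) - np.abs(gy), 0, None)
        response = cv2.GaussianBlur(response, (21, 21), 0)
        response = cv2.normalize(response, None, 0, 255,
                                 cv2.NORM_MINMAX).astype(np.uint8)
        _, mask = cv2.threshold(response, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Close the gaps between stripes, then take the largest blob.
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 7))
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        # Bounding box of the candidate region, to be handed to a decoder.
        return cv2.boundingRect(max(contours, key=cv2.contourArea))

    print(localize_barcode("product.jpg"))
    ```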

  • Completed
    [Image: Virtual aerial view of intersection area near a pedestrian's feet, reconstructed by Crosswatch algorithms]

    Crosswatch

    Crosswatch is a smartphone-based system developed to provide real-time guidance to blind and visually impaired travelers at traffic intersections.
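
    As a taste of one low-level cue such a system might use, here is a sketch that finds the near-parallel stripe edges of a zebra crosswalk with a Hough transform. Crosswatch itself does considerably more (e.g., the aerial-view reconstruction pictured above), and every threshold here is an illustrative assumption.

    ```python
    # Sketch of zebra-crosswalk stripe-edge detection.
    import cv2
    import numpy as np

    def find_stripe_edges(image_path, angle_tolerance_deg=10):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(gray, 80, 160)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                minLineLength=80, maxLineGap=10)
        if lines is None:
            return []
        # Keep lines clustered around the dominant orientation: stripe
        # borders are near-parallel in the image. (Orientation wrap-around
        # near 0/180 degrees is ignored for simplicity.)
        angles = np.array([np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
                           for x1, y1, x2, y2 in lines[:, 0]])
        dominant = np.median(angles)
        keep = np.abs(angles - dominant) < angle_tolerance_deg
        return [tuple(l) for l in lines[:, 0][keep]]

    stripes = find_stripe_edges("intersection.jpg")
    print(f"{len(stripes)} candidate stripe edges found")
    ```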

  • Completed

    Go and Nogo Decision Making

    The decision to make or withhold a saccade has been studied extensively using a go-nogo paradigm, but little is known about the decision process underlying pursuit of moving objects. Prevailing models describe pursuit as a feedback system that responds reactively to a moving stimulus. However, situations often arise in which it is disadvantageous to pursue, and humans can decide not to pursue an object just because it moves. This project explores mechanisms underlying the decision to pursue or maintain fixation. Our paradigm, ocular baseball, involves a target that moves from the periphery toward a central zone called the "plate". Observers must pursue the target if it intersects the plate (strike), or maintain fixation if it bypasses the plate (ball). We have revealed a neural substrate underlying this decision process in an eye movement region in frontal cortex, the supplementary eye fields (SEF).
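
    The strike/ball classification itself reduces to simple geometry. Below is a small sketch of that trial logic under assumed geometry (straight-line target motion, a circular plate centered on fixation); the plate radius and the example trajectories are illustrative numbers.

    ```python
    # Sketch of "ocular baseball" trial logic: strike vs. ball.
    import numpy as np

    def is_strike(start, velocity, plate_radius=2.0):
        """True if a target starting at `start` (deg) and moving with
        constant `velocity` (deg/s) will enter the circular plate
        centered on fixation (the origin)."""
        p = np.asarray(start, float)
        v = np.asarray(velocity, float)
        # Closest approach of the ray p + t*v (t >= 0) to the origin.
        t_closest = max(0.0, -np.dot(p, v) / np.dot(v, v))
        closest = p + t_closest * v
        return np.linalg.norm(closest) <= plate_radius

    # A trajectory through fixation is a strike; one that bypasses the
    # plate is a ball.
    print(is_strike(start=(-10, 0), velocity=(15, 0)))  # True  -> pursue
    print(is_strike(start=(-10, 5), velocity=(15, 0)))  # False -> fixate
    ```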

    Our preliminary human behavioral data have revealed several novel factors that affect the decision to pursue or fixate. One factor is priming, in which past experience influences current behavior. Preceding strike trials caused observers to commit more pursuit errors on ball trials, while preceding ball trials rarely caused fixation errors on strike trials. This implies that priming in the pursuit system is stronger than priming in the fixation system, although preceding fixation trials still appear to affect pursuit dynamics by increasing pursuit latency and decreasing open-loop gain. The effect of priming is strong when strike and ball trials are randomized, but can be overcome to a degree by knowledge of the trial sequence, such as when strike and ball trials alternate predictably. This is surprising, since in previous work involving visual search tasks, priming dictated behavior even when the trial sequence was known. It is possible that the go-nogo rule involving the plate engages a cognitive region of the brain; however, even when the rule is removed and simple pursuit and fixation trials alternate predictably, knowledge of the trial sequence helps pursuit. This suggests that the fixation mechanism also plays a role in overcoming priming, since removing fixation trials and simply alternating the direction of pursuit renders priming dominant once again. In addition, we have evidence for an impulsive aspect of pursuit that is consistent with the reactive, motion-driven models. The fixation mechanism, which appears to be rooted in cognitive circuitry, then moderates the pursuit impulse. These projects are ongoing in the laboratory.
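
    The sequential-effects analysis described above amounts to conditioning error rates on the preceding trial's type. A sketch of that computation follows; the field names and toy data are illustrative assumptions, not the lab's actual records.

    ```python
    # Sketch: error rate conditioned on the preceding trial's type.
    import pandas as pd

    trials = pd.DataFrame({
        "type":  ["strike", "ball", "ball", "strike", "ball", "strike"],
        "error": [False,     True,   False,  False,    True,   False],
    })
    trials["prev_type"] = trials["type"].shift(1)

    # e.g. P(pursuit error on a ball trial | preceding trial was a strike)
    rates = (trials.dropna(subset=["prev_type"])
                   .groupby(["prev_type", "type"])["error"].mean())
    print(rates)
    ```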

  • Completed

    Integration and Segregation

    Traditionally, smooth pursuit research has explored how eye movements are generated to follow small, isolated targets that fit within the fovea. Objects in a natural scene, however, are often larger and extend into the peripheral retina. They also have components that move in different directions or at different speeds (e.g., wings, legs). To generate a single velocity command for smooth pursuit, motion information from the components must be integrated. Simultaneously, it may be necessary to attend to features of the object while pursuing it. Our goal is to understand attention allocation during pursuit of natural objects. In some experiments, we use random dot cinematograms (RDCs) as pursuit stimuli. Our RDCs consist of a pattern of randomly spaced dots that usually move at a single velocity and in the same direction. We find that while pursuit of a foveal target demands attention and hinders performance on simultaneous attention-demanding tasks, pursuit of an RDC is smoother, with fewer saccades, and performance on secondary tasks improves as well. This indicates that large stimuli release attention from pursuit, freeing it for feature inspection. In other experiments, observers pursue a multiple object tracking (MOT) cloud, which consists of a number of dots that move in random directions relative to one another in a virtual window that translates across the screen. When observers attentively tracked the targets within an MOT cloud, simultaneous pursuit of the cloud did not hinder performance on the attentive tracking task, suggesting that spatio-temporal integration of individual dot velocities for pursuit is also inattentive and leaves attention free for the segregation process.
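
    Both stimulus classes are easy to sketch: an RDC advances all dots with one shared velocity, while an MOT cloud gives each dot an independent random step inside a window that itself translates coherently. Dot counts, speeds, and the window size below are illustrative assumptions.

    ```python
    # Sketch of RDC and MOT-cloud stimulus updates (positions in deg).
    import numpy as np

    rng = np.random.default_rng(0)

    def rdc_step(dots, velocity, dt):
        """RDC: every dot shares one velocity, so the pattern moves
        coherently."""
        return dots + np.asarray(velocity, float) * dt

    def mot_step(dots, center, window_velocity, dot_speed, dt,
                 half_width=2.0):
        """MOT cloud: the window translates coherently while each dot
        takes an independent random-direction step inside it."""
        angles = rng.uniform(0, 2 * np.pi, size=len(dots))
        steps = dot_speed * dt * np.column_stack([np.cos(angles),
                                                  np.sin(angles)])
        shift = np.asarray(window_velocity, float) * dt
        center = center + shift
        dots = np.clip(dots + steps + shift,
                       center - half_width, center + half_width)
        return dots, center

    dots = rng.uniform(-2, 2, size=(20, 2))              # 20 dots
    dots = rdc_step(dots, velocity=(5.0, 0.0), dt=0.01)  # coherent RDC
    dots, center = mot_step(dots, np.zeros(2), (5.0, 0.0), 8.0, 0.01)
    print(center, dots.mean(axis=0))
    ```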

  • Completed

    Video-Based Speech Enhancement for Persons with Hearing and Vision Loss

    Observing a speaker's visual cues, such as lip shape and facial expression, can greatly improve speech comprehension for a person with hearing loss. Concurrent vision loss, however, can substantially reduce speech perception. We propose developing a prototype device that uses a video camera in addition to audio input to enhance the speech signal from a target speaker in everyday situations.
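
    One way such a device could work, sketched under strong simplifying assumptions: treat motion energy in a (hypothetical, precomputed) mouth region of the video as a cue for when the target speaker is talking, update the noise spectrum only during visually silent frames, and apply spectral subtraction. This stands in for the proposed prototype, not its actual design.

    ```python
    # Sketch of video-gated spectral subtraction.
    import numpy as np

    def enhance(stft_frames, mouth_motion, motion_thresh=0.1, alpha=0.95):
        """stft_frames: (n_frames, n_bins) complex STFT of noisy audio;
        mouth_motion: (n_frames,) motion energy in the speaker's mouth
        ROI (assumed precomputed and time-aligned to the audio)."""
        noise_psd = np.zeros(stft_frames.shape[1])
        out = np.empty_like(stft_frames)
        for i, frame in enumerate(stft_frames):
            power = np.abs(frame) ** 2 + 1e-12
            if mouth_motion[i] < motion_thresh:   # visually "silent" frame
                noise_psd = alpha * noise_psd + (1 - alpha) * power
            # Spectral subtraction with a floor to limit musical noise.
            gain = np.sqrt(np.maximum(power - noise_psd,
                                      0.05 * power) / power)
            out[i] = gain * frame
        return out
    ```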
