
Adrien Chopin, Ph.D., Postdoctoral Researcher

Unraveling Binocular Rivalry Dynamics for Amblyopia Detection: From Pilot Study to R21 Proposal

Unilateral amblyopia, a leading cause of childhood vision loss, demands early intervention for optimal treatment outcomes. Unfortunately, accurate screening tools specifically designed for preschoolers remain elusive. This talk explores the potential of binocular rivalry dynamics as a novel biomarker for amblyopia detection, bridging the gap between a promising pilot study and a future R21 proposal.

Pilot Study: Our initial investigation revealed distinct binocular rivalry patterns in amblyopic adults compared to controls. Amblyopic individuals exhibited fewer reversals between rivalrous images presented to each eye, with a higher prevalence of incomplete reversals, suggesting disrupted binocular processing. These findings highlight the potential of rivalry dynamics as a diagnostic tool.

R21 Proposal: Building upon these initial findings, our R21 proposal aims to:

· Validate and refine rivalry-based diagnostics: We will expand the study population to include more amblyopia subtypes and controls, exploring both natural viewing conditions and balanced interocular contrast presentations. We will also optimize the minimum data-collection duration for accurate classification, aiming for a sub-5-minute test.

· Develop an objective eye-movement measure: We will introduce optokinetic nystagmus (OKN) as an objective indicator of rivalry states, potentially eliminating the need for subjective reports, a challenge in preschoolers.

By investigating signatures of abnormal rivalry dynamics through both subjective perception and objective OKN measures, we seek to establish a reliable, non-invasive method for amblyopia detection. This combined approach holds significant promise for developing a rapid, objective vision-screening tool specifically designed for preschoolers. Early detection of amblyopia may pave the way for timely intervention, ultimately improving visual outcomes and enhancing children’s quality of life. https://ski.org/directory/adrien-chopin


The 19th Annual Meeting of the Low Vision Rehabilitation Study Group

Sponsored by The Smith-Kettlewell Eye Research Institute (SKERI) & the CPMC Dept. of Ophthalmology. Hosted by Don Fletcher, Ron Cole, Gus Colenbrander, Tiffany Chan, and Annemarie Rossi.

The meeting will be held at SKERI – 2318 Fillmore St., San Francisco, CA 94115

Agenda: Doors open at 8 AM for breakfast and settling in. 3-hour discussion groups on both days from 9 AM to noon & 1:30 to 4:30 PM. Lunch is on your own.

Purpose: An informal gathering of clinicians/clinical researchers in low vision rehab.
· Discuss problem cases
· Share techniques
· Brainstorm ideas for new treatments or investigations
· Enjoy collegiality

Participants: Anyone actively involved in vision rehabilitation.

Registration: Contact Don Fletcher at floridafletch@msn.com to save a spot for Friday/Saturday.

Format: Informal
· no invited speakers
· bring a case or technique to discuss
· no set agenda – we will divide the time between all comers
· if time allows, we can discuss and solve all the problems facing the field

Promise: We won’t always agree, but we’ll have a good time as a group that has a common interest/passion.

Kassandra Lee, Ph.D., Neuroscience Department at UNR

Improving recognition of cluttered objects using motion parallax in simulated prosthetic vision

The efficacy of visual prostheses in object recognition is limited. While various limitations exist, here we focus on reducing the impact of background clutter on object recognition. We have proposed the use of motion parallax via head-mounted camera lateral scanning, with computational stabilization of the object of interest (OI), to support neural background decluttering. We mimicked the proposed effect using simulations in a head-mounted display (HMD) and tested object recognition in normally sighted subjects. Images (24° field of view) were captured from multiple viewpoints and presented at a low resolution (20×20). All viewpoints were centered on the OI. Experimental conditions (2×3) included: clutter (with or without) × head scanning (single viewpoint, nine coherent viewpoints corresponding to subjects’ head positions, and nine randomly associated viewpoints). Subjects used lateral head movements to view OIs on the HMD and reported what they thought each OI was. The median recognition rate without clutter was 40% for all head-scanning conditions. Performance with clutter dropped to 10% in the static condition, but improved to 20% with both coherent and random head scanning (corrected p = 0.005 and p = 0.049, respectively). Background decluttering driven by motion parallax cues, but not by coherent multiple views of the OI, improved object recognition in low-resolution images. The improvement did not fully eliminate the impact of background. Motion parallax is thus an effective but incomplete decluttering solution for object recognition with visual prostheses. https://www.unr.edu/neuroscience/people/students/kassandra-lee

Graduate student Brian Szekely

Vision across the gait cycle

Walking, a fundamental human activity, presents challenges to visual stability due to motion blur induced by eye movements. The goal of my research is to elucidate the intricate relationship between locomotion, visual function, and oculomotor behavior. These physiological processes, coupled with neural oscillations arising from the periodic nature of walking, may influence visual function. To assess the impact of these behavioral and physiological aspects across the gait cycle, I use various psychophysical techniques, including binocular rivalry and contrast sensitivity, to discern differences in visual processes. Using virtual reality, optical tracking systems, and eye-tracking technology, I investigate these visual functions at different locomotor phases: heel strike, single stance, double stance, and toe-off. Additionally, I am actively developing saccade-detection techniques specific to natural locomotion. Current techniques, designed for stationary behavior, exhibit poor performance during the unconstrained head and body movements typical of natural walking. These comprehensive investigations aim to provide valuable insights into the interplay between locomotion and visual-motor processes. The findings may offer implications for understanding how the human nervous system adapts to maintain stable vision during walking. Moreover, this research could inform future studies in fields such as rehabilitation and applications in virtual reality. https://www.unr.edu/neuroscience/people/students/brian-szekely

Intern (Research)

Object Recognition to Support Indoor Navigation for Travelers with Visual Impairments

Independent wayfinding is a major challenge for blind and visually impaired (BVI) travelers. I describe recent work on improvements to real-time object recognition algorithms that will be used to support an accessible navigation app for BVI travelers in indoor environments. Recognition of key visual landmarks (such as Exit signs and artwork) in an environment provides information about the user’s current location, and improvements to the recognition algorithms will enhance the localization process used in the navigation app.

Scientific Program Coordinator Charity Pitcher-Cooper

YouDescribeX: the Human-in-the-Loop AI Interface

YouDescribeX (the X is for eXtra eXperimental) is a revised audio description tool for YouTube. Viewers can request AI-described videos in addition to putting videos on the wishlist, and the wishlist has been revitalized, prompting describers to make audio description (AD) for those videos first. Describers have the option of using the freestyle version (watch the video, write a script, and record their own voice) or an AI-supported version that automatically captures all the text on screen, chooses description-track insertion sites by finding the gaps in the dialog, suggests possible audio description copy for correction and improvement by the describer, and then has a synthetic voice read the descriptions. Describers can also choose to put their AD into a community editing process, where their script and track placement are improved by other describers. I cannot wait to show you the good (the on-screen text capture is incredibly good), the bad (the text capture is so good it sometimes picks up text on background objects, like trashcans), and the ugly (descriptions for still images are still relatively poor, and dynamic images are even more difficult). But our describers can correct the AI, and as the tool grows, their suggestions will lead to better audio description. If you have an interest in the YouDescribe classic data (10 years of audio description data), it can be found here: https://github.com/youdescribe-sfsu/You-Described-We-Archived. In the future we will have three sets of data: auto-described by AI, human-edited from AI, and freestyle audio description.

Dr. Jenny Read, Vision Research

Stereoscopic vision in the praying mantis

Stereopsis – deriving information about distance by comparing views from two eyes – is widespread in vertebrates but so far known in only one class of invertebrates, the praying mantids. Understanding a form of stereopsis that has evolved independently in such a different nervous system promises to shed light on the constraints governing any stereo system. Behavioral experiments indicate that insect stereopsis is functionally very different from that studied in vertebrates. Vertebrate stereopsis depends on matching up the pattern of contrast in the two eyes; it works in static scenes, and may have evolved in order to break camouflage rather than to detect distances. Insect stereopsis matches up regions of the image where the luminance is changing; it is insensitive to the detailed pattern of contrast and operates to detect the distance to a moving target. Work from my lab has revealed a network of neurons within the mantis brain that are tuned to binocular disparity, including some that project to early visual areas. This is in contrast to previous theories, which postulated that disparity was computed only at a single, late stage, where visual information is passed down to motor neurons. Thus, despite their very different properties, the underlying neural mechanisms supporting vertebrate and insect stereopsis may be computationally more similar than has been assumed.

Yingzi Xiong, Assistant Professor of Ophthalmology, Johns Hopkins Medicine

Dual Sensory Impairment: Spatial Localization with Combined Vision and Hearing impairment

Dual sensory impairment (DSI) refers to combined vision and hearing impairment. DSI concerns a large population, and its prevalence increases drastically with age. In the US, it is estimated that 40% of patients seeking vision rehabilitation also have hearing loss. The majority of people with DSI still have functional vision and hearing and would therefore benefit from rehabilitation to maximize the use of their residual senses. However, vision and hearing rehabilitation traditionally have been two separate systems that do not readily address the unique challenges faced by people with DSI. The long-term goal of our research on DSI is to understand the interaction between impaired vision and impaired hearing in important everyday activities, develop novel instruments to assess vision and hearing functions in a unified framework, and establish new rehabilitation strategies to maximize the use of residual vision and hearing in this population. The specific topic I will present focuses on spatial localization, which refers to the ability to determine the direction and distance of people and objects around us. Spatial localization is critical for safe navigation and effective social interaction. For people with vision impairment, the challenges in spatial localization are often addressed by Orientation and Mobility training and assistive devices, both of which emphasize the use of hearing. For people with DSI, it is critical that we understand the interaction between residual vision and hearing before making rehab recommendations. Using a combination of screening tools, laboratory spatial localization tasks, and questionnaires, we asked how impaired vision and impaired hearing affect an individual’s spatial localization ability, both independently and in combination. I will comment on the implications of our findings for DSI rehabilitation, and also share the challenges in conducting DSI research and some future directions.
https://www.hopkinsmedicine.org/profiles/details/yingzi-xiong

Marcello Maniglia, PhD University of California Riverside, US

Visual perception, eye movements and visual field awareness after central vision loss: Evidence from patients with macular degeneration, simulated scotoma, and visual training

Macular Degeneration (MD) represents the leading cause of visual impairment in the Western world. Late-stage MD eventually leads to the development of a central region of blindness (scotoma), which compromises patients’ ability to perform everyday activities like reading and recognizing faces. Patients spontaneously cope with this challenge by adopting compensatory oculomotor strategies, including the development of a new fixation region in their healthy peripheral retina, called the Preferred Retinal Locus (PRL), which they use as a functional replacement of the fovea. Given the role of the fovea as a perceptual and oculomotor reference in the healthy visual system, the consequences of central vision loss are far-reaching. Moreover, visual deficits in MD can be extensive yet subjectively subtle, often manifesting as distortions of the visual field or being masked by perceptual filling-in mechanisms. Patients are often unaware of the characteristics of their visual deficits. Studies with simulated central vision loss in healthy participants suggest that increased awareness of these deficits could be beneficial for visual rehabilitation. Hence, it becomes apparent that perception, eye movements and visual field awareness are all altered by loss of central vision. Therefore, the characterization of MD and the design of rehabilitative interventions should address the multifaceted nature of this condition. Here I will present a series of studies investigating perceptual, oculomotor and attentional characteristics of central vision loss, at baseline and after different training interventions. These interventions aimed at improving visual abilities, eye movements and scotoma awareness, in both patients with MD and healthy participants tested with a gaze-contingent, simulated scotoma.
Taken together, the results suggest that a multidimensional approach to low vision might hold the key to a better understanding of the effects of central vision loss on the perceptual, attentional and oculomotor systems. Furthermore, such approaches might help develop successful interventions for patients with MD.

Fatema Ghasia is an Associate Professor at the Cole Eye Institute, Cleveland Clinic

Fixational Eye Movements in Strabismus and Amblyopia: Implications for visual function deficits and treatment outcomes

We have developed robust and objective measures that do not depend upon the young patient’s cooperation or the provider’s assessments of visual acuity and strabismus angle, and that quantify the entire spectrum of visual function deficits in a fast, reliable, and pediatric-friendly way. The systematic analysis of fixational eye movement traces obtained in the lab in patients with binocular vision disorders has revealed several features that can be utilized to detect the presence and severity of amblyopia and the angle and control of strabismus. We have found that fixational eye movement abnormalities correlate with reduced light sensitivity, reduced depth perception, and the extent of suppression of vision experienced by these patients. We have also found that assessment of fixational eye movement characteristics can be a useful tool to predict functional improvement after amblyopia and strabismus repair. https://fescenter.org/team/investigators/ghasia-fatema-md/