Scientific

Catherine Agathos, Post-Doctoral Research Fellow

Implications of age-related central visual field loss for spatial orientation and postural control

Multisensory integration is essential for postural control and safe navigation. The vestibular system plays a crucial role in these processes, contributing to functions important for maintaining one's autonomy, such as balance, oculomotor control, and spatial orientation. However, healthy aging is accompanied by a decline in many perceptual, cognitive, and motor abilities, potentially leading to a loss of autonomy and increased health risks, most notably falls. Age-related macular degeneration (AMD), the leading cause of irreversible visual impairment in older adults in industrialized countries, adds another layer of complexity to these age-related changes. Binocular AMD results in central visual field loss, forcing individuals to adopt eccentric viewing strategies and develop preferred retinal loci (PRLs) in the peripheral retina. This adaptation demands significant changes in oculomotor control and eye-head coordination, requiring recalibration with the vestibular system in particular. The interplay of visual impairment, altered oculomotor function, and age-related sensorimotor deficits likely contributes to the balance and mobility difficulties reported in this population, though the mechanisms are poorly understood. For instance, how does the combined loss of central vision and the associated oculomotor changes affect visual sampling during motion, the representation of space, and the integration of visual and body-based signals necessary for accurate perception and adaptive movement? Little is known about whether and how older individuals with AMD adapt to a peripheral PRL in the context of spatial orientation and postural control.

In this talk, I will present relevant work from the literature and my studies in healthy aging and central field loss. I will then introduce an R01 proposal evolving from this research, which examines the recalibration of different sensorimotor systems to a PRL in individuals with AMD. The proposal focuses on three key areas: 1) head stabilization and the exploitation of residual vision, 2) gaze-direction-induced illusions in spatial orientation perception, and 3) postural adaptation to eccentrically viewed optic flow stimuli. This research aims to bridge the gap in our knowledge of multisensory integration for balance, mobility, and fall risk in AMD. Ultimately, the goal is to inform the development of appropriate interventions, aids, and rehabilitation strategies for this population.

Dr. Hari Palani, Principal Researcher & CEO

Multisensory Information Access and AI: Advancing Opportunities for the Visually Impaired

Lack of timely access to information is a significant problem for the nearly 24 million blind or visually impaired (BVI) individuals in the U.S. and the 285 million worldwide. Significant progress has been made in ensuring that BVI individuals receive the accommodations they need. However, very little work has been done toward making critical materials accessible in real time, especially non-textual materials such as math expressions, graphical representations, and maps. Current methods for authoring, converting, and/or producing accessible versions of non-textual materials require significant human, time, and financial resources. The process also depends on people with rare and specialized expertise, such as braille transcribers and tactile graphic designers. My research program aims to address these issues and is unified by two complementary strands of applied science: (1) multisensory information access, and (2) AI. In this talk, I will first introduce the notion of multisensory information access, with a focus on how it shapes human perception, cognition, and behavior. Then, using these findings as a foundation, I will show how we can use (and are using) AI to enable multisensory information access in educational and navigational settings, addressing the long-standing accessibility issues faced by BVI individuals. https://www.unarlabs.com/

Advanced Psychophysical Methods for Comprehensive Visual Function Assessment

Assessing visual function is a fundamental aspect of eye research. However, existing tests often face limitations: they are designed for in-clinic use, require trained personnel, are time-consuming, and yield only coarse results. This presentation will review the current limitations of vision assessment and introduce a range of new visual function tools developed to address these challenges. Specifically, it will describe various rapid, generalizable psychophysical paradigms capable of constructing personalized performance models. Additionally, it will cover tools for continuous measurement of perceptual multistability. The advent of these tools has secondary benefits, such as enabling the use of machine learning to detect novel categories of atypical vision and to identify redundant and predictive visual functions for specific populations. The presentation will include examples from typical clinical populations, such as individuals with refractive errors, color vision deficits, amblyopia, albinism, and retinal disorders. Moreover, the talk will advocate for expanding vision assessments beyond conventional screening of, e.g., acuity, contrast, or color, highlighting the importance of evaluating other visual modalities, such as form, motion, face, and object perception, and their clinical relevance.
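For intuition about what a rapid, personalized psychophysical paradigm can look like, here is a minimal sketch of a Bayesian adaptive staircase in the spirit of QUEST. This is an illustrative assumption, not the specific method the talk describes; all parameter values and the simulated observer are hypothetical.

```python
import numpy as np

# Hypothetical sketch: a Bayesian adaptive staircase that builds a
# personalized threshold estimate from a few dozen trials.

rng = np.random.default_rng(0)

log_thresholds = np.linspace(-3.0, 0.0, 121)   # candidate log10 contrast thresholds
posterior = np.ones_like(log_thresholds)
posterior /= posterior.sum()                   # flat prior over thresholds

slope, guess, lapse = 3.5, 0.5, 0.02           # assumed Weibull parameters (2AFC)
true_threshold = -1.4                          # simulated observer, unknown to the method

def p_correct(log_c, log_t):
    """Weibull psychometric function for a two-alternative forced-choice task."""
    return guess + (1 - guess - lapse) * (1 - 2 ** (-10 ** (slope * (log_c - log_t))))

for trial in range(40):
    # QUEST-style placement: test at the current posterior mean.
    log_c = np.sum(posterior * log_thresholds)
    correct = rng.random() < p_correct(log_c, true_threshold)
    # Bayes update: multiply by the likelihood of the observed response.
    like = p_correct(log_c, log_thresholds)
    posterior *= like if correct else (1 - like)
    posterior /= posterior.sum()

estimate = np.sum(posterior * log_thresholds)
print(f"estimated log threshold: {estimate:.2f} (true: {true_threshold})")
```

Because every trial is placed near the current best estimate, such procedures converge in far fewer trials than fixed-level testing, which is what makes out-of-clinic, self-administered assessment plausible.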

Radoslaw Cichy, Professor in the Department of Education and Psychology at the Free University of Berlin and PI of the Neural Dynamics of Visual Cognition group

Deep neural networks as scientific models of vision

Artificial deep neural networks (DNNs) are used in many different ways to address scientific questions about how biological vision works. In spite of their wide usage in this context, their scientific value is periodically questioned. I will argue that DNNs are valuable to vision science in three ways: for prediction, for explanation, and for exploration. I will illustrate these claims with recently published and ongoing projects in the lab. I will also propose future steps to accelerate progress. https://www.ewi-psy.fu-berlin.de/en/psychologie/arbeitsbereiche/neural_dyn_of_vis_cog/team_v2/group_leader/rm_cichy/index.html

Dr. Sarika Gopalakrishnan, PhD, FAAO, Post-Doctoral Research Fellow, Envision Research Institute

The role of virtual reality and augmented reality technologies in low vision rehabilitation

Visual impairment refers to vision loss that cannot be corrected with medical treatment, surgery, or conventional glasses. Such individuals face difficulties in performing activities of daily living independently and require assistance from others for tasks they cannot execute due to low vision or blindness. Virtual reality (VR) technology can be used to understand the visual performance of people with low vision in real-world scenarios: VR provides a more realistic way of measuring visual parameters such as visual acuity, contrast, eye and head movements, and visual search than clinical settings. Augmented reality (AR) technology can help analyze functional vision during daily activities. AR can also help improve the visual functions of people with low vision, such as distance and near visual acuity and distance and near contrast sensitivity, and can enhance functional vision activities like reading, writing, watching television, working with computers, identifying currency, and finding objects in a crowd, greatly enriching the visual experience of people with low vision. In this presentation, we will discuss the applications of virtual reality and augmented reality technology in the field of low vision rehabilitation. https://research.envisionus.com/Team/Sarika-Gopalakrishnan,-PhD,-FAAO

Dr. Shrikant Bharadwaj of the L V Prasad Eye Institute, Hyderabad, India

Temporal instabilities in the human eye’s auto-focus mechanism: characteristics, source and impact on vision

Our eyes are never at rest. Between microsaccadic eye movements and microfluctuations of the eye's autofocus mechanism (ocular accommodation), our visual system constantly encounters time-varying information, even during a supposed "steady-state" fixation epoch. This talk will focus on the temporal instability of the eye's accommodation, as observed under physiological conditions and in a condition of binocular vision dysfunction. The talk will be divided into two parts: the first will describe the characteristics of these instabilities and their putative source in the neural control of accommodation; the second will describe their impact on vision and a modeling exercise undertaken to decode putative decision strategies for optimizing vision during such epochs. https://www.lvpei.org/about-us/our-team/research/shrikant-bharadwaj
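For intuition, here is a toy simulation of such instabilities, using the common two-band description of accommodative microfluctuations from the literature (a low-frequency component below roughly 0.6 Hz and a high-frequency component around 1-2 Hz). The amplitudes, frequencies, and viewing conditions below are illustrative assumptions, not the speaker's data.

```python
import numpy as np

# Illustrative simulation: synthesize a "steady-state" accommodation trace
# as mean response + low-frequency component (LFC) + high-frequency
# component (HFC) + sensor noise, then report the defocus it imposes.

fs = 100.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)                # a 10 s fixation epoch
mean_accommodation = 2.5                    # diopters, e.g., a 40 cm target

lfc = 0.15 * np.sin(2 * np.pi * 0.3 * t)        # slow drift, ~0.3 Hz (assumed)
hfc = 0.05 * np.sin(2 * np.pi * 1.8 * t + 1.0)  # fast fluctuation, ~1.8 Hz (assumed)
noise = 0.02 * np.random.default_rng(1).standard_normal(t.size)

response = mean_accommodation + lfc + hfc + noise
defocus = response - mean_accommodation     # instantaneous defocus error (D)

print(f"RMS microfluctuation: {defocus.std():.3f} D")
```

Even fluctuations of a tenth of a diopter modulate retinal image blur from moment to moment, which is why decision strategies for extracting stable percepts from such epochs are an interesting modeling target.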

Brandon Biggs, Engineer; MDes, Inclusive Design, OCAD University; BA, Music, CSU East Bay; AA, Music, Foothill College

The Digital Drawing WYSIWYG Editor for Blind Users is Here

Digital drawing has historically been a significant challenge for blind individuals, with previous solutions requiring extensive imagination and technical skill. However, the Coughlan lab has been developing a groundbreaking "what you hear is what you get" drawing tool, revolutionizing this field. This innovative tool, powered by Audiom, enables blind users to create and edit complex shapes, maps, and art through auditory feedback. Users navigate a canvas using the arrow keys, dropping points to form shapes and lines, which are then audibly represented, allowing for an intuitive and accessible drawing experience. Additionally, the tool supports attaching sounds to objects, enhancing the creative process. This advancement not only facilitates artistic expression within the blind community but also offers educational applications, such as geometry assignments. With the ability to instantly share or export creations, this shape editor represents a significant leap forward in making digital drawing accessible to blind individuals. Smith-Kettlewell Eye Research Institute is Listening to the Future of Navigation – Meet Engineer, Brandon Biggs (blindabilities.com)
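To make the interaction concrete, here is a hypothetical sketch of the arrow-key, point-dropping loop described above. It is not the Audiom implementation; the audio output is stubbed with print(), where a real tool would use speech synthesis and spatialized earcons.

```python
# Hypothetical sketch of an auditory drawing loop: a cursor moves on a grid
# with arrow keys, the user drops vertices to build a shape, and every action
# is voiced so the drawing can be heard as it is made.

def announce(msg: str) -> None:
    print(f"[audio] {msg}")   # stand-in for text-to-speech / earcons

class DrawingCanvas:
    MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

    def __init__(self, width: int = 20, height: int = 20):
        self.width, self.height = width, height
        self.x, self.y = width // 2, height // 2
        self.vertices: list[tuple[int, int]] = []

    def move(self, direction: str) -> None:
        dx, dy = self.MOVES[direction]
        nx, ny = self.x + dx, self.y + dy
        if 0 <= nx < self.width and 0 <= ny < self.height:
            self.x, self.y = nx, ny
            announce(f"{self.x}, {self.y}")
        else:
            announce("edge of canvas")      # boundary cue

    def drop_point(self) -> None:
        self.vertices.append((self.x, self.y))
        announce(f"point {len(self.vertices)} placed at {self.x}, {self.y}")

canvas = DrawingCanvas()
for cmd in ["right", "right", "drop", "down", "drop", "left", "drop"]:
    canvas.drop_point() if cmd == "drop" else canvas.move(cmd)
```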

Andrea Narcisi, Research Scholar

Point-and-Tap Interaction for Acquiring Detailed Information about Tactile Graphics and 3D Models

I will present a system based on an iPhone app developed in the Coughlan Lab. This system is a novel "Point-and-Tap" interface that enables people who are blind or visually impaired (BVI) to easily acquire multiple levels of information about tactile graphics and 3D models. The interface uses an iPhone's depth and color cameras to track the user's hands while they interact with a model. To hear basic information about a feature of interest read aloud, the user points to the feature with their index finger. For additional information, the user lifts their index finger and taps the feature again; this can be repeated to access further levels of information. No audio labels are triggered unless the user makes a pointing gesture, which allows the user to explore the model freely with one or both hands. In addition, because an utterance in progress is halted whenever the fingertip lifts off a feature, multiple taps can be issued in rapid succession to skip directly to the desired information, which is much faster than listening to all levels played aloud in turn. Experiments with BVI participants demonstrate that the approach is practical, easy to learn, and effective.
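The tap-to-advance behavior is essentially a small state machine. Here is a hypothetical sketch of that logic; in the real app, touch and lift events come from camera-based hand tracking, whereas here they arrive as plain method calls, and the feature content is invented for illustration.

```python
# Hypothetical sketch of Point-and-Tap level selection: repeated taps on the
# same feature advance one information level, and lifting the finger halts
# any utterance in progress so rapid taps can skip ahead.

FEATURE_INFO = {  # assumed example content, not from the actual app
    "door": ["Door.", "Main entrance, opens inward.", "Accessible width, 36 inches."],
}

class PointAndTap:
    def __init__(self):
        self.feature = None   # feature currently under the fingertip
        self.level = 0        # taps issued on it so far

    def on_touch(self, feature: str) -> None:
        """Fingertip points at or taps a feature: speak the next level."""
        if feature != self.feature:
            self.feature, self.level = feature, 0   # new feature: restart levels
        levels = FEATURE_INFO.get(feature, ["No information."])
        msg = levels[min(self.level, len(levels) - 1)]
        print(f"[speech starts] {msg}")
        self.level += 1

    def on_lift(self) -> None:
        """Fingertip lifts: halt speech immediately."""
        print("[speech halted]")

ui = PointAndTap()
ui.on_touch("door")   # level 1 info
ui.on_lift()
ui.on_touch("door")   # tap again: level 2
ui.on_lift()
ui.on_touch("door")   # level 3
```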

Christian Sinnott, Post-Doctoral Research Fellow

Investigating natural head movement and its role in spatial orientation perception: Insight from 50 hours of data collection

Movement is ubiquitous in everyday life, and accounting for our physical position as we move through the world is a constant process. Over the lifespan, experience in estimating one's position accumulates, and the nervous system's representation of this prior experience is thought to inform the current perception of spatial orientation. Broadly, spatial orientation perception is a multimodal sensory process: the nervous system rapidly monitors, interprets, and integrates sensory information from various sources in an efficient, statistically optimal manner to estimate an organism's position in its environment. In humans, key information in this process comes from the visual and vestibular systems, which rely on head-based sense organs. While the statistics of natural visual and vestibular stimuli have been characterized, unconstrained head movement and position, which may drive correlated dynamics across these head-based senses in the real world, have not. Furthermore, head-based sensory cues essential to human spatial orientation perception, such as the estimate of one's head orientation relative to gravity and heading (the direction of linear velocity in a head-based coordinate system), have not been robustly measured in unconstrained, natural behaviors. Measurements of these head-based sensory cues in naturalistic settings, even if incomplete, likely capture a portion of the behaviors that make up one's total prior experience, and the quantitative characteristics of these behaviors may explain previously observed patterns of bias in verticality and heading perception. In this brown bag, I will discuss methods of motion tracking in and out of the lab, my previous work using these methods to characterize the natural statistics of head orientation and heading over 50 hours of human activity, ongoing work using these natural statistics to constrain Bayesian models of sensory processing, and future research and applications that might leverage these data and approaches. Christian Sinnott | Smith-Kettlewell (ski.org)
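As a minimal illustration of how natural statistics can constrain a Bayesian model, consider a Gaussian prior over head tilt centered on upright (as natural head-orientation statistics would suggest) combined with a noisy sensory measurement. The numbers below are assumptions for illustration, not values from the 50-hour dataset.

```python
import numpy as np

# Illustrative sketch: with a Gaussian prior and Gaussian likelihood, the
# posterior mean is a precision-weighted average, so tilt estimates are
# pulled toward upright. This qualitatively reproduces the biases in
# verticality perception that strong priors predict at large tilts.

prior_mean, prior_sd = 0.0, 12.0   # deg; head is usually near upright (assumed)
likelihood_sd = 8.0                # deg; noise on the sensed tilt (assumed)

def map_tilt_estimate(measured_tilt: float) -> float:
    """Posterior mean for a Gaussian prior times a Gaussian likelihood."""
    w_prior = 1 / prior_sd**2
    w_like = 1 / likelihood_sd**2
    return (w_prior * prior_mean + w_like * measured_tilt) / (w_prior + w_like)

for true_tilt in (0.0, 30.0, 60.0, 90.0):
    est = map_tilt_estimate(true_tilt)
    print(f"true tilt {true_tilt:5.1f} deg -> estimate {est:5.1f} deg "
          f"(bias {est - true_tilt:+.1f})")
```

Measured natural statistics would replace the assumed prior here, which is what makes large-scale naturalistic head tracking valuable for constraining such models.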

Emily Cooper, Assistant Professor at UC Berkeley School of Optometry

A real-world visual illusion

I will describe our research into a surprising visual illusion in which humans misperceive the shape of a highly familiar object in a highly familiar context: their own mobile phone held in their hand. Unlike many other illusions that rely on carefully controlled visual information, this shape illusion is robust in fully natural conditions, and it requires only that one eye's retinal image be slightly minified. Our investigations indicate that the illusion results from a failure of the visual system to discard distorted binocular cues for object slant, even when the distortion does not reach awareness. This failure challenges our current understanding of sensory cue combination and yields practical insights into the perceptual effects of prescription spectacles. https://vcresearch.berkeley.edu/faculty/emily-cooper
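A back-of-the-envelope calculation shows why a small unilateral minification can produce a noticeable slant. One standard approximation from the stereo literature predicts perceived slant about a vertical axis from the horizontal size ratio (HSR) of the two eyes' images and the vergence angle. This sketch is not the authors' analysis, and all values are assumptions.

```python
import numpy as np

# Illustrative geometry: slant = -atan(ln(HSR) / mu), where HSR is the ratio
# of the left and right eyes' horizontal image sizes and mu is the vergence
# angle in radians. Slightly minifying one eye's image changes HSR, so a
# physically frontoparallel phone is predicted to appear slanted.

ipd = 0.062            # interocular distance, m (assumed)
distance = 0.40        # viewing distance to the phone, m (assumed)
minification = 0.98    # left eye's image minified by 2%, e.g., by a lens

mu = ipd / distance                     # vergence angle, rad (small-angle approx.)
hsr = minification                      # HSR = left / right horizontal image size
slant = -np.degrees(np.arctan(np.log(hsr) / mu))

print(f"predicted slant of a frontoparallel surface: {slant:.1f} deg")
```

With these assumed numbers, a 2% minification at arm's length predicts a slant of several degrees, large enough to notice on an object whose shape we know as well as our own phone.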