FVA Full Program

FUNCTIONAL VISION & ACCESSIBILITY (FVA) CONFERENCE

Celebrating 60 Years of Vision Research at the Smith-Kettlewell Eye Research Institute

August 3 – 4, 2023

San Francisco, CA, USA

SKERI Team

Welcome to the Smith-Kettlewell FVA Conference!

Our Sponsors

Program at a Glance

Detailed Program

Day 1: Thursday Aug 3, 2023

Welcome

Introduction

Session 1: Eye Movements in Health & Disease, Thursday 8:20 am – 10:20 am

Session 2: Oculomotor Control and Binocularity, Thursday 10:40 am – 12:40 pm

Session 3: Advances in the Retinal and Cortical Imaging of Visual Function, Thursday 1:40 – 3:40 pm

Session 4: Brain Plasticity, Thursday 4:00 pm – 6:00 pm

Reception and Posters, Thursday 6:00 pm – 8:00 pm

Day 2: Friday Aug 4, 2023

Session 5: Computational Models and Machine Learning for Vision Science & Accessibility, Friday 8:00 am – 10:00 am

Session 6: Augmented and Virtual Reality for Vision Screening, Training and Accessibility, Friday 10:20 am – 12:20 pm

Lightning Talks and Lunch with a Scientist, Friday 12:20 pm – 2:20 pm

Session 7: Restoring Vision vs. Using Available Senses, Friday 2:20 pm – 6:20 pm

Poster Abstracts

Lightning Talk Abstracts

Safety Plan


 

Organizers:

Program:

Preeti Verghese, PhD, Senior Scientist, SKERI

James Coughlan, PhD, Senior Scientist, SKERI

Logistics:

Sony Devis, COO, SKERI

Bebe St. John, Senior Research Administrator, SKERI

Accessibility Coordinator:

Charity Pitcher-Cooper, BSN, PHN, Scientific Program Coordinator, SKERI

Website:

Natela Shanidze, PhD, Scientist, SKERI

Advisory Board:

John Brabyn, PhD, CEO, SKERI

Arvind Chandna, MD, FRCS, FRCOphth, Clinician-Scientist, SKERI

Chuan Hou, MD, PhD, Scientist, SKERI

Lora Likova, PhD, Senior Scientist, SKERI

Santani Teng, PhD, Associate Scientist, SKERI

Christopher Tyler, PhD, DSc, Senior Scientist, SKERI

Trainee Liaisons:

Haydée G García-Lázaro, PhD, Postdoctoral Fellow, SKERI

Catherine Agathos, PhD, Postdoctoral Fellow, SKERI

Adrien Chopin, PhD, Postdoctoral Researcher, SKERI

https://www.ski.org/FVAconference

 

Welcome to the Smith-Kettlewell FVA Conference!

We are delighted to welcome you to our Conference on Functional Vision and Accessibility – an event aimed at helping identify the most promising directions for future research in some of the most important problem areas in the field of vision impairment.

The backdrop to the conference is the celebration of Smith-Kettlewell’s 60th anniversary as an independent vision research institute. While we are proud of this achievement, our goal for this meeting is not to dwell on the past but rather to look forward to the future, considering emerging issues and techniques for addressing unsolved problems in our field.

The conference encompasses a range of topical issues within Smith-Kettlewell’s traditional areas of research focus. These include binocular vision and the oculomotor system, eye movements, and retinal and cortical imaging techniques. Topics such as computational modeling, AI, brain plasticity, and augmented and virtual reality address both vision and blindness. A final extended session tackles the neglected but increasingly important and sometimes controversial issues of vision restoration versus the use of existing senses in providing access and task performance capabilities for blind individuals.

Within each topic, we have assembled speakers from around the world with particular emphasis on the new and upcoming generation of researchers bringing fresh ideas and perspectives to bear. 

The emphasis throughout is on translational research – i.e., research motivated by real problems faced by people with impaired visual function. To this end, participants include researchers, clinicians, accessibility experts and developers, many of whom are blind and visually impaired stakeholders themselves.

We hope this meeting will provide a useful opportunity for stimulating and discussing new approaches and promising techniques for future research and development aimed at improving the lives of those affected by vision impairment or dysfunction.

In closing, I would like to express deep gratitude to our conference Chair and Co-Chair, Preeti Verghese and James Coughlan (and other SKERI staff and scientists), for the countless hours spent in organizing this event, and to our speakers, participants, and sponsors, without whom this event would not be possible. Thank you all for attending, and we look forward to an exciting and enjoyable two days!

John Brabyn, PhD

Executive Director, Smith-Kettlewell Eye Research Institute

 

Our Sponsors

Smith-Kettlewell Eye Research Institute | National Eye Institute (NEI) | National Institute on Disability, Independent Living and Rehabilitation Research (NIDILRR) | EyeSeeTec GmbH | The Teng Lab | Friends of Smith-Kettlewell | Art Photo Academy | Association for Education and Rehabilitation of the Blind and Visually Impaired | Brain Vision | C. Light Technologies | SR Research

* Funding for this conference was made possible in part by 1R13EY035565-01 from the National Eye Institute and by 90REGE0018-01-00 from the National Institute on Disability, Independent Living, and Rehabilitation Research. The views expressed in written conference materials or publications and by speakers and moderators do not necessarily reflect the official policies of the Department of Health and Human Services; nor does mention of trade names, commercial practices, or organizations imply endorsement by the U.S. Government.


 

Program at a Glance

Day 1: Thursday Aug 3, 2023

7:30 - 8:00 am

Registration

8:00 - 8:10 am

Welcome

John Brabyn, PhD, Executive Director, SKERI

8:10 - 8:20 am

Introduction

Suzanne McKee, PhD, Senior Scientist Emerita, SKERI

8:20 - 10:20 am

Session 1

Eye Movements in Health and Disease

Speakers: Jacob Yates, PhD; Jorge Otero-Millan, PhD; Esther Gonzalez, PhD; Christy Sheehy, PhD

Discussant: Preeti Verghese, PhD

10:20 - 10:40 am

Break

10:40 - 12:40 pm

Session 2

Oculomotor Control and Binocularity

Speakers: Paul Gamlin, PhD; Jenny Read, PhD*; Rowan Candy, PhD; Ewa Niechwiej-Szwedo, PhD

Discussant: Dennis Levi, OD, PhD

12:40 - 1:40 pm

Lunch

1:40 - 3:40 pm

Session 3

Advances in the Retinal and Cortical Imaging of Visual Function

Speakers: Omar Mahroo, PhD, FRCOphth; Ravi Jonnal, PhD; Holly Bridge, PhD*; Yoichiro Masuda, MD, PhD

Discussant: Christopher Tyler, PhD, DSc

3:40 – 4:00 pm

Break

4:00 – 6:00 pm

Session 4

Brain Plasticity

Speakers: Mriganka Sur, PhD; Wu Li, PhD*; Lotfi Merabet, OD, PhD, MPH; Ione Fine, PhD

Discussant: Lora Likova, PhD

6:00 - 8:00 pm

Reception and Posters

Day 2: Friday Aug 4, 2023

7:30 - 8:00 am

Registration

8:00 - 10:00 am

Session 5

Computational Models and Machine Learning for Vision Science & Accessibility

Speakers: Miguel Eckstein, PhD; Dan Yamins, PhD; Frank Tong, PhD; Danna Gurari, PhD*

Discussant: Laura Walker, PhD

10:00 - 10:20 am

Break

10:20 - 12:20 pm

Session 6

Augmented and Virtual Reality for Vision Screening, Training and Accessibility

Speakers: Ben Backus, PhD; Brandon Biggs, MDes; Paul Ruvolo, PhD*; Yuhang Zhao, PhD

Discussant: James Coughlan, PhD

12:20 - 2:20 pm

Lunch

Lightning Talks & Lunch with a Scientist

2:20 - 4:00 pm

Session 7

Restoring Vision vs. Using Available Senses

Speakers: Juliette McGregor, PhD; Michael Beyeler, PhD; Dan Adams, PhD; Gordon Legge, PhD*

4:00 - 4:20 pm

Break                                                      

4:20 – 6:20 pm

Session 7

Restoring Vision vs. Using Available Senses (continued)

Speakers: Joshua Miele, PhD; Sile O’Modhrain, PhD; Don Fletcher, MD; Arvind Chandna, MD, FRCS, FRCOphth*

Discussant: Santani Teng, PhD

* Virtual presentation

Detailed Program

Day 1: Thursday Aug 3, 2023

Welcome

Thursday 8:00 am, John Brabyn, PhD, CEO, SKERI

Introduction

Thursday 8:10 am, Suzanne McKee, PhD, Senior Scientist, SKERI

Title: A brief history of a small institute

Session 1: Eye Movements in Health & Disease, Thursday 8:20 am – 10:20 am

This session will discuss how cortical activity is modulated by eye movements, the challenges of eye movements with the loss of the fovea in macular degeneration, and the use of fixational eye movements, and eye movements in general, as a biomarker for disease.

1.1, Thursday 8:20 am, Jacob Yates, PhD, Assistant Professor, University of California, Berkeley

Title: Active visual neuroscience in non-human primates

Abstract: Most of the core computational principles in visual neuroscience come from studies using fixating or anesthetized subjects. To overcome the limitations of fixation points, we recently developed a suite of hardware and software tools to study vision during natural behavior in untrained subjects. In this talk, I’ll describe our free-viewing approach to visual neuroscience and how this supports high-resolution measurements of cortical receptive fields (RFs) in the primate fovea during natural oculomotor behavior. Although this approach supports detailed RF measurements, the goal of free-viewing experiments is to generate data under more natural conditions. The end product of a free-viewing experiment is a retinal movie that is aligned with the spike times and oculomotor behavior of the animal. Aligning on saccade times reveals large visual transients in V1 which have temporal latencies that depend on the spatiotemporal tuning of the neuron. Thus, active vision in marmosets consists of a dynamic temporal sequence of neural activity associated with visual sampling. Taken together, the free-viewing paradigm lays the foundations for research in visual neurophysiology that more directly relates to how primates interact with the visual world.

1.2, Thursday 8:45 am, Jorge Otero-Millan, PhD, Assistant Professor, University of California, Berkeley

Title: Effect of head tilt and stimulus tilt on saccade direction biases and their dependency with saccade amplitude

Abstract: When looking around a visual scene, humans make saccadic eye movements to fixate objects of interest. While the extraocular muscles can execute saccades in any direction, not all saccade directions are equally likely: saccades in horizontal and vertical directions are most prevalent. Here, we asked if head orientation and scene orientation affect the saccade direction biases. That is, do the biases align with the head, gravity, the scene, or somewhere in between? In our study, participants viewed natural scenes and abstract fractals (radially symmetric patterns) through a virtual reality headset equipped with eye tracking. Participants’ heads were stabilized in vertical or tilted (clockwise and counterclockwise) positions while viewing the images, which could be aligned with the head or tilted relative to the head. Participants were also presented with tilted scenes with their head upright and asked to either fixate a small central target or free view the image while we recorded their eye movements. We found that during free viewing of fractals, saccades largely followed the orientation of the head; however, when participants viewed an Earth-upright natural scene during head tilt, the orientation of the head influenced saccade directions, with biases aligned with an orientation in between the head orientation and the scene orientation. While free viewing tilted scenes with the head upright, we found that saccade biases tilted towards the direction of the scene tilt, with large-amplitude saccades more closely oriented to the scene tilt than small saccades. Accordingly, we found that microsaccade distributions obtained during fixation were not affected by the presence of a tilted scene in the background. These results indicate a combined effect of two reference frames in saccade generation: an egocentric one, which appears to dominate for small saccades and in the absence of visual cues for orientation, and an allocentric one that biases the saccades along the orientation of the image.

1.3, Thursday 9:10 am, Esther Gonzalez, PhD, Research Associate, Toronto Western Hospital

Title: Oculomotor Consequences of Macular Degeneration

Abstract: Many people associate macular degeneration with poor visual acuity, but the oculomotor challenges resulting from central vision loss go beyond the loss of visual acuity and contrast sensitivity. In this talk I want to review some of the challenges facing the visual system when it needs to switch to a new oculomotor reference as well as the problems that researchers face when measuring the resulting eye movements of these patients.

1.4, Thursday 9:35 am, Christy Sheehy, PhD, CEO, C. Light Technologies

Title: Quantitative measurements of fixational eye motion in multiple sclerosis

Abstract: We currently lack reliable, rapid, and sensitive methods for prognosis, detection, and disease monitoring in multiple sclerosis (MS). Fixational eye movements are one of the finest motor movements the human body is capable of making and provide a non-invasive window into motor function at the micron-level.

We recruited 205 people with MS and 145 controls, each split into respective test and validation cohorts. Retinal eye-tracking was performed during fixation using a custom-built tabletop device — the tracking scanning laser ophthalmoscope (TSLO).  Fixation abnormalities are detectable early in the course of MS disease and show promise as a highly sensitive, quantitative biomarker to differentiate patients from controls. They also have potential to serve as an outcome measure for early phase clinical trials assessing motor function in MS. End-to-end retinal encoding models could be generalizable and trained on other disease states; future studies are needed to confirm.

1.5, Thursday 10:00 am, Discussion moderated by Preeti Verghese, PhD, Senior Scientist, SKERI

Break, Thursday 10:20 am – 10:40 am

Session 2: Oculomotor Control and Binocularity, Thursday 10:40 am – 12:40 pm

This session will discuss neural circuits underlying vergence and accommodation, the consequences of impaired accommodation and vergence during visual development and their association with strabismus, the importance of binocularity in daily visual function and the consequences of impaired binocularity on eye-hand coordination.

2.1, Thursday 10:40 am, Paul Gamlin, PhD, Professor, University of Alabama, Birmingham

Title: Neural control of vergence eye movements

Abstract: Vergence eye movements are required for aligning the fovea of each eye to fixate on objects at different distances, ensuring binocular fusion and depth perception. In addition, ocular accommodation is required to focus objects at different distances. The presentation will summarize current knowledge of the neural control of these eye movements. A population of premotor neurons is located dorsal and lateral to the oculomotor nucleus in the so-called supraoculomotor area. These neurons provide medial rectus motoneurons and Edinger-Westphal preganglionic neurons with the requisite near-response position and velocity signals to generate these symmetric eye movements. However, to foveate most targets in 3D space, disconjugate eye movements with unequal saccadic amplitudes are generated. Two potential neural strategies driving these disconjugate eye movements have been proposed. The Helmholtz model proposed that both eyes are directed independently, while the Hering model proposed that the two eyes move as a yoked pair for both conjugate and vergence eye movements. Our recent neurophysiological studies of neurons near the oculomotor nucleus in the central mesencephalic reticular formation provide, for the first time, clear evidence of an enhanced vergence velocity signal solely during unequal saccadic eye movements, suggesting that the Hering model better accounts for the binocular neural control of eye movements.

2.2, Thursday 11:05 am, Jenny Read, PhD*, Professor, University of Newcastle

Title: Control-theoretic models of vergence and accommodation

Abstract: Vergence and accommodation both aim to direct the visual system to objects at a desired distance. Both can be viewed as negative feedback systems attempting to minimise defocus blur and retinal disparity of the fixated object. In this talk, I aim to compare and contrast our current best control-theoretic models of these systems. Both appear to be well described by a “leaky integrator” form of control. It also seems likely that both systems contain a forward model or virtual plant, i.e. that the brain effectively models the oculomotor plant. This enables it to overcome the destabilising effect of latencies by predicting future changes in sensory input, expected due to motor commands already sent. However, the two systems have different strengths and weaknesses. In sensory terms, the visual system is sensitive to a relatively small range of disparities, whereas it can detect larger amounts of defocus blur. In terms of motor response, though, vergence typically responds over a wide range, whereas accommodation is more often limited, due to common pathologies such as myopia and/or normal ageing. Integral control together with limited actuator range causes a problem known as integrator wind-up, and we propose that the brain uses anti-wind-up mechanisms to avoid this. Neural crosslinks between accommodation and vergence enable each system to compensate for its weakness with the strengths of the other. Together, these exquisite control systems work to provide us with a sharp, single view of the world.
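For readers less familiar with the control-theory terminology above, the following is a minimal illustrative sketch of a pair of coupled leaky-integrator controllers with vergence-accommodation crosslinks. It is offered only as a generic textbook form, not as the speaker's model; the gains ($k$), crosslink weights ($c$), time constants ($\tau$) and latencies ($\Delta$) are placeholder symbols, and the forward-model and anti-wind-up elements discussed in the abstract are omitted.

$$\tau_v \,\dot{V}(t) = -V(t) + k_v\, d(t-\Delta_v) + c_{av}\, b(t-\Delta_a)$$

$$\tau_a \,\dot{A}(t) = -A(t) + k_a\, b(t-\Delta_a) + c_{va}\, d(t-\Delta_v)$$

Here $V$ and $A$ are the vergence and accommodation motor commands, $d$ is the retinal disparity and $b$ the defocus blur of the fixated object, and the $-V(t)$ and $-A(t)$ terms provide the "leak" that distinguishes a leaky integrator from a pure one.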

2.3, Thursday 11:30 am, Rowan Candy, PhD, Professor, Indiana University

Title: Disparity-driven reflex vergence during infancy and early childhood

Abstract: Key demonstrations of neuroplasticity during early postnatal development have used disruptions of visual experience such as imposed visual deprivation, strabismus or anisometropia.  Young human infants and children must actively coordinate their own retinal image quality and alignment in order to support typical visual development, a process that has received relatively little attention. This presentation will examine the early development of this motor coordination in the context of interacting with the visual environment, including the potential impact of challenging immaturities such as refractive error, reduced interpupillary distance and immature spatial vision. Our current understanding will be discussed in the context of both typical development and clinical conditions.

2.4, Thursday 11:55 am, Ewa Niechwiej-Szwedo, PhD, Associate Professor, University of Waterloo

Title: The effects of amblyopia on the control of eye and hand movements

Abstract: The ability to perform accurate, precise and temporally coordinated goal-directed actions is fundamentally important to activities of daily life, as well as skilled occupational and recreational performance. Vision provides a key sensory input for the normal development of visuomotor skills. Normal visual development is disrupted by amblyopia, a neurodevelopmental disorder characterized by impaired visual acuity in one eye and reduced binocularity, which affects 2-4% of children and adults. This presentation will discuss a growing body of research which demonstrates that binocular vision provides an important input for optimal development of the visuomotor system, specifically visually guided upper limb movements such as reaching and grasping. Research shows that decorrelated binocular experience is associated with both deficits and compensatory adaptations in visuomotor control. Parallel studies with typically developing children and visually normal adults provide converging evidence supporting the contribution of stereopsis to the control of grasping. Overall, this research advances our understanding of the role of binocular vision in the development and performance of visuomotor skills, which is the first step towards developing assessment tools and targeted rehabilitations for children with neurodevelopmental disorders at risk of poor visuomotor outcomes.

2.5, Thursday 12:20 pm, Discussion moderated by Dennis Levi, OD, PhD, Professor, University of California, Berkeley

Lunch, Thursday 12:40 pm – 1:40 pm

Session 3: Advances in the Retinal and Cortical Imaging of Visual Function, Thursday 1:40 – 3:40 pm

This session will discuss the functional assessment of vision in health and disease using the electroretinogram, adaptive optics and optical coherence tomography, fMRI to assess visual field integrity, and the use of magnetic resonance spectroscopy to understand the role of inhibitory neurotransmitters in human vision.

3.1, Thursday 1:40 pm, Omar Mahroo, PhD, FRCOphth, FHEA, Professor, University College London

Title: Advances in electroretinography

Abstract: The electroretinogram (ERG) represents the summed electrical response of retinal neurons to light stimuli and allows direct non-invasive assessment of human retinal function. Numerous developments have occurred in our understanding of the cellular basis of different ERG components as well as in techniques of recording and analysing waveforms elicited by standard and novel stimulus protocols. Portable devices with automated stimulus adjustment according to pupil diameter permit non-mydriatic recordings to be obtained rapidly in the outpatient clinic. Non-standard stimulus protocols aim to selectively isolate rod or cone-driven contributions to dark-adapted responses and to track changes in rod and cone system sensitivity during dark adaptation, yielding new models of photopigment regeneration kinetics. New methods of analysis have been developed, including fitting of mechanistic models to ERGs, application of machine-learning techniques and interrogation of genetic associations with ERG parameters, the latter potentially shedding light on the effects of common genetic variants in the population. Some of these developments will be discussed, together with insights yielded into retinal physiology and pathophysiology.

3.2, Thursday 2:05 pm, Ravi Jonnal, PhD, Assistant Professor, University of California, Davis

Title: An introduction to optoretinography

Abstract: Optoretinography (ORG) refers to the measurement of the functional responses of retinal neurons in the living eye using noninvasive, optical methods. ORG methods have many potential applications in basic and clinical science. They have already been used to produce knowledge about the biophysical dynamics of photoreceptors and ganglion cells, and they represent potentially powerful new biomarkers for clinical assessment and the development of novel therapeutics.

Light-evoked, optical responses from living human photoreceptors were first observed just fifteen years ago and they have been reliably measured and reproduced only within the last few years. Thus as of now, very little is known about their origins, mechanisms, or utility as biomarkers of disease. Moreover, they have been measured using a variety of techniques, each with its own merits, but the implications of methodological choices on practical aspects of detection and measurement have not been investigated. These methods differ in key dimensions that affect their harmonization, such as temporal bandwidth, resolution, coherence, and sensitivity. Measurements are also impacted by sample-dependent variables such as photoreceptor density, layer topography, and wave guiding.

3.3, Thursday 2:30 pm, Yoichiro Masuda, MD, PhD, Lecturer, The Jikei University School of Medicine

Title: V1 Projection Zone Signals in Human Macular Degeneration Depend on Task, not Visual Stimulus

Abstract: Juvenile Macular Degeneration (JMD) due to retinal dystrophy results in a loss of the central visual field through almost symmetrical impairment of retinal cells in both eyes. Until the onset of JMD, the primary visual cortex (V1) develops structures and functions essential for 'seeing', from the postnatal period through the critical period. However, with the onset of JMD, V1 loses its feedforward inputs from the damaged retina, leading to a loss of its 'seeing' function or visual field (referred to as the Lesion Projection Zone, LPZ, in the context of retinal lesions). Despite this, it has been reported that the V1-LPZ demonstrates 'task-dependent responses' and remains active.

Why would the 'task-dependent responses' be observed in the V1-LPZ, which has lost its function of 'seeing'? In this lecture, I would like to focus on these 'task-dependent responses' of the V1-LPZ, uniquely observed in JMD, and consider retinal dystrophy from the perspective of the structure and function of the visual cortex.

3.4, Thursday 2:55 pm, Holly Bridge, DPhil*, Professor, Oxford University

Title: Investigating the role of neurochemistry in human visual perception

Abstract: The brain relies on the balance of excitatory and inhibitory neurotransmitters to function correctly, while temporary changes in this balance may facilitate plasticity. Over the past 10 years, magnetic resonance spectroscopy (MRS) has permitted the measurement of neurochemicals within restricted regions of cortex using a standard 3T MRI scanner. In this talk I will present recent data investigating the link between visual perception and GABA and glutamate, the major inhibitory and excitatory neurotransmitters respectively.

First, I will present two studies investigating the role of GABA in 3D vision, specifically in eye dominance. Balanced input from the two eyes is likely established by mutual inhibition, whereby activation of one eye inhibits input from the other eye. Thus, when input from the two eyes is comparable, both will contribute equally to binocular vision to promote 3D perception. Experiment 1 involved temporarily disrupting the balance between the two eyes using short term monocular patching and measuring the evoked change in both GABA concentration in V1 and eye dominance. Then, using a combined fMRI-MRS scan protocol we tested whether GABA concentration in normally sighted participants was related to subtle, but reliable, imbalance in vision between the eyes during visual stimulation. Both these studies indicated a relationship between GABA concentration and eye dominance, supporting the idea of mutual inhibition. These findings may be useful to better understand binocular vision disorders such as amblyopia.

Secondly, I will present data from a recent study in patients with V1 damage resulting in hemianopia. We measured residual visual ability within the blind (hemianopic) region and correlated this residual vision to the concentration of GABA and glutamate in motion area hMT+. Motion area hMT+ is important as there is considerable evidence suggesting the pathway between the lateral geniculate nucleus and hMT+ may support residual vision following damage to V1. We found that both GABA and glutamate correlated inversely with residual vision such that patients with lower neurotransmitter levels showed greater residual vision.

These studies suggest that the concentration of GABA and glutamate within the visual cortex may affect the ability to improve visual perception after damage and future rehabilitation approaches could potentially be augmented by pharmacological intervention.

3.5, Thursday 3:20 pm, Discussion moderated by Christopher Tyler, PhD, DSc, Senior Scientist, SKERI

Break, Thursday 3:40 – 4:00 pm

Session 4: Brain Plasticity, Thursday 4:00 pm – 6:00 pm

This session will discuss the synaptic basis of plasticity, the changes in neuronal connectivity with learning, the effects of non-invasive brain stimulation on brain plasticity, and the effects of long term visual deprivation on human visual processing.

4.1, Thursday 4:00 pm, Mriganka Sur, PhD, Newton Professor of Neuroscience, MIT

Title: Visual cortex plasticity: mechanisms and implications

Abstract: Plasticity of synapses and circuits during brain development is crucial for creating internal representations of the external world. Plasticity of neuronal responses involves activity-dependent changes at specific synapses accompanied by coordinated changes across a larger set of synapses: Hebbian plasticity at individual synapses, for example, needs to be balanced by neuron-wide synaptic scaling as well as local synaptic renormalization in dendrites in order to preserve the information content of neurons and networks. In an important set of studies, we have shown that spike-timing induced receptive field plasticity of visual cortex neurons in vivo is anchored by increases in synaptic strength of identified excitatory synapses on dendritic spines, and is accompanied by a decrease in the strength of adjacent unpotentiated spines on the same dendrite. Molecularly, such locally coordinated potentiation and depression of synapses involves glutamate receptor redistribution via spine-specific expression of activity-dependent genes. Similar changes underlie loss and gain of eye-specific responses by visual cortex neurons following monocular deprivation during a developmental critical period. We have applied these insights to analyze visual cortex circuits in mouse models of Rett Syndrome, a devastating neurodevelopmental disorder. Abnormally prolonged developmental plasticity in mice lacking MeCP2, the gene underlying Rett Syndrome, can be offset by a molecule, IGF1(1-3), that causes excitatory synapses to mature. Following clinical trials spanning many years, the molecule has just been approved by the FDA as the first ever treatment for Rett Syndrome, and the first mechanism-based therapeutic for any neurodevelopmental disorder.

4.2, Thursday 4:25 pm, Wu Li, PhD*, Director, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University

Title: Modifications of visual cortical processing by implicit learning

Abstract: Repeated experiences can shape visual perception without conscious awareness. To investigate the underlying mechanisms, we conducted neurophysiological recordings in awake behaving monkeys and examined two types of implicit visual learning: perceptual learning that improves performance on visual tasks; and fear learning that associates visual stimuli with aversive emotion. Our studies consistently showed that visual perceptual learning induces concerted changes in both perceptual and cognitive processes, leading to refined sensory representations, top-down influences, and readout processes. Notably, training to detect camouflaged singletons, contours and textures within cluttered backgrounds induces changes in the late, but not early, components of neuronal responses in V1 and V4, indicating the crucial role of feedback modulation. In contrast, visual fear learning can modify the early components of V1 responses, suggesting a change in the feedforward process for proactively tagging visual inputs that are predictive of imminent threat. Our findings demonstrate the adaptability of different cortical processes to recurring visual experiences and highlight the remarkable capability of the early visual cortex to convey task- or behavior-related information from these experiences.

4.3, Thursday 4:50 pm, Lotfi Merabet, OD, PhD, MPH, Associate Professor, Mass. Eye & Ear

Title: Assessing functional vision in early brain-based visual impairment

Abstract: Cerebral visual impairment (CVI) is a brain-based visual disorder and the leading cause of pediatric visual impairment in developed countries. Despite this clear public health concern, the functional visual profile and underlying neurophysiology of this condition remain poorly understood. In the setting of early neurological injury, children with CVI typically show deficits associated with higher-order visuospatial processing, such as finding a target of interest within a complex scene. It remains unknown how manipulating task demands and other environmental factors influence visual search performance in this population. To address this gap, we have developed a series of novel and naturalistic virtual reality (VR)-based visual search tasks combined with eye tracking. We find that CVI is associated with decreased search efficiency and worsening performance with increased visual task demands when compared to neurotypical controls. Combined with neuroimaging modalities, these novel assessments allow for the characterization of the neural correlates associated with visuospatial impairments in CVI. The results may also have important clinical applications in assessing environmental factors that affect functional visual processing in CVI and identifying adaptations that may help performance.

4.4, Thursday 5:15 pm, Ione Fine, PhD, Professor, University of Washington, Seattle

Title: I can hear what you see: cortical plasticity in congenitally blind individuals

Abstract: One of the most important tasks for vision is tracking the movement of objects in space. Here I discuss work in our laboratory examining the anatomical and computational basis of auditory motion tracking in early blind individuals. Blindness early in life leads to ‘recruitment’ of the ‘visual’ motion area hMT+ for auditory motion signals. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion. We discuss how this dramatic shift in the cortical substrate of motion processing might influence the neural computations and perceptual experiences underlying motion processing in early blind individuals.

4.5, Thursday, 5:40 pm, Discussion moderated by Lora Likova, PhD, Senior Scientist, SKERI

Reception and Posters, Thursday 6:00 pm – 8:00 pm

Day 2: Friday Aug 4, 2023

Session 5: Computational Models and Machine Learning for Vision Science & Accessibility, Friday 8:00 am – 10:00 am

The session will discuss computational models of human visual processing and machine vision, factors that improve the robustness of deep neural networks in the simulation of biological vision, a deeper computational understanding of visual processing, and computer vision and machine learning models for accessibility.

5.1, Friday 8:00 am, Miguel Eckstein, PhD, Distinguished Professor, University of California, Santa Barbara

Title: From Bayesian ideal observers to deep neural networks

Abstract: For four decades, Bayesian ideal observer models (BIO) have been an important tool for understanding human vision. They have been used as benchmarks to compare human perceptual performance against the optimal upper bound, to assess whether behaviors arise as byproducts of task information or stimulus priors, and to identify sources of suboptimality in human visual processing. The main limitation of the BIO is that its computation requires full knowledge of the image statistics and cannot be applied to real-world scenes without strong feature-extraction assumptions. This limits the use of the BIO for understanding natural tasks.

In contrast, Deep Neural Networks (DNNs) can be applied to any image set including real world scenes. However, they do not guarantee optimality and their computational stages are harder to interpret. In this talk, I will discuss how we can learn about properties of human vision with real world scenes and tasks from comparisons of DNNs and human behaviors and from comparisons of the inner properties of DNNs and BIO models.
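As background for the comparison above, the Bayesian ideal observer can be written in one line: given an image $I$ and candidate world states $s$, it reports the state that maximizes the posterior probability. This is the generic textbook formulation, not the specific models used in the talk:

$$\hat{s} = \arg\max_{s}\, p(s \mid I) = \arg\max_{s}\, p(I \mid s)\, p(s)$$

Evaluating the likelihood $p(I \mid s)$ exactly requires full knowledge of the image statistics, which is precisely the limitation noted above for real-world scenes.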

5.2, Friday 8:25 am, Dan Yamins, PhD, Assistant Professor, Stanford University

Title: Beyond ConvNets: deepening our computational understanding of the visual system

Abstract: The emerging field of NeuroAI has leveraged techniques from artificial intelligence to model brain data. In this talk, I will show that the connection between neuroscience and AI can be fruitful in both directions. Towards "AI driving neuroscience", I will discuss a new candidate universal principle for functional organization in the brain, based on recent advances in self-supervised learning, that explains both fine details and large-scale organizational structure in the vision system, and perhaps beyond. In the direction of "neuroscience guiding AI", I will present a novel cognitively-grounded computational theory of perception that generates robust new learning algorithms for real-world scene understanding. Taken together, these ideas illustrate how neural networks optimized to solve cognitively-informed tasks provide a unified framework for both understanding the brain and improving AI.

5.3, Friday 8:50 am, Frank Tong, PhD, Centennial Professor, Vanderbilt University

Title: Understanding the computational bases of robust object recognition in humans and deep neural networks

Abstract: Deep neural networks (DNNs) trained on object classification provide the best current models of human vision, with accompanying claims that they have attained or even surpassed human-level performance. However, DNNs tend to fail catastrophically in situations where humans do not, especially when faced with noisy, degraded, or ambiguous visual inputs. Such findings imply that the computations performed by DNNs do not adequately match those performed by the human brain. In this talk, I will discuss whether the brittleness of current DNN models is caused by flaws in their architectural design, imperfections in their learning protocols, or inadequacies in their training experiences. Our studies show that learning has a critical role in the acquisition of robust object representations in both DNNs and human observers. In particular, we hypothesize that everyday encounters with visual blur may be a critical feature for conferring robustness to biological and artificial visual systems.

5.4, Friday 9:15 am, Danna Gurari, PhD*, Assistant Professor, University of Colorado Boulder

Title: AI Descriptions of Visual Content Taken by People With Visual Impairments: The Past Decade and What's Next

Abstract: A natural grand challenge for the AI community is to create computer vision models that can assist people with vision impairments to learn about their visual surroundings.  In this talk, I will begin by discussing my team's work on building the first datasets and AI challenges that originate from this population in authentic use cases.  I will address questions including: What are the challenges for creating large-scale datasets that represent a real use case?  How does data originating from this population compare to data in mainstream (contrived) datasets?  What is the current performance of AI models in delivering the information sought by people with vision impairments?  What are the key AI challenges ahead for supporting users of visual description technologies?  This will include a discussion of the challenges of developing solutions responsibly and efficiently.

5.5, Friday 9:40 am, Discussion, moderated by Laura Walker, PhD, Vision Science, Apple Inc.

Break, Friday 10:00 am – 10:20 am

Session 6: Augmented and Virtual Reality for Vision Screening, Training and Accessibility, Friday 10:20 am – 12:20 pm

This session will cover a range of virtual and augmented reality applications from visual assessment and training to accessibility tools that convey spatial information non-visually, in addition to tools that provide an enhanced visual display of the environment for people with low vision.

6.1, Friday 10:20 am, Ben Backus, PhD, CTO, Vivid Vision

Title: VR as a platform for new vision tests and treatments: at-home monitoring of fields and a holistic approach towards treating amblyopia

Abstract: Mobile VR headsets are computer devices that show different, large images to each eye. They can be used to create new vision tests and treatments that were not previously practical. Two potential uses for mobile VR headsets are testing visual fields at home, especially for patients with glaucoma, and treating binocular dysfunctions such as amblyopia.

A major problem for clinical visual field testing is the high variability between repeated tests. This causes two kinds of problems. First, some patients go untreated because their loss is not detected, while others with stable vision are treated unnecessarily. Second, clinical trials of new therapeutics take too long to assess visual function, which delays the development of new treatments. The solution is to collect more data. To this end, a new, less stressful perimeter has been developed for patients to use at home. In a collaboration with UCSF, patients with glaucoma have been taking bundles of 10 tests, which are then averaged to improve precision by a factor of 3.

Treating amblyopia is more complex than visual field testing because binocular vision is complicated, and each patient is unique. The amblyopic visual system suppresses one eye, and studies have consistently shown that simply by treating suppression, acuity improves by 1-2 lines. VR headsets can be used for this treatment. However, 1-2 lines is only a modest improvement. To further improve both acuity and stereopsis, a holistic approach has been implemented in VR that addresses suppression, stereopsis, and vergence ability simultaneously.
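A note on the factor-of-3 precision gain mentioned in the visual field paragraph above: assuming roughly independent test-retest noise, averaging $n$ repeated tests reduces the standard error of the estimate by $\sqrt{n}$, so a bundle of 10 tests gives

$$\mathrm{SE}_{\bar{x}} = \frac{\sigma}{\sqrt{n}} = \frac{\sigma}{\sqrt{10}} \approx \frac{\sigma}{3.2},$$

consistent with the threefold improvement reported. Correlated errors across tests would reduce this gain.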

6.2, Friday 10:45 am, Brandon Biggs, MDes, Doctoral Candidate, Georgia Tech

Title: XR through the senses: navigating the cross-sensory digital frontier

Abstract: The rapid advancement of augmented, virtual, and extended reality (AR/VR/XR) technologies has opened up new possibilities for training and accessibility for blind individuals. This presentation, "XR through the Senses: Navigating the Cross-Sensory Digital Frontier," will explore the promise and challenges of AR/VR/XR for visual impairment, focusing on the potential for creating inclusive non-visual technology.

Drawing from a range of XR projects, including the investigation of making accessible nonverbal, non-visual social interactions in virtual reality, augmented reality applications that help sighted designers see similarly to someone with different visual impairments, applications for virtual reality canes, and the development of accessible 3D model maps using augmented reality, this presentation will delve into the innovative ways XR technology can be harnessed to benefit individuals with visual impairments. We will also examine the use of auditory virtual reality to create digital auditory maps, highlighting the potential for non-traditional conventions in XR to be investigated and developed further.

As headphones, in particular Apple AirPods, are the most ubiquitous virtual reality headsets, this presentation will emphasize the significant opportunity this presents for virtual reality and XR researchers and developers. By exploring the potential of cross-sensory experiences in XR, we can work towards a more inclusive digital frontier that caters to the needs of individuals with visual impairments.

6.3, Friday 11:10 am, Paul Ruvolo, PhD*, Assoc. Prof., Olin College of Engineering

Title: Assistive Augmented Reality Technology

Abstract: Mainstream augmented reality systems can form the basis for the creation of novel assistive technologies.  In this talk, I will focus on the specific role that mobile phone-based augmented reality frameworks can play in assisting blind travelers with orientation and mobility.  In addition to my own work on the Clew app for indoor navigation, I’ll also discuss other work in the field including that of researchers at Smith-Kettlewell.   In addition to presenting the technology and its potential impact, I’ll discuss avenues (and obstacles) for bringing this technology to a wide audience.

6.4, Friday 11:35 am, Yuhang Zhao, PhD, Assistant Professor, University of Wisconsin, Madison

Title: Augmented reality systems for people with low vision

Abstract: Low vision is a visual impairment that falls short of blindness but cannot be corrected by eyeglasses or contact lenses. Low vision people face severe challenges in various daily tasks. While current low vision aids (e.g., magnifier, CCTV) support basic vision enhancements, such as magnification and contrast enhancement, these enhancements often arbitrarily alter a user's full field of view without considering the user's context, such as the tasks, the environmental factors, and the user’s visual abilities. As a result, these low vision aids are not sufficient or preferred by low vision users in many important tasks. Augmented reality (AR) technology presents a unique opportunity to enhance low vision people’s visual experience by intelligently recognizing the surrounding environment and presenting suitable visual augmentations. In this talk, I’m going to talk about how I design and build intelligent AR systems with tailored augmentations to support low vision people in visual tasks, such as a head-mounted AR system that presents visual cues to orient users’ attention in a visual search task, as well as a projection-based AR system that projects visual highlights on the stair edges to support safe stair navigation. I will conclude my talk by discussing the future of AR for low vision accessibility, such as integrating eye tracking to generate gaze-based augmentations.

6.5, Friday 12:00 pm, Discussion moderated by James Coughlan, PhD, Senior Scientist, SKERI

Lightning Talks and Lunch with a Scientist, Friday 12:20 pm – 2:20 pm

Session 7: Restoring Vision vs. Using Available Senses, Friday 2:20 pm – 6:20 pm

This is a capstone session that not only discusses recent advances in vision restoration techniques, including optogenetic therapies, AI to improve a bionic eye, and brain-computer interfaces to directly stimulate visual cortex, but also includes the voices of prominent blind and visually impaired scientists on the relative merits of restoring vision versus making the most of available senses. To further amplify the needs and viewpoints of the patient community, this session will also include lay individuals with visual impairment as well as clinicians familiar with the challenges of different low-vision populations.

7.1, Friday 2:20 pm, Juliette McGregor, PhD, Assistant Professor, University of Rochester

Title: Vision restoration at the fovea

Abstract: In humans, high-quality, high-acuity visual experience is mediated by the fovea, a tiny, specialized patch of retina containing the locus of fixation. Despite this, vision restoration strategies are typically developed in animal models without a fovea, and the unique features of this structure are ignored. Recently, optogenetic therapies that aim to confer light sensitivity to the remaining retinal architecture despite photoreceptor loss have entered clinical trials. This is achieved by expressing light-sensitive ion channels in retinal ganglion cells. I will describe pre-clinical testing of optogenetic therapy in the non-human primate fovea using an adaptive optics retinal imaging approach and consider some of the barriers and potential solutions to achieving high-quality restored vision.

7.2, Friday 2:45 pm, Michael Beyeler, PhD, Assistant Professor, University of California, Santa Barbara

Title: Towards a Smart Bionic Eye: AI-powered artificial vision for the treatment of incurable blindness

Abstract: Despite recent advances in the development of visual neuroprostheses, the quality of current prosthetic vision is still rudimentary and does not differ much across different device technologies. Rather than aiming to represent the visual scene as naturally as possible, a Smart Bionic Eye could provide visual augmentations through the means of artificial intelligence–based scene understanding, tailored to specific real-world tasks that are known to affect the quality of life of people who are blind. The ability of a visual prosthesis to support everyday tasks might make the difference between abandoned technology and a widely adopted next-generation neuroprosthetic device.

7.3, Friday 3:10 pm, Dan Adams, PhD, Principal Investigator, Neuralink

Title: Development of a visual prosthesis using the Neuralink implant

Abstract: The integration of thin-film electrodes, customized silicon, and wireless data transmission technologies has led to the development of a compact, high-channel-count, subcutaneous neural implant. This remarkable device, in conjunction with the ability to robotically insert thousands of stimulating and recording electrodes into the human brain, presents a wide range of therapeutic applications. When placed in the visual cortex, the Neuralink implant holds the potential to serve as a foundation for a visual prosthesis. In this talk, I will present studies of macaques with Neuralink implants placed in the visual cortex. Additionally, I will discuss the features that make the implant a viable device for the restoration of visual perception in visually impaired individuals.

7.4, Friday 3:35 pm, Gordon Legge, PhD*, Professor, University of Minnesota

Title: Vision restoration: the dream

Abstract: What do we mean by vision restoration? It depends who you ask. For many, it may refer to the recovery of full vision from blindness or severe visual impairment. For others, it may refer to fixing some aspect of vision such as use of glasses to correct refractive errors or visual training to correct stereoblindness. I’ll discuss the meaning of vision restoration from my perspective as a vision scientist and also from my personal perspective as someone with low vision. I’ll briefly comment on some famous case studies of vision restoration, and also modern methods of sight restoration including prosthetic vision and gene therapy. I will also reflect on my own unsuccessful experience with vision restoration.

Break, Friday 4:00 pm – 4:20 pm

7.5, Friday 4:20 pm, Joshua Miele, PhD, Accessibility Lead, Amazon

Title: Messianic Dreams and Mundane Realities – the unintended consequences of raising from the dead and restoring sight to the blind

Abstract: Dr. Miele will enumerate and discuss the motivations of a variety of vision stakeholders, their frequently divergent goals, and their shared interests. Ophthalmologists, rehabilitation professionals, vision-loss patients, and Blind thinkers have different perspectives on what constitutes a successful vision-loss outcome, and different priorities on how limited research, healthcare, and rehabilitation resources should be allocated to those ends.  In addition to reviewing the perspectives, Dr. Miele will offer a simple set of considerations to assist practitioners in making ethical decisions around vision-loss intervention.

7.6, Friday 4:45 pm, Sile O’Modhrain, PhD, Associate Professor, University of Michigan

Title: Attuning to the world: sensory substitution or sensorimotor recalibration?

Abstract: One of the most enduring bodies of work associated with Smith-Kettlewell is the work of Bach-y-Rita and colleagues on tactile-visual sensory substitution (TVSS), carried out here in the mid-1980s. In a series of studies, the researchers demonstrated that images picked up from a camera could be displayed as tactile stimuli on a user’s back. Further, with training, they showed that people using this device could learn to interpret and react to dynamic information embedded in these stimuli, as when they ducked to avoid an approaching ball in response to the rapid expansion or ‘looming’ of the tactile image.

While Bach-y-Rita and colleagues interpreted this as a mechanism for ‘substituting’ the tactile sense for the visual sense, the intervening years of research in the field of embodied cognition provide evidence for an alternative interpretation of their findings.  In this talk, I will briefly discuss relevant theories from the embodied approach to perception, action and cognition to argue that Bach-y-Rita’s work is not predicated on substitution, but on our ability to discover ‘lawfulness’ in how the energy in the sensory milieu around us changes in response to our movements.  It is through our acting, moving bodies, I suggest, that we become attuned to information relevant to what we need to do, no matter whether it comes to us via vision, or touch, or sound.

7.7, Friday 5:10 pm, Don Fletcher, MD, Clinical Scientist, SKERI

Title: Insights from 37 years of seeing patients who can't see me

Abstract: In 37 years, I have cared for over 35,000 visually impaired patients. In that process I have continually learned new things about the visual system. I have also noted that most of the clever new ideas that I have added to my “bag of tricks” have actually come from my patients. As well, I have learned some wonderful lessons for life itself.   In this talk I will review these lessons learned including some amazing default settings within the visual system, innovative adaptation ideas and resiliency skills for anyone dealing with life’s challenges.

7.8, Friday 5:35 pm, Arvind Chandna, MD, FRCS, FRCOphth*, Senior Clinician Scientist, SKERI (in conversation with Mae Lane-Karnas and Katie Lane-Karnas)

Title: Cerebral Visual Impairment (CVI). From Diagnosis to Directing Our Future. A conversation with a 13-year-old CVIer and parent.

Abstract: CVI is now the commonest cause of bilateral visual functional loss in children with a visual impairment and the prevalence is increasing worldwide. CVI is a brain-based condition resulting usually from adverse events at or around birth and sometimes as a consequence of late-acquired cerebral injury. It manifests as a spectrum of higher visual function deficits (HVFDs) affecting visual function in everyday life in the absence of visible eye pathology, and often in the presence of normal visual acuity. The pathophysiology is not fully understood. Assessment, accommodations and interventions for CVI are largely driven by top-down traditional principles for a condition unique and unlike any other in the field of “low vision.” In this conversation a young CVIer and her parent take us through their journey; provide an insight in the visual processing of the CVIer brain; talk about the accommodations that help; challenge the current traditional top-down approach; and discuss directions for the future scientific study that would benefit the CVIer, the family, the school and integration into society.

7.9, Friday 6:00 pm, Discussion moderated by Santani Teng, PhD, Associate Scientist, SKERI

Poster Abstracts

P1, Adrien Chopin, PhD, Smith-Kettlewell Eye Research Institute

Title: Abnormal Dynamics of Sustained Binocular Rivalry in Amblyopic Patients: A Potential Diagnostic Tool

Abstract: Binocular rivalry is a fascinating phenomenon where human perception alternates between disparate images presented continuously to each eye. Previous studies have predominantly focused on rivalry in people with non-amblyopic strabismus, utilized brief presentations, or solely reported overall eye dominance during rivalry. In this study, we investigate the fine dynamics of sustained rivalry in amblyopia by presenting orthogonally-oriented gratings to each eye. A small sample of participants reported their perception during extended 1-minute presentations. Patients with amblyopia experienced fewer complete reversals (e.g. left-to-mixed-to-right percepts) and higher proportions of incomplete reversals (e.g. left-to-mixed-to-left percepts) compared to control participants. Notably, patients with anisometropic amblyopia showed a greater number of incomplete reversals than patients with strabismic/mixed amblyopia, suggesting larger binocular noise levels in anisometropic amblyopia. Patients exhibited modified dominance, primarily by spending less time perceiving the weaker eye stimulus than controls. This result contradicts common expectations from binocular rivalry research, as it was anticipated that patients would spend more time perceiving the stronger eye stimulus compared to controls, according to modified Levelt's proposition II. The reversal rates almost sufficed to distinguish amblyopic and control observers, and incorporating eye dominance during rivalry enabled a complete differentiation between patients with anisometropic amblyopia and those with strabismic/mixed amblyopia. Moreover, by adding the ratio between complete and incomplete reversals, we could entirely separate three groups: control participants, patients with anisometropic amblyopia and patients with strabismic/mixed amblyopia. In conclusion, our study sheds light on the severely abnormal dynamics of sustained binocular rivalry in amblyopic patients and highlights the potential utility of rivalry dynamics as a diagnostic tool for identifying individuals with amblyopia and characterizing their amblyogenic cause.

P2, Susana Wu, MSc, Institute of Medical Science at University of Toronto

Title: Visuomotor control and reading in children with amblyopia

Abstract: Eye movements and eye-hand coordination are key aspects of visuomotor control, which is essential when performing most daily activities. Disruption in visuomotor control, characterized by slower arm movements, grasping errors and slower reading, has been documented in children with amblyopia. This study aimed to characterize the effects of amblyopia on the temporal pattern of eye and hand coordination during the performance of a reaching, precision grasping, and placement task. A secondary aim was to examine if children with poorer eye-hand coordination also present with poorer reading efficiency, assessed through speed and accuracy measures, as this could indicate impairments in visuomotor control across different domains. A cohort of children (aged 8-14 years) undergoing treatment for anisometropic, strabismic, and mixed amblyopia (n=14) were tested using a prehension task and a standardized reading efficiency test that assessed sight word and phonemic decoding efficiency (Test of Word Reading Efficiency; TOWRE-2). Their performance was compared to a cohort of typically developing children (n=60). Consistent with previous studies, children with amblyopia performed the prehension task significantly slower than an age-matched control group. Examining the eye-hand coordination pattern revealed that poorer performance may be due to a longer fixation, defined as the period between two successive saccades when looking at the bead and the needle. This may indicate a difficulty with encoding the relevant visual features to program and execute the hand movement. There were no group differences in standard scores for the TOWRE-2 reading test. Together, the preliminary results suggest that children with amblyopia are more likely to experience deficits in fine motor skills while their word reading efficiency seems intact. These findings may be used to guide development of more targeted assessments and interventions. However, additional research is required to explore the effects of amblyopia on other domains of visuomotor control.

P3, Simran Purokayastha, MSc, New York University

Title: Microsaccades Around the Visual Field

Abstract: Microsaccades (MS) – tiny fixational eye movements – are known to improve discriminability in high visual acuity tasks in the foveola, but whether they help compensate for low discriminability at the perifovea is unknown. To investigate this question, we examined MS characteristics in the context of the adult visual performance field (PF), which is characterized by two perceptual asymmetries: the Horizontal-Vertical Anisotropy (HVA; better discrimination performance along the horizontal meridian than vertical meridian), and the Vertical Meridian Asymmetry (VMA; better discrimination performance along the lower- than upper-vertical meridian). We investigated whether and to what extent the directionality of MS varies when stimuli are placed at isoeccentric locations along the cardinals under conditions of heterogeneous discriminability (Experiment 1) and homogeneous discriminability (equated by adjusting stimulus contrast, Experiment 2). Participants performed a two-alternative forced-choice (2AFC) orientation discrimination task. Our analysis revealed that in both experiments: (1) performance was significantly better on trials without MS than on trials with MS; (2) the rate and temporal profile of MS were very similar; (3) the MS directional pattern was very similar across the trial sequence, with no significant differences among any of the locations, except that, with heterogeneous discriminability (Experiment 1), MS were significantly biased towards the right versus the left along the horizontal meridian, and during the response period (once observers knew the target location) MS were directed away from the target. Our results suggest that the temporal profile of microsaccades and their directions are similar regardless of stimulus discriminability and that the presence of MS correlates with poorer task performance. Thus, we find that MS do not flexibly adapt to task requirements to help compensate for lower discriminability around the visual field.

P4, Manarshhjot Singh, PhD, University of Massachusetts Chan Medical School

Title: Precision customization of spectacle mounted vision rehabilitation systems using craniofacial scans and 3D printing

Abstract: The integration of patient anatomical data and 3D printing technology promises to revolutionize medicine by enabling precision customization of rehabilitation and therapeutic devices. An application of this innovative technology can therefore have tremendous impact in vision rehabilitation. Traditional methods of fitting vision rehabilitation devices rely on manual measurements, which are susceptible to human error and measurement limitations, often resulting in a suboptimal fit. In contrast, the new technique leverages craniofacial scanning to capture detailed three-dimensional data of the patient's facial structure. This comprehensive scan allows for a deeper understanding of the patient's unique contours and dimensions, enabling precise customization of the vision rehabilitation system. The acquired craniofacial data can serve as the foundation for designing and manufacturing a personalized spectacle-mounted vision rehabilitation system. The technique ensures optimal integration between the patient's facial structure and the rehabilitation device, considering factors such as alignment, comfort, and aesthetic appeal. This poster presentation details the methodology of using 3D scanning technologies to obtain highly accurate and personalized spectacle-mounted vision rehabilitation devices. The poster also demonstrates the efficacy and feasibility of the precision-customization approach using a case study on frame customization for a Magnetic Levator Prosthesis for addressing ptosis. As an outcome of this study, patients experienced improved comfort and enhanced interpalpebral fissure opening with the custom frame compared to a standard frame.

P5, Jimmy Murari, MS, Institut de la Vision, Paris

Title: Characterizing fixational eye movements in patients with foveal drusen to find biomarkers for pre-symptomatic AMD

Abstract: Objective: We explore Fixational Eye Movements (FEM) at the presymptomatic stages of dry AMD using a new high-speed, high-resolution retinal tracking technique. We conducted a clinical study to characterize fine spatiotemporal alterations of FEM in patients who are starting to develop foveal drusen but do not have geographic atrophy.

Methods: Twenty-seven participants were recruited from the SilverSight cohort of the Vision Institute - Quinze-Vingts National Vision Hospital, Paris, and assigned to one of three groups: healthy young adults, healthy older adults, and older adults with foveal drusen. Gaze-dependent imaging was performed to visualize and count foveal drusen and to measure their diameter and surface area. Retinal tracking was based on retinal imaging with an adaptive optics flood-illumination ophthalmoscope (AO-FIO). The system allows for sub-arcminute resolution and high-speed, distortion-free videos of the retina in the foveal area. It also integrates a digital micromirror device (DMD) for stimulus projection, enabling psychophysical tasks while videos of the retina are acquired at 800 Hz. Eye movements were then estimated using a phase-correlation registration algorithm.

Results: Patients with drusen showed significantly higher microsaccade amplitude, drift diffusion coefficient, and fixation instability (i.e., higher isoline area, ISOA) than both the healthy young and healthy older adult groups. Among the drusen group, the increases in microsaccade amplitude and ISOA were correlated with drusen eccentricity: the closer the drusen were to the center of the fovea, the worse the fixation stability.

Conclusions: This study used a novel high-precision retinal tracking technique to better characterize FEM changes as a function of healthy vs. pathological aging. It demonstrated that FEM might provide informative signatures of the initial damage that drusen cause to the retinal structure at the presymptomatic stages of dry AMD. Overall, central drusen altered fixation stability, resulting in compensatory FEM changes that could serve as a biomarker for dry AMD.
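
The phase-correlation registration mentioned in the Methods can be summarized in a few lines. The sketch below is a minimal illustration in Python/NumPy, not the authors' implementation; the frame variables and the small stabilizing constant are assumptions. It estimates the translation between two retinal frames from the peak of the normalized cross-power spectrum.

import numpy as np

def phase_correlation_shift(frame_ref, frame_cur):
    """Estimate the (row, col) translation between two retinal frames, in pixels."""
    F_ref = np.fft.fft2(frame_ref)
    F_cur = np.fft.fft2(frame_cur)
    # Normalized cross-power spectrum; the small constant avoids division by zero.
    cross_power = F_ref * np.conj(F_cur)
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.fft.ifft2(cross_power).real
    # The correlation peak marks the displacement between the two frames.
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    shape = np.array(frame_ref.shape)
    shift = np.array(peak, dtype=float)
    shift[shift > shape / 2] -= shape[shift > shape / 2]  # wrap to signed shifts
    return shift  # convert pixels to arcminutes with the imaging system's scale factor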

P6, Pinar Demirayak, PhD, University of Alabama at Birmingham

Title: Functional connectivity fingerprints of individuals with Macular Degeneration are shaped by individuals’ experiences

Abstract: Changes in the sensory experience of individuals with macular degeneration (MD) involve deprivation of sensory input from central vision and preferential, increased usage of non-deprived regions of the retina. While participants have similar experiences with sensory input, their neural strategy for adaptation may differ from individual to individual. Therefore, our aim is to examine experience-dependent plasticity in functional connections for the deprived area of cortex (lesion projection zone, LPZ), an area of increased use (the preferred retinal locus, PRL), and a control region (unpreferred retinal locus, URL) corresponding to areas in V1, in individuals with MD. Here, we explored how experience using (or not using) the visual space associated with a cortical region of interest changed its whole-brain pattern of functional connectivity, relative to the pattern of a typical healthy-vision control. We examined data from 21 MD participants and 23 controls, performing seed-to-voxel analyses (fingerprints) from the cortical representation of the LPZ, PRL, and URL of each participant. This allowed us to examine how similar the whole-brain functional connection pattern for an ROI is to a typical participant's functional connection pattern.

Given that both the LPZ and PRL experience different usage in MD, we hypothesized that LPZ and PRL fingerprints would be less typical in the MD participants. Our results were consistent with this hypothesis. We also compared increased usage (PRL) to decreased usage (LPZ). MD fingerprints were less ‘typical’ than control fingerprints for both the PRL and LPZ, but this difference was significantly larger for the PRL. This pattern of results suggests that the change in connection pattern is stronger for brain regions that undergo increases in ‘use’ than for brain regions where sensory input is removed. These results support the idea that functional connections from V1 maintain the capacity to adapt in the adult brain.
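
For readers unfamiliar with seed-to-voxel "fingerprints", the following minimal sketch (Python/NumPy; the array names, shapes, and the simple typicality measure are illustrative assumptions, not the authors' pipeline) shows the basic computation: correlate a seed region's mean time series with every voxel, then compare one participant's resulting map with the mean control map.

import numpy as np

def seed_to_voxel_fingerprint(bold, seed_mask):
    """bold: (n_timepoints, n_voxels) array; seed_mask: boolean (n_voxels,).
    Returns the Pearson correlation of the seed's mean time series with every voxel."""
    seed_ts = bold[:, seed_mask].mean(axis=1)                 # mean time series of the seed ROI
    bold_z = (bold - bold.mean(axis=0)) / (bold.std(axis=0) + 1e-12)
    seed_z = (seed_ts - seed_ts.mean()) / (seed_ts.std() + 1e-12)
    return bold_z.T @ seed_z / bold.shape[0]                  # one Pearson r per voxel

def typicality(fingerprint, control_fingerprints):
    """Correlation of one participant's fingerprint with the mean control fingerprint."""
    return np.corrcoef(fingerprint, control_fingerprints.mean(axis=0))[0, 1]

In practice such analyses run on preprocessed fMRI data with confound regression and appropriate group statistics, which this sketch omits.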

P7, Kierstyn Napier-Dovorany, OD, Indiana University

Title: The effect of obstacle contrast on foot clearance when stepping over an obstacle

Abstract: Purpose: Obstacle contrast may be an important component of visibility and may thus impact fall risk. This study investigated the effect of obstacle contrast on foot clearance in young adults when stepping over an obstacle.

Methods: Seven normally sighted young adults walked along a 6-meter walkway covered in black carpet. An obstacle that varied in both height (1 cm and 19 cm) and contrast (6% and 90% Michelson contrast) was placed halfway along the walkway. Subjects stepped over each obstacle for 10 trials. Lower-limb kinematics were recorded using 13 motion-capture cameras. From the marker position data, foot clearance for the lead foot was calculated as the distance between the top of the obstacle and the marker placed on the distal part of the shoe. Repeated-measures ANOVAs were run to assess how foot clearance in the lead and trail foot changed as a function of obstacle height and contrast.

Results: Foot clearance for the lead foot was significantly greater for the tall versus the short obstacle (F1,59 = 1045.2, p < 0.001). This was true for both the low-contrast (p < 0.001) and high-contrast (p < 0.001) obstacles. Foot clearance was smaller for the low-contrast obstacle than for the high-contrast obstacle (F1,59 = 8.3, p = 0.006); however, pairwise comparisons were not significant for the short (p = 0.052) or tall (p = 0.124) obstacle.

Conclusions: Reduced foot clearance for low- versus high-contrast obstacles suggests that young adults with normal vision may have difficulty discriminating an obstacle from the ground if its contrast is low. This could increase the risk of falls from tripping, as the foot may not clear the obstacle. Consistent with previous studies, foot clearance increased with obstacle height. Recruitment of normally sighted and visually impaired participants is ongoing to confirm these observations and to identify individuals at greater risk of tripping and falling.
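
For reference, the Michelson contrast values cited in the Methods are defined as

C = (Lmax - Lmin) / (Lmax + Lmin)

where Lmax and Lmin are the higher and lower of the obstacle and background luminances. With purely illustrative luminances (not values from this study), Lmax = 95 cd/m² and Lmin = 5 cd/m² give C = 90/100 = 0.90 (90% contrast), whereas Lmax = 53 cd/m² and Lmin = 47 cd/m² give C = 6/100 = 0.06 (6% contrast).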

P8, Catherine Agathos, PhD, Smith-Kettlewell Eye Research Institute

Title: Head/Trunk coordination while walking in older adults with central field loss

Abstract: Individuals with central visual field loss (CFL) often limit their physical activity due to poor vision and concerns about falling. There are known mobility deficits in CFL, though their extent is not well characterized and is often attributed to aging. The Timed Up and Go (TUG) is a common clinical measure of functional balance that requires an individual to stand from a seated position, walk 3 m, turn, and sit back down; it is scored using a stopwatch. With the rise of lightweight wearable technology, the instrumented TUG is increasingly used to obtain postural and gait measures that are more sensitive to mobility changes. Head stabilization is a motor skill that is important for providing a stable reference platform for the visual and vestibular systems and for maintaining stable gaze. To keep the head stabilized, the trunk and lower limbs act as shock absorbers, attenuating accelerations that would otherwise be experienced by the head. Examining head stabilization during the TUG may therefore provide a dual advantage in individuals with CFL: on the one hand providing a measure of functional balance, and on the other indicating potential compensatory or adaptive changes in body coordination. In this preliminary study, we used the instrumented TUG to characterize head and trunk movement in healthy older adults with and without CFL. TUG duration and head acceleration metrics did not differ between groups. Individuals with CFL reduced their trunk acceleration variability compared to controls, suggesting more rigid control, potentially to reduce head motion. We also found that better contrast sensitivity was associated with improved head stabilization in those with CFL. These findings suggest that CFL may lead to the adoption of different head-stabilizing strategies, and that the degree of visual impairment affects stabilization.

P9, Haydée G García-Lázaro, PhD, Smith-Kettlewell Eye Research Institute

Title: Neural and behavioral correlates of evidence accumulation in human click-based echolocation

Abstract: Echolocation is an active sensing strategy that some blind people use to detect, discriminate, and localize objects in their surroundings. Trained echolocators emit tongue clicks and may vary their clicking pattern dynamically to improve perception under challenging circumstances. However, it is unknown how echo acoustic information is integrated across individual samples (clicks) and how individual echoes are represented neurally. To address these questions, here we recorded the brain activity of blind and sighted individuals using EEG while they performed an echoacoustic localization task. On each trial, subjects listened to a train of 2, 5, 8, or 11 synthesized mouth clicks [3] and spatialized echoes from a reflecting object located at azimuths of ±5° to ±25° relative to the midsagittal plane. The task was to report whether the echo reflector was to the left or right of the center. We hypothesized that the number of clicks in each trial and the echo azimuth would modulate performance. The early blind (EB) expert performed at over 93%, with lateralization thresholds decreasing linearly from 2- to 8-click trials; late blind (LB) and sighted controls (SC) performed at chance, with no effect of echo eccentricity or click count, although they easily lateralized the echoes when the emitted click was removed. Left vs. right location was reliably decoded from the EEG response in EB after only one click. In proficient EB observers, successive click-echo samples linearly sharpen echoacoustic representations until saturation; in LB and SC, the spatial information in the EEG response was unavailable to conscious access. These results suggest that echolocation expertise relies on extracting echoes from other masking sounds and integrating them across samples.
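
The single-trial left/right decoding described above follows a common pattern in EEG analysis; the sketch below (Python with scikit-learn) shows one standard way to estimate decoding accuracy with cross-validation. The epoch array, labels, and classifier choice are assumptions for illustration, not the authors' analysis.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_left_right(epochs, labels, n_folds=5):
    """epochs: (n_trials, n_channels, n_times) EEG segments; labels: 0 = left, 1 = right."""
    X = epochs.reshape(len(epochs), -1)  # flatten channels x time into one feature vector per trial
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    scores = cross_val_score(clf, X, labels, cv=n_folds)  # chance level is 0.5 for two classes
    return scores.mean()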

P10, Lily Tukstra, Psychological and Brain Sciences, UC Santa Barbara

Title: Information Needs and Technology Use for Daily Living Activities at Home by People Who Are Blind

Abstract: People who are blind face unique challenges in performing instrumental activities of daily living (iADLs), which require them to rely on their senses as well as assistive technologies and tools. Existing research on the strategies used by people who are blind to conduct different iADLs has focused largely on outdoor activities such as wayfinding and navigation. However, less emphasis has been placed on information needs for indoor activities around the home. We present a mixed-methods approach that combines 16 semi-structured interviews with a follow-up behavioral study to understand current and potential future use of technologies for daily activities around the home, especially for cooking. We identify common practices, challenges, and strategies that exemplify user-specific and task-specific needs for effectively performing iADLs at home. Despite this heterogeneity in user needs, we were able to reveal a near universal preference for tactile over digital aids, which has important implications for the design of future assistive technologies. Our work extends existing research on iADLs at home and identifies barriers to technology adoption. Addressing these barriers will be critical to increasing adoption rates of assistive technologies and improving the overall quality of life for individuals who are blind.

P11, Santani Teng, PhD, Smith-Kettlewell Eye Research Institute

Title: Reading styles modulate perceptual roles of the hands in bimanual braille reading

Abstract: Braille is a haptic modality based on a system of raised dots to represent text. Analogously to eye movements in visual reading, braille readers move the reading hand(s) over the printed material to acquire text. In contrast to visual reading, bimanual braille reading raises the question of each hand’s contributing role, and the possibility of perceptual mechanisms distinct from visual reading. Here we analyzed the hand movement patterns of blind braille readers to examine the relationship between reading style, hand kinematics, and reading speed. Participants read standardized IReST text passages aloud while their hand movements were recorded with a specialized tracking system. Preliminary results suggest that reading styles strongly affected hand kinematics and performance. Participants who used a more independent hand movement style (e.g. scissors) completed trials faster on average than those who used a more interdependent style (e.g. parallel or left marks). These styles were also characterized by lower intermanual correlation in kinematic markers such as regressive movements. In addition, we found that scissors-style readers were disproportionately likely to exhibit "simultaneous disjoint reading," in which the two hands read different parts of the text in parallel. Notably, this suggests a “memory buffer” mechanism distinct from visual print or serial braille reading, as input acquired in parallel is sorted on the fly to reconstruct a serial text stream. Taken together, our results quantitatively support previous work suggesting that the ability to use independent hand movements may be an important factor in the development of efficient braille reading skills. Our results provide new insights into the benefits and mechanisms of bimanual braille reading strategies, and may have implications for the teaching of braille to blind individuals.

P12, Erich Schneider, PhD, EyeSeeTec

Title: Binocular Video Head Impulse Test in Health and Disease

Abstract: The video head impulse test (vHIT) evaluates the vestibulo-ocular reflex (VOR) and is usually recorded from only one eye. Newer vHIT devices (EyeSeeCam Sci 2) allow binocular quantification of the VOR. Our study provides results from UVD patients and normative values reflecting the conjugacy of eye movement responses to the horizontal binocular vHIT (bvHIT). The normal results were similar to those of a previous study using the gold-standard scleral search coil technique, which also reported greater VOR gains in the adducting eye than in the abducting eye. In analogy to the analysis of saccade conjugacy, we propose a novel bvHIT dysconjugacy ratio to assess differences between adducting and abducting VOR eye movement responses. For an accurate assessment of VOR symmetry, we recommend that asymmetry indices be used to compare the duction-related VOR gains recorded from both eyes.
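
The bvHIT dysconjugacy ratio itself is not defined in this abstract; as a purely hypothetical illustration of the general idea of an asymmetry index between two gains (not the authors' definition), one conventional form is shown below.

def asymmetry_index(gain_adducting, gain_abducting):
    """Normalized difference between adducting- and abducting-eye VOR gains.
    A hypothetical illustration only; not the authors' bvHIT dysconjugacy ratio."""
    return (gain_adducting - gain_abducting) / (gain_adducting + gain_abducting)

# Example: gains of 1.05 (adducting) and 0.95 (abducting) give an index of 0.05.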

LT1, Henning Schulte, MSc, University Medical Center Groningen

Title: Detecting and delineating visual field defects based on free-viewing eye movements

Abstract: Standard Automated Perimetry (SAP) is an important method to measure the visual field (VF). While it usually provides reliable results for visually healthy adults, for many patients it is tiring and difficult to perform. Confrontational perimetry methods, on the other hand, are very resource intensive and require highly experienced medical professionals. There is a need for complementary objective and accessible VF tests. Such methods would ideally be equally reliable for low-performing patient groups. We previously described a novel way of detecting and delineating simulated VF defects from gaze-tracked movie viewing. Our method compares a viewer's gaze to that of a group of control participants and derives predictions about the presence of VF defects. Based on this approach, we could distinguish between five archetypical simulated VF defects and no defect. The graphical depiction of the defect's shape and location matched the simulated VF defect shapes on a group level. In the present study, we assessed how well this new method is able to detect and delineate real VF defects. We applied the method to data from 20 participants with glaucoma and 20 age-matched controls, who each monocularly viewed a series of 1-minute movie clips while their gaze was being tracked. Results showed that for most controls, our new analysis predicted an intact visual field, whereas defects were found in participants with glaucoma. For some participants with glaucoma, however, our predicted delineations of their VF defects did not compare well to those produced by SAP. The observed discrepancies suggest that some participants have learned to adapt their gaze behavior to maximally utilize their remaining visual field. We believe that for many patients, a VF test that incorporates such functional vision aspects might have high clinical relevance, especially in rehabilitation.

LT2, Marcello Maniglia, PhD, University of California, Riverside

Title: A gaze-contingent paradigm to promote scotoma awareness and rapid Preferred retinal locus (PRL) development in macular degeneration

Abstract: Pathologies affecting central vision, such as macular degeneration (MD), represent a growing health concern worldwide. These patients, deprived of central vision, tend to adopt spontaneous compensatory strategies, including the development of an eccentric locus of fixation, called the preferred retinal locus (PRL). However, clinical evidence indicates that the process is often slow, taking months, with some patients not developing a PRL at all. Developing a PRL seems to be one of the most effective compensatory strategies in MD; thus, understanding the mechanisms of PRL development has important translational value. Studies on MD are made difficult by several issues, including recruitment, compliance, heterogeneity, and scotoma size. In recent years, eye-tracking-guided, gaze-contingent simulation of central vision loss in individuals with healthy vision has been used to understand oculomotor characteristics associated with central vision loss and to test possible training interventions. In these studies, a circular occluder obstructing central vision is generated in real time on a computer screen while participants are engaged in visual tasks. Evidence from this paradigm suggests that MD-like oculomotor behavior, such as the development of a PRL, can be observed. Crucially, unlike in MD, this happens within a few hours of exposure to the simulated scotoma. It has been suggested that the characteristics of the simulated scotoma play a role in this difference: MD patients are often unaware of the location, size, and shape of their scotoma, whereas in simulated-scotoma studies these features are readily available. Here, we trained MD patients with a gaze-contingent display that visualized their retinal scotoma on screen, with the goal of increasing their awareness of its characteristics. Behavioral and oculomotor changes following training, as measured by a series of assessments collected before and after training, will be discussed. This promising technique could accelerate functional adaptation to central vision loss in clinical populations.

LT3, Susan Day, MD, Member, SKERI Board of Directors

Title: Helping blind musicians: neuroplasticity of the visual cortex

Abstract: A special project was performed in collaboration with Dr Isabelle Cossette at McGill University’s Schulich School of Music. My goal was to understand the clinical relevance of extensive research assessing neuroplasticity of the visual cortex, with particular interest in “early blind” individuals. A literature review of approximately 230 articles was performed, with 10% studied in depth. The contributing authors came from multiple disciplines, including neuropsychology, neurological science, cognitive science, and music education. Two thirds of the manuscripts were based in a predominantly scientific discipline and one third came from music educators, with an average of 5-9 authors in the former group and 1-3 in the latter. Fourteen countries served as sites for such research. Neuroplasticity was assessed in large part with functional magnetic resonance imaging (fMRI), though other parameters were also used. Results included the following:
• Visual cortex responsiveness to nonvisual stimuli (including music) occurs in early blind individuals.
• New “neural network” pathways are evident, with enhancement to auditory, tactile, and memory centers, all of which are relevant to musical aptitude.
• Thickening of the visual cortex and neural networks is felt to demonstrate an increase in synapses (e.g., an anatomic indicator of enhanced function).
In addition to fMRI findings, this literature review revealed other aspects of “hard-wired” skills, which included enhanced absolute pitch and exceptional transpositional skills. It is important to note that, although the purpose was to assess the potential for neuroplasticity in “early blind” individuals, multiple studies found that the age of onset of blindness was significant: the earlier the onset of blindness, the greater the neuroplasticity. Several confounding factors are noted: specific definitions of blindness were often not provided or qualified/quantified, and causes of blindness varied. In conclusion, interdisciplinary research findings corroborate the importance of music to blind individuals, the potential relevance of early intervention, and the need for interdisciplinary collaboration and refinement of definitions.

LT4, Qusay Hussein, MSW, Steve Hicks School of Social Work, University of Texas at Austin

Title: Arab Refugees with Physical Disabilities: An Exploration of Barriers within the Resettlement Process

Abstract: A series of global crises – the war on terror and its subsequent geopolitical disruptions, climate change, and political and economic instability – have created new waves of refugees, primarily from the Middle East, North Africa, and Latin America. Among these refugees are people with physical disabilities (PWD). While all refugees require support and encounter challenges in adapting to their resettlement countries, those with physical disabilities face additional barriers to successful resettlement and integration into their resettlement communities. These additional barriers include access to accessibility services for alternative communication methods, orientation and mobility assistance, and disability-specific medical support, as well as social support to alleviate social isolation due to their disability. Finally, PWD require specific guidance through the various bureaucracies of disability. This study aims to explore the resettlement journeys of PWD and the barriers they have faced and continue to face. In this qualitative research project, refugees with physical and sensory disabilities in Austin, Texas, were interviewed using a semi-structured interview process. Participant narratives were captured and analyzed to determine the scope of the barriers and challenges they have faced during resettlement and to find commonalities within the resettlement process for refugees with physical disabilities.

LT5, Danielle Montour, Cornell Tech

Title: Bridging the Gap: Conversational AI for Generating SVG Code and Enhancing Tactile Literacy for the Blind and Low Vision Community

Abstract: In a world where visual communication is ubiquitous, blind and low vision people often confront image poverty, a significant disparity in image content that arises when tactile and described images are unavailable. This talk explores an innovative application of conversational AI, using its generative ability to create Scalable Vector Graphics (SVG) code as a means of narrowing the gap. Paired with tactile graphics production methods, this approach provides immediate tactile feedback previously unattainable without access to large-scale image producers or the ability to create content by hand. It adds a compelling method to the self-expression toolbox, encouraging blind and low vision people to explore both tactile and code literacy when they may have previously focused on one... or none. AI-generated SVG code serves as a draft, allowing for subsequent refinement via conversational prompts or direct code editing. Early in its development, it already promotes both spatial thinking and autonomy in image creation. Large language models being used to produce SVG code are only as helpful as their training, though, and there is still much for them to learn about making images optimal for tactile consumption. Braille annotations, appropriate scaling, and use of line and texture variations to replace color are nuances to consider as we advocate for and inform this accessible, hands-on creation tool.
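
As a purely invented illustration of the kind of SVG output described above, optimized for tactile production rather than screen display (thick strokes, no reliance on color, a dash pattern as a texture substitute), a model-drafted graphic might look like the following, written here as a short Python snippet that saves the file. The shapes, dimensions, and filename are not from the talk.

# Invented example: write an SVG drawing suited to tactile embossing
# (bold outlines, no fills, dashed stroke instead of a second color).
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <rect x="100" y="140" width="200" height="120" fill="none"
        stroke="black" stroke-width="6"/>
  <polygon points="100,140 200,60 300,140" fill="none"
           stroke="black" stroke-width="6" stroke-dasharray="12,6"/>
</svg>"""

with open("tactile_house.svg", "w") as f:
    f.write(svg)

Refining such a draft (adding braille annotations, rescaling, and adjusting line weights) is exactly the kind of iteration, via conversational prompts or direct code editing, that the talk describes.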


 

Safety Plan

To promote an environment that recognizes the inherent worth of every person and group, the Smith-Kettlewell Eye Research Institute (SKERI) is dedicated to providing its employees and attendees of the Functional Vision and Accessibility (FVA) conference a safe and harassment-free experience. Harassment is unwelcome or hostile behavior, including speech that intimidates or interferes with a person’s participation or opportunity for participation in a conference, event, or program. Harassment in any form, including but not limited to harassment based on national origin, race, religion, sex, gender, or any other status protected by the laws of the jurisdiction in which the conference or program is being held, will not be tolerated. Harassment includes the use of abusive or degrading language or gestures, intimidation, stalking, harassing photography or recording, inappropriate physical contact, and unwelcome sexual attention.

SKERI has a rigorous, clear, and widely advertised reporting process available to all conferees for confidentially reporting violations of our policy. FVA Conference Organizers and SKERI’s CEO, in conjunction with Legal Counsel (as appropriate), will review allegations of any such behavior on a case-by-case basis, and make inquiries of all those involved and any witnesses.  The following are steps on how to confidentially report any alleged violations:

1.      Alleged violations may be reported by calling our ethics hotline at 415-345-2033, which is a private and confidential line monitored by SKERI’s HR department.

2.      When a report is made, it will be sent to the FVA Conference organizers immediately and will be reviewed with the SKERI CEO and, if appropriate, with legal counsel.

3.      All persons involved and any witnesses will be interviewed and a report of the incident will be completed as expeditiously as possible by HR and conference organizers.

4.      If it is determined that the code of conduct has been violated, at the very least the offender will be asked to leave and apologize to the complainant. 

5.      If the violation requires legal counsel or local authority involvement, any action required will be taken under the advisement of legal counsel.

Complainants are also free to contact local authorities if they wish by dialing 311. Anyone with harassment-related questions, concerns, or complaints is encouraged to contact the conference organizers using the resources above, and/or the HHS Office for Civil Rights (OCR). To file a civil rights complaint with HHS OCR, please consult their webpage on this topic. Please note: filing a complaint with SKERI or the conference organizers is not required before filing a complaint with HHS OCR, nor does it prohibit filing a complaint with HHS OCR.

For more information on how individuals can notify NIH about harassment concerns, including sexual harassment, discrimination, and other forms of inappropriate conduct at NIH-supported conferences, please see the NIH webpage on this topic.

The Safety Plan will be posted on the conference website and will be available in printed form along with the registration packet. In addition, an announcement will be made at the beginning of the conference, after lunch, and at the end of the conference that this is a discrimination-free and harassment-free environment. Attendees can review the registration packet or the FVA website for more information. All allegations reported to SKERI’s HR department, and the resulting actions taken, will be documented.