
Hybrid Colloquium: The early life of an extraocular motor neuron: from birth to disease to function

Abstract – Normal vision relies on exquisite control of eye movements. Vertebrate extraocular motor neurons control the six muscles that move each eye. We know comparatively little about the development of extraocular motor neurons and the emergence of the behaviors they subserve. This gap constrains our ability to address developmental disorders of the oculomotor system. To make progress, we have developed the larval zebrafish as a model to study the development of the oculomotor system and the behaviors it subserves. The larval zebrafish is a small vertebrate with exceptional optical and genetic access to developing neural circuits. I’ll share highlights of our lab’s efforts to understand oculomotor development. Specifically, I’ll focus on the development of extraocular motor neurons in cranial nuclei nIII/nIV that are responsible for torsional/vertical eye movements, such as those that comprise the gravito-inertial vestibulo-ocular reflex. I’ll begin with published findings establishing that an extraocular motor neuron’s “birthdate” predicts which muscle it will control and where its soma lies within nIII/nIV. Next, I’ll share unpublished progress on two fronts. First, we’re working to discover the molecular determinants responsible for proper development of extraocular motor neurons. In service of this aim, we’ve generated a mutant line that has lost phox2a expression. These fish lose extraocular motor neurons in nIII/nIV, leaving only the lateral rectus motor neurons in nVI intact. The eyes deviate toward the ears, similar to human patients with CFEOM type 2, who have mutations in PHOX2A. Finally, I’ll end by showing how we use a new imaging technique (Tilt In Place Microscopy, or TIPM) to map the emergence of selectivity and sensitivity in the responses of individual extraocular motor neurons across development. https://med.nyu.edu/faculty/david-schoppik

Hybrid Colloquium: Compensatory oculomotor strategies in central vision loss: Insights from simulated scotoma and visual training

Abstract – Macular degeneration (MD) represents one of the main causes of vision loss and a serious health concern worldwide. Late-stage MD leads to the development of a region of retinal blindness (scotoma), forcing patients to adopt compensatory oculomotor strategies to perform everyday tasks such as reading, recognizing faces, and navigating. A common strategy involves functionally replacing the fovea with a portion of spared retina outside the scotoma, called the preferred retinal locus (PRL). However, the mechanisms underlying PRL development remain elusive, and progress is complicated by practical issues inherent to MD research, including compliance, transportation to testing facilities, high heterogeneity, and comorbidity. Gaze-contingent simulation of central vision loss offers a potential framework for the study of compensatory oculomotor strategies in which many parameters, such as the onset, size, and shape of the scotoma, can be precisely controlled. I will present a simulated scotoma study aimed at understanding the development of compensatory oculomotor strategies: specifically, I will describe a method to extract relevant oculomotor metrics, show how these change with training, and discuss task-specific and individual differences in oculomotor behavior under conditions of simulated central vision loss. Taken together, these results highlight how simulated scotomas may help us understand oculomotor patterns following central vision loss, and potentially train eye movements in MD patients. These studies are part of a larger framework that I am currently developing to characterize, evaluate, and train central vision loss in an integrated model that addresses the far-reaching consequences of loss of central vision, including changes in low-level vision, oculomotor control, and higher-level visual functions. https://psychology.ucr.edu/about-our-department/postdoctoral-researchers/
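To make the gaze-contingent paradigm concrete, here is a minimal sketch (Python/NumPy) of how a simulated scotoma can be rendered: on each display refresh, the latest gaze sample from the eye tracker is used to occlude a region of the stimulus whose position follows the eye and whose size and fill are freely controllable. The data shapes, gaze source, and parameters below are illustrative assumptions, not the implementation used in the study.

```python
# Minimal sketch: occlude a circular, gaze-centered region of each frame.
# Illustrative only -- the stimulus, gaze source, and parameters are hypothetical.
import numpy as np

def apply_scotoma(frame, gaze_xy, radius_px, fill=0.5):
    """Return `frame` (H x W grayscale, values in [0, 1]) with a disc of
    luminance `fill` and radius `radius_px` centered on the gaze position
    `gaze_xy` = (x, y), simulating a central scotoma."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]           # pixel coordinate grids
    gx, gy = gaze_xy
    mask = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius_px ** 2
    out = frame.copy()
    out[mask] = fill                       # blank the gaze-centered disc
    return out

# On every refresh: read the current gaze sample, redraw the masked stimulus.
frame = np.random.rand(600, 800)                       # placeholder stimulus
masked = apply_scotoma(frame, gaze_xy=(400, 300), radius_px=80)
```

Because the mask is recomputed from the gaze signal on every refresh, the occluded region moves with the eye, which is what allows the onset, size, and shape of the simulated scotoma to be controlled precisely.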

Hybrid Colloquium: Cerebellar contributions to visual attention and working memory

Abstract – The amount of sensory information we receive at any one moment far outstrips our brain’s ability to process it. We can effortlessly withstand this deluge of sensory input due to our ability to prioritize and maintain the subset of information within our environment that is most relevant to our behavioral goals. Attention and working memory, the processes that enable this prioritization and maintenance, are thought to be supported by a network of cerebral cortical areas spanning visual, parietal, and frontal cortices. The cerebellum, a subcortical structure typically associated with the coordination of motor actions, has not traditionally been implicated in attention and working memory. In this talk, I will present evidence from a series of functional magnetic resonance imaging (fMRI) and psychophysical experiments for a cerebellar role in visual attention and working memory processes. In particular, I will present the findings of a recent study that examines whether the cerebellum encodes motor-independent, stimulus-specific representations of items maintained in working memory. I will further discuss recent behavioral and eye-tracking work aimed at testing the hypothesis that the cerebellum is critical for the adaptive control of visual attention and working memory processes. https://lsa.umich.edu/psych/people/research-fellows-2/james-brissenden.html

2:00 PM, Hybrid Colloquium: Harnessing the Computer Science-Vision Science Symbiosis

Abstract – Emerging platforms such as Augmented Reality (AR), Virtual Reality (VR), and autonomous machines all intimately interact with humans. They must be built from the ground up with principled consideration of human perception. This talk will discuss some of our recent work on exploiting the symbiosis between Computer Science and Vision Science. We will discuss how to jointly optimize imaging, computing, and human perception to obtain unprecedented efficiency. The overarching theme is that a computing problem that seems challenging may become significantly easier when one considers how computing interacts with imaging and human perception in an end-to-end system. In particular, we will discuss two specific projects: real-time eye tracking for AR/VR and power optimization for VR displays. If time permits, I will also briefly discuss our ongoing work using computational techniques to help dichromats regain trichromatic vision. https://www.cs.rochester.edu/people/faculty/zhu_yuhao/index.html

Announcing the Eighteenth Annual Meeting of the Low Vision Rehabilitation Study Group (in person: Feb 3rd, 9am-12pm and 1pm-4pm; Feb 4th, 9am-12pm)

Low Vision Rehabilitation Study Group 18th Annual Meeting

Announcing the Eighteenth Annual Meeting of the Low Vision Rehabilitation Study Group (returning to an in-person meeting in San Francisco).

Purpose: An informal gathering of clinicians and clinical researchers in low-vision rehabilitation to:
• discuss problem cases
• share techniques
• brainstorm ideas for new treatments or investigations
• enjoy collegiality

Location: San Francisco, California
• hosted by Don Fletcher, Ron Cole, Gus Colenbrander, Tiffany Chan, Silvia Veitzman, and Annemarie Rossi
• sponsored by the Smith-Kettlewell Eye Research Institute (SKERI) and the CPMC Dept. of Ophthalmology
• the meeting is held at SKERI, 2318 Fillmore St., San Francisco, CA 94115

(Dr. Fletcher has provided more details of this meeting in the attachment.)

Hybrid Colloquium: Developing Disability-First Datasets for Non-Visual Information Access

Abstract – Image descriptions, audio descriptions, and tactile media provide non-visual access to the information contained in visual media. As intelligent systems are increasingly developed to provide non-visual access, questions about the accuracy of these systems arise. In this talk, I will present my efforts to involve people who are blind in the development of information taxonomies and annotated datasets towards more accurate and context-aware visual assistance technologies and tactile media interfaces. https://abigalestangl.com/

Hybrid Colloquium: Measuring featural attention using fMRI and MEG

Abstract – Attention to low-level visual features can alter the activity of neurons in visual cortex. In principle, we expect attention to select the neurons most sensitive to changes in the attended feature, which means that the precise nature of the neuronal responses may depend on both task and stimulus. Here, we used magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) to examine modulations of neural activity in visual cortex driven by both stimulus and task. We presented sequences of achromatic radial frequency pattern targets (200 ms, ISI randomized between 1800 and 2000 ms) with occasional small ‘probe’ changes in their contrast, shape, and orientation. Probe types were randomized and independent, and subjects were cued to attend to specific probe types in blocks of 24 s. Responses from 15 subjects (9 F) were recorded in separate fMRI and MEG experiments. Support vector machines were used to decode MEG sensor-space data at 5 ms intervals and fMRI voxel-wise responses from retinotopically defined regions of interest. We show that both attentional state and target events cause changes in ongoing neuronal activity, and that we are able to distinguish between different types of low-level featural attention with good temporal and spatial resolution.
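As a concrete illustration of this kind of time-resolved decoding, the sketch below trains and cross-validates a linear support vector machine independently at each time point of epoched sensor-space data, tracing decoding accuracy over time. The data shapes, labels, and cross-validation scheme are illustrative assumptions, not the authors’ analysis pipeline.

```python
# Minimal sketch: time-resolved decoding of the attended feature from MEG
# sensor-space epochs. All shapes and labels are hypothetical placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 120, 204, 200   # e.g. a 1 s epoch at 5 ms steps
X = rng.standard_normal((n_trials, n_sensors, n_times))  # placeholder MEG data
y = rng.integers(0, 2, n_trials)   # attended probe type (e.g. contrast vs. shape)

# Fit and cross-validate a separate linear SVM at each time point.
scores = np.empty(n_times)
for t in range(n_times):
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

# `scores` now traces decoding accuracy across the epoch; stretches of
# above-chance accuracy indicate when the attentional state can be read
# out from sensor-space activity.
```

The same cross-validated classification logic applies to the fMRI analysis, with voxel-wise responses from each retinotopically defined region of interest taking the place of the sensor-by-time matrix.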

Hybrid Colloquium: Gaze and Gait: Changes in gaze behavior during locomotor learning

Abstract – During walking, people use vision both to create movement plans about future steps and to correct the execution of the current step. However, the importance of these types of visual information changes with the movement ability of the person and the difficulty of the terrain. In this talk, I will present results from two experiments that explore how visual sampling strategies and visual reliance change with locomotor learning. The first characterizes how visual sampling strategies change as people practice a treadmill-based target stepping task. The second examines how visual reliance changes during the same target stepping task by altering what visual information is available at different points in the locomotor learning process. I will conclude by presenting some preliminary results of these techniques applied to a clinical population, specifically individuals with a concussion. People with a concussion typically exhibit both oculomotor deficits (which would affect what visual information is available) and gait deficits (which often persist beyond the point of recovery when symptoms have returned to baseline). My current work therefore proposes that lingering changes to an individual’s gaze behavior may be causing these persistent gait deficits. https://www.alexcates.com/


Hybrid Brown Bag: My perspective on Vision and Vision Rehabilitation

Ophthalmology, visual science, and vision rehabilitation have significant areas of overlap. In this presentation, I want to discuss insights into their interaction and stress aspects that are often overlooked. https://ski.org/users/august-colenbrander

Zoom Colloquium: Improving Comics Accessibility for People with Visual Impairments

Abstract – A number of studies have been conducted to improve the accessibility of various types of images on the web (e.g., photos and artworks) for people with visual impairments, but little work has addressed making comics accessible. As a formative study, we first conducted an online survey with 68 participants who are blind or have low vision. Based on their prior experiences with audiobooks and eBooks, we propose an accessible digital comic book reader for people with visual impairments. An interview study and prototype evaluation with eight participants with visual impairments revealed implications that can further improve the accessibility of comic books. We then focused on a specific type of digital comic called a webtoon, which is read online and lets readers leave comments to share their thoughts on the story. To improve the webtoon reading experience for blind and low-vision (BLV) users, we propose another interactive webtoon reader, Cocomix, that leverages comments in the design of novel webtoon interactions. Since comments can identify story highlights and provide additional context, we designed the system to provide 1) comments-based adaptive descriptions with selective access to details and 2) panel-anchored comments for easy access to relevant descriptive comments. Our evaluation showed that Cocomix users could adapt descriptions to various needs and better utilize comments. https://hcil-ewha.github.io/homepage/index.html