
Sensory substitution as a framework to access visual and spatial information for the blind and visually impaired

Abstract: Despite progress in assistive technologies for the visually impaired, many sources of information remain difficult to access without vision. Most existing solutions for visually impaired people rely on language to convey information, whether through an auditory (text-to-speech synthesis) or tactile (braille) medium. However, those solutions often fall short when the information is highly dynamic or difficult to convey efficiently through language (e.g. spatial or image-based information). The sensory substitution framework offers a way to overcome those limitations by delivering such information through low-level sensory stimulation which can, after some training, be processed and interpreted quickly and with very little attentional effort. However, many questions about this particular type of human-machine communication remain open. During this talk, I will present the projects I have been working on during the first two years of my Ph.D., in the context of sensory-substitution-based assistive devices, covering both the accessibility of image-based content and the autonomous navigation of visually impaired people.
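The abstract does not specify a particular encoding, but a classic instance of visual-to-auditory sensory substitution (in the spirit of systems like The vOICe) scans an image column by column, mapping pixel row to pitch and brightness to loudness. The sketch below is purely illustrative; all function names and parameters are my assumptions, not the speaker's method.

```python
import numpy as np

def image_to_soundscape(img, duration=1.0, sr=22050,
                        f_min=200.0, f_max=8000.0):
    """Encode a grayscale image (rows x cols, values in [0, 1]) as audio.

    Columns are scanned left to right over `duration` seconds; each row
    is assigned a fixed sine frequency (top row = highest pitch), and
    pixel brightness sets that sine's amplitude. Illustrative only.
    """
    rows, cols = img.shape
    # One log-spaced frequency per pixel row, high pitch at the top.
    freqs = np.geomspace(f_max, f_min, rows)
    samples_per_col = int(duration * sr / cols)
    t = np.arange(samples_per_col) / sr
    audio = []
    for c in range(cols):
        # Sum one sinusoid per row, weighted by that pixel's brightness.
        column = img[:, c][:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        audio.append(column.sum(axis=0))
    audio = np.concatenate(audio)
    return audio / (np.abs(audio).max() + 1e-9)  # normalize to [-1, 1]

# Example: a bright diagonal line produces a falling pitch sweep.
img = np.eye(64)
wave = image_to_soundscape(img)
```

After training, a listener can learn to decode such sweeps back into coarse spatial layout, which is the core idea the abstract describes.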

Ethics Seminar on Peer Review and Research Collaboration

Abstract: This is part of our ongoing Ethics Seminar series, and all fellows and their mentors are required to attend. The Ethics Seminar is from 12 noon to 1:30 pm on Wednesday, Dec 5, 2018 in the Main Conference Room 204. Natela and I will be presenting. Please review the attached materials and come prepared to discuss them. Lunch will be served.


Object-based and multi-frame motion information predict human eye movement patterns during video viewing

Abstract: Compared to low-level saliency, higher-level information better predicts human eye movements in static images. In the current study, we tested how both types of information predict eye movements while observers view videos. We generated multiple eye movement prediction maps based on low-level saliency features, as well as on higher-level information that requires cognition and therefore cannot be interpreted through bottom-up processes alone. We investigated eye movement patterns to both static and dynamic features that contained either low- or higher-level information. We found that higher-level object-based and multi-frame motion information predict human eye movement patterns better than static saliency and two-frame motion information, and that higher-level static and dynamic features provide equally good predictions. The results suggest that object-based processes and temporal integration of multiple video frames are essential in guiding human eye movements during video viewing.
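A standard way to score how well a prediction map accounts for observed fixations is an ROC/AUC analysis, treating map values at fixated locations as positives. The metric choice and names below are my assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np

def fixation_auc(pred_map, fix_points, n_thresh=100):
    """AUC score of an eye movement prediction map against fixations.

    pred_map: 2-D array of predicted fixation density (any scale).
    fix_points: list of (row, col) fixation coordinates.
    Positives are map values at fixations; negatives are all pixels.
    """
    pos = np.array([pred_map[r, c] for r, c in fix_points])
    neg = pred_map.ravel()
    thresholds = np.linspace(neg.min(), neg.max(), n_thresh)
    tpr = np.array([(pos >= th).mean() for th in thresholds])
    fpr = np.array([(neg >= th).mean() for th in thresholds])
    # Trapezoidal integration of the ROC curve; fpr runs 1 -> 0, so negate.
    return -np.sum(np.diff(fpr) * (tpr[:-1] + tpr[1:]) / 2)

# Example: a map that peaks exactly where a fixation falls scores near 1.0.
rng = np.random.default_rng(0)
pred = rng.random((48, 64))
fixes = [tuple(np.unravel_index(pred.argmax(), pred.shape))]
print(fixation_auc(pred, fixes))
```

Comparing such scores across saliency-based, object-based, and multi-frame motion maps is one way the kind of comparison described in the abstract can be quantified.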

An informal Colloquium:

Teachers of the Visually Impaired (TVI) from the Blind Babies Foundation (BBF) will be visiting us this Thursday, November 15, 2018. This is an opportunity to engage in an informal “around the table” discussion with Dr. Pam Chapin, Head of BBF, and her colleagues. Please do take time to look at their website, where you will find that they provide services to babies, young children, and also to adults: https://www.wayfinderfamily.org/program/blind-babies-foundation

Here are a few topics that Dr. Chapin and colleagues will discuss:

· What exactly they do at BBF
· The significant challenges facing children and their parents/caregivers
· The role of TVI in the vision rehabilitation of visually impaired children
· What they hope research will focus on, and how they might work with us

You will have an opportunity to listen and ask questions to deepen your knowledge of vision impairment in children and of the services they provide to visually impaired adults. Pam Chapin and about 10 colleagues will visit SK from 10:30 a.m. to 3:00 p.m. Everyone is welcome and lunch will be provided.


Self-motion perception: interactions between visual, vestibular and motor signals

Abstract: To reconstruct how the head is moving relative to the environment, the nervous system relies on a combination of visual and vestibular sensory information. Vestibular signals are driven by head movement, whereas visual motion signals are driven both by head movement and by eye movement relative to the head. Knowledge of eye movement, most likely based on motor efference, is therefore necessary to allow comparison and integration of visual cues with vestibular cues. Motor efference signals associated with head-on-body movement can also supplement sensory estimates of head movement. In this talk, I will present the results of several psychophysical studies investigating interactions among these self-motion signals. Visual stimuli consist of optic flow patterns presented on either stereo monitors or head-mounted displays. Vestibular stimuli are presented by passively moving observers seated on either a motion platform or a rotating chair. Oculomotor behavior is manipulated by varying the movement of the fixation point between conditions. Participants judge their own movement or the movement of the environment. Results suggest that motor signals play an important role in mediating visual-vestibular interactions. To relate these experimental results to natural behavior, we have developed a system to track head and eye movement during everyday behavior, and we have begun characterizing typical visual, vestibular, and motor signals outside the lab. https://www.unr.edu/psychology/faculty/paul-macneilage
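Visual-vestibular integration of the kind described here is often modeled as reliability-weighted (maximum-likelihood) cue combination. The sketch below shows that standard textbook model, not the speaker's specific analysis; names and numbers are illustrative.

```python
import numpy as np

def ml_integrate(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Maximum-likelihood (inverse-variance weighted) cue combination.

    Each cue is a Gaussian estimate of head velocity. (In practice the
    visual signal must first be corrected for eye movement, e.g. using
    motor efference, before it can be compared with the vestibular cue.)
    """
    w_vis = sigma_vis ** -2          # reliability = 1 / variance
    w_vest = sigma_vest ** -2
    mu = (w_vis * mu_vis + w_vest * mu_vest) / (w_vis + w_vest)
    sigma = (w_vis + w_vest) ** -0.5  # combined estimate is more reliable
    return mu, sigma

# Example: a noisier visual cue contributes less to the combined estimate.
print(ml_integrate(mu_vis=10.0, sigma_vis=4.0, mu_vest=8.0, sigma_vest=2.0))
```

Under this model, the combined estimate is pulled toward the more reliable cue and its variance is smaller than either single-cue variance, which is the usual benchmark against which psychophysical data are compared.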


Motion Perception in Central Field Loss

Abstract: The healthy peripheral retina is exquisitely sensitive to fast speeds. Individuals with central field loss (CFL) typically have only residual peripheral vision, and studies suggest they become adept at using peripheral motion information, as in the case of vection (Tarita-Nistor et al., 2008). However, we have shown that smooth pursuit in CFL is impaired across a range of speeds and that visual acuity cannot explain this decrement in performance (Shanidze et al., 2016). Thus, the question remains whether this deficiency is due to oculomotor limitations or to a potential impairment of peripheral motion processing, as indicated by Eisenbarth et al. (2007). I will show data comparing the ability of CFL participants, age-matched controls, and young controls to discriminate the speed and direction of motion in a two-spatial-alternative forced-choice design. Our results indicate that age is a much stronger predictor of motion discrimination performance, and they suggest that CFL participants’ deficits in smooth pursuit are likely not due to motion perception deficits.
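Forced-choice discrimination data of this kind are typically summarized by fitting a psychometric function; the bias and slope of the fit give the point of subjective equality and the discrimination threshold. This is a minimal sketch with a cumulative-Gaussian form and made-up data, not the study's actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: P(judge test 'faster') vs. speed difference."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Illustrative 2AFC data: speed difference (deg/s) between test and
# reference, and the proportion of trials the test was judged faster.
speed_diff = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
p_faster   = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.97])

(mu, sigma), _ = curve_fit(psychometric, speed_diff, p_faster, p0=[0.0, 1.0])
print(f"bias (PSE) = {mu:.2f} deg/s, discrimination threshold ~ {sigma:.2f}")
```

Comparing fitted thresholds across CFL participants, age-matched controls, and young controls is how a group difference (or, here, a predominant age effect) would show up.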

The NIH Study Section Review Process – a special Lunchtime talk

Abstract: I will give a brief overview of the review process at the SPC study section at NEI. This will cover the review criteria, the blank critique form, instructions to reviewers on how to distribute scores, advice on assigning the Overall Impact score, and the emphasis on scientific premise and scientific rigor. James Coughlan will provide his insight into the review process at BNVT. Bebe and the fellows who attended the NIH grants training session last week will also be available for this Q&A session.


Interocular suppression and selective visual attention in amblyopia

Abstract: Attention allows us to select the most important information while ignoring irrelevant information. One prominent account holds that visual attention operates through the facilitation of neural responses at attended locations and the suppression of neural responses at unattended locations. This opens the possibility that amblyopic suppression may be a form of “attentional neglect” of visual input from the amblyopic eye, adopted to overcome the “double vision” or “visual blur” caused by strabismus or anisometropia. We speculate that long-term “attentional neglect” of the visual input to the amblyopic eye may weaken attentional modulation in visual cortex. To test this hypothesis, we first measured attentional allocation and the modulatory effects of spatial attention in the early visual cortex of humans with strabismic amblyopia using fMRI-informed EEG source imaging. We then related these findings to psychophysical evidence of interocular suppression and to the depth of amblyopia. In the latter part of the presentation, I will discuss preliminary results from our ongoing studies.


(Computer) Vision without Sight: Finding, Reading, and Magnifying Text

Abstract: Reading is a pervasive activity in our daily life. We read text printed in books and documents, shown on directional signs and advertisements, and displayed on computer and smartphone screens. People who are blind can read text using OCR on their smartphone; those with low vision may magnify onscreen content. But these tasks are not always easy. Reading a document with OCR requires taking a well-framed picture of it at an appropriate distance, something that is hard to do without visual feedback. Accessing “scene text” (e.g., a name tag or a directional sign) is even harder, as one first needs to figure out where the text might be. Screen magnification presents a different set of problems: one needs to manually control the center of magnification using the mouse or trackpad, all the while maintaining awareness of the current position in the document (the “page navigation problem”). In this talk, I will present a number of projects in my lab that address these problems. First, I will show how fast “text spotting” algorithms can be used to generate real-time feedback for blind users, indicating the presence of scene text in the camera’s field of view, or guiding the user to take a correctly framed picture of a document. I will then propose a simple gaze-contingent model for screen magnification control. Although our system currently uses an IR-based eye gaze tracker, we are planning to integrate it with an appearance-based tracker using data from the computer’s own camera. During the talk, I will present a number of experimental studies with blind and low-vision participants, motivating and validating the proposed technology. https://users.soe.ucsc.edu/~manduchi/
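The abstract proposes gaze-contingent control of the magnification center; one plausible minimal realization is to smooth the noisy gaze stream and clamp the resulting viewport to the screen, so the magnified region follows gaze without jitter. The class, parameters, and filter choice below are my assumptions, not the speaker's actual model.

```python
class GazeMagnifier:
    """Minimal gaze-contingent magnification controller (illustrative).

    Smooths noisy gaze samples with an exponential filter and uses the
    result as the magnification center, so the magnified viewport
    follows the user's gaze smoothly.
    """

    def __init__(self, zoom=3.0, smoothing=0.15):
        self.zoom = zoom
        self.alpha = smoothing       # 0 < alpha <= 1; lower = smoother
        self.center = None           # (x, y) in screen coordinates

    def update(self, gaze_x, gaze_y):
        """Feed one gaze sample; returns the filtered magnification center."""
        if self.center is None:
            self.center = (gaze_x, gaze_y)
        else:
            cx, cy = self.center
            self.center = (cx + self.alpha * (gaze_x - cx),
                           cy + self.alpha * (gaze_y - cy))
        return self.center

    def viewport(self, screen_w, screen_h):
        """Top-left corner and size of the source region to magnify."""
        cx, cy = self.center
        w, h = screen_w / self.zoom, screen_h / self.zoom
        x = min(max(cx - w / 2, 0), screen_w - w)  # clamp to screen edges
        y = min(max(cy - h / 2, 0), screen_h - h)
        return x, y, w, h

# Example: feed one gaze sample per frame, then crop and scale the viewport.
mag = GazeMagnifier(zoom=4.0)
mag.update(640, 360)
print(mag.viewport(1280, 720))
```

Because the magnified region is slaved to gaze rather than to the mouse, the user never has to split attention between steering the lens and reading, which is exactly the "page navigation problem" the abstract describes.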