
Zoom Brown Bag: Normal and Disrupted Binocular Interactions in Human Striate and Extra-striate Visual Cortex

Abstract – I will describe recent studies in the lab on binocular interactions in striate and extra-striate visual cortex in normal and amblyopic vision. Using source-imaged SSVEP and frequency-domain analysis of dichoptic stimuli, we measured two forms of binocular interaction: one associated with the individual stimuli, which reflects interocular suppression from each eye, and the other a direct measure of interaction between the inputs from the two eyes. We have shown that both forms of binocular interaction share a common gain-control mechanism in striate and extra-striate cortex. Furthermore, our model fits revealed different patterns of binocular interaction along the visual cortical hierarchy, particularly in the excitatory and suppressive contributions in normal and amblyopic vision. https://ski.org/users/chuan-hou
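The frequency-tagging logic behind these measurements can be illustrated with a short sketch: each eye's stimulus flickers at its own tag frequency, responses at the tags index each eye's input, and responses at intermodulation (IM) frequencies can only arise after the two eyes' signals combine. The function name, frequencies, and synthetic signal below are illustrative assumptions, not the lab's actual analysis pipeline.

```python
import numpy as np

def tagged_response_amplitudes(signal, fs, f_left, f_right):
    """Estimate spectral amplitudes at each eye's tag frequency and at one
    intermodulation (IM) frequency, f_left + f_right."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n * 2   # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def amp_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]  # nearest frequency bin

    return {
        "left_tag": amp_at(f_left),          # driven by the left-eye stimulus
        "right_tag": amp_at(f_right),        # driven by the right-eye stimulus
        "im_sum": amp_at(f_left + f_right),  # IM term: signature of binocular combination
    }

# Synthetic 10 s "response" containing both tags plus an IM component
fs = 500.0
t = np.arange(0, 10, 1.0 / fs)
resp = (1.0 * np.sin(2 * np.pi * 5 * t)      # 5 Hz left-eye tag
        + 0.5 * np.sin(2 * np.pi * 7 * t)    # 7 Hz right-eye tag
        + 0.3 * np.sin(2 * np.pi * 12 * t))  # 12 Hz = 5 + 7 IM component
amps = tagged_response_amplitudes(resp, fs, 5.0, 7.0)
```

With a 10 s record the tag and IM frequencies fall on exact FFT bins, so the recovered amplitudes match the amplitudes used to build the synthetic signal.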

Perceptual Science for Augmented Reality

Abstract – Recent years have seen impressive advances in near-eye display systems. These systems are the basis of most virtual and augmented reality experiences. There is, however, a unique set of perceptual challenges associated with designing a display system that is worn directly on the user’s face. I will present work on two projects that address challenges and opportunities presented by augmented reality. In the first, we develop perceptual guidelines for designing display hardware with good image quality across a range of users. In the second, we explore how these systems can enhance visual information about the environment for users with impaired vision. https://vcresearch.berkeley.edu/faculty/emily-cooper

SK’s 1st Zoom Colloquium “Surreal Artist as Visual Neuroscientist: Perceptuo-Cognitive Analysis of Selected Works of René Magritte”

Abstract – Magritte, the famous Belgian Surrealist, played masterfully with aspects of visual perception that we in the field of sensory-perceptual neuroscience have studied for many years. His art draws us into reflections on the very nature of perception: on what is seen and what is hidden, and on the ‘silent’ and hierarchical nature of object segregation and scene construction, which reveal a dynamic interplay between the stream of bottom-up sensory information and the top-down neural influences that quickly and automatically modulate it. Magritte reveals to us our unconscious perceptual “rules” and examines, almost as a scientist, the very nature of representation itself. His works reveal layers of surprising effects of interest to artists and scientists alike. I will discuss selected works from Magritte’s huge oeuvre that vividly illustrate the neuro-perceptual impact of his approach. I will also touch on the communicative, discursive features of his work: the cognitive and emotional conversations Magritte initiates between himself and us, the viewers, regarding meaning and shared human experience. As expected, the meanings evoked by Magritte’s conceptual palette are not literal; they are decidedly resonant with delicious ambiguity, borne by the magical interplay between Magritte’s imagination and our automatic perceptions, expectations, and idiosyncratic memories.

Perception of location computed outside the visual cortex

Abstract – Recent results indicate that an object’s visual location is constructed at a high level where, critically, an object’s motion is discounted to recover its current location. As a result, we sometimes see a target far from its actual location. One particular target, the double-drift stimulus, develops very large illusory shifts based on an integration time of well over a second, suggesting the involvement of processes with the time course of short-term memory. fMRI results show that the shifted percept does not emerge in the visual cortex but is seen in the frontal lobes, where visual-spatial short-term memory areas would have the temporal integration required to support the effect. In summary, these findings suggest, surprisingly, that the neural correlates of conscious perception of location are in the frontal lobes, although where or why remains to be understood.
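A leaky temporal integrator offers a compact way to think about how a slow internal-motion signal could accumulate into a large illusory position offset over more than a second. The model form, parameter values, and time constant below are illustrative assumptions for intuition only, not the authors' published model of the double-drift effect.

```python
import numpy as np

def perceived_trajectory(t, envelope_speed, internal_speed, tau):
    """Toy model: the illusory orthogonal offset accumulates the internal
    motion signal through a leaky integrator with time constant tau (s)."""
    offset = np.zeros_like(t)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        # leaky integration: offset grows toward internal_speed * tau
        offset[i] = offset[i - 1] + dt * (internal_speed - offset[i - 1] / tau)
    along_path = envelope_speed * t    # physical position along the path
    return along_path, offset          # perceived position = (along, offset)

# Envelope drifts at 2 deg/s while the internal grating drifts at 3 deg/s
t = np.linspace(0.0, 2.0, 200)
x, y = perceived_trajectory(t, envelope_speed=2.0, internal_speed=3.0, tau=1.0)
```

With a time constant on the order of a second, the offset keeps growing for well over a second before saturating, which is qualitatively consistent with the long integration time described in the abstract.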

Part I: Characterization of natural eye and head movements driving retinal flow. Part II: Underwater virtual reality system for neutral buoyancy training: development and evaluation

There will be two parts (a half hour each) to this colloquium.

Abstract – Part I. Characterization of natural eye and head movements driving retinal flow

In the absence of moving objects, retinal flow is determined by eye velocity relative to the environment as well as by the structure of the environment. Eye velocity in space is the sum of head-in-space and eye-in-head velocity. To gain a better understanding of the head and eye movements driving retinal flow, we developed an ideal observer model of this process based on the assumption that observers tend to fixate features of the stationary environment. The model predicts that retinal flow is driven most strongly by 1) linear head velocity, 2) fixation direction and distance, and 3) the structure of the environment. We also developed a system to measure both head and eye velocity during everyday behaviors outside the lab. The system consists of a Pupil Labs eye tracker with an Intel RealSense T265 tracking camera rigidly attached to the world camera. The tracking camera reconstructs head velocity using a computer vision algorithm known as simultaneous localization and mapping (SLAM). Head and eye movements were recorded for participants walking around campus. We present preliminary data collected using this device. Specifically, we present statistics of linear head velocity and head orientation relative to gravity and discuss the implications for the perception of heading and orientation as well as for the statistics of retinal flow.

Abstract – Part II. Underwater virtual reality system for neutral buoyancy training: development and evaluation

During terrestrial activities, the sensation of pressure on the skin and tension in muscles and joints provides information about how the body is oriented relative to gravity and how the body is moving relative to the surrounding environment. In contrast, in aquatic environments, when suspended in a state of neutral buoyancy, the weight of the body and limbs is offloaded, rendering these cues uninformative. It is not yet known how this altered sensory environment affects virtual reality experiences. To investigate this question, we converted a full-face SCUBA mask into an underwater head-mounted display and developed software to simulate jetpack locomotion outside the International Space Station. Our goal was to emulate conditions experienced by astronauts during training at NASA’s Neutral Buoyancy Lab. A user study was conducted to evaluate both sickness and presence when using virtual reality in this altered sensory environment. We observed an increase in nausea-related symptoms underwater, but we cannot conclude that this was due to VR use. Other measures of sickness and presence underwater were comparable to measures taken above water. We conclude with suggestions for improved underwater VR systems and improved methods for evaluating these systems based on our experience. https://www.unr.edu/neuroscience/people/paul-macneilage
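The relation "eye-in-space = head-in-space + eye-in-head" feeds directly into the standard instantaneous motion-field equation, which predicts the retinal flow at each image point from the eye's translation and rotation and from the depth of the scene point. The sketch below implements that textbook equation with normalized image coordinates; it illustrates the geometry only and is not the authors' ideal-observer model.

```python
import numpy as np

def retinal_flow(x, y, Z, T, Omega):
    """Instantaneous motion-field equation: flow at normalized image point
    (x, y) for a scene point at depth Z (m), given eye-in-space translation
    T (m/s) and rotation Omega (rad/s), where
    eye-in-space = head-in-space + eye-in-head."""
    A = np.array([[-1.0, 0.0, x],
                  [0.0, -1.0, y]])             # translational part, scales with 1/Z
    B = np.array([[x * y, -(1.0 + x * x), y],
                  [1.0 + y * y, -x * y, -x]])  # rotational part, depth-independent
    return (A @ T) / Z + B @ Omega

# Walking straight toward the fixated point: no eye rotation is needed,
# so flow vanishes at the fovea (x = y = 0) and expands radially elsewhere
T = np.array([0.0, 0.0, 1.0])   # 1 m/s forward translation
Omega = np.zeros(3)
flow_fovea = retinal_flow(0.0, 0.0, 2.0, T, Omega)
flow_periphery = retinal_flow(0.1, 0.0, 2.0, T, Omega)
```

The 1/Z scaling of the translational term is why linear head velocity and fixation distance dominate the model's predictions, while rotational flow carries no depth information at all.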

Special Colloquium – The visual system and the natural environment: perceptual, behavioral and computational advances

Abstract – As humans explore the visual environment, they make two to three fixations a second. At each fixation, each eye has to focus on the target to obtain sharp vision, the two eyes must be correctly aligned on the target, and the retinal images formed on the two retinas must be fused into a single percept. These processes must be achieved rapidly and effectively in order to form a robust and reliable percept before we move to the next fixation. We know that the visual system exploits regularities of the natural environment to facilitate this perceptual task. The natural environment has specific regularities: it contains many opaque objects, so farther objects are often occluded by nearer ones. It is also structured by gravity, so many surfaces are earth-horizontal (e.g., grounds and floors) or earth-vertical (e.g., trees and walls). As a result, these regularities vary systematically with position in the visual field. We have shown how the visual system is finely adapted to these regularities to optimize binocular coordination and stereopsis, as well as eye focus. This result provides insight into the neural mechanisms of human vision that can be exploited in research and applied fields ranging from neuroscience and experimental psychology to virtual and augmented reality, as well as computer and robot vision. https://vision.berkeley.edu/people/agostino-gibaldi-phd

Beacon-based wayfinding for people with disabilities

Abstract – There are currently few options for navigational aids for people with disabilities, especially those who are blind or visually impaired (BVI), in indoor spaces and around buildings. Indoor environments such as grocery stores, airports, sports stadiums, large office buildings, and hotels can be geographically large and intimidating. Thus, reading and following signs still remains the most common mechanism for providing and receiving wayfinding information in such spaces. These indoor spaces can be difficult to navigate even for people without disabilities if they are disoriented due to unfamiliarity or other reasons. This study presents a wayfinding system called GuideBeacon that can be used by BVI individuals to navigate their surroundings beyond what is possible with a GPS-based system alone. The GuideBeacon system allows users equipped with smartphones to interact with Bluetooth-based beacons deployed strategically within the indoor space of interest. A major challenge in deploying such beacon-based navigation systems is the need for a time- and labor-expensive beacon planning process to identify potential beacon placement locations and arrive at a topological structure representing the indoor space. This work therefore presents a technique called IBeaconMap for creating such topological structures for use with beacon-based navigation. Using GuideBeacon as the underlying layer, this study additionally proposes an inclusive emergency evacuation system called SafeExit4All that empowers people with disabilities (in addition to the general population) to independently find a safe exit in emergency scenarios. Finally, this work describes an indoor-outdoor navigation system called CityGuide, which leverages GuideBeacon in conjunction with GPS signals to enable a BVI individual to query and receive turn-by-turn shortest-route directions from an indoor location to a desired outdoor destination.
https://ski.org/users/ali-cheraghi  
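Once beacon planning has produced a topological structure of the indoor space, turn-by-turn routing reduces to shortest-path search on a weighted graph whose nodes are beacons and whose edges are walkable segments. The sketch below shows Dijkstra's algorithm on a toy beacon graph; the node names, distances, and function name are invented for illustration and are not taken from the GuideBeacon or IBeaconMap implementations.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra shortest path over a beacon graph: nodes are beacon IDs,
    edges are walkable segments weighted by distance in meters."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(pq, (nd, neighbor))
    route, node = [goal], goal    # walk predecessors back to the start
    while node != start:
        node = prev[node]
        route.append(node)
    return list(reversed(route)), dist[goal]

# Hypothetical lobby: the route via the elevator beacon is shorter
graph = {
    "entrance": [("hallway", 10.0)],
    "hallway": [("entrance", 10.0), ("elevator", 15.0), ("exit", 40.0)],
    "elevator": [("hallway", 15.0), ("exit", 12.0)],
}
route, meters = shortest_route(graph, "entrance", "exit")
```

The same graph can carry extra edge attributes (stairs, doors, accessibility) so that an evacuation layer like SafeExit4All could filter or reweight edges per user before running the same search.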

Providing Real-time Access to Graphics for Individuals who are Blind or Visually Impaired

Abstract – A significant problem for individuals who are blind or visually impaired (BVI) is the lack of access to graphical information. Part of this is due to the expertise, time, and cost needed to translate visual graphics into appropriate non-visual representations, such as tactile diagrams. Another part is due to a lack of effective real-time, refreshable interfaces for exploring these graphics spatially. This talk will begin with an overview of our work tackling the components of a system intended to provide the most effective access to electronic graphics. This includes consideration of the diagram format, automatic conversion from an electronic diagram or photo into an appropriately simplified form, multi-fingered audio/tactile access, and user-controlled dynamic interaction with virtual diagrams. I will then focus on our most recent areas of research: automatic conversion of electronic diagrams and multi-fingered audio/tactile access. https://ski.org/project/t-scratch-tangible-programming-environment Virginia Commonwealth University
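A common first step in converting a visual graphic into a tactile-friendly form is reducing it to a simplified outline, since tactile exploration works best with sparse, high-contrast contours. The sketch below thresholds the image gradient to keep only edges; it is a deliberately crude stand-in for the conversion pipeline described in the talk, and the function name and threshold value are assumptions.

```python
import numpy as np

def edge_map(image, threshold=0.2):
    """Reduce a grayscale image (values in [0, 1]) to a binary outline by
    thresholding the local gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))   # central differences
    return np.hypot(gx, gy) > threshold         # True where an edge survives

# A filled square on a blank page: only its outline survives the conversion,
# which is roughly what a tactile rendering of the shape needs
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
edges = edge_map(img)
```

A real pipeline would add smoothing, contour cleanup, and texture/label handling, but the principle is the same: discard filled regions and keep traceable boundaries.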

VPEM Journal Club Meeting

Title: Effects of Pure Vergence Training on Initiation and Binocular Coordination of Saccades
Paper: Morize et al., 2017, IOVS

Abstract:

Purpose: We hypothesized that saccade eye movement properties, particularly latency and binocular coordination, depend on vergence quality.

Methods: We studied 11 students clinically diagnosed with vergence disorders versus 8 healthy controls. Rehabilitation of vergence disorders was done with a novel research-based method using vergence in the midsagittal plane. Vergence and saccades were recorded in separate blocks, before and after five weekly rehabilitation sessions.

Results: Healthy controls showed higher accuracy and velocity of convergence and divergence relative to the vergence-disorders group; rehabilitation led to a significant decrease in latency and increases in gain and peak velocity of vergence. Before rehabilitation of the vergence disorders, saccade parameters did not differ significantly from healthy controls, except for binocular coordination, which was significantly deteriorated. Following vergence rehabilitation, saccade properties improved: latency decreased significantly, gain increased, particularly at far, and binocular coordination improved significantly. Latency and accuracy improved in a durable way, with values even better than the range of accuracy measured in healthy controls; binocular coordination of saccades, although improved, did not normalize. In healthy controls, binocular coordination was optimal at 40 cm (working distance), and the vergence-disorders group showed improvement at 40 cm. The results confirm the hypothesis, which is further corroborated by the correlation between vergence and saccade latency.

Conclusions: The results are in line with the hypothesis of permanent interaction between saccades and vergence, even when the task requires only saccades. The relevance of such interaction is emphasized by the improvement of binocular saccades through the novel research-based method of vergence rehabilitation.

Using audio augmented reality to guide blind people in the street

Abstract – At Dreamwaves we are developing an intuitive audio navigation system that uses augmented reality and spatial audio to provide an easy navigation experience. One of our main target groups is blind and visually impaired people, as they are the individuals who can benefit the most from such a navigation tool. We therefore focus strongly on designing our app to be accessible to this group while keeping it pleasant and appealing for other groups. At the same time, in order to be intuitive, the system requires precise localization of the user’s device within the city as well as realistic 3D sound. In this talk, I will discuss how we try to meet this triple challenge. For the app to thrive, we need to be at the forefront of computer vision methods for accurate localization, of binaural virtual audio for the most intuitive audio rendering, and of universal design to deliver the right interaction and experience. Though this may seem overwhelming, it is also what makes it extremely exciting. https://www.dreamwaves.io/
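Binaural virtual audio of the kind described here ultimately rests on interaural time and level differences (ITD and ILD). The sketch below pans a mono signal with a Woodworth-style ITD approximation and a crude ILD; a production system would use measured head-related transfer functions (HRTFs), so this is only a toy illustration, and the function name and parameter values are assumptions.

```python
import numpy as np

def render_binaural(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
    """Pan a mono signal to stereo using a Woodworth-style interaural time
    difference (ITD) and a crude interaural level difference (ILD)."""
    az = np.radians(azimuth_deg)
    itd = (head_radius / c) * (abs(az) + np.sin(abs(az)))         # seconds
    delay = int(round(itd * fs))                                  # samples
    far_gain = 10.0 ** (-(abs(azimuth_deg) / 90.0) * 6.0 / 20.0)  # up to ~6 dB ILD
    near = mono
    far = np.concatenate([np.zeros(delay), mono * far_gain])[: len(mono)]
    if azimuth_deg >= 0:               # source on the right: right ear is near
        return np.stack([far, near])   # rows: (left, right)
    return np.stack([near, far])

# A 440 Hz tone rendered 45 degrees to the right: the right channel is
# louder and leads the left by the ITD
fs = 44100
t = np.arange(0, 0.1, 1.0 / fs)
mono = np.sin(2 * np.pi * 440.0 * t)
stereo = render_binaural(mono, fs, azimuth_deg=45.0)
```

Even this crude model conveys a lateral direction over headphones; the precise localization the abstract calls for is what motivates full HRTF rendering instead.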