Scientific

Non-visual Viewing of 3D Models, Web Page Layouts, and Code Structure Using Tactile Displays

Abstract – Dr. Alexa will be presenting work from three projects:
shapeCAD: 3D Modelling Workflow for the Blind and Visually-Impaired Via 2.5D Shape Displays: http://shape.stanford.edu/research/shapeCAD/
Editing Spatial Layouts through Tactile Templates for People with Visual Impairments: http://shape.stanford.edu/research/bviLayout/
Tactile Code Skimmer: A Tool to Help Blind Programmers Feel the Structure of Code: http://shape.stanford.edu/research/TCS/

Visual Processing & Eye Movement Journal Club

Kids are surprisingly old (~10.5 years) before they combine some sensory cues efficiently. Audrey will present evidence that this reflects the developmental trajectory of sensory-level fusion, rather than post-perceptual decision processes.

Presenter: Audrey Wong-Kee-You
Title: Late Development of Cue Integration Is Linked to Sensory Fusion in Cortex
Paper: Dekker et al., 2015, Current Biology

Abstract: Adults optimize perceptual judgements by integrating different types of sensory information. This engages specialized neural circuits that fuse signals from the same or different modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information). We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6–12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood, the brain circuits that fuse cues take a very long time to develop.
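For reference, the "precision gain from cue combination" being tested is the standard maximum-likelihood prediction from the cue-combination literature (added here for context; it is not spelled out in the abstract). If the disparity cue gives a depth estimate with variance \sigma_D^2 and the relative-motion cue one with variance \sigma_M^2, optimal fusion predicts

\hat{S}_{comb} = w_D \hat{S}_D + w_M \hat{S}_M, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_D^2 + 1/\sigma_M^2}, \qquad \sigma_{comb}^2 = \frac{\sigma_D^2\,\sigma_M^2}{\sigma_D^2 + \sigma_M^2} \le \min(\sigma_D^2,\ \sigma_M^2)

so an observer who truly fuses the cues should discriminate depth more precisely than with either cue alone; the paper asks at what age, and in which cortical areas, this benefit appears.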


Special Colloquium: Multisensory interactions and plasticity – shooting hidden assumptions, revealing postdictive aspects

Abstract – Multisensory psychophysics and neuroscience have bloomed in the last several decades. In this talk, I first aim to give a critical review of the field through a list of commonly accepted propositions/beliefs and provide (at least partially) counterevidence from our own laboratory. Some of these 'commonsensical' beliefs, and the counterevidence, are as follows:

1) Vision is dominant, affecting other modalities, not vice versa (as seemingly indicated by well-known illusions in the field, such as the McGurk and ventriloquism effects). Our "double flash illusion" may be a prominent exception to this principle, where the auditory input determines the content of the visual percept (Shams et al., 2000). A more flexible view may be needed, beyond classical theories such as vision dominance or modality specificity.

2) Concurrent crossmodal stimulation is needed for multisensory interactions. That is, a percept is determined exclusively by concurrent stimuli in the other modalities.

3) Multisensory integration follows Bayesian/Maximum Likelihood predictions. Our study of crossmodal temporal frequency adaptation seemingly offers a counterexample to both of the principles above (i.e., the necessity of concurrent stimulation, and Maximum Likelihood) (Levitan et al., 2015). The pattern of results, including crossmodal transfer of adaptation, does not match the straightforward predictions from Maximum Likelihood, at least.

4) There are no intrinsic mappings across modalities, other than those associatively learned from experience. "Intrinsic correspondences" (C. Spence), or synesthesia-like associations, are known. On top of that, our study of multisensory associations during use of a sensory-substitution device (the vOICe) provides additional evidence of such intrinsic associations (Stiles & Shimojo, 2015).

5) The psychological body is restricted to the physical body, without much flexibility based on experience. The classical "inverted vision goggles" experiments (G. Stratton, I. Kohler, etc.) indicate rapid recalibration of the multisensory body schema. In addition, the latest "Visual-Tactile Rabbit" demonstration in a VR environment points to the dynamic flexibility of the multisensory body (Berger & Gonzalez-Franco, 2018).

6) Conscious experiences of multisensory perception are governed by predictive processes. We carefully examined how our own "Auditory-Visual Rabbit" illusion depends on temporal parameters. The results revealed not only predictive but also postdictive aspects, where a stimulus presented in the other modality later in physical time can still affect the integrated event perception (Shimojo, 2014; Stiles et al., 2018).

Altogether, these findings guide us to a more dynamic and flexible view of multisensory integration.

References
Berger, C.C. and Gonzalez-Franco, M. Expanding the sense of touch outside the body. Proc. 15th ACM Symp. Appl. Percept. (SAP '18), 1–9, 2018.
Levitan, C.A., Ban, Y-H.A., Stiles, N.R.B. and Shimojo, S. Rate perception adapts across the senses: evidence for a unified timing mechanism. Scientific Reports, 5:8857, doi:10.1038/srep08857, 2015.
Shams, L., Kamitani, Y. and Shimojo, S. What you see is what you hear. Nature, 408, 788, 2000.
Shimojo, S. Postdiction: its implications on visual awareness, hindsight, and sense of agency. Frontiers in Psychology, 196, 1–19, 2014. doi:10.3389/fpsyg.2014.00196.
Stiles, N.R.B. and Shimojo, S. Auditory sensory substitution is intuitive and automatic with texture stimuli. Scientific Reports, 5:15628, doi:10.1038/srep15628, 2015.
Stiles, N.R.B., Li, M., Levitan, C.A., Kamitani, Y. and Shimojo, S. What you saw is what you will hear: two new illusions with audiovisual postdictive effects. PLOS ONE, 13(10): e0204217, https://doi.org/10.1371/journal.pone.0204217, 2018.

VPEM Journal Club Meeting

Chuan will be presenting the following paper: Riesen et al., 2019, J. Neurosci., “Humans Perceive Binocular Rivalry and Fusion in a Tristable Dynamic State”.

Abstract: Human vision combines inputs from the two eyes into one percept. Small differences “fuse” together, whereas larger differences are seen “rivalrously” from one eye at a time. These outcomes are typically treated as mutually exclusive processes, with paradigms targeting one or the other and fusion being unreported in most rivalry studies. Is fusion truly a default, stable state that only breaks into rivalry for non-fusible stimuli? Or are monocular and fused percepts three sub-states of one dynamical system? To determine whether fusion and rivalry are separate processes, we measured human perception of Gabor patches with a range of interocular orientation disparities. Observers (10 female, 5 male) reported rivalrous, fused, and uncertain percepts over time. We found a dynamic “tristable” zone spanning 25–35° of orientation disparity where fused, left-eye-, or right-eye-dominant percepts could all occur. The temporal characteristics of fusion and non-fusion periods during tristability matched other bistable processes. We tested statistical models with fusion as a higher-level bistable process alternating with rivalry against our findings. None of these fit our data, but a simple bistable model extended to have three states reproduced many of our observations. We conclude that rivalry and fusion are multistable substates capable of direct competition, rather than separate bistable processes.
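As a rough illustration of what “a simple bistable model extended to have three states” could look like, here is a minimal Python sketch of a three-state stochastic alternation process. This is not the authors’ fitted model; the state names, transition probabilities, and gamma dominance-duration parameters are placeholder assumptions chosen only to show the structure.

import numpy as np

# Minimal sketch: three competing percept states (left-eye, right-eye, fused)
# whose dominance durations are drawn from a gamma distribution, as is typical
# for bistable alternation data. All numbers below are placeholder assumptions.
rng = np.random.default_rng(seed=0)
STATES = ["left_eye", "right_eye", "fused"]

# Placeholder transition probabilities to the other two states (rows sum to 1).
TRANSITIONS = {
    "left_eye":  {"right_eye": 0.5, "fused": 0.5},
    "right_eye": {"left_eye": 0.5, "fused": 0.5},
    "fused":     {"left_eye": 0.5, "right_eye": 0.5},
}

def simulate(duration_s=120.0, gamma_shape=3.0, mean_dominance_s=2.5):
    """Return a list of (percept, dominance_duration_s) pairs."""
    t, state, timeline = 0.0, rng.choice(STATES), []
    while t < duration_s:
        # Gamma-distributed dominance duration for the current percept.
        dwell = rng.gamma(gamma_shape, mean_dominance_s / gamma_shape)
        timeline.append((state, dwell))
        t += dwell
        # Switch to one of the other two states.
        targets = list(TRANSITIONS[state])
        state = rng.choice(targets, p=list(TRANSITIONS[state].values()))
    return timeline

for percept, dwell in simulate()[:8]:
    print(f"{percept:<10s} {dwell:5.2f} s")

Fitting such a model to real report data would mean estimating the dwell-time distributions and transition probabilities separately for each orientation disparity, which is beyond this sketch.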

Photo of Santiago Velasquez

Eyesyght: Towards a Dynamic Visual and Tactile Touch Tablet

Abstract – Currently, tactile displays are extremely expensive and hard to produce. They often consist of multiple moving parts and have limited fidelity. Showing a tactile image to the user generally requires a significant amount of manual labor to convert a visual image into something that can be felt. The Eyesyght project aims to create a tactile tablet touchscreen that can also show visual images. The display uses electromagnetic impulses to represent shapes on the screen. This approach has few moving parts and can potentially reach high enough fidelity to represent both braille and visual shapes, which has never been done before in a digital display. The tablet can show images that have been converted to grayscale. The Eyesyght project presents a new method for representing tactile images and has the potential to replace actuator pins in general tactile displays.

There will be two parts to the session:
1) I’ll be showing the actual device and discussing its broader implications.
2) Hands-on – for those who are interested, I will have an NDA to sign, and those who sign it can learn about the technical underpinnings of the device.

TEDx talk: https://www.youtube.com/watch?v=LNryuVpF1Pw
LinkedIn: http://linkedin.com/in/santiago-velasquez-909b3111b
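Purely as an illustrative sketch of the grayscale-conversion step mentioned above (the actual Eyesyght rendering pipeline is covered under the NDA portion and is not described here; the grid size, number of intensity levels, and library choices are assumptions), preparing an image for a fixed-resolution tactile grid might look like this in Python:

from PIL import Image
import numpy as np

# Illustrative sketch only: downsample a photo to a coarse grayscale grid that
# a tactile display could render as intensity levels. The 64x48 grid and the
# 8 intensity levels are placeholder assumptions, not Eyesyght specifications.
def image_to_tactile_grid(path, cols=64, rows=48, levels=8):
    img = Image.open(path).convert("L")            # grayscale conversion
    img = img.resize((cols, rows), Image.LANCZOS)  # match the display grid
    gray = np.asarray(img, dtype=np.float32) / 255.0
    # Quantize to the number of distinct intensity levels the display supports.
    return np.round(gray * (levels - 1)).astype(np.uint8)

# Hypothetical usage:
# grid = image_to_tactile_grid("photo.jpg")
# print(grid.shape)   # (48, 64), values in 0..7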

VPEM Journal Club Meeting

Steve will be presenting the following paper: Lappi et al., 2017, Frontiers in Psychology, “Systematic Observation of an Expert Driver’s Gaze Strategy—An On-Road Case Study”.

Abstract: In this paper we present and qualitatively analyze an expert driver’s gaze behavior in natural driving on a real road, with no specific experimental task or instruction. Previous eye tracking research on naturalistic tasks has revealed recurring patterns of gaze behavior that are surprisingly regular and repeatable. Lappi (2016) identified in the literature seven “qualitative laws of gaze behavior in the wild”: recurring patterns that tend to go together, the more so the more naturalistic the setting, all of them expected in extended sequences of fully naturalistic behavior. However, no study to date has observed all of them in a single experiment. Here, we wanted to do just that: present observations supporting all the “laws” in a single behavioral sequence by a single subject. We discuss the laws in terms of unresolved issues in driver modeling and open challenges for experimental and theoretical development.

Photo of Christian Vogler

“Projects at the Hearing Impairment RERC”

Abstract – Topics will include:
- The RERC and current projects generally
- Accessibility of Alexa and similar voice assistants to people who do not speak clearly or prefer to sign
- The genesis of Google Live Transcribe and its transition to a public release, as an example of successful inclusive design
- Closed caption quality research, the importance of punctuation in closed captions, and new work on caption reading speeds
- A consumer-focused train-the-trainer technology training framework (which was just concluded in the previous RERC and released for publication)
- The impact of audio quality parameters on receptive listening (the recent ASSETS publication plus some work that is not yet published, including some conversational study results and practices)
- The impact of AV sync and AV frame rates on receptive listening
- Smart home alerting for deaf/hard-of-hearing users using off-the-shelf IoT technology

Photo of Michael A. Webster

Individual differences in color perception and their implications for color coding

Abstract – Despite centuries of study, the principles and processes mediating our color perception remain poorly understood. We have explored these principles by examining individual differences in color vision. On the one hand, color percepts are largely discounted for the spectral sensitivity of the observer, allowing the individual to experience stable percepts despite marked optical and neural variations. On the other hand, these stable percepts (e.g., which hues look pure or unique) vary widely across observers. Analyses of these inter-observer differences reveal a number of surprising properties about the relationships between different color categories and how these categories are represented in the human visual system.

Lab: http://wolfweb.unr.edu/~mwebster
COBRE: http://www.unr.edu/neuroscience

VPEM Journal Club Meeting

Santani will be presenting the following paper: Norman & Thaler, 2019, Proc. R. Soc. B, “Retinotopic-like maps of spatial sound in primary ‘visual’ cortex of blind human echolocators”.