Saliency, Selective Tuning, and Eye Movements: How do these come together?

Event Date

Thursday, October 29th, 2020 – 2:00pm to 3:00pm

Speaker

Dr. John Tsotsos, Centre for Vision Research and Department of Electrical Engineering and Computer Science, York University

Host

Audrey Wong-Kee-You

Abstract

Most who have spent any time working on visual attention intuitively know that saliency, selection, and eye movements are somehow connected, but the details of their linkages remain elusive. These connections are important to the evolution of our new cognitive architecture STAR (Selective Tuning Attentive Reference). This presentation will begin with very brief overviews of STAR, of our Selective Tuning model of visual attention (ST), and of our visual saliency model AIM (Attention via Information Maximization). It will then develop how ST and AIM interact and what their roles might be in the decisions to move eye gaze within STAR. The result, the STAR-FC model (STAR - Fixation Controller), will be described and its performance detailed. Although fully implemented as a computational model, it is unique in that it not only exhibits human-like performance but also provides a wealth of falsifiable predictions for future experimental investigation.
http://www.cse.yorku.ca/~tsotsos/Tsotsos/Home.html
