Post-ARVO/VSS Talks

Event Date

Wednesday, May 30th, 2018 – 12:00pm to 1:00pm

Host

Arvind Chandna

Abstract

Post-ARVO/VSS Talks Today!

I’m looking forward to seeing you all for this rejuvenated Brown Bag today. Presentations are 10 minutes each, followed by 5 minutes of discussion.

We start promptly at 12 noon in Room 204 (today’s seminar may run over by 10 minutes).

Here are the volunteers (thank you!) for Brown Bag feedback from ARVO and VSS.

1. Chris Tyler

Release of Cone-Rod Suppression as a Key Mechanism for Concussion-Induced Light Sensitivity 

2. Preeti Verghese

3. Natela Shanidze

4. Saeideh Ghahghaei

5. Zheng Ma

Population receptive fields in high-level visual cortex are tuned for specific categories. 

Deep Convolutional Networks do not Make Classifications Based on Global Object Shape

(abstracts for these presentations are appended below)


Abstracts: 

Population receptive fields in high-level visual cortex are tuned for specific categories

Edward H Silson1, Richard C Reynolds2, Daniel Janini1, Chris I Baker1, Dwight J Kravitz3
1Section on Learning & Plasticity, Laboratory of Brain & Cognition, National Institute of Mental Health, Bethesda, MD, USA.
2Scientific and Statistical Computing Core, National Institute of Mental Health, Bethesda, MD, USA.
3Department of Psychology, The George Washington University, 2125 G St, NW, Washington DC 20052, USA

High-level visual cortex contains regions that selectively and differentially process certain categories, such as words, scenes and faces, but little is known about how they are optimized to support such processing. Here, using a population receptive field (pRF) model that allows for estimates of elliptical and oriented pRFs, we show that two regions, the visual word form area (VWFA) and parahippocampal place area (PPA), which subserve word reading and scene processing, respectively, integrate information across visual space in vastly different ways, each optimized to support their preferred category.
 
Eighteen participants completed pRF mapping experiments and category-selective functional localizers. A combination of group-based and individual participant data was used to define VWFA, whereas PPA was defined in each individual.  
 
Word-selective VWFA contained pRFs that were simultaneously foveal, elliptical, and predominantly horizontal, the ideal parameters for recognizing word forms, whilst those in scene-selective PPA were peripheral, more circular, and more broadly tuned in orientation. Importantly, these pRF patterns also differ from those observed in early visual cortex, highlighting different processing mechanisms between low- and high-level visual regions.
 
These differing patterns of pRF properties suggest that high-level visual cortex is fundamentally optimized to support the processing of specific visual categories through the differential integration of information across visual space.
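The key modeling ingredient in this abstract is a pRF that can be elliptical and oriented rather than circularly symmetric. As a rough illustration of what that means, the sketch below evaluates an oriented elliptical Gaussian pRF; all parameter values (center, axis widths, orientation) are illustrative placeholders, not the authors' fitted estimates.

```python
import numpy as np

def elliptical_prf(x, y, x0=0.0, y0=0.0,
                   sigma_major=2.0, sigma_minor=0.5, theta=0.0):
    """Evaluate an oriented elliptical Gaussian pRF at visual-field
    position (x, y), in degrees of visual angle.

    theta is the orientation of the major axis in radians (0 = horizontal).
    Parameter values here are illustrative, not fitted values from the study.
    """
    # Rotate the offset from the pRF center into the pRF's own axis frame
    dx, dy = x - x0, y - y0
    u = dx * np.cos(theta) + dy * np.sin(theta)    # along major axis
    v = -dx * np.sin(theta) + dy * np.cos(theta)   # along minor axis
    return np.exp(-0.5 * ((u / sigma_major) ** 2 + (v / sigma_minor) ** 2))

# A horizontally elongated pRF (theta = 0) weighs stimuli displaced
# horizontally much more strongly than stimuli displaced vertically
horiz = elliptical_prf(1.0, 0.0)  # 1 deg to the right of center
vert = elliptical_prf(0.0, 1.0)   # 1 deg above center
```

With a circular pRF these two values would be equal; the horizontal/vertical asymmetry is what lets the model distinguish the elongated, horizontal pRFs reported in VWFA from the more circular ones in PPA.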

Deep Convolutional Networks do not Make Classifications Based on Global Object Shape

Nicholas Baker1, Hongjing Lu1, Gennady Erlikhman2, Philip J Kellman1
1University of California, Los Angeles
2University of Nevada, Reno

Deep convolutional networks (DCNNs) have achieved previously unseen performance in object classification, raising questions about whether DCNNs operate similarly to human vision. In biological vision, shape is arguably the most important cue for recognition. We tested whether DCNNs utilize object shape information. In Experiments 1 and 2, we tested DCNNs on shapes lacking typical context and surface texture, using glass figurines and silhouettes. The network showed no ability to classify glass figurines but correctly classified some silhouettes. Specific aspects of the results led us to hypothesize that DCNNs do not distinguish objects’ bounding contours from other edge information, and that DCNNs access some local shape features, but not global shape. In Experiment 3, we scrambled correctly classified silhouette images to test classification accuracy when local features were preserved but global shape was disrupted. DCNNs gave the same classification labels despite disruptions of global form that reduced human accuracy to 28%. In Experiment 4, we retrained the decision layer of a DCNN to discriminate between circles and squares. Then, we tested the network on circles composed of local half-square elements and squares composed of half-circle elements. The network classified the former as squares and the latter as circles. In Experiment 5, we attempted to retrain the decision layer of a DCNN to discriminate between circles and ellipses. The network was unable to learn this discrimination, maintaining chance performance even after extended training. These results provide evidence that DCNNs may have access to some local shape information in the form of local edge relations, but they have no access to global object shapes.
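Experiments 4 and 5 rely on retraining only a network's decision layer while leaving its feature layers frozen. The sketch below illustrates that general procedure in miniature: a fixed random projection stands in for the frozen convolutional features (in the study these would come from a pretrained DCNN), and only a linear logistic readout is trained on a toy two-class problem. Everything here — the feature stand-in, the toy data, the training settings — is a hypothetical illustration, not the authors' network or stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen feature extractor: a fixed random
# projection followed by a ReLU. These weights are never updated.
W_frozen = rng.normal(size=(64, 256))

def features(img):
    """Frozen 'convolutional' features of a flattened input image."""
    return np.maximum(W_frozen @ img, 0.0)

def train_decision_layer(X, y, lr=0.1, steps=500):
    """Train only a logistic-regression decision layer on frozen features."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        z = np.clip(X @ w + b, -60.0, 60.0)  # clip logits to avoid overflow
        p = 1.0 / (1.0 + np.exp(-z))         # sigmoid class probability
        grad = p - y                         # gradient of log loss w.r.t. logits
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy two-class data: noisy versions of two fixed prototype "images"
# (stand-ins for the circle and square stimuli of Experiment 4)
proto = {0: rng.normal(size=256), 1: rng.normal(size=256)}
X = np.stack([features(proto[i % 2] + 0.1 * rng.normal(size=256))
              for i in range(40)])
y = np.array([i % 2 for i in range(40)], dtype=float)

w, b = train_decision_layer(X, y)
acc = (((X @ w + b) > 0) == (y == 1)).mean()
```

The point of the setup is that any success or failure of the retrained readout reflects what the frozen features encode: if the features carried global-shape information, a linear decision layer should be able to exploit it, which is what makes the chance-level result of Experiment 5 informative.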

Event Type