Edward H Silson1, Richard C Reynolds2, Daniel Janini1, Chris I Baker1, Dwight J Kravitz3 1Section on Learning & Plasticity, Laboratory of Brain & Cognition, National Institute of Mental Health, Bethesda, MD, USA. 2Scientific and Statistical Computing Core, National Institute of Mental Health, Bethesda, MD, USA. 3Department of Psychology, The George Washington University, 2125 G St, NW, Washington, DC 20052, USA |
High-level visual cortex contains regions that selectively and differentially process certain categories, such as words, scenes and faces, but little is known about how they are optimized to support such processing. Here, using a population receptive field (pRF) model that allows for estimates of elliptical and oriented pRFs, we show that two regions, the visual word form area (VWFA) and parahippocampal place area (PPA), which subserve word reading and scene processing, respectively, integrate information across visual space in vastly different ways, each optimized to support its preferred category. Eighteen participants completed pRF mapping experiments and category-selective functional localizers. A combination of group-based and individual participant data was used to define the VWFA, whereas the PPA was defined in each individual. The word-selective VWFA contained pRFs that were simultaneously foveal, elliptical, and predominantly horizontal, the ideal parameters for recognizing word forms, while those in the scene-selective PPA were peripheral, more circular, and more broadly tuned in orientation. Importantly, these pRF patterns also differed from those observed in early visual cortex, highlighting different processing mechanisms between low- and high-level visual regions. These differing patterns of pRF properties suggest that high-level visual cortex is fundamentally optimized to support the processing of specific visual categories through the differential integration of information across visual space. |
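The pRF model at the core of this abstract generalizes the conventional isotropic 2D Gaussian by adding a second spatial scale and an orientation parameter. Below is a minimal sketch of such a receptive-field profile, assuming a standard anisotropic Gaussian parameterization; the function name and arguments are illustrative, not the authors' implementation.

```python
import numpy as np

def elliptical_prf(x, y, x0, y0, sigma_major, sigma_minor, theta):
    """Evaluate an oriented elliptical Gaussian pRF over visual-field coordinates.

    x, y        : arrays of visual-field positions (degrees)
    x0, y0      : pRF center (degrees); eccentricity = sqrt(x0**2 + y0**2)
    sigma_major : SD along the pRF's long axis (degrees)
    sigma_minor : SD along the short axis; sigma_major == sigma_minor
                  recovers the conventional circular pRF
    theta       : orientation of the long axis, counterclockwise from
                  horizontal (radians); theta near 0 with a large
                  sigma_major/sigma_minor ratio gives the horizontally
                  elongated profile described for the VWFA
    """
    # Rotate displacements from the pRF center into the pRF's own axes
    dx, dy = x - x0, y - y0
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    return np.exp(-0.5 * ((u / sigma_major) ** 2 + (v / sigma_minor) ** 2))
```

In a typical pRF fit, this profile would be multiplied by the stimulus aperture at each timepoint, summed over space, and convolved with a hemodynamic response function to predict each voxel's timecourse; an aspect ratio near 1 then corresponds to the more circular pRFs reported for the PPA.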
Deep Convolutional Networks Do Not Make Classifications Based on Global Object Shape | Nicholas Baker1, Hongjing Lu1, Gennady Erlikhman2, Philip J Kellman1 1University of California, Los Angeles 2University of Nevada, Reno | Deep convolutional networks (DCNNs) have achieved previously unseen performance in object classification, raising questions about whether DCNNs operate similarly to human vision. In biological vision, shape is arguably the most important cue for recognition. We tested whether DCNNs utilize object shape information. In Experiments 1 and 2, we tested DCNNs on shapes lacking typical context and surface texture, using glass figurines and silhouettes. The network showed no ability to classify glass figurines but correctly classified some silhouettes. Specific aspects of the results led us to hypothesize that DCNNs do not distinguish an object’s bounding contour from other edge information, and that DCNNs access some local shape features, but not global shape. In Experiment 3, we scrambled correctly classified silhouette images to test classification accuracy when local features were preserved but global shape was disrupted. DCNNs gave the same classification labels despite disruptions of global form that reduced human accuracy to 28%. In Experiment 4, we retrained the decision layer of a DCNN to discriminate between circles and squares. We then tested the network on circles composed of local half-square elements and squares composed of half-circle elements. The network classified the former as squares and the latter as circles. In Experiment 5, we attempted to retrain the decision layer of a DCNN to discriminate between circles and ellipses. The network was unable to learn this discrimination, maintaining chance performance even after extended training. These results provide evidence that DCNNs may have access to some local shape information in the form of local edge relations, but that they have no access to global object shape. |
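Experiments 4 and 5 hinge on retraining only the decision layer of a pretrained DCNN while the learned features stay fixed. Below is a minimal PyTorch sketch of that procedure, assuming an AlexNet backbone and standard ImageNet preprocessing; the abstract does not specify the architecture or training details, so every name here is an assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained DCNN and freeze its features, so only the final
# decision layer is retrained (AlexNet stands in for the unnamed network).
net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
for p in net.parameters():
    p.requires_grad = False

# Replace the final classification layer with a 2-way decision layer
# (e.g., circle vs. square); only these new weights receive gradients.
net.classifier[6] = nn.Linear(net.classifier[6].in_features, 2)

optimizer = torch.optim.SGD(net.classifier[6].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One update on a batch of shape images (labels: 0=circle, 1=square)."""
    optimizer.zero_grad()
    loss = criterion(net(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The probe stimuli, circles built from half-square elements and squares built from half-circle elements, would then be passed through the frozen network in evaluation mode; under this regime, the abstract reports that the labels follow the local elements rather than the global shape.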