Occipital network for figure/ground organization

Title: Occipital network for figure/ground organization
Publication Type: Journal Article
Year of Publication: 2008
Authors: Likova, LT, Tyler, CW
Journal: Experimental Brain Research
Volume: 189
Pagination: 257–267
Keywords: Contextual interactions, Figure and ground, hMT+, Perceptual Organization, Salience, Suppression, Temporal asynchrony, Top-down feedback, V1, V2, Visual cortex
Abstract

To study the cortical mechanism of figure/ground categorization in the human brain, we employed fMRI and the temporal-asynchrony paradigm. This paradigm is able to eliminate any differential activation for local stimulus features, and thus to identify only global perceptual interactions. Strong segmentation of the image into different spatial configurations was generated solely from temporal asynchronies between zones of homogeneous dynamic noise. The figure/ground configuration was a single geometric figure enclosed in a larger surround region. In a control condition, the figure/ground organization was eliminated by segmenting the noise field into many identical temporal-asynchrony stripes. The manipulation of the type of perceptual organization triggered dramatic reorganization in the cortical activation pattern. The figure/ground configuration generated suppression of the ground representation (limited to early retinotopic visual cortex, V1 and V2) and strong activation in the motion complex hMT+/V5+; conversely, both responses were abolished when the figure/ground organization was eliminated. These results suggest that figure/ground processing is mediated by top-down suppression of the ground representation in the earliest visual areas V1/V2 through a signal arising in the motion complex. We propose a model of a recurrent cortical architecture incorporating suppressive feedback that operates in a topographic manner, forming a figure/ground categorization network distinct from that for “pure” scene segmentation and thus underlying the perceptual organization of dynamic scenes into cognitively relevant components.

DOI: 10.1007/s00221-008-1417-6
