Cortical Synchronization and Perceptual Framing

Journal of Cognitive Neuroscience
January 1997, Vol. 9, No. 1, Pages 117-132
doi: 10.1162/jocn.1997.9.1.117
© 1997 by the Massachusetts Institute of Technology
Abstract

How does the brain group different parts of an object into a coherent visual object representation? Different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a process that resynchronizes cortical activities corresponding to the same retinal object. A neural network model is presented that can rapidly resynchronize desynchronized neural activities. The model provides a link between perceptual and brain data. Model properties quantitatively simulate perceptual framing data, including psychophysical data about temporal order judgments and the reduction of threshold contrast as a function of stimulus length. The same model has previously been used to explain data about illusory contour formation, texture segregation, shape-from-shading, 3-D vision, and cortical receptive fields. The model thereby shows how many of these data may be understood as manifestations of a cortical grouping process that can rapidly resynchronize image parts that belong together in visual object representations. The model exhibits better synchronization in the presence of noise than without noise, a type of stochastic resonance, and synchronizes robustly when cells that represent different stimulus orientations compete. These properties arise when fast long-range cooperation and slow short-range competition interact via nonlinear feedback among cells that obey shunting equations.
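
The closing sentence names the model's architectural ingredients: cells that obey shunting equations, coupled by fast long-range cooperation and slow short-range competition through nonlinear feedback, with noise aiding synchronization. As a rough illustration of that class of dynamics (not the paper's actual equations), the sketch below simulates a 1-D shunting network; the kernel widths, the signal function, the time constants, and every parameter value are assumptions chosen only to make the structure concrete.

```python
import numpy as np

# Illustrative sketch of a cooperative-competitive shunting network.
# Each cell obeys a standard Grossberg-style shunting equation:
#   dx_i/dt = -A*x_i + (B - x_i)*E_i - (C + x_i)*I_i
# where E_i pools fast long-range cooperation and I_i pools slow
# short-range competition. All parameters below are assumed, not
# taken from the paper.

N = 64                       # number of cells along one spatial dimension
A, B, C = 1.0, 1.0, 0.25     # decay rate, upper bound, lower bound (assumed)
dt = 0.01                    # Euler integration step
tau_y = 5.0                  # inhibition evolves ~5x slower than excitation (assumed)
rng = np.random.default_rng(0)

def gaussian_kernel(sigma):
    """Row-normalized Gaussian connection kernel of assumed width sigma."""
    d = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    k = np.exp(-(d ** 2) / (2 * sigma ** 2))
    return k / k.sum(axis=1, keepdims=True)

W_coop = gaussian_kernel(12.0)   # long-range cooperative kernel (assumed width)
W_comp = gaussian_kernel(2.0)    # short-range competitive kernel (assumed width)

def f(x):
    # Faster-than-linear signal function: a common choice for the
    # nonlinear feedback in cooperative-competitive shunting models.
    return np.maximum(x, 0.0) ** 2

x = rng.uniform(0.0, 0.1, N)     # desynchronized initial activities
y = np.zeros(N)                  # slow inhibitory interneuron activities
stimulus = np.zeros(N)
stimulus[16:48] = 1.0            # one "retinal object" spanning many cells

for step in range(5000):
    E = stimulus + W_coop @ f(x)            # fast long-range excitation
    I = W_comp @ y                          # slow short-range inhibition
    noise = 0.02 * rng.standard_normal(N)   # noise term (stochastic-resonance nod)
    dx = -A * x + (B - x) * E - (C + x) * I + noise
    dy = (f(x) - y) / tau_y
    x = np.clip(x + dt * dx, -C, B)         # shunting bounds keep x in [-C, B]
    y = y + dt * dy

print("activity spread across the stimulated cells:", x[16:48].std())
```

Under these assumed parameters, the initially scattered activities of the stimulated cells settle toward a common level, which is the qualitative behavior the abstract describes: resynchronization of activities driven by a single object. The noise term only gestures at the stochastic-resonance property; demonstrating it, or the psychophysical fits, would require the paper's actual model and parameters.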