Neural Computation

May 1995, Vol. 7, No. 3, Pages 507-517
(doi: 10.1162/neco.1995.7.3.507)
© 1995 Massachusetts Institute of Technology
Reduced Representation by Neural Networks with Restricted Receptive Fields
Model neural networks can perform dimensional reductions of input data sets using correlation-based learning rules to adjust their weights. Simple Hebbian learning rules lead to an optimal reduction at the single unit level but result in highly redundant network representations. More complex rules designed to reduce or remove this redundancy can develop optimal principal component representations, but they are not very compelling from a biological perspective. Neurons in biological networks have restricted receptive fields limiting their access to the input data space. We find that, within this restricted receptive field architecture, simple correlation-based learning rules can produce surprisingly efficient reduced representations. When noise is present, the size of the receptive fields can be optimally tuned to maximize the accuracy of reconstructions of input data from a reduced representation.
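The abstract's setup can be sketched with a toy simulation: units trained by a simple correlation-based (Oja-style Hebbian) rule, each restricted to a local receptive field, producing a reduced representation from which the input is reconstructed. All specifics below (input dimension, receptive field layout, learning rate, least-squares decoding) are illustrative assumptions, not the paper's actual model or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 16-dimensional inputs with smooth spatial correlations
# (an assumption for illustration; the paper's inputs may differ).
n_in, n_units, rf_size = 16, 4, 4
cov = np.exp(-np.abs(np.subtract.outer(range(n_in), range(n_in))) / 3.0)
X = rng.multivariate_normal(np.zeros(n_in), cov, size=5000)

# Restricted receptive fields: unit i sees only the contiguous
# input window [i*rf_size, (i+1)*rf_size).
masks = np.zeros((n_units, n_in))
for i in range(n_units):
    masks[i, i * rf_size:(i + 1) * rf_size] = 1.0

W = rng.normal(scale=0.1, size=(n_units, n_in)) * masks
eta = 0.01
for x in X:
    y = W @ x                                   # unit activities
    # Oja's rule: Hebbian term plus a decay that keeps weights bounded,
    # applied only within each unit's receptive field.
    W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)
    W *= masks

# Reconstruct inputs from the reduced representation by least squares
# and measure relative reconstruction error.
Y = X @ W.T
X_hat = Y @ np.linalg.lstsq(Y, X, rcond=None)[0]
err = np.mean((X - X_hat) ** 2) / np.mean(X ** 2)
print(f"relative reconstruction error: {err:.3f}")
```

Within each receptive field the rule converges toward that patch's leading principal component, so a small number of locally connected units can still support a reasonably accurate reconstruction of the correlated input.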