Neural Computation

November 1995, Vol. 7, No. 6, Pages 1191-1205
(doi: 10.1162/neco.1995.7.6.1191)
© 1995 Massachusetts Institute of Technology
Introducing Asymmetry into Interneuron Learning
Abstract

A review is given of a new artificial neural network architecture in which the weights converge to the principal component subspace. The weights are learned by simple Hebbian learning alone, yet require no clipping, normalization, or weight decay. The net self-organizes using negative feedback of activation from a set of "interneurons" to the input neurons. By allowing this negative feedback from the interneurons to act on other interneurons, we can introduce the asymmetry necessary to cause convergence to the actual principal components. Simulations and analysis confirm this convergence.
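
The abstract's description maps onto a short update loop. Below is a minimal NumPy sketch, a plausible reading rather than the paper's exact equations: in the symmetric network the interneuron activations are y = W x, the negative feedback leaves a residual e = x - W^T y on the input neurons, and the simple Hebbian update is dW = eta * y e^T, which converges only to the principal subspace. The asymmetry is realized here in a deflation (Sanger-style) manner, with each interneuron receiving the input only after the feedback from the interneurons before it has been subtracted, which drives the individual weight vectors to the actual principal components. All function names and parameter values are illustrative.

    import numpy as np

    def asymmetric_negative_feedback_pca(X, n_components, eta=0.005, epochs=100, seed=0):
        """Learn the leading principal components of the rows of X (assumed zero-mean)."""
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
        for _ in range(epochs):
            for x in X:
                e = x.copy()                # residual activation on the input neurons
                for i in range(n_components):
                    y = W[i] @ e            # feedforward activation of interneuron i
                    e = e - y * W[i]        # negative feedback removes its contribution,
                                            # so later interneurons see a deflated input
                    W[i] += eta * y * e     # simple Hebbian update; no normalization needed
        return W

As a quick check, training on zero-mean data and comparing the rows of W (up to sign) with the leading eigenvectors from np.linalg.eigh(np.cov(X.T)) should show the convergence the abstract describes; without the inner deflation step, the rows would only span the principal subspace rather than align with the individual components.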