
Neural Computation

February 15, 1996, Vol. 8, No. 2, Pages 451-460
(doi: 10.1162/neco.1996.8.2.451)
© 1996 Massachusetts Institute of Technology
The Interchangeability of Learning Rate and Gain in Backpropagation Neural Networks
Abstract

The backpropagation algorithm is widely used for training multilayer neural networks. In this publication, the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to simultaneously changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations of the backpropagation algorithm, such as the use of a momentum term, flat spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids in optical neural networks.
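The following is a minimal numerical sketch of the stated equivalence, under assumptions the abstract does not spell out: a logistic sigmoid with gain beta, squared-error loss, plain gradient descent, and a single hidden layer. Under these assumptions, a network with gain beta, learning rate eta, and weights W follows the same trajectory as a gain-1 network with learning rate beta^2 * eta and weights beta * W. All variable names here are illustrative, not the paper's.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic activation with gain: 1 / (1 + exp(-gain * x))."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def train_step(W1, W2, x, t, eta, gain):
    """One backpropagation step for a one-hidden-layer net, squared error."""
    # Forward pass
    h = sigmoid(W1 @ x, gain)           # hidden activations
    y = sigmoid(W2 @ h, gain)           # output activations
    # Backward pass; derivative of the gained sigmoid is gain * s * (1 - s)
    delta_out = (y - t) * gain * y * (1.0 - y)
    delta_hid = (W2.T @ delta_out) * gain * h * (1.0 - h)
    # Gradient-descent updates
    W2 = W2 - eta * np.outer(delta_out, h)
    W1 = W1 - eta * np.outer(delta_hid, x)
    return W1, W2

rng = np.random.default_rng(0)
x = rng.normal(size=3)            # single input pattern
t = np.array([0.2, 0.8])          # target
W1 = rng.normal(size=(4, 3))      # input -> hidden weights
W2 = rng.normal(size=(2, 4))      # hidden -> output weights

beta, eta = 2.5, 0.1
# Net A: gain beta, learning rate eta, weights W
A1, A2 = W1.copy(), W2.copy()
# Net B: gain 1, learning rate beta^2 * eta, weights beta * W
B1, B2 = beta * W1, beta * W2

for _ in range(50):
    A1, A2 = train_step(A1, A2, x, t, eta, beta)
    B1, B2 = train_step(B1, B2, x, t, beta**2 * eta, 1.0)

# The two weight trajectories stay related by the factor beta at every step.
assert np.allclose(beta * A1, B1) and np.allclose(beta * A2, B2)
```

The check works because scaling all incoming weights by beta leaves every activation unchanged when the gain is dropped from beta to 1, while each gradient shrinks by a factor of 1/beta; multiplying the learning rate by beta^2 restores the original update, so the gain parameter can be absorbed into the weights and learning rate, as the theorem asserts.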