Neural Computation

July 1, 1996, Vol. 8, No. 5, Pages 1085-1106.
(doi: 10.1162/neco.1996.8.5.1085)
© 1996 Massachusetts Institute of Technology
A Numerical Study on Learning Curves in Stochastic Multilayer Feedforward Networks
Abstract

The universal asymptotic scaling laws proposed by Amari et al. are studied in large-scale simulations on a CM5. Small stochastic multilayer feedforward networks trained with backpropagation are investigated. For large numbers of training patterns t, the asymptotic generalization error scales as 1/t, as predicted. For intermediate t, a faster 1/t² scaling is observed; this effect is explained using higher-order corrections of the likelihood expansion. For small t, it is shown that the scaling law changes drastically when the network undergoes a transition from strong overfitting to effective learning.
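
A minimal sketch of how such a scaling exponent can be checked numerically (this is not the paper's CM5 simulation code; the function name fit_scaling_exponent and the synthetic data are assumptions for illustration): one fits a power law ε(t) ≈ c · t^(−α) to measured generalization errors on a log-log scale, where α ≈ 1 corresponds to the predicted asymptotic 1/t regime and α ≈ 2 to the faster scaling reported for intermediate t.

```python
import numpy as np

def fit_scaling_exponent(t, eps):
    """Least-squares fit of log(eps) = log(c) - alpha * log(t); returns (alpha, c)."""
    slope, intercept = np.polyfit(np.log(t), np.log(eps), 1)
    return -slope, np.exp(intercept)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic learning-curve data (illustrative only): error ~ 0.5 / t with noise.
    t = np.logspace(2, 4, 20)                              # numbers of training patterns
    eps = 0.5 / t * np.exp(rng.normal(0.0, 0.05, t.size))  # noisy ~1/t generalization error
    alpha, c = fit_scaling_exponent(t, eps)
    print(f"estimated exponent alpha = {alpha:.2f} (1/t scaling gives alpha close to 1)")
```

In practice one would compare the fitted exponent across separate ranges of t, since the abstract reports different effective exponents in the small-, intermediate-, and large-t regimes.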