Neural Computation
May 1993, Vol. 5, No. 3, Pages 371-373
(doi: 10.1162/neco.1993.5.3.371)
Vapnik-Chervonenkis Dimension Bounds for Two- and Three-Layer Networks
Abstract
We show that the Vapnik-Chervonenkis dimension of the class of functions computable by arbitrary two-layer threshold networks, or by certain completely connected three-layer threshold networks, with real inputs is at least linear in the number of weights in the network. In Valiant's "probably approximately correct" learning framework, this implies that the number of random training examples necessary for learning in these networks is at least linear in the number of weights.
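For context, the implication in the final sentence rests on the standard PAC sample-complexity lower bound in terms of VC dimension; the sketch below uses notation ($W$, $d$, $\varepsilon$, $m$) that is ours, not the paper's. If the VC dimension $d$ of the network class is at least linear in the number of weights $W$, then any PAC learning algorithm achieving error at most $\varepsilon$ with constant confidence requires

\[
m \;=\; \Omega\!\left(\frac{d}{\varepsilon}\right) \;=\; \Omega\!\left(\frac{W}{\varepsilon}\right)
\]

random training examples, where $m$ denotes the sample size.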