It has been established that the generalization ability of an artificial neural network depends strongly on the number of hidden processing elements and weights (Baum and Haussler 1989). There have been several attempts to determine the optimal size of a neural network as part of the learning process. These methods typically adjust the number of hidden nodes and/or connection weights in a multilayer perceptron, either by heuristic pruning or construction (Le Cun et al. 1990; Fahlman and Lebiere 1990) or implicitly via a network-size penalty term in the objective function (Chauvin 1989; Weigend et al. 1991; Nowlan and Hinton 1992). In this note an objective method for network optimization is proposed that eliminates the need for a network-size penalty parameter.
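To make the penalty-based approach concrete, the following sketch shows a generic loss of the form error + lambda * sum of squared weights (weight decay). This is an illustration of the penalty family cited above, not the method proposed in this note; the function name and the coefficient `lam` are assumptions introduced here for exposition. The point is that `lam` is a free parameter the practitioner must tune, which is precisely what the proposed method aims to avoid.

```python
import numpy as np

def penalized_loss(y_true, y_pred, weights, lam=1e-3):
    """Sum-of-squares error plus a weight-decay penalty.

    Hypothetical illustration of a network-size penalty:
    lam * sum(w^2) discourages large weights, implicitly
    shrinking the effective network size. `lam` must be
    chosen by hand, which is the penalty parameter the
    note's proposed method seeks to eliminate.
    """
    sse = np.sum((np.asarray(y_true) - np.asarray(y_pred)) ** 2)
    penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return sse + penalty

# Example: two outputs, one weight matrix, lam = 0.1
loss = penalized_loss([1.0, 0.0], [0.5, 0.5],
                      [np.array([[1.0, -1.0]])], lam=0.1)
# sse = 0.5, penalty = 0.1 * 2.0 = 0.2, total = 0.7
```

A larger `lam` biases training toward smaller weights at the cost of fit, so selecting it well typically requires cross-validation.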